Deploying Apica Ascent on AWS EKS with Aurora PostgreSQL and ElastiCache Redis in a production VPC using CloudFormation
Prerequisites
Before proceeding, ensure the following prerequisites are met:
Amazon EKS Kubernetes Version Compatibility
AWS Resources
Note: These resources will be automatically generated during the CloudFormation deployment process and are not prerequisites for initiating it.
The CloudFormation template provisions the following resources:
S3 Bucket
EKS Cluster
EKS Node Pools
Aurora PostgreSQL
ElastiCache Redis
Deploy IAM Role, Aurora PostgreSQL and ElastiCache
Note: Ensure you're operating within the same region as your Virtual Private Cloud (VPC).
Step 3: Configure stack options
Nothing is required here; navigate to the bottom of the page and click "Next"
Step 4: Review and create
Review your configuration, acknowledge the capabilities, and click "Submit"

Deployment might take a while. Please wait until the stack status shows "CREATE_COMPLETE" before proceeding.

If the stack fails for any reason, check the stack events (select your stack and click the "Events" tab) to understand the error. To fix the error, delete the stack and repeat the steps above.
Create EKS Cluster
Create an EKS Cluster with CloudFormation
After successfully deploying the initial CloudFormation stack, follow these steps to create an EKS Cluster:
From the previous step, click "Stacks", or use the search bar at the top left to search for "CloudFormation" and select the CloudFormation service
At the top right, click "Create Stack" and select "With new resources (standard)"

Step 1: Create stack
On the following page (step 1 of stack creation), select "Template is ready" and "Amazon S3 URL". In the "Amazon S3 URL" text field, enter https://logiq-scripts.s3.ap-south-1.amazonaws.com/EKSCluster-multiset.yaml

Click "Next"
Step 2: Specify stack details
Enter a stack name (Whatever you want to call the cluster)

Enter a name for the EKS cluster (Save this value)
Enter the ARN value of the IAM role you created in the previous CloudFormation deployment (Navigate to the previous stack and check outputs tab to find the value for the key LogiqEKSClusterRole)


Select a VPC ID in the dropdown (this guide assumes you created the VPC previously)
Select two private subnets (with a NAT gateway attached) in the above VPC, one from each dropdown.

Enter "2" in the fields for “Ingest Worker Node count” and “Common Worker Node count”
Enter the S3 bucket name you used in the previous CloudFormation deployment in “S3 bucket for Logiq”

Click "Next"
Step 3: Configure stack options
Nothing is required here; navigate to the bottom of the page and click "Next"
Step 4: Review and create
Review your configuration, acknowledge the capabilities, and click "Submit"

Deployment might take a while. Please wait until the stack status shows "CREATE_COMPLETE" before proceeding.
AWS CLI commands
Open a terminal and execute the following:
aws eks --region <AWS REGION> update-kubeconfig --name <EKS-cluster-name>
Example:
aws eks --region eu-north-1 update-kubeconfig --name apicaeksclusterdatafabric
Expected output:
Added context arn:aws:eks:eu-north-1:123123123123:cluster/apicaeksclusterdatafabric in /Users/christley/.kube/config
Execute the following command:
kubectl get namespace
Expected output:
NAME              STATUS   AGE
default           Active   25d
kube-node-lease   Active   25d
kube-public       Active   25d
kube-system       Active   25d
Download the following file:
In your terminal, change directory (cd) to where you downloaded the file
Example:
cd /Users/myUser/Downloads
Execute the following command:
kubectl apply -f gp3-sc.yaml
Expected output:
storageclass.storage.k8s.io/gp3 created
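For reference, a gp3-backed default StorageClass typically looks like the sketch below; the downloaded gp3-sc.yaml may differ in its details:

```yaml
# Sketch of a gp3 default StorageClass; the actual gp3-sc.yaml may differ.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    # Marks gp3 as the default class, which is why gp2 is deleted in the next step.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com   # requires the EBS CSI driver (visible in the kube-system pod list)
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

Because this class is annotated as the default, deleting the gp2 class (next step) leaves gp3 as the only default for new PersistentVolumeClaims.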
Execute the following command:
kubectl delete sc gp2
Expected output:
storageclass.storage.k8s.io "gp2" deleted
Execute the following command:
kubectl get pods -n kube-system
Expected output:
NAME                                READY   STATUS    RESTARTS   AGE
aws-node-9cc6g                      2/2     Running   0          25d
aws-node-bhv5n                      2/2     Running   0          25d
aws-node-flrc9                      2/2     Running   0          25d
aws-node-j88ln                      2/2     Running   0          25d
aws-node-xl76w                      2/2     Running   0          25d
aws-node-z96lm                      2/2     Running   0          25d
coredns-3123sadds-bgdbg             1/1     Running   0          25d
coredns-3123sadds-m6rww             1/1     Running   0          25d
ebs-csi-controller-12332d32-gflvj   5/5     Running   0          18d
ebs-csi-controller-12332d32-lztfp   5/5     Running   0          18d
ebs-csi-node-4fpp9                  3/3     Running   0          18d
ebs-csi-node-9mbgs                  3/3     Running   0          18d
ebs-csi-node-fgrsj                  3/3     Running   0          18d
ebs-csi-node-s5nqm                  3/3     Running   0          18d
ebs-csi-node-vpbn4                  3/3     Running   0          18d
ebs-csi-node-w9xvk                  3/3     Running   0          18d
kube-proxy-5qnfb                    1/1     Running   0          25d
kube-proxy-8sh24                    1/1     Running   0          25d
kube-proxy-9pkmd                    1/1     Running   0          25d
kube-proxy-9ppt4                    1/1     Running   0          25d
kube-proxy-b8vx6                    1/1     Running   0          25d
kube-proxy-kc6sd                    1/1     Running   0          25d
Deploy Apica Ascent using HELM
Execute the following command:
kubectl create namespace apica-ascent
Download the following file
Open the file in a text editor and replace the following values:
awsServiceEndpoint:
Replace <region> with your specific AWS region, for example eu-north-1. The updated URL should look like this:
awsServiceEndpoint: https://s3.eu-north-1.amazonaws.com
s3_bucket:
Replace the placeholder <> with the actual name of the S3 bucket created during the initial CloudFormation deployment:
s3_bucket: "adf-helm-bucket"
s3_region:
Replace the placeholder with your AWS region, for example eu-north-1:
s3_region: "eu-north-1"
s3_url:
Replace <region> with the region where you deployed. For example:
s3_url: "https://s3.eu-north-1.amazonaws.com"
redis_host:
Replace <> with your ElastiCache cluster endpoint generated by the first CloudFormation deployment. For example, if your endpoint is apicaelasticache.hdsue3.0001.eun1.cache.amazonaws.com, update the configuration as follows:
redis_host: "apicaelasticache.hdsue3.0001.eun1.cache.amazonaws.com"
You can find this value in the Outputs tab of the first CloudFormation stack.

postgres_host:
Replace <> with your AuroraEndpoint value. For example, if your endpoint is apicadatafabricenvironment-aurorapostgresql-0vqryrig2lwe.cluster-cbyqzzm9ayg8.eu-north-1.rds.amazonaws.com, update the configuration as follows:
postgres_host: "apicadatafabricenvironment-aurorapostgresql-0vqryrig2lwe.cluster-cbyqzzm9ayg8.eu-north-1.rds.amazonaws.com"
You can find this value in the Outputs tab of the first CloudFormation stack.

postgres_user:
Replace <> with the master username you created during the first CloudFormation deployment:
postgres_user: "apicauser"
postgres_password:
Replace <> with the password for the user you created during the first CloudFormation deployment:
postgres_password: "myPassword123!!"
s3_access:
Replace <> with your AWS CLI access key ID:
s3_access: "myAwsAccessKeyID"
To retrieve your AWS credentials on your local machine, execute the command below in your terminal:
cat ~/.aws/credentials
AWS_ACCESS_KEY_ID
Replace <> with your AWS CLI access key ID:
AWS_ACCESS_KEY_ID: "myAwsAccessKeyID"
To retrieve your AWS credentials on your local machine, execute the command below in your terminal:
cat ~/.aws/credentials
s3_secret
Replace <> with your AWS CLI secret access key:
s3_secret: "myAwsSecretAccessKeyID"
To retrieve your AWS credentials on your local machine, execute the command below in your terminal:
cat ~/.aws/credentials
AWS_SECRET_ACCESS_KEY
Replace <> with your AWS CLI secret access key:
AWS_SECRET_ACCESS_KEY: "myAwsSecretAccessKeyID"
To retrieve your AWS credentials on your local machine, execute the command below in your terminal:
cat ~/.aws/credentials
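The credentials file is a simple INI file, so both values can be pulled out with awk rather than copied by hand. The sketch below demonstrates on a sample file; for real use, set CRED_FILE to ~/.aws/credentials (the key names and sample values here are illustrative):

```shell
# Sketch: extract the default-profile keys from an AWS credentials file (INI format).
# We demonstrate on a generated sample file; set CRED_FILE=~/.aws/credentials for real use.
CRED_FILE=$(mktemp)
printf '[default]\naws_access_key_id = AKIAEXAMPLEKEYID\naws_secret_access_key = exampleSecretKey\n' > "$CRED_FILE"

# Split each line on " = " and print the value for the two keys of interest.
awk -F' *= *' '/^aws_access_key_id/     {print "access key: " $2}
               /^aws_secret_access_key/ {print "secret key: " $2}' "$CRED_FILE"
```

If you use a named profile, grab the lines under that profile's `[section]` header instead of the first match.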
Namespace
Search the file for "namespace" and replace <namespace>/<namespace>-prometheus-prometheus with the following:
apica-ascent/logiq-prometheus-prometheus
To modify the administrator username and password, replace the existing details with your desired credentials.
admin_name: "[email protected]"
admin_password: "flash-password"
admin_org: "flash-org"
admin_email: "[email protected]"
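Once all the placeholders are filled in, the edited keys in values.yaml end up looking something like the flattened sketch below. Every value shown is an example drawn from the steps above, and the keys may be nested differently in the actual file:

```yaml
# Illustrative values.yaml fragment; every value is an example, not a default.
# Keys may be nested under different sections in the real file.
awsServiceEndpoint: https://s3.eu-north-1.amazonaws.com
s3_bucket: "adf-helm-bucket"
s3_region: "eu-north-1"
s3_url: "https://s3.eu-north-1.amazonaws.com"
redis_host: "apicaelasticache.hdsue3.0001.eun1.cache.amazonaws.com"
postgres_host: "apicadatafabricenvironment-aurorapostgresql-0vqryrig2lwe.cluster-cbyqzzm9ayg8.eu-north-1.rds.amazonaws.com"
postgres_user: "apicauser"
postgres_password: "myPassword123!!"
s3_access: "myAwsAccessKeyID"
s3_secret: "myAwsSecretAccessKeyID"
AWS_ACCESS_KEY_ID: "myAwsAccessKeyID"
AWS_SECRET_ACCESS_KEY: "myAwsSecretAccessKeyID"
```

A quick sanity check before deploying: every endpoint above should match an output from the first CloudFormation stack, and the bucket and region must agree with each other.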
Save the file
Execute the following command:
helm repo add apica-repo https://logiqai.github.io/helm-charts
Expected output:
"apica-repo" has been added to your repositories
Ensure that the path to your values.yaml file is correctly set, or run the commands from the directory that contains the file. Use the following command to deploy:
helm upgrade --install apica-ascent -n apica-ascent -f values.yaml apica-repo/apica-ascent
Expected output:
NAME: apica-ascent
LAST DEPLOYED: Tue Mar 26 15:38:48 2024
NAMESPACE: apica-ascent
STATUS: deployed
REVISION: 1
TEST SUITE: None
Access the Ascent UI
To get the default Service Endpoint, execute the below command:
kubectl get svc -n apica-ascent | grep LoadBalancer
Under the EXTERNAL-IP column you will find a URL similar to the one below:
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP
logiq-kubernetes-ingress   LoadBalancer   <cluster_ip>   a874cbfee1cc94ea18228asd231da444-2051223870.eu-north-1.elb.amazonaws.com
Open this URL in your browser to access the Ascent UI.
Login credentials are as defined in your values.yaml file.
Security Group Rules for EKS Cluster
Now that the EKS cluster has been created, we can set up the access rules for our VPC.
From the first stack, find the SecurityGroups value that was created.
Navigate to either EC2 or VPC using the search bar, then look for Security Groups in the left-hand menu.
Search for your security group using the ID from the first stack, then click the ID.
Click on "Edit inbound rules"

Now we need to set up two rules:
Custom TCP on port 6379 (Redis), with your VPC CIDR as the source
PostgreSQL (TCP) on port 5432, with your VPC CIDR as the source
Your VPC CIDR can be found by navigating to VPC and selecting your region in the VPCs dropdown; the VPC list has a column called IPv4 CIDR. Copy your CIDR and use it as the source.
Click "Save Rules"
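If you prefer to codify these rules rather than add them in the console, the same two inbound rules can be expressed as CloudFormation resources. In this sketch, the security-group ID and CIDR are placeholders for your own values:

```yaml
# Sketch: the two inbound rules as CloudFormation SecurityGroupIngress resources.
# sg-0123456789abcdef0 and 10.0.0.0/16 are placeholders for your security-group ID and VPC CIDR.
Resources:
  RedisIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: sg-0123456789abcdef0
      IpProtocol: tcp
      FromPort: 6379        # ElastiCache Redis
      ToPort: 6379
      CidrIp: 10.0.0.0/16
  PostgresIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: sg-0123456789abcdef0
      IpProtocol: tcp
      FromPort: 5432        # Aurora PostgreSQL
      ToPort: 5432
      CidrIp: 10.0.0.0/16
```

Scoping the source to the VPC CIDR (rather than 0.0.0.0/0) keeps Redis and PostgreSQL reachable only from inside the VPC, including the EKS worker nodes.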
Enabling HTTPS on your instance (optional)
Use auto-generated self-signed certificate
To enable HTTPS using self-signed certificates, pass additional options to Helm and provide the domain name for the ingress controller.
In the example below, replace apica.my-domain.com with the https domain where this cluster will be available.
helm upgrade --install apica-ascent -n apica-ascent \
--set global.domain=apica.my-domain.com \
--set ingress.tlsEnabled=true \
--set kubernetes-ingress.controller.defaultTLSSecret.enabled=true \
-f values.yaml apica-repo/apica-ascent
Use your own certificate
To customize your TLS configuration by using your own certificate, you need to create a Kubernetes secret. By default, if you do not supply your own certificates, Kubernetes will generate a self-signed certificate and create a secret for it automatically. To use your own certificates, run the following command, replacing myCert.crt and myKey.key with the paths to your certificate and key files respectively:
kubectl create secret tls https --cert=myCert.crt --key=myKey.key
To include your own secret, execute the command below, replacing $secretName with your secret name and apica.my-domain.com with the HTTPS domain where this cluster will be available.
helm upgrade --install apica-ascent -n apica-ascent \
--set global.domain=apica.my-domain.com \
--set ingress.tlsEnabled=true \
--set kubernetes-ingress.controller.defaultTLSSecret.enabled=true \
--set kubernetes-ingress.controller.defaultTLSSecret.secret=$secretName \
-f values.yaml apica-repo/apica-ascent