Deploying Ascent on AWS EKS with Aurora PostgreSQL and ElastiCache Redis using CloudFormation
Before proceeding, ensure the following prerequisites are met:
Note: These resources will be automatically generated during the CloudFormation deployment process and are not prerequisites for initiating it.
The CloudFormation template provisions the following resources:
S3 Bucket
EKS Cluster
EKS Node Pools
Aurora PostgreSQL
ElastiCache Redis
Note: Ensure you're operating within the same region as your Virtual Private Cloud (VPC).
Step 3: Configure stack options
Nothing is required here; scroll to the bottom of the page and click "Next"
Step 4: Review and create
Review your configuration, acknowledge the required capabilities, and click "Submit"
Deployment might take a while. Please wait until the stack status shows "CREATE_COMPLETE" before proceeding.
If the stack fails for any reason, check the stack events (select your stack and click "Events") to identify the error. To recover, delete the stack and repeat the steps above.
After successfully deploying the initial CloudFormation stack, follow these steps to create an EKS Cluster:
From the previous step, click "Stacks", or use the search bar at the top left to search for "CloudFormation" and select the CloudFormation service
At the top right, click "Create stack" and select "With new resources (standard)"
Step 1: Create stack
Click "Next"
Step 2: Specify stack details
Enter a stack name (whatever you want to call it)
Enter a name for the EKS cluster (Save this value)
Enter the ARN value of the IAM role you created in the previous CloudFormation deployment (Navigate to the previous stack and check outputs tab to find the value for the key LogiqEKSClusterRole)
Select a VPC ID in the dropdown (this guide assumes you created the VPC previously)
Select two private subnets (with a NAT gateway attached) in the above VPC, one from each dropdown.
Enter "2" in the fields for “Ingest Worker Node count” and “Common Worker Node count”
Enter the S3 bucket name you used in the previous CloudFormation deploy in “S3 bucket for Logiq”
Click "Next"
Step 3: Configure stack options
Nothing is required here; scroll to the bottom of the page and click "Next"
Step 4: Review and create
Review your configuration, acknowledge the required capabilities, and click "Submit"
Deployment might take a while. Please wait until the stack status shows "CREATE_COMPLETE" before proceeding.
Open a terminal and execute the following:
Example:
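A typical command at this point (assuming the cluster name and region from the stack you just created) is:

```shell
# Point kubectl at the new EKS cluster.
# --name and --region are placeholders; use your cluster name and region.
aws eks update-kubeconfig --name my-eks-cluster --region eu-north-1
```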
Expected output:
Execute the following command:
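A typical verification at this stage (a sketch; the guide's exact command may differ) is:

```shell
# Worker nodes should appear with STATUS "Ready" once they have joined the cluster
kubectl get nodes
```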
Expected output:
Download the following file:
In your terminal, change directory to where you downloaded the file (using the cd command)
Example:
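For instance, if the file was saved to your Downloads folder (the path below is illustrative):

```shell
# Move into the directory that contains the downloaded file
cd ~/Downloads
```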
Execute the following command:
Expected output:
Execute the following command:
Expected output:
Execute the following command:
Expected output:
Execute the following command:
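Given the Helm deployment later in this guide, the commands in this section most plausibly prepare Helm; the sketch below makes that assumption, and the namespace name, repository alias, and repository URL are all placeholders to confirm against Apica's published charts:

```shell
# Create a namespace for the deployment (name is an assumption)
kubectl create namespace apica-ascent

# Register the chart repository (alias and URL are assumptions) and refresh the local index
helm repo add apica-repo https://logiqai.github.io/helm-charts
helm repo update
```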
Download the following file:
Open the file in a text editor and replace the following values:
awsServiceEndpoint:
Replace <region>
with your specific AWS region, for example eu-north-1
. The updated URL format should look like this:
s3_bucket:
Replace the placeholder <>
with the actual name of the S3 bucket that was created during the initial CloudFormation deployment:
s3_region:
Replace the AWS service endpoint region in the URL with the appropriate region, for example, eu-north-1
:
s3_url:
Replace <region>
with the region where you installed it. For example:
redis_host:
Replace <>
with your specific ElastiCacheCluster endpoint generated from the first CloudFormation deploy. For example, if your generated endpoint is apicaelasticache.hdsue3.0001.eun1.cache.amazonaws.com
, you would update the configuration as follows:
You can find this value in the Outputs tab of the first CloudFormation deployment
postgres_host:
Replace <>
with your AuroraEndpoint endpoint. For example, if your generated endpoint is apicadatafabricenvironment-aurorapostgresql-0vqryrig2lwe.cluster-cbyqzzm9ayg8.eu-north-1.rds.amazonaws.com
, you would update the configuration as follows:
You can find this value in the Outputs tab of the first CloudFormation deployment
postgres_user:
Replace <>
with the master username you created during the first CloudFormation deployment:
postgres_password:
Replace <>
with the password for the user you created during the first CloudFormation deployment:
s3_access:
Replace <>
with your AWS CLI access key id.
To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
AWS_ACCESS_KEY_ID
Replace <>
with your AWS CLI access key id.
To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
s3_secret
Replace <>
with your AWS CLI secret access key.
To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
AWS_SECRET_ACCESS_KEY
Replace <>
with your AWS CLI secret access key.
To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
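All four credential values above can be read back from your local AWS CLI configuration, for example:

```shell
# Print the stored access key ID and secret access key for the default profile
aws configure get aws_access_key_id
aws configure get aws_secret_access_key
```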
Namespace
Search the file for "namespace" and replace <namespace>/<namespace>-prometheus-prometheus
with the following:
To modify the administrator username and password, replace the existing details with your desired credentials.
Save the file
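Taken together, the edited section of values.yaml might look like the sketch below. Every value shown is an illustrative placeholder, and the key nesting is an assumption, so match it against the keys already present in your file:

```yaml
# Illustrative placeholders only; substitute your own stack outputs and credentials
global:
  environment:
    awsServiceEndpoint: https://s3.eu-north-1.amazonaws.com
    s3_bucket: my-ascent-bucket
    s3_region: eu-north-1
    s3_url: https://s3.eu-north-1.amazonaws.com
    redis_host: apicaelasticache.hdsue3.0001.eun1.cache.amazonaws.com
    postgres_host: mycluster.cluster-abc123.eu-north-1.rds.amazonaws.com
    postgres_user: postgres
    postgres_password: my-master-password
    s3_access: AKIAXXXXXXXXXXXXXXXX
    s3_secret: my-secret-access-key
```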
Execute the following command:
Expected output:
Ensure that the path to your values.yaml
file is correctly set, or run the commands from the directory that contains the file. Use the following command to deploy:
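A typical invocation (the release name, chart reference, and namespace are assumptions) is:

```shell
# Install the chart using your edited values file
helm install apica-ascent apica-repo/apica-ascent \
  --namespace apica-ascent \
  -f values.yaml
```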
Expected output:
To get the default Service Endpoint, execute the below command:
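Listing the services and reading the EXTERNAL-IP column is typically done with (the namespace is an assumption):

```shell
# The LoadBalancer service exposes the Ascent UI endpoint
kubectl get service -n apica-ascent
```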
Under the EXTERNAL-IP
column you will find a URL similar to below:
Use this in your browser to access the Ascent UI
Login credentials are as defined in your values.yaml file
As the EKS Cluster has been created, we can now set up the access rules for our VPC.
From the first stack, we need to find the SecurityGroups value that was created
Navigate to either EC2 or VPC using the search bar, then look for Security Groups in the left-hand menu
Search for your security group using the ID extracted from the first stack and click on the ID
Click on "Edit inbound rules"
Now we need to set up two rules:
TCP on port 6379, with your VPC CIDR as the source
PostgreSQL (TCP) on port 5432, with your VPC CIDR as the source
Click "Save Rules"
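The same two rules can also be added from the CLI, as a sketch (replace the security group ID and CIDR with your own values):

```shell
# Allow Redis (6379) and PostgreSQL (5432) traffic from the VPC CIDR
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 6379 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5432 --cidr 10.0.0.0/16
```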
To enable HTTPS using self-signed certificates, add the additional options to the Helm command and provide the domain name for the ingress controller.
In the example below, replace apica.my-domain.com
with the https domain where this cluster will be available.
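A sketch of such an invocation follows; the --set keys are assumptions, so check the chart's values for the authoritative names:

```shell
# Enable ingress TLS with a self-signed certificate (value keys are assumptions)
helm upgrade --install apica-ascent apica-repo/apica-ascent \
  --namespace apica-ascent -f values.yaml \
  --set global.domain=apica.my-domain.com \
  --set ingress.tlsEnabled=true
```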
To customize your TLS configuration by using your own certificate, you need to create a Kubernetes secret. By default, if you do not supply your own certificates, Kubernetes will generate a self-signed certificate and create a secret for it automatically. To use your own certificates, run the following command, replacing myCert.crt and myKey.key with the paths to your certificate and key files respectively:
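Creating a TLS secret from a certificate and key pair is done with kubectl (the secret name and namespace below are placeholders):

```shell
# Create a TLS secret from your own certificate and key
kubectl create secret tls my-tls-secret --cert=myCert.crt --key=myKey.key -n apica-ascent
```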
To include your own secret, execute the command below, replacing $secretName with your secret name, and apica.my-domain.com with the HTTPS domain where this cluster will be available.
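Combining your secret with the Helm options might look like the sketch below (the --set keys are assumptions; check the chart's values for the authoritative names):

```shell
# Point the ingress at your own TLS secret (value keys are assumptions)
helm upgrade --install apica-ascent apica-repo/apica-ascent \
  --namespace apica-ascent -f values.yaml \
  --set global.domain=apica.my-domain.com \
  --set ingress.tlsEnabled=true \
  --set ingress.tlsSecretName=$secretName
```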
On the following page (step 1 of Stack creation) select "Template is ready" and "Amazon S3 URL". In the "Amazon S3 URL" textfield, enter
Make sure to apply
On the following page (step 1 of Stack creation) select "Template is ready" and "Amazon S3 URL". In the "Amazon S3 URL" textfield, enter
Note: Once the stack is fully provisioned, authenticate the AWS CLI. If you have not yet downloaded and set up the AWS CLI, you can do so here:
FOR APICA ONLY:
Navigate to VPC and select your region in the VPCs dropdown; the VPC list has a column called IPv4 CIDR. Copy your CIDR and use it as the source.