Before proceeding, ensure the following prerequisites are met:
kubectl and Helm are installed on your machine.
AWS CLI is installed and configured on your machine.
You have permissions on your AWS account to create resources including Elastic Kubernetes Service (EKS), S3 Bucket, Aurora PostgreSQL, and ElastiCache.
You have configured an AWS Virtual Private Cloud (VPC) with two (2) private subnets.
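As a quick sanity check, you can verify the tooling and credentials from your workstation before you start; a minimal sketch using standard CLI commands:

aws --version
aws sts get-caller-identity
kubectl version --client
helm version

The second command confirms that the AWS CLI is configured with valid credentials for the account you intend to deploy into.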
AWS Resources
Note: These resources will be automatically generated during the CloudFormation deployment process and are not prerequisites for initiating it.
The CloudFormation template provisions the following resources:
S3 Bucket
EKS Cluster
EKS Node Pools
Aurora PostgreSQL
ElastiCache Redis
Deploy IAM Role, Aurora PostgreSQL and ElastiCache
Note: Ensure you're operating within the same region as your Virtual Private Cloud (VPC).
Click "Next"
Enter a stack name
Enter an IAM role name for Logiq-EKS (save this value for later). This will create the IAM role.
Enter a master username for PostgreSQL (save this value for later). The master username can include any printable ASCII character except /, ', ", @, or a space.
Enter a password for the above PostgreSQL user (save this value for later). The master password must be more than 8 characters.
Enter a database name for the PostgreSQL database. The name must start with a lowercase letter.
Enter your VPC ID. You can find it by searching for "VPC" in the top-left search bar, selecting the VPC service, clicking the VPCs resource, and selecting your region. Locate your VPC and copy the VPC ID.
From where you left off after extracting your VPC ID, select Subnets in the left-hand menu and copy the two private subnet IDs you intend to use.
Nothing is required here; navigate to the bottom of the page and click "Next".
Review your configuration, acknowledge the IAM capabilities, and click "Submit".
Deployment might take a while. Please wait until the stack status shows "CREATE_COMPLETE" before proceeding.
If the stack fails for any reason, check the stack events (select your stack and click "Events") to understand the error. To fix it, delete the stack and repeat the steps above.
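The same checks can be done from the AWS CLI if you prefer; a sketch, replacing <stack-name> with the name you chose above:

aws cloudformation describe-stack-events --stack-name <stack-name> --max-items 20
aws cloudformation delete-stack --stack-name <stack-name>

The first command lists the most recent stack events (including failure reasons); the second deletes the failed stack so you can redeploy.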
Create EKS Cluster
Note: This is the second time you're creating a stack in CloudFormation. Do not mix them up.
Create an EKS Cluster with CloudFormation
Enter a stack name (this can be any name you choose)
Enter a name for the EKS cluster (Save this value)
Enter the ARN of the IAM role you created in the previous CloudFormation deployment (navigate to the previous stack and check the Outputs tab for the value of the key LogiqEKSClusterRole)
Select a VPC ID from the dropdown (this guide assumes you created it previously)
Select two private subnets with a NAT gateway attached for the above VPC, one from each dropdown.
Enter "2" in the fields for “Ingest Worker Node count” and “Common Worker Node count”
Enter the S3 bucket name you used in the previous CloudFormation deploy in “S3 bucket for Logiq”
Click "Next"
Step 3: Configure stack options and click "Next"
Step 4: Review and create
Deployment might take a while. Please wait until the stack status shows "CREATE_COMPLETE" before proceeding.
AWS CLI commands
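The typical flow here is to point kubectl at the new cluster and confirm the worker nodes are ready; a sketch, replacing <cluster-name> and <region> with the cluster name and region used above (run this from a machine that can reach the cluster, such as the bastion host described below):

aws eks update-kubeconfig --name <cluster-name> --region <region>
kubectl get nodes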
Access the bastion host via SSH from your workstation to ensure it works as expected.
Check that the security group attached to your EKS control plane can receive traffic on port 443 from the public subnet. You can create this rule by adding an inbound HTTPS (443) rule to the EKS security group with the bastion host's security group ID as the source.
This will enable communication between the bastion host in the public subnet and the cluster in the private subnets.
Access the bastion host and then use it to communicate with the cluster just as you would with your personal machine.
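If you prefer the CLI, the 443 rule described above can be added as follows; a sketch, where <eks-sg-id> is the EKS control plane security group and <bastion-sg-id> is the bastion host's security group:

aws ec2 authorize-security-group-ingress --group-id <eks-sg-id> --protocol tcp --port 443 --source-group <bastion-sg-id>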
Replace the placeholder <> with the actual name of the S3 bucket that was created during the initial CloudFormation deployment:
s3_bucket: "adf-helm-bucket"
s3_region:
Replace the region with the AWS region you deployed to, for example, eu-north-1:
s3_region: "eu-north-1"
s3_url:
Replace <region> with the region where you installed it. For example:
s3_url: "https://s3.eu-north-1.amazonaws.com"
redis_host:
Replace <> with your specific ElastiCacheCluster endpoint generated from the first CloudFormation deploy. For example, if your generated endpoint is apicaelasticache.hdsue3.0001.eun1.cache.amazonaws.com, you would update the configuration as follows:
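redis_host: "apicaelasticache.hdsue3.0001.eun1.cache.amazonaws.com"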
You can find this value in the Outputs tab of the first CloudFormation deployment.
postgres_host:
Replace <> with your AuroraEndpoint endpoint. For example, if your generated endpoint is apicadatafabricenvironment-aurorapostgresql-0vqryrig2lwe.cluster-cbyqzzm9ayg8.eu-north-1.rds.amazonaws.com, you would update the configuration as follows:
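postgres_host: "apicadatafabricenvironment-aurorapostgresql-0vqryrig2lwe.cluster-cbyqzzm9ayg8.eu-north-1.rds.amazonaws.com"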
Ensure that the path to your values.yaml file is correctly set, or run the commands from the directory that contains the file. Use the following command to deploy:
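A sketch of the deploy command, assuming the Helm repository has been added as apica-repo and the chart is named apica-ascent (substitute the repository and chart reference you actually use); the release name and namespace below match the output that follows:

helm install apica-ascent apica-repo/apica-ascent -n apica-ascent --create-namespace -f values.yaml

A successful install prints a summary similar to the following: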
NAME: apica-ascent
LAST DEPLOYED: Tue Mar 26 15:38:48 2024
NAMESPACE: apica-ascent
STATUS: deployed
REVISION: 1
TEST SUITE: None
Access the Ascent UI
To get the default Service Endpoint, execute the below command:
kubectl get svc -n apica-ascent | grep LoadBalancer
Under the EXTERNAL-IP column you will find a URL similar to below:
NAME TYPE CLUSTER-IP EXTERNAL-IP
logiq-kubernetes-ingress LoadBalancer <cluster_ip> internal-a9205bedc8dd94d27bbd10eb799b8651-238631451.us-east-1.elb.amazonaws.com
Because the load balancer is internal, create a Windows Server instance in the same VPC and add an RDP rule to its security group. RDP into that instance and access the application using the EXTERNAL-IP URL.
The login credentials are as defined in your values.yaml file.
Security Group Rules for EKS Cluster
Now that the EKS cluster has been created, we can set up the access rules for our VPC.
From the first stack, we need to find the security group (SecurityGroups) that was created.
Navigate to either EC2 or VPC using the search bar, and then look for Security Groups in the left-hand menu.
Search for your security group using the ID extracted from the first stack and click on the ID.
Click on "Edit inbound rules"
Now we need to set up two rules:
TCP on port 6379 (Redis), with your VPC CIDR as the source
PostgreSQL (TCP) on port 5432, with your VPC CIDR as the source
Click "Save Rules"
Enabling HTTPS on your instance (optional)
Use auto-generated self-signed certificate
To enable HTTPS using self-signed certificates, add additional options to the Helm command and provide the domain name for the ingress controller.
In the example below, replace apica.my-domain.com with the https domain where this cluster will be available.
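A rough sketch of those additional options, using the same assumed chart reference as above and assuming the chart exposes values keys named global.domain and ingress.tlsEnabled (hypothetical key names, check your chart's values for the real ones):

helm upgrade apica-ascent apica-repo/apica-ascent -n apica-ascent -f values.yaml \
  --set global.domain=apica.my-domain.com \
  --set ingress.tlsEnabled=true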
To customize your TLS configuration by using your own certificate, you need to create a Kubernetes secret. By default, if you do not supply your own certificates, the deployment generates a self-signed certificate and creates a secret for it automatically. To use your own certificates, run the following command, replacing myCert.crt and myKey.key with the paths to your certificate and key files respectively:
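A minimal sketch of that command, creating the secret in the apica-ascent namespace under the example name ascent-tls (use whatever name you will pass as $secretName in the next step):

kubectl create secret tls ascent-tls --cert=myCert.crt --key=myKey.key -n apica-ascent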
To include your own secret, execute the command below, replacing $secretName with the name of your secret to enable HTTPS, and replacing apica.my-domain.com with the HTTPS domain where this cluster will be available.
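A sketch of that command, using the same assumed chart reference as above and a hypothetical chart key ingress.tlsSecretName for the secret (verify the actual key names in your chart's values):

helm upgrade apica-ascent apica-repo/apica-ascent -n apica-ascent -f values.yaml \
  --set global.domain=apica.my-domain.com \
  --set ingress.tlsEnabled=true \
  --set ingress.tlsSecretName=$secretName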
Your VPC CIDR can be found by navigating to the VPC service and opening the VPCs list for your region; the column called IPv4 CIDR shows your CIDR. Copy it and use it as the source for the rules above.
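The CIDR can also be read from the CLI; a sketch, replacing <vpc-id> with the VPC ID used earlier:

aws ec2 describe-vpcs --vpc-ids <vpc-id> --query "Vpcs[0].CidrBlock" --output text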