Deploying Apica Ascent PaaS on Kubernetes

This page describes deploying Apica Ascent PaaS on a Kubernetes cluster using HELM 3 charts.

1. Prerequisites

  • Kubernetes 1.18, 1.19 or 1.20

  • Helm 3.2.0+

  • Dynamic PV provisioner support in the underlying infrastructure

  • ReadWriteMany volumes for deployment scaling

Apica Ascent K8S components are made available as HELM charts.

1.1 Add Apica Ascent helm repository

1.1.0 Adding Apica Ascent's helm repository to your HELM repositories

helm repo add apica-repo https://logiqai.github.io/helm-charts

The HELM repository will be named apica-repo. When installing charts from this repository, make sure to use the repository name as the prefix, e.g.

helm install <deployment_name> apica-repo/<chart_name>

You can now run helm search repo apica-repo to see the available helm charts:

$ helm search repo apica-repo
NAME                      CHART VERSION   APP VERSION   DESCRIPTION
apica-repo/apica-ascent   v3.0.9          v3.5.9.1      Apica Ascent Observability HELM chart for Kubernetes

1.1.1 Update your HELM chart

If you already added Apica Ascent's HELM repository in the past, you can fetch updated software releases by running helm repo update:

$ helm repo update
$ helm search repo apica-repo
NAME                      CHART VERSION   APP VERSION   DESCRIPTION
apica-repo/apica-ascent   v3.0.9          v3.5.9.1      Apica Ascent Observability HELM chart for Kubernetes

1.2 Create namespace where Apica Ascent will be deployed

NOTE: Namespace name cannot be more than 15 characters in length

kubectl create namespace apica-ascent

This will create a namespace apica-ascent where we will deploy the Apica Ascent Log Insights stack.

If you choose a different name for the namespace, remember to use that same namespace for the remainder of the steps.

1.3 Prepare your Values YAML file

Sample YAML files for small, medium, and large cluster configurations can be downloaded at the following links:

  • values.small.yaml
  • values.medium.yaml
  • values.large.yaml

Please refer to Section 3.10 for sizing your Apica Ascent cluster as specified in these YAML files.

These YAML files can be passed to the deployment with the -f parameter, as shown below.

helm install apica-ascent --namespace apica-ascent \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent -f values.small.yaml
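For reference, the overrides in these values files use the same keys as the --set flags shown throughout this page. A minimal sketch, with illustrative values only (the downloadable files are authoritative):

# Illustrative values-file sketch; bucket name and storage class are placeholders
global:
  persistence:
    storageClass: gp3            # see Section 3.4 for per-cloud storage classes
  environment:
    s3_bucket: my-ascent-bucket  # see Section 3.2 for S3 configuration
    s3_region: us-east-1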

2. Install Apica Ascent

helm install apica-ascent --namespace apica-ascent \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent
This will install Apica Ascent and expose the Apica Ascent services and UI on the ingress IP. If you plan to use an AWS S3 bucket, please refer to section 3.2 before running this step. Please refer to Section 3.4 for details about the storage class. Service ports are described in the port details section. You should now be able to go to http://ingress-ip/

3. Customizing the deployment

3.1 Enabling https for the UI

helm install apica-ascent --namespace apica-ascent \
--set global.domain=ascent.my-domain.com \
--set ingress.tlsEnabled=true \
--set kubernetes-ingress.controller.defaultTLSSecret.enabled=true \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent

After updating your DNS server to point the domain at the ingress controller's service IP, you should be able to log in to the Apica Ascent UI at https://ascent.my-domain.com, the domain you set in the ingress.
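To find the ingress controller's external IP for the DNS record, list the services in the deployment namespace (the exact ingress service name depends on the chart version, so this is a general lookup):

# Look for the kubernetes-ingress controller service and its EXTERNAL-IP
kubectl get svc --namespace apica-ascent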

The default login and password to use is flash-admin@foo.com and flash-password. You can change these in the UI once logged in. The HELM chart can also override the default admin settings; see section 3.7 on customizing the admin account.

The domain ascent.my-domain.com also fronts all the Apica Ascent service ports as described in the port details section.

| HELM Option | Description | Default |
| ----------- | ----------- | ------- |
| global.domain | DNS domain where the Apica Ascent service will be running. This is required for HTTPS. | No default |
| ingress.tlsEnabled | Enable the ingress controller to front HTTPS for services. | false |
| kubernetes-ingress.controller.defaultTLSSecret.enabled | Specify if a default certificate is enabled for the ingress gateway. | false |
| kubernetes-ingress.controller.defaultTLSSecret.secret | Specify the name of a TLS Secret for the ingress gateway. If this is not specified, a secret is automatically generated when option kubernetes-ingress.controller.defaultTLSSecret.enabled above is enabled. | No default |

3.1.1 Passing an ingress secret

If you want to pass your own ingress secret, you can do so when installing the HELM chart

helm install apica-ascent --namespace apica-ascent \
--set global.domain=ascent.my-domain.com \
--set ingress.tlsEnabled=true \
--set kubernetes-ingress.controller.defaultTLSSecret.enabled=true \
--set kubernetes-ingress.controller.defaultTLSSecret.secret=<secret_name> \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent

3.2 Using an AWS S3 bucket

Depending on your requirements, you may want to host your storage in your own K8S cluster or create a bucket in a cloud provider like AWS.

Please note that cloud providers may charge data transfer costs between regions. It is important that the Apica Ascent cluster be deployed in the same region where the S3 bucket is hosted.

3.2.1 Create an access/secret key pair for creating and managing your bucket

Go to AWS IAM console and create an access key and secret key that can be used to create your bucket and manage access to the bucket for writing and reading your log files

3.2.2 Deploy the Apica Ascent helm in gateway mode

Make sure to pass your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and provide a bucket name. The S3 gateway acts as a caching gateway and helps reduce API costs.

Create the S3 bucket manually in AWS, with a globally unique name, in the region where you plan to host the deployment, along with access and secret keys that can manage it. Check that the access and secret keys work with the newly created bucket.

Once the bucket is created and the access/secret keys are verified, provide the bucket name and access credentials in the step below.
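For example, the bucket can be created and sanity-checked with the AWS CLI; my-ascent-bucket and us-west-2 below are placeholders:

# Create the bucket in the target region (LocationConstraint is required outside us-east-1)
aws s3api create-bucket --bucket my-ascent-bucket --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# Verify that the access/secret key pair can write and read the bucket
echo test > probe.txt
aws s3 cp probe.txt s3://my-ascent-bucket/probe.txt
aws s3 ls s3://my-ascent-bucket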

helm install apica-ascent --namespace apica-ascent --set global.domain=ascent.my-domain.com \
--set global.environment.s3_bucket=<bucket_name> \
--set global.environment.awsServiceEndpoint=https://s3.<region>.amazonaws.com \
--set global.environment.s3_region=<region> \
--set global.environment.AWS_ACCESS_KEY_ID=<access_key> \
--set global.environment.AWS_SECRET_ACCESS_KEY=<secret_key> \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent
Additionally, provide a valid Amazon service endpoint for S3, or the configuration will default to https://s3.us-east-1.amazonaws.com.

| HELM Option | Description | Default |
| ----------- | ----------- | ------- |
| global.cloudProvider | The cloud provider hosting the S3-compatible bucket. Currently only aws is supported. | aws |
| global.environment.s3_bucket | Name of the S3 bucket in AWS | logiq |
| global.environment.awsServiceEndpoint | S3 service endpoint: https://s3.<region>.amazonaws.com | https://s3.us-east-1.amazonaws.com |
| global.environment.AWS_ACCESS_KEY_ID | AWS access key for accessing the bucket | No default |
| global.environment.AWS_SECRET_ACCESS_KEY | AWS secret key for accessing the bucket | No default |
| global.environment.s3_region | AWS region where the bucket is hosted | us-east-1 |

S3 providers may have restrictions on bucket names; for example, AWS S3 bucket names are globally unique.

3.3 Install Apica Ascent server certificates and Client CA [OPTIONAL]

Apica Ascent supports TLS for all ingest. Non-TLS ports are also enabled by default; however, it is recommended that non-TLS ports not be used unless running in a secure VPC or cluster. The certificates can be provided to the cluster using K8S secrets. Replace the template sections below with your Base64-encoded secret files.

If you skip this step, the Apica Ascent server automatically generates a CA and a pair of client and server certificates for you to use. You can get them from the ingest server pods under the folder /flash/certs.

apiVersion: v1
kind: Secret
metadata:
  name: logiq-certs
type: Opaque
data:
  ca.crt: {{ .Files.Get "certs/ca.crt.b64" }}
  syslog.crt: {{ .Files.Get "certs/syslog.crt.b64" }}
  syslog.key: {{ .Files.Get "certs/syslog.key.b64" }}

Save the secret file, e.g. as logiq-certs.yaml, and proceed to install the secret in the same namespace where you want to deploy Apica Ascent.
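Assuming the Base64 content has been inlined into the file (the .Files.Get references above resolve only when rendered inside a Helm chart), the secret can be installed with the standard command:

kubectl apply --namespace apica-ascent -f logiq-certs.yaml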

The secret can now be passed into the Apica Ascent deployment:

helm install apica-ascent --namespace apica-ascent --set global.domain=ascent.my-domain.com \
--set logiq-flash.secrets_name=logiq-certs \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent
| HELM Option | Description | Default |
| ----------- | ----------- | ------- |
| logiq-flash.secrets_name | TLS certificate key pair and CA cert for TLS transport | No default |

3.4 Changing the storage class

If you are planning to use a specific storage class for your volumes, you can customize it for the Apica Ascent deployment. By default, Apica Ascent uses the standard storage class.

It is quite possible that your environment uses a different storage class name for the provisioner; in that case, use the appropriate storage class name. For example, if a user creates a storage class ebs-volume for the EBS provisioner of their cluster, you can use ebs-volume instead of gp3 as suggested below.
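To see which storage classes and provisioners your cluster already defines, the standard lookup is:

kubectl get storageclass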

| Cloud Provider | K8S StorageClassName | Default Provisioner |
| -------------- | -------------------- | ------------------- |
| AWS | gp3 | EBS |
| Azure | UltraSSD_LRS | Azure Ultra disk |
| GCP | standard | pd-standard |
| Digital Ocean | do-block-storage | Block Storage Volume |
| Oracle | oci | Block Volume |
| Microk8s | microk8s-hostpath | |

Provisioning GP3 CSI Driver on AWS EKS - https://docs.logiq.ai/deploying-logiq/logiq-paas-deployment/deploying-logiq-eks-on-aws-using-cloudformation#5.3-enable-gp3-storage-class-for-eks

helm upgrade --namespace apica-ascent \
--set global.persistence.storageClass=<storage class name> \
apica-ascent apica-repo/apica-ascent

3.5 Using external AWS RDS Postgres database instance

To use external AWS RDS Postgres database for your Apica Ascent deployment, execute the following command.

helm install apica-ascent --namespace apica-ascent \
--set global.chart.postgres=false \
--set global.environment.postgres_host=<postgres-host-ip/dns> \
--set global.environment.postgres_user=<username> \
--set global.environment.postgres_password=<password> \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent
| HELM Option | Description | Default |
| ----------- | ----------- | ------- |
| global.chart.postgres | Deploy Postgres, which is needed for Apica Ascent metadata. Set this to false if an external Postgres cluster is being used. | true |
| global.environment.postgres_host | Host IP/DNS for external Postgres | postgres |
| global.environment.postgres_user | Postgres admin user | postgres |
| global.environment.postgres_password | Postgres admin user password | postgres |
| global.environment.postgres_port | Host port for external Postgres | 5432 |

While configuring RDS, create a new parameter group that sets autovacuum to true (the value "1"), and associate this parameter group with your RDS instance.

Autovacuum automates the execution of the VACUUM and ANALYZE (to gather statistics) commands. Autovacuum checks for bloated tables in the database and reclaims the space for reuse.
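For example, with the AWS CLI this might look as follows; the parameter group name and instance identifier are placeholders, and the engine family should match your RDS Postgres version:

# Create a parameter group for the Postgres engine family in use
aws rds create-db-parameter-group \
  --db-parameter-group-name ascent-postgres-params \
  --db-parameter-group-family postgres13 \
  --description "Apica Ascent - autovacuum enabled"

# Set autovacuum to 1 (true); it is a dynamic parameter, so it applies immediately
aws rds modify-db-parameter-group \
  --db-parameter-group-name ascent-postgres-params \
  --parameters "ParameterName=autovacuum,ParameterValue=1,ApplyMethod=immediate"

# Associate the parameter group with the RDS instance
aws rds modify-db-instance \
  --db-instance-identifier my-ascent-rds \
  --db-parameter-group-name ascent-postgres-params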

3.6 Upload Apica Ascent professional license
The deployment described above offers a 30-day trial license. Send an e-mail to support@apica.io to obtain a professional license. After obtaining the license, use the apicactl tool to apply the license to the deployment. Please refer to the apicactl details at https://github.com/ApicaSystem/apicactl. You will need an API token from the Apica Ascent UI (Apica Ascent Insights Login > Api-token).

Set up your Apica Ascent cluster endpoint:
- apicactl config set-cluster ascent.my-domain.com

Set your Apica Ascent UI API token:
- apicactl config set-token api_token

Upload your Apica Ascent deployment license:
- apicactl license set -f=license.jws

View license information:
- apicactl license get

3.7 Customize Admin account

helm install apica-ascent --namespace apica-ascent \
--set global.environment.admin_name="Ascent Administrator" \
--set global.environment.admin_password="admin_password" \
--set global.environment.admin_email="admin@example.com" \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent
| HELM Option | Description | Default |
| ----------- | ----------- | ------- |
| global.environment.admin_name | Apica Ascent administrator name | flash-admin@foo.com |
| global.environment.admin_password | Apica Ascent administrator password | flash-password |
| global.environment.admin_email | Apica Ascent administrator e-mail | flash-admin@foo.com |

3.8 Using external Redis instance

To use external Redis for your Apica Ascent deployment, execute the following command.

NOTE: At this time Apica Ascent only supports connecting to a Redis cluster in a local VPC without authentication. If you are using an AWS Elasticache instance, do not turn on encryption-in-transit or cluster mode.

helm install apica-ascent --namespace apica-ascent \
--set global.chart.redis=false \
--set global.environment.redis_host=<redis-host-ip/dns> \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent
| HELM Option | Description | Default |
| ----------- | ----------- | ------- |
| global.chart.redis | Deploy Redis, which is needed for log tailing. Set this to false if an external Redis cluster is being used. | true |
| global.environment.redis_host | Host IP/DNS of the external Redis cluster | redis-master |
| global.environment.redis_port | Host port where the external Redis service is exposed | 6379 |

3.9 Configuring cluster id
When deploying Apica Ascent, configure the cluster id to monitor your own Apica Ascent deployment. For details about the cluster_id, refer to the section on managing multiple K8S clusters.

helm install apica-ascent --namespace apica-ascent \
--set global.environment.cluster_id=<cluster id> \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent
| HELM Option | Description | Default |
| ----------- | ----------- | ------- |
| global.environment.cluster_id | Cluster id used for the K8S cluster running Apica Ascent. See the section on managing multiple K8S clusters for more details. | Apica AscentQ |

3.10 Sizing your Apica Ascent cluster
When deploying Apica Ascent, size your infrastructure to provide appropriate vCPU and memory requirements. We recommend the following minimum sizes for the small, medium, and large cluster specifications from the values YAML files.

| Apica Ascent Cluster | vCPU | Memory | Node Count |
| -------------------- | ---- | ------ | ---------- |
| small | 24 | 32 GB | 3 |
| medium | 40 | 64 GB | 5 |
| large | 64 | 128 GB | 8 |

3.11 NodePort/ClusterIP/LoadBalancer

The service type configurations are exposed in values.yaml as shown below:

flash-coffee:
  service:
    type: ClusterIP
logiq-flash:
  service:
    type: NodePort
kubernetes-ingress:
  controller:
    service:
      type: LoadBalancer

For example, if you are running on bare metal and want an external load balancer to front Apica Ascent, configure all services as NodePort:

helm install apica-ascent -n apica-ascent -f values.yaml \
--set flash-coffee.service.type=NodePort \
--set logiq-flash.service.type=NodePort \
--set kubernetes-ingress.controller.service.type=NodePort \
apica-repo/apica-ascent

3.12 Using Node Selectors

The Apica Ascent stack deployment can be optimized using node labels and node selectors to place the various components of the stack optimally.

logiq.ai/node=ingest

The node label logiq.ai/node above can be used to control the placement of ingest pods for log data onto ingest-optimized nodes. This allows for managing cost and instance sizing effectively.
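For example, a node can be labeled for ingest placement with the standard kubectl command (the node name below is a placeholder):

kubectl label nodes ip-10-0-1-23.ec2.internal logiq.ai/node=ingest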

The various nodeSelectors are defined in the globals section of the values.yaml file

globals:
  nodeSelectors:
    enabled: true
    ingest: ingest
    infra: common
    other: common
    db: db
    cache: cache
    ingest_sync: sync

In the example above, different node pools are in use: ingest, common, db, cache, and sync.

Node selectors are enabled by setting enabled to true under globals.nodeSelectors.

3.13 Installing Grafana

The Apica Ascent stack includes Grafana as an optional component of the deployment. To enable Grafana in your cluster, follow the steps below.

helm upgrade --install apica-ascent --namespace apica-ascent \
--set global.chart.grafana=true \
--set global.persistence.storageClass=<storage class name> apica-repo/apica-ascent

The Grafana instance is exposed at port 3000 on the ingress controller. The deployed Grafana instance uses the same credentials as the Apica Ascent UI
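Assuming your ingress controller's external address is reachable, Grafana can then be opened directly on that port, e.g.:

http://<ingress-ip>:3000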

3.14 Configuring ALB Ingress on EKS

Apica Ascent creates an Ingress resource in the namespace it is deployed in.

Creating an OIDC provider for your EKS cluster - https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html

Please refer to the EKS configuration on how to automatically provision an ALB here - https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html

4. Teardown

If and when you want to decommission the installation, use the following commands.

helm delete apica-ascent --namespace apica-ascent
helm repo remove apica-repo
kubectl delete namespace apica-ascent

If you followed the installation steps in section 3.2 - Using an AWS S3 bucket, you may want to delete the S3 bucket that was specified at deployment time.
