Deploying Apica Ascent PaaS on MicroK8s in Red Hat v8 / v9

This page describes the deployment of Apica Ascent PaaS on MicroK8s on Red Hat 8/9.


MicroK8s is a lightweight, pure-upstream Kubernetes distribution that aims to reduce the entry barriers to K8s and cloud-native application development. It comes as a single package that installs a single-node (standalone) K8s cluster in under 60 seconds. The lightweight nature of Apica Ascent PaaS enables you to deploy it on lightweight, single-node clusters like MicroK8s. The following guide takes you through deploying Apica Ascent PaaS on MicroK8s.

Prerequisites

  • Red Hat v8 / v9

  • 32 vCPU

  • 64GB RAM

  • 500GB disk space on the root partition

Installing MicroK8s

The first step in this deployment is to install MicroK8s on your machine. The following instructions pertain to RHEL-based Linux systems. To install MicroK8s on such systems, do the following.

  1. Update package lists and add the required repositories by running the following commands.

    Use the following commands to install MicroK8s on Red Hat. For reference, see the MicroK8s installation article for RHEL: https://snapcraft.io/install/microk8s/rhel

    sudo yum -y update

    # The EPEL repository can be added to RHEL 9 with the following commands:
    sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
    sudo dnf upgrade

    # The EPEL repository can be added to RHEL 8 with the following commands:
    sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
    sudo dnf upgrade

    Once you have added the EPEL repositories to the server, run the commands below. Note: use these commands if you are running RHEL on-premises with Red Hat CDN (a connected environment), where subscription management is handled automatically:

    sudo subscription-manager repos --enable "rhel-*-optional-rpms" --enable "rhel-*-extras-rpms"
    sudo yum -y update

    If you are using RHEL in a disconnected or air-gapped environment, such as the AWS, Azure, or Google Cloud clouds, run the commands below instead to pull RHEL updates via RHUI:

    sudo yum-config-manager --enable codeready-builder-for-rhel-8-rhui-rpms
    sudo yum-config-manager --enable rhel-8-supplementary-rhui-rpms

    # Enable snapd for the installation
    sudo yum install snapd
    sudo systemctl enable --now snapd.socket
    sudo ln -s /var/lib/snapd/snap /snap
  2. Install core using Snap by running the following command.

    sudo snap install core

    If the core installation times out or throws an error because the snapd socket couldn't activate, try the following commands to install core successfully:

    sudo dnf install -y epel-release
    sudo dnf update -y
    sudo dnf install -y snapd
    systemctl status snapd.socket
    sudo systemctl disable --now snapd.socket
    sudo systemctl restart snapd
    sudo ln -s /var/lib/snapd/snap /snap
    sudo snap install core
    sudo firewall-cmd --add-service=https --permanent
    sudo firewall-cmd --reload
    sudo snap refresh core
    yum repolist
    sudo snap install core
  3. Install MicroK8s using Snap by running the following command.

    sudo snap install microk8s --classic --channel=1.21/stable
  4. Join the group created by MicroK8s that enables uninterrupted usage of commands that require admin access by running the following command.

    sudo usermod -a -G microk8s $USER
  5. Create the .kube directory.

    mkdir ~/.kube
  6. Give your current user ownership of the .kube caching directory by running the following command.

    sudo chown -f -R $USER ~/.kube
  7. Generate your MicroK8s configuration and merge it with your Kubernetes configuration by running the following command.

    microk8s config > ~/.kube/config
  8. Check whether MicroK8s is up and running with the following command.

    microk8s status

MicroK8s is now installed on your machine.
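As an optional sanity check before moving on, you can wait for MicroK8s to report readiness and confirm that the single node is up:

    # Block until all MicroK8s services report ready
    microk8s status --wait-ready

    # Confirm the node is registered and Ready
    microk8s kubectl get nodes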

Enabling add-ons

Now that we have MicroK8s up and running, let's set up your cluster and enable the necessary add-ons, such as Helm, CoreDNS, ingress, storage, and a private registry. MicroK8s provides these add-ons out of the box, and they can be enabled or disabled at any time. Most of these add-ons are pre-configured to work without any additional setup.

To enable add-ons on your MicroK8s cluster, run the following commands in succession.

  1. Enable Helm 3.

microk8s enable helm3

If you get a message saying you have insufficient permissions, the commands above that interpolate your current user via the $USER variable did not work. You can fix this by adding your user to the microk8s group and taking ownership of the .kube directory, specifying the username explicitly:

sudo usermod -a -G microk8s ec2-user
sudo chown -R ec2-user ~/.kube
  2. Enable a default storage class that allocates storage from a host directory.

microk8s enable storage
  3. Enable CoreDNS.

microk8s enable dns
  4. Enable ingress.

To enable the Ingress controller in MicroK8s, run the following command:

microk8s enable ingress
  5. Enable HTTPS (optional).

This step is optional; you can still access the site using HTTP if you don't install an SSL certificate on the host.

How to Create a Self-Signed Certificate using OpenSSL:

  • Create server private key

    openssl genrsa -out cert.key 2048
  • Create certificate signing request (CSR)

    openssl req -new -key cert.key -out cert.csr
  • Sign the certificate using the private key and CSR

    openssl x509 -req -days 3650 -in cert.csr -signkey cert.key -out cert.crt
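If you want to double-check what you just generated before loading it into the cluster, you can optionally inspect the certificate with OpenSSL:

    # Inspect the certificate's subject and validity period
    openssl x509 -in cert.crt -noout -subject -dates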

To create a TLS secret in MicroK8s using kubectl, use the following command:

microk8s kubectl create secret tls https --cert=cert.crt --key=cert.key

This command creates a secret named "https" containing the TLS keys for use in your Kubernetes cluster. Ensure you have the cert.crt and cert.key files in your current directory or specify full paths.
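To confirm the secret was created as expected (assuming it was created in the default namespace, as in the command above), you can list and inspect it:

# The secret should exist and be of type kubernetes.io/tls
microk8s kubectl get secret https
microk8s kubectl describe secret https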

To enable Ingress on microk8s with a default SSL certificate, issue the following command:

microk8s enable ingress:default-ssl-certificate=secret/https
  6. Enable the private registry.

microk8s enable registry
  7. Copy over your MicroK8s configuration to your Kubernetes configuration with the following command.

microk8s.kubectl config view --raw > $HOME/.kube/config
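If you also have a standalone kubectl installed on this host (an assumption; MicroK8s does not install it for you), it will now read the exported kubeconfig and can reach the cluster directly, for example:

# Standard kubectl reads $HOME/.kube/config by default
kubectl get nodes
kubectl get pods --all-namespaces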

Provisioning an IP address (optional)

Note: This step is optional and will depend on your individual access needs - for instance, if you need to access the PaaS instance from a certain IP. You can skip this step if you are installing the app locally - in that case, you can access the UI after installation via the machine's public IP address.

Note: Since MetalLB is available as an add-on for MicroK8s, you can also run these steps while enabling add-ons for your MicroK8s cluster.

In this step, we'll provision an endpoint or IP address through which to access Apica Ascent PaaS after deploying it on MicroK8s. For this, we'll leverage MetalLB, a load-balancer implementation that uses standard routing protocols for bare-metal Kubernetes clusters. To provision an IP address, do the following:

  1. Check your local machine's IP address by running the ifconfig command, as shown below.

     ifconfig
     wlp60s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
     inet 192.168.1.27 netmask 255.255.255.0 broadcast 192.168.1.255
  2. Enable MetalLB by running the following command.

     microk8s enable metallb
     Enabling MetalLB
     Enter each IP address range delimited by comma (e.g.'192.168.0.105-192.168.0.111'): <host-ip>-<host-ip>
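To verify that the MetalLB components came up after enabling the add-on, you can check its pods; the add-on typically deploys a controller and speaker into the metallb-system namespace:

    # MetalLB controller and speaker pods should be Running
    microk8s kubectl get pods -n metallb-system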

Note: MetalLB will not work on macOS due to network filtering that macOS applies. MetalLB might also not work if you're provisioning an EC2 instance on AWS, due to your private/public IP configuration. If you want to use it with a cloud provider, create a target group for the instance, map it to the NodePort you get from the ingress, and then attach the target group to the load balancer created for this purpose.

Installing Apica Ascent PaaS

Now that your MicroK8s environment is configured and ready, we can proceed with installing Apica Ascent PaaS on it. To install Apica Ascent PaaS using Helm, do the following:

  1. Add the Apica Ascent PaaS Helm chart to your Helm repository by running the following command.

    microk8s helm3 repo add apica-repo https://logiqai.github.io/helm-charts
  2. Update your Helm repository by running the following command.

    microk8s helm3 repo update
  3. Create a namespace on MicroK8s on which to install Apica Ascent PaaS.

    microk8s kubectl create namespace apica-ascent
  4. Prepare your values.microk8s.yaml file. You can use the starter values.yaml file we've created to configure your Apica Ascent PaaS deployment. If you need to download the file to your own machine, edit it, and then transfer it to a remote Linux server, use this command:

    scp -i /path/to/private_key.pem /path/to/local/file username@remote_host:/path/to/remote/directory

Make sure you have the necessary permissions to copy the file to the specified folder on the Linux machine. If you are not providing cloud S3 details and want to spin up an S3 bucket internally within the VM, comment out the following lines in the values.yaml file:

accessKey: <TODO: your-s3-access-key-id>
secretKey: <TODO: your-s3-secret-access-key-id>

cloudProvider: aws

s3_url: "https://s3.<TODO: aws-bucket-region>.amazonaws.com"
s3_access: <TODO: your-s3-access-key-id>
s3_secret: <TODO: your-s3-secret-access-key-id>
s3_bucket: <TODO: bucket-name>
s3_region: <TODO: bucket-region>
AWS_ACCESS_KEY_ID: <TODO: your-aws-access-key-id>
AWS_SECRET_ACCESS_KEY: <TODO: your-aws-secret-access-key-id>

And then change s3gateway to 'true':

s3gateway: true
Optionally, if you are provisioning a public IP using MetalLB, use the corresponding values.yaml file instead and run the following command:

microk8s enable metallb
Enabling MetalLB
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'): 192.168.1.27-192.168.1.27

If you are providing your own cloud S3 bucket, add the fields below to the global -> environment section of the values file, using your own values.

s3_bucket: <your-s3-bucket>
AWS_ACCESS_KEY_ID: <your-aws-access-key-id>
AWS_SECRET_ACCESS_KEY: <your-aws-secret-access-key-id>

In the global -> chart section, change s3gateway to false.

s3gateway: false

In the global -> persistence section, change storageClass as below.

storageClass: microk8s-hostpath
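Taken together, the edits above sit in the values file roughly as follows. This is a minimal sketch of only the keys mentioned in this guide, assuming the global -> environment / chart / persistence layout described above; the actual starter values file contains many more settings.

global:
  environment:
    s3_bucket: <your-s3-bucket>
    AWS_ACCESS_KEY_ID: <your-aws-access-key-id>
    AWS_SECRET_ACCESS_KEY: <your-aws-secret-access-key-id>
  chart:
    s3gateway: false
  persistence:
    storageClass: microk8s-hostpath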
  5. Install Apica Ascent PaaS using Helm, with the storage class set to microk8s-hostpath, by running the following command.

microk8s helm3 install apica-ascent -n apica-ascent --set global.persistence.storageClass=microk8s-hostpath apica-repo/apica-ascent -f values.microk8s.yaml --debug

If you see a large wall of text listing configuration values, the installation was successful - Ascent PaaS is now installed in your MicroK8s environment!
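It can take several minutes for all workloads to come up. If you want to watch the rollout progress, a command like the following helps:

# Watch pods in the apica-ascent namespace until they reach Running/Completed
microk8s kubectl get pods -n apica-ascent -w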

Spin up an internal S3 bucket using MinIO: if you are not using the cloud S3 variables in the values.yaml file and want to create an internal S3 bucket, create an s3-batch.yaml file and execute the batch job below to spin up the bucket using MinIO.

Create the s3-batch.yaml file and insert the following contents:

apiVersion: batch/v1
kind: Job
metadata:
  name: s3-gateway-make-bucket-job
  namespace: apica-ascent
  labels:
    app: s3gateway-make-bucket-job
    chart: s3gateway-5.0.20
    release: apica-ascent
    heritage: Helm
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
spec:
  template:
    metadata:
      labels:
        app: s3gateway-job
        release: apica-ascent
    spec:
      restartPolicy: OnFailure
 
      volumes:
        - name: minio-configuration
          projected:
            sources:
            - configMap:
                name: s3-gateway
            - secret:
                name: s3-gateway
      serviceAccountName: "s3-gateway"
      containers:
      - name: minio-mc
        image: "minio/mc:RELEASE.2020-03-14T01-23-37Z"
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "/config/initialize"]
        env:
          - name: MINIO_ENDPOINT
            value: s3-gateway
          - name: MINIO_PORT
            value: "9000"
        volumeMounts:
          - name: minio-configuration
            mountPath: /config
        resources:
          {}

Apply the batch job:

kubectl apply -f s3-batch.yaml

Delete the Thanos pods (apica-ascent-thanos-compactor-XXXXXX and apica-ascent-thanos-storegateway-0) so they can be created again after applying s3-batch.yaml:

kubectl delete pod apica-ascent-thanos-storegateway-0 apica-ascent-thanos-compactor-XXXXXX -n apica-ascent
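You can then confirm that the Thanos pods were recreated, for example:

# Both pods should come back up and reach Running
kubectl get pods -n apica-ascent | grep thanos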

Accessing Apica Ascent PaaS

Now that Apica Ascent PaaS is installed on your MicroK8s cluster, you can visit the Apica Ascent PaaS UI by either accessing the MetalLB endpoint we defined in the pre-install steps (if you installed/configured MetalLB), or by accessing the public IP address of the instance over HTTP(S) (if you aren't utilizing MetalLB).

If you are load balancing the hosting across multiple IPs using MetalLB, do the following to access the Apica Ascent PaaS UI:

  1. Inspect the pods in your MicroK8s cluster in the apica-ascent namespace by running the following command.

    microk8s kubectl get pod -n apica-ascent
  2. Find the exact MetalLB endpoint that's serving the Apica Ascent PaaS UI by running the following command.

    microk8s kubectl get service -n apica-ascent |grep -i loadbalancer

    The above command should give you an output similar to the following.

    apica-ascent-kubernetes-ingress   LoadBalancer   10.152.183.45   192.168.1.27   80:30537/TCP,20514:30222/TCP,24224:30909/TCP,24225:31991/TCP,2514:30800/TCP,3000:32680/TCP,514:32450/TCP,7514:30267/TCP,8081:30984/TCP,9998:31425/TCP   18m
  3. Using a web browser of your choice, access the IP address shown by the load balancer service above. For example, http://192.168.1.27:80.

If you aren't utilizing MetalLB, you can access the Ascent UI simply by accessing the public IP or hostname of your machine over HTTP(S); you can utilize HTTPS by following the "enabling HTTPS" step in the "Enabling Add-Ons" section above.

You can log into Apica Ascent PaaS using the following default credentials.

  • Username: flash-admin@foo.com

  • Password: flash-password

Note: You can change the default login credentials after you've logged into the UI.

MicroK8s Networking Note: Services default to the host IP using NodePort/ClusterIP; MetalLB is enabled for explicit LoadBalancer use only, and automatic MetalLB IP assignment is disabled.

  4. Once the setup is ready, deactivate MetalLB; it is no longer needed because services of type LoadBalancer can use the host's IP, thereby designating the host as the load balancer.

microk8s disable metallb

Troubleshooting

If you have issues ingesting logs or similar, you may need to add new ingress paths (for example, as part of an image upgrade). From the CLI, list and edit the ingress:

microk8s kubectl get ingress -n <namespace>
microk8s kubectl edit ingress -n <namespace>

Copy the paths below, paste them into the ingress definition, and save.

          - path: /
            pathType: Prefix
            backend:
              service:
                name: coffee
                port:
                  number: 80
          - backend:
              service:
                name: logiq-flash
                port:
                  number: 8080
            path: /live
            pathType: Prefix
          - backend:
              service:
                name: logiq-flash
                port:
                  number: 8080
            path: /ready
            pathType: Prefix
          - backend:
              service:
                name: logiq-flash
                port:
                  number: 9999
            path: /v1/logs
            pathType: Prefix
          - backend:
              service:
                name: logiq-flash
                port:
                  number: 9999
            path: /v1/traces
            pathType: Prefix
          - backend:
              service:
                name: logiq-flash
                port:
                  number: 9999
            path: /v1/metrics
            pathType: Prefix

          - path: /v1/json_batch
            pathType: Prefix
            backend:
              service:
                name: logiq-flash
                port:
                  number: 9999
          - path: /v1/json
            pathType: Prefix
            backend:
              service:
                name: logiq-flash
                port:
                  number: 9999
          - path: /v1/tenant
            pathType: Prefix
            backend:
              service:
                name: logiq-flash
                port:
                  number: 9999
          - path: /api/traces
            pathType: Prefix
            backend:
              service:
                name: logiq-flash
                port:
                  number: 14268
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: logiq-flash-ml
                port:
                  number: 9999
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: logiq-flash-ml
                port:
                  number: 9999
          - path: /dtracing
            pathType: Prefix
            backend:
              service:
                name: logiq-flash-ml
                port:
                  number: 16686
          - path: /api/v1/receive
            pathType: Prefix
            backend:
              service:
                name: apica-ascent-thanos-receive
                port:
                  number: 19291

Kubernetes cluster is unreachable

If you see an error message indicating the Kubernetes cluster is unreachable, the MicroK8s service has stopped; simply restart it. Error text:

Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://127.0.0.1:16443/version": dial tcp 127.0.0.1:16443: connect: connection refused
helm.go:84: [debug] Get "https://127.0.0.1:16443/version": dial tcp 127.0.0.1:16443: connect: connection refused
...

Solution:

ubuntu@ip-172-31-31-72:~$ microk8s status
microk8s is not running. Use microk8s inspect for a deeper inspection.
ubuntu@ip-172-31-31-72:~$ microk8s start

Restarting the Ascent installation after a failed installation

If the Ascent installation using the supplied .yaml file fails, you must first remove the name in use. Error text:

Error: INSTALLATION FAILED: cannot re-use a name that is still in use
helm.go:84: [debug] cannot re-use a name that is still in use
helm.sh/helm/v3/pkg/action.(*Install).availableName
...

Solution:

ubuntu@ip-172-31-31-72:~$ microk8s helm3 uninstall apica-ascent -n apica-ascent
release "apica-asent" uninstalled
ubuntu@ip-172-31-31-72:~$ microk8s helm3 install apica-ascent -n apica-ascent --set global.persistence.storageClass=microk8s-hostpath apica-repo/apica-ascent -f values.microk8s.yaml --debug --timeout 10m
