The digital landscape is evolving at an unprecedented pace. Enterprises are migrating to cloud-native architectures, embracing microservices, Kubernetes, and distributed applications to stay competitive. However, this shift introduces a new set of challenges—traditional monitoring tools are struggling to keep up with the scale, complexity, and velocity of modern applications.
This is where next-generation observability platforms, like Apica’s Ascent Platform, come in. Built on Kubernetes and OpenTelemetry, Apica delivers a scalable, AI-powered, cloud-native observability solution designed to handle billions of logs, metrics, and traces in real time.
More information on Kubernetes with OpenTelemetry
In the past, traditional monitoring tools were sufficient for monolithic applications deployed on static infrastructure. However, modern applications are distributed, dynamic, and ephemeral.
Data Silos – Logs, metrics, and traces are collected separately, making root-cause analysis slow and inefficient.
Scalability Issues – Legacy tools struggle to handle high-cardinality telemetry data from microservices.
Lack of Context – Traditional APM tools focus on isolated performance metrics, failing to provide full-stack observability.
High Costs – Observability data grows exponentially, leading to excessive storage and retention costs.
Manual Effort – Engineers spend too much time managing telemetry pipelines and analyzing fragmented data.
To address these challenges, enterprises must shift to a cloud-native observability approach that is scalable, cost-efficient, and AI-driven.
Apica’s Ascent Platform is designed from the ground up to tackle modern observability challenges. Unlike traditional monitoring tools, it is built on Kubernetes, enabling infinite scalability and seamless multi-cloud deployments.
Kubernetes-Powered – Dynamically scales observability pipelines, eliminating bottlenecks.
Unified Data Store (InstaStore™) – Eliminates data silos by storing logs, metrics, traces, and events in a single repository.
ZeroStorageTax Architecture – No more expensive tiered storage; data is stored in infinitely scalable object stores (AWS S3, Azure Blob, Ceph, etc.).
AI-Driven Insights – Uses AI/ML anomaly detection, GenAI assistants, and automated root-cause analysis to accelerate issue resolution.
Multi-Cloud & Hybrid Ready – Seamlessly integrates with AWS, Azure, GCP, and on-prem environments.
Full OpenTelemetry Support – No proprietary agents needed—fully compatible with OpenTelemetry, Prometheus, Jaeger, and Loki.
As enterprises scale their applications, they need an observability platform that scales with them. Apica’s Kubernetes-native approach enables organizations to gain full-stack observability across highly distributed, multi-cloud environments.
Apica Ascent is a powerful full-stack Telemetry Data Management and Observability platform designed to streamline and optimize your entire data lifecycle: Collect, Control, Store, and Observe.
The Apica Ascent platform consolidates observability data into a single platform, focusing on (M)etrics, (E)vents, (L)ogs, and (T)races, commonly known as MELT data. This integrated approach to MELT data is crucial for efficient root cause analysis. For example, if you encounter an API performance issue represented by latency metrics, being able to drill down to the API trace and accompanying logs becomes critical for faster root cause identification. Unlike traditional observability implementations, where data sits in separate silos that don't communicate, Apica Ascent ensures a cohesive view of all MELT data, leading to faster root cause outcomes.
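The drill-down described above, from a latency signal to the trace and logs that explain it, amounts to joining telemetry records on a shared trace ID. A minimal sketch of that correlation in Python (the record shapes and field names are illustrative, not Ascent's actual schema):

```python
from collections import defaultdict

# Illustrative MELT records; field names are hypothetical, not Ascent's schema.
traces = [
    {"trace_id": "t1", "span": "GET /checkout", "duration_ms": 950},
    {"trace_id": "t2", "span": "GET /cart", "duration_ms": 40},
]
logs = [
    {"trace_id": "t1", "level": "ERROR", "msg": "payment gateway timeout"},
    {"trace_id": "t2", "level": "INFO", "msg": "cart loaded"},
]

def correlate(traces, logs, latency_threshold_ms=500):
    """Return the logs attached to every trace slower than the threshold."""
    logs_by_trace = defaultdict(list)
    for entry in logs:
        logs_by_trace[entry["trace_id"]].append(entry)
    return {
        t["trace_id"]: logs_by_trace[t["trace_id"]]
        for t in traces
        if t["duration_ms"] > latency_threshold_ms
    }

suspects = correlate(traces, logs)
# Only the slow trace t1 is returned, together with its error log.
```

When all three signal types share the trace ID, this join is a lookup rather than a cross-tool investigation, which is the point of keeping MELT data in one store.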
This makes the Ascent platform a reliable first-mile solution for consolidating MELT data within your enterprise environments. Experience a seamless, fully integrated observability solution that enhances performance and efficiency across your infrastructure.
Apica Ascent employs a unified view of your enterprise, utilizing a full-stack approach to observability data lifecycle management. By seamlessly integrating various capabilities, Apica Ascent facilitates a smoother and more effective root cause analysis process.
Apica Ascent takes pride in its commitment to security and compliance. The platform adheres to SOC 2 Type II Compliance standards and is an esteemed member of the Cloud Native Computing Foundation (CNCF).
The Apica Ascent UI is your window into your IT data, logs, and metrics, ingested from all of your data sources and converged onto a single layer. The Apica Ascent UI enables you to perform a wide range of operations, from simple uptime monitoring and error troubleshooting to capacity planning, real-time forensics, performance studies, and more.
You can access the Apica Ascent UI by logging into your Apica Ascent instance URL using your account credentials.
The navigation bar at the right side of the UI allows you to access your:
Dashboards
Queries
Alerts
Explore
Events
Rules
Settings
The following sections in this article describe the various elements of the Apica Ascent UI and their purposes.
A dashboard is a collection of visualizations and queries that you've created against your log data. You can create dashboards that house visualizations and queries for a single data source or for several at once. Everything contained within a dashboard is updated in real time.
The Dashboards page on the Apica Ascent UI lists all the dashboards you've created within Apica Ascent. Dashboards that you've favorited are marked with a yellow star icon and are also listed under the Dashboards dropdown menu for quick access in the navigation bar. The following images depict dashboards that you can create using Apica Ascent.
Apica Ascent enables you to write custom queries to analyze log data, display metrics and events, view and customize events and alerts, and create custom dashboards. The Queries page lists all of the queries you've created on Apica Ascent. You can mark queries as favorites or archive the ones not in use. Your favorite queries also appear in the drop-down menu of the Queries tab for quick access.
Apica Ascent enables you to set alerts against events, data, or metrics of interest derived from your log data. The Alerts page on the UI lists all of the alerts you've configured on Apica Ascent. You can sort and display the list of alerts by their name, message, state, and the time they were last updated or created. Depending on your user permissions within Apica Ascent, you can click an alert to view more information or reconfigure the alert as needed.
The following image depicts a typical Alerts page on the Apica Ascent UI.
The Explore page lists all of the log streams generated across your IT environment that are being ingested into Apica Ascent. The Explore page lists and categorizes logs based on Namespace, Application, ProcID, and when they were last updated. By default, logs are listed by the time they were ingested, with the most recent applications appearing at the top. You can filter log streams by namespaces, applications, and ProcIDs. You can also filter them by custom time ranges.
You can also click into a specific application or ProcID to view logs in more detail and to search through or identify patterns within your log data.
The following image depicts a typical Explore page on the Apica Ascent UI.
The Events page lists all the important events that have occurred in the Apica Ascent platform. Events are listed by their Name, Message, and the time they were created. The Events page tracks important service notifications such as service restarts and license expiry.
The Create dropdown menu enables you to create new reports, queries, dashboards, and alerts, as shown in the following image.
A function-specific modal appears based on what you select from this dropdown menu.
As enterprises scale their cloud-native applications, they need an observability platform that can keep up with dynamic workloads, high-velocity data streams, and distributed architectures. Kubernetes has emerged as the foundation for modern observability platforms because it enables infinite scalability, automated resilience, and superior resource efficiency.
Apica’s Ascent Platform, built on Kubernetes, leverages these advantages to deliver next-generation observability—one that scales on demand, ensures high availability, and optimizes infrastructure resources efficiently.
Observability data is massive and continuously growing. Logs, metrics, traces, and events are ingested at an unprecedented scale, especially in high-throughput environments like fintech, telecom, and SaaS platforms.
Traditional monitoring solutions struggle because they rely on static infrastructure, making it difficult to scale on demand. Kubernetes, on the other hand, provides:
Horizontal Scalability – Automatically scales observability workloads based on real-time ingestion rates.
Dynamic Resource Allocation – Ensures workloads receive the right amount of CPU, memory, and storage.
Event-Driven Autoscaling – Uses Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) to dynamically adjust observability workloads.
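At its core, the Horizontal Pod Autoscaler's decision is a simple ratio documented in the Kubernetes HPA algorithm: scale the replica count by how far the observed metric sits from its target. A sketch of that calculation (using ingestion rate per collector pod as a hypothetical custom metric):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA scaling rule: desired = ceil(current * observed/target)."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# 4 collector pods targeting 5,000 logs/s each, currently seeing 9,000 logs/s:
replicas = desired_replicas(4, current_metric=9_000, target_metric=5_000)
# ceil(4 * 1.8) = 8 pods; at exactly the target the count is unchanged.
```

The same formula drives scale-down: when the observed rate falls back to the target, the desired count returns to the current one and Kubernetes trims the extra pods (subject to stabilization windows).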
Auto-Scaling Observability Pipelines – Apica’s OpenTelemetry-based collectors automatically scale based on traffic volume, ensuring consistent performance.
Seamless Multi-Cluster Deployment – Apica’s platform runs across Kubernetes clusters in AWS, Azure, GCP, and on-premise for global observability.
Optimized Data Processing – High-throughput workloads are distributed across multiple nodes for maximum efficiency and minimal latency.
Result: Enterprises running Apica Ascent on Kubernetes can ingest billions of logs, traces, and metrics in real time without worrying about infrastructure limitations.
Modern enterprises operate in multi-cloud and hybrid environments, where observability data comes from Kubernetes clusters, virtual machines, serverless functions, and on-premises data centers.
Kubernetes removes infrastructure constraints by allowing observability workloads to be deployed across any cloud provider or on-prem environment, ensuring consistent visibility across all application layers.
Run observability workloads anywhere – On-prem, hybrid cloud, or multi-cloud setups.
Unified Observability Across Diverse Environments – Monitor Kubernetes, VMs, and serverless environments in a single platform.
Zero Vendor Lock-In – Apica’s platform is built on open standards (OpenTelemetry, Prometheus, Jaeger) and deployable across AWS, Azure, GCP, Oracle Cloud, and private data centers.
Scalable Resourcing – Kubernetes allows Apica to scale observability workloads for multiple customers or teams without resource contention.
Cloud-Agnostic Deployment – Apica’s observability platform runs natively across any cloud provider or on-premises Kubernetes cluster.
Unified Observability at Global Scale – Centralized data collection, analytics, and AI-driven insights across all Kubernetes environments.
Result: Enterprises gain full observability across all environments, whether running in AWS, Azure, GCP, or on-prem.
One of Kubernetes' biggest advantages is its self-healing capabilities, ensuring that observability workloads remain highly available and fault-tolerant.
Automatic Failover & Pod Recovery – Kubernetes automatically replaces failed observability agents and collectors, ensuring no gaps in monitoring.
Load Balancing for Observability Workloads – Kubernetes evenly distributes data ingestion, preventing bottlenecks in observability pipelines.
Multi-Region & Disaster Recovery Ready – Kubernetes automates failover between cloud regions, ensuring continued observability in case of outages.
Redundant Observability Agents – Apica deploys multiple OpenTelemetry Collectors to prevent data loss during failures.
AI-Driven Incident Recovery – Apica’s AI proactively detects infrastructure failures and triggers automated remediation workflows.
Built-in Kubernetes Load Balancing – Ensures efficient routing of telemetry data to optimize performance.
Result: No single point of failure, ensuring continuous observability, even during infrastructure outages.
Observability data contains sensitive business insights, requiring enterprise-grade security and compliance measures. Kubernetes provides built-in security capabilities that make it ideal for running observability platforms at scale.
RBAC (Role-Based Access Control) – Granular access control for observability workloads.
End-to-End Encryption – TLS encryption for telemetry data in transit and at rest.
Network Segmentation & Pod Security – Prevents unauthorized access to observability data.
Multi-Tenant Observability with Isolation – Ensures customers or teams have secure, isolated access to their data.
Secure Observability Pipelines – Data is encrypted at rest and in transit using TLS 1.2+ and AES encryption.
Multi-Tenant Data Isolation – RBAC ensures fine-grained access control across teams and business units.
Long-Term Retention for Compliance – Apica’s InstaStore™ data lake ensures observability data meets SOC 2, GDPR, HIPAA, and enterprise compliance standards.
Result: Enterprises gain observability at scale while ensuring full security and compliance.
Kubernetes is redefining the observability landscape, enabling platforms like Apica’s Ascent to deliver:
Infinite Scalability – Dynamically scales telemetry pipelines to handle billions of logs, traces, and metrics.
Global Observability Across Any Environment – Runs on AWS, Azure, GCP, Oracle Cloud, and on-premises Kubernetes clusters.
AI-Driven Automation & Self-Healing – Automatically detects and resolves failures, reducing operational overhead.
Cost-Optimized & Storage-Efficient – Leverages object storage (S3, Azure Blob, Ceph) to eliminate unnecessary costs.
Built-In Security & Compliance – Ensures full encryption, RBAC access control, and regulatory compliance.
Observability isn’t just about collecting logs, metrics, and traces—it’s about ensuring real-time insights, high performance, and cost-efficiency at any scale. Traditional monitoring solutions often struggle with large-scale data ingestion, leading to performance bottlenecks, slow query times, and high storage costs.
Apica’s Ascent Platform, built on Kubernetes, solves these challenges by providing infinite scalability, AI-driven optimization, and seamless multi-cloud support. With a unified data store, OpenTelemetry-native architecture, and intelligent workload management, Apica delivers unparalleled observability performance while reducing operational complexity and costs.
More on Ascent Kubernetes Integration
One of the biggest challenges in observability is data storage and retention. Traditional monitoring solutions rely on tiered storage models, leading to high costs, data fragmentation, and slow query times.
More information on Lake and InstaStore
Apica’s InstaStore™ data lake, built on Kubernetes, eliminates these limitations by providing:
Infinite scalability – Stores billions of logs, traces, and metrics without performance degradation.
ZeroStorageTax architecture – No more storage tiering, reducing storage costs by up to 60%.
Real-time data indexing – Instant query access to historical and real-time telemetry data.
Multi-cloud compatibility – Supports AWS S3, Azure Blob, Ceph, MinIO, and other object storage providers.
Single source of truth – Eliminates data silos by storing logs, metrics, traces, and events in a unified repository.
On-demand query acceleration – Uses high-speed indexing for sub-second query response times.
Long-term retention & compliance – SOC 2, GDPR, HIPAA-compliant storage for enterprise observability data.
Result: Enterprises can store, query, and analyze observability data instantly, at a fraction of the cost of traditional solutions.
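Sub-second queries over large log volumes generally rest on some form of inverted index: a map from token to the records that contain it, so a search touches only matching entries instead of scanning everything. A toy version of the idea (not InstaStore's actual data structures):

```python
from collections import defaultdict

def build_index(log_lines):
    """Map each lowercase token to the set of line numbers containing it."""
    index = defaultdict(set)
    for lineno, line in enumerate(log_lines):
        for token in line.lower().split():
            index[token].add(lineno)
    return index

def search(index, *tokens):
    """Return line numbers matching all tokens (AND semantics)."""
    sets = [index.get(t.lower(), set()) for t in tokens]
    return set.intersection(*sets) if sets else set()

logs = [
    "ERROR payment timeout after 30s",
    "INFO request served in 12ms",
    "ERROR timeout contacting inventory service",
]
idx = build_index(logs)
hits = search(idx, "error", "timeout")  # lines 0 and 2
```

Production systems layer compression, sharding, and time-based partitioning on top of this, but the lookup-instead-of-scan principle is what turns "query the data lake" into a sub-second operation.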
Observability pipelines must ingest, process, and export massive volumes of telemetry data while maintaining low latency and high efficiency. Without proper optimization, unstructured data overloads monitoring systems, leading to delays, noise, and unnecessary costs.
Apica’s Telemetry Pipeline, built on Kubernetes, solves this by:
Filtering, enriching, and transforming telemetry data in real time.
Automatically routing observability data to the most cost-effective storage backend.
Optimizing ingestion rates to reduce infrastructure costs and enhance performance.
Providing a drag-and-drop interface for managing data pipelines effortlessly.
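Conceptually, a telemetry pipeline of this kind is a chain of stage functions: filter out noise, enrich what remains, then route each record to a backend. A minimal sketch (the stage logic and backend names here are made up for illustration, not Apica's pipeline API):

```python
def filter_stage(records):
    """Drop DEBUG noise before it reaches storage."""
    return [r for r in records if r["level"] != "DEBUG"]

def enrich_stage(records, env="production"):
    """Attach deployment metadata to every record."""
    return [{**r, "env": env} for r in records]

def route_stage(records):
    """Send errors to hot storage, everything else to cheap object storage."""
    routes = {"hot": [], "object_store": []}
    for r in records:
        routes["hot" if r["level"] == "ERROR" else "object_store"].append(r)
    return routes

raw = [
    {"level": "DEBUG", "msg": "cache probe"},
    {"level": "INFO", "msg": "request ok"},
    {"level": "ERROR", "msg": "db connection refused"},
]
routed = route_stage(enrich_stage(filter_stage(raw)))
# DEBUG is dropped; the ERROR lands in "hot", the INFO in "object_store".
```

A drag-and-drop pipeline builder is essentially a visual editor for this kind of stage graph, which is why reordering or swapping stages requires no redeployment of the emitting applications.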
In modern enterprise environments, observability data is collected from thousands of microservices, virtual machines, containers, and cloud functions. Manually deploying, configuring, and maintaining OpenTelemetry agents, Fluent Bit log collectors, and Prometheus exporters is resource-intensive and error-prone.
Config drift leads to inconsistent telemetry data across environments.
Manual agent updates result in security vulnerabilities and broken data pipelines.
Lack of centralized management makes troubleshooting difficult.
Apica solves these challenges with Fleet Management, an automated system for managing OpenTelemetry collectors and other observability agents at enterprise scale.
Automated Agent Deployment – Uses Kubernetes DaemonSets and StatefulSets to deploy and manage observability agents across clusters.
Zero-Drift Configuration Management – Ensures all observability agents stay in sync with the latest configurations.
Real-Time Health Monitoring – Continuously tracks agent status, performance, and data collection efficiency.
Multi-Cloud & Hybrid Support – Deploys agents across AWS, Azure, GCP, on-prem environments, and edge locations.
Result: Enterprises eliminate manual observability agent management, ensuring consistent, reliable telemetry collection at scale.
OPENTELEMETRY TRACING AND METRICS IN ACTION
End-to-end distributed tracing for microservices: OpenTelemetry enables deep visibility into microservices interactions by capturing traces across service boundaries. This allows developers and operations teams to understand request flow, identify problematic dependencies, and detect failures in a distributed system. By leveraging OpenTelemetry’s context propagation, teams can follow a request from its origin to its termination, providing a clear picture of dependencies and bottlenecks. This improves troubleshooting efficiency, reduces the mean time to resolution (MTTR), and helps organizations build more resilient, scalable architectures. Additionally, OpenTelemetry supports integrations with distributed tracing backends such as Jaeger, Zipkin, and commercial solutions, ensuring flexibility in visualization and analysis.
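Context propagation across service boundaries is standardized by the W3C Trace Context `traceparent` header, which OpenTelemetry implements. A stdlib-only sketch of emitting and parsing that header (real services would use the OTEL SDK's propagators rather than hand-rolling this):

```python
import secrets

def make_traceparent(trace_id=None, sampled=True):
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = trace_id or secrets.token_hex(16)  # 32 hex chars
    span_id = secrets.token_hex(8)                # new span, 16 hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header):
    """Extract the pieces a downstream service needs to continue the trace."""
    version, trace_id, span_id, flags = header.split("-")
    return {"trace_id": trace_id, "parent_span_id": span_id,
            "sampled": flags == "01"}

# Service A starts a trace; service B continues it under the same trace_id,
# so a tracing backend can stitch both spans into one end-to-end request view.
outgoing = make_traceparent()
ctx = parse_traceparent(outgoing)
continued = make_traceparent(trace_id=ctx["trace_id"])
```

Because every hop carries the same trace ID while minting a fresh span ID, the backend can reconstruct the full request tree regardless of how many services the call crossed.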
Identifying latency bottlenecks in cloud-native environments: By collecting granular performance data, OpenTelemetry helps teams pinpoint where delays are occurring in an application. Whether it’s a slow database query, an overloaded service, or network latency, OpenTelemetry provides the data needed to optimize system responsiveness and improve user experience. With built-in support for metrics and histograms, OpenTelemetry allows teams to measure request duration, throughput, and error rates, enabling proactive performance tuning. Furthermore, OpenTelemetry facilitates real-time alerting on latency spikes, allowing DevOps teams to quickly diagnose and mitigate issues before they impact users. This level of insight is particularly beneficial for cloud-native applications where dynamic scaling and complex service interactions demand constant monitoring and optimization.
Collecting host and container-level metrics: OpenTelemetry provides extensive support for collecting system-level and container-level metrics, including CPU, memory, disk usage, and network statistics. This enables teams to track resource consumption across distributed environments, identify performance anomalies, and optimize infrastructure utilization. By leveraging OpenTelemetry’s support for metric aggregation and real-time monitoring, organizations can ensure their applications remain resilient under varying workloads.
Monitoring Kubernetes clusters at scale: Kubernetes environments introduce unique challenges due to their dynamic and ephemeral nature. OpenTelemetry integrates seamlessly with Kubernetes to provide real-time visibility into cluster health, pod performance, and service-to-service communications. It enables DevOps teams to monitor workload scheduling efficiency, detect failing pods, and correlate application performance with underlying infrastructure issues. By centralizing observability across multiple clusters, OpenTelemetry empowers organizations to maintain high availability and reduce operational overhead in cloud-native environments.
Unified observability for root cause analysis: OpenTelemetry provides a comprehensive approach to observability by linking logs, metrics, and traces together, enabling teams to perform in-depth root cause analysis. By correlating log events with specific traces and spans, teams can identify exactly where failures occur within a distributed system, reducing the time spent diagnosing incidents and improving mean time to resolution (MTTR). This unified observability approach ensures that developers and operators have a complete understanding of system behavior, making debugging and performance optimization more efficient.
Enriching logs with trace and span context: OpenTelemetry enhances logging by automatically injecting trace and span identifiers into log messages, allowing for precise contextualization of events. This enrichment enables teams to follow an event from initiation through completion, offering clear insights into request flow and dependencies. Additionally, integrating log correlation with tracing helps detect patterns, anomalies, and dependencies that might not be immediately visible when logs are analyzed in isolation. This capability is especially beneficial in microservices architectures, where tracking down issues across multiple services can be complex without proper log-trace correlation.
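The enrichment mechanism can be sketched with a `logging.Filter` that copies the active IDs onto each record; OpenTelemetry's logging instrumentation does this automatically, and the hard-coded context below is a stand-in for the SDK's current-span lookup:

```python
import io
import logging

# Stand-in for the SDK's current-span lookup; real code would call
# opentelemetry.trace.get_current_span() to fetch these IDs.
CURRENT_CONTEXT = {"trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
                   "span_id": "00f067aa0ba902b7"}

class TraceContextFilter(logging.Filter):
    """Copy the active trace/span IDs onto every log record."""
    def filter(self, record):
        record.trace_id = CURRENT_CONTEXT["trace_id"]
        record.span_id = CURRENT_CONTEXT["span_id"]
        return True

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    "%(levelname)s trace=%(trace_id)s span=%(span_id)s %(message)s"))
logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.addFilter(TraceContextFilter())
logger.setLevel(logging.INFO)

logger.error("gateway timeout")
line = stream.getvalue().strip()
# The emitted line now carries trace and span IDs alongside the message.
```

Once every log line carries these identifiers, a log search for an error message leads directly to the trace that produced it, which is the correlation described above.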
Capturing audit trails with OTEL logs and traces: OpenTelemetry enables organizations to create detailed audit trails by collecting logs and traces that capture user activity, API calls, and system interactions. These audit trails help organizations meet compliance requirements by providing clear, verifiable records of all system activities. By maintaining an immutable record of telemetry data, OpenTelemetry enhances accountability and security, ensuring that organizations can detect and investigate security incidents efficiently.
Detecting anomalies and unauthorized access patterns: OpenTelemetry’s advanced telemetry data collection allows security teams to analyze trends, detect anomalies, and identify unauthorized access attempts in real-time. By correlating logs, traces, and metrics, OpenTelemetry provides a holistic view of system behavior, helping teams recognize suspicious patterns, mitigate security threats, and prevent potential data breaches. This proactive security monitoring is essential for maintaining regulatory compliance and protecting sensitive data in distributed and cloud-native environments.
Defining Service Level Objectives (SLOs): Service Level Objectives (SLOs) are key performance indicators (KPIs) that define the desired reliability and performance targets for services. OpenTelemetry enables organizations to collect and analyze telemetry data that aligns with predefined SLOs, ensuring services meet business expectations. By leveraging OpenTelemetry metrics, organizations can measure service uptime, response times, and error rates, allowing teams to proactively address performance degradations before they impact end users. This approach fosters a culture of reliability engineering and helps teams adhere to Service Level Agreements (SLAs).
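An availability SLO of this kind reduces to simple arithmetic over counted events: the objective fixes an allowed failure fraction, and the error budget is whatever portion of that allowance remains. A worked sketch:

```python
def error_budget_remaining(total_requests, failed_requests, slo_target=0.999):
    """Fraction of the error budget left under an availability SLO.

    With a 99.9% target, the budget is 0.1% of all requests; returns 1.0
    when nothing has failed and 0.0 or below once the budget is exhausted.
    """
    allowed_failures = total_requests * (1 - slo_target)
    return 1 - failed_requests / allowed_failures

# 1M requests this month with 300 failures against a 99.9% target:
remaining = error_budget_remaining(1_000_000, 300)
# The budget is 1,000 failures; 300 are used, so 70% remains.
```

Teams typically alert on the burn rate of this budget rather than on raw error counts, which is what telemetry-backed SLO tracking makes possible.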
Analyzing user behavior and optimizing transactions: OpenTelemetry provides deep insights into user interactions and application workflows by capturing traces and metrics across distributed systems. By analyzing user journeys, organizations can identify friction points, optimize performance, and enhance user experience. OpenTelemetry allows businesses to track critical transactions, detect drop-offs, and correlate them with system behavior, ensuring continuous improvement. Additionally, businesses can leverage telemetry data to fine-tune application logic, allocate resources efficiently, and personalize user interactions based on real-time performance trends.
OPENTELEMETRY VS PROMETHEUS: A COMPARISON
Before OpenTelemetry, organizations struggled with multiple proprietary monitoring agents, each producing telemetry data in incompatible formats, leading to data silos, increased complexity, and a lack of correlation between logs, metrics, and traces. These challenges made it difficult to gain a comprehensive view of system health and troubleshoot issues efficiently.
OpenTelemetry unifies telemetry collection across all environments, ensuring seamless interoperability between tools and services, allowing teams to consolidate their observability strategy while reducing operational overhead and improving incident response times.
Traditional observability solutions force users into proprietary ecosystems, making migrations and integrations difficult. This often leads to increased costs, limited customization, and difficulty in adapting to evolving business needs.
OpenTelemetry decouples telemetry collection from backend storage and analysis, allowing organizations to switch observability platforms without re-instrumenting applications, ensuring greater flexibility, scalability, and long-term sustainability of their monitoring strategy.
With OpenTelemetry, organizations can eliminate redundant monitoring agents, reducing system overhead and lowering observability costs. By optimizing data collection and leveraging intelligent sampling and filtering, businesses can gain deep visibility without incurring excessive data ingestion costs. Additionally, OpenTelemetry provides the flexibility to fine-tune data collection strategies, ensuring that only the most valuable telemetry data is retained and processed.
This targeted approach not only enhances performance but also mitigates the risk of overwhelming storage and analytics systems with excessive data. By standardizing observability across an organization, OpenTelemetry helps engineering teams make data-driven decisions more effectively while controlling infrastructure spending.
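Intelligent sampling often starts with deterministic head sampling: hash the trace ID and keep a fixed fraction, so every service reaches the same keep/drop decision for a given trace and sampled traces stay complete. A sketch of the idea (OpenTelemetry's `TraceIdRatioBased` sampler works along these lines, though its exact hashing differs):

```python
import hashlib

def keep_trace(trace_id: str, sample_rate: float) -> bool:
    """Deterministically keep roughly `sample_rate` of traces by trace ID.

    Hashing the ID means every service sampling the same trace at the
    same rate agrees on the decision, so no trace is half-collected.
    """
    bucket = int(hashlib.sha256(trace_id.encode()).hexdigest(), 16) % 10_000
    return bucket < sample_rate * 10_000

# At a 10% rate the decision is stable: repeated calls always agree,
# and across many trace IDs roughly one in ten is kept.
decisions = {keep_trace("trace-abc", 0.10) for _ in range(5)}
kept = sum(keep_trace(f"trace-{i}", 0.10) for i in range(10_000))
```

Tail-based strategies go further, keeping all error or high-latency traces regardless of the ratio, but they build on the same deterministic-decision foundation.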
OpenTelemetry’s automatic instrumentation and standardized APIs make it easier for developers and SRE teams to implement observability across applications. By reducing the need for manual instrumentation, teams can accelerate deployment cycles and ensure that telemetry data is collected consistently and reliably.
This results in faster debugging, reduced MTTR (Mean Time to Resolution), and increased deployment confidence, while also enabling proactive issue detection and automated root cause analysis. With OpenTelemetry, organizations can shift from reactive troubleshooting to predictive monitoring, allowing engineering teams to optimize performance before issues escalate into major incidents.
Many industries require strict compliance with security and auditing standards, including regulations such as GDPR, HIPAA, and SOC 2. OpenTelemetry provides structured telemetry data that simplifies compliance reporting by offering detailed, real-time insights into system activity, ensuring better traceability and transparency. By capturing rich metadata within traces, metrics, and logs, OpenTelemetry enhances auditability, enabling security teams to quickly detect and respond to anomalies.
Furthermore, OpenTelemetry’s vendor-neutral approach allows organizations to centralize security monitoring across multiple platforms, ensuring consistency in compliance efforts while reducing reliance on proprietary solutions.
OpenTelemetry (OTEL) is an open-source observability framework that provides a standardized approach to collecting, processing, and exporting telemetry data—including traces, metrics, and logs—from applications and infrastructure. It is a vendor-neutral solution designed to help organizations gain deep insights into the performance, health, and behavior of their distributed systems without being locked into proprietary observability tools.
By unifying telemetry collection across different platforms, programming languages, and monitoring solutions, OpenTelemetry simplifies instrumentation, reduces integration complexities, and enhances observability capabilities for modern cloud-native applications.
At its core, OpenTelemetry serves the following primary purposes:
Standardization of Observability Data – OTEL defines a common set of APIs, libraries, and protocols for collecting and transmitting telemetry data, ensuring that observability data is structured and consistent across different environments.
Vendor-Neutral Telemetry Collection – Unlike proprietary solutions, OpenTelemetry is not tied to a single vendor, giving users the flexibility to export data to multiple observability platforms, including Prometheus, Jaeger, Zipkin, Elasticsearch, and various commercial solutions.
Comprehensive Observability for Distributed Systems – OTEL helps organizations monitor, trace, and analyze applications running in microservices architectures, Kubernetes clusters, serverless environments, and hybrid cloud infrastructures.
Simplified Instrumentation – Developers can use OpenTelemetry’s SDKs and automatic instrumentation to collect telemetry data without manually modifying large portions of their application code.
Better Troubleshooting and Performance Optimization – By correlating traces, metrics, and logs, OTEL enables teams to detect bottlenecks, troubleshoot incidents faster, and optimize system performance proactively.
OpenTelemetry originated as a merger of two popular open-source observability projects:
OpenTracing – Focused on distributed tracing instrumentation.
OpenCensus – Provided metrics collection and tracing capabilities.
Recognizing the need for a unified observability framework, the Cloud Native Computing Foundation (CNCF) merged OpenTracing and OpenCensus into OpenTelemetry in 2019, creating a single, industry-wide standard for telemetry data collection.
2016 – OpenTracing & OpenCensus emerge as separate projects to address distributed tracing and metrics collection.
2019 – CNCF consolidates both projects into OpenTelemetry to create a single, unified standard.
2021 – OpenTelemetry tracing reaches stable release, making it production-ready.
2022 – OpenTelemetry metrics reach general availability (GA), expanding beyond tracing.
2023-Present – Work continues on log correlation, profiling, and deeper integrations with various observability platforms.
The Cloud Native Computing Foundation (CNCF), a part of the Linux Foundation, serves as the governing body for OpenTelemetry. CNCF provides:
Project oversight and funding to support OpenTelemetry’s development.
Community-driven governance, ensuring OTEL remains an open and collaborative initiative.
Integration with other CNCF projects, such as Kubernetes, Prometheus, Fluentd, and Jaeger, to enhance observability capabilities for cloud-native workloads.
CNCF’s involvement ensures OpenTelemetry remains a widely adopted, industry-backed, and continuously evolving framework. With support from major cloud providers (Google, Microsoft, AWS), observability vendors (Datadog, New Relic, Dynatrace), and enterprise technology companies, OpenTelemetry has become the de facto standard for open-source observability.
By adopting OpenTelemetry, organizations align with a future-proof, community-driven observability strategy, ensuring compatibility across cloud environments and monitoring solutions.
SETTING UP THE OPENTELEMETRY COLLECTOR
Understanding OTEL architecture: OpenTelemetry consists of multiple components, including APIs, SDKs, Collectors, and exporters. Selecting the right components depends on the architecture of your system and the telemetry data you need to collect. Organizations must assess whether they need distributed tracing, metrics, logs, or a combination of all three to achieve complete observability.
Deployment considerations: Choosing between an agent-based or sidecar deployment model affects resource utilization and scalability. OpenTelemetry provides flexible deployment options that integrate directly into microservices, Kubernetes clusters, and traditional monolithic applications.
Language-specific SDKs: OpenTelemetry provides official SDKs for multiple programming languages, including Java, Python, JavaScript, Go, .NET, and more. Choosing the correct SDK ensures seamless instrumentation of applications to capture relevant telemetry data without requiring excessive code modifications.
Automatic vs. manual instrumentation: Many OpenTelemetry SDKs support automatic instrumentation, which simplifies the collection of telemetry data by automatically instrumenting common frameworks and libraries. Manual instrumentation, on the other hand, allows developers to capture more granular details specific to their business logic, providing richer observability insights.
Configuration and customization: Each OpenTelemetry SDK offers various configuration options, such as sampling rates, exporters, and resource attributes. Understanding these settings helps optimize observability while minimizing overhead on production systems.
Role of the OpenTelemetry Collector: The OpenTelemetry Collector acts as a central hub for processing, filtering, and exporting telemetry data. It eliminates the need to send data directly from applications to multiple backends, reducing the complexity of observability pipelines.
Collector pipeline configuration: OpenTelemetry Collectors support a pipeline model consisting of receivers (data ingestion), processors (data transformation), and exporters (data forwarding). Configuring these pipelines efficiently ensures that only relevant telemetry data is retained and sent to the appropriate monitoring backends.
Scalability and performance tuning: Organizations with high-volume telemetry data must optimize Collector performance using batching, compression, and load balancing techniques. Running multiple Collector instances or deploying Collectors at the edge can enhance data aggregation efficiency while minimizing network latency.
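As a concrete illustration of the receiver–processor–exporter pipeline model described above, a minimal Collector configuration might look like the following sketch (the backend endpoint and batch settings are placeholders, not recommendations):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:                 # aggregate telemetry before export to cut network overhead
    send_batch_size: 512
    timeout: 5s

exporters:
  otlphttp:
    endpoint: https://observability.example.com   # placeholder backend URL

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Additional pipelines for metrics and logs follow the same receivers/processors/exporters shape, so one Collector can fan telemetry out to several backends.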
Understanding the differences: OpenTelemetry offers two approaches to instrumenting applications—automatic and manual instrumentation. Choosing the right approach depends on the level of detail required and the effort an organization is willing to invest.
Automatic Instrumentation: OpenTelemetry provides auto-instrumentation libraries that hook into commonly used frameworks (e.g., Spring Boot, Express, Flask, Django) to capture telemetry data without modifying application code. This is an easy way to get started and ensures coverage across key application functions with minimal effort. However, automatic instrumentation may not capture business-specific logic or custom events that organizations want to track.
Manual Instrumentation: With manual instrumentation, developers explicitly insert OpenTelemetry SDK calls into the application code. This provides precise control over what telemetry data is collected and allows capturing custom metrics, business transactions, and domain-specific spans. While more effort is required to implement, manual instrumentation results in richer observability data tailored to an organization’s needs.
Combining both approaches: Many organizations use a hybrid approach where auto-instrumentation provides baseline observability, and manual instrumentation is used to track critical business operations, unique workflows, or domain-specific logic.
Why context propagation matters: In distributed systems, requests travel through multiple services, making it difficult to correlate logs, traces, and metrics. Context propagation ensures that telemetry data remains linked throughout an entire request lifecycle, enabling effective debugging and root cause analysis.
Using Trace Context and Baggage: OpenTelemetry follows the W3C Trace Context standard, which passes unique trace identifiers across service boundaries. Additionally, baggage propagation allows attaching custom metadata to traces, which can be used for debugging or business analytics.
Instrumentation strategies: Developers need to ensure that trace context is carried through HTTP requests, gRPC calls, and message queues. OpenTelemetry SDKs provide middleware and client libraries that handle this automatically for popular frameworks and protocols.
Ensuring compatibility across environments: Organizations using multiple tracing tools should verify that OpenTelemetry context propagation integrates well with existing logging and monitoring solutions, avoiding data fragmentation.
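To make the W3C Trace Context mechanism concrete, the stdlib-only sketch below formats and parses a traceparent header of the form version-traceid-spanid-flags. In real applications the OpenTelemetry SDK's propagators handle this automatically; this is just the wire format they carry across service boundaries.

```python
# Sketch of the W3C Trace Context `traceparent` header: a version byte,
# a 16-byte trace id, an 8-byte parent span id, and trace flags, all hex-encoded.

def format_traceparent(trace_id: int, span_id: int, sampled: bool = True) -> str:
    flags = 0x01 if sampled else 0x00
    return f"00-{trace_id:032x}-{span_id:016x}-{flags:02x}"

def parse_traceparent(header: str) -> dict:
    version, trace_id, span_id, flags = header.split("-")
    return {
        "trace_id": int(trace_id, 16),
        "span_id": int(span_id, 16),
        "sampled": bool(int(flags, 16) & 0x01),
    }

# A downstream service parsing this header joins the same trace.
header = format_traceparent(trace_id=0xABC123, span_id=0x42)
```

Because every hop re-emits the header with its own span id but the same trace id, all spans in a request share one trace and can be stitched together by the backend.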
OpenTelemetry Protocol (OTLP): OTLP is the native protocol for OpenTelemetry, offering a standardized and efficient way to transmit telemetry data. It supports traces, metrics, and logs in a unified format, ensuring compatibility with a broad range of observability backends. Organizations using OTLP benefit from reduced complexity and better performance, as the protocol is optimized for high-throughput data collection.
Prometheus Exporter: OpenTelemetry integrates seamlessly with Prometheus, a widely used open-source monitoring system. The Prometheus exporter allows applications instrumented with OpenTelemetry to send metrics to Prometheus, enabling real-time monitoring and alerting. This is particularly useful for organizations leveraging Prometheus as their primary observability backend.
Jaeger and Zipkin Exporters: OpenTelemetry supports both Jaeger and Zipkin, two popular distributed tracing backends. These exporters allow organizations to continue using their existing tracing infrastructure while benefiting from OpenTelemetry’s standardized instrumentation. By enabling these exporters, teams can visualize request flows and troubleshoot latency issues effectively.
Commercial Observability Platforms: Many commercial observability platforms, such as Datadog, New Relic, and Dynatrace, support OpenTelemetry exporters. This ensures that organizations adopting OpenTelemetry can seamlessly integrate their telemetry data into these platforms without vendor lock-in.
Configuring Exporters for Seamless Data Ingestion: OpenTelemetry provides a flexible exporter configuration, allowing organizations to send telemetry data to multiple observability platforms simultaneously. This enables hybrid monitoring strategies where teams can leverage both open-source and commercial solutions for observability.
Optimizing Data Flow with the OpenTelemetry Collector: The OpenTelemetry Collector can be used as an intermediary layer to receive, process, and export telemetry data efficiently. By implementing batch processing, filtering, and data enrichment, organizations can optimize data flow while reducing unnecessary storage and processing costs.
Ensuring High Availability and Performance: When integrating OpenTelemetry with an observability backend, organizations should ensure that exporters and collectors are configured to handle high-volume telemetry data. Strategies such as load balancing, horizontal scaling, and adaptive sampling help maintain reliability while keeping infrastructure costs under control.
Security and Compliance Considerations: Organizations should implement encryption (e.g., TLS) and authentication mechanisms when exporting telemetry data to observability platforms. Ensuring secure transmission prevents unauthorized access and aligns with regulatory requirements.
Importance of consistency in observability: Ensuring that all teams follow a standardized approach to instrumentation is crucial for maintaining a reliable and actionable observability strategy. Without consistency, correlating telemetry data across services becomes challenging, leading to blind spots in monitoring and troubleshooting.
Collaborative approach to instrumentation: Organizations should establish cross-functional teams that include developers, SREs, and platform engineers to define and implement observability standards. This ensures alignment on best practices and reduces redundant or conflicting telemetry data collection.
Continuous improvement and governance: Standardization should not be a one-time effort. Organizations should regularly review and refine their observability practices to adapt to evolving business needs, new technologies, and OpenTelemetry updates.
Defining clear guidelines for telemetry data collection: Organizations should document best practices for collecting, processing, and exporting telemetry data. This includes specifying which types of data (traces, metrics, logs) should be collected for different applications and environments.
Ensuring minimal performance impact: Instrumentation policies should balance comprehensive observability with system performance. Teams should implement sampling strategies, rate limiting, and filtering mechanisms to prevent excessive data collection from impacting application performance.
Establishing ownership and accountability: Clear guidelines should specify who is responsible for instrumenting different parts of the system. Assigning ownership ensures that observability is an integral part of the development and operational lifecycle rather than an afterthought.
Automating instrumentation where possible: Using automatic instrumentation libraries and OpenTelemetry’s SDKs can help enforce consistent observability standards with minimal manual effort. Automation reduces the likelihood of human errors and ensures that new services are consistently instrumented from day one.
Consistent span naming for improved traceability: Using a structured and descriptive naming convention for spans ensures that distributed traces are easy to interpret. Naming should follow a hierarchical structure that includes service name, operation type, and key function details (e.g., order-service.db.query instead of queryDB).
Standardized metric naming for cross-team compatibility: Metric names should follow a standardized format that aligns with industry best practices. This includes using prefixes for different metric types (http_request_duration_seconds for latency metrics) and ensuring clear labels for filtering and aggregation.
Using semantic conventions: OpenTelemetry provides semantic conventions for naming spans, attributes, and metrics. Adhering to these standards improves interoperability and makes it easier to integrate OpenTelemetry data with third-party observability tools.
Documenting naming conventions for long-term consistency: Organizations should maintain a centralized documentation repository outlining agreed-upon naming conventions and examples. This ensures that new teams and developers can easily adopt and follow established best practices.
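A team could even enforce its naming convention mechanically, for instance in a pre-merge lint step. The sketch below validates span names against a hypothetical <service>.<component>.<operation> pattern; the pattern itself is a team convention for illustration, not an OpenTelemetry requirement.

```python
import re

# Hypothetical convention: lowercase "<service>.<component>.<operation>",
# e.g. "order-service.db.query" rather than an ad-hoc name like "queryDB".
SPAN_NAME_RE = re.compile(r"[a-z0-9-]+\.[a-z0-9_]+\.[a-z0-9_]+")

def valid_span_name(name: str) -> bool:
    return SPAN_NAME_RE.fullmatch(name) is not None

valid_span_name("order-service.db.query")  # conforms to the convention
valid_span_name("queryDB")                 # rejected: no hierarchy
```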
Efficient resource allocation: OpenTelemetry Collectors process a large volume of telemetry data, making it essential to allocate adequate CPU and memory resources. Organizations should assess their workloads and set appropriate limits to prevent excessive resource consumption that could degrade system performance.
Using lightweight configurations: To optimize resource usage, organizations should enable only necessary receivers, processors, and exporters. Disabling unused components minimizes CPU and memory overhead, improving overall efficiency.
Load balancing Collectors: Deploying multiple Collector instances in a load-balanced configuration helps distribute processing across nodes, reducing bottlenecks and ensuring high availability. This is particularly important for large-scale deployments handling massive telemetry data volumes.
Monitoring Collector performance: Continuously tracking Collector resource usage through built-in metrics helps teams identify performance bottlenecks and optimize configurations. Organizations can set up alerts for CPU spikes, memory saturation, and dropped telemetry events to maintain system stability.
Batch processing for efficiency: Instead of sending individual telemetry events, OpenTelemetry Collectors support batch processing to aggregate and compress data before transmission. This reduces network overhead and optimizes performance while ensuring minimal data loss.
Adaptive sampling techniques: Organizations can use head-based and tail-based sampling techniques to limit the volume of telemetry data collected without losing critical observability insights. Tail-based sampling allows prioritizing high-value traces while discarding less useful data, improving cost efficiency.
Configuring sampling rates based on workload: Setting appropriate sampling rates based on application traffic patterns prevents excessive data ingestion while retaining sufficient observability coverage. Dynamic sampling strategies can adjust rates in real-time based on system health and alert conditions.
Ensuring data integrity with intelligent filtering: Organizations can filter and enrich telemetry data within OpenTelemetry Collectors, ensuring that only relevant metrics, logs, and traces are stored. This reduces storage costs and improves the relevance of observability data for troubleshooting and performance optimization.
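Head-based ratio sampling can be sketched in a few lines. The function below makes a deterministic decision from the trace id, in the spirit of OpenTelemetry's TraceIdRatioBased sampler, so every service observing the same trace agrees on whether to keep it. This is a simplification for illustration, not the SDK's exact algorithm.

```python
# Deterministic head-based sampling: treat the low 64 bits of the trace id
# as a uniform random value and keep the trace if it falls under the ratio.
MAX_64 = 2 ** 64

def should_sample(trace_id: int, ratio: float) -> bool:
    bound = round(ratio * MAX_64)
    return (trace_id & (MAX_64 - 1)) < bound

# Every service computes the same answer for the same trace id,
# so a trace is either kept end-to-end or dropped end-to-end.
decision = should_sample(0x0123456789ABCDEF0123456789ABCDEF, 0.25)
```

Tail-based sampling, by contrast, buffers whole traces (typically in the Collector) and decides after seeing latency or error signals, which is why it can prioritize high-value traces at higher resource cost.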
Understanding the risks of exposed telemetry data: Logs and traces often contain sensitive information such as user credentials, API keys, personally identifiable information (PII), and payment details. If not properly handled, this data can be exposed in observability pipelines, leading to security breaches and compliance violations.
Implementing data masking and redaction: Organizations should establish policies for automatically redacting or masking sensitive data before it is ingested into logs or traces. OpenTelemetry allows for processors to be configured to scrub sensitive fields, ensuring that only anonymized data is transmitted.
Using attribute-based filtering: OpenTelemetry provides mechanisms to filter telemetry attributes before they reach a storage backend. By defining attribute allowlists and blocklists, teams can prevent the transmission of confidential information while preserving necessary observability data.
Enforcing encryption in transit and at rest: All telemetry data should be encrypted both in transit (e.g., using TLS) and at rest within storage systems. This ensures that intercepted data cannot be accessed by unauthorized entities.
Compliance with industry regulations: Many industries require specific security practices, such as GDPR's data minimization principle and HIPAA’s de-identification requirements. By implementing structured masking and redaction policies, organizations can align with these regulatory standards while maintaining robust observability.
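The masking and redaction policies above can be sketched as a simple attribute scrubber. The blocked key names and the card-number pattern are illustrative; in practice this logic would run inside a Collector or SDK processor before data leaves the pipeline.

```python
import re

# Illustrative policy: redact whole values for known-sensitive keys and
# mask card-number-like digit runs inside free-text values.
BLOCKED_KEYS = {"password", "api_key", "authorization"}
CARD_RE = re.compile(r"\b\d{13,16}\b")  # naive payment-card pattern

def scrub(attributes: dict) -> dict:
    clean = {}
    for key, value in attributes.items():
        if key.lower() in BLOCKED_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, str):
            clean[key] = CARD_RE.sub("[REDACTED]", value)
        else:
            clean[key] = value
    return clean

scrub({"user": "alice", "password": "hunter2", "note": "card 4111111111111111"})
```

An allowlist variant (drop everything not explicitly permitted) is safer for compliance-heavy environments, at the cost of occasionally losing useful debugging context.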
Defining access levels for different roles: Not all users need access to all telemetry data. Organizations should define clear RBAC policies that grant varying levels of access based on job responsibilities. For example, developers may only need application performance data, while security teams require access to audit logs.
Segmenting telemetry data by sensitivity: Logs, traces, and metrics can be categorized based on their sensitivity levels. By assigning access controls to different categories, organizations can prevent unauthorized personnel from accessing highly sensitive information.
Using authentication and authorization mechanisms: OpenTelemetry integrates with identity management systems to enforce authentication and authorization. Implementing Single Sign-On (SSO), multi-factor authentication (MFA), and API key restrictions ensures that only authorized users and services can access telemetry data.
Auditing and monitoring access logs: Continuous monitoring of who accesses telemetry data helps detect unauthorized access attempts. Audit logs should track all interactions with observability data, including user actions, query requests, and data exports.
Automating policy enforcement with infrastructure as code: RBAC policies should be defined in infrastructure as code (IaC) templates to ensure consistency across deployments. By automating role assignments and access restrictions, organizations can enforce security best practices at scale.
Understanding the risks of excessive instrumentation: Instrumenting every possible function, service, or transaction can introduce significant processing overhead, increasing CPU and memory consumption and impacting application performance. While observability is crucial, excessive instrumentation can slow down systems and lead to noise in telemetry data, making it harder to extract meaningful insights.
Implementing strategic instrumentation: Teams should focus on capturing key telemetry data that aligns with business and operational needs. Instead of collecting every possible trace or metric, organizations should define specific service-level objectives (SLOs) and monitor the most critical performance indicators, reducing unnecessary data collection.
Using adaptive sampling techniques: OpenTelemetry provides head-based and tail-based sampling, which allows teams to collect meaningful traces while reducing the data volume. Adaptive sampling dynamically adjusts based on traffic, ensuring visibility into important transactions without overwhelming observability pipelines.
Optimizing trace and metric retention policies: Organizations should implement retention policies that store only high-value telemetry data while discarding redundant or less critical information. This ensures that logs, traces, and metrics remain relevant and actionable while keeping storage costs manageable.
Regularly auditing telemetry data collection: Conduct periodic reviews of instrumentation policies and collected data to identify unnecessary metrics, spans, or logs that could be removed or optimized. Automating this audit process can help enforce efficient observability practices without human intervention.
The importance of unified observability: Metrics, logs, and traces serve different observability functions, but when analyzed in isolation, they provide an incomplete picture of system health. Ensuring proper correlation between these data types is critical for effective root cause analysis and performance optimization.
Implementing trace-log correlation: OpenTelemetry allows injecting trace and span identifiers into log messages, providing direct relationships between traces and log events. This makes it easier for engineers to investigate issues by linking logs to the specific traces that triggered them, reducing time spent on debugging.
Enriching metrics with trace and log context: By tagging metrics with trace identifiers and relevant log metadata, organizations can improve visibility into system-wide performance trends. This approach helps correlate spikes in error rates, latency fluctuations, and anomalous behaviors with specific transactions.
Leveraging OpenTelemetry semantic conventions: Using standardized naming conventions and attributes for spans, logs, and metrics ensures consistency across telemetry data. Following OpenTelemetry’s semantic conventions improves interoperability with various backends and enhances observability tool integrations.
Centralized observability dashboards: Organizations should aggregate and visualize logs, metrics, and traces in a unified observability platform. Tools like Grafana, Kibana, and OpenTelemetry-compatible backends enable cross-referencing telemetry data for more efficient troubleshooting and deeper insights.
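The trace-log correlation idea above can be sketched with Python's standard logging module: a filter stamps every log record with the current trace id, here held in a contextvar standing in for OpenTelemetry's active span context (the trace id value is illustrative).

```python
import contextvars
import logging

# Stand-in for the active span context that an OpenTelemetry SDK would manage.
current_trace_id = contextvars.ContextVar("trace_id", default="-")

class TraceIdFilter(logging.Filter):
    """Attach the current trace id to every log record."""
    def filter(self, record):
        record.trace_id = current_trace_id.get()
        return True

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s trace_id=%(trace_id)s %(message)s"))
handler.addFilter(TraceIdFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

current_trace_id.set("0af7651916cd43dd8448eb211c80319c")
logger.info("charge processed")  # log line now carries the trace id
```

With the trace id present in every log line, a backend can pivot from a slow trace to the exact log events it produced, and vice versa.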
Automation for Checks and Alerts
Added CI/CD pipeline support to streamline check creation and maintenance through ASM APIs.
Reduced manual effort and ensured consistency across environments through automation.
Ability to perform CRUD operations for ZebraTester, Browser, and URL checks.
Ability to create, upload, and assign ZebraTester and Browser scenarios for checks.
Ability to create and assign Email, SMS, and Webhook alert targets or alert groups.
Chrome 130 Upgrade for Browser Checks
All existing Browser checks have been upgraded from Chrome 115 to Chrome 130.
All new Browser checks run on Chrome 130.
NG Private Locations
Private locations/agents can be shared among sub-customer accounts to run checks.
Users can utilize their own CA certificates for checks in Private locations to monitor internal applications.
Apica Grafana Plugin
Upgraded the Apica Grafana plugin to version 2.0.11.
Added support for page metrics, allowing users to analyze response time for specific pages instead of entire scenario metrics.
Fixed the issue where invalid "acceptedCodes" were being accepted for URL checks in the POST /checks/url-v2 API.
Check Management
Introduced new check types: Browser, Postman, Traceroute, Mobile Device, and Compound.
Added support for the full workflow of check management: Edit, Delete, Clone, and Run Check.
Added support for Bulk Edit, Run, and Delete checks.
Inclusion and exclusion periods can be added in the check schedule.
Private Location and Agent Management
Introduced full self-service (Add, Edit, Delete, Reissue Certificate) for new check-type agnostic Private agents.
Private locations can be added, edited, deleted, enabled, and disabled with the ability to associate Private repositories.
A new "Private Locations" section in the UI allows easy navigation and management.
Check Analytics
Enabled alerting and reporting on checks.
Alerts and reports for a particular check can be created directly from the check page.
Screenshots taken during Browser check execution can now be viewed in the Check Analysis page.
Fixed an issue where filter criteria were not working correctly on the Checks page.
Fixed a bug where some check results were missing on the Check Details page.
Fleet Agent Limits: Enforced license-based agent limits.
Telemetry Enhancements: Added telemetry support for Fleet agents.
Fleet UI Revamp: Major UI improvements, better agent configuration management, and pagination fixes.
Fleet Summary Table: Redesigned the summary table for better usability.
Kubernetes Agent Status: Fleet UI now displays Kubernetes agent statuses.
Data Explorer Graph Enhancements: Enhanced GroupBy plotting with multiple Y-axis selection.
Widgets Enhancements: Added delete functionality and improved widget load time.
New Chart Type: Introduced Pie and HoneyComb charts for visualization.
Grafana to Data Explorer: Added Grafana JSON conversion support in Data Explorer.
GenAI Enhancements: Integrated "Log Explain" feature for enhanced log analysis in ALIVE.
Data Explorer Enhancements: Improved metrics screen and query list support.
Dashboard Optimization: Reduced load times for Data Explorer dashboards and preserved widget data across tabs.
RCA Workbench: Introduced diagnostics and debugging features based on Data Explorer widgets.
Dashboard Validation: Added validation for Data Explorer dashboard creation.
React Page Migration: Migrated Login, Setup, Signup, Reset, and Forgot Password pages to React (TSX) to reduce tech debt.
Ascent Invitation Feature: Implemented user invitation functionality via Casdoor.
Casdoor Sync: Synced Casdoor users and groups with the Ascent database.
Port Management: Resolved open TCP/UDP port issues.
Casdoor Integration: Enhanced authentication, session management, and email integration.
API Key Support: Added API key support for Casdoor in Ascent.
Casdoor Mail Service: Integrated Ascent mail service with Casdoor for email functionality.
Casdoor Signing Certificates: Added support for Casdoor signing certificates to enhance security.
GCP PubSub Plugin: Resolved file loading issues.
ResizeObserver Compatibility: Fixed compatibility issues with the latest Chrome version.
Alert Email Output: Truncated query output in triggered alert emails for better readability.
Agent Sorting: Fixed sorting by "Last Modified" in Fleet UI.
Incorrect Trace Volume: Fixed trace volume display on the Ascent landing page.
Alert Bug Fix: Resolved discrepancies in triggered alert counts displayed in the navbar.
Pipeline View: Fixed visual bugs in forwarder mapping and improved rule persistence.
Fleet Improvements: Enhanced Fleet installer, improved Kubernetes token creation, and fixed pagination issues.
Password Generation UI: Improved UI for password generation in Ascent.
Query Save Fix: Resolved unknown error when saving queries in the Freemium tier.
Moving Average Bug: Fixed AI-based query creation issues for Moving Average.
Alert UNKNOWN Issue: Resolved alerts triggering with an UNKNOWN state.
Alert Evaluation Fix: Fixed issues with alerts not evaluating after the first trigger.
SNMP Source Bug: Fixed SNMP ingest source extension bugs.
Fluent-Bit Installation: Addressed issues with Fluent-Bit post-agent manager installation.
Dual Active Packages: Resolved the issue of showing two active packages in Fleet.
Inclusion/Exclusion Fixes: Addressed syntax and period-saving issues.
Certificate Upload: Fixed certificate upload issues and removed the feature from Freemium.
Default OTEL Configuration: Updated the default OTEL configuration for metric ingestion.
Platform Validation: Enhanced platform validation in Fleet.
Fleet Assign Package Error: Fixed package assignment issues.
Disable Pattern Signature: Disabled pattern signature functionality in Freemium.
Namespace Bug: Resolved incorrect namespace selection in Data Explorer.
Fleet Advanced Search: Fixed and improved advanced search functionality.
Dark Mode Fixes: Addressed UI inconsistencies, including Waterfall statistics and button styling.
Fleet Installation: Resolved installation errors on Linux and Windows.
Kubernetes Dropdown Fix: Fixed duplicate Kubernetes entries in Fleet dropdowns.
Configuration Refresh: Addressed package reassignment and configuration refresh issues.
Documentation Updates: Updated user and technical documentation.
For further details or inquiries, please refer to the official documentation or contact our support team.
Features
NG Private Locations/Agents API Support: Added ASM API support for full self-service management of the new check-type-agnostic Private Agents, which can be grouped into Private Locations. Features include:
Creation and management of Private locations.
Creation and management of Private agents.
Configuration of Private Container repositories for Private locations to use during check runs.
Added API support for Timezone selection for Check Inclusion/Exclusion periods during UrlV2 check creation.
Extended the subscription page to include more per-check-type statistics, such as Info, Warning, Error, and Fatal check counts.
Enhanced status updates for NG Private agents.
Bug Fixes
Fixed an issue where agents in the Stockholm location were sporadically unavailable when debugging a Selenium scenario.
Fixed a bug with downloading scripts from HTTP sources for Scripted and Postman checks.
Fixed a bug where some block domain rules were not being respected in Browser checks.
Fixed the issue where the setLocation command did not work properly if it was not used at the start of a Selenium script for Browser checks.
Features
Native Support for OTEL Logs
Added native support for OTEL logs using the OTLP HTTP exporter.
Native Support for OTEL Traces
Added native support for OTEL traces using the OTLP HTTP exporter.
Introduced a new rule type for STREAMS.
Moving Average Improved
Enhanced moving average calculation using SMA (Simple Moving Average) and CMA (Cumulative Moving Average).
Added the ability to compare logs and patterns side by side across different time ranges.
Improved ALIVE summary graph highlighting based on table content for better data visualization.
Data Explorer: Tabs Scrolling and Improvement
Added scrolling functionality and various improvements to the Data Explorer tabs for better navigation.
GPT-4o-mini and Limited Model Support
Introduced support for GPT-4o, GPT-4o-mini, and GPT-3.5-Turbo.
API-Based Create Data-Explorer Dashboard
Added the ability to create Data-Explorer dashboards via API.
API-Based Create Sharable Dashboard
Enabled the creation of sharable dashboards through API.
Generic Implementation for Data Explorer Header
Made the Data Explorer header implementation generic and interdependent.
Check Management Map View
Introduced a map view for check management.
Check Management List View UI Changes
Updated the UI for the check management list view.
Data Explorer Header to Persist Data
Added functionality for the Data Explorer header to persist data.
Automatically Create Splunk Universal Forwarder for Splunk S2S Proxy
Added automatic creation of Splunk universal forwarder for Splunk S2S Proxy.
Pipeline Tab in Search View
Added a new pipeline tab in the search view.
Introduced a preview feature for code rules.
Health Check for Agents
Implemented a health check feature for agents.
Improvements
Trace/Default App Performance Improved
Enhanced the performance of the trace/default application.
New Algorithm for PS Compare and Anomalies Compare
Implemented a new algorithm for PS comparison and anomaly detection.
Widget Refresh Performance
Improved the performance of widget refresh operations.
Query API Performance for search
Enhanced the performance of the Query API for search.
Default Namespace for Logs: Syslog vs. Per-Host Namespaces
Enhanced default namespace handling for logs, distinguishing between syslog and per-host namespaces.
UI Enhancements for Pipeline and Topology View
Improved UI for pipeline and topology views.
Agent Manager Improvements for Installation Scripts
Enhanced agent manager installation scripts.
Delete Agent Cleanup
Improved the cleanup process when deleting agents.
Remove Unsupported Agents
Enhanced the process to remove unsupported agents.
Bug Fixes
Y-Axis Overlapping on View
Fixed an issue where the Y-axis was overlapping on the view in the ALIVE application.
Gauge Widget Color Render Based on Zone
Fixed the rendering of gauge widget colors based on specified zones.
Group By for Data-Explorer
Fixed the group by functionality in the Data-Explorer.
Creating Alert Creates Panic
Resolved an issue where creating an alert caused a panic.
Overview
This release introduces a host of updates to enhance user experience, streamline operations, and address known issues across Fleet, Data Explorer, the Ascent platform, and ASM+. New features and improvements focus on usability, performance, and customization, while bug fixes enhance platform stability and reliability.
OpenTelemetry
OpenTelemetry Collectors can now be configured to use the standard ingest endpoints when pushing data to Apica Ascent:
Traces - /v1/traces
Logs - /v1/logs
Metrics - /v1/metrics
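Assuming a Collector forwards to Apica Ascent, its otlphttp exporter can target the signal-specific ingest paths listed above. A minimal sketch follows; the host name is a placeholder for your Ascent environment:

```yaml
exporters:
  otlphttp:
    # Replace with your Ascent host; paths follow the standard ingest endpoints.
    traces_endpoint: https://ascent.example.com/v1/traces
    logs_endpoint: https://ascent.example.com/v1/logs
    metrics_endpoint: https://ascent.example.com/v1/metrics
```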
Telemetry Pipelines
New forwarders added for Oracle Cloud
OCI Buckets
OCI Observability & Monitoring - Logs
Freemium Support
Experience Apica Ascent with the Freemium release. Freemium is a FREE FOREVER release that includes all the capabilities of the Apica Ascent Intelligent Data Management Platform, available as a convenient SaaS offering.
Fleet Management
Telemetry Pipelines with support for platforms such as Splunk, Elasticsearch, Kafka, Datadog among others
Digital Experience Monitoring (Synthetic Monitoring for URL, Ping, Port and SSL Checks)
Log Management
Distributed Tracing
Infrastructure Monitoring
Enterprise ready with features such as SAML based SSO
ITOM integration with platforms such as PagerDuty, ServiceNow, and OpsGenie
Fleet Updates
Agent Management:
- Introduced controls for managing agents within the Fleet UI for better administration.
- A summary table was added to display agent statistics, providing quick insights.
- Enabled rules for assigning configurations or packages to agents.
- User-defined Fleet resource types (rules, alerts, agent_types, configurations, and packages) can now be imported via Git.
- Fleet REST API search endpoints now support the ?summary query parameter for result summarization.
- Expanded fleetctl CLI tool capabilities to manage Fleet API resources directly.
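For instance, the new summary flag could be appended to a search request as in the sketch below. The host and path are illustrative placeholders; only the ?summary query parameter itself comes from the release note.

```python
from urllib.parse import urlencode

# Sketch of a Fleet search URL using the new ?summary parameter.
# The host and path below are hypothetical placeholders; only the
# "summary" query parameter is taken from the release note.
def fleet_search_url(base: str, query_text: str, summary: bool = False) -> str:
    query = {"q": query_text}
    if summary:
        query["summary"] = "true"
    return f"{base}/fleet/api/search?{urlencode(query)}"

print(fleet_search_url("https://ascent.example.com", "agents", summary=True))
```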
Advanced Search and Customization:
- Users can save and retrieve advanced search queries in the Fleet Advanced Search Modal.
Data Explorer Enhancements
Improved Analytics Options:
- Added support for PostgreSQL, expanding data integration capabilities.
- Enhanced GroupBy functionality and added a “Select All” label for better data analysis.
- Enabled parameterized queries for dashboards, allowing dynamic user input for real-time customization.
- Users can edit the dashboard header query and set the dropdown type (query, enum, or text) for customization.
Visualization Improvements:
- Introduced a DenseStatusType chart to monitor active and inactive pods/instances in real time.
- Added time zone customization for chart displays.
- Optimized dark theme UI components with updated icons and design assets.
Ascent Platform Enhancements
ASM UI Enhancements:
- Integrated repository and certificate management for streamlined admin controls.
- Implemented a persistent last-view setting on the Check Management page.
General Improvements:
- Enhanced navigation with streamlined redirection flows for faster page loads.
AI/ML and GenAI Enhancements
Pattern-Signature Processing:
- Improved compaction with meaningful aliasing during pattern-signature (PS) merging.
- Enhanced performance through PS coding representation for faster processing and UI responsiveness.
- Fixed functionality for PS compaction at the backend.
GenAI Features:
- GenAI document search functionality was added to the NavBar.
Fleet UI and Backend Fixes
UI and Agent Issues:
- Resolved banner display inconsistencies during agent updates.
- Fixed errors in anonymous report generation for Grafana Alloy.
- Fixed agent-manager token refresh failures on Windows hosts.
Backend and API:
- Fixed errors preventing default configuration/package assignments via the install endpoint.
- Resolved OpAMP client failures and Windows socket exhaustion issues.
- Corrected lookup errors for agents by instance ID during OpAMP registration.
Data Explorer Fixes
Performance and Stability:
- Resolved crashes on the Data Explorer page.
- Corrected schema issues and bugs affecting *-based queries and widget calculations.
- Fixed default date-type inputs and adjusted other input defaults for smoother workflows.
UI Updates:
- Fixed CSS and overflow issues in modals and alert render pages.
General UI and Usability Fixes
Resolved usability regressions from the v3.11.2 update, improving input defaults and widget updates.
Fleet-Specific Improvements:
- Improved response times in Fleet views for queries involving large datasets.
Ascent Platform:
- Resolved permission issues for non-admin users in the Namespace endpoint.
These updates reflect our commitment to delivering a robust and user-friendly platform. As always, we value your feedback to enhance our services further.
New Features & Enhancements:
ASM Private Location Management:
Introduced the ability for Customer Administrators to Add, Edit, and Delete private locations and private repositories, giving more control over location and data management.
Added a "Private Locations" section in the UI, allowing easy navigation and management of these locations.
Implemented endpoints to Enable/Disable Private Locations, retrieve lists of private locations and repositories, and associate repositories with specific private locations.
Included a Timezone selection feature for URL V2 endpoints, enhancing configuration flexibility for global deployments.
New options for managing Private Agents with functionalities such as Adding, Editing, and Deleting agents, as well as Reissuing Certificates for enhanced security.
Check Management Enhancements:
Integrated ZebraTester within Check Management, improving performance testing capabilities.
Enhanced the Check Analytics screen for a smoother experience, including a redesigned Schedule and Severity Handling screen supporting Dark Theme.
Improved API & Documentation:
Refined API Endpoints: Added support for handling advanced configuration for missing checks, private agent solutions, and new fields in the SSL Certificate Expiration Check.
Documentation Improvements: Updated ASM API documentation to include better descriptions, missing fields, and request/response formats for enhanced usability.
Canary Release Support:
Extended Deployment APIs to support Canary Releases, ensuring more robust testing and rollouts.
Performance Optimization:
Implemented pre-fetching of access groups to reduce database calls and improve the performance of core endpoints.
Optimized Sampling Interval for tables based on time duration to reduce load times.
Agent Status Monitoring:
Added visual indicators for the Enable/Disable Status of private locations, improving overall monitoring and management.
Bug Fixes:
Check Management:
Fixed inconsistencies in the Check Results Graph to ensure linear representation of data on the X-axis.
Addressed issues with timestamp formatting when clicked from different parts of the graph, which led to parsing errors.
Fleet Management:
Corrected the behavior of agent ID and customer GUIDs during initial state setup.
Resolved problems causing memory issues in multi-cluster environments.
UI & Visual Fixes:
Eliminated scroll issues when hovering over charts.
Adjusted the Date Picker to revert to its previous version for consistency and usability.
Multi-Cluster Stability:
Fixed degradation issues occurring when one of the single tenants in a multi-cluster environment was down.
Ensured smoother data loading and resolved UI lock-up issues when handling larger datasets.
Certificate Management:
Added validation checks and improved error handling for operations like adding, editing, and deleting SSL certificates and repositories.
Fleet Management Improvements:
Fleet UI Enhancements: Redesigned Fleet management screens, including Agents and Configuration, with consolidated controls for improved usability and support for Dark Theme.
Kubernetes Environment Support: Introduced support for Kubernetes environments in Fleet, enabling better agent management and installation flexibility.
Fleet Agent Support for OpenTelemetry Collectors, Datadog and Grafana Alloy: Expanded the ecosystem of supported agents with compatibility for OpenTelemetry Collector, Datadog and Grafana Alloy agents.
Agent Liveness Status Metrics: Implemented new metrics to monitor the liveness status of each Fleet agent, ensuring better visibility and alerting.
Advanced Search for Fleet: Enhanced search capabilities with a new advanced search feature, making it easier to locate specific data and agents.
Data Explorer Enhancements:
Y-Axis Multi-Column Plotting: Enhanced Y-axis plotting, allowing for the selection and visualization of multiple columns, making complex data analysis simpler.
Time Range in Headers: Added time range indicators in the header, improving context and navigation during data exploration.
Custom Chart Integration: New customizable charts, such as Counters, are available for Data Explorer, providing enhanced visualization options.
Color Selection for Widgets: Users can now customize the colors of rendered data inside each widget on the Data Explorer page, making it easier to personalize and distinguish visual components.
Performance & Optimization:
Lazy Loading Implementation: Optimized data explorer dashboards by implementing lazy loading, improving initial load times, and reducing resource consumption.
Custom Hooks for Skipping Component Mount Calls: Enhanced performance by introducing custom React hooks to skip unnecessary component mounts, minimizing UI lag.
UI/UX Improvements:
Dark Mode Icons & Design Adjustments: Optimized icon sets and UI components for a more consistent experience in dark mode.
New Toggle & Theme Options: Added a toggle for switching between Dark and Light modes in the navbar, giving users more control over their viewing experience.
Integration & API Updates:
Grafana Integration: Implemented a converter to transform Grafana JSON into Data Explorer JSON format, simplifying the migration of dashboards.
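A converter of this kind maps panel definitions from Grafana dashboard JSON into the target widget schema. The sketch below is purely illustrative: the "panels", "title", and "type" keys exist in real Grafana dashboard JSON, but every output field ("widgets", "name", "chart") is an invented stand-in, since the Data Explorer JSON schema is not documented in these notes.

```python
# Illustrative Grafana-panel -> widget conversion. The output schema
# ("widgets", "name", "chart") is a hypothetical stand-in for the
# undocumented Data Explorer JSON format.
def convert_grafana_dashboard(grafana: dict) -> dict:
    """Map each Grafana panel to a simplified widget description."""
    chart_map = {"graph": "line", "timeseries": "line", "stat": "counter"}
    widgets = [
        {"name": panel.get("title", "untitled"),
         "chart": chart_map.get(panel.get("type"), "table")}
        for panel in grafana.get("panels", [])
    ]
    return {"widgets": widgets}

src = {"panels": [{"title": "CPU", "type": "timeseries"}]}
print(convert_grafana_dashboard(src))
```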
User Onboarding:
Improved Onboarding Experience: A dedicated onboarding screen for new users has been added to streamline the setup process and introduce key features.
Fleet Management:
Fixed issues where disconnected Fleet agents could not be deleted.
Resolved problems with log collection on Windows machines.
Addressed duplicate agent entries when reinstalling Fleet agents.
Data Explorer:
Corrected data inconsistency issues when switching between dashboards.
Fixed bugs related to alert tabs being incorrectly linked across dashboards.
Resolved intermittent behavior where data from one dashboard was erroneously stored in another.
ALIVE:
Improved the alignment and visualization of PS compare graphs and log comparisons.
Added zoom-in and enlarge options for better graph analysis.
Enhanced visual feedback for log loading during comparisons.
UI Bug Fixes:
Resolved AI button shadow and sizing issues for a more polished interface.
Corrected modal rendering in header dropdowns for persistent selections across tabs.
Gitbook AI Powered Search: Users can now ask questions directly in the search bar using Gitbook AI and receive answers instantly, enhancing accessibility to documentation and support.
Discover the latest advancements and improvements of the Apica Ascent platform. This is your go-to destination for updates on platform enhancements and new features. Explore what's new to optimize your observability and data management strategies.
Features
We have added bulk GET support for the API endpoint /checks/config. Users can now request multiple check configurations in one go, preventing issues caused by rate limiting. This is especially beneficial for those automating the synchronization of their own versions of check configurations with the Ascent platform through the ASM API.
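A bulk request might look like the sketch below. Note that the "ids" parameter name is an assumption made for illustration; only the /checks/config endpoint path is given in the release note.

```python
from urllib.parse import urlencode

# Build one bulk request for many check configurations instead of N
# single requests (which can trip rate limits). The "ids" parameter
# name is hypothetical; only the /checks/config path is from the note.
def bulk_config_url(base: str, check_ids: list[int]) -> str:
    params = {"ids": ",".join(str(i) for i in check_ids)}
    return f"{base}/checks/config?{urlencode(params)}"

# One GET for three checks rather than three separate calls.
print(bulk_config_url("https://api-asm.example.com", [101, 102, 103]))
```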
Users can now see the response body from a failed URL call in a ZebraTester check, if available, enabling identification of what error messages or content might be returned.
Bugs Fixes:
We have eliminated the inconsistencies (spikes) in NG check result metrics previously impacted by infrastructure resource constraints. This has now been rolled out to all public and dedicated check locations available.
We have fixed the bug where the location API endpoint for ZebraTester checks, GET /checks/proxysniffer/locations, was not returning all NG locations.
Expanding URLs in check results for a URLv2 check now displays readable response content.
Features:
Display the response body for failed URL calls in a ZebraTester check result.
Bug Fixes:
We have fixed a bug that prevented new Browser check scenarios from syncing with the controlling agents, effectively making them unavailable at the time of check execution.
Bug Fixes:
Fixed an issue where not all transaction names were available in ‘Edit Non-Functional Requirements (NFR)’.
Features
We have added an OTel forwarder, to be used in ADF/FLOW, that sends OTel data untouched downstream to an external OTel collector.
Bug Fixes:
Fixed an ASM+ pagination bug on the Check Analytics page.
Fixed an email delivery bug.
Improved the stability of ASM+ check data ingestion.
Features
React-grid-layout Integration: React-grid-layout has been integrated into Data Explorer for widget flexibility and condensed dashboards.
Legend Component: A separate component for displaying legends in Data Explorer widgets was implemented, which shows statistics for the data that is being rendered in the widget.
Port Management via UI: Added support for enabling and disabling k8s cluster ports via the UI.
Ping Checks: Implemented Ping Checks in Check Management.
Port Checks: Implemented Port Checks in Check Management.
Logs as a Data Source: Apica logs can now be integrated as a data source for Data Explorer and users can create/run queries on top of logs. This also introduces a new way to set alerts on the platform using logs.
File Compare Graph Y-axis Scale: The Y-axis of the File Compare graph now supports two modes: PS count and percentage.
PS Compare Anomaly Marker: Added anomaly markers for better visualization in PS Compare.
Dashboard Data Migration: Dashboard schemas are now formatted into Data Explorer format and moved from LogiqHub to ApicaHub Github Repositories.
Legacy Dashboard Converter: A converter was implemented to convert legacy Dashboard JSON to Data Explorer JSON format.
Data Explorer: Editing Controls and Breakpoints: Added editing controls and breakpoints in Data Explorer.
Scatter Chart Support: Data Explorer now supports scatter chart visualizations.
Dark Theme: Improved dark themes for multiple screens, including Logs & Insights, Dashboards, Topology, and Pipelines.
Dashboard Import in Data Explorer Format: Frontend changes were implemented to import dashboards in Data Explorer format.
Check Analytics Reports Integration: Enhanced check analytics by integrating it with reporting.
FPR Checks Consolidated Metrics: Added the ability to enrich check data at time of ingestion using a new domain-specific language (DSL).
Check Status Widget: Added custom configuration options for the check status widget.
Performance Improvements: Extended efforts to improve the performance of Data Explorer for smoother usage.
Gauge Chart Design: Modified the Gauge chart design, providing more user-configurable options and units for charts.
New Visualizations in Data Explorer: New widget types were added, including Check Status, Stat, Size, Date/Time, and Scatter chart visualizations.
Statistical Data in Legends: Introduced statistical data to the new legend component in Data Explorer.
Auto Gradient Colors: Implemented an automatic gradient color generator for area charts in Data Explorer.
Grafana Dashboard Converter: Developed a converter for Grafana dashboards to be compatible with Data Explorer.
Invalid Log Timestamp: Fixed an issue where log timestamps were invalid.
Tracing Volume Query Issue: Addressed an issue affecting tracing volume queries.
File Compare Graph Display: Resolved issues with the display of the file compare graph summary.
Data Explorer Page Crashing: Fixed errors causing the Data Explorer page to crash due to undefined values.
Widgets Deletion Handling: Implemented proper handling for widget deletion to prevent crashes.
Tab Loss on Reload: Resolved the issue where Data Explorer page tabs were lost on reload.
Chart Label Issues: Fixed chart label issues and improved chart rendering.
Overview
Introducing Apica Ascent Freemium—a FREE FOREVER version of our Intelligent Data Management Platform, now available as a convenient SaaS offering. This release democratizes intelligent observability, providing access to powerful features at no cost. Experience all the core capabilities of Ascent and take your telemetry data management to the next level.
New Features and Enhancements
Freemium Support
Added Freemium support via the Freemium license, offering free access to Ascent's capabilities.
Core Features
Fleet Management
Efficiently manage data collection with support for up to 25 agents, including OpenTelemetry Collectors for Windows, Linux, and Kubernetes.
Telemetry Pipelines
Seamlessly integrate with popular platforms, including Splunk, Elasticsearch, Kafka, and Datadog, among others.
Digital Experience Monitoring
Leverage Synthetic Monitoring for URL, Ping, Port, and SSL checks to optimize the digital experience.
Log Management
Centralize log collection, analysis, and management for improved observability.
Distributed Tracing
Gain deep insights into application performance with distributed tracing capabilities.
Infrastructure Monitoring
Monitor and manage infrastructure performance to ensure optimal operations.
Enterprise-Ready Features
Enable SAML-based Single Sign-On (SSO) for enhanced security and ease of access.
ITOM Integration
Integrate seamlessly with IT operations management platforms such as PagerDuty, ServiceNow, and OpsGenie.
Key Benefits of Ascent Freemium
Process up to 1TB/month of telemetry data, including logs, metrics, traces, events, and alerts.
Unlimited users and dashboards for collaboration and real-time data visualization.
No storage costs or credit card requirements.
Built-in AI-driven insights to enhance troubleshooting and decision-making.
Browser Compatibility
Apica Ascent Freemium is available immediately. Sign up now at https://www.apica.io/freemium and start transforming your data management experience today.
Discover the latest advancements and improvements of the Apica Ascent platform. This is your go-to destination for updates on platform enhancements and new features. Explore what's new to optimize your observability and data management strategies.
Features
NG Private Locations/Agents: New check-type agnostic Private Agents can be grouped into Private Locations with full self-serve functionality in the ASM UI portal. *ASM API support for full self-serve ability will be added during Q3.
Features include the creation and management of Private Locations and Agents, along with Private Container Repositories for Private Agent use. Private Agent install packages (.rpm and .deb) will be available with support for RHEL v8+ and Debian v10+. Private Locations can be set up to use either the Docker or Podman driver for check execution.
New Browser checks will automatically accept dialogs/modals that can pop up during a test, such as alerts, confirmations, and prompts.
New Browser checks will attach to and take control of new tabs created by the target site; that is, the Chrome WebDriver will automatically attach to new tabs that are opened during execution of a Browser check.
Added SAN/SNI options to SSL Cert Expiration and Fingerprint Validation for URL checks.
Compound checks are now available on NG locations.
Extended the ability to append the custom message specified in the _Apica_Message collection variable to Postman check result messages when the Postman script fails.
Bug Fixes
Screenshots for Browser checks were not working in new tabs or windows created by the check. This is fixed as part of the above feature that includes control of tabs and windows created by the target site.
Debug scenario of Browser checks from the Edit Check page will use the same location as the check does.
Fixed the issue where ASM UI was throwing a 500 error from Ajax while adding target value for newly created Selenium scenarios.
Fixed the sporadic non-availability of agents in the Stockholm location issue when debugging a Selenium scenario.
Enhanced the encryptapica feature in Scenarios for Browser checks. The target value of encryptapica-prefixed store commands used in Selenium scenarios will be masked across all scenario commands in the Browser check results if the specified target value appears in any other scenario command (e.g., an echo command).
Features
Display the response body for failed URL calls in ZebraTester check results, if available, to enable identification of what error messages or content might be returned.
Added support for PUT API request to add or update URL v1 checks through ASM API.
Dark Mode: A new dark mode option is now available, providing a dark-themed interface for users who prefer it.
Code Rule Preview: Users can preview and compare the data after the code rule is applied.
apicactl: Introduced a new command-line tool, available on Apica's GitHub, for API management.
Bookmark date range: Users can now bookmark specific date ranges for quick access and reference.
Data Explorer API endpoint: A new API endpoint has been added to support data explorer for Boomi OEM.
Tabs are now scrollable: Improved usability by making the Tabs scrollable, ensuring better navigation and access.
Pipeline tab inside search view: Enhances the search view so that users can see the pipeline of the selected flow.
Pipeline application filter: While creating a new pipeline, users can filter which application to show in the pipeline view.
Enhanced the Fleet agent manager installation.
Inconsistent time range when moving from ALIVE to Facet Search page: Fixed the issue where the time range was inconsistent when moving from the ALIVE to the Facet Search page.
Orphan tab from ALIVE: Resolved the issue of orphan tabs appearing from ALIVE.
Alert page issue showing undefined value: Corrected the problem where the Alert view page was showing undefined values.
Discover the latest advancements and improvements of the Apica Ascent platform. This is your go-to destination for updates on platform enhancements and new features. Explore what's new to optimize your observability and data management strategies.
Welcome to the latest update of our product! We are excited to introduce several new features and improvements designed to enhance user experiences.
Refined User Interface:
Introduced a refined User Interface across the app, enhancing user experience on the following pages:
Search
Data explorer
Topology
Pipeline
Dashboards
Query/Report editor
Implemented dynamic quick date-time selection for granular control, empowering users to specify any date range they desire, not limited to predefined time ranges.
Infrastructure with Honeycomb View:
This view offers users a bird's-eye view of all flow statuses on a single page.
Users can customize group-by options like namespace, application, and severity to analyze the flow status of the entire stack.
Flexible time range selection allows users to analyze their stack effectively.
Counter Widget in Explore Page
Added a new counter widget on the Explore page, enabling users to monitor ingested Trace volume across selected time ranges.
Query Snippets
Added Query Snippet templates, allowing users to create and insert query snippets from the settings page into the query editor using keyboard triggers/shortcuts.
ASM Plus
ASM Plus is a new offering enabling users to analyze their ASM synthetic check data in OpenTelemetry (OTel) format. Features include viewing check data as an OpenTelemetry trace, page-level check execution details in a timeseries graph, a check aggregator view with dynamic pivot table visualization, and a check analysis view offering visualizations such as Waterfall chart, Flame graph, and Graph view.
View check data as an OpenTelemetry trace in ASM Plus.
Check execution details (page level) view in a timeseries graph. Users can select different check attributes to analyze the check execution data.
Check aggregator view
Provides a dynamic pivot table for visualizing the check data in different formats such as tabular, line chart, and bar graph. We have also added a feature where users can export their pivot table data in Excel format for further analysis.
Provides a timeseries graph for various kinds of service names.
Check analysis view provides an option to view the check results data in the following visualizations:
Waterfall chart
Flamegraph
Graph view
New Forwarder for ServiceNow ITOM Event Management Connectors API:
Added a new forwarder to facilitate integration with ServiceNow ITOM Event Management Connectors API.
New Query Parameter Type - Duration List:
Introduced a new Query parameter type called Duration list, enabling users to create a dropdown of relative time durations in templatized queries.
Improved Dashboard Widgets Visualization:
Enhanced dashboard widgets visualization by smoothing the data for better presentation.
Thank you for choosing our product! We hope you enjoy these new features and improvements. Should you have any questions or feedback, please do not hesitate to contact us.
Bug Fixes:
ALIVE Graph and Summary Fixes: Corrected issues where the "select-all" function wasn't applying across all pages in the ALIVE graph and the pattern index and y-axis didn't match in the summary table.
ALIVE Page Navigation: The "psid log select-all" operation now correctly spans across all pages instead of just the current one.
Browser Compatibility: Resolved a bug where the Check analysis view was breaking specifically in old Firefox browsers.
UI and Display Fixes: Made improvements to various UI elements such as ensuring subject time intervals adhere strictly to different function screens and fixing issues with long horizontal content on the ALIVE summary page.
Query and Data Handling: Handled edge cases where errors in results could lead to spans having no data.
Performance and Functionality: Made improvements to several areas such as handling ingest ratelimiters more effectively, reducing open connections errors, and enhancing byte buffer pool performance.
Enhancements:
Dashboard Widget: Improved the overflow behavior for Alive Filter tags on the dashboard page for better visibility and usability.
User Experience: Enhanced the Add widget dialog by fixing issues related to selecting visualization types and restricting multiple API calls while using the "Add tag" feature.
Other Improvements:
Performance Optimization: Made improvements to several backend processes, including moving from ReadAll to io.Copy for better performance and memory benefits.
License Management: Fixed issues with licenses not syncing correctly and removed unknown fields from license display.
Code Maintenance: Made updates to code repositories for better version parity and improved rules page images display.
We're continuously working to enhance your experience with Apica Ascent Development, and we hope you find these updates valuable. If you have any questions or feedback, please don't hesitate to reach out to us. Thank you for choosing Apica!
Added several new ASM commands to the ASM Manage Scenarios front end. See the linked article for a complete list of supported Selenium IDE commands. All of the commands listed in that article are now available in the ASM Edit/Debug Scenarios page.
ASM users now have the option to disable automatic page breaks when creating Browser checks.
Fixed an issue in which checks were not correctly saved when an incorrect inclusion/exclusion period was used and the user was not notified of a reason. After the fix, users will be notified explicitly if their inclusion/exclusion period is incorrect.
Fixed an issue which prevented custom DNS from being used on the latest infrastructure
Fixed an issue which prevented an error message from being generated and displayed in the event that auto refresh fails to refresh a Dashboard.
Fixed an issue with certain checks which prevented Request & Response Headers from showing correctly within the Check Details page.
Fixed an issue which prevented API calls from returning correct responses when a new user’s time zone was not set
Fixed an issue which prevented spaces within the “accepted codes” field for a URLv2 check.
Updated API documentation for URL, URLv2 checks to include acceptable "secureProtocolVersion" values
Fixed an issue with Ad Hoc report generation for certain users
Fixed issues which prevented Command checks from being created or fetched via the ASM API.
Disabled the option to select "Firefox" on browser checks
Disabled location information in the API for deprecated checks
Disabled old Chrome versions when creating a Chrome check
Disabled location information in the API for deprecated Chrome versions
Disabled deprecated check types from the "create new check"
Disabled deprecated check types from the integration wizard
Disabled API endpoint for URLv1 checks
Disabled API endpoint for Command v1 checks
Disabled deprecated check types from /checks/command-v2/categories
Disabled deprecated browser version from /AnalyzeUrl
Replaced Firefox with Chrome when creating an iPhone, iPad, or Android Check in New Check Guide
Removed deprecated check versions as options from the Edit Scenario page
Disabled AppDynamics check types from the integration wizard
Added the ability to add/edit “Accepted Codes”, “Port Number” and all “Secure Protocol Versions” for URLv1 checks via the ASM API. API documentation was updated to reflect the new functionality.
Added SNI (Server Name Indication) support for URLv1 checks
Fixed an issue which prevented Power Users with limited check editing permissions from saving checks after performing edits.
ZebraTester 7.5-B release contains the following new features.
Support for Color Blindness: To improve support for vision impairments and color blindness adaptation, we have added new themes to the GUI configuration section.
Ability to change request method from the ZT GUI: Users can now change the request method from the ZT GUI. Depending on the request method, the Request Body field will be enabled and visible, or hidden.
Support user agent details from a file: Provides an option in the ZT personal settings GUI area where users can upload a JSON file containing the latest User-Agent details.
Updated Browser Agent List: The list of current browser agents has been updated.
Option to Disable Page Breaks: Users can comment out or disable a page break in the recorded session.
Variables as Page Break Names: Users can use variables when setting page-break names to make scripts more dynamic.
Add OR condition for content type validation: Users can now test a logical OR condition in content type validation.
ZebraTester Controller Pull file (.wri): Users can pull files from the execagent that were written by the "writetofile" feature. The files are pulled to the controller like any other out/err/result file.
WebSocket Extension (MS1): Extends the WebSocket implementation capabilities of ZebraTester, allowing users to conduct more comprehensive testing of WebSocket-based applications. A detailed guide on how to use the WebSocket extension has been added to the documentation folder.
In addition, the ZebraTester 7.5-B release contains the following bug fixes and improvements:
Fixed the XML extractor giving a 500 internal error in ZT scripts.
Fixed a .har file conversion issue.
Fixed a conflict when using variables in MIME type validation.
Fixed ZebraTester auto-assign behavior.
Fixed time zone lists to show the Java-standard supported time zones without the deprecated ones.
Detailed Replay logs in ZT (extended logs)
ALPN Protocol Negotiation
Page Break - Threshold Breach (Trigger & Abort)
Library Update (Update JGit library): Updated the JGit library to the latest version to leverage new features and improvements.
Fixed issues with the JavaScript editor in ZT.
NOTE: This release bumps the metric index version from 4 to 5. Upon restart, new indexes will be built and the old ones will be deleted. This process will use a significant amount of memory while the indexes are being rebuilt. It will also cause the first post-update boot to take longer than usual.
Update index version from 4 to 5.
Automatically clean up old index versions on startup to make sure outdated indexes don't clog the disk.
Fix Ubuntu 20.04 specific bug where nodes could crash when trying to clean up status files when rolling up raw shards.
Fix issue with level indexes where data was being lost when deleting metrics on levels where the metric has multiple tags.
Fix issue where level indexes were incorrectly reporting that levels existed when all underlying metrics had been removed.
Add new API endpoints, /compact_indexes and /invalidate_index_cache, that allow forcing compaction and cache invalidation for specific accounts, respectively.
Fix rollup bug where raw shards could be prematurely deleted if a rollup was aborted due to corruption.
Fix various potential memory corruption issues.
Fix issue where jlog journal data could get corrupted.
Data Fabric Release: v3.7
Bug Fixes:
ALIVE Graph and Summary Fixes: Corrected issues where the "select-all" function wasn't applying across all pages in the ALIVE graph and the pattern index and y-axis didn't match in the summary table.
ALIVE Page Navigation: The "psid log select-all" operation now correctly spans across all pages instead of just the current one.
Browser Compatibility: Resolved a bug where the Check analysis view was breaking specifically in old Firefox browsers.
UI and Display Fixes: Made improvements to various UI elements, such as ensuring time intervals are applied consistently across the different function screens and fixing issues with long horizontal content on the ALIVE summary page.
Query and Data Handling: Handled edge cases where errors in results could lead to spans having no data.
Performance and Functionality: Made improvements in several areas, such as handling ingest rate limiters more effectively, reducing open-connection errors, and enhancing byte buffer pool performance.
Enhancements:
Dashboard Widget: Improved the overflow behavior for Alive Filter tags on the dashboard page for better visibility and usability.
User Experience: Enhanced the Add widget dialog by fixing issues related to selecting visualization types and restricting multiple API calls while using the "Add tag" feature.
Other Improvements:
Performance Optimization: Made improvements to several backend processes, including moving from ReadAll to io.Copy for better performance and memory benefits.
License Management: Fixed issues with licenses not syncing correctly and removed unknown fields from license display.
Code Maintenance: Made updates to code repositories for better version parity and improved rules page images display.
We're continuously working to enhance your experience with Apica Ascent Development, and we hope you find these updates valuable. If you have any questions or feedback, please don't hesitate to reach out to us. Thank you for choosing Apica!
Welcome to the latest update of our product! We are excited to introduce several new features and improvements designed to enhance user experiences.
Refined User Interface:
Introduced a refined User Interface across the app, enhancing user experience on the following pages:
Search
Data explorer
Topology
Pipeline
Dashboards
Query/Report editor
Implemented dynamic quick date-time selection for granular control, empowering users to specify any date range they desire, not limited to predefined time ranges.
Infrastructure with Honeycomb View:
This view offers users a bird's-eye view of all flow statuses on a single page.
Users can customize group-by options like namespace, application, and severity to analyze the flow status of the entire stack.
Flexible time range selection allows users to analyze their stack effectively.
Counter Widget in Explore Page
Added a new counter widget on the Explore page, enabling users to monitor ingested Trace volume across selected time ranges.
Query Snippets
Added Query Snippet templates, allowing users to create and insert query snippets from the settings page into the query editor using keyboard triggers/shortcuts.
ASM Plus
ASM Plus is a new offering enabling users to analyze their ASM synthetic check data in OpenTelemetry (OTel) format. Features include viewing check data as an OpenTelemetry trace, page-level check execution details in a timeseries graph, a check aggregator view with dynamic pivot table visualization, and a check analysis view offering visualizations such as Waterfall chart, Flame graph, and Graph view.
View check data as an OpenTelemetry trace in ASM Plus.
Check execution details (page level) view in a timeseries graph. Users can select different check attributes to analyze the check execution data.
Check aggregator view
Provides a dynamic pivot table for visualizing check data in different formats such as tabular, line chart, and bar graph. We have also added a feature that lets users export their pivot table data to Excel for further analysis.
Provides a timeseries graph for the various service names.
Check analysis view provides an option to view the check results data in the following visualizations:
Waterfall chart
Flamegraph
Graph view
New Forwarder for ServiceNow ITOM Event Management Connectors API:
Added a new forwarder to facilitate integration with ServiceNow ITOM Event Management Connectors API.
New Query Parameter Type - Duration List:
Introduced a new Query parameter type called Duration list, enabling users to create a dropdown of relative time durations in templatized queries.
Improved Dashboard Widgets Visualization:
Enhanced dashboard widgets visualization by smoothing the data for better presentation.
Thank you for choosing our product! We hope you enjoy these new features and improvements. Should you have any questions or feedback, please do not hesitate to contact us.
Enhanced Log Analysis with Generative AI like ChatGPT and Azure OpenAI Service
We're excited to introduce the integration of Generative AI, including ChatGPT and Azure OpenAI Service, into the Explore feature. Now, you can easily select logs and engage in dynamic conversations with Generative AI to gain in-depth insights into your log data. Ask questions, request explanations, and explore your logs to gain deeper insights into your log data, making log analysis more informative and versatile.
ALIVE (Autonomous Log Interaction Visual Explorer): ALIVE is a powerful interactive visualization tool designed to help users identify issues and patterns within their applications. This innovative tool offers a rich and insightful representation of unstructured logs. Key features include:
Autonomous Log Analysis: ALIVE autonomously analyzes logs, saving users time and effort.
Interactive Visualization: Enjoy an interactive and engaging experience when exploring log data.
Flow Representation: Understand the flow of log events with clear and intuitive visualizations.
Insightful Representation: Gain deep insights into your log data through meaningful visual representations.
Multivariate Analysis: Easily pinpoint issues across vast datasets at a glance.
Scalability: ALIVE scales effortlessly to accommodate your growing data needs, ensuring consistent performance.
Improved pattern compaction (PC) workload: With this release, we've enhanced the pattern compaction feature. Now, as the number of patterns increases, selected patterns can be further aggregated into the same group to prevent pattern count explosion. This process is called compaction. We've also added a button in the ingestion settings that allows you to disable or enable pattern compaction. Please note that this feature is not intended for use with static or small pattern sets, and excessive use of the PC action can result in pattern aliasing.
Enhanced Onboarding Experience with App Tour: Introducing App Tour, designed to provide both new and returning users with a seamless introduction to our platform's key features. This guided tour ensures a smooth and intuitive navigation experience right from the start, helping you quickly become familiar with our app's functionalities and empowering you to make the most of our platform.
App tour coverage:
Explore (Data, Topology, Flows)
Dashboards
Source Extensions
Search
Rules
Forwarder
Queries
Create Rule
Moved Source Extensions, Forwarders, Rule Packs and Import dashboards to new Integrations page.
New Source Extensions
Apica Source Extension: The Apica Source Extension is a component designed to integrate with the Apica Synthetics and Load test platform. Its main purpose is to retrieve check results from the Apica platform and make them available for further processing or analysis within another system or tool.
New Forwarders help users selectively send specific log data to downstream destinations based on their filtering criteria, thereby reducing the amount of data stored and analyzed. This can reduce costs by letting users focus their resources on the most relevant and important log data, rather than storing and processing unnecessary or redundant information.
Topology view Enhancements ✨ A recent enhancement to the topology view is the inclusion of total events information. This improvement gives users a clearer understanding of the overall event activity within the system or network.
Pipeline Changes:
Pipeline Application Filtering Support
We're excited to introduce support for pipeline application filtering in this release. With this enhancement, users can efficiently filter log data when managing multiple applications, streamlining their data management processes.
Error Indicator
We've also added an error indicator to the Pipeline View. This indicator serves as a valuable visual cue when forwarding logs to destinations, helping users quickly identify and address any issues in their data flow to downstream destinations.
Faster Reports
In this release, we've significantly improved report generation speed by removing the 10-second polling delay.
Optimized Search
We've enhanced search performance by further optimizing the search query parallelism, ensuring quicker and more efficient results retrieval.
Enhanced the Search feature by adding regex-based Extract.
Get a holistic taxonomy of logs by automatically categorizing them based on their content, context, and other characteristics. This capability lets users extract and classify logs automatically, improving the speed and accuracy of log analysis. It saves time and effort by automating field extraction, eliminating the need for users to manually identify and extract fields.
Aggregate Settings Persistence
We've introduced the convenience of persistent aggregate settings. Now, when users select an aggregate, the system will remember their choice, ensuring that their selection remains consistent across sessions.
Table View for Structured Data
We've deprecated the Tree view and introduced a more user-friendly Table View for structured data derived from log lines.
Revamped Forwarder Selection UX
Experience an enhanced user interface when selecting forwarders during creation. Our redesigned forwarder selection process is more intuitive and efficient.
Log-to-Traces Proxy
We've built a versatile proxy that can seamlessly convert logs into traces. This allows logs to be stitched into multiple spans, forming a comprehensive trace for improved monitoring and analysis.
Multiple Widgets in Dashboards
Enhance your dashboards with ease. Users can now add multiple visualizations related to various queries in a single step, providing more flexibility in dashboard creation.
Distribution Flow
We've removed the distribution flow feature from the Forwarders page and made it more accessible: users can now access distribution flow directly from the Explore page.
Increased gRPC Recovery Limits
We've addressed an issue that could sometimes result in partial search results. By increasing the gRPC recovery limits, we've improved the reliability of search operations in our platform.
If you have any questions, encounter issues, or want to share your thoughts, please don't hesitate to contact our support team. Thank you for choosing Apica as your data fabric partner. We look forward to continuously improving your experience.
Topology-powered root-cause analysis.
Visualize your data streams as a topology, with drill-down into errors and warnings for faster root-cause analysis. This helps you visualize the health of your applications; users can quickly investigate issues by clicking on errors or alerts.
Data flow Pipelines.
A pipeline is a series of processes or stages through which data flows systematically and efficiently. It helps visualize the flow between nodes and the rules and filters applied to the data. It shows data inflow and outflow information, and helps identify data loss and optimize the flow of data to forwarding destinations.
Search results aggregates.
Built-in pivot table makes it easy to analyze large data sets from search queries. Summarize or visualize a set of data points for instant analysis. Common aggregation functions include Count, Value, Sum, Count Unique Values, List Unique Values, Average, Median, Min, and Max; aggregation functions summarize large datasets into a more manageable form for further analysis and visualization. Different visualization types are included (Table, Line chart, Area chart, Scatter chart, Dot chart, and Multiple pie chart).
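As a loose, self-contained analogy for how these aggregation functions reduce raw values to a summary (the sample numbers below are made up), sketched with awk:

```shell
# Hypothetical response-time samples; compute count, sum, average, min, max.
OUT=$(printf '%s\n' 120 80 200 150 | awk '
  { sum += $1; if (NR == 1 || $1 < min) min = $1; if ($1 > max) max = $1 }
  END { printf "count=%d sum=%d avg=%.1f min=%d max=%d\n", NR, sum, sum/NR, min, max }')
echo "$OUT"
```

A pivot table applies the same kinds of reductions per group (per row/column bucket) rather than over the whole result set.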
Re-designed Landing page.
Instantly get access to valuable insights when you log in to our redesigned Explore page. Users now land directly on the Explore page with quick summaries at their fingertips.
Introduced counter widgets for EPS, Total Flows, Total Events, Forwarders, and Source Extensions.
Added a new Event Statistics column, which shows counts of error events (Errors, Alerts, Critical, Emergency), Warnings, and Total events.
OSSEC HIDS Mappings
Automatically map OSSEC HIDS event severity and log messages for Linux and Windows environments.
Added support for exporting events and metrics from Apache Beam to Apica Ascent.
OpenTelemetry otel.status_code mapping: detect OpenTelemetry severity and level embeddings and map them into severity levels.
Memory and performance improvements.
Automated agent installation for Linux and Windows.
Added support for Grafana dashboard import for:
Fluent Bit
Go
Kafka
Kubernetes
Node exporter
Postgres
Prometheus
Redis
Added support for large log messages up to 1 MB
Added native support for Azure blob store for InstaStore
Added new Ingest plugins for:
Added new Forwarder plugins for:
Added support for Renaming attributes of logs before forwarding data to the destination
Added support for ingesting directly from Splunk UF (Universal Forwarder) and Splunk HF (Heavy Forwarder) using Splunk cooked mode
Added support for ingesting Splunk Metrics
Added support for Archiving alerts in InstaStore which will be available as an audit trail
Added support for Archiving events that are older than 24 hours which will be available under events history
Added new Severity Metrics to measure the logs levels within the time range.
Made Search and UI performance enhancements ✨
Distributed Tracing
Compatible with OpenTelemetry and Jaeger agents
Infinite Scale and retention on InstaStore
AnyTime - AI/ML Engine for any time-series database
Anomaly detection
Statistical data baselining and dynamic thresholds
Forecasting
Moving average
Anomaly and dynamic threshold-based alerts
Input Plugins
IBM QRadar
Splunk S2S
LogFlow Forwarders
Splunk Syslog, Splunk HEC, Splunk S2S
Datadog
Dynatrace
NewRelic
IBM QRadar
ArcSight
Syslog, Syslog CEF
New Data Sources
AWS CloudWatch Metrics (YAML)
AWS CloudWatch Metrics (SQL Insights)
LD-35 multi-variate regEx in log2Metrics
A-4 Comparator operator resulting in fewer search results
LD-30 Custom Search Indices
Namespace level Log distribution graph in Search and Logs page
Performance Improvements
DEFECT#613 Query backend not honoring the startTime sent from UI
PERF#614 Uploader Optimizations
PERF#603 Query Improvements
Bloom filter for faster search
Query interval skip improvements
PERF#602 Metadata Improvements
Metadata uploader improvements
UI#599 UI enhancements
Search and logs page optimizations
FEATURE#580 AWS improvements
AWS ECS logging improvements
Customized AWS fargate 1.4 fluent driver image
AWS Cloudwatch exporter
Support for && and || expressions in search
Event rules designer support for && and || for individual parameters
Performance and memory improvements
Support for Apache DRUID connector
Optionally deploy Grafana with the LOGIQ stack
Logs compare view to select 2 logs to be viewed side by side
Easy toggle for activating/deactivating periodic queries
Support for full data/metadata recovery on service restarts
Support for application and process/pod context in log and search views
Support for node selectors. Both taints and node selectors are supported
Support for using spot instances on EKS/AWS
Support for using S3 compatible buckets directly without a caching gateway to optimize for region optimized deployments
Multi cluster support
License management UI
Ingest configuration settings exposed in the UI
Logs page and search now have application and process/pod level contexts
Parquet with Snappy compression for data at REST
Log view supports full JSON view for ingested log data like Search view
Performance improvements for faster search, logs, and tailing
Event deduplication can reduce event data by up to 1000x at peak data rates
Deduplication of monitoring events at Namespace granularity
Separation of LOGIQ server into microservices for data ingestion, ML/UI and S3/Metadata management
Support for taints in HELM chart for more control over large-scale deployments e.g. schedule ingest pods on dedicated nodes etc.
Log tailing infrastructure using Redis switches to diskless replication/no persistence
Support for AWS Fargate, Firelens, Fluent forward Protocol
LOGIQ Fluent-bit daemon-set for K8S clusters
Data extraction via Grok patterns, compatible with Logstash Grok patterns using the Grokky library
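For illustration, a pattern of the kind this supports, built from standard Logstash pattern names (the field names after the colons are arbitrary):

```
%{IPORHOST:client} \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{URIPATHPARAM:path}" %{NUMBER:status} %{NUMBER:bytes}
```

Each `%{PATTERN:field}` pair both matches a token and extracts it into a named field for search and visualization.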
Redesigned - Elastic/Kibana like search UI that scales to infinite data from S3 compatible object store
Real-time alertable events and alerts from log data
Real-time extraction of log data facets using Grok expressions
1-Click conversion of log data events to time series visualization
Logiqctl command-line toolkit
Works with SAML users via API Key
Prometheus alert manager integration into LOGIQ alerts for unified alerting across logs and metrics
Built-in Logiq dashboard for LOGIQ cluster performance and health monitoring
Connect numerous popular data sources to the LOGIQ platform, such as Postgres, MySQL, Elasticsearch, Athena, MongoDB, Prometheus, and more.
JSON Data source for easily converting arbitrary JSON data into tables, widgets, and alerts
Namespace RBAC for log data from K8S namespaces
SAML Integration for RBAC allowing SAML Attributes to map to RBAC groups
Fully secured HELM deployment using Role, RoleBindings, ServiceAccounts and Pod Security policies for all service
Cryptographically verified JWT token for API communication
Built-in audit logging for the entire product and infrastructure
Add support for ingress with http and optionally have https
ServiceMonitor for the ingest server if Prometheus is installed
Logs modal ordering to match how developers view logs from a file
Highlight logline from search
Bug fixes for performance, graceful failure handling/recovery
Official GA of LOGIQ's complete Observability platform with support for metrics and log analytics
Scale-out and HA deployment for Kubernetes via HELM 3 chart ( https://github.com/logiqai/helm-charts )
Monitoring of time series metrics
New Log viewer
Log viewer integration with faceted search
Log time machine to go back in time to log state
logiqctl is now GA with support for log tailing, historical queries and search
Fluentd sends error logs as info - fixed with grok patterns to extract proper severity strings from incoming messages
Anomaly detection via Eventing with built-in and custom rules
Built-in UI Help with Intercom chat
Expand and collapse search facets
New AMIs for AWS Marketplace
Official GA of LOGIQ's Log Insights platform
AWS Marketplace AMI for all regions including Gov cloud regions
AWS CloudFormation 1-click deployment
Rsyslog, Syslog protocol support for data ingest via Rsyslogd, syslogd, syslog-ng, logstash, fluentd, fluentbit, docker logging syslog driver.
Built-in UI with SQL Queries, Faceted search, Alerts, Dashboards
For Debian/Ubuntu:
For RHEL/CentOS:
Verify that rsyslog is running:
Edit the rsyslog configuration file (usually /etc/rsyslog.conf or /etc/rsyslog.d/*.conf).
Open the configuration file:
Enable TCP forwarding by adding *.* @@remote-server-ip:514 to the config:
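The rule as it might appear in a drop-in file (the file name is illustrative; the double @@ selects TCP, while a single @ would mean UDP):

```
# /etc/rsyslog.d/50-forward.conf (illustrative file name)
*.* @@remote-server-ip:514
```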
Save your changes and restart rsyslog
On your server, use the logger utility to write a custom message that you can easily track, in order to verify that ingestion succeeded.
Use the logger command to trigger a custom log entry:
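A minimal sketch, assuming the standard Linux logger utility; the tag is arbitrary and chosen only to make the entry easy to find later:

```shell
# Tag the message uniquely so it is easy to filter for in Ascent.
TAG="ascent-test-$(date +%s)"
# logger may be missing in minimal environments, so guard the call.
if command -v logger >/dev/null 2>&1; then
  logger -t "$TAG" "custom ingestion test message"
fi
echo "Search Ascent for tag: $TAG"
```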
It might take a moment for this entry to appear in the Ascent platform, so if it doesn't show up immediately, wait briefly and check again.
In your Ascent platform, navigate to Explore > Logs & Insights
In the filter view, search for namespace default_namespace. Then look for your username which generated the custom log entry, and click on it.
This view should only display the custom log entry generated earlier
Go to https://opentelemetry.io/docs/collector/installation/ or https://github.com/open-telemetry/opentelemetry-collector-releases/releases/ to find the package you want to install. At the time of writing this guide, 0.115.1 is the latest release, so we'll install otelcol-contrib_0.115.1_linux_amd64.
On the machine you wish to collect metrics from, run the following 4 commands:
Deb-based
sudo apt-get update
sudo apt-get -y install wget
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.115.1/otelcol-contrib_0.115.1_linux_amd64.deb
sudo dpkg -i otelcol-contrib_0.115.1_linux_amd64.deb
RHEL-based
sudo dnf update -y
sudo dnf install -y wget
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.115.1/otelcol-contrib_0.115.1_linux_amd64.rpm
sudo rpm -ivh otelcol-contrib_0.115.1_linux_amd64.rpm
Navigate to /etc/otelcol-contrib/
Edit the file with your favourite file editor, for example: nano config.yaml
Paste the following into the config file overwriting it completely:
Replace <YOUR-ASCENT-ENV> with your Ascent domain, e.g. company.apica.io
Replace <YOUR-INGEST-TOKEN> with your Ascent ingest token, e.g. eyXXXXXXXXXXX...
Follow this guide on how to obtain your ingest token - https://docs.apica.io/integrations/overview/generating-a-secure-ingest-token
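The config block itself is not reproduced in this export. A minimal sketch of what such a host-metrics config plausibly contains, assuming metrics are scraped every 10 seconds and shipped over OTLP/HTTP; the endpoint shape and Authorization header format are assumptions, so defer to the original config where they differ:

```yaml
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}

exporters:
  otlphttp:
    # Assumed endpoint shape; use the value for your Ascent environment.
    endpoint: https://<YOUR-ASCENT-ENV>
    headers:
      Authorization: Bearer <YOUR-INGEST-TOKEN>

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [otlphttp]
```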
When you’ve finished editing the config, save it and run otelcol-contrib validate --config=config.yaml
If you get no error returned, the config file is valid.
Restart the service with sudo systemctl restart otelcol-contrib
Verify that the service is up and running correctly with sudo systemctl status otelcol-contrib
A good result should look like this:
You can also view live logs using journalctl -u otelcol-contrib -f. With the above config you would see entries every 10 seconds.
Click on the green “+ Create” button on the top navigation bar and select Query
In the dropdown menu on the left hand side, select Ascent Metrics
In the search bar, search for system_
This will present all the different system metrics that are being scraped by your OTel configuration
You can click any of the metrics directly to insert it into the query text, and hit execute to see the latest metrics.
ZebraTester 7.5-B release contains the following new features.
Support for Color Blindness: To improve accessibility for users with vision impairments and color blindness, we have added new themes to the GUI configuration section.
Ability to change request method from the ZT GUI: Users can now change the request method from the ZT GUI. Depending on the request method, the Request body field is either enabled and visible or hidden.
Support user agent details from a file: Provides an option in the ZT personal settings (GUI settings area) where users can upload a JSON file containing the latest User-Agent details.
Updated Browser Agent List: The list of current browser agents has been updated.
Option to Disable Page Breaks: Users can comment out/disable a page break in the recorded session.
Variables as Page Break Names: Users can use variables when setting page-break names to make scripts more dynamic.
Add OR condition for content type validation: Users can apply a logical OR condition in content-type validation.
ZebraTester Controller Pull file (.wri): Users can pull files written by the "writetofile" feature from the exec agent; the files are pulled to the controller like any other out/err/result file.
WebSocket Extension (MS1): Adds WebSocket support to ZebraTester, allowing users to conduct more comprehensive testing of WebSocket-based applications. A detailed guide on how to use the WebSocket extension has been added to the documentation folder.
In addition, Zebra Tester V7.5-B release contains the following bug fixes / improvements:
Fixed the XML extractor returning a 500 internal server error in ZT scripts.
Fixed a .har file conversion issue.
Fixed a conflict when using variables in MIME-type validation.
Fixed ZebraTester auto-assign.
Fixed the time zone list to show the Java standard supported time zones without the deprecated ones.
Detailed replay logs in ZT (extended logs).
ALPN protocol negotiation.
Page Break - Threshold Breach (Trigger & Abort).
Library update (JGit): Updated the JGit library to the latest version to leverage new features and improvements.
Fixed issues with the JavaScript editor in ZT.
Read previous Release Notes.
A guide on how to collect logs using OpenTelemetry on Linux from installation to ingestion
For DEB-based:
For RHEL-based:
Edit /etc/otelcol-contrib/config.yaml and replace its content with the configuration below
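The replacement content is not reproduced in this export. A minimal filelog sketch wiring up the placeholders explained below; the endpoint shape, resource keys, and start pattern are assumptions, so defer to the original config where they differ:

```yaml
receivers:
  filelog:
    include:
      - <your_log_file_path>
    multiline:
      # Assumption: entries begin with an ISO date; adjust to your format.
      line_start_pattern: '^\d{4}-\d{2}-\d{2}'
    resource:
      namespace: <namespace>
      application: <application>

exporters:
  otlphttp:
    # Assumed endpoint shape; use the value for your Apica environment.
    endpoint: https://<your_domain>
    headers:
      Authorization: Bearer <your_token>

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlphttp]
```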
Replace the following values:
<your_log_file_path>: Physical path to your log file
<your_domain>: Hostname of your Apica environment (example.apica.io)
<your_token>: Your ingest token; see how to obtain your ingest token
<namespace>: A name for high-level grouping of logs, isolating different projects, environments, or teams
<application>: A name for logs generated by a specific service or process
line_start_pattern
The above example uses a regex that matches the timestamp at the start of a log entry in order to capture the entire entry. Adjust it to match the beginning of your own log structure; see below for example entries that match this pattern.
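To see how such a start pattern groups lines into entries, this self-contained check counts entry starts (not lines) in a hypothetical two-entry log where one entry has a continuation line:

```shell
# An ISO-timestamp start pattern (illustrative; adjust to your log format).
PATTERN='^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}'
COUNT=$(printf '%s\n' \
  '2024-05-01T12:00:01 ERROR payment failed' \
  '    at handler.go:42' \
  '2024-05-01T12:00:02 INFO retry succeeded' |
  grep -cE "$PATTERN")
echo "$COUNT log entries detected"
```

The continuation line does not match the pattern, so it is folded into the preceding entry rather than starting a new one.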
When you're done with your edits, run otelcol-contrib validate --config=config.yaml to confirm the configuration is valid (it returns nothing if everything is in order)
Restart OTel to apply your changes
Assuming everything has been done correctly, your logs will start to appear in Explore > Logs & Insight on your Ascent environment. They will show up based on the namespace and application names that you set in your config.yaml file.
If you are using Apica Ascent online, this section is not relevant. Please use this section if you are setting up and configuring Apica Ascent that is either ON-PREMISE or in a PRIVATE CLOUD.
Updated the Compound Check type to run on the latest infrastructure
Added a new supported Selenium IDE command, setLocation
Added missing attributes to the response bodies of the /users and /users/{user_guid} API GET request endpoints
Added several new ASM commands to the ASM Manage Scenarios front end. See the supported Selenium IDE commands article for a complete list; all of the commands listed there are now available in the ASM Edit/Debug Scenarios page
ASM users now have the option to disable automatic page breaks when creating Browser checks.
Fixed an issue in which checks were not correctly saved when an incorrect inclusion/exclusion period was used and the user was not notified of a reason. After the fix, users will be notified explicitly if their inclusion/exclusion period is incorrect.
Fixed an issue which prevented custom DNS from being used on the latest infrastructure
Fixed an issue which prevented an error message from being generated and displayed in the event that auto refresh fails to refresh a Dashboard.
Fixed an issue which prevented Power Users who had limited editing permissions from saving checks. For instance, Power Users who could edit only the name, description, and tags of a check could not save the check after doing so. The bug fix resolved this issue.
Fixed the following API call: https://api-wpm.apicasystem.com/v3/Help/Route/GET-checks-proxysniffer-checkId-results-resultId-errorlog which was returning a 500 server error previously.
Fixed an issue with certain checks which prevented Request & Response Headers from showing correctly within the Check Details page.
Fixed an issue which prevented API calls from returning correct responses when a new user’s time zone was not set
Fixed an issue which prevented spaces from being used in the “accepted codes” field for a URLv2 check.
Updated API documentation for URL, URLv2 checks to include acceptable "secureProtocolVersion" values
Fixed an issue with Ad Hoc report generation for certain users
Fixed issues which prevented Command checks from being created or fetched via the ASM API.
Disabled the option to select "Firefox" on browser checks
Disabled location information in the API for deprecated checks
Disabled old Chrome versions when creating a Chrome check
Disabled location information in the API for deprecated Chrome versions
Disabled deprecated check types from the "create new check"
Disabled deprecated check types from the integration wizard
Disabled API endpoint for URLv1 checks
Disabled API endpoint for Command v1 checks
Disabled deprecated check types from /checks/command-v2/categories
Disabled deprecated browser version from /AnalyzeUrl
Replaced Firefox with Chrome when creating an iPhone, iPad, or Android Check in New Check Guide
Removed deprecated check versions as options from the Edit Scenario page
Disabled AppDynamics check types from the integration wizard
Read previous Release Notes, go to: Knowledge Base
Added the ability to add/edit “Accepted Codes”, “Port Number” and all “Secure Protocol Versions” for URLv1 checks via the ASM API. API documentation was updated to reflect the new functionality.
Added SNI (Server Name Indication) support for URLv1 checks
Fixed an issue which prevented Power Users with limited check editing permissions from saving checks after performing edits.
This page describes the Apica Ascent deployment on Kubernetes cluster using HELM 3 charts.
Kubernetes 1.18, 1.19 or 1.20
Helm 3.2.0+
Dynamic PV provisioner support in the underlying infrastructure
ReadWriteMany volumes for deployment scaling
Apica Ascent K8s components are made available as Helm charts. You can now run helm search repo apica-repo to see the available Helm charts. If you already added Apica Ascent's Helm repository in the past, you can get updated software releases using helm repo update.
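The repository setup can be sketched as the following shell commands. The repository URL below is a placeholder, not the actual Apica repository address; use the URL from your Apica onboarding documentation.

```shell
# Add the Apica Helm repository (replace the placeholder with the real URL)
helm repo add apica-repo <apica-helm-repo-url>

# Refresh the local chart index to pick up new releases
helm repo update

# List the charts available in the repository
helm search repo apica-repo
```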
NOTE: Namespace name cannot be more than 15 characters in length
This will create a namespace apica-ascent where we will deploy the Apica Ascent Log Insights stack.
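A minimal sketch of the namespace-creation step described above:

```shell
# Create the deployment namespace (name must be 15 characters or fewer)
kubectl create namespace apica-ascent
```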
Sample YAML files for small, medium, large cluster configurations can be downloaded at the following links.
These YAML files can be used for deployment with the -f parameter as shown below in the description.
You should now be able to log in to the Apica Ascent UI at https://ascent.my-domain.com, the domain you set in the ingress, after you have updated your DNS server to point to the Ingress controller service IP. The default login is flash-admin@foo.com and the password is flash-password. You can change these in the UI once logged in.
If you want to pass your own ingress secret, you can do so when installing the HELM chart
Depending on your requirements, you may want to host your storage in your own K8S cluster or create a bucket in a cloud provider like AWS.
Please note that cloud providers may charge data transfer costs between regions. It is important that the Apica Ascent cluster be deployed in the same region where the S3 bucket is hosted
Go to AWS IAM console and create an access key and secret key that can be used to create your bucket and manage access to the bucket for writing and reading your log files
Make sure to pass your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and give a bucket name. The S3 gateway acts as a caching gateway and helps reduce API costs.
Create a bucket in AWS S3 with a unique bucket name in the region where you plan to host the deployment.
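A gateway-mode install might look like the following sketch. The values keys under global.environment are assumptions based on the values-file structure mentioned elsewhere in this guide, and the chart and release names are illustrative; check your chart's values.yaml for the authoritative names.

```shell
# Hypothetical helm install in S3 gateway mode; the --set key paths
# are assumptions, not confirmed chart options
helm install apica-ascent apica-repo/apica-ascent -n apica-ascent \
  --set global.environment.AWS_ACCESS_KEY_ID=<access-key> \
  --set global.environment.AWS_SECRET_ACCESS_KEY=<secret-key> \
  --set global.environment.s3_bucket=<unique-bucket-name> \
  --set global.environment.s3_region=<bucket-region>
```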
[OPTIONAL]
Apica Ascent supports TLS for all ingest. We also enable non-TLS ports by default. It is however recommended that non-TLS ports not be used unless running in a secure VPC or cluster. The certificates can be provided to the cluster using K8S secrets. Replace the template sections below with your Base64 encoded secret files.
Save the secret file, e.g. logiq-certs.yaml. Proceed to install the secret in the same namespace where you want to deploy Apica Ascent. The secret can now be passed into the Apica Ascent deployment.
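A sketch of those two steps, assuming the secret file is named logiq-certs.yaml as suggested above; the values key used to reference the secret is a placeholder, so consult the chart's values.yaml for the real name.

```shell
# Install the certificates secret into the deployment namespace
kubectl apply -f logiq-certs.yaml -n apica-ascent

# Reference the secret when installing the chart
# (logiq.tls_certs_secret is an illustrative key, not a confirmed option)
helm install apica-ascent apica-repo/apica-ascent -n apica-ascent \
  --set logiq.tls_certs_secret=logiq-certs
```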
If you are planning on using a specific storage class for your volumes, you can customize it for the Apica Ascent deployment. By default, Apica Ascent uses the standard storage class.
To use an external AWS RDS Postgres database for your Apica Ascent deployment, execute the following command.
While configuring RDS, create a new parameter group that sets autoVacuum to true or the value "1", and associate this parameter group with your RDS instance.
Autovacuum automates the execution of the VACUUM and ANALYZE (to gather statistics) commands. It checks for bloated tables in the database and reclaims the space for reuse.
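The external-Postgres configuration might be expressed as in the sketch below. The key names (global.chart.postgres, global.environment.postgres_*) are assumptions modeled on the values-file sections described in this guide; verify them against your chart's values.yaml.

```shell
# Hypothetical: disable the bundled Postgres and point to an RDS instance;
# all --set key paths here are illustrative assumptions
helm install apica-ascent apica-repo/apica-ascent -n apica-ascent \
  --set global.chart.postgres=false \
  --set global.environment.postgres_host=<rds-endpoint> \
  --set global.environment.postgres_user=<db-user> \
  --set global.environment.postgres_password=<db-password> \
  --set global.environment.postgres_port=5432
```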
To use external Redis for your Apica Ascent deployment, execute the following command.
NOTE: At this time, Apica Ascent only supports connecting to a Redis cluster in a local VPC without authentication. If you are using an AWS ElastiCache instance, do not turn on encryption-in-transit or cluster mode.
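An external-Redis install might look like the following sketch; as with the other examples, the key paths are assumptions to be checked against the chart's values.yaml.

```shell
# Hypothetical: disable the bundled Redis and point to your own instance;
# the --set key paths are illustrative assumptions
helm install apica-ascent apica-repo/apica-ascent -n apica-ascent \
  --set global.chart.redis=false \
  --set global.environment.redis_host=<redis-host> \
  --set global.environment.redis_port=6379
```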
The service type configurations are exposed in values.yaml as below.
For example, if you are running on bare metal and want an external load balancer to front Apica Ascent, configure all services as NodePort.
The Apica Ascent stack deployment can be optimized using node labels and node selectors to place various components of the stack optimally
The node label logiq.ai/node above can be used to control the placement of ingest pods for log data onto ingest-optimized nodes. This allows for managing cost and instance sizing effectively.
The various nodeSelectors are defined in the globals section of the values.yaml file. In the example above, different node pools are used: ingest, common, db, cache, and sync.
The Apica Ascent stack includes Grafana as part of the deployment as an optional component. To enable Grafana in your cluster, follow the steps below
The Grafana instance is exposed at port 3000 on the ingress controller. The deployed Grafana instance uses the same credentials as the Apica Ascent UI
Apica Ascent creates an Ingress resource in the namespace it is deployed.
If and when you want to decommission the installation, use the following commands.
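The decommissioning steps typically look like the commands below; the release and namespace names assume the defaults used earlier in this guide.

```shell
# Remove the Helm release and the resources it created
helm uninstall apica-ascent -n apica-ascent

# Delete the namespace once all resources are gone
kubectl delete namespace apica-ascent
```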
If you followed the installation steps in section 3.1 - Using an AWS S3 bucket, you may want to delete the s3 bucket that was specified at deployment time.
This page describes the AWS CloudFormation based deployment for the Apica Ascent stack
Apica Ascent can be deployed on AWS in a single AMI instance in a 1-Click fashion using our CloudFormation template and the Apica Ascent AMI from the Amazon Marketplace.
All of the resources required to create and configure Apica Ascent on AWS are taken care of by the template. All you need to do is provide a few simple input parameters.
The CloudFormation template can be found in the software subscription at the AWS marketplace
After the CloudFormation template completes, it may take several minutes for the UI to become available on the AMI.
The deployment exposes the UI on an http port by default. You can install an ELB to front the UI via https. This is the recommended production setup.
Once the Apica Ascent instance is created, you can login to the instance using the below credentials
user: flash-admin@foo.com
password: flash-password
e.g. if the CloudFormation stack is called Foo and the bucket is called Bar, the password is Foo-Bar
This page describes the deployment architecture of a typical production deployment of the Apica Data Fabric.
A production deployment of the Apica Data Fabric requires the following key components
A Kubernetes cluster to run the Apica Data Fabric software components. The Kubernetes cluster should provide
A persistent storage class that is used for transient/permanent storage by the software components in the data fabric
An optional ingress controller integrated with the Kubernetes cluster to front the data fabric services. If an ingress controller is unavailable, the services in the data fabric are deployed as NodePorts that must then be programmed in an optional external ingress provider e.g. F5 etc.
An object store is where the data fabric stores its data at rest. An S3-compatible object store is required. If you are on Azure, you can take advantage of the native integration with the Azure Blob store which is not S3 compatible and needs bolt-on services.
Access to a container registry for docker images for the Apica Data Fabric.
A Postgres database that stores all of the Apica Data Fabric configurations. If an external Postgres instance is not available, the deployment can be configured to deploy a Postgres instance along with the Apica Data Fabric software components.
A Redis in-memory cache. If an external Redis instance is not available, the deployment can be configured to deploy a Redis instance along with the Apica Data Fabric software components.
Optional deployment of the K8s Horizontal Pod Autoscaler (HPA) to enable auto-scaling of Apica Data Fabric software components. If you do not use the K8s HPA, not to worry: standard scaling using kubectl scale is supported as well.
The deployment of the Apica Data Fabric is driven via a HELM chart.
The typical method of customizing the deployment is to pass a values.yaml file as a parameter to Helm when installing the Apica Data Fabric Helm chart.
The reference deployment architecture shows a hybrid deployment strategy where the Apica stack is deployed in an on-prem Kubernetes cluster but the storage is hosted in AWS S3. There could be additional variants of this where services such as Postgres, Redis, and Container registry could be in the cloud as well.
This document describes the steps needed to bring up the Apica Ascent observability stack using docker-compose for trial and demo use.
A docker-compose-based deployment should not be used for production environments.
Log aggregation, search, reporting, and live tailing
APM using built-in Prometheus or an external Prometheus
Data sources - 21 data source connectors
Alerting
Incident response - PagerDuty, ServiceNow, Slack, Email
apicactl CLI connectivity
Dashboards and visualizations
Filtering rules and rule packs
User and group management
Log flow RBAC
UI Audit trail
The first step is to get the docker-compose YAML file from the URL below.
You are now ready to bring up the Apica Ascent stack.
If you are done with your evaluation and want to cleanup your environment, please run the following command to stop and delete the Apica Ascent stack and free up the used system resources.
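The download, start, and cleanup cycle can be sketched as follows; the compose-file URL is a placeholder for the one referenced above.

```shell
# Fetch the quickstart compose file (URL placeholder)
curl -fsSL -o docker-compose.yml <quickstart-compose-url>

# Bring up the Apica Ascent stack in the background
docker-compose up -d

# When the evaluation is done, stop the stack and remove its volumes
docker-compose down -v
```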
Once the Apica Ascent server is up and running, the Apica Ascent UI can be accessed as described above on port 80 of the server running docker-compose. You will be presented with a login screen as shown below.
The quickstart compose file includes a test data tool that will generate test log data and also has a couple of dashboards that show Apica Ascent's APM capabilities.
The test log data can be viewed under the Explore page
Click on any Application and you will be taken to the Flows page with detailed logs and a search view. You can search for any log pattern; searches can also be run using regex expressions along with conditional statements using Advanced search across a time period.
Apica Ascent provides application performance monitoring (APM), which enables end-to-end monitoring for microservices architectures. Traces can be sent over port 14250 (gRPC). To view traces, navigate to the Trace page under Explore.
Select the Service and a list of traces will appear on the right-hand side of the screen. The traces have titles that correspond to the Operator selector on the search form. The traces can be further analyzed by clicking the Analyze icon, which pulls up the entire logs for the corresponding trace-id.
The Analyze icon displays all the logs for the respective trace-id in a given time range.
To view the detailed trace, you can select a specific trace instance and check details like the time taken by each service, errors during execution, and logs.
The Apica Ascent quickstart file includes Prometheus and Alertmanager services. Two APM dashboards for monitoring the quickstart environment are also included.
Ubuntu OS x64 - 20.04.6 LTS
32 vCPU
64GB RAM
500GB disk space on the root partition
The first step in this deployment is to install MicroK8s on your machine. The following instructions pertain to Debian-based Linux systems. To install MicroK8s on such systems, do the following.
Update package lists by running the following command.
Install core using Snap by running the following command.
Install MicroK8s using Snap by running the following command.
Join the group created by MicroK8s that enables uninterrupted usage of commands that require admin access by running the following command.
Create the .kube directory.
Add your current user to the group to gain access to the .kube caching directory by running the following command.
Generate your MicroK8s configuration and merge it with your Kubernetes configuration by running the following command.
Check whether MicroK8s is up and running with the following command.
MicroK8s is now installed on your machine.
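The installation steps above can be sketched as the following command sequence on a Debian-based system:

```shell
# Update package lists
sudo apt-get update

# Install the snap core runtime, then MicroK8s
sudo snap install core
sudo snap install microk8s --classic

# Join the microk8s group so admin-level commands work without sudo
sudo usermod -a -G microk8s $USER

# Create the .kube directory and take ownership of it
mkdir -p ~/.kube
sudo chown -R $USER ~/.kube

# Generate the MicroK8s kubeconfig and write it to your Kubernetes config
microk8s config > ~/.kube/config

# Verify MicroK8s is up and running
microk8s status --wait-ready
```

Note that the group membership added by usermod only takes effect after you log out and back in (or run newgrp microk8s).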
Now that we have MicroK8s up and running, let's set up your cluster and enable the necessary add-ons, such as Helm, CoreDNS, ingress, storage, and private registry. MicroK8s readily provides these add-ons, which can be enabled and disabled at any time. Most of these add-ons are pre-configured to work without any additional setup.
To enable add-ons on your MicroK8s cluster, run the following commands in succession.
Enable Helm 3.
If you get a message telling you that you have insufficient permissions, a few of the commands above that tried to interpolate your current user into the command with the $USER variable did not work. You can easily fix this by adding your user to the microk8s group, specifying the name of the user explicitly:
Enable a default storage class that allocates storage from a host directory.
Enable CoreDNS.
Enable ingress.
To enable the Ingress controller in MicroK8s, run the following command:
Enable HTTPS (optional)
How to Create a Self-Signed Certificate using OpenSSL:
Create server private key
Create certificate signing request (CSR)
Sign the certificate using the private key and CSR
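The three certificate steps can be performed with OpenSSL as follows; the common name is an example and should match the hostname you intend to serve.

```shell
# 1. Create the server private key
openssl genrsa -out cert.key 2048

# 2. Create a certificate signing request (CSR) for an example hostname
openssl req -new -key cert.key -out cert.csr -subj "/CN=ascent.my-domain.com"

# 3. Sign the certificate using the private key and CSR (valid 365 days)
openssl x509 -req -days 365 -in cert.csr -signkey cert.key -out cert.crt
```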
To create a TLS secret in MicroK8s using kubectl, use the following command:
This command creates a secret named "https" containing the TLS keys for use in your Kubernetes cluster. Ensure you have the cert.crt and cert.key files in your current directory, or specify full paths.
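A sketch of the secret-creation command, using the file names above:

```shell
# Create a TLS secret named "https" from the certificate and key files
microk8s kubectl create secret tls https --cert=cert.crt --key=cert.key
```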
To enable Ingress on microk8s with a default SSL certificate, issue the following command:
Enable private registry.
Copy over your MicroK8s configuration to your Kubernetes configuration with the following command.
To provision an IP address, do the following:
Check your local machine's IP address by running the ifconfig command, as shown below.
Enable MetalLB by running the following command.
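Enabling MetalLB typically looks like the following; the address range is an example and should be a free range on your local subnet.

```shell
# Enable MetalLB with an example address pool on the local network
microk8s enable metallb:192.168.1.20-192.168.1.30
```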
Now that your MicroK8s environment is configured and ready, we can proceed with installing Apica Ascent PaaS on it. To install Apica Ascent PaaS using Helm, do the following:
Add the Apica Ascent PaaS Helm chart to your Helm repository by running the following command.
Update your Helm repository by running the following command.
Create a namespace on MicroK8s on which to install Apica Ascent PaaS.
Make sure you have the necessary permissions to copy a file to the specified folder on the Linux machine.
In the values file, add the fields below to the global -> environment section with your own values.
In the global -> chart section, change S3gateway to false.
In the global -> persistence section, change storageClass as below.
Install Apica Ascent PaaS using Helm with the storage class set to microk8s-hostpath by running the following command.
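The install command might look like the following sketch; the chart and release names are assumptions consistent with the rest of this guide, so adjust them to match your repository.

```shell
# Install Apica Ascent PaaS with the MicroK8s hostpath storage class;
# chart/release names are illustrative assumptions
microk8s helm3 install apica-ascent apica-repo/apica-ascent \
  --namespace apica-ascent \
  --set global.persistence.storageClass=microk8s-hostpath \
  -f values.microk8s.yaml
```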
If you see a large wall of text listing configuration values, the installation was successful - Ascent PaaS is now installed in your MicroK8s environment!
Now that Apica Ascent PaaS is installed on your MicroK8s cluster, you can visit the Apica Ascent PaaS UI by either accessing the MetalLB endpoint we defined in the pre-install steps (if you installed/configured MetalLB), or by accessing the public IP address of the instance over HTTP(S) (if you aren't utilizing MetalLB).
If you are load balancing the hosting across multiple IPs using MetalLB, do the following to access the Apica Ascent PaaS UI:
Inspect the pods in your MicroK8s cluster in the apica-ascent namespace by running the following command.
Find the exact MetalLB endpoint that's serving the Apica Ascent PaaS UI by running the following command.
The above command should give you an output similar to the following.
Using a web browser of your choice, access the IP address shown by the load balancer service above. For example, http://192.168.1.27:80.
If you aren't utilizing MetalLB, you can access the Ascent UI simply by accessing the public IP or hostname of your machine over HTTP(S); you can utilize HTTPS by following the "enabling HTTPS" step in the "Enabling Add-Ons" section above.
You can log into Apica Ascent PaaS using the following default credentials.
Username: flash-admin@foo.com
Password: flash-password
If you see an error message indicating the Kubernetes cluster is unreachable, the MicroK8s service has stopped; simply restart it. Error text:
Solution:
If the Ascent installation using the supplied .yaml file fails, you must first remove the name in use. Error text:
Solution:
Apica Ascent SaaS enables you to converge all of your IT data from disparate sources, manage your telemetry data, and monitor and troubleshoot your operational data in real-time. The following guide assumes that you have signed up for Apica Ascent in the cloud. If you are not yet a registered user, please follow the steps below.
To sign up for Apica Ascent SaaS, follow these steps:
Provide your Name, your business Email Address, and Company and Country details.
Click Submit and you will receive a confirmation email to validate your contact information.
This completes the sign-up process. We'll send your Apica Ascent account credentials to your registered email shortly after you sign up.
To access your Apica Ascent SaaS instance, do the following:
Using your favorite web browser, navigate to your Apica Ascent SaaS instance URL. Your instance URL is listed in the onboarding email we send you post sign-up and will resemble https://<unique name>.apica.io/.
Enter the login credentials shared in the onboarding email.
Click Login.
You'll now have access to and can interact with the Apica Ascent UI.
Now that you have access to the Apica Ascent UI, you can start ingesting Metrics and Logs:
Here are helpful links to other "Getting Started" technical guides:
Discover the latest advancements and improvements of the Apica Ascent platform. This is your go-to destination for updates on platform enhancements and new features. Explore what's new to optimize your observability and data management strategies.
Features
Browser checks will automatically accept dialogs/modals that can pop up during a test, such as alerts, confirmations, and prompts.
Browser checks will attach to and include control of new tabs created by the target site. That is, the Chrome WebDriver will automatically attach to new tabs that are opened during execution of a Browser check.
Added SAN/SNI options to SSL Cert Expiration and Fingerprint Validation for URL checks.
Bug Fixes:
Screenshots for Browser checks were not working in new tabs or windows created by the check. This is fixed as part of the above feature that includes control of tabs and windows created by the target site.
Features
AWS X-Ray Forwarder. This allows users to send trace data to AWS X-Ray.
Alert page search. Ability to search across all existing Alerts using the central search bar within the Alert list view.
Improvements
Revamped Alert API to support multiple severities (Info, Warning, Critical, Emergency) with multiple thresholds, in the same alert.
Changed the location of Track duration in alert screens to be adjacent to the Alert condition.
All the alert destinations (Slack, PagerDuty, Mattermost, Chatwork, Zenduty, Opsgenie, Webhook, ServiceNow, and Email) will now start receiving values that triggered that specific alert.
Further UI changes for Alert Screens, Integrations Screen, and Distributed Tracing to align with the new design system.
Search improvements in ASM+. Now search by location, severity, type, and checkID are supported. Search is also a lot faster because of parallel queries.
Improved waterfall chart in ASM+ analysis view.
Improved pattern signature enable/disable usability.
Bug Fixes:
Fixed ServiceNow alert destination API errors.
Fixed Email settings page bug.
Fixed a User page bug that prevented admins from changing users' groups.
Fixed missing services in ASM+.
Brought back scenario commands and request/response headers for FPR checks in ASM+.
Others:
Deprecated Hipchat alert destination.
Bugfixes
Avoid metric index corruption by using pread(2) in jlog instead of mmap(2).
Fix the bug where a node could crash if we closed a raw shard for delete, then tried to roll up another shard before the delete ran.
Fix the bug where setting raw shard granularity values above 3w could cause data to get written with incorrect timestamps during rollups.
Fix the NNTBS rollup fetch bug where we could return no value when there was valid data to return.
Fix the bug where histogram rollup shards were sometimes not being deleted even though they were past the retention window.
Improvements
Deprecate max_ingest_age from the graphite module. Require the validation fields instead.
Change the Prometheus module to convert nan and inf records to null.
Add logging when the snowth_lmdb_tool copy operation completes.
Improve various listener error messages.
Add checks for timeouts in the data journal path where they were missing.
Improve graphite PUT error messages.
To get you up and running with the Apica Ascent PaaS, we've made Apica Ascent PaaS' Kubernetes components available as Helm Charts. To deploy Apica Ascent PaaS, you'll need access to a Kubernetes cluster and Helm 3.
Before you start deploying Apica Ascent PaaS, let's run through a few quick steps to set up your environment correctly.
Add Apica Ascent's Helm repository to your Helm repositories by running the following command.
The Helm repository you just added is named apica-repo. Whenever you install charts from this repository, ensure that you use the repository name as the prefix in your install command, as shown below.
You can now search for the Helm charts available in the repository by running the following command.
Running this command displays a list of the available Helm charts along with their details, as shown below.
If you've already added Apica Ascent's Helm repository in the past, you can update the repository by running the following command.
Create a namespace where we'll deploy Apica Ascent PaaS by running the following command.
Running the command shown above creates a namespace named apica-ascent. You can also name your namespace differently by replacing apica-ascent with a name of your choice in the command above. If you do, remember to use the same namespace for the rest of the instructions listed in this guide.
Important: Ensure that the name of the namespace is not more than 15 characters in length.
As with any other package deployed via Helm charts, you can configure your Apica Ascent PaaS deployment using a Values file. The Values file acts as the Helm chart's API, giving it access to values to populate the Helm chart's templates.
To give you a head start with configuring your Apica Ascent deployment, we've provided sample values.yaml files for small, medium, and large clusters. You can use these files as a base for configuring your Apica Ascent deployment. You can download these files from the following links.
You can pass the values.yaml file with the helm install command using the -f flag, as shown in the following example.
Now that your environment is ready, you can proceed with installing Apica Ascent PaaS in it. To install Apica Ascent PaaS, run the following command.
Running the above command installs Apica Ascent PaaS and exposes its services and UI on the ingress' IP address. Accessing the ingress' IP address in a web browser of your choice takes you to the Apica Ascent PaaS login screen, as shown in the following image.
If you haven't changed any of the admin settings in the values.yaml
file you used during deployment, you can log into the Apica Ascent PaaS UI using the following default credentials.
Username: flash-admin@foo.com
Password: flash-password
You can customise your Apica Ascent PaaS deployment either before or after you deploy it in your environment. The types of supported customisations are listed below.
Enabling HTTPS for the Apica Ascent UI
Using an AWS S3 bucket
Installing Apica Ascent server and client CA certificates (optional)
Updating the storage class
Using an external AWS RDS Postgres database instance
Uploading an Apica Ascent professional license
Customising the admin account
Using an external Redis instance
Configuring the cluster_id
Sizing your Apica Ascent cluster
NodePort/ClusterIP/LoadBalancer
Using Node Selectors
Installing Grafana
You can enable HTTPS and assign a custom domain in the ingress for your Apica Ascent UI while installing Apica Ascent in your environment by running the following command.
The following table describes all of the Helm options passed in the command above.
After you run the command, you should then update your DNS server to point to the ingress controller service's IP. Once you've done this, you can access your Apica Ascent UI at the domain https://ascent.my-domain.com that you set in the ingress controller service.
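An illustrative form of the HTTPS-enabled install; the ingress-related values keys are assumptions, so verify them against your chart's values.yaml before use.

```shell
# Hypothetical: enable HTTPS and set a custom domain in the ingress;
# the --set key paths are illustrative assumptions
helm install apica-ascent apica-repo/apica-ascent -n apica-ascent \
  --set global.domain=ascent.my-domain.com \
  --set ingress.tlsEnabled=true \
  -f values.yaml
```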
You can pass your own ingress secret while installing the Helm chart by running the following command.
Depending on your requirements, you may want to either host your storage in your own Kubernetes cluster or create a new storage bucket in a cloud provider like AWS.
If you choose to use an S3 bucket, be sure to deploy your Apica Ascent PaaS cluster in the same region that hosts your S3 bucket. Failing to do so can lead to you incurring additional data transfer costs for transferring data between regions.
To use your own S3 bucket, do the following.
Go to your AWS IAM console and create an access key and secret key with which you can create your S3 bucket. Also provide access to the bucket for writing and reading your log files.
The S3 gateway acts as a caching gateway and helps reduce API costs. Deploy the Apica Ascent Helm chart in gateway mode by running the following command. Ensure you pass your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and name your S3 bucket uniquely.
The following table describes all the Helm options passed in the command above.
Apica Ascent supports TLS for all of your log ingest sources. Apica Ascent also enables non-TLS ports by default. However, we recommend that you don't use non-TLS ports unless you're running Apica Ascent in a secure VPC or cluster.
You can provide server and client CA certificates to the cluster using a Kubernetes secrets file. Before using the following secrets file template, replace the template sections below with your Base64 encoded secret files.
Once you've filled out this template, be sure to save the secrets file and name it appropriately, such as logiq-certs.yaml. You can now install the Apica Ascent Helm chart along with the certificates using the following command.
The following table describes the Helm options passed in the install command.
If you plan on using a specific storage class for your volumes, you can configure your Apica Ascent deployment to use that storage class. Apica Ascent uses the standard storage class by default.
The following table details the Kubernetes StorageClass names and their default provisioners for each cloud provider.
You can update the storage class name for your Apica Ascent deployment by running the following command.
To use an external AWS RDS Postgres database for your Apica Ascent deployment, run the following command.
The following table describes the Helm options that are passed with the command above.
Important: While configuring RDS, create a new parameter group that sets autoVacuum to true or the value 1. Associate this parameter group with your RDS instance.
autoVacuum automates the execution of the VACUUM and ANALYZE commands to gather statistics. autoVacuum checks for bloated tables in the database and reclaims the space for reuse.
The Apica Ascent PaaS Community Edition gives you access to Enterprise Edition features but with lower daily log ingest rates and fewer ingest worker processes. If you feel the need to raise your daily ingest rates and make the most out of Apica Ascent by extending its use to the rest of your team with SSO and RBAC, you can upgrade to the Apica Ascent PaaS Enterprise Edition.
To use apicactl, generate an API token from the Apica Ascent UI, as shown in the following image.
Apica Ascent enables you to set your own admin credentials to log into your Apica Ascent cluster instead of using the default credentials. You can set your admin credentials while deploying Apica Ascent by running the following command.
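A sketch of an install command overriding the default admin login; the values keys under global.environment are assumptions, so consult the chart's values.yaml for the authoritative names.

```shell
# Hypothetical: set custom admin credentials at install time;
# the --set key paths are illustrative assumptions
helm install apica-ascent apica-repo/apica-ascent -n apica-ascent \
  --set global.environment.admin_email=<admin-email> \
  --set global.environment.admin_password=<admin-password>
```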
The following table describes the Helm options passed with the command above.
You can specify an external Redis instance to be used with your Apica Ascent deployment by specifying the Redis host in the installation command, as shown below.
Important: Currently, Apica Ascent only supports connections to a Redis cluster in a local VPC without authentication. If you're using an AWS Elasticache instance, do not turn on encryption-in-transit or cluster mode.
The following table describes the Helm options that can be passed with the command above.
cluster_id
You can configure a cluster ID for your Apica Ascent instance at the time of deployment by passing the cluster_id of your choice in the following install command. This helps you identify your Apica Ascent cluster in case you'd like to monitor it.
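An illustrative form of that install command; the exact values key for cluster_id is an assumption to be checked against the chart's values.yaml.

```shell
# Hypothetical: tag the deployment with a cluster ID for self-monitoring;
# the --set key path is an illustrative assumption
helm install apica-ascent apica-repo/apica-ascent -n apica-ascent \
  --set global.environment.cluster_id=<your-cluster-id>
```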
The following table describes the Helm options passed with the command above.
When deploying Apica Ascent, it's advisable to size your infrastructure appropriately to provide adequate vCPU and memory for the Apica Ascent instance to utilise. The following table describes the minimum recommended sizes for small, medium, and large cluster specifications.
NodePort, ClusterIP, and LoadBalancer
The service type configurations for your Apica Ascent deployment are exposed in the values.yaml file, as shown in the following example.
For example, if you are deploying Apica Ascent on a bare-metal server and want an external load balancer to front Apica Ascent, configure all services as NodePort and pass the service types in the installation command, as shown in the following example.
You can optimise the deployment of the Apica Ascent stack using node labels and node selectors that help place various components of the stack optimally.
You can use the node label logiq.ai/node to control the placement of ingest pods for log data onto ingest-optimised nodes, thereby allowing you to manage costs and instance sizing effectively.
The various nodeSelectors are defined in the globals section of the values.yaml file. In the following example, different node pools such as ingest, common, db, cache, and sync are used.
The Apica Ascent stack bundles Grafana as part of the deployment as an optional component. You can enable Grafana in your Apica Ascent cluster by running the following command.
The Grafana instance is exposed at port 3000 on the ingress controller. The deployed Grafana instance uses the same login credentials as the Apica Ascent UI.
For sizing your Apica Ascent cluster, please refer to the sizing guidance specified in these YAML files, along with the latest image tags.
This will install Apica Ascent and expose the Apica Ascent services and UI on the ingress IP. If you plan to use an AWS S3 bucket, please refer to the relevant section before running this step. Please refer to the storage class section for details about storage classes; service ports are described separately. You should now be able to go to http://ingress-ip/
The default login is flash-admin@foo.com and the password is flash-password. You can change these in the UI once logged in. The Helm chart can also override the default admin settings; see the section on customizing the admin settings.
Apica Ascent server provides Ingest, log tailing, data indexing, query, and search capabilities. Besides the web-based UI, Apica Ascent also offers for accessing the above features.
The ascent.my-domain.com domain also fronts all of the Apica Ascent service ports.
Additionally, provide a valid Amazon service endpoint for S3, or else the config will default to
Provisioning GP3 CSI Driver on AWS EKS -
The deployment described above offers a 30-day trial license. Send an e-mail to support@apica.io to obtain a professional license. After obtaining the license, use the apicactl tool to apply it to the deployment; for details, refer to the apicactl documentation. You will need an API token from the Apica Ascent UI, as shown below.
When deploying Apica Ascent, configure the cluster ID to monitor your own Apica Ascent deployment. For details, refer to the cluster_id section.
When deploying Apica Ascent, size your infrastructure to provide appropriate vCPU and memory requirements. We recommend the following minimum sizes for small, medium, and large cluster specifications from the values YAML files.
Creating an OIDC provider for your EKS cluster -
Please refer to the EKS configuration on how to automatically provision an ALB here -
For setting up data ingestion from your endpoints and applications into Apica Ascent, please refer to the .
You can spin up Apica Ascent using docker-compose
. Install guide for docker-compose
can be found here -
⬇ Download the YAML at the URL -
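Once the compose file is downloaded (assuming it is saved as docker-compose.yml in the current directory), a typical bring-up looks like this:

```shell
# Start the stack in the background using the downloaded compose file.
docker compose -f docker-compose.yml up -d

# Check container status and follow the logs.
docker compose ps
docker compose logs -f
```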
For setting up data ingestion from your endpoints and applications into Apica Ascent, please refer to the .
Please refer to the supported by Apica.
MicroK8s is a lightweight, pure-upstream Kubernetes distribution that aims to reduce the entry barrier for K8s and cloud-native application development. It comes in a single package that installs a single-node (standalone) K8s cluster in under 60 seconds. The lightweight nature of Apica Ascent PaaS enables you to deploy Apica Ascent on lightweight, single-node clusters like MicroK8s. The following guide takes you through deploying Apica Ascent PaaS on MicroK8s.
In this step, we'll provision an endpoint or an IP address where we can access Apica Ascent PaaS after deploying it on MicroK8s. For this, we'll leverage MetalLB, which is a load-balancer implementation that uses standard routing protocols for bare-metal Kubernetes clusters.
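On MicroK8s, MetalLB ships as a built-in add-on; it can be enabled with an address range the load balancer may hand out (the range below is an example — adjust it to your network):

```shell
# Enable the MetalLB add-on on MicroK8s, passing an IP range that the
# load balancer is allowed to assign to services (example range).
microk8s enable metallb:10.64.140.43-10.64.140.49
```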
Prepare your values.microk8s.yaml file. You can use the file we've created to configure your Apica Ascent PaaS deployment. If you need to download the file to your own machine, edit it, and then transfer it to a remote Linux server, use this command:
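A transfer of the edited values file can be sketched with scp (the user, host, and destination path below are placeholders):

```shell
# Copy the edited values file from your workstation to the remote server.
# Replace user and remote-server with your own; the path is an example.
scp values.microk8s.yaml user@remote-server:/home/user/values.microk8s.yaml
```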
Optionally, if you are provisioning a public IP using MetalLB, use the corresponding values file instead, and run the following command.
Go to the .
Data Explorer adds a new way to create queries, dashboards, and widgets directly from a browsable inventory of available metrics and events. With just a few clicks, a query builder guides the simple creation of dashboards and widgets. Please read further on this substantial set of features in our product documentation:
Code Rule is a new rule type introduced with this release that lets users add JavaScript code to enhance their logs. With the help of a Code Block, add a Code Rule to improve your pipelines. A Code Rule takes in a JavaScript function that gets integrated with your pipeline. Please read further on this in the product documentation:
Fleet 🚢 is the ultimate solution for making the collection of observability data responsive to changes in your environment using your pre-existing observability agents. With Fleet, you can collect more data when you need it and less when you don’t. And the best part? Almost all observability agents can be managed through configuration files describing how to collect, enrich, and send data. Fleet aims to simplify this process through an agent manager. The Fleet Agent Manager functions as a sidecar utility that checks for new configuration files and triggers the appropriate restart/reload functionality of the supported agent. The Agent Manager is kept intentionally simple, with the goal that it only needs to be installed once and updated infrequently. Please read further on this in the product documentation:
JS Code Forwarder is a robust batch processing tool designed to efficiently handle and forward batches of events. It supports forwarding arrays of event objects to a specified endpoint, and includes built-in functions for recording metrics, making HTTP requests, and logging.
Your Apica Ascent PaaS instance is now deployed and ready for use. It enables you to ingest and tail logs, index and query log data, and search across your data. Along with the Apica Ascent UI, you can also access these features via Apica Ascent's CLI, .
The command above automatically provisions an S3 bucket for you in the region you specify using the access credentials you pass with the command. If you do not wish to create a new bucket, make sure the access credentials you pass work with the S3 bucket you specify in the command. Additionally, make sure you provide a valid Amazon service endpoint for your S3 bucket or else the configuration defaults to using the endpoint.
You can get yourself an Enterprise Edition license by contacting us via . Once you receive your new license, you can apply it to your Apica Ascent deployment using Apica Ascent's CLI, .
Once you've with your API token and Apica Ascent cluster endpoint, run the following commands to update your license.
global.domain
DNS domain where the Apica Ascent service will be running. This is required for HTTPS
No default
ingress.tlsEnabled
Enable the ingress controller to front HTTPS for services
false
kubernetes-ingress.controller.defaultTLSSecret.enabled
Specify if a default certificate is enabled for the ingress gateway
false
kubernetes-ingress.controller.defaultTLSSecret.secret
Specify the name of a TLS Secret for the ingress gateway. If this is not specified, a secret is automatically generated if the option kubernetes-ingress.controller.defaultTLSSecret.enabled
above is enabled.
global.cloudProvider
This helm option specifies the supported cloudProvider that is hosting the S3 compatible bucket. Right now only aws
is supported.
aws
global.environment.s3_bucket
Name of the S3 bucket in AWS
logiq
global.environment.awsServiceEndpoint
S3 Service endpoint : https://s3.**<region>**.amazonaws.com
global.environment.AWS_ACCESS_KEY_ID
AWS Access key for accessing the bucket
No default
global.environment.AWS_SECRET_ACCESS_KEY
AWS Secret key for accessing the bucket
No default
global.environment.s3_region
AWS Region where the bucket is hosted
us-east-1
logiq-flash.secrets_name
TLS certificate key pair and CA cert for TLS transport
No default
AWS
gp3
EBS
Azure
UltraSSD_LRS
Azure Ultra disk
GCP
standard
pd-standard
Digital Ocean
do-block-storage
Block Storage Volume
Oracle
oci
Block Volume
Microk8s
microk8s-hostpath
global.chart.postgres
Deploy Postgres which is needed for Apica Ascent metadata. Set this to false if an external Postgres cluster is being used
true
global.environment.postgres_host
Host IP/DNS for external Postgres
postgres
global.environment.postgres_user
Postgres admin user
postgres
global.environment.postgres_password
Postgres admin user password
postgres
global.environment.postgres_port
Host Port for external Postgres
5432
global.environment.admin_name
Apica Ascent Administrator name
flash-admin@foo.com
global.environment.admin_password
Apica Ascent Administrator password
flash-password
global.environment.admin_email
Apica Ascent Administrator e-mail
flash-admin@foo.com
global.chart.redis
Deploy Redis which is needed for log tailing. Set this to false if an external Redis cluster is being used
true
global.environment.redis_host
Host IP/DNS of the external Redis cluster
redis-master
global.environment.redis_port
Host Port where external Redis service is exposed
6379
global.environment.cluster_id
Cluster Id being used for the K8S cluster running Apica Ascent. See Section on Managing multiple K8S clusters for more details.
Apica Ascent
small
24
32 GB
3
medium
40
64 GB
5
large
64
128 GB
8
apica-repo/apica-ascent
Apica Ascent Data Fabric HELM chart for Kubernetes
global.domain
The DNS domain where the Apica Ascent service will be running. This option is required to enable HTTPS.
No default
ingress.tlsEnabled
Enables the ingress controller to front HTTPS for services
false
kubernetes-ingress.controller.defaultTLSSecret.enabled
Specifies if a default certificate is enabled for the ingress gateway
false
kubernetes-ingress.controller.defaultTLSSecret.secret
Specifies the name of a TLS secret for the ingress gateway. If this is not specified, a secret is automatically generated if the option kubernetes-ingress.controller.defaultTLSSecret.enabled above is enabled.
global.cloudProvider
This helm option specifies the supported cloudProvider
that is hosting the S3 compatible bucket. Currently, only aws
is supported.
aws
global.environment.s3_bucket
The name of the S3 bucket in AWS
logiq
global.environment.awsServiceEndpoint
The S3 Service endpoint: https://s3.**<region>**.amazonaws.com
global.environment.AWS_ACCESS_KEY_ID
The AWS Access key for accessing the bucket
No default
global.environment.AWS_SECRET_ACCESS_KEY
The AWS Secret key for accessing the bucket
No default
global.environment.s3_region
The AWS Region where the bucket is hosted
us-east-1
logiq-flash.secrets_name
TLS certificate key pair and CA cert for TLS transport
No default
AWS
gp3
EBS
Azure
UltraSSD_LRS
Azure Ultra disk
GCP
standard
pd-standard
Digital Ocean
do-block-storage
Block Storage Volume
Oracle
oci-bv
Block Volume
Microk8s
microk8s-hostpath
global.chart.postgres
Deploys Postgres which is needed for Apica Ascent metadata. Set this to false
if an external Postgres cluster is being used
true
global.environment.postgres_host
The host IP/DNS for external Postgres
postgres
global.environment.postgres_user
The Postgres admin user
postgres
global.environment.postgres_password
The Postgres admin user password
postgres
global.environment.postgres_port
The host port for external Postgres
5432
global.environment.admin_name
The Apica Ascent Administrator's name
flash-admin@foo.com
global.environment.admin_password
The Apica Ascent Administrator password
flash-password
global.environment.admin_email
The Apica Ascent Administrator's e-mail
flash-admin@foo.com
global.chart.redis
Deploys Redis that is needed for log tailing. Set this to false
if you're using an external Redis cluster.
true
global.environment.redis_host
The host IP/DNS of the external Redis cluster
redis-master
global.environment.redis_port
The host port where the external Redis service is exposed
6379
global.environment.cluster_id
The cluster ID being used for the K8s cluster running Apica Ascent. For more information, read Managing multiple K8S clusters.
Apica Ascent
small
12
32 GB
3
medium
20
56 GB
5
large
32
88 GB
8
This guide will take you through deploying Apica Ascent on an EKS cluster on AWS using CloudFormation and HELM. The installation will create user roles and policies that are necessary to create a GP3 storage class and a private S3 bucket with default encryption and bucket policies.
The CloudFormation template provisions the following resources:
S3 Bucket
EKS Cluster
EKS Node Pools
Create a role for EKS and the EKS node pools with the policies below. Alternatively, this role can be created using the CloudFormation template https://logiq-scripts.s3.ap-south-1.amazonaws.com/logiqiamrole.yaml. Details of the created resources will be in the Outputs section of CloudFormation; these details are used in section 5 (steps 4 and 5).
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
AmazonEKSClusterPolicy
AmazonEKSServicePolicy
Create the managed policies below and attach them to the above role; this enables creating GP3 volumes in the cluster.
In order for the IAM role to access the S3 bucket, create the policy below and attach it to the above IAM role
Edit the Trust Relationship in the IAM role and add the following entities
ec2.amazonaws.com
eks.amazonaws.com
Before you begin, ensure you have the following prerequisites.
You have permission on your AWS account to create an Elastic Kubernetes Service cluster and an S3 bucket.
The above-mentioned roles are created.
The AWS CLI is installed and configured on your machine
Helm 3 is installed on your machine.
If you choose to use AWS RDS, then follow the guidelines below for your RDS
Keep your RDS instance DNS, username, and password handy.
Use Postgres V13 RDS type with 100GB storage, io1 with 3000 IOPS.
We recommend creating a db.m5.xlarge for deployments ingesting < 500GB/day and db.m5.2xlarge for deployments ingesting > 500GB/day
Ensure the EKS cluster can connect to the AWS RDS instance. Once the EKS cluster is created, add the EKS cluster's security group to the Postgres security group's inbound rules for port 5432.
Step 1: To prepare for the deployment, first obtain the CloudFormation template that will be used at the URL: https://logiq-scripts.s3.ap-south-1.amazonaws.com/EKSCluster.yaml
Step 2: On your AWS Console, navigate to CloudFormation and select Create stack.
Step 3: Provide the options as shown below
Under Prerequisite - Prepare template, select Template is ready.
Under Specify template > Template source, select Amazon S3 URL - Here you will specify the template URL from Step 1 above.
Step 4: To deploy the EKS cluster, we need to enter the ARN of the IAM Role for EKS that was created in section 3.1. We need a VPC with 2 subnets. Select them from the Network Configuration and Subnet configuration dropdown lists.
The EKS cluster will need the following node groups. Ensure that you select the node groups as specified in the following table.
ingest
c5.xlarge (4 Core 8 GB RAM)
2
common
c5.2xlarge (8 Core 32 GB RAM)
2
db
c5.xlarge (4 Core 8 GB RAM)
2
Step 5: Provide the S3 bucket name from section 3. CloudFormation will create the S3 bucket; the bucket name needs to be globally unique.
Step 6: Click Next, and follow the instructions on the screen to create the stack.
Step 1: Once the stack is fully provisioned, connect to the AWS EKS cluster using AWS CLI as mentioned below. To do this, you need to install and configure AWS CLI.
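Connecting kubectl to a new EKS cluster is normally done with `aws eks update-kubeconfig`; the region and cluster name below are placeholders for the values you chose in the CloudFormation stack:

```shell
# Fetch cluster credentials into ~/.kube/config; substitute your own
# region and the EKS cluster name from the CloudFormation stack.
aws eks update-kubeconfig --region us-east-1 --name my-apica-eks-cluster
```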
Step 2: Once the EKS cluster is up and running, execute the following commands to check the health of the cluster.
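Typical health checks at this point look like the following; nodes should report Ready and the kube-system pods should be Running:

```shell
# Verify the cluster is reachable and healthy.
kubectl cluster-info
kubectl get nodes -o wide
kubectl get pods -n kube-system
```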
Step 1: Download this YAML file and run the commands mentioned below:
Step 2: Once the chart is installed, you should see pods similar to those shown below in your kube-system
namespace.
Step 1: Create the apica-ascent namespace in your EKS cluster
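Creating the namespace is a single kubectl command:

```shell
# Create the namespace that the Apica Ascent stack will be installed into.
kubectl create namespace apica-ascent
```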
Step 2: Download the values file below and customize it per the instructions below.
Step 3: Replace the following variables in values.yaml and proceed to install the Apica Ascent stack on your EKS cluster.
awsServiceEndpoint
: https://s3.<aws-region>.amazonaws.com
s3_bucket
: S3 bucket name
s3_region
: <s3 region>
alert: "PrometheusDown"
expr: absent(up{prometheus="<namespace>/<namespace>-prometheus-prometheus"})
Step 4: Deploy the Apica Ascent stack using Helm and the updated values file. See below for additional options to customize the deployment, such as enabling HTTPS and using an external Postgres database.
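The deployment can be sketched as a standard Helm install against the `apica-repo/apica-ascent` chart named earlier in this guide (the release name is an example):

```shell
# Install the Apica Ascent chart into the apica-ascent namespace using
# the customized values file from the previous step.
helm repo update
helm install apica-ascent apica-repo/apica-ascent \
  --namespace apica-ascent \
  -f values.yaml
```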
Step 5: Run the command below to get the load balancer IP shown in the EXTERNAL-IP column, and browse to it. For UI login, you can find the admin username and password in values.yaml.
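Listing the services in the namespace shows the load balancer address:

```shell
# The ingress/load-balancer service's EXTERNAL-IP column carries the
# address to browse to.
kubectl get svc -n apica-ascent
```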
Step 6 (Optional): To enable HTTPS using self-signed certificates, please add additional options to Helm and provide the domain name for the ingress controller. In the example below, replace "ascent.my-domain.com" with the HTTPS domain where this cluster will be available.
Step 7 (Optional): If you choose to deploy using AWS RDS, provide the following options to customize the deployment.
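Pointing the stack at an external RDS instance can be sketched with the documented `global.chart.postgres` and `global.environment.postgres_*` values keys; the endpoint and credentials below are placeholders:

```shell
# Disable the bundled Postgres and point the stack at an external RDS
# instance. Endpoint, user, and password are placeholder values.
helm upgrade --install apica-ascent apica-repo/apica-ascent \
  --namespace apica-ascent \
  -f values.yaml \
  --set global.chart.postgres=false \
  --set global.environment.postgres_host=mydb.xxxxxx.us-east-1.rds.amazonaws.com \
  --set global.environment.postgres_port=5432 \
  --set global.environment.postgres_user=postgres \
  --set global.environment.postgres_password='my-rds-password'
```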
Deploying Apica Ascent on AWS EKS (Private Endpoint) with Aurora PostgreSQL and ElastiCache Redis on a production VPC using CloudFormation
Before proceeding, ensure the following prerequisites are met:
Helm 3 is installed on your machine.
AWS CLI is installed and configured on your machine.
You have permissions on your AWS account to create resources including Elastic Kubernetes Service (EKS), S3 Bucket, Aurora PostgreSQL, and ElastiCache.
You have configured an AWS Virtual Private Cloud (VPC) and two (2) Private subnets.
Note: These resources will be automatically generated during the CloudFormation deployment process and are not prerequisites for initiating it.
The CloudFormation template provisions the following resources:
S3 Bucket
EKS Cluster
EKS Node Pools
Aurora PostgreSQL
ElastiCache Redis
Note: Ensure you're operating within the same region as your Virtual Private Cloud (VPC).
On the following page (step 1 of stack creation), select "Template is ready" and "Amazon S3 URL". In the "Amazon S3 URL" text field, enter https://logiq-scripts.s3.ap-south-1.amazonaws.com/Private-cluster/apicarole.yaml
Click "Next"
Enter a stack name
Enter an IAM role name for Logiq-EKS (save this value for later). This will create the IAM role.
Enter an S3 bucket name (save this value for later). Make sure to follow the AWS bucket naming rules.
Enter a master username for PostgreSQL (save this value for later). The master username can include any printable ASCII character except /, ', ", @, or a space.
Enter a password for the above PostgreSQL user (save this value for later). The master password should be more than 8 characters.
Enter a database name for the PostgreSQL database. It must start with a lowercase letter.
You can find this by searching for "VPC" on the top left search bar, select the VPC service, click the VPCs resource and select your region. Locate your VPC and copy the VPC ID.
From where you left off after extracting your VPC ID, on the left-hand side menu, select Private Subnets and copy the two subnet IDs you intend to use.
Nothing required here, navigate to the bottom of the page and click "Next"
You can review your configurations, acknowledge the capabilities and click "Submit"
Deployment might take a while. Please wait until the stack status shows "CREATE_COMPLETE" before proceeding.
On the following page (step 1 of stack creation), select "Template is ready" and "Amazon S3 URL". In the "Amazon S3 URL" text field, enter https://logiq-scripts.s3.ap-south-1.amazonaws.com/Private-cluster/pvt-cluster.yaml
Enter a stack name (Whatever you want to call the cluster)
Enter a name for the EKS cluster (Save this value)
Enter the ARN value of the IAM role you created in the previous CloudFormation deployment (Navigate to the previous stack and check outputs tab to find the value for the key LogiqEKSClusterRole)
Select a VPC id in the dropdown (This guide assumes you’ve created these previously)
Select two private VPC subnets with a NAT gateway attached for the above VPC, one from each dropdown.
Enter "2" in the fields for “Ingest Worker Node count” and “Common Worker Node count”
Enter the S3 bucket name you used in the previous CloudFormation deploy in “S3 bucket for Logiq”
Click "Next"
Step 3: Configure stack options and Click "Next"
Step 4: Review and create
Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs the kubectl and AWS CLI tools you need.
Access the bastion host via SSH from your workstation to ensure it works as expected.
Check that the security group attached to your EKS control plane can receive traffic on port 443 from the public subnet. You can create a rule by adding port HTTPS (443) with the bastion host's security group ID as the source in the EKS security group. This will enable communication between the bastion host in the public subnet and the cluster in the private subnets.
Access the bastion host and then use it to communicate with the cluster just as you would with your personal machine.
Update your kubeconfig using the command below.
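From the bastion host, this is again `aws eks update-kubeconfig`; the region and cluster name below are placeholders:

```shell
# Fetch credentials for the private EKS cluster into the bastion host's
# kubeconfig; substitute your region and cluster name.
aws eks update-kubeconfig --region eu-north-1 --name my-private-eks-cluster
```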
Execute the following command:
Execute the following command:
Expected output:
Execute the following command:
Expected output:
Execute the following command:
Download the following file:
Open the file in a text editor and replace the following values:
awsServiceEndpoint:
Replace <region>
with your specific AWS region, for example eu-north-1
. The updated URL format should look like this:
s3_bucket:
Replace the placeholder <>
with the actual name of the S3 bucket that was created during the initial CloudFormation deployment:
s3_region:
Replace the AWS service endpoint region in the URL with the appropriate region, for example, eu-north-1
:
s3_url:
Replace <region>
with the region where you installed it. For example:
redis_host:
Replace <>
with your specific ElastiCacheCluster endpoint generated from the first CloudFormation deploy. For example, if your generated endpoint is apicaelasticache.hdsue3.0001.eun1.cache.amazonaws.com
, you would update the configuration as follows:
You can find this value from the output tab of the first CloudFormation deploy
postgres_host:
Replace <>
with your AuroraEndpoint endpoint. For example, if your generated endpoint is apicadatafabricenvironment-aurorapostgresql-0vqryrig2lwe.cluster-cbyqzzm9ayg8.eu-north-1.rds.amazonaws.com
, you would update the configuration as follows:
You can find this value from the output tab of the first CloudFormation deploy
postgres_user:
Replace <>
with the master username you created during the first CloudFormation deployment:
postgres_password:
Replace <>
with the password for the user you created during the first CloudFormation deployment:
s3_access:
Replace <>
with your AWS CLI access key id.
To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
AWS_ACCESS_KEY_ID
Replace <>
with your AWS CLI access key id.
To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
s3_secret
Replace <>
with your AWS CLI secret access key id.
To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
AWS_SECRET_ACCESS_KEY
Replace <>
with your AWS CLI secret access key id.
To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
Namespace
Search the file for "namespace" and replace <namespace>/<namespace>
with the following:
To modify the administrator username and password, replace the existing details with your desired credentials.
Save the file
Execute the following command:
Expected output:
Ensure that the path to your values.yaml
file is correctly set, or run the commands from the directory that contains the file. Use the following command to deploy:
Expected output:
To get the default Service Endpoint, execute the below command:
Under the EXTERNAL-IP
column you will find a URL similar to below:
Create a Windows server in the same VPC and add a rule for RDP in the Windows server's security group. RDP into it and access the application using the EXTERNAL-IP.
Login credentials are as defined in your values.yaml
file
As the EKS Cluster has been created, we can now set up the access rules for our VPC.
From the first stack, we need to find the SecurityGroups
that were created
Navigate to either EC2
or VPC
by using the search bar, and then look for Security Groups
on the left hand side menu
Search for your security group using the ID
extracted from the 1st stack and click on the ID
Click on "Edit inbound rules"
Now we need to set up 2 rules:
TCP on port 6379, with your VPC CIDR as the source
PostgreSQL (TCP) on port 5432, with your VPC CIDR as the source
Click "Save Rules"
To enable https using self-signed certificates, please add additional options to helm and provide the domain name for the ingress controller.
In the example below, replace apica.my-domain.com
with the https domain where this cluster will be available.
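These HTTPS options map onto the chart's documented TLS values keys; a sketch of the command (release and namespace names are examples) looks like this:

```shell
# Enable HTTPS with an auto-generated self-signed certificate, using the
# chart's documented TLS options. Replace the domain with your own.
helm upgrade --install apica-ascent apica-repo/apica-ascent \
  --namespace apica-ascent \
  -f values.yaml \
  --set global.domain=apica.my-domain.com \
  --set ingress.tlsEnabled=true \
  --set kubernetes-ingress.controller.defaultTLSSecret.enabled=true
```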
To customize your TLS configuration by using your own certificate, you need to create a Kubernetes secret. By default, if you do not supply your own certificates, Kubernetes will generate a self-signed certificate and create a secret for it automatically. To use your own certificates, run the following command, replacing myCert.crt
and myKey.key
with the paths to your certificate and key files respectively:
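The secret creation is a standard `kubectl create secret tls` call; the secret name below is an example:

```shell
# Create a TLS secret from your own certificate/key pair in the
# apica-ascent namespace; the secret name is an example.
kubectl create secret tls apica-tls-secret \
  --cert=myCert.crt \
  --key=myKey.key \
  -n apica-ascent
```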
In order to include your own secret, please execute the command below, replacing $secretName
with your secret to enable HTTPS and replace apica.my-domain.com
with the https domain where this cluster will be available.
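Referencing your own secret uses the documented `kubernetes-ingress.controller.defaultTLSSecret.secret` key; a sketch (release and secret names are examples) looks like this:

```shell
# Use a pre-created TLS secret instead of an auto-generated one.
secretName=apica-tls-secret
helm upgrade --install apica-ascent apica-repo/apica-ascent \
  --namespace apica-ascent \
  -f values.yaml \
  --set global.domain=apica.my-domain.com \
  --set ingress.tlsEnabled=true \
  --set kubernetes-ingress.controller.defaultTLSSecret.secret=$secretName
```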
This creates its own VPC, subnets, and NAT gateways.
Link to Video: https://www.youtube.com/watch?v=3Yw7TfeojDQ
This guide will take you through deploying Apica Ascent on an AWS EKS cluster with Aurora PostgreSQL and ElastiCache Redis using CloudFormation and Helm. The installation will create the user roles and policies necessary to create a GP3 storage class, a private S3 bucket, Aurora PostgreSQL, and ElastiCache with default encryption and bucket policies.
The CloudFormation template provisions the following resources:
S3 Bucket
EKS Cluster
EKS Node Pools
VPC (private and public subnets) and related resources such as an Internet Gateway and a NAT Gateway
Aurora PostgreSQL
ElastiCache Redis
The IAM role, Aurora PostgreSQL, and ElastiCache can be created using the CloudFormation template available at this link: https://logiq-scripts.s3.ap-south-1.amazonaws.com/apicasingleset.yaml. Details of the created resources will be in the Outputs section of CloudFormation; these details are used in section 5 (steps 4 and 5).
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
AmazonEKSClusterPolicy
AmazonEKSServicePolicy
RDSPolicy
ElastiCachePolicy
Aurora Endpoint
ElastiCache Endpoint
Before you begin, ensure you have the following prerequisites.
You have permission on your AWS account to create an Elastic Kubernetes Service, S3 Bucket, Aurora PostgreSQL, and ElastiCache.
The above-mentioned roles and the Aurora and ElastiCache endpoints are created.
The AWS CLI is installed and configured on your machine
Helm 3 is installed on your machine.
Step 1: To prepare for the deployment, first obtain the Cloudformation template that will be used at the URL: https://logiq-scripts.s3.ap-south-1.amazonaws.com/Apica/EKSCluster-singleset.yaml
Step 2: On your AWS Console, navigate to CloudFormation and select Create stack.
Step 3: Provide the options as shown below
Under Prerequisite - Prepare template, select Template is ready.
Under Specify template > Template source, select Amazon S3 URL - Here you will specify the template URL from Step 1 above.
Step 4: To deploy the EKS cluster, we need to enter the ARN of the IAM Role for EKS that was created in section 3.1. We need a VPC with 2 private subnets; they were created by the previous CloudFormation template. Select them from the Network Configuration and Subnet configuration dropdown lists.
The EKS cluster will need the following node groups. Ensure that you select the node groups as specified in the following table.
ingest
c5.xlarge (4 Core 8 GB RAM)
2
common
c5.2xlarge (8 Core 32 GB RAM)
2
Step 5: Provide the S3 bucket name from section 3. CloudFormation will create the S3 bucket; the bucket name needs to be globally unique.
Step 6: Click Next, and follow the instructions on the screen to create the stack.
Step 1: Once the stack is fully provisioned, connect to the AWS EKS cluster using AWS CLI as mentioned below. To do this, you need to install and configure AWS CLI.
Step 2: Once the EKS cluster is up and running, execute the following commands to check the health of the cluster.
Step 1: Download this yaml file and run the commands mentioned below:
Step 2: Once the chart is installed, you should see pods similar to those shown below in your kube-system
namespace.
Step 1: Create the apica-ascent namespace in your EKS cluster
Step 2: Download the values file below and customize it per the instructions below.
Step 3: Replace the following variables in values.yaml and proceed to install the Apica Ascent stack on your EKS cluster.
awsServiceEndpoint
: https://s3.<aws-region>.amazonaws.com
s3_bucket
: S3 bucket name
s3_region
: <s3 region>
redis_host: <ElastiCache endpoint>
postgres_host: <Aurora endpoint>
postgres_user: <>
postgres_password: <>
alert: "PrometheusDown"
expr: absent(up{prometheus="<namespace>/<namespace>-prometheus-prometheus"})
Step 4: Deploy the Apica Ascent stack using Helm and the updated values file. See below for additional options to customize the deployment for enabling HTTPS.
Step 5 (Optional): To enable HTTPS using self-signed certificates, please add additional options to Helm and provide the domain name for the ingress controller. In the example below, replace "ascent.my-domain.com" with the HTTPS domain where this cluster will be available.
Step 6: Once the EKS cluster is created, add the VPC CIDR to the inbound rules of the PostgreSQL and ElastiCache security groups (created by the first CloudFormation template) for ports 5432 and 6379.
Step 7: After the installation is complete, execute the command below to get the service endpoint.
You have a working EKS cluster with the ALB ingress controller for deploying Apica Ascent. Please refer to the section on enabling AWS ALB ingress controller in the PaaS Deployment document for more details.
You can now deploy the Apica Ascent Helm chart. An AWS ALB should get provisioned, and you should be able to see the UI and push data using HTTP/HTTPS ingest. Most agents, such as Fluent Bit, Vector, and Logstash, provide HTTP output support. Please refer to the Integrations section for how to configure them to publish data to the Apica Ascent endpoint.
If you have deployed the Apica Ascent on private subnet, you may need to map global accelerator (under Integrated services) to access the public endpoints and DNS on top of it.
By default, the ALB will be configured to use port 80.
If you want to route traffic over HTTPS (port 443), ensure your listener rules are configured on port 443. As a prerequisite, your Global Accelerator should have all the certificates configured for this to work. Please refer to the AWS documentation on how to configure HTTPS: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
This guide will take you through deploying Apica Ascent on an EKS cluster with node groups using custom AMI on AWS using CloudFormation and HELM. The installation will create user roles and policies that are necessary to create a GP3 storage class and a private S3 bucket with default encryption and bucket policies.
The CloudFormation template provisions the following resources:
S3 Bucket
Launch template with custom AMI
IAM roles and S3 bucket policies
EKS Cluster
EKS Node Pools with custom AMI
Before you begin, ensure you have the following prerequisites.
You have permission on your AWS account to create an Elastic Kubernetes Service, S3 Bucket.
A pre-baked custom AMI to spin up EKS managed node groups (AWS recommended: https://github.com/awslabs/amazon-eks-ami)
A KMS key and an appropriate key policy to allow the Auto Scaling group to access the KMS key (https://aws.amazon.com/premiumsupport/knowledge-center/kms-launch-ec2-instance/)
The AWS CLI is installed and configured on your machine
Helm 3 is installed on your machine.
If you choose to use AWS RDS, then follow the guidelines below for your RDS
Keep your RDS instance DNS, username, and password handy.
Use Postgres V13 RDS type with 100GB storage, io1 with 3000 IOPS.
We recommend creating a db.m5.xlarge for deployments ingesting < 500GB/day and db.m5.2xlarge for deployments ingesting > 500GB/day
Ensure the EKS cluster can connect to the AWS RDS instance. Once the EKS cluster is created, add the EKS cluster's security group to the Postgres security group's inbound rules for port 5432.
Step 1: To prepare for the deployment, first obtain the Cloudformation template that will be used at the URL:
https://logiq-scripts.s3.ap-south-1.amazonaws.com/aws-custom-ami.yaml
Step 2: On your AWS Console, navigate to CloudFormation and select Create stack.
Step 3: Provide the options as shown below
Under Prerequisite - Prepare template, select Template is ready.
Under Specify template > Template source, select Amazon S3 URL - Here you will specify the template URL from Step 1 above.
Step 4: To deploy the EKS cluster, we need to enter the custom AMI ID with which the EKS node groups will be spun up. We need a VPC with 2 subnets. Select them from the Network Configuration and Subnet configuration dropdown lists. Also, provide the SSH keys for the EKS node groups.
The EKS cluster will need the following node groups. Ensure that you select the node groups as specified in the following table.
ingest
c5.xlarge (4 Core 8 GB RAM)
2
common
c5.2xlarge (8 Core 32 GB RAM)
2
Step 5: Provide the S3 bucket name from section 3. CloudFormation will create the S3 bucket; the bucket name needs to be globally unique.
Step 6: Provide the KMS key ARN
Step 7: Click Next, and follow the instructions on the screen to create the stack.
Step 1: Once the stack is fully provisioned, connect to the AWS EKS cluster using AWS CLI as mentioned below. To do this, you need to install and configure AWS CLI.
Step 2: Once the EKS cluster is up and running, execute the following commands to check the health of the cluster.
Step 3: Tag both subnets used in EKS cloud formation as mentioned below. Replace the cluster name, region, and subnet-id.
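The standard subnet tag for EKS subnet discovery is `kubernetes.io/cluster/<cluster-name>=shared`; a sketch with placeholder IDs looks like this:

```shell
# Tag both subnets so EKS and its load balancers can discover them.
# Replace the region, subnet IDs, and cluster name with your own.
aws ec2 create-tags --region us-east-1 \
  --resources subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
  --tags Key=kubernetes.io/cluster/my-apica-eks-cluster,Value=shared
```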
Step 1: The Amazon Elastic Block Store Container Storage Interface (CSI) Driver provides a CSI interface used by Container Orchestrator to manage the lifecycle of Amazon EBS volumes. To enable GP3 volumes for this stack, run the following commands.
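Those commands are not reproduced here, but a gp3 StorageClass for the EBS CSI driver generally takes the following shape. This is a sketch; the class name is an assumption, and parameters should be adjusted to your needs:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3   # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```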
Step 2: Once the chart is installed, you should see pods similar to those shown below in your kube-system namespace.
Step 1: Download the values file below and customize it per the instructions below.
Step 2: Replace the following variables in the values.yaml from step 1 above and proceed to install the Apica Ascent stack on your EKS cluster.
awsServiceEndpoint: https://s3.<aws-region>.amazonaws.com
s3_bucket: <S3 bucket name>
s3_region: <s3 region>
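As a filled-in sketch, those three values.yaml entries might read as follows; the region and bucket name are placeholders:

```yaml
awsServiceEndpoint: https://s3.us-east-1.amazonaws.com   # example region
s3_bucket: my-apica-ascent-bucket                        # hypothetical bucket name
s3_region: us-east-1
```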
Step 3: Create the apica-ascent namespace in your EKS cluster
Step 4: Deploy the Apica Ascent stack using Helm and the updated values file. See below for additional options to customize the deployment, such as enabling HTTPS or using an external Postgres database.
Step 5 (Optional): To enable https using self-signed certificates, please add additional options to helm and provide the domain name for the ingress controller. In the example below, replace "ascent.my-domain.com" with the https domain where this cluster will be available.
Step 6 (Optional): If you choose to deploy using AWS RDS, provide the following options to customize the deployment.
Step 7: After the installation is complete, execute the command below to get the service endpoint.
Deploying Apica Ascent on AWS EKS with Aurora PostgreSQL and ElastiCache Redis on production VPC using Cloudformation
Before proceeding, ensure the following prerequisites are met:
Reference Video: https://www.youtube.com/watch?v=3Yw7TfeojDQ
Helm 3 is installed on your machine. For installation instructions, visit Helm Installation Guide.
AWS CLI is installed and configured on your machine. For installation instructions, visit AWS CLI Installation Guide.
You have permissions on your AWS account to create resources including Elastic Kubernetes Service (EKS), S3 Bucket, Aurora PostgreSQL, and ElastiCache.
You have configured an AWS Virtual Private Cloud (VPC) and two (2) subnets. For configuration, visit AWS Create a VPC.
Note: These resources will be automatically generated during the CloudFormation deployment process and are not prerequisites for initiating it.
The Cloudformation template provisions the following resources:
S3 Bucket
EKS Cluster
EKS Node Pools
Aurora PostgreSQL
ElastiCache Redis
Note: Ensure you're operating within the same region as your Virtual Private Cloud (VPC).
Once logged in to the AWS Console, use the search bar at the top left to search for "CloudFormation" and select the CloudFormation service.
On your top right, click "Create Stack" and select "With new resources (standard)"
Step 1: Create stack
On the following page (step 1 of Stack creation) select "Template is ready" and "Amazon S3 URL". In the "Amazon S3 URL" textfield, enter https://logiq-scripts.s3.ap-south-1.amazonaws.com/apicamultiset.yaml
Click "Next"
Step 2: Specify stack details
Enter a stack name
Enter an IAM role name for Logiq-EKS (Save this value for later)
This will create the IAM role
Enter an S3 bucket name (Save this value for later)
Make sure to apply AWS Bucket Naming Rules
Enter a master username for Postgresql. (Save this value for later)
Master Username can include any printable ASCII character except /, ', ", @, or a space.
Enter a password for the above Postgresql user. (Save this value for later)
Master Password must be at least 8 characters.
Enter a database name for the Postgresql database
The database name must start with a lowercase letter.
Provide a Virtual Private Cloud (VPC) ID
You can find this by searching for "VPC" in the top-left search bar, selecting the VPC service, clicking the VPCs resource, and selecting your region. Locate your VPC and copy its VPC ID.
Enter two (2) Private Subnets.
From the VPC console where you copied your VPC ID, select Subnets in the left-hand menu and copy the IDs of the two private subnets you intend to use.
Click "Next"
Step 3: Configure stack options
Nothing is required here; navigate to the bottom of the page and click "Next".
Step 4: Review and create
You can review your configurations, acknowledge the capabilities and click "Submit"
Deployment might take a while. Please wait until the stack status shows "CREATE_COMPLETE" before proceeding.
If the stack fails for any reason, check the stack events (select your stack and click "Events") to understand the error. To fix the error, delete the stack and repeat the steps above.
After successfully deploying the initial CloudFormation stack, follow these steps to create an EKS Cluster:
From the previous steps, you can click on "Stacks" or with the search bar on your top left, search for "CloudFormation" and select the CloudFormation Service
On your top right, click "Create Stack" and select "With new resources (standard)"
Step 1: Create stack
On the following page (step 1 of Stack creation) select "Template is ready" and "Amazon S3 URL". In the "Amazon S3 URL" textfield, enter https://logiq-scripts.s3.ap-south-1.amazonaws.com/EKSCluster-multiset.yaml
Click "Next"
Step 2: Specify stack details
Enter a stack name (Whatever you want to call the cluster)
Enter a name for the EKS cluster (Save this value)
Enter the ARN value of the IAM role you created in the previous CloudFormation deployment (Navigate to the previous stack and check outputs tab to find the value for the key LogiqEKSClusterRole)
Select a VPC id in the dropdown (This guide assumes you’ve created these previously)
Select two VPC private subnets with a NAT gateway attached for the above VPC from each dropdown.
Enter "2" in the fields for “Ingest Worker Node count” and “Common Worker Node count”
Enter the S3 bucket name you used in the previous CloudFormation deploy in “S3 bucket for Logiq”
Click "Next"
Step 3: Configure stack options
Nothing is required here; navigate to the bottom of the page and click "Next".
Step 4: Review and create
You can review your configurations, acknowledge the capabilities and click "Submit"
Deployment might take a while. Please wait until the stack status shows "CREATE_COMPLETE" before proceeding.
Open a terminal and execute the following:
Example:
Expected output:
Execute the following command:
Expected output:
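Since the command blocks are not reproduced here, the connect-and-verify sequence is likely along these lines; the cluster name and region are placeholders:

```shell
# Point kubectl at the new EKS cluster (hypothetical name and region).
aws eks update-kubeconfig --name apica-ascent-eks --region eu-north-1

# Verify that the cluster nodes are ready.
kubectl get nodes
```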
Download the following file:
Change directory to where you downloaded the file with your terminal (using the command cd)
Example:
Execute the following command:
Expected output:
Execute the following command:
Expected output:
Execute the following command:
Expected output:
Execute the following command:
Download the following file
Open the file in a text editor and replace the following values:
awsServiceEndpoint: Replace <region> with your specific AWS region, for example eu-north-1. The updated URL format should look like this:
s3_bucket: Replace the placeholder <> with the actual name of the S3 bucket that was created during the initial CloudFormation deployment:
s3_region: Replace the AWS service endpoint region in the URL with the appropriate region, for example eu-north-1:
s3_url: Replace <region> with the region where you installed it. For example:
redis_host: Replace <> with your specific ElastiCache cluster endpoint generated from the first CloudFormation deploy. You can find this value in the Outputs tab of the first CloudFormation deploy. For example, if your generated endpoint is apicaelasticache.hdsue3.0001.eun1.cache.amazonaws.com, you would update the configuration as follows:
postgres_host: Replace <> with your AuroraEndpoint. You can find this value in the Outputs tab of the first CloudFormation deploy. For example, if your generated endpoint is apicadatafabricenvironment-aurorapostgresql-0vqryrig2lwe.cluster-cbyqzzm9ayg8.eu-north-1.rds.amazonaws.com, you would update the configuration as follows:
postgres_user: Replace <> with the master username you created during the first CloudFormation deployment:
postgres_password: Replace <> with the password for the user you created during the first CloudFormation deployment:
s3_access: Replace <> with your AWS CLI access key ID. To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
AWS_ACCESS_KEY_ID: Replace <> with your AWS CLI access key ID. To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
s3_secret: Replace <> with your AWS CLI secret access key. To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
AWS_SECRET_ACCESS_KEY: Replace <> with your AWS CLI secret access key. To retrieve your AWS credentials from your local machine, execute the command below in your terminal:
Namespace: Search the file for "namespace" and replace <namespace>/<namespace>-prometheus-prometheus with the following:
To modify the administrator username and password, replace the existing details with your desired credentials.
Save the file
Execute the following command:
Expected output:
Ensure that the path to your values.yaml file is correctly set, or run the commands from the directory that contains the file. Use the following command to deploy:
Expected output:
To get the default Service Endpoint, execute the command below:
Under the EXTERNAL-IP column you will find a URL similar to the one below:
Use this URL in your browser to access the Ascent UI.
Login credentials are as defined in your values.yaml file.
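The Service Endpoint lookup above typically uses kubectl; a sketch follows, where the namespace is an assumption:

```shell
# List services and look for the LoadBalancer EXTERNAL-IP (hypothetical namespace).
kubectl get svc -n apica-ascent
```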
As the EKS Cluster has been created, we can now set up the access rules for our VPC.
From the first stack, we need to find the SecurityGroups that were created.
Navigate to either EC2 or VPC using the search bar, and then look for Security Groups in the left-hand menu.
Search for your security group using the ID extracted from the first stack and click on the ID.
Click on "Edit inbound rules"
Now we need to set up 2 rules:
Custom TCP on port 6379, with your VPC CIDR as the source.
PostgreSQL (TCP) on port 5432, with your VPC CIDR as the source.
Click "Save Rules"
To enable HTTPS using self-signed certificates, add additional options to Helm and provide the domain name for the ingress controller. In the example below, replace apica.my-domain.com with the HTTPS domain where this cluster will be available.
To customize your TLS configuration with your own certificate, you need to create a Kubernetes secret. By default, if you do not supply your own certificates, Kubernetes will generate a self-signed certificate and create a secret for it automatically. To use your own certificates, run the following command, replacing myCert.crt and myKey.key with the paths to your certificate and key files respectively:
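A sketch of that secret-creation command, using the standard kubectl TLS secret syntax; the secret name is a placeholder:

```shell
# Create a TLS secret from your certificate and key (hypothetical secret name).
kubectl create secret tls my-tls-secret --cert=myCert.crt --key=myKey.key
```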
To include your own secret, execute the command below, replacing $secretName with your secret to enable HTTPS and apica.my-domain.com with the HTTPS domain where this cluster will be available.
The following guide takes you through deploying Apica Ascent PaaS in an Azure Kubernetes Service cluster. The deployment involves the following steps:
If you have an AKS cluster that is appropriately sized for deploying Apica Ascent and handling your data ingestion rate, you can skip the AKS cluster creation step. However, you must label the nodes as specified in the node pool configuration table; otherwise, the pods in the cluster will not be scheduled onto any of the nodes.
Install Azure CLI.
Connect to your Azure account using Azure CLI by running the following command.
Create a resource group by running the following command.
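These two steps are standard Azure CLI calls; the resource group name and location below are assumptions:

```shell
# Log in to Azure interactively.
az login

# Create a resource group (hypothetical name and region).
az group create --name apica-ascent-rg --location eastus
```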
Create four node pools as described and labelled in the following table. The table describes the node pool configuration for ingesting 100 GB of data per day.
Important: Ensure that the node pools are all created in the same availability zone.
Size | Node pool | Node labels | Node count
8 Core 16 GB RAM (F8s_v2) | ingest (CPU intensive) | logiq.ai/node=ingest | 1
8 Core 16 GB RAM (F8s_v2) | common | logiq.ai/node=common | 1
4 Core 8 GB RAM (F4s_v2) | db | logiq.ai/node=db | 1
4 Core 8 GB RAM (F4s_v2) | hauler | logiq.ai/node=hauler | 1
Execute the following commands to create the AKS cluster along with the node pools described above.
The following code block depicts example commands for creating the AKS cluster with the node pool specification provided in the table above.
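Since the original example block is not reproduced here, the following is a hedged sketch of what those commands could look like for the ingest and common pools; the resource group, cluster name, and zone are assumptions:

```shell
# Create the AKS cluster with the ingest pool as the initial node pool
# (hypothetical resource group and cluster names).
az aks create \
  --resource-group apica-ascent-rg \
  --name apica-ascent-aks \
  --nodepool-name ingest \
  --node-count 1 \
  --node-vm-size Standard_F8s_v2 \
  --nodepool-labels logiq.ai/node=ingest \
  --zones 1 \
  --generate-ssh-keys

# Add the common pool in the same availability zone.
az aks nodepool add \
  --resource-group apica-ascent-rg \
  --cluster-name apica-ascent-aks \
  --name common \
  --node-count 1 \
  --node-vm-size Standard_F8s_v2 \
  --labels logiq.ai/node=common \
  --zones 1
```

The db and hauler pools follow the same nodepool add pattern with their respective sizes and labels from the table.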
Connect to the AKS cluster by first visiting the Azure portal, navigating to the AKS cluster you created, and selecting it. Next, click on the Connect icon and follow the instructions displayed on the right panel. Execute the following command and you should see the nodes in your cluster.
Follow the instructions on MinIO’s site to create an Azure blob storage account. Once you log in, click the “+” button in the right-hand corner of the screen to create a bucket named logiq. Note down this bucket name since we'll be using it in later steps.
Create an Azure ultra disk storage class using the YAML configuration provided below.
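The referenced YAML is not reproduced here, but an Azure ultra disk StorageClass generally takes this shape; the class name is an assumption:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk-sc   # hypothetical name
provisioner: disk.csi.azure.com
parameters:
  skuName: UltraSSD_LRS
volumeBindingMode: WaitForFirstConsumer
```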
Verify that the storage class has been created by running the following command.
Download the values.yaml file from this location and replace the following variables in the file.
Next, follow the instructions on Apica Ascent’s Quickstart guide to spin up the Apica Ascent stack on this AKS cluster.
Once you've successfully deployed Apica Ascent, you can (optionally) disable monitoring AKS with container insights on your cluster by running the following command.
Azure Blob Storage lifecycle management lets you move your data to the best access tier and expire data at the end of its lifecycle with a set of rule-based policies. When using Azure Blob Storage with the Apica Ascent platform, you can leverage Azure Blob Storage lifecycle management policies to build and employ rules that manage the lifecycle and cost of the data ingested by your Apica Ascent instance and stored on Azure Blobs.
You can use Azure Blob Storage lifecycle management policies to:
Immediately move blobs from cool to hot storage tiers on access, to optimize for performance.
Move blobs, blob versions, and blob snapshots to a cooler storage tier if they've not been accessed or modified for a while, to optimize for cost. In this scenario, the lifecycle management policy can move objects from hot to cool, from hot to archive, or from cool to archive.
Delete blobs, blob versions, and blob snapshots at the end of their lifecycles.
Define rules to be run daily at the storage account level.
Apply rules to containers or to a subset of blobs, using name prefixes or blob index tags as filters.
A lifecycle management policy contains one or more rules that define a set of actions to take on a blob based on a pre-configured condition being met. To add lifecycle management rules in your Storage Account, do the following.
Step 1: On your Storage account in the Azure portal, navigate to the Data Management section and click on Lifecycle management > Add a rule.
Step 2: Provide a Rule name and then select the following options.
Under Rule scope, select Limit blobs with filters
Under Blob subtype, select Base blobs
Step 3: On the Base blobs tab, you can setup data retention and choose what you'd like to do with the data post the retention period. In this example, we've chosen to retain data for 30 days and delete the data post the 30-day retention period.
To set up a similar rule, do the following.
Under the If condition, set the retention period to 30 days.
Under the Then condition, select Delete the blob from the dropdown menu.
Optionally, set up additional qualifiers and conditions if you'd like more granular control over data lifecycle management by clicking Add conditions and configuring the conditions accordingly.
Step 4: Since we selected Limit blobs with filters on the Details page, select Filter set to add an optional filter. On this tab, you can add prefixes to help identify which blobs to delete when the data lifecycle condition is met. If you do not set a prefix, all the objects in the container will be deleted. The prefix follows the format: <container_name>/<blob_name>
In this example, we set a prefix that deletes objects that store logs from the qradar namespace in the default_log_store blob inside the testlogiqblob container.
This completes the configuration of a single data lifecycle management rule on your Azure Storage account. You can add and configure as many rules as you deem appropriate based on your data lifecycle needs.
This document provides a comprehensive guide on setting up, configuring, and managing Ascent in OpenShift.
This guide walks you through the process of setting up an OpenShift cluster on AWS using the Red Hat OpenShift Service on AWS (ROSA). You'll need to create an account on Red Hat, configure AWS CLI and ROSA CLI, and set up roles and networking for the cluster. Once the setup is done, you'll be able to deploy Helm charts to your cluster.
Linux Environment: ROSA CLI works only on Linux.
Red Hat Account: Required for accessing the OpenShift Console.
AWS Account: Required to create and manage OpenShift clusters on AWS.
AWS CLI and ROSA CLI: Required for interacting with AWS and ROSA.
Visit Red Hat Console: Navigate to Red Hat OpenShift Console.
Create an Account: If you don't already have one, create a Red Hat account by following the instructions on the site.
Login: Once your account is created, log in to the console.
After logging into the Red Hat Console, navigate to Clusters and check if there are any existing OpenShift clusters.
If a cluster already exists, proceed to deploy Helm charts.
If no cluster exists, you will need to create one.
Enable OpenShift Services in AWS
Go to the ROSA Getting Started Guide.
Follow the instructions to enable the OpenShift service in your AWS account. This may include linking your AWS account with Red Hat if it hasn't been done already.
Install and Configure AWS CLI
Download AWS CLI: Download and install AWS CLI from AWS CLI Installation.
Configure AWS CLI: Run the following command to configure the AWS CLI with your AWS account credentials:
If you use SSO, log in with the SSO flow instead and create a profile from it (for example, via aws configure sso).
Install and Configure ROSA CLI
Download ROSA CLI: Download and install the ROSA CLI (Red Hat OpenShift Service on AWS CLI) by following the ROSA CLI Installation Guide.
Set Up ROSA CLI: Add the ROSA CLI to your system's PATH. Edit your .bashrc or .zshrc file and add the following line:
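The line to add is a standard PATH export; the install directory below is an assumption:

```shell
# Append the ROSA CLI install directory (hypothetical path) to PATH.
export PATH="$PATH:$HOME/rosa"
```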
Reload the Shell: After modifying the file, reload your shell configuration:
Get the rosa login command from the OpenShift page and proceed to the next steps after a successful login.
Create ROSA Roles in Your AWS Account
Once logged into ROSA, create the necessary roles for your OpenShift cluster in your AWS account:
Configure Networking (Optional)
You can either use an existing network (VPC, subnets, etc.) or create a new network for your OpenShift cluster. If creating a new network, set up the following in AWS:
Virtual Private Cloud (VPC)
Subnets
Security groups
Create the OpenShift Cluster
Open the Red Hat OpenShift Console.
Follow the steps in the console to create a new OpenShift cluster:
Select your region, network, and instance type.
Configure the cluster (OpenShift version, node configuration, etc.).
Create the final role for your OpenShift cluster to gain access:
Once the cluster is successfully created, you can:
Access via Console: Use the URL provided in the ROSA CLI output.
Log in to OpenShift: Use the login credentials provided during cluster creation.
Prerequisites
Kubernetes 1.18, 1.19 or 1.20
Helm 3.2.0+
Dynamic PV provisioner support in the underlying infrastructure
Install Helm 🔨
Follow the instructions in the Helm Installation Guide to install Helm on your local machine.
Log in to the OpenShift Cluster 🔐
Install the oc client from the OpenShift console and create htpasswd login credentials to gain access from your local machine. Log in to your OpenShift cluster using the oc login command:
Deploy a Helm Chart 📦
View the current projects in your OpenShift cluster:
Create a namespace (project) for the Apica deployment:
Replace <namespace> with the desired name for your namespace.
Switch to the newly created namespace:
This ensures that all subsequent commands are executed within the specified namespace.
Install the Helm chart for Apica:
Use the helm install command to deploy a Helm chart. Replace the placeholders with your actual values.
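A hedged sketch of such an invocation; the release name is hypothetical, and the chart reference and namespace are left as placeholders:

```shell
# Install the chart into your namespace (all names are placeholders).
helm install my-apica-release <chart-reference> \
  --namespace <namespace> \
  -f values.yaml
```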
Example:
Retrieve the service accounts created in the namespace:
Service accounts may require elevated permissions to perform specific operations, such as creating pods. Assign the privileged Security Context Constraints (SCC) to the necessary service accounts.
Use this command to assign the privileged SCC to a service account:
Use the following script to grant the privileged SCC to multiple service accounts:
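The script referenced above could be sketched as follows; the service account names are hypothetical, so use the ones returned in the previous step:

```shell
# Grant the privileged SCC to multiple service accounts (hypothetical names;
# <namespace> is the namespace you created earlier).
for sa in default apica-ascent-sa-1 apica-ascent-sa-2; do
  oc adm policy add-scc-to-user privileged -z "$sa" -n <namespace>
done
```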
To confirm the assigned permissions, run:
Replace privileged with a different SCC if less permissive access is sufficient.
Ensure the namespace and service accounts exist before assigning SCCs.
The Node Exporter component is not deployed by default because the required port is busy. To resolve this, set the port to 9101 during deployment.
Go to the OpenShift console and open the Node Exporter DaemonSet. Check the pod status; if it is not running, open the YAML, update the port to 9101, and save it.
Ensure these commands are executed after the Helm installation, either as part of the helm install command or separately.
Once the deployment is complete, update the certificate in the kubernetes-ingress-default-cert secret.
Go to the OpenShift console > project > Secrets, replace the original secret key and certificate, and save.
OpenShift Documentation: Security Context Constraints
Helm Documentation: Helm Charts
This guide provides the necessary steps for deploying Apica on OpenShift.
Verify the Deployment 🛡️
After deploying the Helm chart, verify the installation:
Check the status of your pods:
With our RTO solution, we want everything to be simple, straightforward, and easy to use. With this in mind, we created this pre-built dashboard, which has everything you'll need to monitor your Boomi infrastructure in one spot.
As you can see in the highlighted box, we have all your infrastructure tabs, such as Compute, Storage, Memory, and Network. In addition, we have included JVM and Atom-specific metrics, which are integral to monitoring your Boomi integrations. We have all your alerts and logs in the same dashboard for convenience:
This allows for easy correlation between logs and machine performance, leading to faster root cause analysis.
Ascent has a robust dashboard capability which provides numerous methods to visualize your critical data - across metrics, events, logs, and traces. You can visualize and detect anomalies, and get notified before any potential incident.
Expand Create from the navbar and click Dashboard. A popup will be displayed prompting for the dashboard name.
Enter a name for the dashboard.
Click on Create. You will be navigated to your new dashboard.
Click the Add Widget button at the bottom of the page.
Select a query that you want to visualize.
Select a visualization for the selected query.
Click Add to Dashboard.
Click Apply Changes.
Steps to add a widget:
Navigate to the dashboard for which you need to add a widget.
Click the More Options icon in the top right corner of the dashboard page.
Click edit from the dropdown.
Click the Add Widget button at the bottom of the page.
Select a query that you want to visualize.
Select a visualization for the selected query.
Click Add to Dashboard.
To publish, simply click the publish button on the top right corner of the dashboard page. After your dashboard is published, you can share it with anyone using the share option.
The dashboard widgets execute the queries and visualize the results. You can configure all the widgets to automatically refresh to get the latest results.
Steps to make an auto-refreshing dashboard:
Navigate to any dashboard.
Click the down arrow next to the refresh button, which is available in the top right corner.
Select the time interval in which all the widgets in the dashboard will be refreshed automatically.
Now, the dashboard widgets will be refreshed on every selected time interval.
You can get more out of the monitoring dashboard when it monitors various aspects of your target. Building that kind of dashboard with trickier queries can be time-consuming and delay you from learning more about your application and infrastructure.
We help you to build a viable dashboard with a few clicks by providing you with pre-defined dashboards for some of the most common use cases.
Expand the dashboard option from the navigation bar.
Click on the Import dashboard.
You will be navigated to the import dashboard page, where you will be provided with some of the pre-defined dashboards.
Click the import button for the dashboard.
A pop-up will ask you to provide the dashboard name and the data source that will be used by the queries in the dashboard widgets.
After providing the inputs, click Import. You will be navigated to the dashboard page.
Grafana is an open-source tool for building monitoring and visualization tools. It has a public repository with thousands of dashboards published and maintained by its community, used by millions of people to monitor their infrastructure.
We are providing some popular dashboards from their public repository for you to monitor.
Navigate to the import dashboard page.
Click the Import Button under the Grafana dashboard.
Select the type of target that you want to monitor. You will be provided with the list of dashboards available for the selected target.
Click the view button to get details of that dashboard.
Click select to import the dashboard.
Provide a name for the dashboard and select the datasource that will be used by the widgets.
Click Import. You will be redirected to the dashboard.
Our supported monitoring targets include:
FluentBit
Go Application
Kafka
Kubernetes
Redis
Postgres
Prometheus
Node
JSON Data source provides a quick and flexible way to issue queries to arbitrary RESTful endpoints that return JSON data.
Navigate to Integrations > Data Sources
Click New Data Source
Select JSON
Create the data source
Enter a name for your data source (required)
Enter Basic Authentication credentials (optional)
Navigate to Queries and click New Query
In the drop-down on your left hand side, select your new data source
The following HTTP options are used for sending a query
url - the URL where the RESTful API is exposed
method - the HTTP method to use (default: get)
headers - a dictionary of headers to send with the request
auth - basic auth username/password (should be passed as an array: [username, password])
params - a dictionary of query string parameters to add to the URL
data - a dictionary of values to use as the request body
json - same as data, except that it is converted to JSON
path - accessing attributes within the response
field - rows of objects within the selected attribute
The response data can be filtered by specifying the path and fields parameters. The path filter allows accessing attributes within the response; for example, if a key foo in the response contains rows of objects you want to access, specifying path foo will convert each of the objects into rows.
In the example below, we then select the fields volumeInfo.authors, volumeInfo.title, volumeInfo.publisher, and accessInfo.webReaderLink.
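Putting the options together, a query against a public JSON endpoint might look like this sketch; the Google Books URL here is an illustrative assumption:

```yaml
url: https://www.googleapis.com/books/v1/volumes?q=observability   # example endpoint
path: items
fields: ["volumeInfo.authors", "volumeInfo.title", "volumeInfo.publisher", "accessInfo.webReaderLink"]
```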
The resulting data from the above query is a nicely formatted table that can be searched in Apica Ascent or made available as a widget in a dashboard
Apica Ascent supports SQL, NoSQL, Time Series, and API data sources along with Apica Ascent's inbuilt data source to help you query data from different sources to make sense of your data. Currently supported Data Sources on Apica Ascent are shown below
Apica’s Run-Time Observability (RTO) solution provides comprehensive monitoring for Boomi Atoms, Molecules, and processes, enabling engineers to monitor, manage, and optimize the performance of their Boomi environments. RTO aggregates key performance metrics, logs, and traces to give full visibility into the health and operation of Boomi runtimes, ensuring seamless integration processes and proactive issue resolution.
Complete Visibility: Monitor critical system metrics (CPU, memory, disk, network) and application performance (Boomi APM, Java APM) for comprehensive oversight of Boomi environments.
Improved Troubleshooting: By correlating logs, metrics, and traces, RTO makes diagnosing performance issues and root causes easier across Boomi Atoms, Molecules, and Clouds. Engineers can quickly pinpoint areas of concern and address them with minimal downtime.
Process Monitoring: RTO allows engineers to track the performance and execution status of Boomi integrations in real time, providing visibility into each stage of the process. This ensures that any delays, failures, or resource issues within the integration pipeline can be identified and addressed proactively.
Track key performance metrics across Boomi Atoms, Molecules, and Clouds, including system resource usage and Boomi-specific metrics such as process execution time, execution count, and Atom worker activity.
Capture and analyze logs from Atom workers, web servers, and containers, enabling quick issue identification and efficient troubleshooting.
Pre-built customizable dashboards to visualize critical performance metrics, helping engineers prioritize and resolve issues quickly.
Monitor critical Boomi metrics such as CPU utilization, process failures, and JVM health, notifying engineers of potential issues in real time. These alerts help ensure quick issue resolution, minimize downtime, and maintain optimal performance across Boomi environments.
Additional RTO content links:
This page describes the deployment of Apica Ascent PaaS on MicroK8s Red Hat 8/9.
Red Hat v8 / v9
32 vCPU
64GB RAM
500GB disk space on the root partition
The first step in this deployment is to install MicroK8s on your machine. The following instructions pertain to RHEL-based Linux systems. To install MicroK8s on such systems, do the following.
Update package lists by running the following command.
To install MicroK8s on Red Hat, use the following commands.
Once you have added these repos to the server, run the commands below. Note: this applies if you are running RHEL on-premises with Red Hat CDN (a connected environment), where subscription management is handled automatically:
Install core using Snap by running the following command.
Install MicroK8s using Snap by running the following command.
Join the group created by MicroK8s that enables uninterrupted usage of commands that require admin access by running the following command.
Create the .kube directory.
Add your current user to the group to gain access to the .kube caching directory by running the following command.
Generate your MicroK8s configuration and merge it with your Kubernetes configuration by running the following command.
Check whether MicroK8s is up and running with the following command.
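Taken together, the installation steps above commonly look like this on RHEL; this is a sketch that assumes snapd is available via the EPEL repository:

```shell
# Enable EPEL and install snapd (assumes a connected RHEL system).
sudo dnf install -y epel-release
sudo dnf install -y snapd
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap

# Install core and MicroK8s via Snap.
sudo snap install core
sudo snap install microk8s --classic

# Allow the current user to run microk8s commands without sudo.
sudo usermod -aG microk8s "$USER"

# Merge the MicroK8s kubeconfig and confirm the cluster is ready.
mkdir -p ~/.kube
microk8s config > ~/.kube/config
microk8s status --wait-ready
```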
MicroK8s is now installed on your machine.
Now that we have MicroK8s up and running, let’s set up your cluster and enable the necessary add-ons, such as Helm, CoreDNS, ingress, storage, and a private registry. MicroK8s readily provides these add-ons, which can be enabled and disabled at any time. Most of them are pre-configured to work without any additional setup.
To enable add-ons on your MicroK8s cluster, run the following commands in succession.
Enable Helm 3.
If you get a message saying you have insufficient permissions, a few of the commands above that tried to interpolate your current user via the $USER variable did not work. You can easily fix this by adding your user to the microk8s group, specifying the username explicitly:
Enable a default storage class that allocates storage from a host directory.
Enable CoreDNS.
Enable ingress.
To enable the Ingress controller in MicroK8s, run the following command:
Enable HTTPS (optional)
How to Create a Self-Signed Certificate using OpenSSL:
Create server private key
Create certificate signing request (CSR)
Sign the certificate using the private key and CSR
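The three OpenSSL steps above can be run as follows; the subject common name and the 365-day validity are placeholder values to adjust for your environment:

```shell
# 1. Create the server private key
openssl genrsa -out cert.key 2048

# 2. Create a certificate signing request (CSR) from the key
openssl req -new -key cert.key -out cert.csr -subj "/CN=ascent.example.local"

# 3. Self-sign the certificate using the private key and CSR
openssl x509 -req -in cert.csr -signkey cert.key -out cert.crt -days 365
```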
To create a TLS secret in MicroK8s using kubectl, use the following command:
This command creates a secret named "https" containing the TLS keys for use in your Kubernetes cluster. Ensure that the cert.crt and cert.key files are in your current directory, or specify their full paths.
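A kubectl invocation matching this description (secret name "https", keys taken from cert.crt and cert.key) would look like the following; it creates the secret in the default namespace unless you add -n:

```bash
microk8s kubectl create secret tls https \
  --cert=cert.crt --key=cert.key
```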
To enable Ingress on microk8s with a default SSL certificate, issue the following command:
Enable private registry.
Copy over your MicroK8s configuration to your Kubernetes configuration with the following command.
To provision an IP address, do the following:
Check your local machine's IP address by running the ifconfig command, as shown below.
Enable MetalLB by running the following command.
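MetalLB needs an address range to hand out. As a sketch, assuming your machine's IP from ifconfig is on the 192.168.1.0/24 network and the range below is unused, you can pass the range directly to the enable command:

```bash
microk8s enable metallb:192.168.1.240-192.168.1.250
```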
Now that your MicroK8s environment is configured and ready, we can proceed with installing Apica Ascent PaaS on it. To install Apica Ascent PaaS using Helm, do the following:
Add the Apica Ascent PaaS Helm chart to your Helm repository by running the following command.
Update your Helm repository by running the following command.
Create a namespace on MicroK8s on which to install Apica Ascent PaaS.
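The three steps above might look like the following; the repository name and URL here are placeholders, since the exact Apica chart repository URL is not shown in this section:

```bash
# Add the Apica Ascent Helm repository (URL is a placeholder)
helm repo add apica-repo https://charts.example-apica.invalid

# Update your Helm repository
helm repo update

# Create the target namespace on MicroK8s
microk8s kubectl create namespace apica-ascent
```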
Make sure you have the necessary permissions to copy a file to the specified folder on the Linux machine. If you are not providing cloud S3 details and want to spin up an S3 bucket internally within the VM, comment out the following lines in the values.yaml file:
And then change s3gateway to 'true':
In the values file, add the fields below to the global -> environment section with your own values.
In the global -> chart section, change S3gateway to false.
In the global -> persistence section, change storageClass as below.
Install Apica Ascent PaaS using Helm with the storage class set to microk8s-hostpath by running the following command.
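A sketch of the install command, assuming the chart is named apica-ascent in the repository added earlier and you prepared values.microk8s.yaml as described above (the release/chart names are assumptions):

```bash
helm install apica-ascent apica-repo/apica-ascent \
  -n apica-ascent \
  -f values.microk8s.yaml \
  --set global.persistence.storageClass=microk8s-hostpath
```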
If you see a large wall of text listing configuration values, the installation was successful - Ascent PaaS is now installed in your MicroK8s environment!
Spin up an internal S3 bucket using MinIO - If you are not using the cloud S3 variables in the values.yaml file and want to create an internal S3 bucket, create an s3-batch.yaml file and run the batch job below to spin up an S3 bucket using MinIO:
Create an s3-batch.yaml file and insert the contents below:
Apply the batch job:
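Assuming the file was created in the current directory and Ascent is installed in the apica-ascent namespace:

```bash
microk8s kubectl apply -f s3-batch.yaml -n apica-ascent
```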
Delete the Thanos pods (apica-ascent-thanos-compactor-XXXXXX and apica-ascent-thanos-storegateway-0) so they are recreated after applying s3-batch.yaml:
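Using the pod names from the step above (substitute the compactor's real generated suffix for XXXXXX), the deletion might look like:

```bash
# List the Thanos pods to find the compactor's generated suffix
microk8s kubectl get pods -n apica-ascent | grep thanos

# Delete them so they are recreated against the new S3 backend
microk8s kubectl delete pod -n apica-ascent apica-ascent-thanos-storegateway-0
microk8s kubectl delete pod -n apica-ascent apica-ascent-thanos-compactor-XXXXXX
```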
Now that Apica Ascent PaaS is installed on your MicroK8s cluster, you can visit the Apica Ascent PaaS UI by either accessing the MetalLB endpoint we defined in the pre-install steps (if you installed/configured MetalLB), or by accessing the public IP address of the instance over HTTP(S) (if you aren't utilizing MetalLB).
If you are load balancing the hosting across multiple IPs using MetalLB, do the following to access the Apica Ascent PaaS UI:
Inspect the pods in your MicroK8s cluster in the apica-ascent namespace by running the following command.
Find the exact MetalLB endpoint that's serving the Apica Ascent PaaS UI by running the following command.
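One way to list the services and spot the LoadBalancer endpoint (the exact service name may differ by release):

```bash
microk8s kubectl get service -n apica-ascent | grep LoadBalancer
```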
The above command should give you an output similar to the following.
Using a web browser of your choice, access the IP address shown by the load balancer service above. For example, http://192.168.1.27:80.
If you aren't utilizing MetalLB, you can access the Ascent UI simply by accessing the public IP or hostname of your machine over HTTP(S); you can utilize HTTPS by following the "enabling HTTPS" step in the "Enabling Add-Ons" section above.
You can log into Apica Ascent PaaS using the following default credentials.
Username: flash-admin@foo.com
Password: flash-password
If you run into issues ingesting logs, you may need to add new ingress paths as part of upgrading the image. From the CLI, edit the ingress.
Copy the paths below, paste them in, and save.
Kubernetes cluster is unreachable
If you see an error message indicating the Kubernetes cluster is unreachable, the Microk8s service has stopped - simply restart it. Error text:
Solution:
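A minimal recovery using the MicroK8s service commands:

```bash
microk8s stop
microk8s start
microk8s status --wait-ready
```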
Restarting the Ascent installation after a failed installation
If the Ascent installation using the supplied .yaml file fails, you must first remove the name in use. Error text:
Solution:
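Assuming the release was named apica-ascent in the apica-ascent namespace, removing the failed release before retrying might look like:

```bash
helm uninstall apica-ascent -n apica-ascent
# then re-run the helm install command with your values file
```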
The Boomi logs contain several crucial indicators that can be used to stay ahead of potential issues within your infrastructure. Two key ones already baked into the RTO solution are "Multiple Head Nodes" and "Container Version Mismatch". We'll use "Multiple Head Nodes" as an example of creating a similar alert on Boomi logs.
To query and alert on a log, first navigate to Queries -> New Report (top right).
Then click the pencil icon:
In this next view you can copy the below configuration. This will be the same for most Boomi logging alerts.
The main field you'll want to change is the message value at the bottom. Change this to the value you want the query to look for in the Boomi logs. For example, if you had a process called "Salesforce API" and wanted to be alerted on anything except INFO, the configuration would look like this:
This is saying if the log message contains "Salesforce API" and the loglevel is not "INFO", meaning the value is failure, warning, error, etc., then return the timestamp and hostname (configured under "group by" above) for the machine it failed on.
Out of the box, we provide 15 pre-configured alerts:
You are off and running once an alert destination is set up and attached to the alert. We integrate with most ITSM providers and have custom webhooks for the ones we don't. To configure an alert destination, navigate to Integrations -> Alert Destinations -> New Alert Destination (top right), and fill out the appropriate fields:
Once the destination is configured you simply add it to the alert:
The alerts we provide cover most, if not all of your bases, but if you wanted to configure an alert on another metric or log it's simple enough. Navigate to your RTO dashboard, click the three dots on the widget you'd like to be alerted on:
Fill out the fields below with your criteria. Best Practice: Apica recommends clicking the "+" so you can set two values: a lower threshold that throws a warning and a higher threshold that throws critical or emergency. This sets your team up for a proactive monitoring approach.
A newly created dashboard is blank and has no widgets. Widgets are created on top of queries, so if you don't have any queries yet, please follow the documentation to create one.
Apica Ascent also includes a Grafana dashboard import section where popular Grafana dashboards can be directly imported into Apica Ascent. See the corresponding section for how to use that capability.
MicroK8s is a lightweight, pure-upstream Kubernetes aiming to reduce entry barriers for K8s and cloud-native application development. It comes in a single package that installs a single-node (standalone) K8s cluster in under 60 seconds. The lightweight nature of Apica Ascent PaaS enables you to deploy Apica Ascent on lightweight, single-node clusters like MicroK8s. The following guide takes you through deploying Apica Ascent PaaS on MicroK8s.
In this step, we'll provision an endpoint or an IP address where we can access Apica Ascent PaaS after deploying it on MicroK8s. For this, we'll leverage MetalLB, which is a load-balancer implementation that uses standard routing protocols for bare-metal Kubernetes clusters.
Prepare your values.microk8s.yaml file. You can use the starter file we've created to configure your Apica Ascent PaaS deployment. If you need to download the file to your own machine, edit it, and then transfer it to a remote Linux server, use this command:
Optionally, if you are provisioning a public IP using MetalLB, use the corresponding values file instead and run the following command.
Apica Ascent supports numerous services for AWS directly as Datasources.
You can find documentation for the following AWS Data sources below
Apica Ascent helps you connect to your Redshift cluster so you can easily query your data and build dashboards to visualize it.
The first step is to create a Redshift cluster. To get started with Amazon Redshift, see https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html
The second step is to add Redshift as a data source in Apica Ascent, fill out the fields below, and save.
Name: Name the data source (e.g. Redshift)
Host: The full URL to your instance
Port: The port of the instance endpoint (e.g. 3306)
User: A user of the instance
Password: The password for the above user
Database name: The name of the virtual database for Redshift (e.g. Redshift1)
That's it. Now navigate to the Query editor page and start querying your data.
The first thing you’ll need to do is create an IAM user that will have permission to run queries with Amazon Athena and access the S3 buckets that contain your data.
To configure your Amazon Athena with the necessary permission, please navigate to https://docs.aws.amazon.com/athena/latest/ug/setting-up.html
After your Amazon Athena is configured, the next step is to create and add the Amazon Athena data source to your Apica Ascent.
The next step is to fill out the details using the information from the previous step:
AWS Access Key and AWS Secret Key are the ones from the previous step.
AWS Region is the region where you use Amazon Athena.
S3 Staging Path is the bucket Amazon Athena uses for staging/query results. You might have created it already if you used Amazon Athena from the AWS console - simply copy the same path.
That's it. Now navigate to the Query editor to query your data.
Apica Ascent connects to Amazon CloudWatch using the boto3 client with the help of the AWS CloudWatch data source making it easy for you to query CloudWatch metrics using its natural syntax, analyze, monitor, and create Visualization of data.
The first step is to create an Amazon CloudWatch data source and provide all details such as the Name, AWS Region, AWS Access Key, AWS Secret Key
Name: Name of the Data Source
AWS Region: Region of your AWS account
AWS Access Key: access_key_id of your IAM Role
AWS Secret Key: secret_access_key of your IAM Role
These instructions assume you are familiar with the CloudWatch ad-hoc query language. To make exploring your data easier, the schema browser shows which Namespaces and Metrics (and, optionally, Dimensions) you can query.
Apica Ascent includes a simple point-and-click wizard for creating CloudWatch queries. You can launch the query wizard by selecting the CloudWatch YAML data source and selecting the "Construct CloudWatch query" icon.
In the query designer, you can select the Namespace, Metric, and Dimensions along with the Stat. You can add one or more Namespaces and Metrics using a simple point-and-click interface.
You are now ready to run and plot the metric. Running Execute will automatically create the built-in line graph for your metric. You can further create additional visualizations using "New Visualization".
For the curious, here is a breakdown of the YAML syntax and what the various attributes mean. NOTE: You don't need to write or type these to query data. The no-code built-in WYSIWYG editor makes it easy to query CloudWatch without writing any code. Let us look at the YAML syntax now. It should be an array of MetricDataQuery objects under a key called MetricsDataQueries.
Here's an example that sends a MetricDataQuery:
Your query can include the following keys:
- LogGroupName: string
- LogGroupNames: array of strings
- StartTime: integer or timestring
- EndTime: integer or timestring
- QueryString: string
- Limit: integer
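Putting those keys together, a minimal illustrative query might look like the following; the log group name and query string are placeholders, not values from this document:

```yaml
LogGroupName: "/aws/lambda/my-function"   # placeholder log group
StartTime: 1609459200                     # Unix epoch or timestring
EndTime: 1609545600
QueryString: "fields @timestamp, @message | limit 20"
Limit: 20
```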
Let's look at a slightly more complex example and query AWS Lambda metrics for AWS Lambda Errors. In this example, we are using the MetricName: "Errors" for the "AWS/Lambda" Namespace.
When selecting the AWS/Lambda Namespace, you can see the available MetricNames
AWS/Lambda
Errors
ConcurrentExecutions
Invocations
Throttles
Duration
IteratorAge
UnreservedConcurrentExecutions
Below is an example query that tracks AWS Lambda errors as an aggregate metric. The StartTime is templatized and allows dynamic selection.
You can further click on the Errors MetricName and it will expand to show you Dimensions available for further querying. For AWS/Lambda, the Dimension FunctionName provides further drill down to show Cloudwatch metrics by Lambda Function Name.
The query can be further enhanced by making the Lambda function name a templatized parameter. This allows you to pull metrics using a dropdown selection, e.g. a list of Lambda functions. The FunctionName template below can also be retrieved from another database as a separate query.
An expression can be a mathematical expression of metrics or an SQL query.
Each list item in the MetricDataQueries list in the examples above can contain either an Expression or a MetricStat query item. You can also provide a combination of both.
In the above example, the second item uses MetricStat syntax to fetch data and the first item uses expression syntax. Here, the first item performs a math expression on the data fetched by the second item.
In the above example, the first and second items fetch metric data; the third item performs a mathematical expression on the data fetched by the first and second items.
The Period indicates granularity, and Stat indicates the group-by (aggregation) operation performed on the fetched data.
or
For more detailed information on querying CloudWatch metrics, see the following links: https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-data.html
Apica Ascent supports Amazon Elasticsearch Service as a data source, which makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open-source, distributed search and analytics suite derived from Elasticsearch.
Let's see how Amazon Elasticsearch Service works
The first step is to add Amazon Elasticsearch Service Data Source to your Apica Ascent. Fill out the below fields while configuring the data source
Name: Name of the data source
Endpoint: The endpoint of the Amazon Elasticsearch Service instance
Region: The region of the Amazon Elasticsearch Service instance
Access Key (optional): Access Key of the IAM user
Secret Key (optional): Secret of the IAM user
That's all. The next step is to navigate to the Query editor page and start querying the data
Apica Ascent can connect to your Databricks cluster and SQL endpoints.
The first step is to obtain the Host, HTTP Path, and an Access Token for your endpoint from Databricks. Refer to the link below to obtain the necessary information.
The next step is to add the data source in Apica Ascent using the information obtained above.
That's it. Now navigate to the Query editor page and start querying.
Apache Druid is a real-time database to power modern analytics applications. Druid is designed to quickly ingest massive quantities of event data, and provide low-latency queries on top of the data.
Apica Ascent can connect to Druid to help you analyze your data.
The first step is to add Druid Data Source to your Apica Ascent. Fill out the below fields while configuring the data source
Name: Name of the data source
Scheme (optional): HTTP/HTTPS scheme of your Druid instance
Host: Host Endpoint point of your Druid Instance
Port: Port address of your Druid Instance
That's all. Now navigate to the Query editor page and start Querying
Apica Ascent lets you connect to your MongoDB for seamless Querying of data
The first step is to create a MongoDB data source and provide all details such as the name of the data source, the connection string, and the database name of your MongoDB. Optionally, you can add a Replica Set Name.
The next step is to navigate to the Query editor page and start querying data from your MongoDB.
Apica Ascent lets you connect to the Microsoft SQL Server which is a relational database management system (RDBMS) that supports a wide variety of transaction processing, business intelligence, and analytics applications in corporate IT environments.
With Apica Ascent you can easily query, monitor, and visualize the MS SQL Server data
The first step is to add MS SQL Server as a data source in Apica Ascent, fill out the fields below, and save.
Name: Name the data source
User: A user of the MS SQL Server which is in the form: user@server-name
Password: The password for the above user
Server: This is your server address without the .database.windows.net suffix
Port: The port of the MS SQL Server
TDS Version: TDS Version of your MS SQL Server
Character Set: Character encoding of your MS SQL Server
Database name: The name of the database of the MS SQL Server
That's all, now navigate to the Query Editor to query your data
Apica Ascent helps you connect to the Amazon RDS for MySQL data source, which makes it easy for you to query MySQL using its natural syntax, and to analyze, monitor, and visualize your data.
All your query results are cached, so you don't have to wait for the same result set every time. Apica Ascent also helps you visualize the data gathered from your queries.
The first step is to create a MySQL data source and provide all details such as the Host, Port, User, Password, and Database name of your MySQL
Name: Name the data source
Host: This is your MySQL server address
Port: The port of the MySQL Server
User: A user of the MySQL Server
Password: The password for the above user
Database name: The name of the database of the MySQL Server
That's it. Now navigate to the Query editor page to query and create Visualizations of your data
Apica Ascent lets you connect to your MySQL easily and provides a rich Query editor to Query your MySQL using its natural syntax.
All your query results are cached, so you don't have to wait for the same result set every time. Apica Ascent also helps you visualize the data gathered from your queries.
The first step is to create a MySQL data source and provide all details mentioned below
Name: Name the data source
Host: This is your server address
Port: The port of the MySQL Server
User: A user of the MySQL Server
Password: The password for the above user
Database name: The name of the database of the MySQL Server
Optionally you can use the SSL protocol for the secure transaction of information
That's it. Now navigate to the Query editor page to query your data
Also make sure to check out the instructions for whitelisting your Apica Ascent IP address when connecting to Synapse.
Apica Ascent helps you to connect your Snowflake for faster querying and visualization of your data.
The first step is to create a Snowflake data source and provide all details mentioned below
Name: Name the data source
Account: Unique identifier of Snowflake account within the organization
User: Unique username of your Snowflake account
Password: The password for the above user
Warehouse: The Warehouse name
Database name: The name of the database of the Snowflake.
That's it. Now navigate to the Query editor page to query your data
Elasticsearch data source provides a quick and flexible way to issue queries to one or more indexes in an Elasticsearch cluster
The first step is to create the data source and provide the Elasticsearch cluster URL, optionally providing the basic auth login and password.
You can then proceed to the query editor and run the search query. The query uses JSON as passed to the Elasticsearch search API
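As an illustrative sketch only, a JSON search body in this style might look like the following; the index name and field are placeholders, and the exact wrapper format accepted by the data source may differ:

```json
{
  "index": "app-logs",
  "query": {
    "match": { "level": "error" }
  },
  "size": 10
}
```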
In the query editor view, select the Elasticsearch data source created above. In the left column, click the refresh icon to refresh the schemas (indexes). The schemas are expandable and show the schema details.
Apica Ascent lets you connect to your PostgreSQL easily and provides a rich Query editor to Query your PostgreSQL using its natural syntax.
All your query results are cached, so you don't have to wait for the same result set every time. Apica Ascent also helps you visualize the data gathered from your queries.
The first step is to create a PostgreSQL data source and provide all details such as the Host, Port, User, Password, and Database name of your PostgreSQL
The next step is to navigate to the Query editor page and start querying your data from your PostgreSQL schemas.
Apica Ascent also supports external Prometheus-compatible data sources, e.g. Prometheus, Thanos, and VictoriaMetrics. If you host such an instance in the cloud or on-premises, you can connect it to Apica Ascent as a data source and use your existing queries in Apica Ascent to build dashboards and create alerts.
Please see the Infra & Application Monitoring section to know about configuring various data sources.
A time-series database (TSDB) is a software system that is optimized for storing and serving time series through associated pairs of time(s) and value(s).
The Ascent Logs Data source allows querying logs from all the namespaces and applications ingested by Ascent.
How to Use the Ascent Checks Data Source to Query Checks
Navigate to the queries page in your dashboard to begin
In the Queries page, click on the "New Query" button to start a new query
On the left sidebar, click on Ascent Checks. This will display a list of all available checks that you can query.
From the list, expand the check you want to query by clicking on it. This will show more details about the check.
Click on the right arrow next to the check id to append it to the query editor
To use a specific time range, enter the start and end times as Unix epoch values.
To query relative durations, use the duration option with a human-readable format (e.g., 1d for one day, 2h for two hours, etc.)
Example:
Start: 1609459200 (Unix epoch for the start time)
End: 1609545600 (Unix epoch for the end time)
Duration: 1d (relative to the current time)
Once your query is complete, click on Execute to run the query and see the results.
This page describes the port numbers that are supported in the Apica Ascent Platform. Note that not all port numbers are enabled by default, but they can be enabled based on your use case.
514 / 7514 (TLS) - RFC 5424 documentation / Read RFC 3164 Document
515 / 7515 (TLS) - Syslog CEF
516 / 7516 (TLS) - Syslog Fortinet / RFC 6587
517 - Raw TCP / catch-all for non-compliant syslog / Debug
2514 / 20514 (TLS) - https://en.wikipedia.org/wiki/Reliable_Event_Logging_Protocol
80/443 (TLS)
25224/ 25225 (TLS) - Logstash Protocol
24224/24225 (TLS) - Fluent Protocol Documentation
Apica Ascent comes with a number of integration options for ingest and incident management.
Ingest lets you connect with and securely ingest data from popular log forwarding agents, cloud services, operating systems, container applications, and on-premise infrastructure. You can secure data ingestion from your endpoints into Apica Ascent by generating a secure ingest token.
Apica Ascent currently integrates with over 150 data sources via support for popular open-source agents and open protocols. See below for links on how to enable specific integrations.
You can also ingest logs from endpoint devices running:
Apica Ascent's support for log-based HIDS enables data ingest directly from log-based HIDS agents. The supported agents are as follows:
You can choose from a variety of incident management integrations to bring reliability to your production operations.
ServiceNow
Multiple data collection packages are combined, with installation and management automated by the script in the logiqcoll.tgz tarball. The data collection agent consists of:
Prometheus metrics data collector
Fluent-bit log data collector
Prometheus node exporter that produces Linux system OS metrics data
OSSEC agent
The Ascent Checks Data source allows querying all check results that run within a specific interval, providing comprehensive access to all the associated data. This data source is available to tenants
Apica Ascent uses an ingest token to secure the ingestion of log data from your data sources into your Apica Ascent deployment. You can generate a secure ingest token using the Apica Ascent UI and the command-line tool, apicactl.
You can obtain a secure ingest token from the Account tab in the Settings page on the Apica Ascent UI.
To begin, click on the username on the navbar, click "Settings", and click on the "Account" tab if you are not brought there by default. Your secure ingest token will be displayed under the Ingest Token field. Click the Copy icon next to the token to copy it to your clipboard.
To generate a secure ingest token, do the following.
Run the following command to generate a secure ingest token:
Copy the ingest key generated by running the command above and save it in a secure location.
You can now use this ingest token while configuring Apica Ascent to ingest data from your data sources, especially while using log forwarders like Fluent Bit or Logstash.
How to use Ascent Logs data source to query logs from namespaces and applications
Navigate to the queries page in your dashboard to begin
In the Queries page, click on the "New Query" button to start a new query
On the left sidebar, click on Ascent Logs. This will display a list of the namespaces and applications that you can query.
Write the query in YAML format and execute it, for example:
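The exact query schema is not shown in this section; as an illustrative sketch only (these field names are hypothetical placeholders, not a confirmed schema):

```yaml
namespace: production   # hypothetical namespace name
application: payments   # hypothetical application name
duration: 1d            # relative time range
```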
Navigate to VPC, select your region in the VPCs dropdown, and in the VPC list find the column called IPv4 CIDR. Copy your CIDR and use it as a source.

You can use the LOGIQ-IO Connector provided by Apica Ascent via GitHub to push Apache Beam metrics to Push-Gateway.
In order to set up push-gateway, just run the provided docker image.
You'll now have an instance of push-gateway running on your machine, you can verify by running the below command.
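Assuming Docker is available and the standard prom/pushgateway image is used, starting and verifying the instance might look like:

```bash
# Start Push-Gateway on its default port 9091
docker run -d --name pushgateway -p 9091:9091 prom/pushgateway

# Verify it is up by fetching its own metrics endpoint
curl -s http://localhost:9091/metrics | head
```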
Once the instance is up and running, we can then specify it in our prometheus.yaml config file.
View the sample prometheus.yaml file below.
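A minimal prometheus.yaml scrape configuration for the Push-Gateway above might look like this; honor_labels keeps the job and instance labels pushed by clients instead of overwriting them:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: pushgateway
    honor_labels: true
    static_configs:
      - targets: ["localhost:9091"]
```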
Great, now you will have Prometheus scraping the metrics from the given PushGateway endpoint.
Now that you have configured the push-gateway and Prometheus, it's time that we start configuring the Apache Beam Pipeline to export the metrics to the Push-Gateway instance.
For this, we will refer to the tests written in the LOGIQ-IO Connector here.
The metrics() method is responsible for sending the metrics to the given push-gateway endpoint. Once the pipeline has been modeled, we can view the result: access the metrics of the PipelineResult at PipelineResult.metrics, then pass them to the Push-Gateway class with the correct endpoint and call the write() method with the metrics.
Hooray, you have successfully pushed your Apache Beam Metrics to Push-Gateway. These metrics will shortly be scraped by Prometheus and you would be able to access them.
You can check your results on Push-gateway Instance and Prometheus Instance.
If you want to apply any transformations other than the default transformers, you can specify the functions with withCounterTransformer, withDistributionTransformer, and withGaugeTransformer, provided by the PushGateway class. This allows you to perform complex operations and achieve granularity within your metrics.
Amazon Web Services, Inc. is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. These cloud computing web services provide distributed computing processing capacity and software tools via AWS server farms.
AWS provides a wide array of services that generate observability data via different software tools. Apica Ascent integrates all these tools into a single interface for easy consumption
See the sub modules to this page for integrations for AWS enabled by Apica Ascent.
Observability data collector agent
Agent code and its installation tarball are released in the GitHub repository below
Please refer to the agent readme.txt below for installation instructions.
Apache Beam is an open-source, unified model for defining both batch and streaming data-parallel processing pipelines. Using one of the open-source Beam SDKs, you build a program that defines the pipeline. The pipeline is then executed by one of Beam’s supported distributed processing back-ends, which include Apache Flink, Apache Spark, and Google Cloud Dataflow.
Beam is particularly useful for embarrassingly parallel data processing tasks, in which the problem can be decomposed into many smaller bundles of data that can be processed independently and in parallel. You can also use Beam for Extract, Transform, and Load (ETL) tasks and pure data integration. These tasks are useful for moving data between different storage media and data sources, transforming data into a more desirable format, or loading data onto a new system.
Apica Ascent provides integrations that let you connect Apica Ascent with Apache Beam. Check out the submodules to learn more.
Pull Check results from Apica's ASM
The Apica Source Extension is a component designed to integrate with the Apica Synthetics and Load test platform. Its main purpose is to retrieve check results from the Apica platform and make them available for further processing or analysis within another system or tool.
These check results can also be forwarded to downstream destinations for further processing.
Navigate to the Integrations page and click on the New Plugin button and select Apica option.
Provide the Plugin Name of choice and click Next.
Enter your Apica ASM platform credentials.
Configure your resource requirements and click Next.
Finally, enter the URL of the Apica ASM instance, the timezone, the version of the Apica Source Extension plugin, and the number of workers to be used for the Apica data pull.
After entering these details, click on the Done button.
After creation of the Apica ASM source extension, you will see the check data which will have all the check details.
By utilizing the LOGIQ-IO Connector, you can send the Apache Beam metrics you create directly to Prometheus.
There are two mechanisms to achieve this:
In the context of the push mechanism done via remote-write, Prometheus can be used to collect and store the data that is being pushed from the source system to the destination system. Prometheus has a remote-write receiver that can be configured to receive data using the remote-write protocol.
Once the data is received by the remote-write receiver, Prometheus can store the data in its database and perform real-time analysis and aggregation on the data using its powerful query language. This allows system administrators and operators to monitor the performance of various components of the system in real-time and detect any issues or anomalies.
In this way, Prometheus can replicate its data to a third-party system for backup, analysis, and long-term storage.
In a distributed system, the pull mechanism is a common way of collecting data from various sources by querying them periodically. However, there may be cases where it's not feasible to collect data using the pull mechanism, such as when the data is only available intermittently or when it's costly to query the data source repeatedly. In such cases, the PushGateway method can be used to enable a pull mechanism via a push approach.
Prometheus offers a PushGateway component that allows applications to push metrics into it via an HTTP API. Applications can use this API to push metrics to the PushGateway instead of exposing an endpoint for Prometheus to scrape. Prometheus can then pull the data from the PushGateway, acting as if it were a normal Prometheus target.
To use the push gateway method in a pull mechanism, applications periodically push their metrics data to the Push-gateway via the HTTP API. Prometheus, in turn, periodically queries the Push-Gateway to collect the data. The Push-Gateway stores the metrics data until Prometheus scrapes it, which can be configured to occur at regular intervals.
This approach can be useful when collecting metrics from systems that are not always available or when it's not feasible to pull the data frequently. Additionally, it allows applications to expose metrics data without exposing an endpoint for Prometheus to scrape, which can be more secure.
Overall, the PushGateway method can be a powerful tool for enabling pull-based metric collection in a distributed system via Prometheus.
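To make the push side concrete, here is a minimal sketch that pushes a metric to a PushGateway over its HTTP API using only the Python standard library. The gateway address, job name, and metric name are placeholders; in practice, a client library such as `prometheus_client` provides a ready-made `push_to_gateway` helper.

```python
# Minimal sketch: push metrics to a Prometheus PushGateway over HTTP.
# Gateway address, job name, and metric names are placeholders.
import urllib.request

def build_payload(metrics):
    # Prometheus text exposition format: one "name value" line per metric,
    # terminated by a trailing newline.
    lines = [f"{name} {value}" for name, value in metrics.items()]
    return ("\n".join(lines) + "\n").encode()

def push(metrics, gateway="http://localhost:9091", job="beam_pipeline"):
    # POST to /metrics/job/<job>; the PushGateway stores the metrics
    # until Prometheus scrapes them.
    req = urllib.request.Request(
        f"{gateway}/metrics/job/{job}",
        data=build_payload(metrics),
        method="POST",
    )
    urllib.request.urlopen(req)
```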
The LOGIQ-IO Connector currently supports pushing metrics to Prometheus using this method. For more info, refer to this post.
Here, we have simulated a pipeline flow with sample log lines as our input, which are then pushed to Apica Ascent.
We cannot simply push the raw log lines to Apica Ascent; we first need to transform them into LogiqEvent(s). The Transformer() class handles this transformation.
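The Transformer() code itself lives in the starter repository. As an illustration only, the transformation amounts to wrapping each raw line in an event carrying the partitioning fields used later in this guide; the exact LogiqEvent schema here is an assumption:

```python
# Illustrative sketch only: wrap a raw log line in a LogiqEvent-like dict.
# The field names mirror those used later in this guide; the real
# LogiqEvent schema may differ.
def to_logiq_event(line, namespace="ns", host="test-env",
                   app_name="test-app", cluster_id="test-cluster"):
    return {
        "namespace": namespace,
        "host": host,
        "appName": app_name,
        "cluster_id": cluster_id,
        "message": line,
    }
```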
Once we have transformed the log lines into LogiqEvent(s), we can use the LOGIQ-IO Connector to export them to our Apica Ascent instance.
Specify the ingest endpoint and your ingest token. You can find the ingest token in your Apica Ascent Settings.
Hooray! You should now see logs flowing into Apica Ascent from your pipeline with namespace="ns", host="test-env", appName="test-app", and clusterId="test-cluster".
You can forward CloudWatch logs to Apica Ascent using either of two methods:
Apica Ascent CloudWatch exporter Lambda function
Run Logstash on VM (or docker)
You can export AWS CloudWatch logs to Apica Ascent using an AWS Lambda function, which acts as a trigger for a CloudWatch log stream.
This guide explains the process for setting up an AWS Lambda function and configuring an AWS CloudWatch trigger to forward CloudWatch logs to Apica Ascent.
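For context on what such a Lambda function receives: CloudWatch Logs delivers events to the function as a base64-encoded, gzip-compressed JSON payload under `awslogs.data`. A hypothetical sketch of the decoding step (the actual forwarding to Apica Ascent is omitted):

```python
# Sketch: decode the payload CloudWatch Logs hands to a Lambda function.
# CloudWatch wraps log events as base64-encoded, gzip-compressed JSON
# under event["awslogs"]["data"]; forwarding to Apica Ascent is omitted.
import base64
import gzip
import json

def decode_cloudwatch_event(event):
    compressed = base64.b64decode(event["awslogs"]["data"])
    payload = json.loads(gzip.decompress(compressed))
    # payload["logEvents"] is a list of {"timestamp": ..., "message": ...}
    return payload.get("logEvents", [])
```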
Apica Ascent provides CloudFormation templates to create the Apica Ascent CloudWatch exporter Lambda function.
Depending on the type of logs you'd like to export, use the appropriate CloudFormation template from the following list.
Use the following CloudFormation template to export AWS Lambda function logs to Apica Ascent.
Use the following CloudFormation template to export CloudTrail logs to Apica Ascent.
Use the following CloudFormation template to export Flow Logs to Apica Ascent.
Use the following CloudFormation template to export CloudWatch logs to Apica Ascent.
This CloudFormation stack creates a Lambda function and its necessary permissions. You must configure the following attributes.
Once the CloudFormation stack is created, navigate to the AWS Lambda function (logiq-cloudwatch-exporter) and add a trigger.
On the Add trigger page, select CloudWatch Logs, and then select a CloudWatch Logs log group.
Once this configuration is complete, any new logs coming to the configured CloudWatch Log group will be streamed to the Apica Ascent cluster.
CloudWatch logs can also be pulled using agents such as Logstash. If your team is familiar with Logstash and already has it in place, follow the instructions below to configure it to pull logs from CloudWatch.
Install Logstash on an Ubuntu virtual machine as shown below.
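The installation follows Elastic's standard APT procedure (version 8.x shown here; adjust the repository line to the release you need):

```shell
# Add Elastic's signing key and APT repository, then install Logstash.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt-get update && sudo apt-get install -y logstash
```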
Logstash ships with no default configuration. Create a new file /etc/logstash/conf.d/logstash.conf with the following contents, modifying values as needed:
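A sketch of such a configuration, assuming the logstash-input-cloudwatch_logs plugin and Logstash's HTTP output pointed at the Ascent ingest endpoint; the log group, region, endpoint URL, and token below are placeholders, so check your Ascent documentation for the exact output settings:

```
# /etc/logstash/conf.d/logstash.conf -- illustrative sketch; values are placeholders.
input {
  cloudwatch_logs {
    log_group => ["my-log-group"]   # CloudWatch log group to pull from
    region    => "us-east-1"
  }
}
output {
  http {
    url         => "https://ascent.example.com/v1/json_batch"  # assumed ingest endpoint
    http_method => "post"
    format      => "json_batch"
    headers     => {
      "Authorization" => "Bearer <INGESTTOKEN>"
      "namespace"     => "ns"
      "cluster_id"    => "test-cluster"
    }
  }
}
```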
Using this configuration, we can easily stream our data to Apica Ascent for further processing. Let's walk through the provided sample starter repository.
You can obtain an ingest token from the Apica Ascent UI. You can customize the namespace and cluster_id in the Logstash configuration based on your needs.
Your AWS CloudWatch logs will now be forwarded to your Apica Ascent instance, where you can view them in the UI.
| Parameter | Description |
| --- | --- |
| APPNAME | Application name: a readable name that Apica Ascent uses to partition logs. |
| CLUSTERID | Cluster ID: a readable name that Apica Ascent uses to partition logs. |
| NAMESPACE | Namespace: a readable name that Apica Ascent uses to partition logs. |
| LOGIQHOST | IP address or hostname of the Apica Ascent server (without http:// or https://). |
| INGESTTOKEN | JWT token used to securely ingest logs. Refer here to generate an ingest token. |