Apica Docs


Release Notes

Release Notes for recent software and platform updates across all Ascent components:

  • Fleet

  • Flow

  • Lake

  • Observe

For specific information or questions relating to a given release, please contact Apica Support at [email protected].

Ascent Overview

Welcome to Apica Ascent

Apica Ascent is a powerful full-stack Telemetry Data Management and Observability platform designed to streamline and optimize your entire data life-cycle: Collect, Control, Store, and Observe.

Ascent 2.14.2

The Ascent 2.14.2 release includes the following updates:

Bugfixes

  • Fixed ASM+ deadlocks that occurred after too many connections


Ascent 2.13.0

Note

Internal backend and infrastructure optimizations have been completed to improve performance and reliability. No user-facing changes in this release.


Ascent Synthetics

Welcome to Apica Docs!

All the details you need to fully configure, enable, and optimize the Ascent platform.

Getting Started

Data Management and Pipeline Control

Backend Improvements
  • Improved backend data routing with per-tenant message isolation for greater reliability and scalability in CRS.

  • Refactored Kafka client to enhance performance and reliability.

  • Enhanced partitioning logic to improve data isolation and reliability in single-tenant deployments.

  • Improved integration authentication for Kafka brokers.

Other Technical Enhancements

  • All relevant backend service flows have transitioned to the new single-tenant architecture, enabling future scalability and reliability improvements.

Data Management powered by InstaStore™

Apica Ascent uses a patented storage engine, InstaStore, for all of its data persistence. InstaStore provides unique benefits because its architecture uses object storage as the underlying storage layer. You can read more about InstaStore here.

Observability Data Lifecycle

The Apica Ascent platform consolidates observability data into a single platform, focusing on (M)etrics, (E)vents, (L)ogs, and (T)races, commonly known as MELT data. This integrated approach to MELT data is crucial for efficient root cause analysis. For example, if you encounter an API performance issue represented by latency metrics, being able to drill down to the API trace and accompanying logs becomes critical for faster root cause identification. Unlike traditional observability implementations, where data sits in separate silos that don't communicate, Apica Ascent ensures a cohesive view of all MELT data, leading to faster root cause outcomes.

This makes the Ascent platform a reliable first-mile solution for consolidating MELT data within your enterprise environments. Experience a seamless, fully integrated observability solution that enhances performance and efficiency across your infrastructure.

Capabilities

Apica Ascent employs a unified view of your enterprise, utilizing a full-stack approach to observability data life cycle management. By seamlessly integrating various capabilities, Apica Ascent facilitates a smoother and more effective root cause analysis process.

Communities and Compliance

Apica Ascent takes pride in its commitment to security and compliance. The platform adheres to SOC 2 Type II Compliance standards and is an esteemed member of the Cloud Native Computing Foundation (CNCF).

Component Versions - Ascent v2.14.2

  • Flash - v3.19.2

  • Coffee - v3.20.1

  • ASM - 13.38.1

  • NG Private Agent - 1.0.9

  • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

  • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

Observability

Ascent 2.14.3

Observe

Bug Fixes

  • Improved the “Forgot Password” experience so it no longer assumes the email address is always the same as the username, reducing confusion and login friction for users.

Ascent Synthetics

Bug Fixes

  • Corrected behavior where a user deleted in Ascent could still remain configured in ASM, ensuring that user deletions are handled consistently across the system.

Flow

Bug Fixes

  • Fixed an issue in the pipeline graph view where creating a new pipeline could replace existing attached pipelines, so existing connections are now preserved when adding new pipelines.


Component Versions - Ascent v2.14.3

Components
Version

Ascent 2.14.1

The Ascent 2.14.1 release includes the following updates:

Enhancements

  • Update the ChecksFillThroughASMAPI function to support all subsequent sync attempts from running

  • Enable redirection to checks page based on license type

  • Create alerts API fails on newly spun up environment

  • Update icons of New Alert Destinations and New Data Sources

  • Enable search filter under pending invitations to handle advanced results according to the search

  • Enable ability to assign config files to fleet agents

  • Other minor bugs and defects


Component Versions - Ascent v2.14.1

Components
Version

Ascent 2.10.8

Fixes and Improvements

This release focuses on improving reliability and consistency in Pipeline and Check Analytics behavior. The following issues have been resolved:


Pipeline & Data Flow Fixes

  • Missing Pipeline Visibility: Resolved an issue where a second pipeline intermittently disappeared from the Pipeline view, even though it was applied to the namespace. Both pipelines now appear correctly.

  • Pipeline Preview Fails: Fixed a problem where the pipeline preview in Logs & Insights failed to display results, even when valid data existed.

  • Deleted Rules Still Visible: Addressed a bug where deleted pipeline rules continued to appear in the UI, marked in red. These are now correctly removed from the view once deleted.


Synthetics / Check Analytics Fixes

  • Incorrect Fatal Check Messages: Corrected misleading error messages shown in Ascent for failed synthetic checks. The system now shows accurate messages, in line with ASM.

  • Monitor Group User Assignment Bug: Fixed an issue where assigning or unassigning users to/from a monitor group appeared to have “no changes to apply,” even when actions were taken. User assignments now save and reflect correctly.


Component Versions - Ascent v2.10.8

  • Check Execution Container: Runbin - runbin-2025.04.17-0-base-2.2.1

  • Check Execution Container: Postman - postman-2025.04.17-0-base-1.4.1

  • Bnet (Chrome Version) - 10.2.2 (Chrome 130)

  • Zebratester - 7.5A

  • ALT - 6.13.3.240

  • IronDB - 1.5.0

  • Check Execution Container: Runbin - runbin-2025.04.17-0-base-2.2.1

  • Check Execution Container: Postman - postman-2025.04.17-0-base-1.4.1

  • Bnet (Chrome Version) - 10.2.2 (Chrome 130)

  • Zebratester - 7.5A

  • ALT - 6.13.3.240

  • IronDB - 1.5.0

  • Flash - v3.19.3

  • Coffee - v3.20.2

  • ASM - 13.38.2

  • NG Private Agent - 1.0.9

  • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

  • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

  • Check Execution Container: Runbin - runbin-2025.04.17-0-base-2.2.1

  • Check Execution Container: Postman - postman-2025.04.17-0-base-1.4.1

  • Bnet (Chrome Version) - 10.2.2 (Chrome 130)

  • Zebratester - 7.5A

  • ALT - 6.13.3.240

  • IronDB - 1.5.0

  • Flash - v3.19.1

  • Coffee - v3.20.1

  • ASM - 13.38.1

  • NG Private Agent - 1.0.9

  • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

  • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

  • Check Execution Container: Runbin - runbin-2025.04.17-0-base-2.2.1

  • Check Execution Container: Postman - postman-2025.04.17-0-base-1.4.1

  • Bnet (Chrome Version) - 10.2.1 (Chrome 130)

  • Zebratester - 7.0B

  • ALT - 6.13.3.240

  • IronDB - 1.5.0

  • Flash - v3.15.10

  • Coffee - v3.16.14

  • ASM - 13.36.1

  • NG Private Agent - 1.0.9

  • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

  • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

Data Management

Telemetry Pipeline

Data Forwarding

Splunk Forwarding

Fleet Management

Time-Series AI/ML

Monitoring Overview

Log Management

Distributed Tracing

Getting Started Guide
Adding Data Sources
Integrations
Dashboards
Data Explorer
Boomi RTO Quick Start

Time Series Databases

A time-series database (TSDB) is a software system that is optimized for storing and serving time series through associated pairs of time(s) and value(s).

NoSQL Data Sources

SQL Data Sources

OLAP

List of Integrations

Ascent User Interface

The Apica Ascent UI is your window to your IT data, logs, metrics, events and traces - ingested from all of your data sources and converged onto a single layer. The Apica Ascent UI enables you to perform a wide range of operations - from simple uptime monitoring and error troubleshooting to capacity planning, real-time forensics, performance studies, and many more.

You can access the Apica Ascent UI by logging into your Apica Ascent instance URL using your account credentials.

Onboarding page

The navigation bar at the right side of the UI allows you to access your:

  • Dashboards

  • Queries

  • Alerts

  • Explore - Logs, Topology, etc.

  • Integrations - Forwarders, Source Extensions, Alert Destinations, and pre-created dashboards

  • Settings

The following sections in this article describe the various elements of the Apica Ascent UI and their purposes.

Dashboards

A dashboard is a collection of visualizations and queries that you've created against your log data. You can create dashboards to house visualizations and queries for a single data source or for multiple data sources. Everything contained within a dashboard is updated in real time.

The Dashboards page on the Apica Ascent UI lists all the dashboards you've created within Apica Ascent. Dashboards that you've favorited are marked with a yellow star icon and are also listed under the Dashboards dropdown menu for quick access in the navigation bar. The following images depict dashboards that you can create using Apica Ascent.

Queries

Apica Ascent enables you to write custom queries to analyze log data, display metrics and events, view and customize events and alerts, and create custom dashboards. The Queries page lists all of the queries you've created on Apica Ascent. You can mark some of them as favorites or archive the ones not in use. Your favorite queries also appear in the drop-down menu of the Queries tab for quick access.

Alerts

Apica Ascent enables you to set alerts against events, data, or metrics of interest derived from your log data. The Alerts page on the UI lists all of the alerts you've configured on Apica Ascent. You can sort and display the list of alerts by their name, message, state, and the time they were last updated or created. Depending on your user permissions within Apica Ascent, you can click an alert to view more information or reconfigure the alert based on your need.

The following image depicts a typical Alerts page on the Apica Ascent UI.

Explore

The Explore page lists all of the log streams generated across your IT environment that are being ingested into Apica Ascent. The Explore page lists and categorizes logs based on Namespace, Application, ProcID, and when they were last updated. By default, logs are listed by the time they were ingested with the most recent applications appearing on the top. You can filter log streams by namespaces, applications, and ProcIDs. You can also filter them by custom time ranges.

You can also click into a specific application or ProcID to view logs in more detail and to search through or identify patterns within your log data.

The following image depicts a typical Explore page on the Apica Ascent UI.

Journals

The Journals page lists all the important events that have occurred in the Apica Ascent platform. Audit trail entries are listed by their Name, Message, and the time they were created. The Journals page tracks important service notifications such as service restarts, license expiry, and so on.

Create

The Create dropdown menu enables you to create new alerts, dashboards, queries, reports, and checks as shown in the following image.

A function-specific modal appears based on what you select from this dropdown menu.

MongoDB

Apica Ascent lets you connect to your MongoDB instance for seamless querying of your data.

Adding MongoDB to Apica Ascent

The first step is to create a MongoDB data source and provide details such as the Name of the data source, the Connection String, and the Database Name of your MongoDB instance. Optionally, you can add the Replica Set Name.

Selecting MongoDB data source
Adding MongoDB

Querying MongoDB

The next step is to navigate to the Query editor page and start querying your data from MongoDB.
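
If it helps to see what those connection details correspond to outside the UI, here is a minimal PyMongo sketch using the same fields the data source form asks for (Connection String, Database Name, optional Replica Set Name). The host, credentials, database, and collection names are placeholders, and the query syntax inside Ascent's query editor may differ.

```python
# Illustrative only: connecting with the same details the MongoDB data source
# form asks for. Host, credentials, database, and collection are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:password@mongo.example.com:27017",  # Connection String
    replicaSet="rs0",                                    # optional Replica Set Name
)
db = client["app_logs"]                                  # Database Name

# Example query: the five most recent error documents from a sample collection.
for doc in db["events"].find({"level": "error"}).sort("ts", -1).limit(5):
    print(doc)
```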

AWS

Apica Ascent supports numerous AWS services directly as data sources.

You can find documentation for the following AWS data sources below:

  • Amazon Athena

  • Amazon Cloudwatch

  • Amazon Elasticsearch Service

RSS

Apica Ascent helps you connect your RSS feeds for faster querying and visualization of your data.

Adding RSS to Ascent

Configuring RSS data source

Ascent 2.10.6

We're excited to share the latest improvements and bug fixes in Ascent 2.10.6. This release focuses on enhancing stability and user experience across all our products.

Ascent Synthetics

What's Fixed

Ascent 2.12.1

1. Observe

  • New ilert Integration: Apica Ascent now natively integrates with ilert for alerting and incident management. This enables users to forward alerts from Ascent directly into ilert for on-call scheduling, escalation, and incident response.

Ascent 2.14.0

Ascent Synthetics

New Features & Improvements

Ascent 2.10.5

Release 2.10.5 - What's New

We're excited to share the latest improvements in version 2.10.5. This release focuses on making your monitoring and observability experience smoother with bug fixes, enhanced integrations, and better user interface improvements.

Flow

Ascent 2.10.4

We are pleased to announce the release of Ascent v2.10.4, which brings important performance optimizations, stability improvements, and functional enhancements across the platform.


Flow

Ascent 2.11.1

Observe

New Features & Improvements

  • Notification destinations can now only be viewed by authorized users, providing improved access control and clarity.

Why Implement OpenTelemetry?

OPENTELEMETRY VS PROMETHEUS: A COMPARISON

Solving Observability Fragmentation

Before OpenTelemetry, organizations struggled with multiple proprietary monitoring agents, each producing telemetry data in incompatible formats, leading to data silos, increased complexity, and a lack of correlation between logs, metrics, and traces. These challenges made it difficult to gain a comprehensive view of system health and troubleshoot issues efficiently.

OpenTelemetry unifies telemetry collection across all environments, ensuring seamless interoperability between tools and services, allowing teams to consolidate their observability strategy while reducing operational overhead and improving incident response times.

Amazon Athena

Setting up your Amazon Athena

The first thing you’ll need to do is create an IAM user that will have permission to run queries with Amazon Athena and access the S3 buckets that contain your data.

To configure your Amazon Athena with the necessary permission, please navigate to

Register and Gain Access

If you are not yet a registered user, please follow the steps below:

Signing up for Apica Ascent SaaS

To sign up for Apica Ascent SaaS, follow these steps:

  1. Go to the .

Druid

Apache Druid is a real-time database to power modern analytics applications. Druid is designed to quickly ingest massive quantities of event data, and provide low-latency queries on top of the data.

Apica Ascent can connect to Druid to help you analyze your data.

Adding Druid Data Source

The first step is to add the Druid data source to your Apica Ascent instance. Fill out the fields below while configuring the data source:

Amazon Elasticsearch Service

Apica Ascent supports Amazon Elasticsearch Service as a data source, which makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open-source, distributed search and analytics suite derived from Elasticsearch.

Let's see how Amazon Elasticsearch Service works.

Creating and Adding Amazon Elastic Service Data Source

The first step is to add the Amazon Elasticsearch Service data source to your Apica Ascent instance. Fill out the fields below while configuring the data source:

Ascent Synthetics

The Ascent Checks data source allows querying all check results that ran within a specific interval, providing comprehensive access to all the associated data. This data source is available to tenants with ASM+ access.

Accessing the Ascent Checks Data Source

The Ascent Checks Data Source is available to all tenants with ASM+ access. You should be able to query on all checks out of the box.

AWS

Overview

Amazon Web Services, Inc. is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. These cloud computing web services provide distributed computing processing capacity and software tools via AWS server farms.

Data Source Overview

Apica Ascent supports SQL, NoSQL, Time Series, and API data sources, along with Apica Ascent's built-in data source, to help you query data from different sources and make sense of your data. The currently supported data sources on Apica Ascent are shown below.

Reducing Vendor Lock-in

Traditional observability solutions force users into proprietary ecosystems, making migrations and integrations difficult. This often leads to increased costs, limited customization, and difficulty in adapting to evolving business needs.

OpenTelemetry decouples telemetry collection from backend storage and analysis, allowing organizations to switch observability platforms without re-instrumenting applications, ensuring greater flexibility, scalability, and long-term sustainability of their monitoring strategy.

Improving Performance and Cost Efficiency

With OpenTelemetry, organizations can eliminate redundant monitoring agents, reducing system overhead and lowering observability costs. By optimizing data collection and leveraging intelligent sampling and filtering, businesses can gain deep visibility without incurring excessive data ingestion costs. Additionally, OpenTelemetry provides the flexibility to fine-tune data collection strategies, ensuring that only the most valuable telemetry data is retained and processed.

This targeted approach not only enhances performance but also mitigates the risk of overwhelming storage and analytics systems with excessive data. By standardizing observability across an organization, OpenTelemetry helps engineering teams make data-driven decisions more effectively while controlling infrastructure spending.

Enhancing Developer and SRE Productivity

OpenTelemetry’s automatic instrumentation and standardized APIs make it easier for developers and SRE teams to implement observability across applications. By reducing the need for manual instrumentation, teams can accelerate deployment cycles and ensure that telemetry data is collected consistently and reliably.

This results in faster debugging, reduced MTTR (Mean Time to Resolution), and increased deployment confidence, while also enabling proactive issue detection and automated root cause analysis. With OpenTelemetry, organizations can shift from reactive troubleshooting to predictive monitoring, allowing engineering teams to optimize performance before issues escalate into major incidents.

Regulatory and Compliance Benefits

Many industries require strict compliance with security and auditing standards, including regulations such as GDPR, HIPAA, and SOC 2. OpenTelemetry provides structured telemetry data that simplifies compliance reporting by offering detailed, real-time insights into system activity, ensuring better traceability and transparency. By capturing rich metadata within traces, metrics, and logs, OpenTelemetry enhances auditability, enabling security teams to quickly detect and respond to anomalies.

Furthermore, OpenTelemetry’s vendor-neutral approach allows organizations to centralize security monitoring across multiple platforms, ensuring consistency in compliance efforts while reducing reliance on proprietary solutions.

Integrations

AWS provides a wide array of services that generate observability data via different software tools. Apica Ascent integrates all these tools into a single interface for easy consumption.

See the sub-pages of this page for the AWS integrations enabled by Apica Ascent.

Amazon Redshift
Amazon RDS - MySQL
The Dashboards list page shows all the dashboards. Create new dashboards or import pre-created dashboards.
A typical monitoring dashboard on Apica Ascent
Another example of an Apica Ascent dashboard
Explore page of Apica Ascent
Create menu
Querying MongoDB data

Common Use Cases for OpenTelemetry

OPENTELEMETRY TRACING AND METRICS IN ACTION

Application Performance Monitoring (APM)

End-to-end distributed tracing for microservices: OpenTelemetry enables deep visibility into microservices interactions by capturing traces across service boundaries. This allows developers and operations teams to understand request flow, identify problematic dependencies, and detect failures in a distributed system. By leveraging OpenTelemetry’s context propagation, teams can follow a request from its origin to its termination, providing a clear picture of dependencies and bottlenecks. This improves troubleshooting efficiency, reduces the mean time to resolution (MTTR), and helps organizations build more resilient, scalable architectures. Additionally, OpenTelemetry supports integrations with distributed tracing backends such as Jaeger, Zipkin, and commercial solutions, ensuring flexibility in visualization and analysis.
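
As a concrete illustration of the tracing described above, the following minimal Python sketch uses the OpenTelemetry SDK to create a parent span and a nested child span. The service and span names are arbitrary, and the console exporter stands in for whatever tracing backend you actually use.

```python
# Minimal OpenTelemetry tracing sketch: a parent span with a nested child span.
# The console exporter is for illustration; in practice you would configure an
# OTLP exporter that points at your tracing backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("handle_order") as parent:
    parent.set_attribute("order.id", "12345")
    with tracer.start_as_current_span("charge_payment") as child:
        child.set_attribute("payment.provider", "example")
```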

Identifying latency bottlenecks in cloud-native environments: By collecting granular performance data, OpenTelemetry helps teams pinpoint where delays are occurring in an application. Whether it’s a slow database query, an overloaded service, or network latency, OpenTelemetry provides the data needed to optimize system responsiveness and improve user experience. With built-in support for metrics and histograms, OpenTelemetry allows teams to measure request duration, throughput, and error rates, enabling proactive performance tuning. Furthermore, OpenTelemetry facilitates real-time alerting on latency spikes, allowing DevOps teams to quickly diagnose and mitigate issues before they impact users. This level of insight is particularly beneficial for cloud-native applications where dynamic scaling and complex service interactions demand constant monitoring and optimization.

Infrastructure and Cloud Monitoring

Collecting host and container-level metrics: OpenTelemetry provides extensive support for collecting system-level and container-level metrics, including CPU, memory, disk usage, and network statistics. This enables teams to track resource consumption across distributed environments, identify performance anomalies, and optimize infrastructure utilization. By leveraging OpenTelemetry’s support for metric aggregation and real-time monitoring, organizations can ensure their applications remain resilient under varying workloads.

Monitoring Kubernetes clusters at scale: Kubernetes environments introduce unique challenges due to their dynamic and ephemeral nature. OpenTelemetry integrates seamlessly with Kubernetes to provide real-time visibility into cluster health, pod performance, and service-to-service communications. It enables DevOps teams to monitor workload scheduling efficiency, detect failing pods, and correlate application performance with underlying infrastructure issues. By centralizing observability across multiple clusters, OpenTelemetry empowers organizations to maintain high availability and reduce operational overhead in cloud-native environments.

Log Correlation with Traces and Metrics

Unified observability for root cause analysis: OpenTelemetry provides a comprehensive approach to observability by linking logs, metrics, and traces together, enabling teams to perform in-depth root cause analysis. By correlating log events with specific traces and spans, teams can identify exactly where failures occur within a distributed system, reducing the time spent diagnosing incidents and improving mean time to resolution (MTTR). This unified observability approach ensures that developers and operators have a complete understanding of system behavior, making debugging and performance optimization more efficient.

Enriching logs with trace and span context: OpenTelemetry enhances logging by automatically injecting trace and span identifiers into log messages, allowing for precise contextualization of events. This enrichment enables teams to follow an event from initiation through completion, offering clear insights into request flow and dependencies. Additionally, integrating log correlation with tracing helps detect patterns, anomalies, and dependencies that might not be immediately visible when logs are analyzed in isolation. This capability is especially beneficial in microservices architectures, where tracking down issues across multiple services can be complex without proper log-trace correlation.
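
A small Python sketch of this enrichment, assuming the opentelemetry-instrumentation-logging package is installed: it injects the active trace and span IDs into standard library log records so each log line can be matched to its span. The instrumentation and logger names are illustrative.

```python
# Sketch: injecting trace/span identifiers into standard Python log records
# so log lines can be correlated with the active span.
import logging

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.instrumentation.logging import LoggingInstrumentor

trace.set_tracer_provider(TracerProvider())
# Adds otelTraceID / otelSpanID fields to the log record and format.
LoggingInstrumentor().instrument(set_logging_format=True)

tracer = trace.get_tracer("orders")  # hypothetical instrumentation name
log = logging.getLogger("orders")

with tracer.start_as_current_span("process_order"):
    log.warning("payment retried")  # this record now carries trace and span IDs
```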

Security and Compliance Observability

Capturing audit trails with OTEL logs and traces: OpenTelemetry enables organizations to create detailed audit trails by collecting logs and traces that capture user activity, API calls, and system interactions. These audit trails help organizations meet compliance requirements by providing clear, verifiable records of all system activities. By maintaining an immutable record of telemetry data, OpenTelemetry enhances accountability and security, ensuring that organizations can detect and investigate security incidents efficiently.

Detecting anomalies and unauthorized access patterns: OpenTelemetry’s advanced telemetry data collection allows security teams to analyze trends, detect anomalies, and identify unauthorized access attempts in real-time. By correlating logs, traces, and metrics, OpenTelemetry provides a holistic view of system behavior, helping teams recognize suspicious patterns, mitigate security threats, and prevent potential data breaches. This proactive security monitoring is essential for maintaining regulatory compliance and protecting sensitive data in distributed and cloud-native environments.

Business Analytics and SLO Monitoring

Defining Service Level Objectives (SLOs): Service Level Objectives (SLOs) are key performance indicators (KPIs) that define the desired reliability and performance targets for services. OpenTelemetry enables organizations to collect and analyze telemetry data that aligns with predefined SLOs, ensuring services meet business expectations. By leveraging OpenTelemetry metrics, organizations can measure service uptime, response times, and error rates, allowing teams to proactively address performance degradations before they impact end users. This approach fosters a culture of reliability engineering and helps teams adhere to Service Level Agreements (SLAs).
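
To make the error-budget idea concrete, here is a small, generic calculation; the 99.9% target and request volume are made-up numbers, not Ascent defaults.

```python
# Generic SLO error-budget arithmetic with made-up numbers.
slo_target = 0.999                 # 99.9% availability objective
requests_per_month = 1_000_000

error_budget_ratio = 1 - slo_target                              # 0.1% of requests may fail
error_budget_requests = requests_per_month * error_budget_ratio  # 1,000 requests

failed_so_far = 400
remaining = error_budget_requests - failed_so_far                # 600 requests left this month

print(f"Error budget: {error_budget_requests:.0f} requests, remaining: {remaining:.0f}")
```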

Analyzing user behavior and optimizing transactions: OpenTelemetry provides deep insights into user interactions and application workflows by capturing traces and metrics across distributed systems. By analyzing user journeys, organizations can identify friction points, optimize performance, and enhance user experience. OpenTelemetry allows businesses to track critical transactions, detect drop-offs, and correlate them with system behavior, ensuring continuous improvement. Additionally, businesses can leverage telemetry data to fine-tune application logic, allocate resources efficiently, and personalize user interactions based on real-time performance trends.

Getting Started with Logs

This Getting Started section provides a quick overview of collecting and ingesting logs into Apica using different methods (a minimal Python sketch follows the list):

  • Collecting logs using Python

  • Collecting logs using OpenTelemetry

  • Collecting logs using Rsyslog
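
As a minimal illustration of the Python route, the sketch below emits logs through the standard library's SysLogHandler to a local rsyslog daemon, which is assumed (per the Rsyslog guide) to be configured to forward to your Apica Ascent endpoint. The address, app name, and message are placeholders.

```python
# Minimal sketch: emitting logs via the standard library's SysLogHandler.
# A local rsyslog (or other syslog daemon) is assumed to forward these on to
# your Apica Ascent endpoint; the address and tag below are placeholders.
import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address=("localhost", 514))  # local rsyslog forwarder
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

log = logging.getLogger("myapp")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("user login succeeded")
```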

  • Screenshot Issues Resolved: Screenshots now work properly for browser checks and ASM integration

  • Check Management Improvements:

    • Fixed issues with uploading ZebraTester scripts in Scenario Management

    • Resolved problems creating compound checks

    • Fixed check runs graph and table views showing empty data

  • Private Location Support: Private locations are now working correctly again

  • User Interface Enhancements:

    • Check deletion now properly closes tabs

    • Fixed cloning issues where check details showed incorrect values

    • Removed unwanted scroll bars in Screenshots & Filmstrips section

    • Improved SSL check type image display

  • Group Management:

    • Fixed loading issues when monitor groups contain more than 10 checks

    • Renamed "Check Group" and "Monitor Group" to simply "Group" for consistency

  • Data Display: Resolved issue where no data was showing for enabled and running checks

  • Flow

    New Features

    • Pipeline Management: Enhanced pipeline creation and management experience

    • Rule Configuration: Improved rule creation with better help text and field validation

    • SIEM Integration: Added Alert and Dashboard tabs for SIEM rules

    What's Fixed

    • Pipeline Operations:

      • Fixed metric flow stopping when more than 2 machines send data

      • Resolved pipeline filter functionality

      • Fixed issue where deleted pipelines still appeared

      • Corrected Active Pipelines counter when filtering

    • Data Processing:

      • Fixed CSV file upload errors in Lookups

      • Resolved namespace and application availability issues

      • Fixed facet fields display after selecting dataflow options

    • Rule Management:

      • Fixed TAG rule creation issues with metrics, dashboards, and alerts

      • Resolved field name display problems with dots in the name

      • Fixed pipeline preview to use raw logs correctly

    • User Interface:

      • Improved layout and color schemes for pipelines

      • Fixed dropdown bugs in pipeline configuration

      • Better handling of pipeline rules display

    Fleet Management

    What's Fixed

    • Agent Management:

      • Stopped fleet agents from restarting repeatedly with new configurations

      • Fixed syncing issues between repository and fleet-control

    • Configuration:

      • Improved Datadog agent field handling

      • Added platform-based filtering for agent types

      • Fixed tech preview text display

    • Repository Updates: Updated fleet-management-defaults to match fleet-tests

    Observe

    What's Fixed

    • Check Analytics: Improved performance and reliability of check analytics pages.

    • Data Visualization: Fixed pipeline table data and hover issues.

    • Integration: Better integration with Ascent Synthetics features.

    General Improvements

    • User Management: Fixed internal server error when disabling pending users.

    • Documentation: Enhanced Swagger documentation for Flash Bundles API.

    • Performance: Various backend optimizations for better system stability.


    Component Versions - Ascent v2.10.6

    • Flash - v3.15.9

    • Coffee - v3.16.12

    • ASM - 13.35.1

    • NG Private Agent - 1.0.9

    • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

    • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

    Configuration is available under Integrations > Alert Destinations > ilert in Ascent.

    Bug Fixes

    • Dashboard Widgets Lock Issue Resolved: Newly created dashboard widgets are now editable after saving, addressing the problem where widgets were becoming locked.

    • Vault > Certificates – Button Functionality Restored: The “Add Certificate” button in the Certificates section of the Vault now loads correctly, allowing users to add new certificate entries without issues.

    • Pipelines Dashboard Widget Legend Field Display Issue Fixed: Addressed an issue where certain widget fields in the Pipelines Dashboard were displaying “NaN” values. Fields now render the correct data consistently.

    • IAM Settings Sorting Restored: Fixed a bug where the sort functionality in IAM (Settings) was not working as expected. Sorting now applies correctly to all relevant columns.


    2. Ascent / Synthetics

    Bug Fixes

    • Check Analytics > Map View Fixed

      Fixed an issue due to which checks were not rendered in the global map view and the counts did not match the number of checks rendered on the map.


    3. Flow

    Bug Fixes

    • Pipeline Rule Interaction: Fixed an intermittent issue where some rules within pipelines were greyed out and unclickable after opening the Configure Pipeline screen.

    • Rule Detachment Behavior Fixed: Resolved a problem where disabling a rule in one pipeline inadvertently detached it from another connected pipeline.

    • Forwarder Removal from Pipelines: Fixed an issue preventing users from removing forwarders from pipelines; forwarders can now be added or removed as intended.

    • Filter Rule Label Warning Removed: Fixed a problem where manually adding labels to a filter rule prompted an unnecessary warning when saving. Rules now save without false alerts.


    Component Versions - Ascent v2.12.1

    • Flash - v3.17.1

    • Coffee - v3.18.2

    • ASM - 13.36.3

    • NG Private Agent - 1.0.9

    • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

    • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

  • SAML roles and group mappings now override current user roles and groups, ensuring updated access controls.

  • UI for the Analyze Metrics page was implemented using new Figma designs for improved user experience.

  • SLA has been added to ASM+ check synchronization, enhancing monitoring capabilities.

  • General Changes

    • Check Analytics tabs have been renamed and reordered for easier navigation.

    Bug Fixes

    • Support for alert target groups was added to alert creation in ASM, broadening alert configuration options.

    • Edit and create functionalities for ASM Alerts are now available, making alert management more flexible.


    Observe

    New Features & Improvements

    • Support for anomaly alerts has been added, providing more robust monitoring capabilities.

    • Alert models have been revamped for AscentCore and UI improvements, simplifying alert management.

    General Changes

    • Alert detail pages and dashboards have been implemented, giving users a clearer view into each alert and system status.

    • The Alert Management interface has been refreshed for a more seamless user experience.

    Bug Fixes

    • Webhook field/payload mapper issues are resolved, so integrations now function as intended.

    • The dropdown for alert destination is available when creating alerts, improving workflow efficiency.


    Fleet, Flow, Lake

    Improvements

    • Ascent Alert Management enhancements are rolled out across Fleet, Flow, and Lake, providing unified improvements for alert setup and tracking.


    Data Explorer

    Improvements

    • The Analyze Metrics - Data Explorer feature is now available, allowing for deeper insights into performance and metrics across the platform.


    Component Versions - Ascent v2.14.0

    • Flash - v3.19.0

    • Coffee - v3.20.0

    • ASM - 13.38.2

    • NG Private Agent - 1.0.9

    • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

    • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

    New Features
    • Certificate Management: You can now upload and delete certificates directly in the Vault, giving you better control over your security credentials

    • Enhanced Rule Creation: When creating rules, you'll now see helpful RE2 pattern guidance to make rule setup easier and more accurate

    • Improved Documentation: Updated API documentation with better descriptions and examples for Namespace and Applications APIs

    Fixes

    • Fixed dropdown menus in the pipeline configuration tab that weren't properly connecting to your data flow logs

    • Resolved an issue with the Unflatten function in Pipeline Code blocks

    • Fixed namespace settings in Kubernetes agent configurations

    Observe

    Improvements

    • Better Filtering: Fixed multiple filter issues in Ascent, including problems with check types, browser checks, and search functionality in Check Groups

    • Enhanced Security: Updated certificate handling to automatically pick up new certificates during deployments

    Bug Fixes

    • Resolved search field issues in Check Groups View

    • Fixed filtering problems for browser-based monitoring checks

    Fleet (Agent Management)

    Fixes

    • Resolved installation issues with OpenTelemetry Collector agents on Windows systems

    • Fixed Kubernetes agent namespace configuration problems

    Ascent Synthetics

    New Integrations

    • GitLab Integration: Added support for GitLab as a remote source for repository profiles, making it easier to manage your synthetic checks alongside your code

    Bug Fixes

    • Fixed frequency settings that were incorrectly changing to "Manual" when editing checks

    • Resolved connection cleanup issues with Nomad Proxy

    • Improved error handling in Postman checks

    • Corrected multiple API endpoint issues that were returning error codes

    • Improved error handling and filtering in check result endpoints

    ASM Legacy

    Fixes

    • Resolved API endpoint issues that were causing 400 and 500 error responses

    • Fixed filtering problems in check result and mostrecent API endpoints

    • Improved error classification for better troubleshooting.

    • Fixed a Postman check discarding issue that was caused by Out of Memory (OOM) errors.

    Other Improvements

    General

    • Fixed admin settings redirect issues in Tech Preview

    • Various backend stability improvements and performance optimizations


    Component Versions - Ascent v2.10.5

    • Flash - v3.15.8

    • Coffee - v3.16.11

    • ASM - 13.35.0

    • NG Private Agent - 1.0.9

    • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

    • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

    Improvements

    Ingestion Stability and Metric Accuracy

    • Dedicated Ingest Routing: All ingest endpoints are now routed exclusively to ingest nodes to reduce latency, eliminate conflicts, and improve path separation.

    • Shard Locking for Inserter: Replaced global locks with shard-level locks in the ingestion layer to prevent deadlocks and improve concurrency safety.

    • Batch Size Metric Fix: Corrected the calculation of JSON batch sizes in metric outputs — metrics now represent per-batch payload size instead of total stream size.

    • Rate Limiting Revamp: Introduced a configurable leaky-bucket algorithm for ingest rate limiting with burst handling and fine-grained limiter options (see the sketch after this list).

    • Fixed an issue where log entries associated with default_namespace were not visible in log explorer views.
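
The leaky-bucket limiter mentioned above is sketched generically below; the capacity and drain rate are arbitrary, and this is an illustration of the algorithm, not Apica's implementation.

```python
# Generic leaky-bucket rate limiter sketch (illustration, not Apica's code).
# Requests fill the bucket, which drains at a steady rate; requests that would
# overflow the configured capacity (the burst allowance) are rejected.
import time

class LeakyBucket:
    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec      # steady drain rate
        self.capacity = capacity      # maximum burst size
        self.level = 0.0
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + cost <= self.capacity:
            self.level += cost
            return True
        return False

limiter = LeakyBucket(rate_per_sec=100, capacity=200)
accepted = sum(limiter.allow() for _ in range(500))
print(f"accepted {accepted} of 500 burst requests")
```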

    Vault & Configuration Variables

    Improvements

    • Fixed a critical issue where Vault-stored variables failed to persist due to stale distributed cache states. The cache parameters have been tuned for distributed consistency.

    Regex Validation (RE2)

    • Regex validation improvements:

      • Server errors now shown in UI with contextual error messages.

      • Removed the 3-character minimum input constraint.

      • Added help link for RE2 syntax reference.

    Bug Fixes

    • Fixed issue where help documentation for rule-based code blocks failed to load in the UI.

    • Removed redundant duplicated Pipelines & Rules tabs in the pipeline page.


    Ascent Synthetics

    Scenario Management

    Improvements

    • Resolved visual duplication issue in Scenario Management when editing existing scenarios.

    Checks & Monitoring

    Bug Fixes

    • Resolved incorrect check behavior where frequency-based scheduled checks appeared as "manual" runs in the UI.


    Observe

    Dashboards & Visualization

    Bug Fixes

    • Fixed issue where some dashboards failed to import in preview mode due to serialization mismatches.

    • Addressed failure of dashboard import from shared URL links.


    Platform-Wide Improvements

    License banner

    • Updated ADF license banner with correct Apica email address.


    Component Versions - Ascent v2.10.4

    • Flash - v3.15.5

    • Coffee - v3.16.7

    • ASM - 13.34.0

    • NG Private Agent - 1.0.8

    • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

    • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

  • When a resource is deleted, its associated entries are properly removed from permissions management, aiding data consistency.

  • User sessions are now more reliably managed, and inactivity correctly results in logouts across all application pages for enhanced security.

  • General Changes

    • Policy management navigation now works smoothly without breadcrumb issues.

    • Further optimizations enhance dashboard load performance.

    Bug Fixes

    • Resolved an issue where newly added widgets appeared locked after saving a dashboard. Widgets are now functional and accessible upon dashboard creation.

    • Users can now successfully upload and save scenarios in Scenario Management.

    Flow

    Bug Fixes

    • Searching in the alerts tab within the pipelines dashboard responds correctly, enabling more efficient issue tracking.

    • Log extraction rules now function as intended, ensuring extracted values are visible and rules work consistently - not just during preview.

    Ascent Synthetics

    Bug Fixes

    • The Groups View and Manage Groups tab in Check Analytics now consistently show all monitor groups.

    • Duplicate check names no longer appear in Manage Groups, ensuring a clear, accurate listing of checks.

    • Check details opened from the operations view now display the correct location information, rather than “unknown.”


    Component Versions - Ascent v2.11.1

    • Flash - v3.16.3

    • Coffee - v3.17.7

    • ASM - 13.36.3

    • NG Private Agent - 1.0.9

    • Check Execution Container: Browser - fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

    • Check Execution Container: Zebratester - zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

    Creating Athena Data Source

    After your Amazon Athena is configured, the next step is to create and add the Amazon Athena data source to your Apica Ascent.

    Selecting Athena from Data Source

    The next step is to fill out the details using the information from the previous step:

    • AWS Access Key and AWS Secret Key are the ones from the previous step.

    • AWS Region is the region where you use Amazon Athena.

    • S3 Staging Path is the bucket Amazon Athena uses for staging/query results; you might have created it already if you used Amazon Athena from the AWS console - simply copy the same path.

    Adding a new Athena Data Source

    That's it. Now navigate to the Query editor to query your data.

    https://docs.aws.amazon.com/athena/latest/ug/setting-up.html
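
For reference, the same Access Key, Secret Key, Region, and S3 Staging Path can be sanity-checked outside Ascent with a short boto3 sketch; the region, database, query, and bucket below are placeholders.

```python
# Hedged sketch: exercising the Athena credentials and staging path with boto3.
# Region, database, query, and bucket names are placeholders.
import time
import boto3

athena = boto3.client(
    "athena",
    region_name="us-east-1",                  # AWS Region
    aws_access_key_id="YOUR_ACCESS_KEY",      # AWS Access Key
    aws_secret_access_key="YOUR_SECRET_KEY",  # AWS Secret Key
)

run = athena.start_query_execution(
    QueryString="SELECT 1",
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-staging/"},  # S3 Staging Path
)

query_id = run["QueryExecutionId"]
while athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
    time.sleep(1)

print(athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"])
```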

    Provide your Name, your business Email Address, and Company and Country details.

  • Click Submit and you will receive a confirmation email to validate your contact information.

  • This completes the sign-up process. We'll send your Apica Ascent account credentials to your registered email shortly after you sign up.

    Logging into Apica Ascent SaaS

    To access your Apica Ascent SaaS instance, do the following:

    1. Using your favorite web browser, navigate to your Apica Ascent SaaS instance URL. Your instance URL is listed in the onboarding email we send you post sign up and will resemble https://<unique name>.apica.io/.

    2. Enter the login credentials shared in the onboarding email.

    3. Click Login.

    You'll now have access to and can interact with the Apica Ascent UI.

    Apica Ascent SaaS sign-up page

  • Name: Name of the data source

  • Scheme (optional): HTTP/HTTPS scheme of your Druid instance

  • Host: Host endpoint of your Druid instance

  • Port: Port address of your Druid Instance

  • Selecting Druid data source
    Configuring Druid

    That's all. Now navigate to the Query editor page and start querying.
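
To sanity-check the Scheme, Host, and Port values outside Ascent, you can send a SQL query to Druid's standard /druid/v2/sql endpoint; the host, port, and datasource name below are placeholders.

```python
# Hedged sketch: querying Druid's SQL API with the same scheme/host/port values
# used in the data source form. Host, port, and datasource are placeholders.
import requests

DRUID = "http://druid.example.com:8888"  # Scheme + Host + Port (router or broker)

resp = requests.post(
    f"{DRUID}/druid/v2/sql",
    json={"query": "SELECT __time, COUNT(*) AS events FROM my_datasource GROUP BY 1 LIMIT 5"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```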

  • Name: Name of the data source

  • Endpoint: The endpoint of the Amazon Elasticsearch Service instance

  • Region: The region of the Amazon Elasticsearch Service instance

  • Access Key (optional): Access Key of the IAM user

  • Secret Key (optional): Secret of the IAM user

  • Selecting Amazon Elasticsearch Service Data Source
    Configuring the Amazon Elasticsearch Service Data Source

    That's all. The next step is to navigate to the Query editor page and start querying the data.

    Working method of Amazon Elasticsearch Service
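
If you want to confirm the Endpoint before adding the data source, a direct _search request against the domain illustrates the kind of query Ascent will run; the endpoint and index name below are placeholders, and domains protected by IAM require SigV4-signed requests rather than this anonymous call.

```python
# Hedged sketch: a direct search against an Amazon Elasticsearch / OpenSearch
# endpoint. Endpoint and index are placeholders; IAM-protected domains need
# SigV4-signed requests instead of this anonymous call.
import requests

ENDPOINT = "https://search-mydomain.us-east-1.es.amazonaws.com"  # Endpoint
INDEX = "application-logs"                                       # placeholder index

resp = requests.post(
    f"{ENDPOINT}/{INDEX}/_search",
    json={"size": 5, "query": {"match_all": {}}},
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])
```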

    Ascent 2.10.7

    This release includes a number of fixes and improvements across the platform. Here's a breakdown of what’s been addressed, organized by product area.


    Ascent Synthetics

    • Improved Check Cloning: Cloned checks now behave more predictably:

      • The aggregator view no longer shows the original check name.

      • Manual run messages are now accurate.

      • Deleting cloned checks works as expected.

      • Checks created via Postman no longer fail silently.

    • Private Location Fixes

      • The correct private location name now displays.

      • Access group information is now visible in the Private Locations settings.

    • Download Issues Resolved

      • Downloaded browser scenarios now retain their original names and extensions.


    Observe

    • Dashboard and Data Explorer Stability

      • Dashboards created from logs or alerts now load properly.

      • Tabs and widgets in Data Explorer no longer disappear after dashboard creation.

    • UI Improvements


    Flow

    • Pipeline Usability

      • You can now rearrange pipeline sequences and see the updated order.

      • Creating pipelines with duplicate names is now blocked.

      • Pipeline preview works consistently on every click.


    Fleet

    • Agent and Configuration Management

      • Sorting by name in Fleet configurations now works across all pages.

      • You can now delete configuration files reliably.

      • The agent list filter dropdown updates dynamically based on selections.


    Component Versions - Ascent v2.10.7

    Components
    Version

    Ascent 2.4.0

    Discover the latest advancements and improvements of the Apica Ascent platform. This is your go-to destination for updates on platform enhancements and new features. Explore what's new to optimize your observability and data management strategies.

    Synthetic Monitoring (ASM 13.26.0) - SaaS

    Features

    • NG Private Locations/Agents: New check-type agnostic Private Agents can be grouped into Private Locations with full self-serve functionality in the ASM UI portal. *ASM API support for full self-serve ability will be added during Q3.

      Features include the creation and management of Private Locations and Agents, along with Private Container Repositories for Private Agent use. Private Agent install packages (.rpm and .deb) will be available with support for RHEL v8+ and Debian v10+. Private Locations can be set up to use either the Docker or Podman driver for check execution.

    • New Browser checks will automatically accept dialogs/modals that can pop up during a test such as alert/confirmation/prompts.

    • New Browser checks will attach to and include control of new tabs created by the target site; that is, the Chrome WebDriver will automatically attach to new tabs that are opened during check execution of a Browser check.

    • Added SAN/SNI options to SSL Cert Expiration and Fingerprint Validation for URL checks.

    • Compound check is available on NG locations.

    • Extended the ability to append the custom message specified in the _Apica_Message collection variable to Postman check result messages in case the Postman script fails.

    Bug Fixes

    • Screenshots for Browser checks were not working in new tabs or windows created by the check. This is fixed as part of the above feature that includes control of tabs and windows created by the target site.

    • Debugging a scenario for Browser checks from the Edit Check page now uses the same location as the check does.

    • Fixed the issue where ASM UI was throwing a 500 error from Ajax while adding target value for newly created Selenium scenarios.

    • Fixed the sporadic non-availability of agents in the Stockholm location when debugging a Selenium scenario.

    Synthetic Monitoring (ASM 13H.5) - OnPrem

    Features

    • ZebraTester check results now display the response body for failed URL calls, if available, to enable identification of what error messages or content might be returned.

    • Added support for PUT API request to add or update URL v1 checks through ASM API.

    Apica Data Fabric (ADF v3.9)

    Features

    • Dark Mode: A new dark mode option is now available, providing a dark-themed interface for users who prefer it.

    • Code Rule Preview: Users can preview and compare the data after the code rule is applied.

    • : Introduced a new command-line tool in Apica Github for API management.

    • Bookmark date range: Users can now bookmark specific date ranges for quick access and reference.

    Bug Fixes

    • Inconsistent time range when moving from ALIVE to Facet Search page: Fixed the issue where the time range was inconsistent when moving from the ALIVE to the Facet Search page.

    • Orphan tab from ALIVE: Resolved the issue of orphan tabs appearing from ALIVE.

    • Alert page issue showing undefined value: Corrected the problem where the Alert view page was showing undefined values.

    Advanced Scripting Engine

    Major Release V7.5-B (Installation Kit dated April 17, 2024)

    ZebraTester 7.5-B release contains the following new features.

    • Support for Color Blindness: To improve support for vision impairments and color blindness adaptation, we have added new themes to the GUI configuration section.

    • Ability to change request method from the ZT GUI: This gives users the ability to change the request method from the ZT GUI. Depending on the request method, the Request body field will be enabled and visible or not.

    • Support user agent details from a file: Provides an option in the ZT personal settings GUI area where the user can upload a JSON file containing all the latest User-Agent details.

    • Updated Browser Agent List: The list of current and latest browser agents has been updated.

    • Option to Disable Page Breaks: Option to comment out/disable a page break in the recorded session.

    • Variables as Page Break Names: Users can use variables when setting page-break names to make scripts more dynamic.

    • Add OR condition for content type validation: Users can test a logical OR condition against content type validation.

    • ZebraTester Controller Pull file (.wri): Users will be able to pull files from the ExecAgent that have been written by the "writetofile" feature. For this, the files need to be pulled to the controller like any other out/err/result file.

    • WebSocket Extension (MS1): Adds WebSocket implementation capabilities to ZebraTester, allowing users to conduct more comprehensive testing of WebSocket-based applications. A detailed guide on how to use the WebSocket extension has been added to the documentation folder.

    In addition, the ZebraTester V7.5-B release contains the following bug fixes and improvements:

    • Bug Fix for XML extractor giving 500 internal error in ZT scripts.

    • .Har file conversion issue.

    • Conflict when using variables as Mime Type validation.

    • Zebra Tester -Auto assign Fix

    Read previous Release notes,

    Ascent: Built on Kubernetes

    Apica Ascent: Built on Kubernetes for Ultimate Scale & Performance

    Observability isn’t just about collecting logs, metrics, and traces—it’s about ensuring real-time insights, high performance, and cost-efficiency at any scale. Traditional monitoring solutions often struggle with large-scale data ingestion, leading to performance bottlenecks, slow query times, and high storage costs.

    Apica’s Ascent Platform, built on Kubernetes, solves these challenges by providing infinite scalability, AI-driven optimization, and seamless multi-cloud support. With a unified data store, OpenTelemetry-native architecture, and intelligent workload management, Apica delivers unparalleled observability performance while reducing operational complexity and costs.

    More on Ascent Kubernetes Integration

    InstaStore™: Kubernetes-Powered Unified Telemetry Data Lake

    One of the biggest challenges in observability is data storage and retention. Traditional monitoring solutions rely on tiered storage models, leading to high costs, data fragmentation, and slow query times.

    Apica’s InstaStore™ data lake, built on Kubernetes, eliminates these limitations by providing:

    • Infinite scalability – Stores billions of logs, traces, and metrics without performance degradation.

    • ZeroStorageTax architecture – No more storage tiering, reducing storage costs by up to 60%.

    • Real-time data indexing – Instant query access to historical and real-time telemetry data.

    • Multi-cloud compatibility – Supports AWS S3, Azure Blob, Ceph, MinIO, and other object storage providers.

    How InstaStore™ Enhances Observability Performance

    • Single source of truth – Eliminates data silos by storing logs, metrics, traces, and events in a unified repository.

    • On-demand query acceleration – Uses high-speed indexing for sub-second query response times.

    • Long-term retention & compliance – SOC 2, GDPR, HIPAA-compliant storage for enterprise observability data.

    Result: Enterprises can store, query, and analyze observability data instantly, at a fraction of the cost of traditional solutions.

    Ascent Flow (Telemetry Pipeline): Data Flow Optimization at Scale

    Observability pipelines must ingest, process, and export massive volumes of telemetry data while maintaining low latency and high efficiency. Without proper optimization, unstructured data overloads monitoring systems, leading to delays, noise, and unnecessary costs.

    Apica’s Telemetry Pipeline, built on Kubernetes, solves this by:

    • Filtering, enriching, and transforming telemetry data in real time.

    • Automatically routing observability data to the most cost-effective storage backend.

    • Optimizing ingestion rates to reduce infrastructure costs and enhance performance.

    • Providing a drag-and-drop interface for managing data pipelines effortlessly.

    Fleet Management: Automated Deployment & Scaling of Observability Agents

    The Challenge: Managing Observability Agents at Scale

    In modern enterprise environments, observability data is collected from thousands of microservices, virtual machines, containers, and cloud functions. Manually deploying, configuring, and maintaining OpenTelemetry agents, Fluent-bit log collectors, and Prometheus exporters is resource-intensive and error-prone.

    • Config drift leads to inconsistent telemetry data across environments.

    • Manual agent updates result in security vulnerabilities and broken data pipelines.

    • Lack of centralized management makes troubleshooting difficult.

    Apica’s Fleet Management: Kubernetes-Powered Automation

    Apica solves these challenges with Fleet Management, an automated system for managing OpenTelemetry collectors and other observability agents at enterprise scale.

    • Automated Agent Deployment – Uses Kubernetes DaemonSets and StatefulSets to deploy and manage observability agents across clusters.

    • Zero-Drift Configuration Management – Ensures all observability agents stay in sync with the latest configurations.

    • Real-Time Health Monitoring – Continuously tracks agent status, performance, and data collection efficiency.

    • Multi-Cloud & Hybrid Support – Deploys agents across AWS, Azure, GCP, on-prem environments, and edge locations.

    Result: Enterprises eliminate manual observability agent management, ensuring consistent, reliable telemetry collection at scale.

    Ascent 2.10.3

    We are excited to introduce the v2.10.3 release of Flow, focused on expanding metrics capabilities, enhancing pipelines, and improving system performance.

    "Ascent v2.10.3 is now Generally Available — with OTEL Metrics support in Apica Telemetry Pipeline, and new JSON functions!"


    Flow

    OpenTelemetry Metrics Support in Apica Telemetry Pipeline is now GA.

    • OpenTelemetry Metrics in Telemetry Pipelines: Apica Flow now fully supports receiving and forwarding OpenTelemetry (OTLP)-compatible metrics within Apica telemetry pipelines.

    • OTLP Metrics Endpoint: You can now ingest metrics through the /v1/metrics OTLP-compatible endpoint.

    • Flexible Storage Options: Configure whether metrics are sent to internal Ascent Prometheus storage, forwarded to another OTLP-compatible metric store, or archived to an external object store.


    Pipelines and Rules Enhancements

    New Functions

    • flatten(input: object): Flattens nested JSON structures into simple key-value pairs.

    • unflatten(input: object): Reconstructs nested JSON objects from flattened structures, enabling full roundtrip transformations.
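    For illustration, a round trip might look like this (the dotted key naming is illustrative; the exact key separator used by the functions may differ):

    # nested input event
    {"user": {"name": "jane", "address": {"city": "Oslo"}}}

    # flatten(input) produces simple key-value pairs
    {"user.name": "jane", "user.address.city": "Oslo"}

    # unflatten(...) reconstructs the original nested object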


    This release further strengthens Apica Flow’s telemetry capabilities, giving you more flexibility, deeper observability, and better control over your pipelines and metrics.

    Component Versions - Ascent v2.10.3

    Ascent 2.6.0

    🗓️ 12th September 2024


    Features

    • React-grid-layout Integration: React-grid-layout has been integrated into Data Explorer for widget flexibility and condensed dashboards.

    • Legend Component: A separate component for displaying legends in Data Explorer widgets was implemented, which shows statistics for the data that is being rendered in the widget.

    • Port Management via UI: Added support for enabling and disabling k8s cluster ports via the UI.

    • Ping Checks: Implemented Ping Checks in Check Management.

    • Port Checks: Implemented Port Checks in Check Management.

    • Logs as a Data Source: Apica logs can now be integrated as a data source for Data Explorer and users can create/run queries on top of logs. This also introduces a new way to set alerts on the platform using logs.

    • File Compare Graph Y-axis Scale: The Y-axis of the File Compare graph now supports two modes: PS count and percentage.

    • PS Compare Anomaly Marker: Added anomaly markers for better visualization in PS Compare.

    • Dashboard Data Migration: Dashboard schemas are now formatted into Data Explorer format and moved from LogiqHub to ApicaHub Github Repositories.

    • Legacy Dashboard Converter: A converter was implemented to convert legacy Dashboard JSON to Data Explorer JSON format.

    • Data Explorer: Editing Controls and Breakpoints: Added editing controls and breakpoints in Data Explorer.

    • Scatter Chart Support: Data Explorer now supports scatter chart visualizations.

    • Dark Theme: Improved dark themes for multiple screens, including Logs & Insights, Dashboards, Topology, and Pipelines.

    • Dashboard Import in Data Explorer Format: Frontend changes were implemented to import dashboards in Data Explorer format.

    • Check Analytics Reports Integration: Enhanced check analytics by integrating it with reporting.

    • FPR Checks Consolidated Metrics: Added the ability to enrich check data at time of ingestion using a new domain-specific language (DSL).

    Improvements

    • Check Status Widget: Added custom configuration options for the check status widget.

    • Performance Improvements: Extended efforts to improve the performance of Data Explorer for smoother usage.

    • Gauge Chart Design: Modified the Gauge chart design, providing more user-configurable options and units for charts.

    • New Visualizations in Data Explorer: New widget types were added, including Check Status, Stat, Size, Date/Time, and Scatter chart visualizations.

    Bugs

    • Invalid Log Timestamp: Fixed an issue where log timestamps were invalid.

    • Tracing Volume Query Issue: Addressed an issue affecting tracing volume queries.

    • File Compare Graph Display: Resolved issues with the display of the file compare graph summary.

    • Data Explorer Page Crashing: Fixed errors causing the Data Explorer page to crash due to undefined values.

    Amazon Redshift

    Apica Ascent helps you connect to your Redshift cluster to easily query your data and build dashboards to visualize it.

    The first step is to create a Redshift cluster. To get started with Amazon Redshift, see https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html

    Adding Redshift to Apica Ascent

    The second step is to add Redshift as a data source in Apica Ascent: fill out the fields below and save.

    • Name: Name the data source (e.g. Redshift)

    • Host: The full URL to your instance

    • Port: The port of the instance endpoint (e.g. 5439)

    • User: A user of the instance

    • Password: The password for the above user

    • Database name: The name of the virtual database for Redshift (e.g. Redshift1)

    That's it. Now navigate to the Query editor page and start querying your data

    Logs

    How to use Ascent Logs data source to query logs from namespaces and applications

    Follow the steps below to create and execute a query using the Ascent Logs Data Source.


    1. Go to the Queries Page

    • Navigate to the queries page in your dashboard to begin

    • In the Queries page, click on the "New Query" button to start a new query

    2. Select "Ascent Logs" from the Left Sidebar

    • On the left sidebar, click on Ascent Logs to select it as the data source for your query.

    3. Write the query

    • Write the query in YAML format and execute it, for example:
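    A query of this form specifies a namespace, an application, an optional keyword, and a time duration:

    namespace: "Alerts"
    application: "alerts-app"
    keyword: ''
    duration: '1h'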

    Elasticsearch

    The Elasticsearch data source provides a quick and flexible way to issue queries to one or more indexes in an Elasticsearch cluster.

    Create the Elasticsearch data source

    The first step is to create the data source and provide the Elasticsearch cluster URL and optionally provide the basic auth login and password.

    Configuring the Elasticsearch data source

    Writing queries

    In the query editor view, select the Elasticsearch data source created above. In the left column, click the refresh icon to refresh the schemas (indexes). The schemas are expandable and show the schema details.

    You can then proceed to the query editor and run the search query. The query body uses JSON, as passed to the Elasticsearch search API.
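    For example, a simple match query might look like the following (the index name and field are illustrative, and some deployments accept an index field alongside the query body):

    {
      "index": "app-logs-*",
      "query": {
        "match": { "message": "error" }
      },
      "size": 10
    }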

    MySQL Server (Amazon RDS)

    Apica Ascent helps you connect to an Amazon RDS for MySQL data source, making it easy to query MySQL using its natural syntax and to analyze, monitor, and visualize your data.

    All your query results are cached, so you don't have to wait for the same result set every time. Apica Ascent also helps you visualize the data gathered from your queries.

    Adding MySQL Server for Amazon RDS

    The first step is to create a MySQL data source and provide all details such as the Host, Port, User, Password, and Database name of your MySQL

    • Name: Name the data source

    • Host: This is your MySQL server address

    • Port: The port of the MySQL Server

    • User: A user of the MySQL Server

    • Password: The password for the above user

    • Database name: The name of the database of the MySQL Server

    That's it. Now navigate to the Query editor page to query and create Visualizations of your data

    Data Ingest Ports

    This page describes the port numbers supported in the Apica Ascent Platform. Note that not all port numbers are enabled by default; ports can be enabled based on your use case.

    Syslog

    1. 514 / 7514 (TLS) - Syslog (RFC 5424 / RFC 3164)

    2. 515 / 7515 (TLS) - Syslog

    3. 516 / 7516 (TLS) - Syslog (Fortinet)

    4. 517 - Raw TCP / catch-all for non-compliant syslog / Debug

    RELP

    1. 2514 / 20514 (TLS) - RELP

    Http/Https

    1. 80 / 443 (TLS)

    Logstash

    1. 25224 / 25225 (TLS) - Logstash protocol

    Fluent Protocol

    1. 24224 / 24225 (TLS) - Fluent forward protocol

    Export Metrics to Prometheus

    By utilizing the Ascent I/O Connector, you can send the metrics created by Apache Beam pipelines directly to Prometheus.

    There are two mechanisms to achieve this:

    Push Mechanism done via remote-write method

    In the context of the push mechanism done via remote-write, Prometheus can be used to collect and store the data that is being pushed from the source system to the destination system. Prometheus has a remote-write receiver that can be configured to receive data using the remote-write protocol.

    Once the data is received by the remote-write receiver, Prometheus can store the data in its database and perform real-time analysis and aggregation on the data using its powerful query language. This allows system administrators and operators to monitor the performance of various components of the system in real-time and detect any issues or anomalies.

    In this way, Prometheus can replicate its data to third-party systems for backup, analysis, and long-term storage.
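    For example, a minimal sketch assuming Prometheus v2.33 or later, where the remote-write receiver is built in:

    # start Prometheus with the remote-write receiver enabled
    prometheus --config.file=prometheus.yml --web.enable-remote-write-receiver

    # senders can then push samples to the receiver endpoint:
    #   http://<prometheus-host>:9090/api/v1/write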

    Pull Mechanism done via Push-Gateway method

    In a distributed system, the pull mechanism is a common way of collecting data from various sources by querying them periodically. However, there may be cases where it's not feasible to collect data using the pull mechanism, such as when the data is only available intermittently or when it's costly to query the data source repeatedly. In such cases, the PushGateway method can be used to enable a pull mechanism via a push approach.

    Prometheus offers a PushGateway component that allows applications to push metrics into it via an HTTP API. Applications can use this API to push metrics to the PushGateway instead of exposing an endpoint for Prometheus to scrape. Prometheus can then pull the data from the PushGateway, acting as if it were a normal Prometheus target.

    To use the push gateway method in a pull mechanism, applications periodically push their metrics data to the Push-gateway via the HTTP API. Prometheus, in turn, periodically queries the Push-Gateway to collect the data. The Push-Gateway stores the metrics data until Prometheus scrapes it, which can be configured to occur at regular intervals.

    This approach can be useful when collecting metrics from systems that are not always available or when it's not feasible to pull the data frequently. Additionally, it allows applications to expose metrics data without exposing an endpoint for Prometheus to scrape, which can be more secure.

    Overall, the Push-Gateway method can be a powerful tool in enabling a pull mechanism for collecting metrics in a distributed system via Prometheus.
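    For example, a minimal sketch (the metric and job names are illustrative):

    # push a metric to a Pushgateway (default port 9091)
    echo "beam_records_processed_total 42" | curl --data-binary @- http://<pushgateway-host>:9091/metrics/job/beam_pipeline

    # let Prometheus scrape the Pushgateway like any other target (prometheus.yml)
    scrape_configs:
      - job_name: pushgateway
        honor_labels: true
        static_configs:
          - targets: ['<pushgateway-host>:9091']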

    The "LOGIO-IO" Connector currently accepts pushing metrics to Prometheus by this method. For more info, refer to this post.

    Microsoft SQL Server

    Apica Ascent lets you connect to the Microsoft SQL Server which is a relational database management system (RDBMS) that supports a wide variety of transaction processing, business intelligence, and analytics applications in corporate IT environments.

    With Apica Ascent you can easily query, monitor, and visualize the MS SQL Server data

    Adding MS SQL Server to Apica Ascent

    The first step is to add MS SQL Server as a data source in Apica Ascent: fill out the fields below and save.

    • Name: Name the data source

    • User: A user of the MS SQL Server which is in the form: user@server-name

    • Password: The password for the above user

    • Server: This is your server address without the .database.windows.net suffix

    • Port: The port of the MS SQL Server

    • TDS Version: TDS Version of your MS SQL Server

    • Character Set: Character encoding of your MS SQL Server

    • Database name: The name of the database of the MS SQL Server

    Also make sure to check out Microsoft’s documentation for instructions on whitelisting your Apica Ascent IP address when connecting to Azure Synapse.

    That's all, now navigate to the Query Editor to query your data

    Apache Beam

    Overview

    Apache Beam is an open-source, unified model for defining both batch and streaming data-parallel processing pipelines. Using one of the open-source Beam SDKs, you build a program that defines the pipeline. The pipeline is then executed by one of Beam’s supported distributed processing back-ends, which include Apache Flink, Apache Spark, and Google Cloud Dataflow.

    Beam is particularly useful for embarrassingly parallel data processing tasks, in which the problem can be decomposed into many smaller bundles of data that can be processed independently and in parallel. You can also use Beam for Extract, Transform, and Load (ETL) tasks and pure data integration. These tasks are useful for moving data between different storage media and data sources, transforming data into a more desirable format, or loading data onto a new system.

    Apica Ascent provides integrations with Apache Beam. Check out the submodules to learn more.

    Data Bricks

    Apica Ascent can connect to your Data Bricks cluster and SQL Endpoints

    Adding Data Bricks to Apica Ascent

    The first step is to obtain the Host, HTTP Path, and an Access Token for your endpoint from Data Bricks. Refer to the link below for the necessary information.

    The next step is to add the data source in Apica Ascent using the information obtained above.

    Selecting Data Bricks data source

    That's it, now navigate to the Query Editor page and start querying

    MySQL Server

    Apica Ascent lets you connect to your MySQL easily and provides a rich query editor to query MySQL using its natural syntax.

    All your query results are cached, so you don't have to wait for the same result set every time. Apica Ascent also helps you visualize the data gathered from your queries.

    Adding MySQL Server Data Source to Apica Ascent

    The first step is to create a MySQL data source and provide all details mentioned below

    • Name: Name the data source

    • Host: This is your server address

    • Port: The port of the MySQL Server

    • User: A user of the MySQL Server

    • Password: The password for the above user

    • Database name: The name of the database of the MySQL Server

    Optionally, you can use SSL for secure transmission of information.

    That's it. Now navigate to the Query editor page to query your data

    Integrations Overview

    Apica Ascent comes with a large number of integration options for ingest and incident management. This list is growing on a weekly basis!

    Ingest for Ascent

    Ingest lets you connect with and securely ingest data from popular log forwarding agents, cloud services, operating systems, container applications, and on-premise infrastructure. You can secure data ingestion from your endpoints into Apica Ascent by generating a secure ingest token.

    Integration Details

    Apica Ascent currently integrates with 150+ data sources via support for popular open source agents and open protocols.

    Follow these links to the Ascent integration details:

    Forwarder Details

    Once data is ingested, Ascent allows any data to be forwarded to specific source destinations. Follow the link below for more details on available forwarders:

    PostgreSQL

    Apica Ascent lets you connect to your PostgreSQL easily and provides a rich query editor to query your PostgreSQL using its natural syntax.

    All your query results are cached, so you don't have to wait for the same result set every time. Apica Ascent also helps you visualize the data gathered from your queries.

    Adding PostgreSQL to Apica Ascent

    The first step is to create a PostgreSQL data source and provide all details such as the Host, Port, User, Password, and Database name of your PostgreSQL

    Choosing a new data source

    Querying your data

    The next step is to navigate to the Query editor page and start querying your data from your PostgreSQL schemas.

    Ascent Logs

    The Ascent Logs Data source allows querying logs from all the namespaces and applications ingested by Ascent.

    Apica ASM

    Pull Check results from Apica's ASM

    The Apica Source Extension is a component designed to integrate with the Apica Synthetics and Load test platform. Its main purpose is to retrieve check results from the Apica platform and make them available for further processing or analysis within another system or tool.

    These check results can also be forwarded to downstream destinations for further processing.

    Steps to create Apica ASM Source Extension

    • Navigate to the Integrations page and click on the New Plugin button and select Apica option.

    • Provide the Plugin Name of choice and click Next.

    • Enter your Apica ASM platform credentials.

    • Configure your resource requirements and click Next.

    • Finally, enter the URL of the Apica ASM instance, the timezone, the version of the Apica Source Extension plugin, and the number of workers to be used for the Apica data pull.

    After entering these details, click on the Done button.

    After creating the Apica ASM source extension, you will see the check data containing all the check details.

    Ascent 2.12.0

    We’re excited to announce the release of Apica One Platform 2.12.0, delivering enhanced features, improved security, better integrations, and important fixes across all major products.

    Observe

    New Features & Improvements

    Ascent 2.2.0

    Discover the latest advancements and improvements of the Apica Ascent platform. This is your go-to destination for updates on platform enhancements and new features. Explore what's new to optimize your observability and data management strategies.

    Synthetic Monitoring (ASM 13.25.0) - SaaS

    • Features

    Collect Logs with Rsyslog

    Install Rsyslog

    1. For Debian/Ubuntu:

    2. For RHEL/CentOS:

    Verify that rsyslog is running:

    Ascent 2.8.1

    Release Notes - Ascent January 2025

    Overview

    Introducing Apica Ascent Freemium—a FREE FOREVER version of our Intelligent Data Management Platform, now available as a convenient SaaS offering. This release democratizes intelligent observability, providing access to powerful features at no cost. Experience all the core capabilities of Ascent and take your telemetry data management to the next level.

    New Features and Enhancements

    Freemium Support

    Ascent with Kubernetes

    Introduction

    The digital landscape is evolving at an unprecedented pace. Enterprises are migrating to cloud-native architectures, embracing microservices, Kubernetes, and distributed applications to stay competitive. However, this shift introduces a new set of challenges—traditional monitoring tools are struggling to keep up with the scale, complexity, and velocity of modern applications.

    This is where next-generation observability platforms, like Apica’s Ascent Platform, come in. Built on Kubernetes and OpenTelemetry, Apica delivers a scalable, AI-powered, cloud-native observability solution designed to handle billions of logs, metrics, and traces in real time.

    Architecture and Sizing

    This page describes the deployment architecture of a typical on-premises production deployment of Apica Ascent.

    Requirements

    A production deployment of Apica Ascent requires the following key components:

    1. A cloud-based or k0s Kubernetes cluster to run the Apica Ascent software components. Apica Ascent OnPrem's non-cloud offering is based on k0s.

    Prometheus Compatible

    Apica Ascent also supports external Prometheus-compatible data sources, e.g. Prometheus, Thanos, and VictoriaMetrics. If you host such an instance in the cloud or on-premises, you can connect it to Apica Ascent as a data source and use your existing queries to build dashboards and create alerts.

    Please see the data sources section to learn about configuring the various data sources.

    Snowflake

    Apica Ascent helps you to connect your Snowflake for faster querying and visualization of your data.

    Adding Snowflake to Apica Ascent

    The first step is to create a Snowflake data source and provide all details mentioned below

    • Name: Name the data source

    Generating a secure ingest token

    Apica Ascent uses an ingest token to secure the ingestion of log data from your data sources into your Apica Ascent deployment. You can generate a secure ingest token using the Apica Ascent UI or the command-line tool, apicactl.

    Obtaining an ingest token using UI

    You can obtain a secure ingest token from the Account tab in the Settings page on the Apica Ascent UI.

    To begin, click on the username on the navbar, click "Settings", and click on the "Account" tab if you are not brought there by default. Your secure ingest token will be displayed under the Ingest Token field. Click the Copy icon next to the token to copy it to your clipboard.

    Ascent with OpenTelemetry

    What is OpenTelemetry (OTEL)?

    OpenTelemetry (OTEL) is an open-source observability framework that provides a standardized approach to collecting, processing, and exporting telemetry data—including traces, metrics, and logs—from applications and infrastructure. It is a vendor-neutral solution designed to help organizations gain deep insights into the performance, health, and behavior of their distributed systems without being locked into proprietary observability tools.

    By unifying telemetry collection across different platforms, programming languages, and monitoring solutions, OpenTelemetry simplifies instrumentation, reduces integration complexities, and enhances observability capabilities for modern cloud-native applications.

  • Fix for time zone lists: shows the Java standard supported time zones without the deprecated ones.

  • Detailed Replay logs in ZT (extended logs)

  • ALPN Protocol Negotiation

  • Page Break - Threshold Breach (Trigger & Abort)

  • Library Update (Update JGit library): Updated the JGit library to the latest version to leverage new features and improvements.

  • Fix issues with JavaScript editor in ZT.

  • Statistical Data in Legends: Introduced statistical data to the new legend component in Data Explorer.

  • Auto Gradient Colors: Implemented an automatic gradient color generator for area charts in Data Explorer.

  • Grafana Dashboard Converter: Developed a converter for Grafana dashboards to be compatible with Data Explorer.

  • Widgets Deletion Handling: Implemented proper handling for widget deletion to prevent crashes.

  • Tab Loss on Reload: Resolved the issue where Data Explorer page tabs were lost on reload.

  • Chart Label Issues: Fixed chart label issues and improved chart rendering.

  • We have added bulk GET support for the API endpoint /checks/config. Users can now request multiple check configurations in one go, preventing issues caused by rate limiting. This is especially beneficial for those automating the synchronization of their own versions of check configurations with the Ascent platform through the ASM API.

  • Users can now see the response body from a failed URL call in ZebraTester checks, if available, to help identify what error messages or content might be returned.

  • Bug Fixes:

    • We have eliminated the inconsistencies (spikes) in NG check result metrics previously impacted by infrastructure resource constraints. This has now been rolled out to all public and dedicated check locations available.

    • We have fixed the bug where the location API endpoint for Zebratester checks GET /checks/proxysniffer/locations was not returning all NG locations.

    • Expanding URLs in check results for URLv2 checks will display readable response content.

  • Synthetic Monitoring (ASM) - On-Prem

    • Features:

      • Display response body for failed URL calls in ZebraTester check results.

    • Bug Fixes:

      • We have fixed a bug that prevented new Browser check scenarios from syncing with the controlling agents effectively making them unavailable at time of check execution.

    Loadtest (ALT)

    • Bug Fixes:

      • Not all transaction names are available in ‘Edit Non-Functional Requirements (NFR)’.

    ADF v3.7.7

    • Features

      • We have added an OTel forwarder to be used in ADF/FLOW to send OTel data untouched downstream to external OTel collector.

    • Bug Fixes:

      • ASM+ Pagination bug on Check Analytics

      • Email delivery bug

      • ASM+ check data ingest stability improvements

    Added Freemium support via the Freemium license, offering free access to Ascent's capabilities.

    Core Features

    1. Fleet Management

      • Efficiently manage data collection with support for up to 25 agents, including OpenTelemetry Collectors for Windows, Linux, and Kubernetes.

    2. Telemetry Pipelines

      • Seamlessly integrate with popular platforms, including Splunk, Elasticsearch, Kafka, and Datadog, among others.

    3. Digital Experience Monitoring

      • Leverage Synthetic Monitoring for URL, Ping, Port, and SSL checks to optimize the digital experience.

    4. Log Management

      • Centralize log collection, analysis, and management for improved observability.

    5. Distributed Tracing

      • Gain deep insights into application performance with distributed tracing capabilities.

    6. Infrastructure Monitoring

      • Monitor and manage infrastructure performance to ensure optimal operations.

    7. Enterprise-Ready Features

      • Enable SAML-based Single Sign-On (SSO) for enhanced security and ease of access.

    8. ITOM Integration

      • Integrate seamlessly with IT operations management platforms such as PagerDuty, ServiceNow, and OpsGenie.

    Key Benefits of Ascent Freemium

    • Process up to 1TB/month of telemetry data, including logs, metrics, traces, events, and alerts.

    • Unlimited users and dashboards for collaboration and real-time data visualization.

    • No storage costs or credit card requirements.

    • Built-in AI-driven insights to enhance troubleshooting and decision-making.

    Browser Compatibility

    The freemium release is qualified for Chrome up to version 131. Newer browser versions are not yet fully supported.

    Apica Ascent Freemium is available immediately. Sign up now at https://www.apica.io/freemium and start transforming your data management experience today.

    Definition and Purpose

    At its core, OpenTelemetry serves the following primary purposes:

    1. Standardization of Observability Data – OTEL defines a common set of APIs, libraries, and protocols for collecting and transmitting telemetry data, ensuring that observability data is structured and consistent across different environments.

    2. Vendor-Neutral Telemetry Collection – Unlike proprietary solutions, OpenTelemetry is not tied to a single vendor, giving users the flexibility to export data to multiple observability platforms, including Prometheus, Jaeger, Zipkin, Elasticsearch, and various commercial solutions.

    3. Comprehensive Observability for Distributed Systems – OTEL helps organizations monitor, trace, and analyze applications running in microservices architectures, Kubernetes clusters, serverless environments, and hybrid cloud infrastructures.

    4. Simplified Instrumentation – Developers can use OpenTelemetry’s SDKs and automatic instrumentation to collect telemetry data without manually modifying large portions of their application code.

    5. Better Troubleshooting and Performance Optimization – By correlating traces, metrics, and logs, OTEL enables teams to detect bottlenecks, troubleshoot incidents faster, and optimize system performance proactively.

    Brief History and CNCF Involvement

    OpenTelemetry originated as a merger of two popular open-source observability projects:

    • OpenTracing – Focused on distributed tracing instrumentation.

    • OpenCensus – Provided metrics collection and tracing capabilities.

    Recognizing the need for a unified observability framework, the Cloud Native Computing Foundation (CNCF) merged OpenTracing and OpenCensus into OpenTelemetry in 2019, creating a single, industry-wide standard for telemetry data collection.

    Key Milestones in Open Telemetry’s Evolution:

    • 2016 – OpenTracing & OpenCensus emerge as separate projects to address distributed tracing and metrics collection.

    • 2019 – CNCF consolidates both projects into OpenTelemetry to create a single, unified standard.

    • 2021 – OpenTelemetry tracing reaches stable release, making it production-ready.

    • 2022 – OpenTelemetry metrics reach general availability (GA), expanding beyond tracing.

    • 2023-Present – Work continues on log correlation, profiling, and deeper integrations with various observability platforms.

    CNCF’s Role in OpenTelemetry

    The Cloud Native Computing Foundation (CNCF), a part of the Linux Foundation, serves as the governing body for OpenTelemetry. CNCF provides:

    • Project oversight and funding to support OpenTelemetry’s development.

    • Community-driven governance, ensuring OTEL remains an open and collaborative initiative.

    • Integration with other CNCF projects, such as Kubernetes, Prometheus, Fluentd, and Jaeger, to enhance observability capabilities for cloud-native workloads.

    Why CNCF Backing Matters

    CNCF’s involvement ensures OpenTelemetry remains a widely adopted, industry-backed, and continuously evolving framework. With support from major cloud providers (Google, Microsoft, AWS), observability vendors (Datadog, New Relic, Dynatrace), and enterprise technology companies, OpenTelemetry has become the de facto standard for open-source observability.

    By adopting OpenTelemetry, organizations align with a future-proof, community-driven observability strategy, ensuring compatibility across cloud environments and monitoring solutions.

    Check Execution Container: Runbin

    runbin-2025.04.17-0-base-2.2.1

    Check Execution Container: Postman

    postman-2025.04.17-0-base-1.4.1

    Bnet (Chrome Version)

    10.2.1 (Chrome 130)

    Zebratester

    7.0B

    ALT

    6.13.3.240

    IronDB

    1.5.0

    Check Execution Container: Runbin

    runbin-2025.04.17-0-base-2.2.1

    Check Execution Container: Postman

    postman-2025.04.17-0-base-1.4.1

    Bnet (Chrome Version)

    10.2.2 (Chrome 130)

    Zebratester

    7.5A

    ALT

    6.13.3.240

    IronDB

    1.5.0

    Check Execution Container: Runbin

    runbin-2025.04.17-0-base-2.2.1

    Check Execution Container: Postman

    postman-2025.04.17-0-base-1.4.1

    Bnet (Chrome Version)

    10.2.2 (Chrome 130)

    Zebratester

    7.5A

    ALT

    6.13.3.240

    IronDB

    1.5.0

    Check Execution Container: Runbin

    runbin-2025.04.17-0-base-2.2.1

    Check Execution Container: Postman

    postman-2025.04.17-0-base-1.4.1

    Bnet (Chrome Version)

    10.2.1 (Chrome 130)

    Zebratester

    7.0B

    ALT

    6.13.3.240

    IronDB

    1.5.0

    Check Execution Container: Runbin

    runbin-2025.04.17-0-base-2.2.1

    Check Execution Container: Postman

    postman-2025.04.17-0-base-1.4.0

    Bnet (Chrome Version)

    10.2.1 (Chrome 130)

    Zebratester

    7.0B

    ALT

    6.13.3.240

    IronDB

    1.5.0

    Check Execution Container: Runbin

    runbin-2025.04.17-0-base-2.2.1

    Check Execution Container: Postman

    postman-2025.04.17-0-base-1.4.1

    Bnet (Chrome Version)

    10.2.2 (Chrome 130)

    Zebratester

    7.5A

    ALT

    6.13.3.240

    IronDB

    1.5.0

  • A warning popup now appears when enabling tech preview features.

  • The Tag Management list no longer hides pagination controls.

  • The Pending Users detail page now loads correctly.

  • System Status

    • Clicking on outdated queries no longer breaks the page.

  • Most dashboard widgets now show data as expected.

  • Sorting in the Rules section now works.

  • Rule Execution and Filtering

    • Rule execution in the pipeline engine has been fixed.

    • Filtered names in the Topological View no longer overflow their containers.

  • Documentation Updates

    • Added guidance on setting namespace and app_name in dataflows.

    • Documentation on replay feature is now available.

  • Other Fixes

    • The “Download Complete Report” button in Report page now works.

  • The agents list now uses the backend API for filtering, improving performance.

  • Package Management

    • The package assignment table now shows historical data.

    • The install script now detects the Linux flavor (Rocky Linux) and uses the correct package manager.

  • Documentation Enhancements

    • Added instructions for updating the Fleet GitHub repository, including agent types, configurations, and packages.

  • Check Execution Container: Runbin

    runbin-2025.04.17-0-base-2.2.1

    Check Execution Container: Postman

    postman-2025.04.17-0-base-1.4.1

    Bnet (Chrome Version)

    10.2.1 (Chrome 130)

    Zebratester

    7.0B

    ALT

    6.13.3.240

    IronDB

    1.5.0

    Flash

    v3.15.10

    Coffee

    v3.16.13

    ASM

    13.36.0

    NG Private Agent

    1.0.9

    Check Execution Container: Browser

    fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

    Check Execution Container: Zebratester

    zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

    Enhanced the encryptapica feature in Scenarios for Browser checks. The target value of encryptapica-prefixed store commands used in Selenium scenarios will be masked across all scenario commands in the Browser check results if the specified target value appears in any other scenario command (e.g., an echo command).

  • Data Explorer API endpoint: A new API endpoint has been added to support data explorer for Boomi OEM.

  • Tabs are now scrollable: Improved usability by making the Tabs scrollable, ensuring better navigation and access.

  • Pipeline tab inside search view: Enhances the search view so the user can see the pipeline of the selected flow.

  • Pipeline application filter: While creating a new pipeline, users can filter which application to show in the pipeline view.

  • Enhanced the Fleet agent manager installation.

  • apicactl
    Why Traditional Monitoring is Failing in Cloud-Native Environments

    In the past, traditional monitoring tools were sufficient for monolithic applications deployed on static infrastructure. However, modern applications are distributed, dynamic, and ephemeral.

    Challenges with Traditional Monitoring:

    • Data Silos – Logs, metrics, and traces are collected separately, making root-cause analysis slow and inefficient.

    • Scalability Issues – Legacy tools struggle to handle high-cardinality telemetry data from microservices.

    • Lack of Context – Traditional APM tools focus on isolated performance metrics, failing to provide full-stack observability.

    • High Costs – Observability data grows exponentially, leading to excessive storage and retention costs.

    • Manual Effort – Engineers spend too much time managing telemetry pipelines and analyzing fragmented data.

    To address these challenges, enterprises must shift to a cloud-native observability approach that is scalable, cost-efficient, and AI-driven.

    Apica’s Cloud-Native Observability: Built for the Future

    Apica’s Ascent Platform is designed from the ground up to tackle modern observability challenges. Unlike traditional monitoring tools, it is built on Kubernetes, enabling infinite scalability and seamless multi-cloud deployments.

    Key Advantages of Apica’s Cloud-Native Observability:

    • Kubernetes-Powered – Dynamically scales observability pipelines, eliminating bottlenecks.

    • Unified Data Store (InstaStore™) – Eliminates data silos by storing logs, metrics, traces, and events in a single repository.

    • ZeroStorageTax Architecture – No more expensive tiered storage; data is stored in infinitely scalable object stores (AWS S3, Azure Blob, Ceph, etc.).

    • AI-Driven Insights – Uses AI/ML anomaly detection, GenAI assistants, and automated root-cause analysis to accelerate issue resolution.

    • Multi-Cloud & Hybrid Ready – Seamlessly integrates with AWS, Azure, GCP, and on-prem environments.

    • Full OpenTelemetry Support – No proprietary agents needed—fully compatible with OpenTelemetry, Prometheus, Jaeger, and Loki.

    As enterprises scale their applications, they need an observability platform that scales with them. Apica’s Kubernetes-native approach enables organizations to gain full-stack observability across highly distributed, multi-cloud environments.

    More information on Kubernetes with OpenTelemetry

    API

    OTLP Metrics and Logs Forwarders to compatible external systems.

    runbin-2025.04.17-0-base-2.2.1

    Check Execution Container: Postman

    postman-2025.04.17-0-base-1.4.0

    Bnet (Chrome Version)

    10.2.1 (Chrome 130)

    Zebratester

    7.0B

    ALT

    6.13.3.240

    IronDB

    1.5.0

    Component

    Versions

    Coffee

    v3.16.6

    Flash

    v3.15.4

    ASM

    13.34.0

    NG Private Agent

    1.0.8

    Check Execution Container: Browser

    fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

    Check Execution Container: Zebratester

    zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

    Check Execution Container: Runbin

    Configure forwarding

    Edit the rsyslog configuration file (usually /etc/rsyslog.conf or /etc/rsyslog.d/*.conf).

    1. Open the configuration file:

    2. Enable TCP forwarding by adding *.* @@remote-server-ip:514 to the config:

    3. Save your changes and restart rsyslog

    Verify ingestion in Ascent

    On your server, use logger to log a custom message which you can track easily in order to verify ingestion has been successful.

    1. Use the logger command to trigger a custom log entry:

      It might take a slight moment for this entry to appear in the Ascent platform, so if it doesn’t show up immediately, give it a moment and check again.

    2. In your Ascent platform, navigate to Explore > Logs & Insights

    3. In the filter view, search for namespace default_namespace. Then look for your username which generated the custom log entry, and click on it.

    4. This view should only display the custom log entry generated earlier

    namespace: "Alerts"
    application: "alerts-app"
    keyword: ''
    duration: '1h'
    sudo nano /etc/rsyslog.conf
    # /etc/rsyslog.conf configuration file for rsyslog
    #
    # For more information install rsyslog-doc and see
    # /usr/share/doc/rsyslog-doc/html/configuration/index.html
    #
    # Default logging rules can be found in /etc/rsyslog.d/50-default.conf
    
    
    #################
    #### MODULES ####
    #################
    
    *.* @@<YOUR-ASCENT-ENV>:514
    sudo systemctl restart rsyslog
    logger "This is a test message from $(hostname)"
    sudo apt update
    sudo apt install rsyslog
    sudo yum install rsyslog
    sudo systemctl enable rsyslog
    sudo systemctl start rsyslog
    sudo systemctl status rsyslog

    User Management Enhancements

    • Improved workflows for disabling and enabling users, with clear warnings if an action cannot be completed.

    • Introduced a new API for retrieving full user lists.

    • Built-in policies, permissions, and roles are now loaded from default CSV files for improved security and easier out-of-the-box configuration.

    • Built-in/default policies are now visible when initializing policy lists.

  • Dashboards

    • Dashboards now load using the last executed query for greater efficiency.

  • Security Enhancements

    • Enforced stronger password requirements.

    • Application session cookies now include ‘SameSite’ attributes for additional browser security.

  • Notifications & Alerting

    • Introduced integration with ilert alerting platform.

  • UI and Data Explorer

    • Data Explorer now supports column-based filters for raw data view, improving the metric explorer.

    • Improvements throughout Data Explorer.

  • API and Integration

    • Resolved UI inconsistencies with the action buttons in the Integrations tab.

  • Bug Fixes

    • License details are now displayed correctly after SAML-based login.

    • Fixed an API access issue caused by broken authentication header support.

    • Queries now support searching with ‘%’ and other special characters.

    • Fixed general user interface issues in Check Management.

    • Improved pipeline component reliability and migration file handling.

    • Pattern enable/disable logic in the UI now correctly uses the exclusion list.

    • Fixed errors that could occur when assigning pipelines.

    • Fixed a bug where, when the anomaly column was disabled, the alert picked a random field from the query result; the correct columns are now rendered.

    Ascent / Synthetics

    New Features & Improvements

    • Analytics

      • Analytics can now be filtered by check identifier.

      • Added auto-refresh in all check views.

      • Enhanced split view features make analytics more actionable.

    • Scenario Management

      • Scenario files now keep consistent names when downloaded.

      • Backend enforcement has been added for improved scenario and summary security.

      • Improved usability in the scenario location dropdown and file upload process.

    • Check Details & Scheduling

      • Schedule information is now visible in the check details view.

      • Corrected issues with showing multiple Stockholm locations on maps and lists.

    • User-to-Group Role Assignment

      • Assigning a user to a group now correctly applies the appropriate role permissions.

    • Netapp Sub-account

      • Sub-account location sharing is now supported.

    Bug Fixes

    • Improved check details performance.

    • Fixed issues with the “Hide Scenario Details” option.

    • Resolved errors when editing or saving pending invitation details in user/group management.

    Flow

    New Features & Improvements

    • Forwarding & Integration

      • Now supports sending data to multiple destinations from a single dataflow.

    • Usability & Error Handling

      • Improved prompts for incorrect username/password entries.

      • Resolved issues where some rules lost dynamic values in rule names.

    Bug Fixes

    • Users can now reliably remove rules from pipelines.

    • Fixed pod crashes when specific advanced filter settings were enabled.

    • Dashboards on pipeline pages now load without errors.

    Fleet

    New Features & Improvements

    • Agent Management & Infra View

      • New Infra View in the Honeycomb dashboard supports visibility and management for more than 50,000 agents.

      • “Group By” functionality added to the Infra View.

      • Tooltips for agent details are now available.

      • The Fleet repository page now features a reload button and easy-copy install scripts.

    • UI & Workflow Improvements

      • Updated the Fleet UI create configuration modal for better usability.

      • Reduced unnecessary API calls and improved overall Fleet page reliability.

      • Confirmation messages for agent actions now only appear when the agent manager is connected.

    Bug Fixes

    • Resolved layout and overlapping issues in agent group display.

    • The main Fleet page now loads reliably every time.

    Additional Improvements

    • Added ASM subscription details as a 2nd tab in the license details page.


    Component Versions - Ascent v2.12.0

    Components
    Version

    Flash

    v3.17.0

    Coffee

    v3.18.0

    ASM

    13.36.3

    NG Private Agent

    1.0.9

    Check Execution Container: Browser

    fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0

    Check Execution Container: Zebratester

    zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

    An object store is where the data fabric stores its data at rest. An S3-compatible object store is required.

    1. Azure installs can take advantage of a native integration with the Azure Blob store.

  • Access to a container registry for docker images for the Apica Data Fabric.

  • Optional External Items

    1. Postgres - Ascent's internal Postgres can be replaced with RDS or other managed offerings.

    2. Redis - Ascent's internal Redis server can be replaced with similar managed offerings.

    Sizing

    Ascent stores most customer data in the object store, which will scale with usage. In addition, the Kubernetes cluster has the following minimum requirements.

    • Ingest (per GB/hour): 1.25 vCPUs, 3.5GB RAM, 5GB* disk

    • Core Components: 10 vCPUs, 28GB RAM, 150GB disk

    * 5GB/ingest pod is the minimum, but 50GB is recommended.

    Packaging

    The deployment of the Apica Data Fabric is driven via a Helm chart.

    The typical method of customizing the deployment is to pass a values.yaml file as a parameter to Helm when installing the Apica Data Fabric Helm chart.
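    For example, a typical install might look like the following sketch; the repository URL, chart name, and release name are placeholders, so use the values provided for your deployment:

    helm repo add apica <apica-helm-repo-url>
    helm repo update
    helm install apica-ascent apica/apica-ascent \
      --namespace apica-ascent --create-namespace \
      -f values.yaml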


    Reference Kubernetes Deployment Architecture


    Reference AWS Deployment Architecture


    Reference Hybrid Deployment Architecture

    The reference deployment architecture shows a hybrid deployment strategy where the Apica stack is deployed in an on-prem Kubernetes cluster but the storage is hosted in AWS S3. There could be additional variants of this where services such as Postgres, Redis, and Container registry could be in the cloud as well.


    • Account: Unique identifier of the Snowflake account within the organization

  • User: Unique username of your Snowflake account

  • Password: The password for the above user

  • Warehouse: The Warehouse name

  • Database name: The name of the database of the Snowflake.

    Configuration of Snowflake data source

    That's it. Now navigate to the Query editor page to query your data

    Ingest Token

    Generating using apicactl

    To generate a secure ingest token, do the following.

    1. Install and configure apicactl, if you haven't already.

    2. Run the following command to generate a secure ingest token:

      apicactl get httpingestkey
    3. Copy the ingest key generated by running the command above and save it in a secure location.

    You can now use this ingest token while configuring Apica Ascent to ingest data from your data sources, especially while using log forwarders like Fluent Bit or Logstash.
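    For example, a Fluent Bit output section using the generic http output plugin might look like the following sketch; the ingest URI path is an assumption, so confirm the ingest endpoint for your environment:

    [OUTPUT]
        Name    http
        Match   *
        Host    <YOUR-ASCENT-ENV>
        Port    443
        tls     On
        Format  json
        URI     /v1/json_batch
        Header  Authorization Bearer <YOUR-INGEST-TOKEN>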


    Collect Logs with OpenTelemetry

    A guide on how to collect logs using OpenTelemetry on Linux from installation to ingestion

    Install otelcol-contrib

    At the time of writing, the latest version of otelcol-contrib is v0.121.0

    See releases for later versions

    For DEB-based:
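    A sketch assuming the v0.121.0 linux/amd64 .deb package:

    sudo apt-get update && sudo apt-get -y install wget
    wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.121.0/otelcol-contrib_0.121.0_linux_amd64.deb
    sudo dpkg -i otelcol-contrib_0.121.0_linux_amd64.deb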

    For RHEL-based:
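    A sketch assuming the v0.121.0 linux/amd64 .rpm package:

    sudo rpm -ivh https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.121.0/otelcol-contrib_0.121.0_linux_amd64.rpm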

    Configure Collector

    Edit /etc/otelcol-contrib/config.yaml and replace the content with the below
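    A minimal sketch of such a config, assuming a filelog receiver and an OTLP/HTTP exporter; the logs endpoint path, the namespace/application attribute keys, and the Authorization header are assumptions to adjust for your environment:

    receivers:
      filelog:
        include:
          - <your_log_file_path>
    processors:
      resource:
        attributes:
          # these attribute keys are assumed to drive the namespace/application shown in Ascent
          - key: namespace
            value: otel-linux
            action: upsert
          - key: application
            value: my-app
            action: upsert
      batch: {}
    exporters:
      otlphttp:
        logs_endpoint: https://<your_domain>/v1/logs
        headers:
          # an ingest token may be required; see "Generating a secure ingest token"
          Authorization: Bearer <YOUR-INGEST-TOKEN>
    service:
      pipelines:
        logs:
          receivers: [filelog]
          processors: [resource, batch]
          exporters: [otlphttp]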

    Replace the following values:

    • <your_log_file_path>

      • Physical path to your log file

    • <your_domain>

      • Hostname of your Apica environment (example.apica.io)

    Validate and apply

    When you're done with your edits, execute the command below to validate the config (it should return nothing if everything is in order)

    Restart OTel to apply your changes
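    For example:

    # validate the configuration (no output means the config is valid)
    otelcol-contrib validate --config=/etc/otelcol-contrib/config.yaml

    # restart the collector and confirm it is running
    sudo systemctl restart otelcol-contrib
    sudo systemctl status otelcol-contrib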

    Ascent view

    Assuming everything has been done correctly, your logs will start to appear in Explore > Logs & Insight on your Ascent environment. They will show up based on the namespace and application names that you set in your config.yaml file.

    Ascent 2.3.0

    Discover the latest advancements and improvements of the Apica Ascent platform. This is your go-to destination for updates on platform enhancements and new features. Explore what's new to optimize your observability and data management strategies.

    Synthetic Monitoring (ASM 13.26.0) - SaaS

    Features

    • Browser checks will automatically accept dialogs/modals that can pop up during a test such as alert/confirmation/prompts.

    • Browser checks will attach to and include control of new tabs created by the target site, i.e. the Chrome WebDriver will automatically attach to new tabs that are opened during execution of a Browser check.

    • Added SAN/SNI options to SSL Cert Expiration and Fingerprint Validation for URL checks.

    Bug Fixes:

    • Screenshots for Browser checks were not working in new tabs or windows created by the check. This is fixed as part of the above feature that includes control of tabs and windows created by the target site.

    ADF v3.8

    Features

    • Data Explorer adds a new way to create queries, dashboards, and widgets directly from a browsable inventory of available metrics and events. With just a few clicks, a query builder guides the simple creation of dashboards and widgets. Please read further on this substantial set of features in our product documentation:

    • Code Rule is a new rule type introduced with this release, which lets users add JavaScript code to enhance their logs. With the help of a Code Block, add a Code Rule to improve your pipelines. A Code Rule takes in a JavaScript function that gets integrated with your pipeline. Please read further on this in the product documentation:

    • Fleet 🚢 is the ultimate solution for making the collection of observability data responsive to changes in your environment using your pre-existing observability agents. With Fleet, you can collect more data when you need it and less when you don’t. And the best part? Almost all observability agents can be managed through configuration files describing how to collect, enrich, and send data. Fleet aims to simplify this process through an agent manager. The Fleet Agent Manager functions as a sidecar utility that checks for new configuration files and triggers the appropriate restart/reload functionality of the supported agent. The Agent Manager is kept intentionally simple, with the goal that it only needs to be installed once and updated infrequently. Please read further on this in the product documentation:

    Improvements

    • Revamped Alert API to support multiple severities (Info, Warning, Critical, Emergency) with multiple thresholds, in the same alert.

    • Changed the location of Track duration in alert screens to be adjacent to the Alert condition.

    • All the alert destinations (Slack, PagerDuty, Mattermost, Chatwork, Zenduty, Opsgenie, Webhook, ServiceNow, and Email) will now start receiving values that triggered that specific alert.

    • Further UI changes for Alert Screens, Integrations Screen, and Distributed Tracing to align with the new design system.

    Bug Fixes:

    • Fixed ServiceNow alert destination API errors.

    • Fixed Email settings page bug.

    • Fixed User page bug because of which admin was not able to change groups of users.

    • Fixed missing services in ASM+.

    Others:

    • Deprecated Hipchat alert destination.

    IRONdb

    Bugfixes

    • Avoid metric index corruption by using pread(2) in jlog instead of mmap(2).

    • Fix the bug where a node could crash if we closed a raw shard for delete, then tried to roll up another shard before the delete ran.

    • Fix the bug where setting raw shard granularity values above 3w could cause data to get written with incorrect timestamps during rollups.

    • Fix the NNTBS rollup fetch bug where we could return no value when there was valid data to return.

    Improvements

    • Deprecate max_ingest_age from the graphite module. Require the validation fields instead.

    • Change the Prometheus module to convert nan and inf records to null.

    • Add logging when the snowth_lmdb_tool copy operation completes.

    • Improve various listener error messages.

    Getting Started with Metrics

    Install OpenTelemetry

    1. Go to https://opentelemetry.io/docs/collector/installation/ or https://github.com/open-telemetry/opentelemetry-collector-releases/releases/ to find the package you want to install. At the point of writing this guide, 0.115.1 is the latest package so we’ll install otelcol-contrib_0.115.1_linux_amd64

    2. On the machine you wish to collect metrics from, run the following 4 commands:

      1. Deb-based

        1. sudo apt-get update

        2. sudo apt-get -y install wget

        3. wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.115.1/otelcol-contrib_0.115.1_linux_amd64.deb

        4. sudo dpkg -i otelcol-contrib_0.115.1_linux_amd64.deb

    3. Navigate to /etc/otelcol-contrib/

    4. Edit the file with your favourite file editor, for example: nano config.yaml

    5. Paste the following into the config file, overwriting it completely (a sample config is shown after these steps):

      1. Replace <YOUR-ASCENT-ENV>with your Ascent domain, e.g. company.apica.io

      2. Replace <YOUR-INGEST-TOKEN>with your Ascent Ingest Token, e.g. eyXXXXXXXXXXX...

    6. When you’ve finished editing the config, save it and run otelcol-contrib validate --config=config.yaml

      1. If you get no error returned, the config file is valid.

    7. Restart the service with sudo systemctl restart otelcol-contrib

    8. Verify that the service is up and running correctly with sudo systemctl status otelcol-contrib

      1. A good result should look like this:

      2. You can also view live logs using journalctl -u otelcol-contrib -f. With the above config you would see entries every 10 seconds.
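    A sample config along these lines, as a minimal sketch; the hostmetrics scrapers shown and the Authorization header are assumptions, so keep whatever your environment requires:

    receivers:
      hostmetrics:
        collection_interval: 10s
        scrapers:
          cpu:
          memory:
          disk:
          filesystem:
          network:
    processors:
      batch: {}
    exporters:
      otlphttp:
        metrics_endpoint: https://<YOUR-ASCENT-ENV>/v1/metrics
        headers:
          Authorization: Bearer <YOUR-INGEST-TOKEN>
    service:
      pipelines:
        metrics:
          receivers: [hostmetrics]
          processors: [batch]
          exporters: [otlphttp]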

    Verify metrics in the Ascent platform

    1. Click on the green “+ Create” button on the top navigation bar and select Query

    2. In the dropdown menu on the left hand side, select Ascent Metrics

    3. In the search bar, search for system_

      1. This will present all the different system metrics that are being scraped with your OTel configuration

    JSON Data source

    JSON Data source provides a quick and flexible way to issue queries to arbitrary RESTful endpoints that return JSON data.

    Create the JSON Data source

    1. Navigate to Integrations > Data Sources

    2. Click New Data Source

    3. Select JSON

    4. Create the data source

      1. Enter a name for your data source (required)

      2. Enter Basic Authentication credentials (optional)

    Writing queries

    1. Navigate to Queries and click New Query

    2. In the drop-down on your left hand side, select your new data source

    Providing HTTP Options

The following HTTP options can be provided when sending a query; the url parameter is the only required one:

    • url - This is the URL where the RESTful API is exposed

    • method - the HTTP method to use (default: get)

    • headers - a dictionary of headers to send with the request

    Example query:

    Example query including all HTTP options:

    Filtering response data: path and fields

The response data can be filtered by specifying the path and fields parameters. The path filter allows accessing attributes within the response. For example, if a key foo in the response contains rows of objects you want to access, specifying path foo will convert each of the objects into rows.

    In the example below, we are then selecting fields volumeInfo.authors, volumeInfo.title, volumeInfo.publisher and accessInfo.webReaderLink

    The resulting data from the above query is a nicely formatted table that can be searched in Apica Ascent or made available as a widget in a dashboard

    How to Get Started with OpenTelemetry

    SETTING UP THE OPENTELEMETRY COLLECTOR

    Choosing the Right OTEL Components

    Understanding OTEL architecture: OpenTelemetry consists of multiple components, including APIs, SDKs, Collectors, and exporters. Selecting the right components depends on the architecture of your system and the telemetry data you need to collect. Organizations must assess whether they need distributed tracing, metrics, logs, or a combination of all three to achieve complete observability.

    Deployment considerations: Choosing between an agent-based or sidecar deployment model affects resource utilization and scalability. OpenTelemetry provides flexible deployment options that integrate directly into microservices, Kubernetes clusters, and traditional monolithic applications.

    Links for using OpenTelemetry with Apica Ascent:

    OTEL SDKs: SPRING BOOT OPENTELEMETRY AND MORE

    Language-specific SDKs: OpenTelemetry provides official SDKs for multiple programming languages, including Java, Python, JavaScript, Go, .NET, and more. Choosing the correct SDK ensures seamless instrumentation of applications to capture relevant telemetry data without requiring excessive code modifications.

    Automatic vs. manual instrumentation: Many OpenTelemetry SDKs support automatic instrumentation, which simplifies the collection of telemetry data by automatically instrumenting common frameworks and libraries. Manual instrumentation, on the other hand, allows developers to capture more granular details specific to their business logic, providing richer observability insights.

    Configuration and customization: Each OpenTelemetry SDK offers various configuration options, such as sampling rates, exporters, and resource attributes. Understanding these settings helps optimize observability while minimizing overhead on production systems.
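For example, most OpenTelemetry SDKs honor a common set of standard environment variables, so much of this configuration can be applied without code changes. The sketch below shows a hypothetical Kubernetes container spec (the service name, image, and Collector address are placeholders, not Ascent-specific values) setting the exporter endpoint, resource attributes, and a trace sampler:

    containers:
      - name: order-service                               # hypothetical service
        image: registry.example.com/order-service:1.0     # placeholder image
        env:
          - name: OTEL_EXPORTER_OTLP_ENDPOINT
            value: "http://otel-collector:4317"           # assumed Collector address
          - name: OTEL_RESOURCE_ATTRIBUTES
            value: "service.name=order-service,deployment.environment=prod"
          - name: OTEL_TRACES_SAMPLER
            value: "parentbased_traceidratio"
          - name: OTEL_TRACES_SAMPLER_ARG
            value: "0.25"                                  # sample roughly 25% of new traces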

    Collector Setup for Telemetry Aggregation

    Role of the OpenTelemetry Collector: The OpenTelemetry Collector acts as a central hub for processing, filtering, and exporting telemetry data. It eliminates the need to send data directly from applications to multiple backends, reducing the complexity of observability pipelines.

    Collector pipeline configuration: OpenTelemetry Collectors support a pipeline model consisting of receivers (data ingestion), processors (data transformation), and exporters (data forwarding). Configuring these pipelines efficiently ensures that only relevant telemetry data is retained and sent to the appropriate monitoring backends.

    Scalability and performance tuning: Organizations with high-volume telemetry data must optimize Collector performance using batching, compression, and load balancing techniques. Running multiple Collector instances or deploying Collectors at the edge can enhance data aggregation efficiency while minimizing network latency.
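A minimal sketch of this receiver → processor → exporter model is shown below; the backend endpoint and token are placeholders, and real deployments would add more receivers, processors, and exporters as needed:

    receivers:
      otlp:
        protocols:
          grpc:
          http:

    processors:
      batch:
        timeout: 5s

    exporters:
      otlphttp:
        endpoint: https://<your-observability-backend>     # placeholder
        headers:
          Authorization: "Bearer <token>"                   # placeholder

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlphttp]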

    Instrumenting Applications with OTEL

    Manual vs. Automatic Instrumentation

    Understanding the differences: OpenTelemetry offers two approaches to instrumenting applications—automatic and manual instrumentation. Choosing the right approach depends on the level of detail required and the effort an organization is willing to invest.

    Automatic Instrumentation: OpenTelemetry provides auto-instrumentation libraries that hook into commonly used frameworks (e.g., Spring Boot, Express, Flask, Django) to capture telemetry data without modifying application code. This is an easy way to get started and ensures coverage across key application functions with minimal effort. However, automatic instrumentation may not capture business-specific logic or custom events that organizations want to track.

    Manual Instrumentation: With manual instrumentation, developers explicitly insert OpenTelemetry SDK calls into the application code. This provides precise control over what telemetry data is collected and allows capturing custom metrics, business transactions, and domain-specific spans. While more effort is required to implement, manual instrumentation results in richer observability data tailored to an organization’s needs.

    Combining both approaches: Many organizations use a hybrid approach where auto-instrumentation provides baseline observability, and manual instrumentation is used to track critical business operations, unique workflows, or domain-specific logic.

    Injecting Context Propagation Across Services

    Why context propagation matters: In distributed systems, requests travel through multiple services, making it difficult to correlate logs, traces, and metrics. Context propagation ensures that telemetry data remains linked throughout an entire request lifecycle, enabling effective debugging and root cause analysis.

    Using Trace Context and Baggage: OpenTelemetry follows the W3C Trace Context standard, which passes unique trace identifiers across service boundaries. Additionally, baggage propagation allows attaching custom metadata to traces, which can be used for debugging or business analytics.
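For reference, the W3C Trace Context standard propagates this information in a traceparent HTTP header of the form version-traceid-parentid-traceflags; the example value below is the one used in the specification itself:

    traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01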

    Instrumentation strategies: Developers need to ensure that trace context is carried through HTTP requests, gRPC calls, and message queues. OpenTelemetry SDKs provide middleware and client libraries that handle this automatically for popular frameworks and protocols.

    Ensuring compatibility across environments: Organizations using multiple tracing tools should verify that OpenTelemetry context propagation integrates well with existing logging and monitoring solutions, avoiding data fragmentation.

    Exporting Telemetry Data

    OTEL-native Exporters (OTLP, Prometheus, Jaeger, etc.)

    OpenTelemetry Protocol (OTLP): OTLP is the native protocol for OpenTelemetry, offering a standardized and efficient way to transmit telemetry data. It supports traces, metrics, and logs in a unified format, ensuring compatibility with a broad range of observability backends. Organizations using OTLP benefit from reduced complexity and better performance, as the protocol is optimized for high-throughput data collection.

    Prometheus Exporter: OpenTelemetry integrates seamlessly with Prometheus, a widely used open-source monitoring system. The Prometheus exporter allows applications instrumented with OpenTelemetry to send metrics to Prometheus, enabling real-time monitoring and alerting. This is particularly useful for organizations leveraging Prometheus as their primary observability backend.

    Jaeger and Zipkin Exporters: OpenTelemetry supports both Jaeger and Zipkin, two popular distributed tracing backends. These exporters allow organizations to continue using their existing tracing infrastructure while benefiting from OpenTelemetry’s standardized instrumentation. By enabling these exporters, teams can visualize request flows and troubleshoot latency issues effectively.

    Commercial Observability Platforms: Many commercial observability platforms, such as Datadog, New Relic, and Dynatrace, support OpenTelemetry exporters. This ensures that organizations adopting OpenTelemetry can seamlessly integrate their telemetry data into these platforms without vendor lock-in.

    Integrating OTEL with Your Observability Backend (Your Platform)

    Configuring Exporters for Seamless Data Ingestion: OpenTelemetry provides a flexible exporter configuration, allowing organizations to send telemetry data to multiple observability platforms simultaneously. This enables hybrid monitoring strategies where teams can leverage both open-source and commercial solutions for observability.
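As a hedged sketch of that fan-out, a single Collector pipeline can simply list more than one named exporter. The endpoints below are placeholders rather than real product URLs, and the otlp receiver and batch processor are assumed to be defined elsewhere in the config:

    exporters:
      otlphttp/primary:
        endpoint: https://<backend-one>        # placeholder base URL
      otlphttp/secondary:
        endpoint: https://<backend-two>        # placeholder base URL

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlphttp/primary, otlphttp/secondary]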

    Optimizing Data Flow with the OpenTelemetry Collector: The OpenTelemetry Collector can be used as an intermediary layer to receive, process, and export telemetry data efficiently. By implementing batch processing, filtering, and data enrichment, organizations can optimize data flow while reducing unnecessary storage and processing costs.

    Ensuring High Availability and Performance: When integrating OpenTelemetry with an observability backend, organizations should ensure that exporters and collectors are configured to handle high-volume telemetry data. Strategies such as load balancing, horizontal scaling, and adaptive sampling help maintain reliability while keeping infrastructure costs under control.

    Security and Compliance Considerations: Organizations should implement encryption (e.g., TLS) and authentication mechanisms when exporting telemetry data to observability platforms. Ensuring secure transmission prevents unauthorized access and aligns with regulatory requirements.

    Kubernetes is a Game-Changer

    Why Kubernetes is a Game-Changer for Apica Ascent

    As enterprises scale their cloud-native applications, they need an observability platform that can keep up with dynamic workloads, high-velocity data streams, and distributed architectures. Kubernetes has emerged as the foundation for modern observability platforms because it enables infinite scalability, automated resilience, and superior resource efficiency.

    Apica’s Ascent Platform, built on Kubernetes, leverages these advantages to deliver next-generation observability—one that scales on demand, ensures high availability, and optimizes infrastructure resources efficiently.

    Infinite Scale & Elastic Resource Management

    Observability data is massive and continuously growing. Logs, metrics, traces, and events are ingested at an unprecedented scale, especially in high-throughput environments like fintech, telecom, and SaaS platforms.

    Traditional monitoring solutions struggle because they rely on static infrastructure, making it difficult to scale on demand. Kubernetes, on the other hand, provides:

• Horizontal Scalability – Automatically scales observability workloads based on real-time ingestion rates.

    • Dynamic Resource Allocation – Ensures workloads receive the right amount of CPU, memory, and storage.

• Event-Driven Autoscaling – Uses Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) to dynamically adjust observability workloads, as sketched below.
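As a hedged illustration of HPA-based scaling in practice, the sketch below autoscales a hypothetical Collector Deployment on CPU utilization (the otel-collector name, replica counts, and threshold are placeholders, not Apica-specific values):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: otel-collector-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: otel-collector            # placeholder deployment name
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70    # scale out when average CPU exceeds 70%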

    How Apica Leverages Kubernetes for Infinite Scale

    • Auto-Scaling Observability Pipelines – Apica’s OpenTelemetry-based collectors automatically scale based on traffic volume, ensuring consistent performance.

    • Seamless Multi-Cluster Deployment – Apica’s platform runs across Kubernetes clusters in AWS, Azure, GCP, and on-premise for global observability.

    • Optimized Data Processing – High-throughput workloads are distributed across multiple nodes for maximum efficiency and minimal latency.

    Result: Enterprises using Apica Ascent based on Kubernetes can ingest billions of logs, traces, and metrics in real time without worrying about infrastructure limitations.

    Seamless Scaling for Any Environment

    Modern enterprises operate in multi-cloud and hybrid environments, where observability data comes from Kubernetes clusters, virtual machines, serverless functions, and on-premises data centers.

    Kubernetes removes infrastructure constraints by allowing observability workloads to be deployed across any cloud provider or on-prem environment, ensuring consistent visibility across all application layers.

    • Run observability workloads anywhere – On-prem, hybrid cloud, or multi-cloud setups.

    • Unified Observability Across Diverse Environments – Monitor Kubernetes, VMs, and serverless environments in a single platform.

    • Zero Vendor Lock-In – Apica’s platform is built on open standards (OpenTelemetry, Prometheus, Jaeger) and deployable across AWS, Azure, GCP, Oracle Cloud, and private data centers.

    How Apica Enables Seamless Scaling

    • Scalable Resourcing – Kubernetes allows Apica to scale observability workloads for multiple customers or teams without resource contention.

    • Cloud-Agnostic Deployment – Apica’s observability platform runs natively across any cloud provider or on-premises Kubernetes cluster.

    • Unified Observability at Global Scale – Centralized data collection, analytics, and AI-driven insights across all Kubernetes environments.

    Result: Enterprises gain full observability across all environments, whether running in AWS, Azure, GCP, or on-prem.

    Resilience, Fault-Tolerance, and Self-Healing Architecture

    One of Kubernetes' biggest advantages is its self-healing capabilities, ensuring that observability workloads remain highly available and fault-tolerant.

    • Automatic Failover & Pod Recovery – Kubernetes automatically replaces failed observability agents and collectors, ensuring no gaps in monitoring.

    • Load Balancing for Observability Workloads – Kubernetes evenly distributes data ingestion, preventing bottlenecks in observability pipelines.

    • Multi-Region & Disaster Recovery Ready – Kubernetes automates failover between cloud regions, ensuring continued observability in case of outages.

    How Apica Ensures High Availability with Kubernetes

    • Redundant Observability Agents – Apica deploys multiple OpenTelemetry Collectors to prevent data loss during failures.

    • AI-Driven Incident Recovery – Apica’s AI proactively detects infrastructure failures and triggers automated remediation workflows.

    • Built-in Kubernetes Load Balancing – Ensures efficient routing of telemetry data to optimize performance.

    Result: No single point of failure, ensuring continuous observability, even during infrastructure outages.

    Security & Compliance at Scale

    Observability data contains sensitive business insights, requiring enterprise-grade security and compliance measures. Kubernetes provides built-in security capabilities that make it ideal for running observability platforms at scale.

    • RBAC (Role-Based Access Control) – Granular access control for observability workloads.

    • End-to-End Encryption – TLS encryption for telemetry data in transit and at rest.

    • Network Segmentation & Pod Security – Prevents unauthorized access to observability data.

    • Multi-Tenant Observability with Isolation – Ensures customers or teams have secure, isolated access to their data.

    How Apica Delivers Secure & Compliant Observability

    • Secure Observability Pipelines – Data is encrypted at rest and in transit using TLS 1.2+ and AES encryption.

    • Multi-Tenant Data Isolation – RBAC ensures fine-grained access control across teams and business units.

    • Long-Term Retention for Compliance – Apica’s InstaStore™ data lake ensures observability data meets SOC 2, GDPR, HIPAA, and enterprise compliance standards.

    Result: Enterprises gain observability at scale while ensuring full security and compliance.

    Ascent Leverages Kubernetes Key Benefits and Best Practices

    Kubernetes is redefining the observability landscape, enabling platforms like Apica’s Ascent to deliver:

    • Infinite Scalability – Dynamically scales telemetry pipelines to handle billions of logs, traces, and metrics.

    • Global Observability Across Any Environment – Runs on AWS, Azure, GCP, Oracle Cloud, and on-premises Kubernetes clusters.

    • AI-Driven Automation & Self-Healing – Automatically detects and resolves failures, reducing operational overhead.

    • Cost-Optimized & Storage-Efficient – Leverages object storage (S3, Azure Blob, Ceph) to eliminate unnecessary costs.

    Getting Started with Flow

    This guide provides a walkthrough of configuring your data pipelines using Ascent's Flow solution.

    Quick Start Guide for Using Ascent's Flow Solution to Optimize Your Data Pipelines.

    This guide will teach you how to use the Flow solution to optimize your data pipelines. You will learn how to create a processing pipeline that filters unnecessary data from your logs, reducing storage costs. Finally, you will learn how to route that streamlined data to a downstream observability platform, such as Datadog.

    For a full video walkthrough, please refer to our video guide:

    Let's begin.

    Prerequisite: Make sure to have logs ingested into the Ascent platform before getting started.

    Step 1: Create A New Pipeline:

    Go to -> Explore -> Pipeline

    Click -> Actions -> Create Pipeline

    Enter a name for the new Pipeline and press "Create"

    Step 2: Create A Filter Rule:

    Click on the 3 dotted lines menu for the Pipeline you created

    • Click on "Configure Pipeline"

    Click on "Add Rule" -> FILTER

    Enter mandatory fields:

    • Name / Group

    Click on "Drop Labels"

    Then, click on "Data Flows"

    Next, you will select what labels you want to drop.

    Enter the labels you want to drop on the left hand side as shown below:

    To preview the changes, go to the right-hand side and click "Pipeline Preview" -> "Run Pipeline"

    Click "Save Pipeline"

    Next, "Apply Pipeline" by clicking on the 3 dot menuand clicking "Apply Pipeline"

Then, select the namespace and logs you want to apply the new FILTER rule to (in this case, we are applying it to our "OtelDemo" logs)

    Click "Apply"

Next, create a Forwarder (Datadog in this example) to push our filtered OTel logs downstream to another observability platform.

    Click on "Integrations" -> "Forwarders"

    Step 3: Create A Forwarder (DataDog in this example):

    Click on "Add Forwarder" and select your destination (Datadog in our example)

    Then, copy over the "DataDog (JSON) configs as shown below:

    Buffer_size: 16000

    Host: app.datadog.com

    Tags: logs

    Type: JSON

    Name: Datadog Forwarder

    Click "Create"

    Step 4: Assign the Forwarder to the Logs:

    Next, go back to "Pipelines" and

    Click on "Map Forwarder" from the 3 dot menu:

    Select the "DataDog Forwarder" that you created and click OK:

    Step 5: Verify Data in Destination (Datadog in this example):

    Go to your Datadog dashboard and verify data coming in as expected:

    As you can see, data ingestion has decreased after the FILTER rule was applied:

    Conclusion:

    By following this guide, you have learned how to successfully use Flow to manage and optimize your telemetry data. You now know how to build a data pipeline that filters unneeded fields, drops irrelevant log messages entirely, and forwards the clean, cost-effective data to a downstream platform like Datadog. Applying these techniques allows you to significantly reduce observability costs while maintaining cleaner and more efficient data pipelines.

    Checks

    How to Use the Ascent Checks Data Source to Query Checks

    Follow the steps below to create and execute a query using the Ascent Checks Data Source.


    1. Go to the Queries Page

    • Navigate to the queries page in your dashboard to begin

    • In the Queries page, click on the "New Query" button to start a new query

    2. Select "Ascent Checks" from the Left Sidebar

    • On the left sidebar, click on Ascent Checks. This will display a list of all available checks that you can query.

    3. Expand a check to uncover more details

    • From the list, expand the check you want to query by clicking on it. This will show more details about the check.

    4. Append the CheckID to the Query Editor to consume this check.

    • Click on the right arrow next to the check id to append it to the query editor

    5. Add Duration or Start/End Date

    • To use a specific time range, enter the start and end times as Unix epoch values.

    • To query relative durations, use the duration option with a human-readable format (e.g., 1d for one day, 2h for two hours, etc.)

    • Example:

    6. Execute the Query

    • Once your query is complete, click on Execute to run the query and see the results.

    Check Data Source Query Options

    The query for the Ascent Checks Data Source is written in YAML format. The following options are supported:

    Synthetic Monitoring

    ASM 13.24 Public Release Notes (2024-04-12)

    User Story Enhancements

    Creating Data Sources in Apica Ascent

    Apica Ascent allows you to connect multiple data sources for unified observability across logs, metrics, checks, and reports. Follow the steps below to configure basic data sources to interact with metrics, logs, checks and reports.


    1. Ascent Logs

    Purpose: Enables access to logs coming in different dataflows.

    Steps:

    Dashboards & Visualizations

    Ascent has a robust dashboard capability which provides numerous methods to visualize your critical data - across metrics, events, logs, and traces. You can visualize and detect anomalies, and get notified before any potential incident.

    Creating a Dashboard

• Expand Create from the navbar and click Dashboard. A popup will be displayed prompting for the dashboard name.

    Pull Mechanism via Push-Gateway

You can use the connector provided by Apica Ascent via GitHub to push Apache Beam metrics to Push-Gateway.

    Setting up Push-Gateway via Docker (recommended)

    In order to set up push-gateway, just run the provided docker image.

You'll now have an instance of push-gateway running on your machine; you can verify this by running the command below.

    Once the instance is up and running, we can then specify it in our prometheus.yaml config file.

    Ascent 2.8.0

    Release Notes - Ascent October 2024

Overview

    This release introduces a host of updates to enhance user experience, streamline operations, and address known issues across Fleet, Data Explorer, the Ascent platform, and ASM+. New features and improvements focus on usability, performance, and customization, while bug fixes enhance platform stability and reliability.


    helm install apica --namespace apica-data-fabric apica-repo/apica 
    helm install apica --namespace apica-data-fabric apica-repo/apica -f values.yaml

• Built-In Security & Compliance – Ensures full encryption, RBAC access control, and regulatory compliance.

    New Features and Enhancements

    OpenTelemetry

    OpenTelemetry Collectors can now be configured to use the standard ingest endpoints when pushing data to Apica Ascent

    1. Traces - /v1/traces

    2. Logs - /v1/logs

    3. Metrics - /v1/metrics

    Telemetry Pipelines

    1. New forwarders added for Oracle Cloud

      • OCI Buckets

      • OCI Observability & Monitoring - Logs

    Freemium Support

Experience Apica Ascent with the Freemium release. The Freemium is a FREE FOREVER release which includes all the capabilities of the Apica Ascent Intelligent Data Management Platform, available as a convenient SaaS offering:

    1. Fleet Management

    2. Telemetry Pipelines with support for platforms such as Splunk, Elasticsearch, Kafka, Datadog among others

    3. Digital Experience Monitoring (Synthetic Monitoring for URL, Ping, Port and SSL Checks)

    4. Log Management

    5. Distributed Tracing

    6. Infrastructure Monitoring

    7. Enterprise ready with features such as SAML based SSO

    8. ITOM integration with platforms such as PagerDuty, ServiceNow, and OpsGenie

    Fleet Updates

1. Agent Management:

      • Introduced controls for managing agents within the Fleet UI for better administration.

      • A summary table was added to display agent statistics, providing quick insights.

      • Enabled rules for assigning configurations or packages to agents.

      • User-defined Fleet resource types (rules, alerts, agent_types, configurations, and packages) can now be imported via Git.

      • Fleet REST API search endpoints now support the ?summary query parameter for result summarization.

      • Expanded fleetctl CLI tool capabilities to manage Fleet API resources directly.

    2. Advanced Search and Customization: - Users can save and retrieve advanced search queries in the Fleet Advanced Search Modal.

    Data Explorer Enhancements

1. Improved Analytics Options:

      • Added support for PostgreSQL, expanding data integration capabilities.

      • Enhanced GroupBy functionality and a “Select All” label for better data analysis.

      • Enabled parameterized queries for dashboards, allowing dynamic user input for real-time customization.

      • Users can edit the dashboard header query and set the dropdown type (query, enum, or text) for customization.

2. Visualization Improvements:

      • Introduced a DenseStatusType chart to monitor active and inactive pods/instances in real time.

      • Added time zone customization for chart displays.

      • Optimized dark theme UI components with updated icons and design assets.

    Ascent Platform Enhancements

1. ASM UI Enhancements:

      • Integrated repository and certificate management for streamlined admin controls.

      • Implemented a persistent last-view setting on the Check Management page.

    2. General Improvements: - Enhanced navigation with streamlined redirection flows for faster page loads.

    AI/ML and GenAI Enhancements

1. Pattern-Signature Processing:

      • Improved compaction with meaningful aliasing during pattern-signature (PS) merging.

      • Enhanced performance through PS coding representation for faster processing and UI responsiveness.

      • Fixed functionality for PS compaction at the backend.

    2. GenAI Features: - GenAI document search functionality was added to the NavBar.


    Bug Fixes

    Fleet UI and Backend Fixes

1. UI and Agent Issues:

      • Resolved banner display inconsistencies during agent updates.

      • Fixed errors in anonymous report generation for Grafana Alloy.

      • Fixed agent-manager token refresh failures on Windows hosts.

2. Backend and API:

      • Fixed errors preventing default configuration/package assignments via the install endpoint.

      • Resolved OpAMP client failures and Windows socket exhaustion issues.

      • Corrected lookup errors for agents by instance ID during OpAMP registration.

    Data Explorer Fixes

1. Performance and Stability:

      • Resolved crashes on the Data Explorer page.

      • Corrected schema issues and bugs affecting *-based queries and widget calculations.

      • Fixed default date-type inputs and adjusted other input defaults for smoother workflows.

    2. UI Updates: - Fixed CSS and overflow issues in modals and alert render pages.

    General UI and Usability Fixes

    • Resolved usability regressions from the v3.11.2 update, improving input defaults and widget updates.


    Miscellaneous Enhancements

    1. Fleet-Specific Improvements: - Improved response times in Fleet views for queries involving large datasets.

    2. Ascent Platform: - Resolved permission issues for non-admin users in the Namespace endpoint.


    These updates reflect our commitment to delivering a robust and user-friendly platform. As always, we value your feedback to enhance our services further.

    Sorting by scenario name is now accurate.

    Check Execution Container: Runbin

    runbin-2025.04.17-0-base-2.2.1

    Check Execution Container: Postman

    postman-2025.04.17-0-base-1.4.1

    Bnet (Chrome Version)

    10.2.2 (Chrome 130)

    Zebratester

    7.5A

    ALT

    6.13.3.240

    IronDB

    1.5.0

    Getting Started with OpenTelemetry
    OpenTelemetry Integration for Ascent

• Start: 1609459200 (Unix epoch for the start time)

  • End: 1609545600 (Unix epoch for the end time)

  • Duration: 1d (relative to the current time)

  • check_id (Mandatory): The check_id refers to the checkguid of the check. You can find this in the sidebar when expanding a check.

  • start (mandatory if no duration): The start time, provided as a Unix epoch value, defines the beginning of the time range for your query.

  • end (mandatory if no duration): The end time, also in Unix epoch format, defines the end of the time range for your query.

  • duration (mandatory if no start/end): A human-readable format for relative durations. It supports the following units:

    • s for seconds

    • m for minutes

    • h for hours

    • d for days Example: 2d for two days ago, 1h for one hour ago.

  • limit(optional): The limit option allows you to specify the maximum number of check results to retrieve. This helps to control the size of the query results.


  • Query Execution Notes

    • By default, results are sorted by time in descending chronological order.
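Putting the options above together, a query body might look like the following sketch (the check_id value is a placeholder; use either duration or a start/end pair, not both):

    check_id: 00000000-0000-0000-0000-000000000000   # placeholder checkguid from the sidebar
    duration: 1d                                     # or supply start/end as Unix epoch values
    limit: 100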

  • JS Code Forwarder is a robust batch processing tool designed to efficiently handle and forward batches of events. It supports forwarding arrays of event objects to a specified endpoint, and includes built-in functions for recording metrics, making HTTP requests, and logging. https://logflow-docs.logiq.ai/forwarding-to-monitoring-tools/js-code-forwarding

  • AWS XRay Forwarder. This allows users to send trace data to AWS XRay.

• Alert page search. Ability to search across all existing Alerts using the central search bar within the Alert list view.

• Search improvements in ASM+. Searches by location, severity, type, and checkID are now supported. Search is also much faster thanks to parallel queries.

  • Improved waterfall chart in ASM+ analysis view.

• Improved the usability of enabling/disabling pattern signatures.

• Brought back scenario commands and request/response headers for FPR checks in ASM+.

• Fix the bug where histogram rollup shards were sometimes not being deleted even though they were past the retention window.

  • Add checks for timeouts in the data journal path where they were missing.

  • Improve graphite PUT error messages.

  • https://docs.apica.io/data-explorer/overview
    https://docs.apica.io/data-management/code
    https://docs.apica.io/fleet/fleet

    <your_token>

    • Your ingest token, see how to obtain your ingest token

  • <namespace>

    • A name for high-level grouping of logs, isolating different projects, environments, or teams.

  • <application>

    • A name for logs generated by a specific service or process

  • line_start_pattern

• The above example uses a regex to match on the timestamp of a log entry to capture the entire entry. This needs to be adjusted to match the beginning of your log structure. See the example below of entries that match this pattern.

  • sudo dpkg -i otelcol-contrib_0.115.1_linux_amd64.deb

  • RHEL-based

    1. sudo dnf update -y

    2. sudo dnf install -y wget

    3. wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.115.1/otelcol-contrib_0.115.1_linux_amd64.rpm

    4. sudo rpm -ivh otelcol-contrib_0.115.1_linux_amd64.rpm

  • Follow this guide on how to obtain your ingest token - https://docs.apica.io/integrations/overview/generating-a-secure-ingest-token

    You can click any of the metrics directly to insert it into the query text, and hit execute to see the latest metrics.

• auth - basic auth username/password (should be passed as an array: [username, password])

  • params - a dictionary of query string parameters to add to the URL

  • data - a dictionary of values to use as the request body

  • json - same as data except that it’s being converted to JSON

  • path - accessing attributes within the response

• fields - rows of objects within the selected attribute

  • View the sample prometheus.yaml file below.

    Great, now you will have Prometheus scraping the metrics from the given PushGateway endpoint.

    Setting up Apache Beam to export the Metrics to Push-Gateway

    Now that you have configured the push-gateway and Prometheus, it's time that we start configuring the Apache Beam Pipeline to export the metrics to the Push-Gateway instance.

    For this, we will refer to the tests written in the Connector here.

The metrics() method is responsible for sending the metrics to the given push-gateway endpoint. Once the pipeline has been modeled, we are ready to view the result. We can access the metrics of the PipelineResult at PipelineResult.metrics; now just pass this to the PushGateway class with the correct endpoint and call the write() method with the metrics.

Hooray, you have successfully pushed your Apache Beam metrics to Push-Gateway. These metrics will shortly be scraped by Prometheus and you will be able to access them.

    You can check your results on Push-gateway Instance and Prometheus Instance.

    Advanced

    If you want to apply any transformations other than the default transformers, you can specify the functions with withCounterTransformer, withDistributionTransformer, withGaugeTransformer provided by the PushGateway class. This allows you to perform complex operations and achieve granularity within your metrics.

    Connector
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.121.0/otelcol-contrib_0.121.0_linux_amd64.deb
    dpkg -i otelcol-contrib_0.121.0_linux_amd64.deb
    wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.121.0/otelcol-contrib_0.121.0_linux_amd64.rpm
    rpm -ivh otelcol-contrib_0.121.0_linux_amd64.rpm
    receivers:
      filelog:
        include: ["<your_log_file_path>"]
        multiline:
          line_start_pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}'
    
    processors: 
      batch:
        timeout: 5s
    
    exporters:
      debug:
        verbosity: detailed   
      otlphttp:
        logs_endpoint: https://<your_domain>/v1/json_batch/otlplogs?namespace=<namespace>&application=<application>
        encoding: json
        compression: gzip
        headers:
          Authorization: "Bearer <your_token>"
        tls:
          insecure: false
          insecure_skip_verify: true
    
    service:
      pipelines:
        logs:
          receivers: [filelog]
          processors: [batch]
          exporters: [debug, otlphttp]
    2000-00-00 00:00:00,000 INFO  [xxx] process1: message
    
    2000-00-00 00:00:00,000 INFO  [xxx] process2: message
    
    2000-00-00 00:00:00,000 ERROR [xxx] process3: Exception: xyz
    java.lang.xxx: message
    	at java.base
    	at java.base
    	at java.base
    	at java.base
    	at java.base
    	at java.base
    	at java.base
    	
    #### The entire stack trace will be captured as a single entry, based on the line_start_pattern
    otelcol-contrib validate --config=/etc/otelcol-contrib/config.yaml
    systemctl restart otelcol-contrib
    receivers:
      hostmetrics:
        collection_interval: 10s
        scrapers:
          cpu:
            metrics:
              system.cpu.utilization:
                enabled: true
          load:
          memory:
          filesystem:
          network:
          disk:
          paging:
          processes:
    
    processors:
      batch:
        timeout: 5s
    
    exporters:
      debug:
        verbosity: detailed
      prometheusremotewrite:
        endpoint: https://<YOUR-ASCENT-ENV>/v1/receive/prometheus
        headers:
          Authorization: Bearer <YOUR-INGEST-TOKEN>
        tls:
          insecure: false
          insecure_skip_verify: true
    
    service:
      pipelines:
        metrics:
          receivers: [hostmetrics]
          processors: [batch]
          exporters: [prometheusremotewrite, debug]
    otelcol-contrib.service - OpenTelemetry Collector Contrib
         Loaded: loaded (/usr/lib/systemd/system/otelcol-contrib.service; enabled; preset: enabled)
         Active: active (running) since Tue 2024-11-19 15:29:59 UTC; 9s ago
       Main PID: 26248 (otelcol-contrib)
          Tasks: 8 (limit: 4630)
         Memory: 33.1M (peak: 33.7M)
            CPU: 98ms
         CGroup: /system.slice/otelcol-contrib.service
                 └─26248 /usr/bin/otelcol-contrib --config=/etc/otelcol-contrib/config.yaml
    
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]:      -> Description: Total number of created processes.
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]:      -> Unit: {processes}
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]:      -> DataType: Sum
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]:      -> IsMonotonic: true
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]:      -> AggregationTemporality: Cumulative
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]: NumberDataPoints #0
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]: StartTimestamp: 2024-11-18 10:25:54 +0000 UTC
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]: Timestamp: 2024-11-19 15:30:00.536392834 +0000 UTC
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]: Value: 26262
    Nov 19 15:30:04 otel-testing otelcol-contrib[26248]:         {"kind": "exporter", "data_type": "metrics", "name": "debug"}
    url: https://www.googleapis.com/books/v1/volumes?q=isbn:0747532699
    path: items
    fields: ["volumeInfo.authors","volumeInfo.title","volumeInfo.publisher","accessInfo.webReaderLink"]
    url: https://httpbin.org/post
    method: post
    headers: {"User-Agent": "Test", "Accept": "*/*"}
    auth: [username, password]
params: {"q": "myQuery"}
    json: {"this": "is", "my": {"json":"body"}}
    path: json
    fields: ["my.json"]
    url: https://www.googleapis.com/books/v1/volumes?q=isbn:0747532699
    path: items
    fields: ["volumeInfo.authors","volumeInfo.title","volumeInfo.publisher","accessInfo.webReaderLink"]
    docker pull prom/pushgateway
    docker run -d -p 9091:9091 prom/pushgateway
    docker ps
    scrape_configs:
- job_name: 'pushgateway'
        scheme: http
        static_configs:
        - targets: ['localhost:9091']
    package logiqio
    
    import org.apache.beam.sdk.Pipeline
    import org.apache.beam.sdk.transforms.Create
    import org.apache.beam.sdk.metrics.Metrics
    import org.apache.beam.sdk.transforms.DoFn
    import org.apache.beam.sdk.transforms.ParDo
    import kotlin.test.Test
    
    class ApplyMetrics : DoFn<LogiqEvent, LogiqEvent>() {
        private var counter = Metrics.counter("Pipeline Metrics", "logiq_events_processed");
    
        @ProcessElement
        fun processElement() {
            counter.inc()
        }
    }
    
    class LibraryTest {
           @Test fun metrics() {
            val pipeline = Pipeline.create()
    
            val elems = List(1029) {
                LogiqEvent("ns$it", "$it Events occurred", it, "host-$it", "process-$it", "app-$it", "cos$it")
            }
    
            pipeline
                .apply("Create", Create.of(elems))
                .apply(ParDo.of(ApplyMetrics()))
    
            val result = pipeline.run()
            val metrics = result.metrics()
    
            PushGateway("http://localhost:9091/metrics/job/test").write(metrics)
        }
    }
    

    Updated the Compound Check type to run on the latest infrastructure

  • Added a new supported Selenium IDE command, setLocation

  • Added missing attributes to the response bodies of the /users and /users/{user_guid} API GET request endpoints

• Added several new ASM commands to the ASM Manage Scenarios front end. See the supported Selenium IDE commands article for a complete list. Now, all of the commands listed in that article are available in the ASM Edit/Debug Scenarios page

    Tasks

    • ASM users now have the option to disable automatic page breaks when creating Browser checks:

    Bug Fixes

    • Fixed an issue in which checks were not correctly saved when an incorrect inclusion/exclusion period was used and the user was not notified of a reason. After the fix, users will be notified explicitly if their inclusion/exclusion period is incorrect.

    • Fixed an issue which prevented custom DNS from being used on the latest infrastructure

    • Fixed an issue which prevented an error message from being generated and displayed in the event that auto refresh fails to refresh a Dashboard.

    • Fixed an issue which prevented Power Users who had limited editing permissions from saving checks. For instance, Power Users who could edit only the name, description, and tags of a check could not save the check after doing so. The bug fix resolved this issue.

    • Fixed the following API call: which was returning a 500 server error previously.

    • Fixed an issue with certain checks which prevented Request & Response Headers from showing correctly within the Check Details page:

    • Fixed an issue which prevented API calls from returning correct responses when a new user’s time zone was not set

• Fixed an issue which prevented spaces from being used in the “accepted codes” field for a URLv2 check:

    • Updated API documentation for URL, URLv2 checks to include acceptable "secureProtocolVersion" values

    • Fixed an issue with Ad Hoc report generation for certain users

    • Fixed issues which prevented Command checks from being created or fetched via the ASM API.

    Epic

    • Disabled the option to select "Firefox" on browser checks

    • Disabled location information in the API for deprecated checks

    • Disabled old Chrome versions when creating a Chrome check

    • Disabled location information in the API for deprecated Chrome versions

    • Disabled deprecated check types from the "create new check"

    • Disabled deprecated check types from the integration wizard

    • Disabled API endpoint for URLv1 checks

    • Disabled API endpoint for Command v1 checks

    • Disabled deprecated check types from /checks/command-v2/categories

    • Disabled deprecated browser version from /AnalyzeUrl

    • Replaced Firefox with Chrome when creating an iPhone, iPad, or Android Check in New Check Guide

    • Removed deprecated check versions as options from the Edit Scenario page

    • Disabled AppDynamics check types from the integration wizard

    Read previous Release Notes, go to: Knowledge Base


    On Premise ASM Patch 13H.4 Public Release Notes (2024-04-19)

    User Story Enhancements

    • Added the ability to add/edit “Accepted Codes”, “Port Number” and all “Secure Protocol Versions” for URLv1 checks via the ASM API. API documentation was updated to reflect the new functionality.

    • Added SNI (Server Name Indication) support for URLv1 checks

    Bug Fixes

    • Fixed an issue which prevented Power Users with limited check editing permissions from saving checks after performing edits.

    Read previous Release Notes, go to: Knowledge Base

• Navigate to Integrations → Data Sources.
  • Click Add New Data Source.

  • Select Logs from the available data source types.

  • In the Name field, enter:

  • Click Save.

  • After saving, click Test Connection.

    • Expected Result: Success


2. Ascent Reports

    Purpose: Enables access to creating reports in Ascent.

    Steps:

    1. Navigate to Integrations → Data Sources.

    2. Click Add New Data Source.

    3. Select Reports.

    4. In the Name field, enter:

    5. Click Save.

    6. Click Test Connection.

      • Expected Result: Success


    3. Ascent Checks

    Purpose: Connects to Ascent's synthetic check results for analysis and visualization.

    Steps:

    1. Navigate to Integrations → Data Sources.

    2. Click Add New Data Source.

    3. Select Checks.

    4. In the Name field, enter:

    5. Click Save.

    6. Click Test Connection.

      • Expected Result: Success


    4. Ascent Metrics

    Purpose: Integrates the Prometheus endpoint used by Ascent to access metrics.

    Steps:

    1. Navigate to Integrations → Data Sources.

    2. Click Add New Data Source.

    3. Select Apica Ascent Prometheus.

    4. In the Name field, enter:

    5. In the Apica Prometheus API URL field, enter:

      • Replace <namespace> with your environment’s Kubernetes namespace (for example: apica-thanos-query:9090).

    6. Click Save.

    7. Click Test Connection.

      • Expected Result: Success


    Verification

    After completing all the configurations:

    • Each data source will appear under Integrations → Data Sources.

    • Dashboards and queries can now utilize these sources for visualization and alerting.

  • Enter a name for the dashboard.

  • Click on Create. You will be navigated to your new dashboard.

Creating a Widget

A newly created dashboard is blank and has no widgets. Widgets are created on top of queries. If you don't have any queries created, please follow the documentation for queries to create one.

    • Click Add Widget button at the bottom of the page.

    • Select a query that you want to visualize.

    • Select a visualization for the selected query.

    • Click Add to Dashboard.

    • Click Apply Changes.

    Adding widgets to the Dashboard

    Steps to add a widget:

    • Navigate to the dashboard for which you need to add a widget.

    • Click the More Options icon in the top right corner of the dashboard page.

    • Click edit from the dropdown.

    • Click the Add Widget button at the bottom of the page.

    • Select a query that you want to visualize.

    • Select a visualization for the selected query.

    • Click Add to Dashboard.

    Publish your Dashboard

    To publish, simply click the publish button on the top right corner of the dashboard page. After your dashboard is published, you can share it with anyone using the share option.

    Build an auto-refreshing Dashboard

    The dashboard widgets execute the queries and visualize the results. You can configure all the widgets to automatically refresh to get the latest results.

    Steps to make an auto-refreshing dashboard:

    • Navigate to any dashboard.

    • Click the down arrow button in the refresh button, which is available in the top right corner.

    • Select the time interval in which all the widgets in the dashboard will be refreshed automatically.

    • Now, the dashboard widgets will be refreshed on every selected time interval.

    Using Pre-defined Dashboards

You can get more out of the monitoring dashboard when it monitors various aspects of your target. Building that kind of dashboard with trickier queries can be time-consuming and delay the insight you get into your application and infrastructure.

    We help you to build a viable dashboard with a few clicks by providing you with pre-defined dashboards for some of the most common use cases.

    Importing a Dashboard

    • Expand the dashboard option from the navigation bar.

    • Click on the Import dashboard.

    • You will be navigated to the import dashboard page, where you will be provided with some of the pre-defined dashboards.

    • Click the import button for the dashboard.

• A pop-up will be displayed asking you to provide the dashboard name and the data source that will be used by the queries in the dashboard widgets.

    • After providing the inputs, click Import. You will be navigated to the dashboard page.

    Import Dashboard

    Apica Ascent also includes a Grafana dashboard import section where popular Grafana dashboards can be directly imported into Apica Ascent. See the section on Grafana Dashboard import for how to use that capability.

    Importing Grafana Dashboards

Grafana is an open-source tool for building monitoring and visualization dashboards. It has a public repository with thousands of dashboards published and maintained by its community, used by millions of people to monitor their infrastructure.

We provide some popular dashboards from their public repository for you to import and use.

    Steps to Import Grafana Dashboard

    • Navigate to the import dashboard page.

    • Click the Import Button under the Grafana dashboard.

    • Select the type of target that you want to monitor. You will be provided with the list of dashboards available for the selected target.

    • Click the view button to get details of that dashboard.

    • Click select to import the dashboard.

    • Provide a name for the dashboard and select the datasource that will be used by the widgets.

    • Click Import. You will be redirected to the dashboard.

    Grafana Dashboard Import

    Our supported monitoring targets include:

    • FluentBit

    • Go Application

    • Kafka

    • Kubernetes

    • Redis

    • Postgres

    • Prometheus

    • Node

    Best Practices for OpenTelemetry Implementations

    Standardizing Instrumentation Across Teams

    Importance of consistency in observability: Ensuring that all teams follow a standardized approach to instrumentation is crucial for maintaining a reliable and actionable observability strategy. Without consistency, correlating telemetry data across services becomes challenging, leading to blind spots in monitoring and troubleshooting.

    Collaborative approach to instrumentation: Organizations should establish cross-functional teams that include developers, SREs, and platform engineers to define and implement observability standards. This ensures alignment on best practices and reduces redundant or conflicting telemetry data collection.

    Continuous improvement and governance: Standardization should not be a one-time effort. Organizations should regularly review and refine their observability practices to adapt to evolving business needs, new technologies, and OpenTelemetry updates.

    Links for using OpenTelemetry with Apica Ascent:

    Creating an Internal Instrumentation Policy

    Defining clear guidelines for telemetry data collection: Organizations should document best practices for collecting, processing, and exporting telemetry data. This includes specifying which types of data (traces, metrics, logs) should be collected for different applications and environments.

    Ensuring minimal performance impact: Instrumentation policies should balance comprehensive observability with system performance. Teams should implement sampling strategies, rate limiting, and filtering mechanisms to prevent excessive data collection from impacting application performance.

    Establishing ownership and accountability: Clear guidelines should specify who is responsible for instrumenting different parts of the system. Assigning ownership ensures that observability is an integral part of the development and operational lifecycle rather than an afterthought.

    Automating instrumentation where possible: Using automatic instrumentation libraries and OpenTelemetry’s SDKs can help enforce consistent observability standards with minimal manual effort. Automation reduces the likelihood of human errors and ensures that new services are consistently instrumented from day one.

    Establishing Naming Conventions for Spans and Metrics

    Consistent span naming for improved traceability: Using a structured and descriptive naming convention for spans ensures that distributed traces are easy to interpret. Naming should follow a hierarchical structure that includes service name, operation type, and key function details (e.g., order-service.db.query instead of queryDB).

    Standardized metric naming for cross-team compatibility: Metric names should follow a standardized format that aligns with industry best practices. This includes using prefixes for different metric types (http_request_duration_seconds for latency metrics) and ensuring clear labels for filtering and aggregation.

    Using semantic conventions: OpenTelemetry provides semantic conventions for naming spans, attributes, and metrics. Adhering to these standards improves interoperability and makes it easier to integrate OpenTelemetry data with third-party observability tools.

    Documenting naming conventions for long-term consistency: Organizations should maintain a centralized documentation repository outlining agreed-upon naming conventions and examples. This ensures that new teams and developers can easily adopt and follow established best practices.

    Optimizing OpenTelemetry Collector Performance

    Managing Memory and CPU Overhead

    Efficient resource allocation: OpenTelemetry Collectors process a large volume of telemetry data, making it essential to allocate adequate CPU and memory resources. Organizations should assess their workloads and set appropriate limits to prevent excessive resource consumption that could degrade system performance.

    Using lightweight configurations: To optimize resource usage, organizations should enable only necessary receivers, processors, and exporters. Disabling unused components minimizes CPU and memory overhead, improving overall efficiency.

    Load balancing Collectors: Deploying multiple Collector instances in a load-balanced configuration helps distribute processing across nodes, reducing bottlenecks and ensuring high availability. This is particularly important for large-scale deployments handling massive telemetry data volumes.

    Monitoring Collector performance: Continuously tracking Collector resource usage through built-in metrics helps teams identify performance bottlenecks and optimize configurations. Organizations can set up alerts for CPU spikes, memory saturation, and dropped telemetry events to maintain system stability.
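As a hedged example of bounding resource usage, the Collector's memory_limiter and batch processors can be given explicit limits. The numbers below are illustrative, not recommendations, and the otlp receiver and otlphttp exporter are assumed to be defined elsewhere in the config:

    processors:
      memory_limiter:
        check_interval: 1s
        limit_mib: 1024            # hard ceiling on Collector memory
        spike_limit_mib: 256       # headroom for short bursts
      batch:
        send_batch_size: 8192
        timeout: 5s

    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: [memory_limiter, batch]   # memory_limiter should run first in the chain
          exporters: [otlphttp]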

    Implementing Batch Processing and Sampling Strategies

    Batch processing for efficiency: Instead of sending individual telemetry events, OpenTelemetry Collectors support batch processing to aggregate and compress data before transmission. This reduces network overhead and optimizes performance while ensuring minimal data loss.

    Adaptive sampling techniques: Organizations can use head-based and tail-based sampling techniques to limit the volume of telemetry data collected without losing critical observability insights. Tail-based sampling allows prioritizing high-value traces while discarding less useful data, improving cost efficiency.

    Configuring sampling rates based on workload: Setting appropriate sampling rates based on application traffic patterns prevents excessive data ingestion while retaining sufficient observability coverage. Dynamic sampling strategies can adjust rates in real-time based on system health and alert conditions.

    Ensuring data integrity with intelligent filtering: Organizations can filter and enrich telemetry data within OpenTelemetry Collectors, ensuring that only relevant metrics, logs, and traces are stored. This reduces storage costs and improves the relevance of observability data for troubleshooting and performance optimization.
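A minimal sketch of head-based sampling combined with batching is shown below; the sampling percentage and batch sizes are illustrative, and the otlp receiver and otlphttp exporter are assumed to exist elsewhere in the config:

    processors:
      probabilistic_sampler:
        sampling_percentage: 10     # keep roughly 10% of traces
      batch:
        send_batch_size: 4096
        timeout: 10s

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [probabilistic_sampler, batch]
          exporters: [otlphttp]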

    Ensuring Data Security and Compliance

    Masking Sensitive Data in Traces and Logs

    Understanding the risks of exposed telemetry data: Logs and traces often contain sensitive information such as user credentials, API keys, personally identifiable information (PII), and payment details. If not properly handled, this data can be exposed in observability pipelines, leading to security breaches and compliance violations.

    Implementing data masking and redaction: Organizations should establish policies for automatically redacting or masking sensitive data before it is ingested into logs or traces. OpenTelemetry allows for processors to be configured to scrub sensitive fields, ensuring that only anonymized data is transmitted.

    Using attribute-based filtering: OpenTelemetry provides mechanisms to filter telemetry attributes before they reach a storage backend. By defining attribute allowlists and blocklists, teams can prevent the transmission of confidential information while preserving necessary observability data.

    Enforcing encryption in transit and at rest: All telemetry data should be encrypted both in transit (e.g., using TLS) and at rest within storage systems. This ensures that intercepted data cannot be accessed by unauthorized entities.

    Compliance with industry regulations: Many industries require specific security practices, such as GDPR's data minimization principle and HIPAA’s de-identification requirements. By implementing structured masking and redaction policies, organizations can align with these regulatory standards while maintaining robust observability.
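One hedged way to express such scrubbing in the Collector is the attributes processor, which can delete or hash named attributes before export; the attribute keys below are examples, not a complete PII list:

    processors:
      attributes/scrub:
        actions:
          - key: http.request.header.authorization
            action: delete              # drop credentials entirely
          - key: user.email
            action: hash                # preserve correlation, hide the raw value
          - key: credit_card_number
            action: delete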

    Applying Role-Based Access Control (RBAC) for Telemetry Data

    Defining access levels for different roles: Not all users need access to all telemetry data. Organizations should define clear RBAC policies that grant varying levels of access based on job responsibilities. For example, developers may only need application performance data, while security teams require access to audit logs.

    Segmenting telemetry data by sensitivity: Logs, traces, and metrics can be categorized based on their sensitivity levels. By assigning access controls to different categories, organizations can prevent unauthorized personnel from accessing highly sensitive information.

    Using authentication and authorization mechanisms: OpenTelemetry integrates with identity management systems to enforce authentication and authorization. Implementing Single Sign-On (SSO), multi-factor authentication (MFA), and API key restrictions ensures that only authorized users and services can access telemetry data.

    Auditing and monitoring access logs: Continuous monitoring of who accesses telemetry data helps detect unauthorized access attempts. Audit logs should track all interactions with observability data, including user actions, query requests, and data exports.

    Automating policy enforcement with infrastructure as code: RBAC policies should be defined in infrastructure as code (IaC) templates to ensure consistency across deployments. By automating role assignments and access restrictions, organizations can enforce security best practices at scale.
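As one illustration of RBAC expressed as code, assuming the observability components run on Kubernetes and access is segmented by namespace; the role, namespace, and group names below are hypothetical.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: telemetry-reader
  namespace: observability
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]   # read-only access to workloads and their logs
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-telemetry-reader
  namespace: observability
subjects:
  - kind: Group
    name: developers                  # hypothetical IdP group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: telemetry-reader
  apiGroup: rbac.authorization.k8s.io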

    Avoiding Common Pitfalls

    Over-instrumentation Leading to High Overhead

    Understanding the risks of excessive instrumentation: Instrumenting every possible function, service, or transaction can introduce significant processing overhead, increasing CPU and memory consumption and impacting application performance. While observability is crucial, excessive instrumentation can slow down systems and lead to noise in telemetry data, making it harder to extract meaningful insights.

    Implementing strategic instrumentation: Teams should focus on capturing key telemetry data that aligns with business and operational needs. Instead of collecting every possible trace or metric, organizations should define specific service-level objectives (SLOs) and monitor the most critical performance indicators, reducing unnecessary data collection.

    Using adaptive sampling techniques: OpenTelemetry provides head-based and tail-based sampling, which allows teams to collect meaningful traces while reducing the data volume. Adaptive sampling dynamically adjusts based on traffic, ensuring visibility into important transactions without overwhelming observability pipelines.

    Optimizing trace and metric retention policies: Organizations should implement retention policies that store only high-value telemetry data while discarding redundant or less critical information. This ensures that logs, traces, and metrics remain relevant and actionable while keeping storage costs manageable.

    Regularly auditing telemetry data collection: Conduct periodic reviews of instrumentation policies and collected data to identify unnecessary metrics, spans, or logs that could be removed or optimized. Automating this audit process can help enforce efficient observability practices without human intervention.

    Lack of Correlation Between Metrics, Logs, and Traces

    The importance of unified observability: Metrics, logs, and traces serve different observability functions, but when analyzed in isolation, they provide an incomplete picture of system health. Ensuring proper correlation between these data types is critical for effective root cause analysis and performance optimization.

    Implementing trace-log correlation: OpenTelemetry allows injecting trace and span identifiers into log messages, providing direct relationships between traces and log events. This makes it easier for engineers to investigate issues by linking logs to the specific traces that triggered them, reducing time spent on debugging.

    Enriching metrics with trace and log context: By tagging metrics with trace identifiers and relevant log metadata, organizations can improve visibility into system-wide performance trends. This approach helps correlate spikes in error rates, latency fluctuations, and anomalous behaviors with specific transactions.

    Leveraging OpenTelemetry semantic conventions: Using standardized naming conventions and attributes for spans, logs, and metrics ensures consistency across telemetry data. Following OpenTelemetry’s semantic conventions improves interoperability with various backends and enhances observability tool integrations.

    Centralized observability dashboards: Organizations should aggregate and visualize logs, metrics, and traces in a unified observability platform. Tools like Grafana, Kibana, and OpenTelemetry-compatible backends enable cross-referencing telemetry data for more efficient troubleshooting and deeper insights.

    Quickstart with Docker-Compose

This document describes the steps needed to bring up the Apica Ascent observability stack using docker-compose for trial and demo use.

docker-compose based deployments should not be used for production environments.

    Quickstart features

    1. Log aggregation, search, reporting, and live tailing

2. APM using built-in Prometheus or an external Prometheus

    3. Data sources - 21 data source connectors

    4. Alerting

    5. Incident response - PagerDuty, ServiceNow, Slack, Email

    6. apicactl CLI connectivity

    7. Dashboards and visualizations

    8. Filtering rules and rule packs

    9. User and group management

    10. Log flow RBAC

    11. UI Audit trail

    Install Docker compose

You can spin up Apica Ascent using docker-compose. The install guide for docker-compose can be found at https://docs.docker.com/compose/install/

NOTE: The docker-compose quick-start YAML files are intended for demo and trial use only. For production deployments, Apica Ascent is deployed on Kubernetes using Helm. Please contact us at: [email protected]

    Running Apica Ascent

    NOTE: Apica Ascent services use approximately 2GB of memory. Please have sufficient memory in your system before proceeding

    The first step is to get the docker-compose YAML file from the URL below.

    Download Apica Ascent compose file

⬇ Download the YAML file: docker-compose.quickstart.yml

    You are now ready to bring up the Apica Ascent stack.

Download container images

docker-compose -f docker-compose.quickstart.yml pull

Bring up the stack

docker-compose -f docker-compose.quickstart.yml up -d

NOTE: If you have been running a previous version of the Apica Ascent docker-compose stack, stop and remove the existing containers by running docker-compose -f docker-compose.quickstart.yml down, and remove any older Docker volumes with docker-compose -f docker-compose.quickstart.yml rm -v

    Delete the stack

If you are done with your evaluation and want to clean up your environment, run the following command to stop and delete the Apica Ascent stack and free up the used system resources.

docker-compose -f docker-compose.quickstart.yml down -v

    Test using Apica Ascent UI

Once the Apica Ascent server is up and running, the Apica Ascent UI can be accessed as described above on port 80 of the server running docker-compose. You will be presented with a login screen as shown below.

Use [email protected] / flash-password to log in

    Ingesting data

For setting up data ingestion from your endpoints and applications into Apica Ascent, please refer to the Integrations section.

    The quickstart compose file includes a test data tool that will generate test log data and also has a couple of dashboards that show Apica Ascent's APM capabilities.

    The test log data can be viewed under Explore page

Click on any Application and you will be taken to the Flows page with detailed logs and a search view. You can search for any log pattern; searches can also be run using regex expressions along with conditional statements via Advanced search across a time period.

    Distributed Tracing

Apica Ascent provides application performance monitoring (APM) that enables end-to-end monitoring for microservices architectures. Traces can be sent over port 14250 (gRPC). To view traces, navigate to the Trace page under Explore.

Select the Service and a list of traces will appear on the right-hand side of the screen. The traces have titles that correspond to the Operator selector on the search form. A trace can be analyzed further by clicking its Analyze icon, which pulls up the full logs for the corresponding trace-id.

The Analyze icon displays all the logs for the respective trace-id in a given time range.

    To view the detailed trace, you can select a specific trace instance and check details like the time taken by each service, errors during execution, and logs.

    Prometheus monitoring and alerting

The Apica Ascent quickstart file includes Prometheus and Alertmanager services. Two APM dashboards for monitoring the quickstart environment are included.

NOTE: It may take up to 1 minute for the APM metrics to appear after the initial setup. Please use the "Refresh" button at the top right of the Dashboards to refresh.

    Firewall ports and URLs

Please refer to the ports list supported by Apica.

    Collect Logs with Python

    Log Ingestion via Python (No Agent Required)

    Overview

    This guide explains how to ingest test logs into your Apica Ascent endpoint using a lightweight Python script — no agent installation required. Python comes preinstalled on most systems or can be added easily, making this a simple and flexible option for initial integrations or testing.

    Prerequisites

    • Python 3.x installed on your system

      • macOS: Usually preinstalled. Run python3 --version to check.

• Windows/Linux: Download from https://www.python.org if not installed.

• Internet connectivity

• A valid Bearer token for authentication (Generating a secure ingest token)

• A sample log file (ingestlogs.txt or similar)

    Step 1: Create the Python Script

    Create a file named ingest.py and paste the following code:

    Step 2: Customize the Script

    1. Replace paste_token_here with your actual Bearer token.

    2. Update https://mydomain.apica.io/v1/json_batch if your API endpoint is different.

    Step 3: Prepare Your Log File

    Create a plain text file (e.g., ingestlogs.txt) where each line represents one log entry:

    Step 4: Run the Script

    Use Cases

    • Quick validation of log ingestion

    • Proof-of-concept before deploying agents

    • CI/CD pipeline log publishing

    • Lightweight ingestion in containerized workflows

    Need Help?

    Connect with your Apica representative or support team for any further assistance.

    Ascent 2.7.0

    ASM

    New Features & Enhancements:

    1. ASM Private Location Management:

    Ascent 2.5.0

    Synthetic Monitoring (ASM 13.27.0) - SaaS

    Features

    1. NG Private Locations/Agents API Support: Added ASM API support for full self-serve new check-type agnostic Private Agents which can be grouped into Private Locations. Features include:

    Ascent 2.9.0

    Customer Release Notes

    ASM 13.31.0

    InfluxDB

Apica Ascent helps you connect to your InfluxDB for analysis and visualization of your data.

    Adding InfluxDB data source

Fill out the Name of your data source and the URL of your InfluxDB instance, and you are ready to query your data from the Query Editor page.

    Ascent Logs
    Ascent Reports
    Ascent Checks
    Ascent Metrics
    New Features & Enhancements

    Automation for Checks and Alerts

    • Added support for CI/CD pipeline to streamline check creation and maintenance through ASM APIs.

    • Reduced manual efforts and ensured consistency across different environments through automation.

    • Ability to perform CRUD operations for ZebraTester, Browser, and URL checks.

    • Ability to create, upload, and assign ZebraTester and Browser scenarios for checks.

    • Ability to create and assign Email, SMS, and Webhook alert targets or alert groups.

    Chrome 130 Upgrade for Browser Checks

    • All existing Browser checks have been upgraded from Chrome 115 to Chrome 130.

    • All new Browser checks run on Chrome 130.

    NG Private Locations

    • Private locations/agents can be shared among sub-customer accounts to run checks.

    • Users can utilize their own CA certificates for checks in Private locations to monitor internal applications.

    Apica Grafana Plugin

    • Upgraded the Apica Grafana plugin to version 2.0.11.

    • Added support for page metrics, allowing users to analyze response time for specific pages instead of entire scenario metrics.

    Bug Fixes

    • Fixed the issue where invalid "acceptedCodes" were being accepted for URL checks in the POST /checks/url-v2 API.


    Ascent 2.9.0

    Ascent Synthetics (ASM+)

    Check Management

    • Introduced new check types: Browser, Postman, Traceroute, Mobile Device, and Compound.

    • Added support for the full workflow of check management: Edit, Delete, Clone, and Run Check.

    • Added support for Bulk Edit, Run, and Delete checks.

    • Inclusion and exclusion periods can be added in the check schedule.

    Private Location and Agent Management

    • Introduced full self-service (Add, Edit, Delete, Reissue Certificate) for new check-type agnostic Private agents.

    • Private locations can be added, edited, deleted, enabled, and disabled with the ability to associate Private repositories.

    • A new "Private Locations" section in the UI allows easy navigation and management.

    Check Analytics

    • Enabled alerting and reporting on checks.

    • Alerts and reports for a particular check can be created directly from the check page.

    • Screenshots taken during Browser check execution can now be viewed in the Check Analysis page.

    Bug Fixes

    • Fixed an issue where filter criteria were not working correctly on the Checks page.

    • Fixed a bug where some check results were missing on the Check Details page.


    New Features and Enhancements

    Fleet

    • Fleet Agent Limits: Enforced license-based agent limits.

    • Telemetry Enhancements: Added telemetry support for Fleet agents.

    • Fleet UI Revamp: Major UI improvements, better agent configuration management, and pagination fixes.

    • Fleet Summary Table: Redesigned the summary table for better usability.

    • Kubernetes Agent Status: Fleet UI now displays Kubernetes agent statuses.

    Observe

    • Data Explorer Graph Enhancements: Enhanced GroupBy plotting with multiple Y-axis selection.

    • Widgets Enhancements: Added delete functionality and improved widget load time.

    • New Chart Type: Introduced Pie and HoneyComb charts for visualization.

    • Grafana to Data Explorer: Added Grafana JSON conversion support in Data Explorer.

    • GenAI Enhancements: Integrated "Log Explain" feature for enhanced log analysis in ALIVE.

    • Data Explorer Enhancements: Improved metrics screen and query list support.

    • Dashboard Optimization: Reduced load times for Data Explorer dashboards and preserved widget data across tabs.

    • RCA Workbench: Introduced diagnostics and debugging features based on Data Explorer widgets.

    • Dashboard Validation: Added validation for Data Explorer dashboard creation.

    Authentication & Security Enhancements

    • React Page Migration: Migrated Login, Setup, Signup, Reset, and Forgot Password pages to React (TSX) to reduce tech debt.

    • Ascent Invitation Feature: Implemented user invitation functionality via Casdoor.

    • Casdoor Sync: Synced Casdoor users and groups with the Ascent database.

    • Port Management: Resolved open TCP/UDP port issues.

    • Casdoor Integration: Enhanced authentication, session management, and email integration.

    • API Key Support: Added API key support for Casdoor in Ascent.

    • Casdoor Mail Service: Integrated Ascent mail service with Casdoor for email functionality.

    • Casdoor Signing Certificates: Added support for Casdoor signing certificates to enhance security.


    Ascent Bug Fixes

    • GCP PubSub Plugin: Resolved file loading issues.

    • ResizeObserver Compatibility: Fixed compatibility issues with the latest Chrome version.

    • Alert Email Output: Truncated query output in triggered alert emails for better readability.

    • Agent Sorting: Fixed sorting by "Last Modified" in Fleet UI.

    • Incorrect Trace Volume: Fixed trace volume display on the Ascent landing page.

    • Alert Bug Fix: Resolved discrepancies in triggered alert counts displayed in the navbar.

    • Pipeline View: Fixed visual bugs in forwarder mapping and improved rule persistence.

    • Fleet Improvements: Enhanced Fleet installer, improved Kubernetes token creation, and fixed pagination issues.

    • Password Generation UI: Improved UI for password generation in Ascent.

    • Query Save Fix: Resolved unknown error when saving queries in the Freemium tier.

    • Moving Average Bug: Fixed AI-based query creation issues for Moving Average.

    • Alert UNKNOWN Issue: Resolved alerts triggering with an UNKNOWN state.

    • Alert Evaluation Fix: Fixed issues with alerts not evaluating after the first trigger.

    • SNMP Source Bug: Fixed SNMP ingest source extension bugs.

    • Fluent-Bit Installation: Addressed issues with Fluent-Bit post-agent manager installation.

    • Dual Active Packages: Resolved the issue of showing two active packages in Fleet.

    • Inclusion/Exclusion Fixes: Addressed syntax and period-saving issues.

    • Certificate Upload: Fixed certificate upload issues and removed the feature from Freemium.

    • Default Otel Configuration: Updated default Otel configuration for metric ingestion.

    • Platform Validation: Enhanced platform validation in Fleet.

    • Fleet Assign Package Error: Fixed package assignment issues.

    • Disable Pattern Signature: Disabled pattern signature functionality in Freemium.

    • Namespace Bug: Resolved incorrect namespace selection in Data Explorer.

    • Fleet Advanced Search: Fixed and improved advanced search functionality.

    • Dark Mode Fixes: Addressed UI inconsistencies, including Waterfall statistics and button styling.

    • Fleet Installation: Resolved installation errors on Linux and Windows.

    • Kubernetes Dropdown Fix: Fixed duplicate Kubernetes entries in Fleet dropdowns.

    • Configuration Refresh: Addressed package reassignment and configuration refresh issues.

    • Documentation Updates: Updated user and technical documentation.


    For further details or inquiries, please refer to the official documentation or contact our support team.


    http://<namespace>-thanos-query:9090
    import requests
    import time
    import sys
    import os
    from datetime import datetime
     
    BEARER_TOKEN = 'paste_token_here'  # Replace this with your actual token
    API_ENDPOINT = 'https://mydomain.apica.io/v1/json_batch'
    MAX_RETRIES = 3
    RETRY_DELAY = 2  # seconds
     
    headers = {
        'Authorization': f'Bearer {BEARER_TOKEN}',
        'Content-Type': 'application/json'
    }
     
    success_count = 0
    failure_count = 0
    total_lines = 0
     
    def post_log(line):
        global success_count, failure_count
        payload = {'log': line.strip()}
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                response = requests.post(API_ENDPOINT, headers=headers, json=payload, timeout=10)
                if 200 <= response.status_code < 300:
                    print(f"✅ Sent | Status: {response.status_code}")
                    success_count += 1
                    return
                else:
                    print(f"⚠️ Attempt {attempt}: HTTP {response.status_code}: {response.text}")
            except requests.exceptions.RequestException as e:
                print(f"❌ Attempt {attempt}: Exception - {e}")
           
            if attempt < MAX_RETRIES:
                print(f"🔁 Retrying in {RETRY_DELAY} seconds...")
                time.sleep(RETRY_DELAY)
     
        failure_count += 1
        print("⛔ Failed to send line after retries.")
     
    def main(log_file):
        global total_lines
        if not os.path.isfile(log_file):
            print(f"🚫 File not found: {log_file}")
            return
     
        start_time = datetime.now()
        print(f"📤 Starting log ingestion from: {log_file}")
        print(f"🕒 Start Time: {start_time.strftime('%Y-%m-%d %H:%M:%S')}\n")
     
        with open(log_file, 'r') as f:
            for line in f:
                if line.strip():
                    total_lines += 1
                    post_log(line)
     
        end_time = datetime.now()
        duration = (end_time - start_time).total_seconds()
     
        print("📊 Ingestion Summary")
        print("--------------------------")
        print(f"🕒 Start Time        : {start_time.strftime('%Y-%m-%d %H:%M:%S')}")
        print(f"🕒 End Time          : {end_time.strftime('%Y-%m-%d %H:%M:%S')}")
        print(f"⏱️ Duration (seconds): {duration:.2f}")
        print(f"📄 Total Lines Read  : {total_lines}")
        print(f"✅ Successfully Sent : {success_count}")
        print(f"❌ Failed to Send    : {failure_count}")
        print("--------------------------")
     
    if __name__ == '__main__':
        if len(sys.argv) != 2:
            print("Usage: python ingest.py /path/to/ingestlogs.txt")
        else:
            main(sys.argv[1])
    Log line 1
    Log line 2
    Error occurred at 2024-06-23
    Application restarted
    ✅ macOS/Linux
    python3 ingest.py /path/to/ingestlogs.txt
    ✅ Windows (Command Prompt)
    python ingest.py /path/to/ingestlogs.txt
    📈 Sample Output
    📤 Starting log ingestion from: ingestlogs.txt
    🕒 Start Time: 2025-06-23 11:32:14
     
    ✅ Sent | Status: 200
    ✅ Sent | Status: 200
    ...
     
    📊 Ingestion Summary
    --------------------------
    🕒 Start Time        : 2025-06-23 11:32:14
    🕒 End Time          : 2025-06-23 11:32:16
    ⏱️ Duration (seconds): 2.12
    📄 Total Lines Read  : 5
    ✅ Successfully Sent : 5
    ❌ Failed to Send    : 0
    Introduced the ability for Customer Administrators to Add, Edit, and Delete private locations and private repositories, giving more control over location and data management.
  • Added a "Private Locations" section in the UI, allowing easy navigation and management of these locations.

  • Implemented endpoints to Enable/Disable Private Locations, retrieve lists of private locations and repositories, and associate repositories with specific private locations.

  • Included a Timezone selection feature for URL V2 endpoints, enhancing configuration flexibility for global deployments.

  • New options for managing Private Agents with functionalities such as Adding, Editing, and Deleting agents, as well as Reissuing Certificates for enhanced security.

  • Check Management Enhancements:

    • Integrated ZebraTester within Check Management, improving performance testing capabilities.

    • Enhanced the Check Analytics screen for a smoother experience, including a redesigned Schedule and Severity Handling screen supporting Dark Theme.

  • Improved API & Documentation:

    • Refined API Endpoints: Added support for handling advanced configuration for missing checks, private agent solutions, and new fields in the SSL Certificate Expiration Check.

    • Documentation Improvements: Updated ASM API documentation to include better descriptions, missing fields, and request/response formats for enhanced usability.

  • Canary Release Support:

    • Extended Deployment APIs to support Canary Releases, ensuring more robust testing and rollouts.

  • Performance Optimization:

    • Implemented pre-fetching of access groups to reduce database calls and improve the performance of core endpoints.

    • Optimized Sampling Interval for tables based on time duration to reduce load times.

  • Agent Status Monitoring:

    • Added visual indicators for the Enable/Disable Status of private locations, improving overall monitoring and management.

  • Bug Fixes:

    1. Check Management:

      • Fixed inconsistencies in the Check Results Graph to ensure linear representation of data on the X-axis.

      • Addressed issues with timestamp formatting when clicked from different parts of the graph, which led to parsing errors.

    2. Fleet Management:

      • Corrected the behavior of agent ID and customer GUIDs during initial state setup.

      • Resolved problems causing memory issues in multi-cluster environments.

    3. UI & Visual Fixes:

      • Eliminated scroll issues when hovering over charts.

      • Adjusted the Date Picker to revert to its previous version for consistency and usability.

    4. Multi-Cluster Stability:

      • Fixed degradation issues occurring when one of the single tenants in a multi-cluster environment was down.

      • Ensured smoother data loading and resolved UI lock-up issues when handling larger datasets.

    5. Certificate Management:

      • Added validation checks and improved error handling for operations like adding, editing, and deleting SSL certificates and repositories.

    Ascent

    New Features & Enhancements

    1. Fleet Management Improvements:

      • Fleet UI Enhancements: Redesigned Fleet management screens, including Agents and Configuration, with consolidated controls for improved usability and support for Dark Theme.

      • Kubernetes Environment Support: Introduced support for Kubernetes environments in Fleet, enabling better agent management and installation flexibility.

• Fleet Agent Support for OpenTelemetry Collectors, Datadog, and Grafana Alloy: Expanded the ecosystem of supported agents with compatibility for OpenTelemetry Collector, Datadog, and Grafana Alloy agents.

• Agent Liveness Status Metrics: Implemented new metrics to monitor the liveness status of each Fleet agent, ensuring better visibility and alerting.

      • Advanced Search for Fleet: Enhanced search capabilities with a new advanced search feature, making it easier to locate specific data and agents.

2. Data Explorer Enhancements:

      • Y-Axis Multi-Column Plotting: Enhanced Y-axis plotting, allowing for the selection and visualization of multiple columns, making complex data analysis simpler.

      • Time Range in Headers: Added time range indicators in the header, improving context and navigation during data exploration.

      • Custom Chart Integration:

    3. UI/UX Improvements:

      • Dark Mode Icons & Design Adjustments: Optimized icon sets and UI components for a more consistent experience in dark mode.

      • New Toggle & Theme Options: Added a toggle for switching between Dark and Light modes in the navbar, giving users more control over their viewing experience.

    4. Integration & API Updates:

• Gitbook AI Powered Search: Users can now ask questions directly in the search bar using Gitbook AI and receive answers instantly, enhancing accessibility to documentation and support. 🧠

      • Grafana Integration: Implemented a converter to transform Grafana JSON into Data Explorer JSON format, simplifying the migration of dashboards.

    5. User Onboarding:

      • Improved Onboarding Experience: A dedicated onboarding screen for new users has been added to streamline the setup process and introduce key features.

    Bug Fixes

    1. Fleet Management:

      • Fixed issues where disconnected Fleet agents could not be deleted.

      • Resolved problems with log collection on Windows machines.

      • Addressed duplicate agent entries when reinstalling Fleet agents.

    2. Data Explorer:

      • Corrected data inconsistency issues when switching between dashboards.

      • Fixed bugs related to alert tabs being incorrectly linked across dashboards.

      • Resolved intermittent behavior where data from one dashboard was erroneously stored in another.

    3. ALIVE:

      • Improved the alignment and visualization of PS compare graphs and log comparisons.

      • Added zoom-in and enlarge options for better graph analysis.

      • Enhanced visual feedback for log loading during comparisons.

    4. UI Bug Fixes:

      • Resolved AI button shadow and sizing issues for a more polished interface.

      • Corrected modal rendering in header dropdowns for persistent selections across tabs.

    Creation and management of Private location.

  • Creation and management of Private agents.

  • Configuration of Private Container repositories for Private locations to use during check run.

  • Added API support for Timezone selection for Check Inclusion/Exclusion periods during UrlV2 check creation.

  • Extended the subscription page to include more check statistics per check type like Info, Warning, Error, and Fatal check counts.

  • Enhanced status updates for NG Private agents.

  • Bug Fixes

• Fixed an issue causing sporadic non-availability of agents in the Stockholm location when debugging a Selenium scenario.

• Fixed a bug with downloading scripts from HTTP sources for Scripted and Postman checks.

• Fixed a bug where some block domain rules were not being respected in Browser checks.

• Fixed an issue where the setLocation command did not work properly if it was not used at the start of a Selenium script for Browser checks.



    Apica Data Fabric (ADF)

    Features

    1. Native Support for OTEL Logs.

      • Added native support for OTEL logs using the OTLP HTTP exporter.

    2. Native Support for OTEL Traces.

• Added native support for OTEL traces using the OTLP HTTP exporter.

      • Introduced a new rule type for STREAMS.

3. Improved Moving Average

• Enhanced moving average calculation using SMA (Simple Moving Average) and CMA (Cumulative Moving Average).

4. Pattern-Log Compare

  • Feature to compare logs and patterns side by side across different time ranges.

5. Improved ALIVE summary graph highlighting depending on table content to provide better data visualization.

    6. Data Explorer: Tabs Scrolling and Improvement

      • Added scrolling functionality and various improvements to the Data Explorer tabs for better navigation.

    7. GPT-4o-mini and Limited Model Support

      • Introduced support for GPT-4o, GPT-4o-mini, GPT-3.5-Turbo.

    8. API-Based Create Data-Explorer Dashboard

      • Added the ability to create Data-Explorer dashboards via API.

    9. API-Based Create Sharable Dashboard

      • Enabled the creation of sharable dashboards through API.

    10. Generic Implementation for Data Explorer Header

      • Made the Data Explorer header implementation generic and interdependent.

    11. Check Management Map View

      • Introduced a map view for check management.

    12. Check Management List View UI Changes

      • Updated the UI for the check management list view.

    13. Data Explorer Header to Persist Data

      • Added functionality for the header of data explorer to persist data.

14. Automatically Create Splunk Universal Forwarder for Splunk S2S Proxy

      • Added automatic creation of Splunk universal forwarder for Splunk S2S Proxy.

    15. Pipeline Tab in Search View

      • Added a new pipeline tab in the search view.

      • Introduced a preview feature for code rules.

    16. Health Check for Agents

      • Implemented a health check feature for agents.


    Improvements

    1. Trace/Default App Performance Improved

      • Enhanced the performance of the trace/default application.

    2. New Algorithm for PS Compare and Anomalies Compare

• Implemented a new algorithm for PS compare and anomalies compare.

    3. Widget Refresh Performance

      • Improved the performance of widget refresh operations.

    4. Query API Performance for search

      • Enhanced the performance of the Query API for search.

    5. Default Namespace for Logs for Syslog vs Per Host Namespaces

      • Enhanced default namespace handling for logs, distinguishing between syslog and per host namespaces.

    6. UI Enhancements for Pipeline and Topology View

      • Improved UI for pipeline and topology views.

    7. Agent Manager Improvements for Installation Scripts

      • Enhanced agent manager installation scripts.

    8. Delete Agent Cleanup

      • Improved the cleanup process when deleting agents.

    9. Remove Unsupported Agents

      • Enhanced the process to remove unsupported agents.


    Bug Fixes

    1. Y-Axis Overlapping on View

      • Fixed an issue where the Y-axis was overlapping on the view in the ALIVE application.

    2. Gauge Widget Color Render Based on Zone

      • Fixed the rendering of gauge widget colors based on specified zones.

    3. Group By for Data-Explorer

      • Fixed the group by functionality in the Data-Explorer.

    4. Creating Alert Creates Panic

      • Resolved an issue where creating an alert caused a panic.

    On-Premise PaaS deployment

    Before you begin

    To get you up and running with the Apica Ascent PaaS, we've made Apica Ascent PaaS' Kubernetes components available as Helm Charts. To deploy Apica Ascent PaaS, you'll need access to a Kubernetes cluster and Helm 3. You will also need access to S3-compatible object storage for storing metric data.

    Before you start deploying Apica Ascent PaaS, let's run through a few quick steps to set up your environment correctly.

    Add the Apica Ascent Helm repository

Add Apica Ascent's Helm repository to your Helm repositories by running the following command.

helm repo add apica-repo https://apicasystem.github.io/apica-ascent-helm

The Helm repository you just added is named apica-repo. Whenever you install charts from this repository, ensure that you use the repository name as the prefix in your install command, as shown below.

helm upgrade --install <deployment_name> apica-repo/<chart_name>

    Create a Kubernetes cluster

    If you already have a Kubernetes cluster, you can skip down to "Create a namespace to deploy Apica Ascent".

If you do not have a Kubernetes cluster, use k0s to assemble one or more physical machines or VMs into a Kubernetes cluster onto which you can deploy Apica Ascent. For the host operating system we assume some distribution of Linux; it does not matter which one.

    Single Node

For reference, see the official k0s documentation.

    Single-node deployments are suitable for testing only and should not be used for production, as there is no redundancy.

Download the latest supported version of k0s from the k0s releases page. For Linux, obtain the asset named k0s-<version>-amd64 (no file extension). Note that because Kubernetes upstream supports the three most recent minor releases concurrently, the top entry on the releases page may not be the latest minor version. You are encouraged to use the highest non-beta minor version available.

Copy the k0s binary that you downloaded into the /usr/local/bin directory:

chmod +x k0s-<version>-amd64
sudo cp k0s-<version>-amd64 /usr/local/bin/k0s

Next, download and install kubectl, which is the tool used to interact with Kubernetes.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo cp kubectl /usr/local/bin/

Finally, download the current version of Helm, the package manager for Kubernetes, from the Helm releases page. Install it alongside k0s and kubectl:

tar zxf helm-<version>-linux-amd64.tar.gz && sudo cp linux-amd64/helm /usr/local/bin/

    Multi-Node

For reference, see the official k0sctl documentation.

    K0s clusters are bootstrapped with the k0sctl tool. It connects to k0s nodes over ssh to orchestrate cluster setup and maintenance.

    A small cluster used for Ascent consists of the following nodes:

    • 1 load balancer for control plane HA (1 vCPU, 2G RAM)

• Note: this can be a cloud LB if deploying in a cloud environment. See the k0s Control Plane High Availability documentation.

    • 3 control-plane nodes, each 1 vCPU, 2G RAM

    • 3 worker nodes, each 6 vCPU, 16G RAM, 500G disk

Adjust the number of worker nodes for larger-scale deployments. Three control plane nodes are sufficient for almost all situations, but the count should be an odd number, per the etcd documentation.

    The control plane load balancer needs to be a TCP load balancer that routes traffic for the following ports to all controller nodes:

    • 6443 (Kubernetes API)

    • 8132 (Konnectivity)

    • 9443 (controller join API)

    This can be a cloud load balancer, if available, or another instance running load balancer software such as nginx or HAproxy.

    If the load balancer has an external IP for administrative access to the Kubernetes API, make note of it for configuring k0sctl.yaml below.

Install the k0sctl tool on whichever system will be used to manage the cluster. This can be an operator's computer or other client system, or one of the controller nodes. Installation can be accomplished in various ways depending on the chosen system.

Once you have k0sctl installed, you will need a configuration file that describes the cluster you wish to create. The provided sample file (k0sctl-sample.yaml) creates a cluster with 3 controller and 3 worker nodes, and installs OpenEBS for hostpath storage and MetalLB for cluster load balancing. Change the host specifics to match your environment; if you increased the number of worker nodes, be sure to include all of them in your config. Enter the IP address of the load balancer that is visible from the worker nodes: this is the address they will use to communicate with the Kubernetes API. If this load balancer will also be accessed from outside the host environment, add its externally-facing IP and/or domain name to the sans list so that TLS certificates will have the appropriate SubjectAlternativeName (SAN) entries.
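The outline below is a minimal sketch of such a k0sctl.yaml; the addresses, SSH user, and key path are placeholders, and it omits the OpenEBS and MetalLB extensions that the provided k0sctl-sample.yaml includes.

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: ascent-cluster
spec:
  hosts:
    - role: controller
      ssh:
        address: 10.0.0.11            # controller node 1
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: worker
      ssh:
        address: 10.0.0.21            # worker node 1
        user: root
        keyPath: ~/.ssh/id_rsa
    # ...repeat for the remaining controller and worker nodes
  k0s:
    config:
      spec:
        api:
          externalAddress: 10.0.0.10  # load balancer IP visible to the nodes
          sans:
            - ascent.example.com      # external name/IP, if any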

    Run k0sctl apply --config path/to/k0sctl.yaml to install the cluster. This will reach out to all configured nodes, install k0s, and apply the specified configuration.

    If something goes wrong and you want to start over, k0sctl reset will uninstall k0s and completely tear down the cluster. This command is destructive and should only be used during an initial setup or at the end of a cluster’s lifecycle.

Once the cluster is running, you can run k0sctl kubeconfig to output a configuration file suitable for use with kubectl. If you do not already have any kubectl configuration, you can redirect this output to ~/.kube/config; otherwise, merge the values for this cluster into your existing kubectl config.

    Create a namespace to deploy Apica Ascent

Create a namespace where we'll deploy Apica Ascent PaaS by running the following command.

kubectl create namespace apica-ascent

If desired, make this namespace the default for kubectl to use, removing the need to specify -n apica-ascent with every command:

kubectl config set-context --current --namespace=apica-ascent

    Create secrets to provide your HTTPS certificate to the ingress controller as well as to the log ingestion service. The CN of this certificate should be the hostname/domain that you wish to use to access the Ascent platform. The same key and certificate can be used for both, but the secrets are of different types for each usage.

    • kubectl -n apica-ascent create secret tls my-ascent-ingress --cert=my-tls.crt --key=my-tls.key

      • NOTE: if your certificate requires an intermediate, concatenate both into a single file, starting with the primary/server cert, and the intermediate after. Use this file as the argument to --cert.

• kubectl -n apica-ascent create secret generic my-ascent-ingest --from-file=syslog.crt=my-tls.crt --from-file=syslog.key=my-tls.key --from-file=ca.crt=my-ca.crt

  • NOTE: if your certificate requires an intermediate, provide that individually as ca.crt.

    You can choose different names for the secrets, but see the next section for where to set each secret's name in the values.yaml file.

    Prepare your Values file

    Just as any other package deployed via Helm charts, you can configure your Ascent PaaS deployment using a Values file. The Values file acts as the Helm chart's API, giving it access to values to populate the Helm chart's templates.

To give you a head start with configuring your Apica Ascent deployment, we've provided sample values.yaml files for single-node, small, medium, and large clusters (values.single.yaml, values.small.yaml, values.medium.yaml, values.large.yaml). You can use these files as a base for configuring your Apica Ascent deployment.

    You will need to fill in values for global.domain (the hostname/domain that will be used to access the Ascent UI, which should match the CN of the TLS certificate used to create the secrets above), as well as various site-specific values such as passwords and S3 object storage credentials.

    If you changed the names of the Kubernetes secrets above, use the name of the tls secret for ingress.tlsSecretName and kubernetes-ingress.controller.defaultTLSSecret.secret. Use the name of the generic secret for logiq-flash.secrets_name.

    NOTE: the admin_password value must meet the following minimum requirements: at least 12 characters, including one uppercase letter, one lowercase letter, and one digit.
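The snippet below sketches where these values live in values.yaml, using the secret names from the commands above; the domain is a placeholder, and your chosen cluster-size file will contain additional settings.

global:
  domain: ascent.example.com          # must match the CN of your TLS certificate

ingress:
  tlsSecretName: my-ascent-ingress

kubernetes-ingress:
  controller:
    defaultTLSSecret:
      secret: my-ascent-ingress

logiq-flash:
  secrets_name: my-ascent-ingest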

Install Envoy Gateway Resources

helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.6.0 -n envoy-gateway-system --create-namespace

    Install Apica Ascent

Install Apica Ascent by running Helm:

helm repo update
helm upgrade --install apica-ascent \
  --namespace apica-ascent \
  -f values.yaml \
  apica-repo/apica-ascent

Use this same command to apply updates whenever there is a new version of the Helm chart or a new version of Apica Ascent.

    Export Events to Apica Ascent

    Using the Apica Ascent IO Connector we can easily stream our data to Apica Ascent for further processing.

    Let's look at this by going over the sample starter repository provided here.

    As you can see in this, we have simulated a pipeline flow with sample log lines as our input which will then be pushed to Apica Ascent.

    Transforming the Logs to LogiqEvent.

We cannot simply push the raw log lines to Apica Ascent. Instead, we first need to transform the log lines into LogiqEvent(s). Here is the Transformer() class that handles the transformation.

Once we have successfully transformed the log lines into LogiqEvent(s), we can use the Apica Ascent IO Connector to export them to our Apica Ascent instance.

    Writing to Apica Ascent

    Specify the ingest endpoint and your ingest token. You can find the ingest token in your Apica Ascent Settings.

    Hooray, you should now be able to see logs flowing into Apica Ascent from your Pipeline with namespace="ns", host="test-env", appName="test-app" and clusterId as "test-cluster".

    // Main.java
    
    package ai.logiq.example;
    
    import logiqio.LogiqError;
    import logiqio.LogiqIO;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.transforms.SimpleFunction;
    
    class PrintElement extends SimpleFunction<LogiqError, String> {
        @Override
        public String apply(LogiqError input) {
            System.out.println(input != null ? input.getResponse() : "");
            return input != null ? input.getResponse() : "";
        }
    }
    
    public class Main {
        public static void main(String[] args) {
            var pipeline = Pipeline.create();
    
            var logLines = Create.of(
                    "FileNotFoundError: [Errno 2] No such file or directory: '/app/stats'",
                    "[2023-03-16 08:22:20,583][PID:10737][ERROR][root] prometheus alert manager error: connection error or timeout",
                    "[2023-03-16 08:22:20,585][PID:10737][INFO][werkzeug] 127.0.0.1 - - [16/Mar/2023 08:22:20] \"GET /api/alerts HTTP/1.1\" 200",
                    "INFO[2023-03-16T12:58:41.004021186+05:30] SyncMetadata Complete                         File=trigger.go Line=109 SleepTime=521 Took=2.180907ms",
                    "INFO[2023-03-16T13:00:14.452461041+05:30] License Manager doing license test            File=server.go Line=287",
                    "INFO[2023-03-16T13:00:48.438338692+05:30] Running GC - Final GC                         File=gc.go Line=273",
                    "INFO[2023-03-16T13:00:48.44403175+05:30] Running GC - Final GC: Total id's processed:34  File=gc.go Line=312",
                    "INFO[2023-03-16T13:00:48.444231874+05:30] Running GC - Final GC: cleaned up 0 files in total  File=gc.go Line=313",
                    "INFO[2023-03-16T13:01:50.438244706+05:30] Running GC - MarkforGC                        File=gc.go Line=317",
                    "INFO[2023-03-16T13:01:50.440538077+05:30] Running GC - MarkforGC: Total id's processed:0  File=gc.go Line=344"
            );
    
            var local_endpoint = "http://localhost:9999/v1/json_batch";
            var ingest_token = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3MiOltdLCJhdWQiOiJsb2dpcS1jbGllbnRzIiwianRpIjoiZGI2YmM2MTUtYjQ4OS00YTFjLWI3ZWEtYzMxZjhiMDYwMGNkIiwiaWF0IjoxNjc4OTU0NzA1LCJpc3MiOiJsb2dpcS1jb2ZmZWUtc2VydmVyIiwibmJmIjoxNjc4OTU0NzA1LCJzdWIiOiJrZXZpbmRAbG9naXEuYWkiLCJVaWQiOjEsInJvbGUiOiJhZG1pbiJ9.Xkw-GlS08Wzut-A5_hFtL6T92g2oVjY2dSYQfv6FtgjeCPPOTCPGkl9fygIDxZSUiJV70JfqXDlWm277xxRa-jsAfKDN9Lc5TV_MmLjxi6AAS_UQkbUhuqJSygrjC2WKH6S0CRX8wffeWfG0Vp5g6fFA6hNLibhg0RL-zFmcTr47c3CuXL9E88ygfLhUvCIEkVHLLMnE4DL5Dj3mB9yY8v2Iw3Wl-ZrVmyJXOsgdKo4iyf_PYHSNUTnB2WhvRp3Qe1dxFeXx9u8xNmDGzYyvSpwQEWSVM3l4QD5aLjIP53xF6ki_XT_KWr86oaTtYmEy69Nu8CSQFaLw3EohGBUwIg";
            pipeline.apply("Add Elements", logLines).apply(new Transformer()).apply(new LogiqIO.Write(local_endpoint, ingest_token)).apply(MapElements.via(new PrintElement()));
    
            pipeline.run().waitUntilFinish();
        }
    }
    New customizable charts, such as Counters, are available for Data Explorer, providing enhanced visualization options.
  • Color Selection for Widgets: Users can now customize the colors of rendered data inside each widget on the Data Explorer page, making it easier to personalize and distinguish visual components.

    Performance & Optimization:

    • Lazy Loading Implementation: Optimized data explorer dashboards by implementing lazy loading, improving initial load times, and reducing resource consumption.

    • Custom Hooks for Skipping Component Mount Calls: Enhanced performance by introducing custom React hooks to skip unnecessary component mounts, minimizing UI lag.


    // Transformer.java
    package ai.logiq.example;
    
    import logiqio.LogiqEvent;
    import org.apache.beam.sdk.transforms.PTransform;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    
    public class Transformer extends PTransform<PCollection<String>, PCollection<LogiqEvent>> {
        @Override
        public PCollection<LogiqEvent> expand(PCollection<String> input) {
            return input.apply(ParDo.of(new TransformEvent()));
        }
    }
    // TransformEvent.java
    package ai.logiq.example;
    
    import logiqio.LogiqEvent;
    import org.apache.beam.sdk.transforms.DoFn;
    
    class Config {
        public static String namespace = "ns";
        public static String host = "test-env";
        public static String appName = "test-app";
        public static String clusterID = "test-cluster";
    }
    
    public class TransformEvent extends DoFn<String, LogiqEvent> {
        @ProcessElement
        public void processElement(@Element String element, OutputReceiver<LogiqEvent> receiver) {
            LogiqEvent event = new LogiqEvent()
                    .withAppName(Config.appName)
                    .withTimestamp(0)
                    .withClusterId(Config.clusterID)
                    .withHost(Config.host)
                    .withAppName(Config.appName)
                    .withNamespace(Config.namespace)
                    .withMessage(element);
    
            receiver.output(event);
        }
    }
    .apply(new LogiqIO.Write(local_endpoint, ingest_token))

    Getting Started with Fleet

    This guide provides a walkthrough of setting up two different types of monitoring agents (OTEL / Fluent bit) on a server using the Apica Fleet management tool.

    Quick Start Guide

    This Quick Start Guide for Fleet Management enables a user to quickly enable ingesting metrics and logs into Ascent, and provides step-by-step instructions for deploying monitoring agents using Apica Fleet. By completing this tutorial, you will be able to automatically collect and forward critical server metrics and application logs directly into the Ascent platform for complete visibility.

    For the purposes of this guide, we will install and deploy both an OTEL and Fluent Bit collector agent.

    For a full video walkthrough, please refer to our video guide:

    Let's begin:

    Part 1: Installing and Deploying an OpenTelemetry Collector Agent.

    Step 1: Install Agent Manager:

    Go to -> Explore -> Fleet

    Click -> Install Manager

    Select Platform: Linux

    Select Agent Type: OpenTelemetry Collector

    Click 'Proceed'

    Click on "Download All"

    • Open 'README' file for detailed instructions.

    Go to Your Linux Terminal:

    NOTE: Transfer 'Fleet Installation File' to the Linux host that you will collect data from.

• Make sure the file has execute permissions.

    • Execute the following command to install the Agent Manager:

    • $ sudo ./fleet-install.sh

    Verify that the hostname is in the Fleet "Agents" UI tab:

    Step 2: Update Your Configuration File:

    Go to "Configurations" tab and search for:

    • 'otelcol linux default config'

    Then, click into the file to open the configuration file:

    Copy the below code block of the Configuration file:

    NOTE: You will have to insert your [ENV_URL_HERE]

    • Your [ENV_URL_HERE] is your domain name:

    Copy the below code block into the "Update Configuration" section in the UI:

    NOTE: Currently, this configuration file is set up to collect syslogs. If you would like to collect different types of logs adjust the path to the logs you want to ingest:
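For instance, the filelog receiver portion of the configuration might look like the following sketch; the include path is the part to adjust for the log files you want to collect.

receivers:
  filelog:
    include:
      - /var/log/syslog               # change this path to the logs you want to ingest
    start_at: beginning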

    Step 3: Apply the Changes Made to the Configuration File:

    Copy the below code block into the "Update Configuration" section in the UI:

    Click "Update".

    Then, go back to the "Agent" tab and click into the Linux hostname (where you'll be ingesting data from):

    Step 4: Verify Metrics/Logs Are Being Ingested into Ascent:

    Verify that logs/metrics are coming in and that it shows as "active":

    To verify Metrics are being ingested, go to Queries -> New Query and search for the host to verify data is there:

    Part 2: Installing and Deploying a Fluent Bit Agent.

    Similar to Part 1 (installing/deploying OTEL Agent), we will now install a Fluent Bit agent to collect logs.

    Step 1: Install Agent Manager:

    Go to -> Explore -> Fleet

    Click -> Install Manager

    Select Platform: Linux

    Select Agent Type: Fluent-bit

    Click 'Proceed'

    Click on "Download All"

    • Open 'README' file for detailed instructions.

    Go to Your Linux Terminal:

    NOTE: Transfer 'Fleet Installation File' to the Linux host that you will collect data from.

• Make sure the file has execute permissions.

    • Execute the following command to install the Agent Manager:

    • $ sudo ./fleet-install.sh

    Verify that the hostname is in the Fleet "Agents" UI tab:

    Step 2: Update Your Configuration File:

    1. In the configuration, specify the following:

      • The file path to your application logs.

      • The hostname attribute.

      • Your ingest token.

    From the Fleet UI, select the Fluent Bit agent and apply this new configuration.

    You can verify that the token and hostname have been applied correctly within the agent's detail view.

    Step 4: Verify Logs Are Being Ingested into Ascent:

    Verify that logs/metrics are coming in and that it shows as "active":

    Finally, go to Explore -> 'Logs & Insights' to verify the datasource is there:

    Click into the "Source Application" and you can then drill down into a log entry:

    Conclusion:

    You have now completed the setup for both metrics and log collection using Apica Fleet. With your agents actively reporting, you can fully leverage the Ascent platform to analyze performance, troubleshoot issues, and gain deeper insights into your system. To expand your coverage, simply repeat these steps for other hosts and applications in your infrastructure.

    AWS CloudWatch

You can forward CloudWatch logs to Apica Ascent using two methods.

    • Apica Ascent CloudWatch exporter Lambda function

    • Run Logstash on VM (or docker)

    Apica Ascent CloudWatch exporter Lambda function

You can export AWS CloudWatch logs to Apica Ascent using an AWS Lambda function. The AWS Lambda function acts as a trigger for a CloudWatch log stream.

    This guide explains the process for setting up an AWS Lambda function and configuring an AWS CloudWatch trigger to forward CloudWatch logs to Apica Ascent.

    Creating a Lambda function

    Apica Ascent provides CloudFormation templates to create the Apica Ascent CloudWatch exporter Lambda function.

    Depending on the type of logs you'd like to export, use the appropriate CloudFormation template from the following list.

    Python version dependency

AWS may choose to deprecate versions of Python, and you may have to edit the CloudFormation YAML template to refer to the most recent version of Python that is not deprecated by AWS.

    HTTP vs HTTPS handling

If your environment is configured for HTTP rather than HTTPS, edit the CloudFormation YAML file and change the connection handling function call as follows:

Edit the Lambda definition to use urllib3.HTTPConnectionPool instead of urllib3.HTTPSConnectionPool.

    Exporting Lambda Function logs

    Use the following CloudFormation template to export AWS Lambda function logs to Apica Ascent.

    Exporting CloudTrail Logs

    Use the following CloudFormation template to export CloudTrail logs to Apica Ascent.

    Exporting AWS VPC Flowlogs

    Use the following CloudFormation template to export Flowlogs logs to Apica Ascent.

    Exporting Cloudwatch logs from other services

Use the following CloudFormation template to export CloudWatch logs.

    This CloudFormation stack creates a Lambda function and its necessary permissions. You must configure the following attributes.

    Configuring the CloudWatch trigger

    Once the CloudFormation stack is created, navigate to the AWS Lambda function (logiq-cloudwatch-exporter) and add a trigger.

    On the Add trigger page, select CloudWatch, and then select a CloudWatch Logs Log Group.

    Once this configuration is complete, any new logs coming to the configured CloudWatch Log group will be streamed to the Apica Ascent cluster.

    Create the Logstash VM (or Docker)

CloudWatch logs can also be pulled using agents such as Logstash. If your team is familiar with Logstash and already has it in place, follow the instructions below to configure Logstash to pull logs from CloudWatch.

Install Logstash on an Ubuntu virtual machine as shown below.

    Configure Logstash

    Logstash comes with no default configuration. Create a new file /etc/logstash/conf.d/logstash.conf with these contents, modifying values as needed:

You need to download the flattenJSON.rb file and place it on your local filesystem before you run Logstash.

You can obtain an ingest token from the Apica Ascent UI as described here. You can customize the namespace and cluster_id in the Logstash configuration based on your needs.

Your AWS CloudWatch logs will now be forwarded to your Apica Ascent instance. See the Explore section to view the logs.

    Getting Started with Ascent

    The Ascent platform enables you to converge all of your IT data from disparate sources, manage your telemetry data, and monitor and troubleshoot your operational data in real-time. The following guide assumes that you have signed up for Apica Ascent in the cloud. If you are not yet a registered user, please follow this link and the defined steps. Once registered, use this guide to get started.

    Ascent Quick Start Process

    Quick Start Process for Using Ascent

All users who want to get started with Ascent should follow these five simple steps:

    In this guide, we cover the key goals and related activities of each step to ensure a quick and easy setup of Ascent.

    Step 1 - Start Ingesting Data

    For a quick video on step 1 for data ingestion, click on the link below:

    The goal is to ingest telemetry data (logs, metrics, traces) from relevant systems.

    Key actions include:

    • Identify all sources

    • Choose agents appropriate for each data type

    • Configure data collection frequency and granularity

    • Ensure data normalization

    Detailed steps to start ingesting data:

    LOG INTO ASCENT

    From the menu bar, go to: Explore -> Fleet:

    With Fleet you can automate your data ingestion configuration:

    You'll be directed to the Fleet landing page:

From here, click "Install Agent Manager." The Agent Manager allows you to control and configure the OpenTelemetry Collector.

    Inside the "Install Agent Manager" pop-up screen, select:

    • Platform: Linux

    • Agent Type: OpenTelemetry Collector

    Then, click 'Proceed'.

    You'll be redirected to the 'Fleet README' pop-up page:

• You'll download and configure these files to start ingesting data.

    You'll download 2 files:

    • The README.txt contains instructions for how to install the Agent Manager and OpenTelemetry Collector.

    • The fleet-install.sh is a preconfigured script that you'll run on your Linux host to start ingesting data into Ascent automatically:

On your Linux host, start by creating the script file (for example, open fleet-install.sh in the nano editor):

Paste the contents of 'fleet-install.sh' into the editor and save the file:

Make the script executable if needed, then run fleet-install.sh with the command below:

    • sudo ./fleet-install.sh

    Once the script completes, you'll see the agent in the Fleet screen as 'Active':

You can then confirm that data is flowing into the system (go to Explore -> 'Logs & Insights'):

    Additional Links to helpful docs include:

    Step 2 - Setup and Configure Pipeline

    For a quick video on setup and configuration of a pipeline, click on the link below:

    The goal is to transport and process the collected data.

    Key actions include:

    • Select or configure a data pipeline

    • Define data routing rules

    • Apply transformations, filtering, or enrichment if needed

    Links to related docs include:

    Step 3 - Design Queries

    For a quick video on designing queries and reports, click on the link below:

    The goal is to enable insights by querying telemetry data.

    Key actions include:

    • Understand the query language used

    • Create baseline queries for system health

    • Optimize queries for performance and cost

    • Validate query results

    Links to related docs include:

    Step 4 - Create Dashboards

    For a quick video on creating dashboards, click on the link below:

    The goal is to visualize system performance and behavior in real time.

    Key actions include:

    • Use visual components

    • Organize dashboards by domain

    • Incorporate filters

    • Enable drill-down for troubleshooting.

    Links to related docs include:

    Step 5 - Create Alerts and Endpoints

    For a quick video on creating alerts and endpoints, click on the link below:

    The goal is to detect anomalies and automate response actions.

    Key actions include:

    • Define alerting rules

    • Set up alert destinations

    • Establish escalation policies and on-call schedules

    • Integrate with incident management workflows and postmortem tools

    Links to related docs include:

    Additional Resources

    Here are helpful links to other "Getting Started" technical guides:

    Configure Apica Ascent to send alerts to your email server

  • Add and configure alert destinations like email, Slack, and PagerDuty

  • Configure SSO using SAML

  • Configure RBAC

  • Collect Data from Input Sources
    Setup and Configure Pipeline
    Design Queries
    Create Dashboards
    Setup Alerts and Workflow
    Data sources overview
    Integrations overview
    Configure pipelines
    Visualize pipelines
    Forwarding data
    Data explorer overview
    Query builder
    Widget
    Dashboards overview
    Alerts overview
    Alerting on queries
    Alerting on logs
    Getting Started with Metrics
    Getting Started with Logs
    Get acquainted with the Apica Ascent UI
    Configure your data sources
    Confirmation of Active Fleet Agent
    Log Data Flow

Parameter | Description
APPNAME | Application name - a readable name for Apica Ascent to partition logs.
CLUSTERID | Cluster ID - a readable name for Apica Ascent to partition logs.
NAMESPACE | Namespace - a readable name for Apica Ascent to partition logs.
LOGIQHOST | IP address or hostname of the Apica Ascent server. (Without http or https)
INGESTTOKEN | JWT token to securely ingest logs. Refer here to generate ingest token.
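These values are supplied as stack parameters when you create the CloudFormation stack. Below is a minimal boto3 sketch, assuming the parameter keys map one-to-one to the names above; the region and all parameter values are placeholders.

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Create the exporter stack with the parameters described above.
cloudformation.create_stack(
    StackName="logiq-cloudwatch-exporter",
    TemplateURL="https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-exporter.yaml",
    Parameters=[
        {"ParameterKey": "APPNAME", "ParameterValue": "my-app"},
        {"ParameterKey": "CLUSTERID", "ParameterValue": "my-cluster"},
        {"ParameterKey": "NAMESPACE", "ParameterValue": "production"},
        {"ParameterKey": "LOGIQHOST", "ParameterValue": "ascent.example.com"},
        {"ParameterKey": "INGESTTOKEN", "ParameterValue": "<SECURE_INGEST_TOKEN>"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)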


    Ascent 2.11.0

    Platform-Wide Highlight: Casdoor Integration

    This release introduces full integration with Casdoor, our new authentication and authorization system - a foundational upgrade to Apica’s identity and access management model.

    Key Benefits:

    • Centralized Login & Session Management: Secure, unified authentication across all modules.

    • Fine-Grained Policy Enforcement: Access control enforced at the resource level for dashboards, alerts, plugins, and more.

    • Enhanced Security:

      • Secure, HTTP-only cookies


    Observe

    New Features

    • Casdoor Authorization Integration.

    • YAML-based support added for creating:

      • Alerts

      • Queries

    Policy Management

    • A brand-new Policy Management screen is now available under the IAM section in Settings.

    • Admins can create, assign, and manage policies directly from the UI.

    • Resources such as users, dashboards, alerts, data sources, and plugins can now be managed via policies.

• A new resource table with search and a Select All option improves policy creation workflows.

    Security Enhancements

    • Casdoor sessions now use secure, HTTP-only cookies.

    • Idle session timeout is configurable; all cookies are cleared upon logout.

    • Accounts are locked after 3 failed login attempts.

    • Concurrent logins with the same credentials are blocked.

    Expanded Role and Permission Enforcement

    • Policies and permissions are now enforced for key entities including:

      • Dashboards, queries, metrics, alerts, input plugins, and services.

    • Backend access control is fully enforced using Casdoor's policy engine.

    API and Documentation Improvements

    • Flash API Swagger documentation is now complete and publicly accessible.

    • Enhanced Swagger docs for pipelines, dashboards, widgets, events, and more.

    • Tags and Resources APIs now support pagination and search.

    User Experience

    • The UI now handles session expiration and logout scenarios gracefully.

    • The app can now follow your system’s dark/light mode automatically.


    Changes and Improvements

    Performance and Usability

    • Route-level authorization prevents access via direct URLs for unauthorized users.

    • UI animations optimized for faster load times.

    • Font download size reduced by 40%.

    • Minor improvements to batch enforcement logic on the user list page.

    API and Backend

    • Tag and resource syncing refactored for data integrity.

    • Migration scripts now clean up deprecated dashboards and migrate datasource groups to policies.

    UI and Visualization

    • Dashboard and pipeline pages updated with better grouping and detail views.

    • Widgets now support:

      • Full-screen mode

      • Exponential value formatting


    Bug Fixes

    • Fixed issue where API key field was reset after editing user groups.

    • /users/me no longer crashes if permissions are missing; user is redirected cleanly.

    • Dashboard API now correctly honors Casdoor-based permissions.

    • Check group view shows accurate check counts for all levels.


    Fleet

    New Features

    Policy Management for Fleet

    • Policies can now be applied to package applications and fleet-specific actions based on user roles.

    • A streamlined UI makes it easier to manage fleet permissions via Casdoor integration.

    Agent Management

    • Agents can be filtered by hostname, version, type, or name.

    • Agent detail pages now cache responses with defined stale times for faster loading.

    • Added force refresh option to pull the latest agent data.

    Configuration and Packages

    • Deployed agents auto-refresh every 30 seconds for real-time status.

    Security and Access

    • Casbin now supports multiple roles per user, enabling more flexible access models.

    • Vault and certificate access now uses IDs instead of names.


    Changes and Improvements

    • Admin check removed for get/set account repo endpoint.

    • Agent detail performance improved with caching and backend load reduction.

    • Agent-related tables now reload in real time.

    • Fleet entities are now registered in the resource table for consistent access control.


    Bug Fixes

    • Fixed CPU spikes when agents are stopped.

    • Fixed Kubernetes agent package import failures.

    • GUI now properly supports creating configs for Kubernetes.

    • Searching on the Fleet Agents page now waits until you finish typing (not on every keystroke), which means faster searches and less load on the server.


    Ascent Synthetics

    New Features

    Scenario Management

    • GitLab integration added for repo profiles (joins GitHub, Azure, Bitbucket support).

    • Browser Scenario updates:

      • Drag & drop steps

      • Real-time progress updates

    UI/UX Enhancements

    • Auto-refresh added to all check views (Map, List, Grouped).

    • Upload buttons and labels better aligned.

    • Improved error handling for certificate creation.

    • Icons and padding polished across scenario views.

    Policy & Permissions

    • Tag management and private location access now governed by policies.

    • SLA alignment and group filters improved on check visualizations.

    • Checks data source now supports multi-check results.


    Changes and Improvements

    • Improved validation for certificate management.

    • Button labels and filter UI match latest design guidelines.

    • Refined scenario group management and conditional rendering.

    • Kafka client updated to use franz.


    Bug Fixes

    • Removed duplicate entries in casbin_user_rule and fixed group mismatch.

    • Fixed persistence issue with permission-role mappings on Flash restart.

    • Addressed check runner bugs with self-signed certs and resource URL normalization.

    • Zebratester checks now render all expected steps.


    Flow

    New Features

    • Forward rules now integrated into pipeline processing.

    • Enriched Swagger documentation for pipelines with examples.

    • New Visualize mode added to the pipeline management page.


    Changes and Improvements

    • Build process updated with stricter test enforcement.

    • Server-side filtering and execution order support for pipelines.

    • Grouping enhancements for shared rules by namespace and app.


    Bug Fixes

    • Fixed graph rendering issues in pipeline metrics.

    • Resolved certificate creation and encrypted secret issues in Vault.

    • The group dropdown selector in the Rules tab for Pipelines rules now appears correctly.

    • You can now search for alerts by name or keyword in the Alerts section of the Pipelines dashboard, and the results will update immediately.


    IronDB

    New Features

    • Grafana plugin updates:

      • Cleaned up IRONdb datasource links

      • Updated signing and deployment instructions

      • (Planned) support for React-based plugin development


    Changes and Improvements

    • Lowered broker startup memory footprint.

    • Upgraded Flatbuffers for security.

    • Enhanced Kafka config via librdkafka.


    Bug Fixes

    • Fixed Coverity and ASAN build issues.

    • Resolved Prometheus ingestion problems during IRONdb reconstitute.

    • Fixed Graphite Web handling of empty arrays.

    • Addressed logging and dropped message scenarios in IRONdb Relay.


    ASM Legacy

    New Features

    • New API endpoints added for check management in Ascent On-Prem.

    • Improved support for editing and viewing all check types.


    Changes and Improvements

    • Updated default RTSE settings.

    • Improved integration profile and webhook handling.


    Bug Fixes

    • Fixed visibility issues for Videnca reports on specific silos.

    • Fixed filtering and interval behavior for manual mobile app checks.

    • ASM API fixes for check config and host exclusions.

    • SSL handshake and script execution issues resolved.


    General Improvements

    • Documentation: Swagger specs enhanced across the platform.

    • Performance: Backend syncing tasks are faster and more stable.

    • Accessibility: UI improvements for dark mode, screen responsiveness, and keyboard navigation.

    • Reliability: Better handling of long agent names, check filters, and data alignment across pages.


    Component Versions - Ascent v2.11.0

    Components
    Version

    Ascent 2.1.0

    Discover the latest advancements and improvements of the Apica Ascent platform. This is your go-to destination for updates on platform enhancements and new features. Explore what's new to optimize your observability and data management strategies.


    Data Fabric

    Release v3.7 (February 11, 2023)

    Welcome to the latest update of our product! We are excited to introduce several new features and improvements designed to enhance user experiences.

    Refined User Interface:

    • Introduced a refined User Interface across the app, enhancing user experience on the following pages:

      • Search

      • Data explorer

      • Topology

    Infrastructure with Honeycomb View:

    • This view offers users a bird's-eye view of all flow statuses on a single page.

    • Users can customize group-by options like namespace, application, and severity to analyze the flow status of the entire stack.

    • Flexible time range selection allows users to analyze their stack effectively.

    Counter Widget in Explore Page

    Added a new counter widget on the Explore page, enabling users to monitor ingested Trace volume across selected time ranges.

    Query Snippets

    Added Query Snippet templates, allowing users to create and insert query snippets from the settings page into the query editor using keyboard triggers/shortcuts.

    ASM Plus

ASM Plus is a new offering enabling users to analyze their ASM synthetic check data in OpenTelemetry (OTel) format. Features include viewing check data as an OpenTelemetry trace, page-level check execution details in a timeseries graph, a check aggregator view with dynamic pivot table visualization, and a check analysis view offering various visualizations like Waterfall chart, Flame graph, and Graph view.

• View check data as an OpenTelemetry trace in ASM Plus.

    • Check execution details (page level) view in a timeseries graph. Users can select different check attributes to analyze the check execution data.

    • Check aggregator view

• Provide a dynamic pivot table for visualizing the check data in different formats like tabular, line chart, bar graph, etc. We have also added a feature where users can export their pivot table data in Excel format for further analysis.

    New Forwarder for ServiceNow ITOM Event Management Connectors API:

    • Added a new forwarder to facilitate integration with ServiceNow ITOM Event Management Connectors API.

    New Query Parameter Type - Duration List:

    • Introduced a new Query parameter type called Duration list, enabling users to create a dropdown of relative time durations in templatized queries.

    Improved Dashboard Widgets Visualization:

    • Enhanced dashboard widgets visualization by smoothing the data for better presentation.

    Thank you for choosing our product! We hope you enjoy these new features and improvements. Should you have any questions or feedback, please do not hesitate to contact us.

    Data Fabric Release v3.7.1 (March 11, 2024)

    Bug Fixes:

    ALIVE Graph and Summary Fixes: Corrected issues where the "select-all" function wasn't applying across all pages in the ALIVE graph and the pattern index and y-axis didn't match in the summary table.

    ALIVE Page Navigation: The "psid log select-all" operation now correctly spans across all pages instead of just the current one.

    Browser Compatibility: Resolved a bug where the Check analysis view was breaking specifically in old Firefox browsers.

    UI and Display Fixes: Made improvements to various UI elements such as ensuring subject time intervals adhere strictly to different function screens and fixing issues with long horizontal content on the ALIVE summary page.

    Query and Data Handling: Handled edge cases where errors in results could lead to spans having no data.

    Performance and Functionality: Made improvements to several areas such as handling ingest ratelimiters more effectively, reducing open connections errors, and enhancing byte buffer pool performance.

    Enhancements:

    Dashboard Widget: Improved the overflow behavior for Alive Filter tags on the dashboard page for better visibility and usability.

    User Experience: Enhanced the Add widget dialog by fixing issues related to selecting visualization types and restricting multiple API calls while using the "Add tag" feature.

    Other Improvements:

    Performance Optimization: Made improvements to several backend processes, including moving from ReadAll to io.Copy for better performance and memory benefits.

    License Management: Fixed issues with licenses not syncing correctly and removed unknown fields from license display.

    Code Maintenance: Made updates to code repositories for better version parity and improved rules page images display.

    We're continuously working to enhance your experience with Apica Ascent Development, and we hope you find these updates valuable. If you have any questions or feedback, please don't hesitate to reach out to us. Thank you for choosing Apica!


    Synthetic Monitoring

    ASM 13.24 Public Release Notes (2024-04-12)

    User Story Enhancements

• Updated the Compound Check type to run on the latest infrastructure

• Added a new supported Selenium IDE command, setLocation

• Added missing attributes to the response bodies of the /users and /users/{user_guid} API GET request endpoints

• Added several new ASM commands to the ASM Manage Scenarios front end. See the Knowledge Base for a complete list of supported Selenium IDE commands. Now, all of the commands listed in that article are available in the ASM Edit/Debug Scenarios page.

    Tasks

    • ASM users now have the option to disable automatic page breaks when creating Browser checks:

    Bug Fixes

    • Fixed an issue in which checks were not correctly saved when an incorrect inclusion/exclusion period was used and the user was not notified of a reason. After the fix, users will be notified explicitly if their inclusion/exclusion period is incorrect.

    • Fixed an issue which prevented custom DNS from being used on the latest infrastructure

    • Fixed an issue which prevented an error message from being generated and displayed in the event that auto refresh fails to refresh a Dashboard.

• Fixed an issue which prevented Power Users who had limited editing permissions from saving checks. For instance, Power Users who could edit only the name, description, and tags of a check could not save the check after doing so. The bug fix resolved this issue.

    • Fixed an issue which prevented API calls from returning correct responses when a new user’s time zone was not set

• Fixed an issue which prevented spaces between values in the “accepted codes” field for a URLv2 check:

    • Updated API documentation for URL, URLv2 checks to include acceptable "secureProtocolVersion" values

    • Fixed an issue with Ad Hoc report generation for certain users

    • Fixed issues which prevented Command checks from being created or fetched via the ASM API.

    Epic

    • Disabled the option to select "Firefox" on browser checks

    • Disabled location information in the API for deprecated checks

    • Disabled old Chrome versions when creating a Chrome check

    • Disabled location information in the API for deprecated Chrome versions

    Read previous Release Notes, go to:


    Synthetic Monitoring On Premise

    On Premise ASM Patch 13H.4 Public Release Notes (2024-04-19)

    User Story Enhancements

    • Added the ability to add/edit “Accepted Codes”, “Port Number” and all “Secure Protocol Versions” for URLv1 checks via the ASM API. API documentation was updated to reflect the new functionality.

    • Added SNI (Server Name Indication) support for URLv1 checks

    Bug Fixes

    • Fixed an issue which prevented Power Users with limited check editing permissions from saving checks after performing edits.

    Read previous Release Notes, go to:


    Advanced Scripting Engine

    Major Release V7.5-B (Installation Kit dated April 17, 2024)

    ZebraTester 7.5-B release contains the following new features.

    • Support for Color Blindness: To improve support for vision impairments and color blindness adaptation we have added new themes to the GUI configuration section.

• Ability to change request method from the ZT GUI: Users can change the request method directly from the ZT GUI. Depending on the request method, the Request body field is shown and enabled or hidden.

• Support user agent details from a file: Provides an option in the ZT personal settings GUI area where users can upload a JSON file containing the latest User-Agent details.

• Updated Browser Agent List: The list of current and latest browser agents has been updated.

• Option to Disable Page Breaks: Option to comment out/disable a page break in the recorded session.

In addition, the ZebraTester V7.5-B release contains the following bug fixes and improvements:

    • Bug Fix for XML extractor giving 500 internal error in ZT scripts.

    • .Har file conversion issue.

    • Conflict when using variables as Mime Type validation.

• ZebraTester auto-assign fix.


    IRONdb

    Release Version 1.2.0

    NOTE: This release bumps the metric index version from 4 to 5. Upon restart, new indexes will be built and the old ones will be deleted. This process will use a significant amount of memory while the indexes are being rebuilt. It will also cause the first post-update boot to take longer than usual.

    • Update index version from 4 to 5.

    • Automatically clean up old index versions on startup to make sure outdated indexes don't clog the disk.

    • Fix Ubuntu 20.04 specific bug where nodes could crash when trying to clean up status files when rolling up raw shards.

    • Fix issue with level indexes where data was being lost when deleting metrics on levels where the metric has multiple tags.

    Ascent 2.10.2

    ASM 13.34.0

    Browser Behavior Improvements:

    • We've resolved an issue where, after upgrading to Chrome 130, some users were experiencing a different behavior. Specifically, the client was triggering the mobile/collapsed version of the web application instead of the desktop version, which was causing Selenium scenarios to fail. This has been corrected to ensure a consistent experience.

    • We've also fixed a problem where certain requests were incorrectly reported as URL errors in Chrome 130. Previously, these requests were reported as "cancelled" without throwing an error in Chrome 115. This update ensures more accurate error reporting.

    • These changes should provide a smoother and more reliable experience with ASM.

    Ascent Synthetics

    New Features

    Check Type

    • Moved several check types from Tech Preview to General Availability (Browser, Compound, Mobile, Postman and Traceroute)

    Monitor Groups

    • Introduced hierarchical monitor groups with sub-group support for improved check organization

    • Added user assignment capabilities for more granular access control

    • Implemented multi-check assignment functionality to streamline group management

    Visualization Enhancements

    • Deployed monitor group visualization of checks to provide status indicators across group hierarchies

    • Released comprehensive SLA Dashboard with performance trend analytics and success/failure metrics

    • Integrated 24-hour SLA uptime status directly within all check list view and monitor group view

    Operations View

    • Launched a consolidated operations view for comprehensive check monitoring

    • Enhanced check status indicators with consistent severity coloring for improved readability

    Scenario Management

    • Delivered unified management for Browser and ZebraTester scenarios

    • Added multi-deletion support to improve workflow efficiency

    • Implemented file type filtering (.html, .zip) for better scenario organization

    • Enhanced test execution with customizable browser, version, and location settings

    Repository Management

    • Refined repository settings interface for improved usability

    • Added private location association functionality for more flexible repository configuration

    UI/UX Improvements

    • Added multi-select filter capability in Check Visualization and Manage Check groups views

    • Consolidated navigation by integrating Manage Checks, Scenario Management, and Operations View as tabs

    • Refined location display format to include flag, country code, and city name

    • Upgraded status code selection with multi-select interface

    Bug Fixes

    • Fixed check results display problems that were causing missing data

    • Corrected search functionality in Check Group View

    Fleet Management

    New Features

    • Advanced Search Redesign: Saved queries now appear directly on the main screen, and we've added visual backgrounds to search groups so you can better organize your queries.

    Improvements

    • Table Design Enhancements:

      • Action buttons display in a line with helpful color coding

• Better Documentation for Attributes and Secrets: We've added helpful explanations about:

  • What attributes are and how they help you categorize and manage fleet components

  • How secrets work to keep your sensitive information like passwords and API keys secure

    Bug Fixes

    • CPU Usage Optimization: Good news! We've fixed that frustrating issue where stopping an agent would cause the agent-manager service to max out your CPU.

    • Tech Preview Features: Tech Preview items are now disabled by default.

    Platform-Wide Improvements

    Security Enhancements

• Cookie Security: We've strengthened session cookie security by implementing the HttpOnly and Secure attributes, giving you better protection against certain types of attacks.

    Performance Optimizations

    • Database Connection Pooling: We've fine-tuned how Ascent connects to databases, resulting in snappier performance across the platform.

    System Stability

    • Syncable Leader Selection: We've fixed issues that could occur during system updates, making the platform more reliable during maintenance windows.

    • First-time Access: First impressions matter! We've fixed those annoying error messages some users experienced when accessing the platform for the first time.

    Flow

    Pipelines (New)

    Pipeline Dashboard & Navigation

    • New Pipeline Dashboard: We've designed the Pipeline dashboard with intuitive summary cards showing total pipelines, data flows, and pipeline rules at a glance.

    • Enhanced List View: Pipeline lists include actionable information with good filtering options:

      • Search by pipeline name, rule type, namespace, application, and state

    Pipeline Configuration

• Grouped Graph View: We've replaced the previous pipeline graph visualization with a clearer grouped pipeline view that shows the multiple rules inside each pipeline, for a better understanding of your data flows.

    • Easier Pipeline Creation: You can now create new pipelines and attach them to your currently selected namespace:application (dataflow) right from the pipeline view.

    • Rules Management:

    Rules Enhancements

    New and Improved Rules

• Stream Rule: This powerful rule helps you redirect data flows to other streams for better data management.

    • Forward Rule Enhancements:

      • Support for multiple attribute renaming with regex patterns in forward rule creation.

      • Enhanced preview functionality to properly display changed logs affected by rules.

    Rule Management Interface

    • Refactored Rule Interface: We've completely rebuilt our rule creation and editing interface:

      • New tabbed interface for easier rule configuration

      • Separated code blocks into their own tab for cleaner organization

      • Improved form fields with better validation

    Performance and Stability

    Metrics and Monitoring

    • Enhanced Pipeline Metrics: Added detailed metrics to help you track:

      • Pipeline execution time by pipeline ID/name, namespace, and application

      • Logs dropped in each pipeline

      • Number of new events created in new streams

    System Improvements

    • Cache Management: Pipeline cache now updates automatically on pipeline configuration changes.

    • Rule Processing: Rule type workers now operate at the channel level instead of globally, improving processing efficiency.

    Terminology Updates

    • 'Events' to 'Journals': We've updated our terminology from "events" to "journals" throughout the interface for better clarity.

    Observe Platform

    Authentication & User Management

    • SAML Implementation: Redesigned authentication flow provides more reliable enterprise login experiences

    • Contextual UI: SAML login option now intelligently displays only when configured in your environment

    Dashboards & Visualization Capabilities

    • New Chart Types:

      • Pie charts for proportion visualization

      • Race bar charts for temporal comparisons

      • List views with dynamic field selection

    ALIVE Analytics

    • Compare Functionality:

      • Integrated search capabilities for targeted analysis

      • Support for granular anomaly type classification

      • Enhanced visualization components with improved color differentiation

    Query System

    • Editor Improvements:

      • Confirmation safeguards to prevent unintended overwrites

      • Enhanced query execution monitoring

      • Fixed statistical output anomalies

    IronDB

    Enhancements

    • Prometheus Support: IronDB now fully supports Prometheus data through our noit metric director. We've added the ability to decode Prometheus protobuf data and ingest it directly into the system, expanding your metrics collection options.

    Component Versions - Ascent v2.10.2

    IRONdb

    Changes in 1.5.1

    2025-06-17

    • Add message field to /find/tags estimates for enhanced clarity.

    Amazon CloudWatch ( YAML )

Apica Ascent connects to Amazon CloudWatch using the boto3 client, with the help of the AWS CloudWatch data source, making it easy for you to query CloudWatch metrics using their native syntax and to analyze, monitor, and visualize the data.

Before you query your CloudWatch data, you must set up authentication credentials. Credentials for your AWS account can be found in the IAM Console. You can create a new user or use an existing one. Go to Manage access keys and generate a new set of keys.
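As an illustration of the kind of query the data source issues, here is a minimal boto3 sketch. The region, access keys, and instance ID are placeholders; in Ascent these values come from the data source settings rather than from code you write.

import boto3
from datetime import datetime, timedelta

# Placeholder credentials/region; use the keys generated in the IAM Console.
cloudwatch = boto3.client(
    "cloudwatch",
    region_name="us-east-1",
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)

# Example: average CPU utilization for one EC2 instance over the last hour.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in response["Datapoints"]:
    print(point["Timestamp"], point["Average"])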

    receivers:
      hostmetrics:
        collection_interval: 60s
        scrapers:
          cpu:
            metrics:
              system.cpu.utilization:
                enabled: true
          memory:
            metrics:
              system.linux.memory.available:
                enabled: true
              system.memory.utilization:
                enabled: true
          disk:
          network:
          load:
          filesystem:
            include_virtual_filesystems: true
            metrics:
              system.filesystem.inodes.usage:
                enabled: true
              system.filesystem.usage:
                enabled: true
              system.filesystem.utilization:
                enabled: true
          paging:
          processes:
      filelog:
        include:
          - /var/log/syslog
          - /var/log/auth.log
        start_at: beginning
        operators:
          - type: add
            field: attributes.log_source
            value: ubuntu
          - type: move
            from: attributes["log_source"]
            to: resource["log_source"]
    processors:
      attributes/os:
        actions:
          - key: ostype
            value: "linux"
            action: upsert
      attributes/host:
        actions:
          - key: hostname
            value: "{{$ .Agent.host_name }}"
            action: upsert
      batch:
        send_batch_size: 1000
        timeout: 5s
        
    exporters:
      debug:
        verbosity: detailed
      prometheus:
        endpoint: 0.0.0.0:9464
      otlphttp/apicametrics:
        compression: gzip
        disable_keep_alives: true
        encoding: proto
        metrics_endpoint: "{{$ .Agent.secret.otelmetrics.endpoint }}"
        headers:
          Authorization: "Bearer {{$ .Agent.secret.otellogs.token }}"
        tls:
          insecure: false
          insecure_skip_verify: true
      otlphttp/logs:
        compression: gzip
        disable_keep_alives: true
        encoding: json
        logs_endpoint: "https://[ENV_URL_HERE]/v1/json_batch/otlplogs?namespace=Linux&application=otellogs"
        headers:
          Authorization: "Bearer {{$ .Agent.secret.otellogs.token }}"
        tls:
          insecure: false
          insecure_skip_verify: true
        sending_queue:
          queue_size: 10000
    extensions:
    service:
      extensions:
      pipelines:
        metrics/out:
          receivers: [hostmetrics]
          processors: [attributes/host, attributes/os]
          exporters: [otlphttp/apicametrics]
        logs/out:
          receivers: [filelog]
          processors: [attributes/host, batch]
          exporters: [otlphttp/logs]
    https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-lambda-logs-exporter.yaml
    https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-cloudtrail-exporter.yaml
    https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-flowlogs-exporter.yaml
    https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-exporter.yaml
    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
    sudo apt-get update
    sudo apt-get install logstash
    
    # Install Logstash logstash-input-cloudwatch
    cd /usr/share/logstash
    sudo -u root sudo -u logstash bin/logstash-plugin install  logstash-input-cloudwatch
    input {
      cloudwatch_logs {
access_key_id => "<Access-key>"
        secret_access_key => "<secret-access-key>"
        region => "<region>"
        "log_group" => ["<Cloud-watch-log-group"]
        "log_group_prefix" => true
        codec => plain
        start_position => end
        interval => 30
      }
    }
    
    filter {
            ruby {
               path => "/home/<custom-path>/flattenJSON.rb"
               script_params => { "field" => "cloudwatch_logs" }
            }
            
    	mutate {
    
               gsub => ["cloudwatch_logs.log_group","\/","-"]
               gsub => ["cloudwatch_logs.log_group","^-",""]
    	   add_field => { "namespace" => "<custom-namespace>" }
    	   add_field => { "cluster_id" => "<custom-cluster-id>" }
    	   add_field => { "app_name" => "%{[cloudwatch_logs.log_group]}" }
    	   add_field => { "proc_id" => "%{[cloudwatch_logs.log_stream]}" }
            }
    }
    
    
    output {
     http {
           url => "http://<ascent-endpoint>/v1/json_batch"
           headers => { "Authorization" => "Bearer <SECURE_INGEST_TOKEN>" }
           http_method => "post"
           format => "json_batch"
           content_type => "json_batch"
           pool_max => 2000
           pool_max_per_route => 100
           socket_timeout => 300
          }
    }
    
  • Configurable idle timeouts (default 20 mins)

  • Rate-limiting and account lockout after failed logins

  • Restriction on concurrent sessions

  • Future-Ready: Lays the groundwork for modern authentication methods including OAuth and biometric logins like Face ID and for advanced features such as user impersonation and compliance-driven access control.

  • Role and Policy Sync with Casdoor When you delete a role in Ascent, all related user-role and policy mappings are now also removed from Casdoor, keeping your identity platform and Ascent in sync.

  • Data sources

  • Dashboards (via DataExplorer)

  • Oracle DB is now supported as a data source.

  • The modal for selecting resources within policy management has been improved. It now includes a search bar, a better-organized list, multi-select, a preview option, and clear action buttons, making large resource lists much more manageable.

  • Sessions now terminate explicitly during logout.

  • TLS/SSL usage enforced across the board.

  • Session login ID is rotated after each login to prevent fixation attacks.

  • The batch enforcement function for Ascent resources now includes working pagination and user feedback, including loading indicators for longer operations.

  • When typing in the dashboard search bar before dashboard data loads, search text will be preserved and the search will still run when the data is ready.

  • Visual fixes for cancel buttons, logo alignment in shared dashboards, and widget resizing.

  • It’s now possible to see whether dashboards and queries are published or not. This visibility was missing in previous versions.

  • Counter visualizations now support text color changes and unselection behavior.

  • Data Explorer no longer hangs when a saved query is deleted.

  • Visual fixes for dark mode, Y-axis labels, and widget resizing.

  • Logout message now shown properly; session timeouts handled smoothly.

  • Alerts page behaves predictably when data is missing; no more random column rendering.

  • Email formatting fixed in “Generate Password” emails.

  • Reduced redundant calls to /api/alerts.

  • Corrected role-to-group linkage.

  • Search fields in Queries now handle special characters like % correctly, so all queries return as expected when you use symbols.

  • CI pipeline enhancements with mandatory test coverage.

    When a new config file is assigned to an agent, its deployment state is updated correctly and won’t remain stuck as “new.”

    Cancel during test run

  • Bulk step deletion

  • Improved tooltips

  • Check runner supports host exclusions for proxy cases.

  • Extended function support: encode, decode, text, mask, net, time.

  • Various fixes for compound checks, location rendering, and UI spacing.

  • Sorting scenarios by name now works as intended in the Scenario Management area.

  • The action buttons under Pipelines now include explanatory tooltips. Hovering over an icon will show what it does.

    Fixed compilation with GCC 13 and Ubuntu 24.04.

    Fixed deprecated URL references and UI bugs in JSONPath extractor.

Check Execution Container: Runbin | runbin-2025.04.17-0-base-2.2.1
Check Execution Container: Postman | postman-2025.04.17-0-base-1.4.1
Bnet (Chrome Version) | 10.2.1 (Chrome 130)
Zebratester | 7.0B
ALT | 6.13.3.240
IronDB | 1.5.0
Flash | v3.16.1
Coffee | v3.17.5
ASM | 13.36.3
NG Private Agent | 1.0.9
Check Execution Container: Browser | fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0
Check Execution Container: Zebratester | zt-7.5a-p0-r-2025.04.02-0-base-1.2.0

    Pipeline

  • Dashboards

  • Query/Report editor

  • Implemented dynamic quick date-time selection for granular control, empowering users to specify any date range they desire, not limited to predefined time ranges.

  • Provides a timeseries graph for various kinds of service names.

  • Check analysis view provides an option to view the check results data in the following visualizations:

    • Waterfall chart

    • Flamegraph

    • Graph view

  • Fixed the following API call: https://api-wpm.apicasystem.com/v3/Help/Route/GET-checks-proxysniffer-checkId-results-resultId-errorlog which was returning a 500 server error previously.

  • Fixed an issue with certain checks which prevented Request & Response Headers from showing correctly within the Check Details page:

  • Disabled deprecated check types from the "create new check"

  • Disabled deprecated check types from the integration wizard

  • Disabled API endpoint for URLv1 checks

  • Disabled API endpoint for Command v1 checks

  • Disabled deprecated check types from /checks/command-v2/categories

  • Disabled deprecated browser version from /AnalyzeUrl

  • Replaced Firefox with Chrome when creating an iPhone, iPad, or Android Check in New Check Guide

  • Removed deprecated check versions as options from the Edit Scenario page

  • Disabled AppDynamics check types from the integration wizard

• Variables as Page Break Names: Users can use variables when setting page-break names to make scripts more dynamic.

• Add OR condition for content type validation: Users can test a logical OR condition for content type validation.

• ZebraTester Controller Pull file (.wri): Users can pull files from the ExecAgent that have been written by the "writetofile" feature. The files are pulled to the controller like any other out/err/result file.

• WebSocket Extension (MS1): Extends the WebSocket implementation capabilities of ZebraTester, allowing users to conduct more comprehensive testing of WebSocket-based applications. A detailed guide on how to use the WebSocket extension has been added to the documentation folder.

• Fix for time zone lists: shows the Java standard supported time zones without the deprecated ones.

  • Detailed Replay logs in ZT (extended logs)

  • ALPN Protocol Negotiation

  • Page Break - Threshold Breach (Trigger & Abort)

  • Library Update (Update JGit library): Updated the JGit library to the latest version to leverage new features and improvements.

  • Fix issues with JavaScript editor in ZT.

  • Fix issue where level indexes were incorrectly reporting that levels existed when all underlying metrics had been removed.

  • Add new API endpoints, /compact_indexes and /invalidate_index_cache, that allow forcing compaction and cache invalidation for specific accounts, respectively.

  • Fix rollup bug where raw shards could be prematurely deleted if a rollup was aborted due to corruption.

  • Fix various potential memory corruption issues.

  • Fix issue where jlog journal data could get corrupted.

  • libmtev 2.7.1


  • Agent Details Page Refinements:

    • Fixed width issues in the configuration file tab

    • Improved the assignment visibility toggle for better clarity

    • Updated card colors in the Fleet Summary Table to match our design language

  • Visual identification of rule types with color-coded icons for each rule category
  • Quick access to pipeline metrics directly in the list view

  • Improved Navigation:

    • Pipeline names are clickable, opening a dashboard with preselected pipeline

    • Dataflow entries are clickable, opening a dashboard with preselected pipeline, namespace, and application to quickly view metrics and graphical data

    • Action buttons are accessible with clearer visual indicators

  • Rules are now brought directly to the pipeline page for easier access
  • Pipelines are decoupled from forwarders. Configure Pipeline modal no longer shows 'Forward Rule' when adding new rules.

  • 'Enable Code' switch is enabled by default

  • Preview Improvements:

    • Added GenAI option for generating sample logs in configure pipeline page.

    • Loading indicators when fetching sample logs, namespaces, or applications.

    • Better handling of empty dropdowns and state management

    • Replaced dataflow dropdown with direct namespace/application selection for clarity

  • Event Suppression: Added UI support for suppressing duplicate events (deduplication) within filter rules, allowing you to:

    • Set suppression duration periods

    • See aggregated events when the suppression period ends

  • Visual Rule Identification: Each rule type now has distinctive icons in the pipeline view for quick visual recognition.

  • Dense-status charts with multi-select labeling

  • Data Explorer Enhancements:

    • Logarithmic scale for Y-axis to better visualize exponential data

    • Time range bookmarking for reproducible data analysis

    • Multi-column plotting with GroupBy operations

    • Improved numerical representation for small values

  • Dashboard Management:

    • Corrected shared link functionality

    • Streamlined dashboard import process

  • Dual metric display showing both absolute counts and percentages

Component | Versions
Coffee | v3.16.4
Flash | v3.15.2
ASM | 13.34.0
NG Private Agent | 1.0.8
Check Execution Container: Browser | fpr-c-130n-10.2.1-716-r-2025.04.02-0-base-2.0.0
Check Execution Container: Zebratester | zt-7.5a-p0-r-2025.04.02-0-base-1.2.0
Check Execution Container: Runbin | runbin-2025.04.17-0-base-2.2.1
Check Execution Container: Postman | postman-2025.04.17-0-base-1.4.0
Bnet (Chrome Version) | 10.2.1 (Chrome 130)
Zebratester | 7.0B
ALT | 6.13.3.240
IronDB | 1.5.0

• Improve reconstitute performance.

  • Fix a bug where the first data point for an NNTBS metric for a rollup could be written with the wrong timestamp during a reconstitute.

• Add new iterate style, seek, for sending data during reconstitute. This mode seeks to specific metrics rather than iterating the whole shard during the sending phase. This can be set via the reconstitute/nntbs@iterate_surrogates_for_send_style field - iterate is the old style and seek is the new style. The default value is iterate.

  • Remove the jindexer subscriber when the use_indexer field is disabled for journals, which will prevent unnecessary journal data retention.

  • Improve handling when mmap operations fail loading surrogate database or indexing files.

  • On delete responses, the X-Snowth-Incomplete-Results header is now always returned, and set to false when results are complete.

  • Remove unneeded checks for flatbuffer availability in journaling.

Changes in 1.5.0

    2025-05-08

    • Avoid excess CPU usage during replication when a node is in journal-only mode.

    • Fix issue with stalling in journal-only mode due to check tag replication.

    • Remove unneeded checks for flatbuffer availability in journaling. Non-flatbuffer journals were removed long ago.

    • Fix journal-only mode so that it exits when the replication journals are fully drained.

    • Disable the graphite, prometheus, opentsdb, check tag replication, and monitor modules when running in reconstitute or journal-only modes.

    • Remove Pickle support from Graphite module.

    • Reduce error log volume on text reconstitute errors.

• Fix issue where the opentsdb and prometheus modules were incorrectly reporting that the topology was wrong when attempting to load data onto a node that is reconstituting. Nodes will now report that the service is unavailable.

    • Improve libsnowth listener error to include the IP address of the remote side of the socket.

    • Accelerate NNTBS reconstitute by avoiding memory copies.

    • Reduce unnecessary NNTBS data copies during reconstitute to improve speed.

    • Write NNTBS data out in parallel during reconstitute to improve speed.

    • Add parameter to text reconstitute data fetching to allow excluding data older than the provided time.

    • Improve text reconstitute performance by doing fewer surrogate id lookups.

• Deprecate configuration parameters reconstitute/nntbs@batchsize and reconstitute/nnt@batchsize. Use reconstitute/nntbs@lmdb_commit_batch_size instead.

    • Add new flag, reject_data_older_than, to the <text> config stanza that will disallow text data older than the provided time from being ingested, whether by standard ingestion or reconstitute.

    • Improve NNTBS and histogram rollup reconstitute performance by no longer fetching shards that fall outside of any configured retention window.

    • Inhibit retention-related deletion of NNTBS and histogram rollup shards while a reconstitute is in progress.

    • Add error handling of invalid flatbuffer records found while replaying metric index journals. Optionally these invalid records can be saved to files. The number of replay errors is now tracked in the /state API.

    • Treat MDB_NOTFOUND errors returned from LMDB transaction puts as corruption.

    • Only flush column families in the raw database that finish rolling up. Previously all column families were flushed unconditionally, including those that had not finished rolling up or had long since finished.

    • Fix snowthsurrogatecontrol use-after-free error.

    Changes in 1.4.0

    2024-11-05

    NOTE: This release deprecates legacy histograms. Histogram shards must be configured before upgrading to this release. If this is not done, nodes may not start up after the upgrade.

    • Fix use after free bug that could occasionally happen due to a race when fetching raw data.

    • Fix potential memory leak on certain oil/activity data operations.

    • Fix fetch bug where C-style calloc allocations were being mixed with C++-style deletes.

• Add new parameter to whisper config, end_epoch_time, that takes an epoch timestamp and directs the code to not look in whisper files if the fetch window starts after this time.

    • Fix bug where histogram ingestion data was not being sent properly during rebalance operations.

    • Fix bug where histogram rollup data was not being reconstituted during reconstitute operations.

    • Add get_engine parameter to histogram data retrieval to allow pulling from either rollups or ingestion data.

    • No longer open new LMDB transactions when reading data for merging NNTBS blocks together.

    • Remove all support for legacy, non-sharded histograms.

    • Fix bug where if a raw shard rollup was aborted after being scheduled but before actually starting, multiple rollups could end up triggering at once.

    • Fix rename bug where type was not getting set when failing to send NNTBS data.

• Add a new header, X-Snowth-Activity-Data-Mode, to the /rename endpoint, which can accept either use_existing or create_new as values.

    • Treat MDB_CORRUPTED, MDB_PAGE_FULL, MDB_TXN_FULL, MDB_BAD_TXN, and ENOMEM as LMDB corruption consistently when checking for errors.

    • When using the /merge/nntbs endpoint to send data to a node, allow either updating the receiving node's activity data using the incoming NNTBS data or leaving it as is and not updating it.

    • Fix bug where activity data was not being updated correctly when inserting NNTBS data.

    • Fix bug where rollups were marked clean after a rollup had been kicked off asynchronously, resulting in a race that could lead to shards being incorrectly considered dirty.

    • Deprecate support for rebalancing data into a cluster with fewer NNTBS periods.

    • The /rename endpoint will now detect when it gets a 500 error from the /merge/nntbs endpoint and will return an error instead of spinning forever.

    • The /merge/nntbs endpoint will no longer crash on detecting corrupt shards; it will offline the shards and return errors.

    • Various small fixes to reduce memory consumption, improve performance, and prevent possible crashes or memory corruption.

    Changes in 1.3.0

    2024-07-17

    • Fix bug in build_level_index where we were invoking a hook that called pcre_exec with an uninitialized metric length.

    • Reduce spam in error log when trying to fetch raw data for a metric and there isn't any for the requested range.

    • Add new API endpoint, /rename, to allow renaming a metric. This calculates where the new metric will live, sends the data for the metric to the new location, then deletes the old metric. This only works for numeric metrics.

    • Add new API endpoint, /full/canonical/<check uuid>/<canonical metric name> that will allow deleting an exact metric from the system without using tag search.

    • Add ability to skip data after a given time when using the copy sieve in snowth_lmdb_tool.

    Changes in 1.2.1

    2024-06-04

    • Avoid metric index corruption by using pread(2) in jlog instead of mmap(2).

    • Deprecate max_ingest_age from the graphite module. Require the validation fields instead.

    • Change Prometheus module to convert nan and inf records to null.

• Add logging for when the snowth_lmdb_tool copy operation successfully completes.

    • Fix bug where a node could crash if we closed a raw shard for delete, then tried to roll up another shard before the delete ran.

    • Fix bug where setting raw shard granularity values above 3w could cause data to get written with incorrect timestamps during rollups.

    • Improve various listener error messages.

    • Add checks for timeouts in the data journal path where they were missing.

    • Improve graphite PUT error messages.

    • Fix NNTBS rollup fetch bug where we could return no value when there was valid data to return.

    • Fix bug where histogram rollup shards were sometimes not being deleted even though they were past the retention window.

    Changes in 1.2.0

    2024-03-27

    NOTE: This release bumps the metric index version from 4 to 5. Upon restart, new indexes will be built and the old ones will be deleted. This process will use a significant amount of memory while the indexes are being rebuilt. It will also cause the first post-update boot to take longer than usual.

    • Update index version from 4 to 5.

    • Automatically clean up old index versions on startup to make sure outdated indexes don't clog the disk.

    • Fix Ubuntu 20.04 specific bug where nodes could crash when trying to clean up status files when rolling up raw shards.

    • Fix issue with level indexes where data was being lost when deleting metrics on levels where the metric has multiple tags.

    • Fix issue where level indexes were incorrectly reporting that levels existed when all underlying metrics had been removed.

    • Add new API endpoints, /compact_indexes and /invalidate_index_cache, that allow forcing compaction and cache invalidation for specific accounts, respectively.

    • Fix rollup bug where raw shards could be prematurely deleted if a rollup was aborted due to corruption.

    • Fix various potential memory corruption issues.

    • Fix issue where jlog journal data could get corrupted.

    Changes in 1.1.0

    2024-01-02

    • Add preliminary support for operating IRONdb clusters with SSL/TLS. This allows securing ingestion, querying, and intra-cluster replication. See TLS Configuration for details. This feature should be considered alpha.

    • Fix bug where rollups were being flagged "not in progress" and "not dirty" when attempting to schedule a rollup and the rollup is already running.

    • Use activity ranges as part of query cache key. Previously, cached results from queries with a time range could be used to answer queries that had no time range, leading to incorrect results.

    • Fix logic bug where rollups were sometimes flagged as still being in progress after they were completed.

    • Fix bug where the account index WAL could keep growing without bounds due to a local variable value being squashed early.

    • Fix bug where the reconst_in_progress file was not being cleaned up after reconstitute operations, which could block rollups and deletes from running.

    • The raw/rollup and histogram_raw/rollup API endpoints will no longer block if there is a rollup already running. They will also return sensible error messages.

    • Raw shard rollups will not be allowed to run unless all previous rollups have run at least once.

    • Fix bug where deferred rollups could cause the rollup process to lock up until the node is restarted.

    • Add REST endpoint POST/PUT /histogram/<period>/<check_uuid>/<metric_name>?num_records=<x> which can be used with a json payload to directly insert histogram shard metric data for repair purposes.

    • Added configuration file field, //reconstitute/@max_simultaneous_nodes, that will cause the reconstituting node to only hit a predetermined number of peer nodes at once. The default if not specified is "all peers". This setting can be used if a reconstitute places too much load on the rest of the cluster, causing degradation of service.

    • Disallow starting single-shard reconstitutes with merge enabled if the shard exists and is flagged corrupt.

    • Improve NNTBS error messages if unable to open a shard.

    • PromQL - improve error messages on invalid or unsupported range queries.

    • PromQL - fix range nested inside one or more instant functions.

    • Include maintenance mode when pulling lists of raw, histogram, or histogram rollup shards.

    • Use read-copy-update (RCU) for Graphite level indexes and the surrogate database. It allows more queries to run concurrently without affecting ingestion, and vice versa.

    • Defer rollups of raw shards if there is a rollup shard in maintenance that the raw shard would write to.

    • Reject live shard reconstitute requests on NNTBS or histogram rollup shards if there is a raw shard rollup in progress that would feed into them.

    • Fix bug where the system would report that a live reconstitute was not in progress, even when one was running.

    • Allow running single-shard or single-metric reconstitute on non-raw shards, even if the shard extends beyond the current time.

    • The reconstitute GUI no longer appears when doing online reconstitutes.

    • Fix iteration bug when reconstituting NNTBS shards.

    • Added the merge_all_nodes flag to live reconstitute which causes all available and non-blacklisted write copies to send metric data instead of only the "primary" available node.

    • Added the ability to repair a local database by reconstituting a single metric stream.

    • Fix bug where /fetch would not proxy if the data for a time period was all in the raw database, but the relevant raw shards were offline.

    Changes in 1.0.1

    2023-09-06

    NOTE: This version updates RocksDB (raw database, histogram shards) from version 6 to 7. It is not possible to revert a node to a previous version once this version has been installed.

    • Add a new configuration parameter, //rest/@default_connect_timeout, that allows configuring the connect timeout for inter-node communication. This was previously hardcoded to 3 seconds.

    • Graphite series and series_multi fetches now return 500 when there are no results and at least one node had an issue returning data.

    • Graphite series and series_multi fetches now return 200 with an empty results set on no data rather than a 404.

    • Fix bug on /find proxy calls where activity ranges were being set incorrectly.

    • Add ability to filter using multiple account IDs in the snowthsurrogatecontrol tool by providing the -a flag multiple times.

    • Reduce usage of rollup debug log to avoid log spam.

    • Upgrade RocksDB from version 6.20.3 to version 7.9.2.

    Changes in 1.0.0

    2023-07-28

    IMPORTANT NOTE: Any node running 0.23.7 or earlier MUST do a surrogate3 migration PRIOR to upgrading to 1.0.0. This is due to removal of support for the previous surrogate database format.

    • Prevent ingestion stalls by setting a better eventer mask on socket read errors.

    • Fix bug where surrogate deletes running at the same time as level index compactions can cause crashes.

    • Improve scalability of lock protecting all_surrogates in indexes.

    • Fix logging of old-data ingestion.

    • Don't stop a rollup in progress if new data is received; finish and reroll later.

    • Add ability to filter by account ID in the snowthsurrogatecontrol utility by using the -a flag.

    • Fix full-delete crash on malformed tag query.

    • Rewrite Graphite level-index query cache to remove race on lookup.

    • Remove surrogate2.

    • Fix issue where some data was hidden after arts compaction; ensure such data stays visible.

    • Fix bug where if a fetch to the /raw endpoint exceeded the raw streaming limit (10,000 by default), the fetch would hang.

    • Reduce memory usage during extremely large /raw fetches.

    • Fix bug where extremely large double values generated invalid JSON on /raw data fetches.

    • Handle surrogate duplicates on migration from s2 to s3.

    • Require all nodes for active_count queries.

    • Add back-pressure to raw puts, allowing the database to shed load by returning HTTP 503.

    Older Releases

    For older release notes, see Archived Release Notes.

    Adding Amazon CloudWatch ( YAML ) data source

    The first step is to create an Amazon CloudWatch data source and provide details such as the Name, AWS Region, AWS Access Key, and AWS Secret Key.

    • Name: Name of the Data Source

    • AWS Region: Region of your AWS account

    • AWS Access Key: access_key_id of your IAM Role

    • AWS Secret Key: secret_access_key of your IAM Role

    Adding Amazon CloudWatch data source

    Querying CloudWatch

    These instructions assume you are familiar with the CloudWatch ad-hoc query language. To make exploring your data easier, the schema browser shows which Namespaces and Metrics (and, optionally, Dimensions) you can query.

    Query Page and Schema Navigator

    CloudWatch query designer wizard

    Apica Ascent includes a simple point-and-click wizard for creating CloudWatch queries. You can launch the query wizard by selecting the CloudWatch YAML data source and selecting the "Construct CloudWatch query" icon.

    CloudWatch query wizard

    In the query designer, you can select the Namespace, Metric, and Dimensions along with the Stat. You can add one or more Namespaces and Metrics using a simple point-and-click interface.

    Add Metric
    Add dimension
    Edit a query manually
    Select supported Stat for the query

    You are now ready to run and plot the metric. Running Execute will automatically create the built-in line graph for your metric. You can further create additional visualizations using "New Visualization".

    Running a query and plotting the time-series data

    Deep-dive into query language for CloudWatch queries

    For the curious, here is a breakdown of the YAML syntax and what the various attributes mean. NOTE: You don't need to write or type these to query data; the built-in no-code WYSIWYG editor makes it easy to query CloudWatch without writing any code. The query should be an array of MetricDataQuery objects under a key called MetricDataQueries.

    Here's an example that sends a MetricDataQuery:
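    The listing below is a minimal sketch of that shape; the namespace, metric, dimension, and time values are hypothetical placeholders.

    MetricDataQueries:
      - Id: q1                              # identifier that expressions can reference
        MetricStat:
          Metric:
            Namespace: AWS/EC2              # hypothetical namespace
            MetricName: CPUUtilization      # hypothetical metric
            Dimensions:
              - Name: InstanceId
                Value: i-0123456789abcdef0  # hypothetical dimension value
          Period: 300                       # granularity in seconds
          Stat: Average                     # aggregation per period
    StartTime: "2022-07-04 00:00:00"
    EndTime: "2022-07-05 00:00:00"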

    Your query can include the following keys:

    Key              Type
    LogGroupName     string
    LogGroupNames    array of strings
    StartTime        integer or timestring
    EndTime          integer or timestring
    QueryString      string
    Limit            integer
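    As a rough sketch, assuming the data source accepts these keys at the top level of a query, a log-oriented query could look like the following; the log group name, query string, and time range are hypothetical.

    LogGroupName: "/aws/lambda/my-function"    # hypothetical log group
    StartTime: 1518867432                      # epoch seconds or a timestring
    EndTime: 1518868432
    QueryString: "fields @timestamp, @message | sort @timestamp desc"   # CloudWatch Logs Insights syntax
    Limit: 20                                  # cap on the number of returned records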

    Querying AWS Lambda Metrics

    Let's look at a slightly more complex example and query AWS Lambda metrics for AWS Lambda Errors. In this example, we are using the MetricName: "Errors" for the "AWS/Lambda" Namespace.

    When selecting the AWS/Lambda Namespace, you can see the available MetricNames

    • AWS/Lambda

      • Errors

      • ConcurrentExecutions

      • Invocations

      • Throttles

      • Duration

      • IteratorAge

      • UnreservedConcurrentExecutions

    Below is an example query that tracks AWS Lambda errors as an aggregate metric. The StartTime is templatized and allows dynamic selection.
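    A sketch of such an aggregate query, with illustrative Period and Stat values:

    MetricDataQueries:
      - Id: q1
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Errors
          Period: 300               # 5-minute granularity
          Stat: Sum                 # total errors per period
    StartTime: "{{StartTime}}"      # templatized start time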

    You can further click on the Errors MetricName, and it will expand to show the Dimensions available for further querying. For AWS/Lambda, the FunctionName Dimension provides a further drill-down to show CloudWatch metrics by Lambda function name.

    The query can be further enhanced by making the Lambda function name a templatized parameter. This allows you to pull metrics using a dropdown selection, e.g. a list of Lambda functions. The FunctionName template below can also be retrieved from another database as a separate query.
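    A sketch of the same query with the FunctionName dimension templatized; the {{FunctionName}} and {{StartTime}} parameters are assumed to be defined as query templates:

    MetricDataQueries:
      - Id: q1
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Errors
            Dimensions:
              - Name: FunctionName
                Value: "{{FunctionName}}"   # filled from a dropdown or another query
          Period: 300
          Stat: Sum
    StartTime: "{{StartTime}}"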

    Example Queries

    Query using a single Expression

    An expression can be a mathematical expression over other metric queries or an SQL query.
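    A minimal sketch of a single-expression query; the errors and requests ids are assumed to refer to metric queries defined elsewhere:

    StartTime: 1518867432
    EndTime: 1518868432
    MetricDataQueries:
      - Id: errorRate
        Label: Error Rate
        Expression: errors/requests   # math over the ids of other queries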

    Query using a list of expressions

    Query using metric-stat or a list of metric-stats:

    Each list item in the MetricDataQueries list in the examples above can contain either an Expression or a MetricStat query item. You can also provide a combination of both.

    Query using a combination of MetricStat and Expression:
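    A sketch of mixing the two item types; the multiplier in the expression is illustrative only:

    StartTime: 1518867432
    EndTime: 1518868432
    MetricDataQueries:
      - Id: errorRate                # Expression item
        Label: Error Rate
        Expression: errors*100       # operates on the MetricStat item below
      - Id: errors                   # MetricStat item
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Errors
          Period: 600
          Stat: Sum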

    In the example above, the second item uses MetricStat syntax to fetch data, while the first item uses Expression syntax; the first item performs a math expression on the data fetched by the second item.

    Query example to perform a math expression on fetched data
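    A sketch of this pattern: two MetricStat items fetch Invocations and Errors, and a third Expression item computes an error ratio from them.

    StartTime: 1518867432
    EndTime: 1518868432
    MetricDataQueries:
      - Id: invocations
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Invocations
          Period: 600
          Stat: Sum
      - Id: errors
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Errors
          Period: 600
          Stat: Sum
      - Id: errorRatio
        Expression: errors/invocations*100   # percentage of invocations that errored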

    In the example above, the first and second items fetch metric data, and the third item performs a mathematical expression on the data fetched by the first and second items.

    Query to use Period and Stat in MetricDataQueries items

    Period indicates the granularity of the fetched data points, and Stat indicates the aggregation (group-by) operation applied to the fetched data. Period and Stat can be specified on a MetricStat item or, where supported, alongside an Expression item; both forms are sketched below.
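    Two item-level sketches follow: the first sets Period and Stat on a MetricStat item; the second shows them alongside an Expression item (assuming the data source accepts that form). The ids and values are illustrative.

    - Id: invocations                # MetricStat form
      MetricStat:
        Metric:
          Namespace: AWS/Lambda
          MetricName: Invocations
        Period: 600                  # granularity in seconds
        Stat: Sum                    # aggregation applied per period

    - Id: errorRate                  # Expression form
      Expression: 'some SQL query or a math expression'
      Period: 600
      Stat: Avg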

    For more detailed information on querying CloudWatch metrics, see:
    https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html
    https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-data.html

    Using the OpenTelemetry Demo

    What is the OpenTelemetry (OTel) Demo?

    The OTel Demo is a microservices-based application created by the OpenTelemetry community to demonstrate its capabilities in a realistic, distributed system environment. This demo application, known as the OpenTelemetry Astronomy Shop, simulates an e-commerce website composed of over 10 interconnected microservices (written in multiple languages) that communicate via HTTP and gRPC. Each service is fully instrumented with OTel, generating comprehensive traces, metrics, and logs.

    The demo serves as an invaluable resource for understanding how to implement and use OpenTelemetry in real-world applications. Using the Ascent platform with the OTel Demo enables you to converge all of the IT data, manage the telemetry data, and monitor and troubleshoot the operational data in real-time. The following steps guide you through the process of using the OTel Demo application with Ascent.

    Ascent Quick Start Process

    Quick Start Process for Using the OTel Demo with Ascent

    All users getting started with the OTel Demo on Ascent should follow these simple steps:

    In these steps, we cover the key goals and related activities to ensure a quick and easy setup of OTel Demo with Ascent along with the full pipeline deployment process.

    How to Deploy the OTel Demo Application

    The goal is to ingest telemetry data (logs, metrics, traces) from relevant systems.

    Key actions include:

    • Access and deploy the public OpenTelemetry (OTel) Demo App

    • Configure the data collection setup, frequency, and granularity

    • Ensure data normalization

    This guide aims to walk you through the steps required to deploy the OpenTelemetry Demo app and begin sending data to Ascent.

    NOTE: We will deploy the OTel demo app using Docker for this guide.

    Prerequisites:

    • Docker

    • Docker Compose v2.0.0+

    • 6 GB of RAM for the application

    Setting Up the OTel Demo

    Step 1: Get and Clone the OTel demo app repository:

    $ git clone https://github.com/open-telemetry/opentelemetry-demo.git

    Step 2: Go to the demo folder:

    $ cd opentelemetry-demo/

    Step 3: Start the demo app in Docker:

    $ docker compose up --force-recreate --remove-orphans --detach

    Step 4: (Optional) Enable API observability-driven testing:

    $ docker compose up --force-recreate --remove-orphans --detach

    Step 5: Test and access the OTel demo application:

    Once the images are built and the containers are started, you can access the following components of the demo application:

    • Web store: http://localhost:8080/

    • Grafana: http://localhost:8080/grafana/

    • Load Generator UI: http://localhost:8080/loadgen/

    • Jaeger UI: http://localhost:8080/jaeger/ui/

    • Tracetest UI: http://localhost:11633/ (only when using make run-tracetesting)

    • Flagd configurator UI: http://localhost:8080/feature

    Optional: Changing the demo’s primary port number

    By default, the demo application will start a proxy for all browser traffic bound to port 8080. To change the port number, set the ENVOY_PORT environment variable before starting the demo.

    • For example, to use port 8081:

    $ ENVOY_PORT=8081 docker compose up --force-recreate --remove-orphans --detach

    Step 6: Update the OTel config file:

    • opentelemetry-demo/src/otel-collector/otelcol-config-extras.yml

    Paste the following into the config file, overwriting it completely:

    1. Copy the configuration into the file.

    2. Replace <YOUR-ASCENT-ENV> with your Ascent domain, e.g. company.apica.io

    3. Replace <YOUR-INGEST-TOKEN> with your Ascent Ingest Token, e.g. eyXXXXXXXXXXX... (see Step 7 to get your ingest token)

    Step 7: Get Your Ingest Token from Ascent (see https://docs.apica.io/integrations/overview/generating-a-secure-ingest-token):

    Step 8: Get Data Flowing into Ascent Platform:

    Restart the OpenTelemetry collector by running the following command:

    $ docker compose restart

    Step 9: Verify data flow in Ascent:

    1. Log into Ascent

    2. Navigate to Explore -> Logs & Insights:

    1. You should see namespace "OtelDemo" and Application "DemoLogs":

    2. This confirms that data is flowing from the OpenTelemetry Demo Application. Feel free to click into the "DemoLogs" application to view all the logs being sent from the Demo App.

    Now that data is flowing, please follow the steps below to learn how to interact, enhance, and visualize this data in Ascent.

    Step 9 - FLOW (Cost Savings Use Case)

    Step 10 - Setup and Configure Pipeline

    The goal is to transport and process the collected data.

    Key actions include:

    • Select or configure a data pipeline

    • Define data routing rules

    • Apply transformations, filtering, or enrichment if needed

    Links to related docs include:

    Step 11 - Design Queries

    The goal is to enable insights by querying telemetry data.

    Key actions include:

    • Understand the query language used

    • Create baseline queries for system health

    • Optimize queries for performance and cost

    • Validate query results

    Links to related docs include:

    Step 12 - Create Dashboards

    The goal is to visualize system performance and behavior in real time.

    Key actions include:

    • Use visual components

    • Organize dashboards by domain

    • Incorporate filters

    • Enable drill-down for troubleshooting.

    Links to related docs include:

    Step 13 - Setup Alerts and Workflow

    The goal is to detect anomalies and automate response actions.

    Key actions include:

    • Define alerting rules

    • Set up alert destinations

    • Establish escalation policies and on-call schedules

    • Integrate with incident management workflows and postmortem tools

    FLOW - Cost Savings Use Case

    FLOW allows you to filter unnecessary data out of your logs before it reaches the data lake, which leads to significant cost savings. This guide walks you through dropping labels from the OTel Demo App logs; you can apply the same functionality to any other data source.

    1. Navigate to the Logs & Insights page:

       1. This view lists all of the data sources pushing data to Ascent. To access the logs, click into "DemoLogs".

    2. To view one of the logs, simply click on it.

    3. Here is one of our OTel logs. In this example, we will drop "destination.address" and "event.name" from the logs.

    4. To drop these fields, navigate to the Pipeline tab and then click the + button shown below:

    5. Create a new Pipeline:

    6. Add a new Filter Rule. If you're interested in the other rules, see the rules documentation for a detailed guide.

    7. Enable Drop Labels by clicking the slider:

    8. On the right of the screen, preview the logs to determine which labels to drop. Select the following and then hit Preview in the top right:

    9. Here are the two labels we want to drop:

    10. Select the keys in the dropdown by typing them out or clicking them.

    11. Click "Save" in the bottom right:

    12. Go back to the log view to verify the filter rule has been applied. Refresh the page and make sure it is a new log that you're verifying:

    13. As you can see, destination.address and event.name are no longer being ingested:

    Dropping a few labels might not seem like a big deal at first, but if you extrapolate that across tens or hundreds of thousands of logs, the cost savings add up quickly.

    14. To view savings and your configured pipelines, navigate to "Pipelines":

        1. View all your pipeline data along with savings:

        2. For more information on pipelines, please see the pipelines documentation.

    Links to related docs include:

    Additional Resources

    Here are helpful links to other "Getting Started" technical guides:

    MetricDataQueries: 
      - Id: q1
        MetricStat:
          Metric:
            Namespace: AWS/Logs
            MetricName: IncomingLogEvents
            Dimensions:
              - Name: LogGroupName
                Value: flowlogs
          Period: 300
          Stat: Sum
    StartTime: "2022-07-04 00:00:00"
    MetricDataQueries: 
      - Id: q1
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Errors
          Period: 300
          Stat: Sum
    StartTime: "{{StartTime}}"
    MetricDataQueries: 
      - Id: q1
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Errors
            Dimensions:
              - Name: FunctionName
                Value: <My lambda function name>
          Period: 300
          Stat: Sum
    StartTime: "{{StartTime}}"
    MetricDataQueries: 
      - Id: q1
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Errors
            Dimensions:
              - Name: FunctionName
                Value: {{FunctionName}}
          Period: 300
          Stat: Sum
    StartTime: "{{StartTime}}"
    StartTime: 1518867432
    EndTime: 1518868432
    MetricDataQueries:
      - Id: errorRate
        Label: Error Rate
        Expression: errors/requests
      StartTime: 1518867432
      EndTime: 1518868432
      MetricDataQueries:
          - Id: errorRate
            Label: Error Rate
            Expression: errors/requests
          - Id: errorRatePercent
            Label: %Error Rate
            Expression: errorRate*100
            
    StartTime: 1518867432
    EndTime: 1518868432
    MetricDataQueries:
      - Id: invocations
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Invocations
          Period: 600
          Stat: Sum
      - Id: errors
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Errors
          Period: 600
          Stat: Sum
          
      StartTime: 1518867432
      EndTime: 1518868432
      MetricDataQueries:
          - Id: errorRate
            Label: Error Rate
            Expression: errors*500
          - Id: errors
            MetricStat:
              Metric:
                Namespace: AWS/Lambda
                MetricName: Errors
              Period: 600
              Stat: Sum
              
    StartTime: 1518867432
    EndTime: 1518868432
    MetricDataQueries:
      - Id: invocations
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Invocations
          Period: 600
          Stat: Sum
      - Id: errors
        MetricStat:
          Metric:
            Namespace: AWS/Lambda
            MetricName: Errors
          Period: 600
          Stat: Sum
      - Id: errorRatio
        Expression: errors/invocations*100
      
          
    Id: errors
    MetricStat:
      Metric:
        Namespace: AWS/Lambda
        MetricName: Invocations
      Period: 600
      Stat: Sum

    Id: errors
    Expression: 'some SQL query or a math expression'
    Period: 600
    Stat: Avg
    libmtev 2.7.1
    libmtev 2.5.3
    libmtev 2.5.2


    Setup Alerts and Workflow
  • Review Ongoing Data & Cost Savings


  • Optional if you want to change the namespace and or application (to help ID your data in Ascent): logs_endpoint: https://<YOUR-ASCENT-ENV>/v1/json_batch/otlplogs?namespace=<NAMESPACE_HERE>&application=<APPLICATION_NAME_HERE>

    1. Update <NAMESPACE_HERE> and <APPLICATION_NAME_HERE> for a custom namespace and application in Ascent.

    Configure Apica Ascent to send alerts to your email server
  • Add and configure alert destinations like email, Slack, and PagerDuty

  • Configure SSO using SAML

  • Configure RBAC

  • Setup and Deploy the OTel Demo App (steps 1-9)
    Setup and Configure Pipeline
    Design Queries
    Create Dashboards
    FLOW Guide Here
    Configure pipelines
    Visualize pipelines
    Forwarding data
    Data explorer overview
    Query builder
    Widget
    Dashboards overview
    https://docs.apica.io/flow/rules
    setup and configure pipelines
    Alerts overview
    Alerting on queries
    Alerting on logs
    Getting Started with Metrics
    Getting Started with Logs
    Get acquainted with the Apica Ascent UI
    Configure your data sources
    Logs & Insights
    Logs & Insights
    Otel Demo App Logs
    Pipeline View
    New Pipeline
    Filter Rule
    Drop Labels
    Preview Logs
    Labels
    Select Label
    Logs & Insights
    Otel Log
    Pipelines
    Pipeline Dashboard
    exporters:
      otlphttp/apicametrics:
        compression: gzip
        disable_keep_alives: true
        encoding: proto
        metrics_endpoint: "https://<YOUR-ASCENT-ENV>/v1/metrics"
        headers:
          Authorization: "Bearer <YOUR-INGEST-TOKEN>"
        tls:
          insecure: false
          insecure_skip_verify: true
      otlphttp/logs:
        logs_endpoint:  https://<YOUR-ASCENT-ENV>/v1/json_batch/otlplogs?namespace=OtelDemo&application=DemoLogs
        encoding: json
        compression: gzip
        headers:
          Authorization: "Bearer <YOUR-INGEST-TOKEN>"
        tls:
          insecure: false
          insecure_skip_verify: true
    service:
      pipelines:
        metrics:
          exporters: [otlphttp/apicametrics]
        logs:
          exporters: [otlphttp/logs]