Getting Started with Ascent
The Ascent platform enables you to converge all of your IT data from disparate sources, manage your telemetry data, and monitor and troubleshoot your operational data in real time. The following guide assumes that you have signed up for Apica Ascent in the cloud. If you are not yet a registered user, please sign up first. Once registered, use this guide to get started.
All users who want to get started with Ascent should follow these five simple steps:
1. Ingest telemetry data
2. Process data through pipelines
3. Query your data
4. Visualize with dashboards
5. Detect anomalies and alert
In this guide, we cover the key goals and related activities of each step to ensure a quick and easy setup of Ascent.
The goal is to ingest telemetry data (logs, metrics, traces) from relevant systems.
Key actions include:
Identify all sources
Choose agents appropriate for each data type
Configure data collection frequency and granularity
Ensure data normalization
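As a concrete illustration of collection frequency and granularity, here is a minimal OpenTelemetry Collector receiver sketch. The receiver names are standard Collector components, but the interval and file path are placeholder assumptions you would replace with values that match your environment:

```yaml
receivers:
  # Metrics: collection_interval controls frequency,
  # the scraper list controls granularity
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      filesystem:
  # Logs: include patterns control which files are tailed
  # (path below is a placeholder)
  filelog:
    include: [/var/log/myapp/*.log]
```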
Detailed steps to start ingesting data:
From the menu bar, go to: Explore -> Fleet:
With Fleet you can automate your data ingestion configuration:
You'll be directed to the Fleet landing page:
From here, click "Install Agent Manager". The Agent Manager allows you to control and configure the OpenTelemetry Collector.
Inside the "Install Agent Manager" pop-up screen, select:
Platform: Linux
Agent Type: OpenTelemetry Collector
Then, click 'Proceed'.
You'll be redirected to the 'Fleet README' pop-up page:
From this page, you'll download two files and use them to start ingesting data:
The README.txt contains instructions for how to install the Agent Manager and OpenTelemetry Collector.
The fleet-install.sh is a preconfigured script that you'll run on your Linux host to start ingesting data into Ascent automatically:
On your Linux host, start by creating the script file with nano:
nano fleet-install.sh
Paste the contents of 'fleet-install.sh' into the nano editor, then save and exit.
Make the script executable and run it with the commands below:
chmod +x fleet-install.sh
sudo ./fleet-install.sh
Once the script completes, you'll see the agent in the Fleet screen as 'Active':
You can then confirm that data is flowing into the system (go to Explore -> Logs & Insights):
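If you don't see data right away, a quick way to test end-to-end flow is to write a uniquely tagged line to a log file your collector watches. The path below is an assumption; substitute a file your filelog receiver (or equivalent) already collects:

```shell
# Append a uniquely tagged test line to a log file (path is an assumption)
TESTLOG=/tmp/ascent-test.log
echo "ascent-ingest-test $(date -Is)" >> "$TESTLOG"

# Confirm it was written locally; then search for "ascent-ingest-test"
# in Explore -> Logs & Insights
tail -n 1 "$TESTLOG"
```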
Additional links to helpful docs include:
The goal is to transport and process the collected data.
Key actions include:
Select or configure a data pipeline
Define data routing rules
Apply transformations, filtering, or enrichment if needed
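Since the agent installed above is the OpenTelemetry Collector, routing and transformation rules can be expressed in its configuration. The sketch below is illustrative, not Ascent-specific: the processor and pipeline names are standard Collector components, while the attribute value and exporter name are placeholder assumptions:

```yaml
processors:
  # Filtering: drop DEBUG-level log records before export
  filter/drop-debug:
    logs:
      exclude:
        match_type: strict
        severity_texts: [DEBUG]
  # Enrichment: tag every record with an environment attribute
  # (value is a placeholder)
  attributes/enrich:
    actions:
      - key: environment
        value: production
        action: insert

service:
  pipelines:
    # Routing: this pipeline sends filtered, enriched logs to one exporter
    logs:
      receivers: [filelog]
      processors: [filter/drop-debug, attributes/enrich]
      exporters: [otlphttp]
```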
Links to related docs include:
The goal is to enable insights by querying telemetry data.
Key actions include:
Understand the query language used
Create baseline queries for system health
Optimize queries for performance and cost
Validate query results
Links to related docs include:
The goal is to visualize system performance and behavior in real time.
Key actions include:
Use visual components
Organize dashboards by domain
Incorporate filters
Enable drill-down for troubleshooting
Links to related docs include:
The goal is to detect anomalies and automate response actions.
Key actions include:
Define alerting rules
Set up alert destinations
Establish escalation policies and on-call schedules
Integrate with incident management workflows and postmortem tools
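Many alert destinations are plain HTTP webhooks, so you can verify delivery from the command line before wiring up escalation policies. This is a sketch only: the payload shape and endpoint URL are placeholder assumptions that must match your destination's API:

```shell
# Build a test alert payload (shape is an assumption; match your
# destination's expected schema, e.g. Slack, PagerDuty, or a custom hook)
PAYLOAD='{"text": "Ascent test alert: verify webhook delivery"}'
echo "$PAYLOAD"

# Uncomment and substitute your real webhook URL to send it:
# curl -sS -X POST -H 'Content-Type: application/json' \
#   -d "$PAYLOAD" https://hooks.example.com/ascent-alerts
```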
Links to related docs include:
Here are helpful links to other "Getting Started" technical guides: