Configuring Alerts


Alerts can be used to notify recipients about status changes to your checks. They can be sent by email, SMS message, or an integrated third-party service.

Complex alerting setups can be tricky. If you have any questions about alert configuration, please send a support ticket to support@apica.io, and we will help you with the setup and testing process.

Here’s an example of some configured recipients, some of which are disabled:

Alerts configured for a browser check, all enabled:

Note: The alerts themselves are all enabled, but since most of the targets are disabled, alerts will be triggered but not sent to those targets.

Apica’s alerting requires that the monitoring checks be defined, that the thresholds and conditions for an alert be set, and that a destination be chosen where this information will ultimately be delivered.

This section introduces alerting as a push model, with Webhooks defining where the alert's POST body is sent. The contents of alert messages are populated with placeholders that convey the actual values needed by the Webhook-enabled service, the SMS text, or the email. A minimal sketch of this push flow follows the list below.

  • Alerts
    • Are fired/triggered when a threshold has been passed or a condition has been met.
    • Have Targets (places to send the alert to).
  • Targets
    • Can deliver directly to a person or group:
      • Email
      • SMS
    • Can be a Webhook-enabled service like OpsGenie or Splunk:
      • Push style, to the service's ingestion endpoint
      • Not a REST-API pull model
  • Placeholders
    • Messages: The messages these alerts create can contain information and metric values, filled in via Message Placeholders during message generation.
    • Webhooks: Webhook Placeholders are what the service needs to route the incoming alert message correctly. They can be customized for the service being integrated.
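
A rough sketch of this push flow, assuming a generic Webhook-enabled service (the endpoint, body fields, and values below are illustrative stand-ins, not an actual Apica or vendor API):

```python
# Illustrative only: the alert body is built from placeholder values at send time and
# pushed (HTTP POSTed) to the service's ingestion endpoint; the service never polls Apica.
import requests

alert_body = {
    "title": "Checkout flow check changed severity",   # hypothetical field names and values
    "message": "Severity changed from Info to Error",
    "severity": "E",
}

# httpbin.org/post is used here only as a stand-in for the real ingestion endpoint.
response = requests.post("https://httpbin.org/post", json=alert_body, timeout=10)
response.raise_for_status()
print("Alert delivered:", response.status_code)
```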

The Alert Setup Process

From the Manage Alerts view, you can assign individual or group recipients to alerts for each check; they will be notified of any status change according to the severities you prefer.

Screenshots

In Manage Alerts, checks are displayed by Top Level Group and Subgroup, much as they appear in Manage Checks, but with an icon along the right side with which alerts can be assigned.

In the upper right, you can toggle between Alerts and Recipients.

Workflow

The general workflow for creating alerts is:

  • Create or Add a User to receive the alerts (Who gets the alert?).

  • Create Targets for the recipients' delivery method (How are the alerts delivered? e.g., PagerDuty, SMS Text, Email, etc.).

  • (Optional) Create a Group containing multiple recipients for the alert.

  • Create the Alerts themselves by selecting checks and assigning recipients.


Add Recipients

The Users and Groups you set to receive Alerts are set up in the Recipients tab.

There is a column to define Users and their contact information, as well as Groups.

First Step: Add a User

Recipients are the users or groups of users you select to receive the alerts.

  • Click the Add User button.
  • Enter the necessary user information:
    • User Name
    • User Description
    • User Phone Number (doubles as a Target)
    • User Email (doubles as a Target)
  • Click the Create User button.

The user is created, containing the two default targets.

Second Step: Add Group

Recipient groups are collections of user targets you select to receive the alerts.

Note: You need to create the users and targets before you can add them to a group.

Create Group

To add a recipient group:

  • Click the Add Group button.
  • Enter a name for the group.
  • Find the Users you want to include in the group.
  • Click the checkbox next to each Target.
  • Click the Create Group button.

The group is created, containing the selected user/targets.

Targets

When defining alert recipients, you can have the message delivered via various target services.

For each User or Group Recipient, you add delivery Targets that define the method of delivery.

User

You can select to add PagerDuty, Email, a WebHook integration, or SMS (text message) as targets.

Groups

When you have defined targets for individual users, you can add them to Groups:

Alerts Tab

The Alerts tab allows you to set the Severity, Targets (individual users), and Groups (of users) that will receive alerts according to the parameters you prefer.

Alerts can be set for individual checks and be delivered to multiple Targets.

Add Alert

You can add alerts for any checks and select one or several severities to include in the alert. Each alert can have multiple Recipients, and each recipient can have multiple Targets.

Create Alert

To add an alert:

  • Find the checks you want to create alerts for:
    • Search for checks using the Filter search field, OR
    • Expand the Monitor Groups as needed
  • Mark the Checks you want to include in the alert
  • Mark the levels of Severity you want to send alerts for
  • Mark the users or groups you want to send alerts to
  • Click the Create Alert button.


Configuring Different Alerting Types

E-mail Alerts

A standard way of delivering notifications is sending an email. You can send the alert to multiple email addresses, and optionally include a customized message containing Message Placeholders.

An email target is created automatically when you set up a user (see Add User above).

Add an E-mail Target

To add an Email target:

  • Click the Email button
  • Enter a Target Name for identification in Synthetic Monitoring
  • Enter a list of Email addresses to send the alert to
  • If you want to use a custom message:
    • Uncheck Use Default Message
    • Enter an alert Message (you can use Message Placeholders)
  • Click the Add Email Target button

The Target is created, containing the selected user/targets.

SMS Alerts

Alerts can be delivered as SMS (text messages) to mobile phones. You can send the alert to multiple phone numbers, and optionally include a customized message containing Message Placeholders.

The phone target is created automatically when you set up user alerts via Add User.

The phone number needs to include the International Country Prefix, for example +1 for the US and +46 for Sweden.

Create Target

To add a Text Message target:

  • Click the Text Message button
  • Enter a Target Name for identification in Synthetic Monitoring
  • Enter a list of Phone Numbers to send the alert to
  • If you want to use a custom message:
    • Uncheck Use Default Message
    • Enter an alert Message (you can use Message Placeholders)
  • Click the Add SMS Target button

The Target is created, containing the selected user/targets.

PagerDuty Alerts

With the PagerDuty Integration, you can have alerts delivered through the PagerDuty platform, offering a rich set of notification delivery options. You need to set up the PagerDuty Integration before you can create a PagerDuty target.

You need to create the users and groups before you can add Targets to them.

To add a PagerDuty target:

  • Click the PagerDuty button
  • Open the Service menu
  • Choose the desired service
  • Enter a Target Name for identification in Synthetic Monitoring
  • Click the Add PagerDuty Target button

The Target is created, containing the selected user/targets.

Other Alert Types

For instructions on how to configure other alert types, refer to the article Configuring Webhook Alerts.

Understanding and Configuring Placeholders

A placeholder is a character, word, or string of characters that temporarily takes the place of the final data.

For example, an operations manager may know that an alert needs certain metrics, returned values, or variables, but not yet know what to enter, because the values are returned dynamically from the monitoring results. A placeholder serves as a temporary stand-in until the proper value or variable is assigned by an alert (or message).

At Apica, we use placeholders in the following manners:

Alerts and Messages: used when a customer wants to be made aware of, or alerted about, the state of a monitoring check; in other words, a threshold or a specified set of conditions has been met and the result needs to be sent somewhere in a consumable format. When this happens, a message is generated and either displayed in an email or SMS text, or POSTed to a Webhook-enabled service (such as PagerDuty) that ingests this information.

Webhooks: used when a business service needs to ingest information about an Apica monitoring check (status, alert, message). The Webhook, as a push service, is a passive way for that service to receive this information, so a set of alert integration placeholders has been defined that can be customized to your service's needs.

Placeholders thus provide a way to customize the layout and contents of SMTP (email) and SMS messages, and to provide event-based information via a POST body to Webhook-enabled services like OpsGenie or ServiceNow.

Depending on the purpose, one of two delimiter characters denotes an Apica placeholder:

  • Message and Alert placeholders are each surrounded by the % character.

  • Webhook placeholders are each surrounded by the # character.

Message Placeholders

When Apica sends out an alert (via Apica’s Alerter service), it uses a set of placeholders, required by the various destination targets (alert destinations), to refer to the pieces of information associated with the events that triggered (or resolved) the alert.

A placeholder has the following format:

%placeholder-name%
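
For example, a custom Email or SMS message can mix literal text with these placeholders. The sketch below shows how such a template conceptually expands; the substitution code and sample values are illustrative, not Apica's actual Alerter implementation:

```python
# Conceptual illustration of how %...% message placeholders expand; not Apica's Alerter code.
template = "Check %CHECK_NAME% went from %OLD_SEV_WORD% to %NEW_SEV_WORD% at %TT%: %M%"

# Hypothetical values of the kind the Alerter supplies for a triggered event.
values = {
    "CHECK_NAME": "Checkout flow (Browser)",
    "OLD_SEV_WORD": "Info",
    "NEW_SEV_WORD": "Error",
    "TT": "2024-05-01 14:37:15 (+02:00)",
    "M": "Page load time exceeded threshold",
}

message = template
for name, value in values.items():
    message = message.replace(f"%{name}%", value)

print(message)
# Check Checkout flow (Browser) went from Info to Error at 2024-05-01 14:37:15 (+02:00): Page load time exceeded threshold
```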

There is a set of predefined placeholders configured in ASM:

Event-based Placeholders

  • %E%: Event symbol. For check-based events, this is the CheckConfig.check_symbol value. Example: N84_M377_C1000_URL_20090227_013715_307
  • %M%: Event message text. Example: Message
  • %N%: The NETBIOS name of the host that is the source of the event. Example: Node
  • %QM%: Event message text with any double-quotes (") replaced by single-quotes ('). Example: Message
  • %S%: Event severity as one upper-case character: I, W, E, or F. Example: Severity
  • %SEV%: Event severity as one word: Info, Warning, Error, or Fatal. Example: Severity
  • %T%: Agent-local timestamp, in the format YYYY-MM-DD HH:MM:SS. Example: Timestamp
  • %UTC-T%: UTC timestamp with a 'T' between the date and time portions, in the format YYYY-MM-DDTHH:MM:SS. Example: Timestamp (UTC)
  • %UTC%: The timestamp of the event expressed in UTC, in the format YYYY-MM-DD HH:MM:SS. Example: Timestamp (UTC)

For Check-based Events, Use the Following Placeholders

  • %CHECK_ID%: Check ID (a 32-bit positive integer from CheckConfig.id).
  • %CHECK_GUID%: Check GUID. A UUID from CheckConfig.check_guid, in the format XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.
  • %CHECK_NAME%: Check descriptor (from CheckConfig.check_descriptor).
  • %CHECK_TYPE%: Check type descriptor (based on CheckConfig.check_type). Example: URL, Command, Ping, Port, Scenario, Fullpage (IE), or Fullpage
  • %MODULE_NAME%: Check module descriptor (from NodeModule.amm_descriptor).
  • %OLD_SEV_CHAR%: Previous check severity as an uppercase letter (I, W, E, or F).
  • %NEW_SEV_CHAR%: Current check severity as an uppercase letter (I, W, E, or F).
  • %OLD_SEV_WORD%: Previous check severity as a word (Info, Warning, Error, or Fatal).
  • %NEW_SEV_WORD%: Current check severity as a word (Info, Warning, Error, or Fatal).
  • %RESULT_GUID%: Check Result UUID without dashes. Replaced by an empty string if no result identifier is part of the event. Example: a8e59d718fa949cb86c9ccfc93ff1876
  • %RESULT_G-U-I-D%: Check Result UUID with dashes. Replaced by an empty string if no result identifier is part of the event. Example: a8e59d71-8fa9-49cb-86c9-ccfc93ff1876
  • %TT%: Timestamp adjusted to the timezone of the current dispatch target (may be based on user/customer); falls back to UTC. Format: YYYY-MM-DD HH:MM:SS (TZ-offset), or YYYY-MM-DD HH:MM:SS if UTC.
  • %CHECK_TAGS%: The set of Key: Value pairs assigned to the check. Example: "Key 1: Value 1, Value 2, Value 3; Key 2: Value 1, Value 2, Value 3"

Placeholders that may be available if the Alerter uses a check information cache

  • %CHECK_DESCRIPTION%: Check description (from CheckConfig.check_description). For CLI targets, any embedded carriage return/newline (CR/LF) character combinations (\r\n) are replaced by a space; any remaining CR and LF characters are replaced by empty strings.
  • %xmlsafe:CHECK_DESCRIPTION%: Check description (from CheckConfig.check_description) with any XML-unsafe characters replaced by character entities, e.g. & becomes &amp;. The same rules for embedded CR and LF apply as for %CHECK_DESCRIPTION%.
  • %GROUPS%: List of monitor groups to which the check belongs, as a comma-separated list of "top group/subgroup" entries. Since a check can be associated with more than one monitor group (possibly belonging to different users), the list can contain more than one entry.

Event-Related Placeholders

  • %E%: Event symbol. For check-based events, this is the CheckConfig.check_symbol value.
  • %M%: Event message text.
  • %QM%: Event message text with any double-quotes (") replaced by single-quotes (').
  • %S%: Event severity as one upper-case character: I, W, E, or F.
  • %SEV%: Event severity as one word: Info, Warning, Error, or Fatal.
  • %UTC%: The timestamp of the event expressed in UTC, in the format YYYY-MM-DD HH:MM:SS.
  • %UTC-T%: UTC timestamp with a 'T' between the date and time portions, in the format YYYY-MM-DDTHH:MM:SS.

Check-related Placeholders

  • %CHECK_DESCRIPTION%: Check description. For CLI targets, any embedded carriage return/newline (CR/LF) character combinations (\r\n) are replaced by a space; any remaining CR and LF characters are replaced by empty strings.
  • %CHECK_GUID%: Check GUID. A UUID in the format XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.
  • %CHECK_ID%: Check ID.
  • %CHECK_NAME%: Check name.
  • %CHECK_TAGS%: The set of Key: Value pairs assigned to the check, in the format "Key 1: Value 1, Value 2, Value 3; Key 2: Value 1, Value 2, Value 3".
  • %CHECK_TYPE%: Check type descriptor. Example: URL, Command, Ping, Port, Scenario, Fullpage (IE), or Fullpage
  • %GROUPS%: List of monitor groups to which the check belongs, as a comma-separated list of "top group/subgroup" entries. Since a check can be associated with more than one monitor group (possibly belonging to different users), the list can contain more than one entry.
  • %LOCATION%: The location from which the check is executed.
  • %NEW_SEV_CHAR%: Current check severity as an uppercase letter (I, W, E, or F).
  • %NEW_SEV_WORD%: Current check severity as a word (Info, Warning, Error, or Fatal).
  • %OLD_SEV_CHAR%: Previous check severity as an uppercase letter (I, W, E, or F).
  • %OLD_SEV_WORD%: Previous check severity as a word (Info, Warning, Error, or Fatal).
  • %RESULT_G-U-I-D%: Check Result UUID with dashes. Replaced by an empty string if no result identifier is part of the event. Example: a8e59d71-8fa9-49cb-86c9-ccfc93ff1876
  • %RESULT_GUID%: Check Result UUID without dashes. Replaced by an empty string if no result identifier is part of the event. Example: a8e59d718fa949cb86c9ccfc93ff1876
  • %TT%: Timestamp adjusted to the timezone of the current dispatch target (may be based on user/customer); falls back to UTC. Format: YYYY-MM-DD HH:MM:SS (TZ-offset), or YYYY-MM-DD HH:MM:SS if UTC.
  • %xmlsafe:CHECK_DESCRIPTION%: Check description (from CheckConfig.check_description) with any XML-unsafe characters replaced by character entities, e.g. & becomes &amp;. The same rules for embedded CR and LF apply as for %CHECK_DESCRIPTION%.

Webhook Placeholders

Default placeholders are surrounded by a pound/hashtag # character.

A default set of placeholders has been provided. These can be configured with the Webhook alert integration, or you may customize your Webhook placeholders as necessary.

Placeholder (used for / definition / comment):

  • #ALERT_TITLE#
  • #API_ID#: Slack, ServiceNow
  • #API_KEY#: VictorOps, OpsGenie, Datadog
  • #BASE64#
  • #HOST_ADDRESS#
  • #INCIDENT_ID#
  • #MESSAGE#
  • #MESSAGE_TYPE#
  • #TARGET_USER#: HipChat, VictorOps
  • #TITLE#: Splunk
  • #TOKEN#
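
To illustrate how these # placeholders are typically used, here is a hedged sketch of a JSON body template of the kind you might configure for a generic Webhook target; the field names, sample values, and substitution code are illustrative, not the exact defaults ASM ships for any specific integration:

```python
# Illustrative Webhook body template using #...# placeholders; field names and sample
# values are hypothetical, not ASM's exact defaults for any specific integration.
body_template = """{
  "title": "#ALERT_TITLE#",
  "text": "#MESSAGE#",
  "type": "#MESSAGE_TYPE#",
  "token": "#TOKEN#"
}"""

substitutions = {
    "#ALERT_TITLE#": "Checkout flow check changed severity",
    "#MESSAGE#": "Severity changed from Info to Error",
    "#MESSAGE_TYPE#": "CRITICAL",
    "#TOKEN#": "example-token-value",
}

body = body_template
for placeholder, value in substitutions.items():
    body = body.replace(placeholder, value)

print(body)  # This resolved JSON is what gets POSTed to the service's ingestion endpoint.
```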

Defining Your Own Webhook Placeholders in Custom Webhooks

The default placeholders above should only be considered suggestions. It is also possible to define your own webhook placeholders, which pull their values from response content returned by an API call. These custom-defined placeholders are also surrounded by a pound/hashtag # character.

Consider the following example:

Here, after the main alert text is sent to Slack (via hooks.slack.com), a second URL call is made to https://api-asm-eu1.apica.io/. It is a GET request which returns response data in JSON format.

One of the key/value pairs in the response is "url". Although ASM asks for an XPath, you must provide the JSON path instead, and ASM will find the value at the given path of the response and assign it to the custom placeholder you define. In other words, some manual translation must be done in this instance: the "url" property from the Postman screenshot above becomes /url in the webhook definition screenshot.

Essentially, in the above example, when the GET request is resolved, the value of #url# becomes whatever is found in the "url" property of the response body. In this example, the recipient of the alert will instantly know that the check goes to https://www.msn.com.
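
A minimal sketch of that custom-placeholder lookup, assuming a GET endpoint whose JSON response contains a "url" key (the endpoint, path-resolution code, and #url# substitution below are illustrative, not ASM internals):

```python
# Illustrative resolution of a custom #url# placeholder from a GET response;
# httpbin.org/get stands in for the configured API endpoint.
import requests

def resolve_json_path(document, path):
    """Walk a simple slash-separated path such as '/url' through parsed JSON."""
    value = document
    for key in path.strip("/").split("/"):
        value = value[key]
    return value

response = requests.get("https://httpbin.org/get", timeout=10)
url_value = resolve_json_path(response.json(), "/url")

# The custom placeholder is then replaced in the outgoing alert text.
alert_text = "The check goes to #url#".replace("#url#", str(url_value))
print(alert_text)
```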