ASM Monitoring Best Practices

Introduction

Performance monitoring of websites and applications is a challenging but critical task designed to help ensure a positive end user experience and to help companies gain and retain loyal customers. According to Apica’s 2017 Consumer Expectations survey, 60% of consumers indicated they’d be less loyal to brands after poor website and app performance, with more than three quarters of respondents saying they expect sites and apps to perform faster than three years ago.

To avoid ‘Digital Desertion,’ brands are turning to the experts to form strong monitoring strategies. In this guide, we’ll help you understand how synthetic monitoring can add value. We’ll explore how to build a strategy and where to start, synthetic monitoring in the development lifecycle, alert ownership and resolution, documenting checks, the benefits of geographically dispersed monitoring, and other best practices surrounding synthetic monitoring.

1. Implement a well thought-out monitoring strategy

Synthetic Performance Monitoring is a monitoring process initiated proactively by external agents that imitate actual end users and web traffic. Most companies already have some idea of where to best focus their monitoring. At first, this may just be a hypothesis, but as you collect more and more data, you’ll get closer to building a strong strategy around what to monitor.

When considering your strategy, start by analyzing your applications to identify where you must focus your monitoring. To do this, you must:

  • Classify your systems based on importance

  • Determine the most common and business-critical tasks – this is where monitoring starts

  • Consider the most relevant scenarios and all the possible user journeys

  • Establish everything that needs to be monitored within the website and mobile application (including internal interfaces). This is likely to include processes such as making a transaction and performing searches.

Another important factor in a successful monitoring strategy is identifying and including all major stakeholders in the strategy development process, including developers, quality assurance, IT, management, owners, and marketing. It’s also essential to continue meeting with stakeholders to review the strategy monthly or bi-monthly to keep it up to date and continuously improve it.

2. Establish your monitoring strategy goals

Synthetic monitoring, at a high level, is valuable for determining the availability, performance, and functionality of applications. When determining the specific monitoring to perform against your applications, it’s essential to design the scripts and checks to match those goals. For instance, availability monitoring is typically much simpler and can be run at higher frequencies than performance or functionality monitoring, but those advantages come at the cost of less detailed information.

For your most critical applications, it makes sense to take a blended approach that uses some high-frequency availability monitoring in combination with lower-frequency (but more complex and detailed) performance and functionality monitoring.
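
To make the blended approach concrete, here is a minimal sketch of how such a plan might be written down for one critical application. The check names, intervals, and data structure are illustrative assumptions, not an Apica ASM configuration format.

```typescript
// Minimal sketch of a blended check plan for one critical application.
// Names, intervals, and the structure itself are illustrative assumptions.

type CheckKind = "availability" | "performance" | "functional";

interface PlannedCheck {
  name: string;
  kind: CheckKind;
  intervalMinutes: number; // how often the check runs
  notes?: string;
}

const checkoutMonitoringPlan: PlannedCheck[] = [
  // Cheap, simple checks run at high frequency to catch outages quickly.
  { name: "Homepage URL check", kind: "availability", intervalMinutes: 1 },
  { name: "Checkout API ping", kind: "availability", intervalMinutes: 1 },

  // Heavier browser scenarios run less often but give detailed timings
  // and verify that the user journey actually works end to end.
  {
    name: "Browser check: search, add to cart, pay",
    kind: "functional",
    intervalMinutes: 15,
    notes: "Full scripted user journey with screenshots on failure",
  },
  {
    name: "Browser check: landing page render time",
    kind: "performance",
    intervalMinutes: 30,
  },
];

console.log(`${checkoutMonitoringPlan.length} checks planned for checkout`);
```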

3. Start synthetic monitoring early in the development lifecycle - and test your monitoring checks fully before implementing

Unfortunately, monitoring is often an afterthought - something that’s only discussed once a website or application has gone into production. This is sometimes due to a lack of resources or budget, but more often than not it’s simply down to a lack of planning.

What companies often don’t realize is that monitoring should be implemented in the design phase - as you’re developing an application, or as you’re planning and analyzing it. Synthetic monitoring is best used before a web application goes into production to monitor its behavior, providing a report on the performance expected in the production stage. These pre-production results are a great baseline, and the same synthetic monitoring scripts can continue to run in the production environment.

When synthetic monitoring is carried through design and planning and into production, it acts as a proactive way of identifying issues before they become problems.

Test your monitoring checks before fully implementing

It may sound obvious, but time needs to be allocated to test monitoring checks fully before implementing them, not only to ensure they are working, but also to give your team time to get used to them and customize them to suit specific requirements.

When you let new checks run for a reasonable test period, you’re in a much better position to establish suitable values for settings such as threshold levels and the number of retries, and to verify the functionality of the script itself. This way you can customize the check to your liking through trial and error so that it’s perfectly suited to your specific needs.

4. Assign responsibility

It’s essential to identify owners for each part of the monitoring and evaluation process so that checks don’t break when changes are made to the target environment. Make sure someone (preferably whoever is responsible for the service a check is connected to) owns each script or check. This is when having different dashboards for different teams and stakeholders really becomes helpful.

These views can help give you a clear overview of relevant data and KPIs, making it easier to keep track of performance at a glance.

5. Get strategic with documenting check scenarios

It’s common to have complex check scenarios with multiple steps, and most organizations will also want to maintain scripts over an extended period of time. This is when strategic, structured documentation practices become essential.

You should always document all steps with screenshots, function, input/output, and anything else critical to running the checks - effectively, the documentation should explain why the check works the way it does. Using a naming standard for both groups and checks is also a good idea. This makes it easy for stakeholders and team members to get a clear overview, and allows future check maintainers to continue your work when you cannot.

6. Document how alerts are resolved

This is one of the most important best practices surrounding synthetic monitoring. It’s essential to plan and configure check alerts that work for your team. And once set up, alerts should only be sent to the relevant party, not to all users.

This is critical to ensuring alerts are properly addressed rather than discarded or ignored - if everyone receives every alert, each person may assume someone else is dealing with it.

Specifically, different thresholds should trigger different types of alerts to different people. For example, alerts for minor issues can be sent to a junior engineer, while alerts for more severe issues should be sent to a more senior member of the team.
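
As a simple illustration of this idea, the sketch below maps response-time thresholds to severities and routes each severity to different recipients. The thresholds, severity names, and addresses are illustrative assumptions, not ASM’s alert configuration.

```typescript
// Minimal sketch of threshold-based alert routing.
// Thresholds, severities, and recipients are hypothetical examples.

type Severity = "minor" | "major" | "critical";

const recipientsBySeverity: Record<Severity, string[]> = {
  minor: ["junior-oncall@example.com"],
  major: ["senior-oncall@example.com"],
  critical: ["senior-oncall@example.com", "service-owner@example.com"],
};

function classifyResponseTime(ms: number): Severity | null {
  if (ms > 10_000) return "critical"; // hypothetical thresholds
  if (ms > 5_000) return "major";
  if (ms > 2_000) return "minor";
  return null; // within the normal range, no alert
}

function routeAlert(checkName: string, responseTimeMs: number): void {
  const severity = classifyResponseTime(responseTimeMs);
  if (!severity) return;
  const recipients = recipientsBySeverity[severity].join(", ");
  console.log(`[${severity}] ${checkName}: ${responseTimeMs} ms -> notify ${recipients}`);
}

routeAlert("Checkout browser check", 6_200); // routes to the senior on-call
```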

7. Choose the right metrics

There is no single metric that fits all applications, and companies will gradually come to understand the ones that are best suited to them. It’s good practice to compare the measured metric results with the actual user experience, as they should be similar.

It is important to use the appropriate return value metrics for the type of application being monitored. Examples of return value metrics include:

  • Total browser render time

  • DOM complete

  • DOM content loaded

  • Total download size

To illustrate the use of different metrics, consider a single-page application. It would not be measured with the response time metric “DOM complete,” as this would show a misleading response time due to how the application is built.

Compare the measured metric results with the actual user experience in order to choose the correct one; they should be similar. There is no metric that fits all applications, and it’s up to you to choose the one best suited to yours.
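
To illustrate where several of these return value metrics come from, the sketch below reads the browser’s Navigation Timing entry from within a page. This is generic browser-side code, not Apica-specific; how ASM itself collects and reports these values is configured in the check, and the metric names above are only approximated here.

```typescript
// Minimal sketch: reading timings that correspond to common return value
// metrics via the standard Navigation Timing API. Runs in a browser context
// (for example, injected by a scripted browser check).

const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

if (nav) {
  // "DOM content loaded": the HTML has been parsed and DOMContentLoaded has fired.
  const domContentLoadedMs = nav.domContentLoadedEventEnd - nav.startTime;

  // "DOM complete": the document and its sub-resources have finished loading.
  const domCompleteMs = nav.domComplete - nav.startTime;

  // Transfer size of the main document only (bytes over the wire);
  // a full "total download size" would also sum resource entries.
  const documentTransferBytes = nav.transferSize;

  console.log({ domContentLoadedMs, domCompleteMs, documentTransferBytes });
}
```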

8. Use multiple monitoring locations

Develop a location-based monitoring strategy to keep an eye on performance for your audiences in all important regions and help you analyze performance across geographies.

By using a solution with a worldwide network of monitoring probes, you can address business requirements in specific regions or countries where you have users. For example, if your user base is located in Germany and Spain, it’s essential that you have check locations in these countries in order to simulate the correct type of traffic there, and to collect accurate data against this simulated traffic.

This is also handy when it comes to ruling out location-specific errors, avoiding false negatives, and covering any routing issues over the internet. If you are doing business across a large number of regions or countries, using multiple monitoring locations can help you analyze and compare performance across these geographies, helping you understand where to allocate data resources.

9. Combine functional checks with API-checks

To really get the full picture of your performance, combine functional checks with API checks. Combining the two gives you better insight into specific functionalities and the ability to pinpoint issues that may be difficult to spot with only functional or only API monitoring. It’s also a great way to speed up your mean time to recovery, giving you more in-depth knowledge about the different functionalities in your application.
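
As a simple illustration of the API side of this combination, the sketch below probes a hypothetical search endpoint, failing if it is unavailable or too slow, independently of the browser scenario that exercises the same search through the UI. The URL and threshold are assumptions for illustration; in practice this would typically be set up as an ASM API or Postman check alongside your browser checks.

```typescript
// Minimal sketch of an API-style check: verify that an endpoint responds
// successfully and fast enough. Runs under Node.js 18+ (built-in fetch).

async function checkSearchApi(): Promise<void> {
  const url = "https://example.com/api/search?q=shoes"; // hypothetical endpoint
  const started = Date.now();

  const response = await fetch(url, { method: "GET" });
  const elapsedMs = Date.now() - started;

  if (!response.ok) {
    throw new Error(`Search API returned HTTP ${response.status}`);
  }
  if (elapsedMs > 1000) {
    throw new Error(`Search API too slow: ${elapsedMs} ms (threshold 1000 ms)`);
  }

  console.log(`Search API OK in ${elapsedMs} ms`);
}

checkSearchApi().catch((err) => {
  console.error(err);
  process.exitCode = 1; // signal failure to the caller
});
```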

10. Last, but not least: create versatile checks

Versatility is everything in checks: by creating checks from scripts that are not tied to specific page content, you can easily adapt them for multiple scenarios, ultimately saving you time and effort. Scripts should tolerate changes made to a specific page so that they don’t have to be constantly rewritten. So don’t configure tests to always pick the same specific item or product from the same place on a webpage - configure them to pick the first product that appears on the page, as the sketch below illustrates.
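
The sketch below shows that idea in Selenium terms (using the Node selenium-webdriver bindings): instead of hard-coding a particular product’s id, the script clicks whichever product is listed first. The URL and selectors are hypothetical placeholders, and an ASM scenario recorded with Selenium IDE would express the same idea through its own commands.

```typescript
import { Builder, By, until } from "selenium-webdriver";

// Minimal sketch: click the first product on a listing page instead of a
// specific, hard-coded one, so the check keeps working when the catalog changes.
async function openFirstProduct(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example.com/products"); // hypothetical page

    // Brittle: await driver.findElement(By.id("product-4711")).click();
    // Versatile: take whichever product card is rendered first.
    const firstProduct = await driver.wait(
      until.elementLocated(By.css(".product-card a")), // hypothetical selector
      10_000
    );
    await firstProduct.click();

    await driver.wait(until.urlContains("/products/"), 10_000);
    console.log("First product opened successfully");
  } finally {
    await driver.quit();
  }
}

openFirstProduct().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});
```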

Conclusion

Synthetic monitoring is proving to be a must-have for companies that know they risk losing customers to less-than-optimal functionality and performance. From testing new features before deployment, to looking at performance degradations before and after deployments, to testing new markets or geographies, to detecting performance issues related to specific browsers, resolutions, and devices - the use cases are endless.

However, while all companies recognize the importance of synthetic monitoring, some may not have the correct solution in place, while others may not understand the importance of underpinning synthetic monitoring with a strong, end-to-end strategy.

Here’s a 10-point breakdown of the best practices described in this guide:

  1. Implement a well thought-out monitoring strategy

  2. Establish your monitoring strategy goals

  3. Start synthetic monitoring early in the development lifecycle & test your monitoring checks fully before implementing

  4. Assign responsibility

  5. Get strategic with documenting check scenarios

  6. Document how alerts are resolved

  7. Choose the right metrics

  8. Use multiple monitoring locations

  9. Combine functional checks with API-checks

10. Last, but not least: create versatile checks
