Understanding ApicaYAML Scripting and Syntax

ApicaYAML scripts are divided into two main sections: a “config” section and a “scenarios” section. The “config” section contains global parameters that apply to every script in the “scenarios” section, while the “scenarios” section contains the actual script(s) that run during the test.

Dashes and other special characters cannot be used in a scenario name, because the scenario name becomes the Java class name of the compiled script, and Java class names cannot contain special characters.

Formatting an Apica YAML Script

Each indentation level must consist of either two or four space characters, applied consistently throughout the script. The example below uses an indent of 2 spaces.

Sample Formatting

Note the comments explaining the formatting.

config:                         # start of the configuration. no indent.
  target: null                  # mandatory for the script. 1 indent.
  defaults:                     # start of default settings for requests. 1 indent.
    headers:                    # start of a specific default setting. 2 indents.
      name: value               # default setting parameters. 3 indents.
  environments:                 # start of environment settings. 1 indent.
    name:                       # environment identifier. 2 indents.
      target: null              # environment address. 3 indents.
  inputs:                       # start of input settings. 1 indent.
    - name: null                # mandatory input identifier. 2 indents and a dash.
      default: null             # optional default value. aligned with "name".
  inputfiles:                   # start of input file settings. 1 indent.
    - path: "path/to/file"      # mandatory path to the file. 2 indents and a dash, followed by a quoted path.
      fields:                   # list of variables to map columns to. aligned with "path".
        - "arbitraryName"       # variable name. one level deeper and a dash, followed by a quoted name.
      order: "sequential"       # read order (sequential / random). aligned with "path".
      scope: "global"           # file scope (global / user / loop). aligned with "path".
  variables:                    # start of the variables section. 1 indent.
    - date:                     # start of a date variable. 2 indents and a dash.
        format: null            # date variable attributes. one level deeper than "date".
        offset: null
        hour: null
        second: null
        day: null
        year: null
        timestamp: null
    - random:                   # start of a random variable. 2 indents and a dash.
        scope: null             # variable scope (global / user / loop).
        from: null              # start value.
        to: null                # end value.
        leadingZeroes: null     # optional number of zeros to prefix the value.
    - javaproperty:             # start of a Java property variable. 2 indents and a dash.
        key: null               # Java property key.
        value: null             # Java property value.
scenarios:                      # start of the scenarios section. no indent.
  - name: "scriptName"          # script name. 1 indent and a dash.
    flow:                       # start of the flow section. 2 indents.
      - get:                    # start of a step. 3 indents and a dash, followed by the command.
          url: null             # URL to be accessed. the URL counts as the first attribute. include the protocol, e.g. https://www
          assert:               # allow both 200 and 302 in the response code of the URL you're getting. aligned with "url".
            - status:           # assert that the response has a certain status. one level deeper and a dash.
                codes:          # list of asserted status codes.
                  - 200         # asserted status code.
                  - 302         # asserted status code.

Notes on Special Characters

Do not use a word processor when creating ApicaYAML scripts; treat the scripts as code and create/edit them in an IDE with integrated YAML formatting.

  • Indentations have a major impact on how the script is interpreted. If a script is indented incorrectly, it will not work.

  • Indentation must be made with spaces, never tabs (the examples in this document use 2 spaces per indentation level)

  • Text in single quotes (') is taken literally; escape sequences are not processed

  • Text in double quotes (") supports escape sequences

  • "\n" denotes a new line (inside double quotes)

  • "#" denotes a comment

  • Refer to the following rules for constructing arrays of attributes within ApicaYAML scripts:

    1. “-” (dash) denotes a member of an array of attributes.

    2. Only certain attributes are intended to be items in an array. The concept of an array here is different from arrays in regular programming languages: an array is a syntax construct that specifies a one-to-many relationship in the ZT script. For instance, a script can have one or many input files.

    3. The first item in the array is the first item specified after a dash, and the array ends when its indentation block ends. That is, the array starts with a dash and ends when the indentation block ends.

    4. The dash can be placed on the same line as the first array item or on a separate line before the first array item. Dashes have no bearing on indentation level.

    5. Every member of an array must have a dash in front of it. This is acceptable syntax for an array:

      config:
        target: 'http://ticketmonster.apicasystem.com/ticket-monster' 
        inputs:
        - name: 'integrationTest'
          default: 'http://ticketmonsterdev.apicasystem.com/ticket-monster'
        - name: 'production'
          default: 'http://ticketmonster.apicasystem.com/ticket-monster'

      This is also acceptable syntax for an array:

      config:
        target: 'http://ticketmonster.apicasystem.com/ticket-monster' 
        inputs:
        - 
          name: 'integrationTest'
          default: 'http://ticketmonsterdev.apicasystem.com/ticket-monster'
        - 
          name: 'production'
          default: 'http://ticketmonster.apicasystem.com/ticket-monster'

The “config” section

config:
  target: "https://www.google.com"
  headers:
    accept: 'application/json'
    content-type: 'application/json'
  inputs: # here we define input variables that we can change in ASM
  - name: "inputsTestVar"
    default: "/vod/999/example(999).ism/example(999).m3u8"
  variables:
  - timestamp:
      as: "time111"
  - random:
      as: "ran111"
      from: 3
      to: 100
      leadingZeroes: false
  - javaproperty:
      key: "testJavaKey"
      value: "testJavaValue"
      as: "java11"
  externalfiles: # we can declare external files to be added to the zip file that results from running ZebraCLI, e.g. host files
  - path: "test1014.yml" # the relative path to the file (inside the yml folder)
    javaoption: "addtoclasspath" # declare possible java options like classpath and xbootclasspath
  inputfiles:
  - path: "testInputFile1.csv"
    fields:
    - "testfield"
    order: "sequential"
    scope: "global"
    eof: "close"
  - path: "testInputFile2.csv"
    fields:
    - "testfield2"

Attributes of the “config” section

Pay close attention to the indentation in the following code sections!

Config attributes may appear in any order.

The config section begins with the “config:” keyword:

config:

target

The “target” keyword is mandatory!

The main URL of the application you want to test. This becomes the base URL for all requests in the scenarios section unless you provide a full URL for a request. If the protocol (HTTP or HTTPS) is omitted, the request defaults to HTTP. The target may also include part of the URI path following the hostname.

  target: "https://ven03142.service-now.com"

headers

Specified headers are applied to all requests in the YAML definition file. For example, the following headers tell the server that the script accepts “application/json” responses (the accept header) and that request bodies are sent as “application/json” (the content-type header):

  headers:
    accept: 'application/json'
    content-type: 'application/json'

The following image shows an example response header from a ZT script compiled from a YAML file containing the above headers:

inputs

It is possible to provide inputs that become User Input Fields within the compiled ZebraTester script. In the following example, two inputs are defined in the YAML file using the following syntax:

  inputs:
  - name: 'integrationTest'
    default: 'http://ticketmonsterdev.apicasystem.com/ticket-monster'
  - name: 'production'
    default: 'http://ticketmonster.apicasystem.com/ticket-monster'

The inputs above become User Input Fields in the final ZebraTester script:

variables

The “variables” keyword specifies runtime variables whose values are initialized when starting the test. You can either provide one field or an array of fields to use. The available runtime variables include:

  • timestamp: Creates a Unix timestamp based on the time when the test is started. By default, the value of the variable is reinitialized with every test iteration.

  • random: Creates a random number within a range. By default, the value of the variable is reinitialized with every test iteration.

  • javaproperty: Creates a Java system property. By default, the value of the variable is reinitialized with every test iteration.

Examples of variable usage

  variables: 
  - timestamp: 
      as: 'myTimestamp' # The key is the variable type; the value of "as" becomes the variable name.
  - random:
      as: 'myRandomNumber'
      scope: 'loop' # The variable will be reinitialized every time a new loop starts. 
      from: 1 # This is the lowest value that the random number may get.
      to: 1337 # This is the highest value that the random number may get. 
      leadingZeroes: true # Optional. All random numbers will be of equal length. Any values shorter than the max value will have zeros prepended to fill the remaining length.  
  - javaproperty: 
      as: 'myJavaProperty'
      key: 'myCustomKey' # This will be the name of the java system property 
      value: 'value' # This will be the initial value of the system property

The following screenshot shows how the above code translates into a ZebraTester script:

externalfiles

It is possible to declare external files to be used within a compiled ApicaYAML script. This mirrors the external file import functionality built into ZebraTester, which is accessed by opening a URL Details window within a ZebraTester script and clicking the Folder icon at the top right of the Var Handler:

Clicking on that icon brings up the “Config External Resource” page:

You can specify an external file to add by adding the following attributes to your ApicaYAML definition file:

  externalfiles: # we can declare external files that we want to be added to the zip file that results from compiling an ApicaYAML script
    - path: "externalResource.java" # the relative path to the file (inside the yml folder)
      javaoption: "addtoclasspath" # declare possible java options like classpath and xbootclasspath

Place the input file (in this case, externalResource.java) within the “input” folder within your ApicaYAML Solutions folder:

When you compile an ApicaYAML script with the “externalfiles” attribute included and the specified file in the “input” folder, the file will be declared and added as a resource to the final script. This option is most commonly used for importing Java files and adding them to your classpath so they can be utilized by your ZebraTester script or a plugin which is referenced by your ZebraTester script.

inputfiles

It is possible to specify input files containing test data for the ZebraTester script to use. These files are shown in the “Input Files” section in ZT. External files should be placed in the “/scripts” subfolder of the project folder. You can provide either one file or an array of files. Supported file formats are .txt and .csv. Columns must be separated by “,” (comma). Attributes include:

  • Path (mandatory): The file path to the test data file. Remember that the path is relative to the YAML definition file.

  • Fields (mandatory): A list of variables that the columns in the test file map to. The variables are mapped to the columns in the order that they appear in the list.

  • Order (optional): How the rows in the test file are picked during the test. Valid options are 'sequential' and 'random'. Random selection is used if this option is omitted.

  • Scope (optional): The scope the extracted variables have during the test. Valid options are 'global', 'user' and 'loop'. Loop scope is used if this option is omitted.

    • Global: The same value (row) is used for all test iterations

    • User: One value (row) is used by each Virtual User for the whole test duration (one row per VU)

    • Loop: A new value (row) is selected for each test iteration

  • EOF: The action to take when the end of the file is reached (for example, 'close')

The following syntax adds the corresponding files and input fields to the compiled ZebraTester script:

  inputfiles:
    - path: "users.csv" # This file must be located in the /script subdirectory
      fields:
      - "username" # 1st column will be used for the variable 'username' 
      - "password" # 2nd column will be used for the variable 'password'
      order: "sequential" # The data vill be picked line for line in sequential order 
      scope: "loop" # A new line will be read for each test iteration
      eof: "close" # defines the action to take when the file is finished being read

The following screenshot shows a created input file.

The following screenshots show the created variables. These can be assigned manually in the ZT script.

The “scenarios” section

scenarios: # we can define multiple scenarios in 1 yml file if we want to
  - name: "TV_Test_iosE789" # in this case we have only 1 scenario, this is the name of the scenario
    flow:
      - page: # URLs need to be attached to pages; if a page is not defined, one is created automatically
          name: "FirstPage" # name of the page
          thinktime: 6000
      - transaction:
          transactionname: "Trans1"
          flow:
            - get: # HTTP method goes here
                url: "https://www.google.com" # this is an example of using a relative URL
            - get:
                url: "https://www.unsplash.com" # this is an example of using a relative URL
      - loop:
          loopname: "Loop1"
          count: 3
          flow:
            - page:
                name: "DRM"
            - get:
                url: "https://www.google.com"
                headers: # signifies the adding or updating of a header field
                  Authorization: "{{inputsTestVar}}" # variables go inside curly brackets

      - page:
          name: "Manifest"
      - get:
          capture:
            - header:
                target: 'x-cdn-forward'
                as: 'cdncompare'
          assert: # allow both 302 and 200 because the cdn can change
            - status:
                codes:
                  - 200
                  - 302
            - text:
                string1: "test"
                operand: "1"
                string2: "test2"
                onfail: "continue"
          after:
            - inline:
                code: |
                  isakamai = strCompareIgnoreCase(cdncompare,"Akamai")
                  IF isakamai THEN
                  Location = getHTTPResponseHeaderField("Location")
                  temp1 = strSplit(Location,"\\/vod")
                  temp2 = temp1(1)
                  temp3 = strSplit(temp2,"https:\\/\\/")
                  cdnforward = temp3(2)
                  ELSE
                  cdnforward = "lbs-usp-hls-vod.cmore.se"
                  ENDIF
                input:
                  - '{{testVarInlineAfterInput1}}'
                output:
                  - '{{testVarInlineAfterOutput1}}'
          before:
            - inline:
                code: |
                  isakamai = strCompareIgnoreCase(cdncompare,"Akamai")
                  IF isakamai THEN
                  Location = getHTTPResponseHeaderField("Location")
                  temp1 = strSplit(Location,"\\/vod")
                  temp2 = temp1(1)
                  temp3 = strSplit(temp2,"https:\\/\\/")
                  cdnforward = temp3(2)
                  ELSE
                  cdnforward = "lbs-usp-hls-vod.cmore.se"
                  ENDIF
                input:
                  - '{{testVarInlineBeforeInput1}}'
                output:
                  - '{{testVarInlineBeforeOutput1}}'
          url: "https://www.google.com/inputsTestVar"

      - get:
          capture:
            - regex:
                target: '([a-zA-Z0-9\(\)\_]+-audio_eng=[a-zA-Z0-9\(\)\_]+-video_eng=.*m3u8)'
                as: "submanifest_url"
            - xpath:
                target: 'test'
                as: "xpath_test"
            - json:
                target: 'jsonstring'
                as: "json_target"
          url: "http://zebracli.zebracli.zebracli"

      - page:
          name: "Submanifest"
      - get:
          capture:
            - boundary:
                leftboundary: "leftTest"
                rightboundary: "rightTest"
                as: 'boundaryVar'
            - regex:
                target: '(.*\.ts?)'
                occurrence: 1
                as: 'regexVar'
            - header:
                target: "headerName"
                as: "headerVar"
          url: "http://zebracli.zebracli.zebracli"

      - page:
          name: "Segments"
      - loop:
          flow:
            - page:
                name: "testPagex"
            - transaction:
                transactionname: "testTransactionx"
                flow:
                  - post:
                      url: "http://zebracli.zebracli.zebracli/segment_loop_Item"
                      body: "<html><body><h1>Hello,
                             World!</h1></body></html>"
                      before:
                        - inline:
                            code: |
                              print("test")
                            input:
                              - '{{testVarInlineBeforeInput2}}'
                            output:
                              - '{{testVarInlineBeforeOutput2}}'
                        # - plugin:
                        #     file: "addNumbers.class"
                        #     output:
                        #       - "testvaroutput"
                        #     input:
                        #       - "testvarinput"
                      # after:
                      #   - inline:
                      #       code: |
                      #         print("test")
                      #       input:
                      #         - '{{testVarInlineAfterInput2}}'
                      #       output:
                      #         - '{{testVarInlineAfterOutput2}}'
                      #   - plugin:
                      #       output:
                      #         - "testvaroutput2"
                      #       input:
                      #         - "testvarinput2"
                      #       file: "addNumbers.class"
                  - get:
                      url: "http://zebracli.zebracli.zebracli"          
            - get: # the "segment_loop_Item" part of the url is replaced with a segment defined in the "over" list
                after:
                  - inline:
                      code: |
                        print("test")
                      input:
                        - '{{testVarInlineAfterInput3}}'
                      output:
                        - '{{testVarInlineAfterOutput3}}'
                before:
                  - inline:
                      code: |
                        print("test")
                      input:
                        - '{{testVarInlineBeforeInput3}}'
                      output:
                        - '{{testVarInlineBeforeOutput3}}'
                url: "http://zebracli.zebracli.zebracli"
            - page:
                name: "testPagey"
          loopname: "segment_loop"
          over: # the loop iterates once over each item in the "over" list
            - "{{boundaryVar}}"
            - "{{regexVar}}"
            - "{{headerVar}}"

The scenarios section of the definition file is where one or more scripts are defined. If multiple scripts are defined, the config options will be applied to all scripts within the “scenarios” section.

Name

The “name” keyword is mandatory!

The “name” value specified in the ApicaYAML definition file becomes the name of the script.

For example, the following syntax produces a compiled ZebraTester script named “TM_OrderTickets_v2”:

scenarios:
- name: "TM_OrderTickets_v2"


Flow

The “flow” keyword contains the content that is parsed into runnable pages, URLs, etc. within the ZebraTester script. It is where you define URLs to GET, data to POST, and so on.

The following flow objects are supported:

  • get

  • post

  • put

  • delete

  • page

  • loop

  • transaction

...
  flow:
      - page:
          ...
      - get:
          ...
      - post:
          ...
      - put:
          ...
      - delete:
          ...
      - loop:
          ...
      - transaction:
          ...
      

Page

A page break. Every script must start with a page break. Page breaks are followed by HTTP requests.

Can contain the following keywords:

  • name (mandatory)

  • thinktime

- page:
    name: "Page 1"
    thinktime: 3      

HTTP Methods

Get:

Can contain the following keywords:

  • url (mandatory)

  • capture

  • assert

  • before

  • after

  • headers

- get:
    url: "http://ticketmonster.apicasystem.com"

Post

Can contain the following keywords:

  • url (mandatory)

  • capture

  • assert

  • before

  • after

  • json

  • data

  • form

  • plain

  • headers

- post:
    url: "http://ticketmonster.apicasystem.com"

Put

Can contain the following keywords:

  • url (mandatory)

  • capture

  • assert

  • before

  • after

  • json

  • data

  • form

  • plain

  • headers

- put:
    url: "http://ticketmonster.apicasystem.com"

Delete

Can contain the following keywords:

  • url (mandatory)

  • capture

  • assert

  • before

  • after

  • headers

- delete:
    url: "http://ticketmonster.apicasystem.com"

Transactions & Loops

Transactions and loops must contain a flow that works the same way as the flow at the highest level of the script. This allows for nested transactions and loops.

Transaction

Used to declare multiple requests as part of a single transaction.

Can contain the following keywords:

  • flow (mandatory)

  • transactionname (mandatory)

- transaction:
    transactionname: "Transaction 1"
    flow:
      - get:
          url: "http://ticketmonster.apicasystem.com"

Loop

Used to loop over multiple requests.

  • flow (mandatory)

  • loopname (mandatory)

  • count

  • over

- loop:
    loopname: "Loop 1"
    count: 3
    flow:
      - get:
          url: "http://ticketmonster.apicasystem.com"

Capture

This is a list of captures. A capture signifies the extraction of data from an HTTP response.

The capture keywords are:

  • json

  • regex

  • xpath

  • boundary

  • header

  • regexheader

JSON

Can contain the following keywords:

  • target (mandatory)

  • as (mandatory)

  • fallback

  • random

  • occurrence

- get:
    url: "/ticket-monster/rest/events?_{{epoch_TS}}"
    capture:
      - json: 
          occurrence: 1
          random: true
          target: "$[*].id"
          as: "event_id"

RegEx

Can contain the following keywords:

  • target (mandatory)

  • as (mandatory)

  • fallback

  • random

  • occurrence
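
For example, the following capture (taken from the long scenario above) extracts a .ts segment URL into the variable 'regexVar':

capture:
  - regex:
      target: '(.*\.ts?)'
      occurrence: 1
      as: 'regexVar'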

Xpath

Can contain the following keywords:

  • target (mandatory)

  • as (mandatory)

  • fallback

  • random

  • occurrence
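
For example, the following capture (from the scenario above, where the XPath expression is only a placeholder) stores the result in 'xpath_test':

capture:
  - xpath:
      target: 'test'
      as: 'xpath_test'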

Boundary (Left Right Boundary)

Can contain the following keywords:

  • leftboundary (mandatory)

  • rightboundary (mandatory)

  • as (mandatory)

  • fallback

  • random

  • occurrence
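
For example, the following capture (from the scenario above) extracts the text between the given left and right boundary strings into 'boundaryVar':

capture:
  - boundary:
      leftboundary: "leftTest"
      rightboundary: "rightTest"
      as: 'boundaryVar'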

Header

Can contain the following keywords:

  • target (mandatory)

  • as (mandatory)

  • fallback

  • random

  • occurrence
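
For example, the following capture (from the scenario above) stores the value of the x-cdn-forward response header in 'cdncompare':

capture:
  - header:
      target: 'x-cdn-forward'
      as: 'cdncompare'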

Regex Header

Can contain the following keywords:

  • target (mandatory)

  • as (mandatory)

  • fallback

  • random

  • occurrence
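
A minimal sketch, assuming the target is a regular expression applied to the response headers (the header name and pattern here are illustrative, not taken from a compiled script):

capture:
  - regexheader:
      target: 'x-request-id: ([a-f0-9]+)'
      as: 'requestId'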

Assert

This is a list of asserts. An assert signifies the verification of a status code, text string, MIME type, or the size of the response data. The load test can be configured to abort if an assert fails.

Can contain the following keywords:

  • status

  • text

  • size

  • mimetype

Status

Can contain the following keywords:

  • codes

  • onfail

assert:
  - status:
      codes:
        - 200
        - 302
      onfail: "continue"

Text

Can contain the following keywords:

  • string1

  • string2

  • operand

  • onfail

assert:
  - text:
      string1: "Teststring1"
      string2: "Teststring2"
      operand: 2
      onfail: "abort"

Mimetype

Can contain the following keywords:

  • target

  • onfail
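
A minimal sketch following the same pattern as the other asserts (the target value shown is an assumption):

assert:
  - mimetype:
      target: "application/json"
      onfail: "continue"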

Size

Can contain the following keywords:

  • target

  • deviation

  • onfail

assert:
  - size:
      target: 3000
      deviation: 5
      onfail: "continue"

Before & After

The before and after keywords signify a list of plugins and inline scripts that can be run either before or after an HTTP request.

Before

A list that contains plugins or inline scripts that will be executed before the current HTTP request.

Can contain the following keywords:

  • plugin

  • inline

before:
  - plugin:
      ...
  - inline:
      ...

After

A list that contains plugins or inline scripts that will be executed after the current HTTP request.

Can contain the following keywords:

  • plugin

  • inline

Plugins & Inline Scripts

Plugin

Plugins are Java programs specifically created for ZebraTester/ZebraCLI. Plugins extend the functionality of ZebraCLI and allow Java code to be run before and after HTTP requests.

Can contain the following keywords:

  • file (mandatory)

  • input

  • output

before:
  - plugin:
      file: "addNumbers.class"
      input:
        - "1"
        - "2"
      output:
        - "result"

Inline

Inline scripts are written in BASIC and extend the functionality of ZebraCLI. Inline scripts can be run before and after HTTP requests.

Can contain the following keywords:

  • code (mandatory)

  • input

  • output

before:
  - inline:
      code: |
        quantity=random(1,3)
        email="user"+(getUserNumber() + 1) + "@acme.com"
      output:
      - '{{email}}'  
      - '{{quantity}}'

Example Scenarios

The following scenarios serve as examples to reference when creating and modifying scripts.

Note the indentation used in the scripts!

Simple Scenario
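
A minimal illustrative script assembled from the constructs described above (the target URL, names, and values are placeholders, not output from a real compilation):

config:
  target: "http://ticketmonster.apicasystem.com/ticket-monster"
  headers:
    accept: 'application/json'
scenarios:
  - name: "SimpleScenario"
    flow:
      - page:
          name: "StartPage"
          thinktime: 3
      - get:
          url: "/rest/events"
          assert:
            - status:
                codes:
                  - 200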

Complex Scenario
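
For a full-scale example combining pages, transactions, loops, captures, asserts, and before/after inline scripts, refer to the long script shown in “The scenarios section” above.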
