This category covers creating/generating the Load Test itself, executing it on an agent, watching the Load Test's progress for real-time analysis, and interacting with the test as it occurs.
This phase also includes retrieving the post-execution Load Test results, as well as deploying/uploading the tests to the Apica Load Test (ALT) portal to execute them on the Apica platform, or deploying the script to the Apica Synthetic Monitoring (ASM) portal to turn the load test into a long-term monitoring script.
Generate HTTP(S) Load Test Program
Executing Load Test Programs
Distributed Load Tests
Multiple Client IP Addresses
You can now execute a first try of the load test if your recorded Web surfing session does not contain dynamically exchanged session parameters. What follows is only a short overview.
Step
Screenshot
Now, the load test program can be started. Apica recommends choosing only a small number of simulated users and a short execution time for the first test run.
In this example, the same failed URL errors are seen during the load tests, on the same page and on the same URL.
Tip: If your load test fails and a permanent error occurs at the same URL, and that URL contains a variable, open the Var Finder menu and verify whether handling for dynamically exchanged session parameters must be applied.
In these cases, we strongly recommend reading the installed ZT manual about Handling "Dynamically-Exchanged Session Parameters" (Attached)
Choose or modify any other options as needed.
Optionally, save the Recorded Session with the Save icon.
Generate the script. Note the Overwrite and Compile options.
Press Continue to go to the final Confirmation page
The Project Navigator Menu, or "Project Navigator," offers additional useful functions besides starting and managing load test programs.
First, it is recommended that a simple directory structure be defined, relevant to your projects. It is also often useful for individual application releases, or even daily test programs, to be assigned their own sub-directories.
To create a new sub-directory, select an existing directory (at left), and then click the Create New Subdirectory button.
Note: new directories can also be created via the Operating System, for example, via File Explorer under Windows or by using a console. The Project Navigator menu has been designed to ensure no discrepancies exist between the menu and the Operating System view.
The new sub-directory can then be selected with a single click on the Project Navigator's left side.
After creating a new sub-directory, an existing load test program, including its recorded web surfing session and Input Files, can be zipped, copied, deleted, or moved by marking the corresponding checkboxes then clicking on the action icons.
Where the action is to Copy/Move files, please select the destination/target directory, and the files will be immediately copied/moved to their destination.
Individual Java load test programs can also be renamed or copied to a new name. This can only be done using the Project Navigator; that is, it cannot be done using the Operating System. This is because the Java program contains references in the source code to its own name. The Project Navigator handles this requirement and automatically makes the appropriate adjustments when copying or renaming a Java load test program.
Note: compiled Java programs (*.class files) can never be renamed; only source files (*.java) can be renamed.
Note that the Project Navigator will require confirmation, using a red-shaded status row, when overwriting or deleting files. Whenever a shaded status row appears, you should review the action before confirming it with no/abort/yes. An example of deleting a file is given below.
Clicking this button in the Project Navigator will provide a preview of the statistics files measurements, including the description associated with the corresponding test run. The description of the recorded web surfing sessions and the load test programs will also be displayed, if available.
This feature allows you to quickly compare statistics files of different tests, especially when the same load test program was executed several times with the same number of concurrent users.
The test result can be acquired after the load test is completed.
If errors are measured while executing the load test you can analyze them already during the running test.
Using the Magnifying Icon on the GET call that had all the errors, we get this analysis that states what the error was.
During the execution of a load test you can perform the following real-time actions:
Abort the load test
Suspend and resume the load test
Increase and/or decrease the number of users
Increase or decrease the planned load test duration
Further information can be found on the Executing Load Test Programs page.
ZebraTester can be configured to have its Project Navigator Main Directory on a shared disk or a shared directory, giving all members of a team the same view of the data. On Windows, a directory share must already exist. On Unix systems, the shared directory must already be mounted using NFS or Samba.
In Windows, the ZebraTester mytests.dat configuration file must be edited using a text editor such as Notepad. The entry in this file must point to the directory share. This directory share must be created in Windows before the ZebraTester configuration file is edited. The mytests.dat file is located in the ZebraTester installation directory.
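As an illustrative sketch only (the exact entry syntax is documented in the Application Reference Manual), mytests.dat might contain a single entry pointing at the share; the UNC path below is a placeholder, not a real default:

```
\\fileserver\ZT-Share\LoadTests
```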
Load tests can also be transmitted and started on remote computers. Similarly, a “single” load test can be divided up and run on several computers, in which case the load-releasing computers are combined into a "virtual" application cluster.
The configuration is very simple and only requires that an Exec Agent process be installed on the involved load-releasing systems. This is implied in the case where the product has been installed and started on several computers, as each system already will contain an Exec Agent. Alternatively, individual Exec Agent processes can be installed separately as a Windows service and/or a Unix daemon (see the Application Reference Manual).
The communication between the Web Admin GUI and the remote Exec Agent processes usually uses raw TCP/IP network connections to port 7993; however, this port number can be freely chosen if the Exec Agent process is installed separately. The communication can also be made over HTTP or HTTPS connections (tunneling), and also supports outbound HTTP/S proxy servers. The support of outbound HTTP/S proxy server means, in this case, that tests can be started from a protected corporate network and then transmitted, over the proxy server of the corporation, to any load releasing system on the internet - all without the need for ordering new firewall rules.
On Unix systems, the mytests.dat configuration file must be manually created in the ZebraTester installation directory using a text editor such as vi. The only entry in this file should be the path to the new main directory. Note: on Unix systems that have only an Exec Agent started, this file is not necessary.
After setting the new Project Navigator main directory, the ZebraTester application must be closed. In addition, all cookies in your Web Browser must be deleted because the old main directory is also stored in a browser cookie. After that ZebraTester can then be re-started, and the new main directory will be active.
Further information about ZebraTester configuration files can be found in the "Application Reference Manual", Chapter 7.
The computers of a load-releasing cluster (the cluster members) may also be heterogeneous; that is, Windows and Unix systems, as well as strong and weak systems, can be mixed within the same cluster. The individual cluster members can be placed in different locations, and can also use different protocols to communicate with the Web Admin GUI (or rather with the local cluster job controller).
Please note that the underlying operating system of a single Exec Agent (load injector) can be overloaded if too many concurrent (virtual) users are executed there.
In most cases where a system is overloaded, the CPU(s) of the Exec Agent will be constantly at nearly 100% used. In these cases, the measured response times will not be valid because the measuring system itself is overloaded.
We recommend that you monitor the CPU consumption of the Exec Agent during the load test, and that you use an Exec Agent Cluster instead of a single Exec Agent when a single system does not have the necessary CPU resources to properly generate the load. The CPU consumption of the load-releasing system depends on the number of concurrent users (more users means higher CPU consumption).
Furthermore, we recommend that you tune the TCP/IP parameters of load releasing systems.
The Exec Agents can be configured so that Email and SMS Alert Notifications are released during the execution of a load test job.
The corresponding Alert Configuration Menu can be called from the Personal Settings Menu. The Alert Configuration Menu will create a file named AlertConfig.xml, located in the ZebraTester installation directory and contains the configuration data for all alert devices and all alert notifications. If no AlertConfig.xml file exists on an Exec Agent, no alerts are released from this Exec Agent ¹. When a job is started on an Exec Agent, the Exec Agent tries to read this file, which means that the file can be created, updated, or deleted without the need to restart the corresponding Exec Agent.
Optionally, you may want an Exec Agent to use multiple client IP addresses during the load test to simulate users from different network locations. If a load balancer is placed in front of a web server cluster or web server farm, the load balancer will often route all HTTP/S requests of one client IP address to only one member of the web server cluster. This is because web applications use session cookies, whose context information is only stored in a particular cluster member's transient memory. The server-side SSL cache is usually handled by the cluster members and not by the load balancer. This load balancer functionality is called “IP stickiness,” representing the recording of client IP addresses inside the load balancer algorithms. This term has nothing to do with the sticky bit of Unix file systems.
If you encounter this situation, the load will appear on only one web server and will not be distributed across all web server cluster members. The solution to this load balancer behavior is to have the Exec Agent use multiple client IP addresses during the load test; each concurrent "user" will then have its own IP address - or, if more concurrent users are running than there are available local IP addresses, the local IP addresses will be shared across the concurrent users.
Warning: please contact your network administrator to get additional (free) IP addresses. An incorrect configuration of additional IP addresses without consulting the network administrator may impact several other computers of the same LAN, such that these other computers could lose their network connection due to IP address conflicts.
Additional load-releasing systems can be added by using the Network menu, which can be invoked from the Project Navigator:
In the upper-left part of the window, a list of currently defined Exec Agents is shown. The Exec Agent configuration can be changed by clicking on the corresponding icon. In the lower part of the window, additional Exec Agents can be defined, and/or existing Exec Agents can be modified. You must click the Refresh button in the upper-right corner of the window to add several Exec Agents.
Exec Agent Icons:
¹ As a further option, a specific alert configuration can be used for a particular load test program. In such a case, you first have to place a copy of the AlertConfig.xml file inside the Project Navigator directory where the load test program is stored. After that, you can manually edit the copied AlertConfig.xml file; you then have to ZIP it together with the load test program's compiled class (similar to the procedure required for using input files or Plugins). This has the effect that the program-specific alert configuration is automatically transmitted to the Exec Agent(s) and overrides the Exec Agents' default behavior. Note: in such a case, the copy of the AlertConfig.xml file is stored inside the job-specific directory on the Exec Agent.
The following Alert Conditions are supported:
If a Job cannot be started
At the start of a Job (information)
If an Internal Error occurs during the Execution of a Job
During the Execution of a Job at Periodic Intervals (configurable interval time in minutes)
At Every Interval (information)
If the Session Failure Rate is greater than a threshold in percent ¹
If the Average Response Time per Page is greater than a threshold in seconds ¹
If the Average Response Time of the Slowest Page is greater than a threshold in seconds ¹
At the End of a Job (information)
At the End of a Job: If the Session Failure Rate is greater than a threshold in percent
At the End of a Job: If the Average Response Time per Page is greater than a threshold in seconds
At the End of a Job: If the Average Response Time of the Slowest Page is greater than a threshold in seconds
¹ = The values for periodically checked alert conditions are calculated from the measurements collected within one interval. Repeated alerts are suppressed. A cancel notification is released if the measurement later falls below the threshold.
The Message Headlines for all Alert Notifications can be configured and support placeholders. The values of the placeholders are calculated at runtime and are replaced within the message headlines.
Generic Placeholders that can be used in every type of alert notification are:
{$timestamp}: The current date and time when the alert notification was created. Example: "01 Jun 2010 13:45:38 ECT"
{$generator}: The name of the Exec Agent (load generator) that released the alert notification.
{$jobId}: The job ID of the Exec Agent job.
{$programName}: The program name of the Exec Agent job.
During the Execution of a Job (Information at Every Interval) and at the End of a Job (Information):
{$sessionFailureRate}: The measured session failure rate in percent.
{$avResponseTimePerPage}: The measured average response time per page in seconds.
During the Execution of a Job and at the End of a Job: if the Session Failure Rate is greater than %
{$sessionFailureRate}: The measured session failure rate in percent.
{$sessionFailureRateLimit}: The configured threshold for the session failure rate in percent.
During the Execution of a Job and at the End of a Job: if the Average Response Time per Page is greater than seconds
{$avResponseTimePerPage}: The measured average response time per page in seconds.
{$avResponseTimePerPageLimit}: The configured threshold for the average response time per page in seconds.
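For example, a message headline for the session-failure-rate condition could combine several of these placeholders. The template text below is illustrative, not a default shipped with ZebraTester:

```
{$timestamp} {$generator} job {$jobId} ({$programName}): session failure rate {$sessionFailureRate}% exceeds limit {$sessionFailureRateLimit}%
```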
Select: selects an Exec Agent. Thereafter you may modify its configuration.
Delete: removes an Exec Agent from the configuration.
Test: tests the network connection to an Exec Agent; used to verify and debug access to the Exec Agent.
Duplicate Exec Agent: duplicates the definition of an Exec Agent.
Upload or Delete Exec Agent License Ticket: allows uploading a License Ticket to an Exec Agent, or removing an already uploaded License Ticket from an Exec Agent. Using this functionality requires special license tickets (further information).
Protect Access to Exec Agent: reconfigures an Exec Agent remotely so that other persons cannot access it. Or, if access protection has already been configured, the access protection can also be removed (provided you know the current user name and password of the Exec Agent).
Description: arbitrary name of the Exec Agent. The option list on the menu Execute Load Test and Jobs will display this name.
Host: TCP/IP address or DNS Hostname of the Exec Agent
Port: TCP/IP server port of the Exec Agent
Protocol: network protocol, applied for the internal communication from the Web Admin GUI to the Exec Agent:
plain: raw TCP/IP connection, using an outbound proxy is not supported.
HTTP: embeds the communication inside the HTTP protocol.
HTTPS: embeds the communication inside the encrypted HTTPS protocol.
Username: username for protected Exec Agents
Password: password for protected Exec Agents
Please leave all of these fields blank if you do not use an outbound proxy server.
Proxy Host: TCP/IP address or DNS Hostname of the outbound proxy
Proxy Port: TCP/IP port number of the outbound proxy
Proxy Username: username for authentication on the outbound proxy (may be optional)
Proxy Password: password for authentication on the outbound proxy (may be optional)
You can test the configuration and the accessibility of an Exec Agent by clicking the Test Network Connection to Exec Agent button within the list of Exec Agents (a functional “ping” of the Exec Agent):
Once an Exec Agent has been successfully installed and started on a remote system, it can be used transparently by selecting it as the load test executing host (input field: Execute Test from) in the Execute Load Test menu.
The first step to enable multiple IP addresses for an Exec Agent is to reconfigure the underlying Windows or Unix operating system, such that multiple local IP addresses are available. This can be done by assigning additional IP addresses to the same physical network interface.
You can configure multiple virtual IP addresses for the same network interface by executing the ifconfig command. The specific arguments for the ifconfig command depend on the Unix variant and operating system version (Linux, Solaris, Mac OS X ...). Please refer to your operating system manual to find out how to define virtual IP addresses on your system.
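For example, on Linux an additional virtual IP address might be added to the first Ethernet interface as follows. The interface name and addresses below are placeholders; the exact commands differ per Unix variant and require administrator privileges:

```shell
# Classic ifconfig syntax: define a virtual alias eth0:1
ifconfig eth0:1 192.168.1.101 netmask 255.255.255.0 up

# Modern Linux (iproute2) equivalent: add a secondary address to eth0
ip addr add 192.168.1.102/24 dev eth0
```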
Step 2: The second step is to assign these multiple IP addresses to the Exec Agent configuration. For the local host where the Web Admin GUI is running, the second step can be done by invoking the “Setup” menu inside the Project Navigator (gear-wheel icon in the top navigation).
For Remote Exec Agents, you must edit the file javaSetup.dat.
With a text editor, add the entry javaVirtualIpAddresses to the javaSetup.dat file (located inside the ZebraTester installation Directory) and list the IP addresses to the Exec Agent.
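A sketch of such an entry is shown below. The comma-separated value format and the addresses are assumptions for illustration; consult the Application Reference Manual for the exact syntax:

```
javaVirtualIpAddresses=192.168.1.101,192.168.1.102,192.168.1.103
```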
Important Note: when you start a load test, you must use the additional option -multihomed to specify that multiple IP addresses are to be used by the Exec Agents.
This option is also supported by Exec Agent clusters (load injector clusters), in which case each load-releasing cluster member (Exec Agent) uses its own configuration of client IP addresses.
Note: this menu can be called from the Project Navigator.
Several Exec Agents can be combined into a virtual Exec Agent Cluster, which allows executing very large load tests with an unlimited number of concurrent users. (Other load test products often call this functionality multiple "load injectors".)
Once an Exec Agent Cluster has been defined, the handling of such a cluster is, from the user's perspective, completely transparent: it is as if the job were started on a single Exec Agent. In fact, the Exec Agent Cluster splits and distributes a single test over an unlimited number of Exec Agents, each of which executes only a part of the load test. When the cluster job has completed, the results of all corresponding Exec Agents (cluster members) are automatically merged into one united result.
The distribution of the load per Exec Agent (distribution of concurrent users) can be indirectly controlled over the load factor which should represent the capacity/power of the Exec Agent computer. By clicking on the magnifier icon, you can manually modify the load factor - or alternatively apply a suggested value, based on a short internal performance test of the Exec Agent itself.
A (single) load test job can be executed as a "cluster job". This will split the test over several Exec Agents
Cluster jobs are splitting virtual users - depending on the capability of the Exec Agents (load factor)
Input files can be split - depending on the number of virtual users per Exec Agent
The same cluster job can run over a mixed collection of Windows and Unix systems
Once an Exec Agent Cluster has been defined, it can be used transparently by selecting it as the load test executing source (input field "Execute Test from") on the Execute Load Test menu.
It is not necessary that all cluster members have the same operating system time. Each time a cluster job is started, the cluster job controller automatically measures the time differences between the cluster members. These measured time differences will be automatically accounted for when the consolidated statistics data are merged.
If additional Exec Agents and/or clusters have been defined, you can select - when starting the test run - from which system or cluster the load test is to be released (input field: Execute Test from). The succeeding steps inside the Web Admin GUI are then the same as for executing the load test locally.
The (cluster) load test result data are automatically merged to a united result
The united result data can be expanded to examine the individual result of each cluster member (Exec Agent)
Running the Web Admin GUI and the Cluster Job Controller is not required during the load test execution
Live statistics over all cluster members, and detailed statistics for each cluster member, can be displayed during the load test execution
Several cluster jobs (over the same or different clusters) can run concurrently at the same time
If several Exec Agents have been defined, they can be combined to form a load-releasing cluster. You can also define more than one cluster by using some of the same Exec Agents in several different clusters.
After an arbitrary name of the cluster has been entered, the cluster members (Exec Agents) can be added to the cluster by clicking on the grey arrows in the list of Available Exec Agents.
In the above screenshot, we've clicked on the magnifier icon to show that Cluster 1 has two members, Sun Fire V240 and Test PC II. The Local Exec Agent can be added to Cluster 1 by clicking on the grey arrow next to the Agent name.
To get a suggestion for the load factor of a particular Exec Agent, you can click on the icon within the list of all defined Exec Agents. It is, however, recommended that you click several times on the icon in order to get a stable result. Even so, this result may not accurately reflect the power of the computer system.
By clicking on the magnifier icon of a cluster member, the Load Factor of this member can be modified. The load factor controls how many users will be assigned to this cluster member when the load test is distributed across the cluster members. The load factor by itself is an abstract value, meaning that the distribution of the users is made based on the ratio between the load factors. If you mix strong and weak systems within the same cluster, it is recommended that you give a higher load to the stronger systems than to the weaker systems.
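The ratio-based distribution described above can be sketched as follows. This is a minimal, hypothetical illustration of proportional user assignment (function name and rounding scheme are assumptions, not ZebraTester's actual algorithm):

```python
# Hypothetical sketch: distribute concurrent users across cluster
# members in proportion to their load factors (largest-remainder rounding).
def distribute_users(total_users, load_factors):
    """Split total_users proportionally to load_factors; counts sum to total_users."""
    total_factor = sum(load_factors)
    shares = [total_users * f / total_factor for f in load_factors]
    counts = [int(s) for s in shares]        # floor of each proportional share
    remainder = total_users - sum(counts)    # users not yet assigned
    # hand leftover users to the members with the largest fractional parts
    order = sorted(range(len(shares)), key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts

# A strong member (load factor 100) receives twice the users
# of a weak member (load factor 50):
print(distribute_users(300, [100, 50]))  # [200, 100]
```

Because only the ratio between load factors matters, factors of 100/50 and 2/1 produce the same distribution.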
Optionally, you may want an Exec Agent to use multiple client IP addresses during the load test in order to simulate users from different network locations.
The Exec Agents can be configured in such a way that Email and SMS Alert Notifications are released during the execution of a load test job.
Introduction
Load Test (*.class) files.
Execute Load Test Steps in the Project Navigator
Execution of a Load Test is started from the Project Navigator. The icon to the right of the *.class files will have a red arrow.
The icon to the right of the Job Template (*.xml) files will have a green arrow and will directly open a Start Load Test Job, which is covered separately.
After the Project Navigator has called the load test program, you must enter the test input parameters for the test run (a single execution of the load test program is also called “test run”).
The most important parameters are the Number of Concurrent Users and Load Test Duration. For short-duration Load Tests, Apica recommends 100% URL sampling. We also recommend entering a small comment about the test run into the Annotation input field.
If evaluating for browser performance, please select any Browser Emulation and Caching Options needed.
If you have specified that a single Exec Agent (rather than an Exec Agent Cluster) executes the load test program, the load test program is transmitted to the local or remote Exec Agent, and a corresponding load test job - with a job number - is created locally within the Exec Agent. The job is now in the state "configured"; that is, ready to run, but not yet started.
Hint: each Exec Agent always executes load test jobs as separate background processes and can execute more than one job at the same time. The option Display Real-Time Statistic only means that the GUI opens an additional network connection to the Exec Agent, which reads the real-time data directly from the corresponding executed load test program's memory space.
Click the Start Load Test Job button to start the job.
If you have de-selected the checkbox Display Real-Time Statistic, the window will close after a few seconds; however, you can - at any time - access the real-time statistic data, or the result data, of the job by using the Jobs menu, which can be called from the Main Menu and also from the Project Navigator.
Alternatively, the load test program can also be scheduled to be executed at a predefined time. However, the corresponding Exec Agent process must be available (running) at the predefined time because the scheduling entry is stored locally inside the Exec Agent jobs working directory, which the Exec Agent itself monitors. Especially if you have started the local Exec Agent implicitly by using the ZebraTester Console - AND if the scheduled job should run on that local Exec Agent, you must keep the ZebraTester Console Window open so that the job will be started ¹.
¹ This restriction can be avoided by installing the Exec Agent as a Windows Service or as a Unix Daemon (see Application Reference Manual).
Note: if you close the window without clicking on the Start Load Test Job button, the job remains in the state "configured" or “scheduled.” Afterward, you can use the Load Test Jobs menu to start or delete the job or schedule or cancel this job schedule.
Real-time statistics shown in this window are updated every 5 seconds for as long as the load test job is running.
Note: closing this window will not stop the load test job. If you close this window, you can later acquire the load test result or return to this dialogue (if the load test is still running) by clicking on the Jobs icon in the Main Menu or the Project Navigator window.
You may abort a running load test job by clicking on the Job Actions button, which includes:
Abort Job: Aborting a job will take a few seconds because the job writes out the statistic result file (*.prxres) before it terminates.
Suspend Job
Increase Users
Decrease Users
More detailed measurement data are available by clicking on the Detailed Statistic button. In particular, an overview of the current execution steps of the simulated users is shown:
The most relevant measured values of the URLs are shown for the selected page by clicking on the page's magnifier icon.
Using this menu, you can also display and analyze error snapshots by clicking on the magnifier icon next to the failure counter. In this way, you can begin analyzing errors immediately as they occur - during the running load test.
By clicking on a URL, the corresponding URL Response Time Diagram is shown.
All of these detailed data, including all error data, are also stored inside the final result file (.prxres), which can be accessed when the load test job has been completed.
Description: displays during the load test (at real-time) a diagram per web page about the measured response times.
Please consider that possibly only a fraction of the response times are shown, depending on the Additional Sampling Rate per Page Call, which was selected when the load test was started. For example, only every fifth response time is shown if the Additional Sampling Rate per Page Call was set to 20%.
Input Fields
Description: displays during the load test (at real-time) the response times of a URL and also a summary diagram about the measured errors of the URL.
Please consider that possibly only a fraction of the response times are shown, depending on the Additional Sampling Rate per URL Call, which was selected when the load test was started. For example, only every fifth response time is shown if the "Additional Sampling Rate per URL Call" was set to 20%.
Info Box / Measured Values
All values in this info box are calculated over all completed calls of the URL, measured since the load test was started. These values are always "accurately" measured, which means that they do not depend on the value chosen for the "Additional Sampling Rate per URL Call."
URL Errors / Real-Time Profile of Error Types: This diagram shows an overview of which kinds of errors occurred for the URL, and at which time, measured since the load test was started. This "basic error information" is always "accurately" measured, independently of the value chosen for the "Additional Sampling Rate per URL Call", and is captured in every case, even if no more memory is left to store full error snapshots.
Description: Displays during the load test (at real-time) an overview of all occurred errors.
Failure Diagrams: The first diagram shows an overview of which kinds of errors occurred, counted over all URLs and measured since the load test was started. This "basic error information" is always captured, even if no more memory is left to store full error snapshots.
The succeeding diagrams, shown per web page, provide only information about the time at which errors occurred. The tables on the right side of the diagrams show the number of errors that occurred on the URLs of the web page. You can click on an error counter to show the error detail information (error snapshots) for the corresponding URL.
First Error Snapshots: Displays a list of the first errors (from the start of the load test). By clicking on a magnifier icon, the corresponding error detail information (error snapshot) is shown.
Latest Error Snapshots: Displays a list of the latest (newest) errors. By clicking on a magnifier icon, the corresponding error detail information (error snapshot) is shown.
Error Snapshot Memory: % used +: By clicking on the + (plus sign), you can increase the amount of memory available to store error snapshots. Please note: when the memory is already 50% or more used, no additional error snapshots for non-fatal errors are captured. This means that increasing the memory may re-enable capturing for non-fatal errors.
Description: displays statistical overview diagrams (in real-time) about a load test job.
Note: The values shown in the diagrams are captured at regular intervals, depending on the Statistic Sampling Interval, which was selected when the load test was started.
Real-Time Comments
Description: supports entering comments during the load test execution.
Real-time comments are notes or Tips, which you can enter during the load test execution:
These comments are later displayed inside all time-based diagrams of the load test result detail menu :
You can also modify, delete, or add real-time comments before you generate the PDF report. However, retroactively entered real-time comments are not permanently stored inside the result data.
After the load test job has been completed, the statistic results file (*.prxres) is stored in the local or remote Exec Agent's job directory. To access this results file, you must transfer it back to the (local) Project Navigator directory from which the load test program was started.
This menu shows all of the load test job's files; however, only the statistics results file is usually needed, and it is already selected. The "*.out" file contains debug information, and the "*.err" file is either empty or contains internal error messages from the load test program itself.
By clicking on the Acquire Selected Files button, all selected files are transferred (back) to the (local) Project Navigator directory.
If the checkbox Load *.prxres File on Analyze Load Test Menu is selected, the statistics results file is also loaded into the memory area of the Analyze Load Tests menu, where the statistics and diagrams of the measured data can be shown, analyzed, and compared with results of previous test runs.
If you have specified that an Exec Agent Cluster executes the load test program, the load test program is transmitted to the local cluster job controller, coordinating all cluster members (Exec Agents). The cluster job controller creates a cluster job and allocates a cluster job number. The cluster job is now in the state “configured” (ready to run, but not yet started).
The number of concurrent users is automatically distributed across the cluster members, depending on each computer system's capability, called its "load factor."
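The proportional distribution by load factor can be illustrated with a short sketch. This is a hypothetical helper for illustration only; ZebraTester's internal distribution algorithm is not documented here, and the rounding strategy (largest remainder) is an assumption.

```python
def distribute_users(total_users, load_factors):
    """Distribute a total user count across cluster members in
    proportion to each member's load factor (illustrative sketch;
    the rounding strategy is an assumption)."""
    total_factor = sum(load_factors)
    # Provisional proportional shares, rounded down.
    shares = [total_users * f // total_factor for f in load_factors]
    # Hand out the remaining users to the members with the largest
    # fractional remainders so the shares sum to the total.
    by_remainder = sorted(
        range(len(load_factors)),
        key=lambda i: (total_users * load_factors[i]) % total_factor,
        reverse=True,
    )
    for i in by_remainder[: total_users - sum(shares)]:
        shares[i] += 1
    return shares
```

For example, 100 users across three members with load factors 1, 1, and 2 would yield shares of 25, 25, and 50 users.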
If the load test program uses Input Files, you are asked, for each Input File, whether you wish to split its content. This can be useful, for example, if the Input File contains user accounts (usernames/passwords) but the web application does not allow duplicate logins. In this case, each cluster member must use different user accounts. By clicking on the corresponding magnifier icon, you can view how the Input File data would be distributed across the cluster members. If you do not use the split functionality, each cluster member receives a complete copy of the Input File.
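The split behavior can be sketched as follows. Note that the round-robin scheme below is an assumption for illustration; ZebraTester may split records block-wise instead.

```python
def split_input_file(records, member_count):
    """Split input-file records (e.g. username/password lines) across
    cluster members so that no two members share a record -- a sketch
    of the 'split Input File' option (round-robin is an assumption)."""
    parts = [[] for _ in range(member_count)]
    for i, record in enumerate(records):
        parts[i % member_count].append(record)  # round-robin distribution
    return parts
```

With five accounts and two cluster members, each member receives a disjoint subset of the accounts, so duplicate logins cannot occur.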
The distribution of users across the cluster members can also be modified manually; however, this is useful only if a cluster member is unavailable (marked with a light red background), in which case the cluster job cannot be started. You can then assign the unavailable cluster member's users to other cluster members and try to start the cluster job again. This redistribution may take a few seconds to complete.
Alternatively, the load test program can also be scheduled to be executed at a predefined time. However, the local Job Controller process must be available (running) at the predefined time, because the scheduling entry for the cluster job is stored inside the Job Controller working directory, which the Job Controller itself monitors. If you have started the Job Controller implicitly by using the ZebraTester Console, you must keep the ZebraTester Console Window open so that the cluster job will be started.¹
¹ This restriction can be avoided by installing the local Job Controller as a Windows Service or as a Unix Daemon.
After the cluster job has been scheduled, you can leave this menu by closing the window; you can later use the Jobs menu to cancel or modify the schedule of this job.
The real-time statistics of a cluster job show the most important measured values, similar to the values shown in the Real-Time Statistic of Exec Agent Jobs. The cluster job itself contains Exec Agent jobs that the local cluster job controller has created. By clicking on a cluster member's magnifier icon, the corresponding Exec Agent job's real-time statistics can be displayed in its own window.
If you want to abort the cluster job, you must do it at this level, as this will also abort all Exec Agent jobs. Aborting a single Exec Agent job will not interrupt the cluster job.
The same applies to the statistics result file (*.prxres), which must be accessed at this level.
The statistics results file of a cluster job contains the consolidated (merged) measurements for all cluster members. The calculations for merging the results are extensive; therefore, it may take up to 60 seconds to show the result file. The individual measurements of the Exec Agents are embedded separately inside the same consolidated result file.
The consolidated statistics results file is marked with a grey or blue background (depending on your ZebraTester version) and is already selected for you.
Click the Acquire Selected Files button to transfer the selected files of the load test job back to the local Project Navigator directory.
By clicking on the magnifier icon, you can access the "*.out" and "*.err" files of the corresponding Exec Agent jobs.
Usually, you would work inside the Analyze Load Tests menu with the consolidated measurement results only. However, it is also possible to expand the measurement results to access the results of each Exec Agent job:
This feature can be used to check whether all cluster members have measured approximately the same response times; however, variations in a range of ±20% or more may be normal:
All load test programs started from the Project Navigator are always executed as "batch jobs" by an (external) Exec Agent process or by an Exec Agent Cluster. This means that you are not required to wait for the completion of a load test program in the "Execute Load Test" window: you can close the "Execute Load Test" window at any time and later check the result, or the current progress, of all load test jobs by using this menu.
If a load test job has been completed, you can acquire the corresponding statistics result file (*.prxres). If a load test job is still running, you are directed to the job's temporary live-statistics window.
Several load test jobs can be started from the GUI at the same time. However, the GUI does not have the ability to automatically run sequences of load test jobs, synchronize load test jobs, or automatically start several jobs with a single mouse click.
To perform these kinds of activities, you must program load test job scripts written in the “natural” scripting language of your operating system (Windows: *.bat files, Unix: *.sh, *.ksh, *.csh … files). Inside these scripts, the PrxJob utility is used as the interface to the ZebraTester system. When the Windows version of ZebraTester is installed, the installation kit creates the directory ScriptExamples within the Project Navigator, and this directory contains some example scripts.
The PrxJob utility allows you to start load test jobs on the local as well as on a remote system. It also provides the capability to create cluster jobs, synchronize jobs, obtain the current state of jobs, and acquire the statistics result files of jobs. More information about the PrxJob utility can be found in the Application Reference Manual, Chapter 4.
Whenever a load test is started, an additional job definition template file is stored in the actual Project Navigator directory (in XML format). Such a job definition template file contains all configuration data needed to rerun the same load test job.
If you click the corresponding button of a job definition template (XML) file in the Project Navigator, the load test job, including all of its input parameters, is automatically transferred to the Exec Agent or the Exec Agent Cluster and is immediately ready to run.
In the screenshot below, the job was preconfigured to run from a cluster of defined servers with a predefined set of .
Additionally, if you wish to make several load test jobs ready to run simultaneously (with only one mouse click), you can zip several templates into one zip archive. After this, click the corresponding button of the zip archive:
Minimum Loop Duration per User. Enabling this option sets a minimum time that must elapse, covering all page breaks and URL calls executed in the iteration, before the next iteration starts. If the iteration completes earlier than the pacing time, the user remains inactive until the pacing time has been reached.
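The pacing behavior described above can be sketched in a few lines. This is a minimal illustration of the concept, not ZebraTester's implementation:

```python
import time

def run_loop_with_pacing(iteration, pacing_seconds):
    """Run one iteration of the recorded session and sleep out the
    remainder of the pacing interval, so each loop takes at least
    `pacing_seconds` (sketch of 'Minimum Loop Duration per User')."""
    start = time.monotonic()
    iteration()                       # execute the recorded session once
    elapsed = time.monotonic() - start
    if elapsed < pacing_seconds:      # finished early: user stays inactive
        time.sleep(pacing_seconds - elapsed)
    return max(elapsed, pacing_seconds)
```

If the session takes 2 seconds and the pacing time is 10 seconds, the simulated user is inactive for the remaining 8 seconds before the next loop begins.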
Startup Delay per User
The delay time to start an additional concurrent user (startup ramp of load). Used only at the start of the load test during the creation of all concurrent users.
Max. Network Bandwidth per User
The network bandwidth limitation per simulated user for the downlink (speed of the network connection from the webserver to the web browser) and the uplink (speed of the network connection from the web browser to the webserver). By choosing a lower value than "unlimited," this option allows simulating web users with a slow network connection.
Request Timeout per URL
The timeout (in seconds) per single URL call. If this timeout expires, the URL call will be reported as failed (no response from the webserver). Depending on the corresponding URL's configured failure action, the simulated user will continue with the next URL in the same loop, or it will abort the current loop and then continue with the next loop.
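The two failure actions described above can be sketched as follows. The labels "continue" and "abort-loop" are hypothetical names for the two documented behaviors, and `call` stands in for the actual URL call:

```python
def execute_loop(urls, call):
    """Execute one loop of URL calls, honoring each URL's configured
    failure action when a call times out. `call` returns False to
    model an expired request timeout (illustrative sketch)."""
    executed = []
    for url, failure_action in urls:
        ok = call(url)
        executed.append((url, ok))
        if not ok and failure_action == "abort-loop":
            break                     # abort this loop; the next loop follows
    return executed
```

With a failing URL configured as "abort-loop", the remaining URLs of the loop are skipped; with "continue", the user proceeds to the next URL in the same loop.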
Max. Error-Snapshots
Limits the maximum number of error snapshots taken during load test execution. Either the maximum memory used to store error snapshots can be configured (recommended; for cluster jobs, the value applies over all cluster members), or the maximum number of error snapshots per URL can be configured (not recommended for cluster jobs; the value applies per Exec Agent).
Statistic Sampling Interval
The statistic sampling interval during the load test, in seconds (interval-based sampling). Used for time-based overall diagrams such as the measured network throughput.
If you run a load test over several hours, you should increase the statistic sampling interval, for example to 10 minutes (600 seconds), to save memory. If the load test runs only a few minutes, you may decrease the statistic sampling interval.
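The memory impact is simple arithmetic: the number of stored samples per diagram grows with the test duration divided by the sampling interval. A quick illustrative helper:

```python
def sample_count(test_duration_s, sampling_interval_s):
    """Number of interval-based samples stored for one time-based
    diagram; a longer interval means fewer samples and less memory
    (illustrative arithmetic only)."""
    return test_duration_s // sampling_interval_s
```

A 4-hour test sampled every 600 seconds stores only 24 samples per diagram, whereas a 5-minute test sampled every 5 seconds stores 60.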
Additional Sampling Rate per Page Call
Captures the measured response time of a web page each time a simulated user calls it (event-based sampling). Used to display the response time diagrams in real-time and in the Analyse Load Test Details menu.
For endurance tests over several hours, Apica strongly recommends setting the sampling rate for web pages between 1% and 5%. For shorter tests, a 100% sampling rate is recommended.
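A percentage-based sampling rate can be modeled as a per-call random decision. This is a sketch of the concept only; ZebraTester's actual selection mechanism is not documented here:

```python
import random

def should_sample(rate_percent, rng=random):
    """Decide whether to record an event-based sample for one page
    call, given a sampling rate in percent (illustrative; the real
    selection mechanism may differ)."""
    return rng.random() * 100.0 < rate_percent
```

At a 5% rate, roughly 1 in 20 page calls contributes a sample, which keeps memory usage low over a multi-hour endurance test; at 100%, every call is recorded.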
Additional Sampling Rate per URL Call
Captures the measured response time of a URL each time a simulated user calls it (event-based sampling). Used to display the response time diagrams in real-time and in the Analyse Load Test Details menu.
For endurance tests over several hours, Apica strongly recommends either disabling the sampling rate for URL calls or setting it to 1% or 2%. We recommend a 100% sampling rate for shorter tests.
In addition to capturing the URL calls' response time, further data can be captured using one of the Add options.
Hint: these additional URL data can be displayed and/or exported in the form of an HTML table when the test run has been completed.
--- (recommended): No additional data are captured.
Performance Details per Call: Additionally, the TCP/IP socket open time (network establish time), the request transmit time, the response header wait time, the response header receive time, and the response content receive time of URL calls are captured.
TCP/IP Client Data: Additionally, the (load generator) TCP/IP client address, the TCP/IP client port, the network client-socket create date, the reuse count of the client socket (keep-alive), and the SSL session ID (for encrypted connections only) are captured. This option also includes the option "Performance Details per Call."
Resp. Throughput Chart per Call: Additionally, in-depth throughput data of the received HTTP response content are captured and displayed as a chart (stream diagram of the response). This option also includes the option "Performance Details per Call."
Request Headers: Additionally, the request headers of all URL calls are captured. This option also includes the option "Performance Details per Call."
Request Content: Additionally, the request content data of all URL calls are captured. This option also includes the option "Performance Details per Call."
Request Headers & Content: Additionally, the request headers and request content data (form data) of all URL calls are captured. This option also includes the option "Performance Details per Call."
Response Headers: Additionally, the response headers of all URL calls are captured. This option also includes the option "Performance Details per Call."
Response Headers & Content: Additionally, the response headers and the response content data of all URL calls are captured. This option also includes the options "Performance Details per Call" and "Resp. Throughput Chart per Call".
All - But w/o Response Content: Additionally, the request headers, the request content data, and the response headers of all URL calls are captured. This option also includes the options "Performance Details per Call" and "Resp. Throughput Chart per Call".
All - Full URL Snapshots: Additionally, all data of the URL calls are captured. This option also includes the options "Performance Details per Call" and "Resp. Throughput Chart per Call".
Debug Options
Choosing any debug option (other than "none") causes additional information to be written to the *.out file of the load test job. The following debug options can be configured:
none (recommended): The default value. Note that error snapshots are still taken, so special debug options are normally not necessary to analyze a measured error.
debug failed loops: Writes the executed steps of all failed loops to the *.out file of the load test job.
debug loops: Writes the executed steps of all loops to the *.out file of the load test job.
debug headers & loops: Additionally writes debug information about all transmitted and received HTTP headers to the *.out file of the load test job.
debug content & loops: Additionally writes debug information about all transmitted and received content data to the *.out file of the load test job (without binary content data such as images).
debug cookies & loops: Additionally writes debug information about all transmitted and received cookies to the *.out file of the load test job.
debug keep-alive & loops: Additionally writes debug information about the behavior of re-used network connections to the *.out file of the load test job.
debug SSL handshake & loops: Additionally writes debug information about the SSL protocol and the SSL handshake to the *.out file of the load test job.
Additional Options
Several additional options for executing the load test can be combined, separated by a blank character. The following additional options can be configured.
-multihomed: Instructs all Exec Agents to use multiple local IP addresses when executing a load test. This option allows simulating traffic from more than one IP address per Exec Agent and is only considered if the Exec Agent supports a multihomed network configuration (several IP addresses assigned to the same host). The first step in using this option is to configure multiple IP addresses for the same host in the Windows or Unix operating system. The second step is to assign these IP addresses to the Exec Agent configuration. For the localhost, where the Web Admin GUI is running, the second step can be done by calling the menu inside the Project Navigator (gear-wheel icon at the top navigation). For remote Exec Agents, you have to edit the file javaSetup.dat, located inside the ZebraTester installation directory, by modifying the value of the entry javaVirtualIpAddresses: enter all IP addresses of the host on the same line, separated by comma characters. The effect of this option is that each concurrent user uses its own client IP address during the load test. If fewer IP addresses are available than concurrent users are running, the IP addresses are shared across the users.
-ipperloop: When combined with the option -multihomed, causes a separate local IP address to be used for each executed loop rather than for each simulated user. This option is only considered if the option -multihomed is also used.
-tconnect <seconds>: Sets a timeout in seconds for opening a TCP/IP socket connection to the Web server. If the time is exceeded, the URL call is aborted and marked as failed. Note that the value must be greater than zero but should be less than "Request Timeout per URL."
-dnshosts <file-name>: Causes the load test job to use its own DNS hosts file to resolve hostnames, rather than the underlying operating system's hosts file. Note that you have to ZIP the hosts file together with the compiled class of the script. To automate the ZIP, it's recommended to declare the hosts file as an external resource (w/o adding it to the CLASSPATH).
-dnstranslation <file-name>: Causes the load test job to use a DNS translation file, a text file containing a translation between two DNS names. If the first DNS name in the file matches the DNS name passed to the resolver, then the second DNS name is used to resolve the IP address. The first DNS name can also contain one or more wildcard characters ('*' = wildcard for multiple characters, '?' = wildcard for a single character). Lines, or the remainder of a line, can be commented out using the hash character '#'.
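The DNS translation file format described above can be illustrated with a small parser and matcher. This is a sketch of the documented format, not ZebraTester's resolver; Python's `fnmatch` is used here as an approximation of the '*' and '?' wildcard semantics:

```python
import fnmatch

def load_translations(text):
    """Parse a DNS translation file: each non-comment line maps a
    (possibly wildcarded) DNS name to a replacement name; '#' starts
    a comment (sketch of the documented file format)."""
    rules = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments
        if line:
            pattern, replacement = line.split()
            rules.append((pattern, replacement))
    return rules

def translate(hostname, rules):
    """Return the replacement name for the first matching rule,
    supporting '*' and '?' wildcards; otherwise the hostname is
    returned unchanged."""
    for pattern, replacement in rules:
        if fnmatch.fnmatch(hostname, pattern):
            return replacement
    return hostname
```

For example, a file containing the line `*.example.com test.example.com` would redirect every subdomain of example.com to test.example.com during name resolution.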
SSL: specifies which HTTPS/SSL protocol version should be used:
All: Automatic detection of the SSL protocol version. ZebraTester prefers the TLS 1.3 or TLS 1.2 protocol, but if the Web server does not support this, TLS 1.1, TLS 1.0, or SSL v3 is used. This is the normal behavior that is implemented in many Web browser products.
v3: Fixes the SSL protocol version to SSL v3.
TLS: Fixes the SSL protocol version to TLS 1.0.
TLS11: Fixes the SSL protocol version to TLS 1.1.
TLS12: Fixes the SSL protocol version to TLS 1.2.
TLS13: Fixes the SSL protocol version to TLS 1.3.
Browser Emulation
User-Agents and Caching
User-Agent Selection: This option is used to create a custom user agent string or select a user agent from the available list.
Browser Cache: This option emulates the cache setting of a real browser.
Check for newer versions of stored pages every time: when enabled, ZebraTester will check for later versions of the specified URL than those stored in the cache.
Annotation
Enter a short comment about the test run, such as purpose, current web server configuration, and so on. This annotation will be displayed on the result diagrams.
Abort Increase/Decrease Users
Extend Test Duration
Reduce Test Duration
Av. Response Header Wait Time
The average time spent waiting for the first byte of the web server response (header), measured from the time the request was (completely) transmitted to the webserver.
Av. Response Header Receive Time
The average time for receiving the remaining data of the HTTP response header, measured from the time the first byte of the response header was received.
Av. Response Content Receive Time
the average time for receiving the response content data, for example, HTML data or the data of a GIF image.
Average Response Time
The average response time for this URL.
The total network traffic which is generated by this load test job, measured in megabits per second.
Displays the TCP/IP address (remote computer) from which the job has been initiated.
Network bandwidth limitation per concurrent user in kilobits per second for the uplink (web browser to the webserver)
sampling <seconds>
The statistic sampling interval in seconds (interval-based sampling). Used for time-based overall diagrams, such as the measured network throughput.
percpage <percent>
Additional sampling rate in percent for response times of web pages (event-based sampling, each time when a web page is called)
percurl <percent>
Additional sampling rate in percent for response times of URL calls (event-based sampling, each time when a URL is called)
maxerrsnap <number>
Max. number of error snapshots per URL (per Exec Agent), 0 = unlimited
maxerrmem <megabytes>
Max. memory in megabytes which can be used to store error snapshots, -1 = unlimited
setuseragent "<text>"
Replaces the recorded value of the HTTP request header field User-Agent with a new value. The new value is applied for all executed URL calls.
nostdoutlog
Disables writing any data to the *.out file of the load test job. Note that the *.out file is nevertheless created but contains zero bytes.
dfl
Debug failed loops
dl
Debug loops
dh
Debug headers & loops
dc
Debug content & loops
dC
Debug cookies & loops
dK
Debug keep-alive for re-used network connections & loops
dssl
Debug information about the SSL protocol and the SSL handshake & loops
multihomed
Forces the Exec Agent(s) to use multiple client IP addresses
ipperloop
When combined with the option -multihomed, causes a separate local IP address to be used for each executed loop rather than for each simulated user. This option is only considered if the option -multihomed is also used.
ssl <version>
Use fixed SSL protocol version: v3, TLS, TLS11 or TLS12
sslcache <seconds>
The timeout of SSL cache in seconds. 0 = cache disabled
nosni
Disable support for TLS server name indication (SNI)
dnshosts <file-name>
Causes the load test job to use its own DNS hosts file to resolve hostnames, rather than the underlying operating system's hosts file. Note that you have to ZIP the hosts file together with the load test program's compiled class. To automate the ZIP, it's recommended to declare the hosts file as an external resource (w/o adding it to the CLASSPATH).
dnssrv <IP-name-server-1>[,<IP-name-server-N>]
Causes the load test job to use specific (own) DNS server(s) to resolve hostnames, rather than the DNS library of the underlying operating system.
dnsenattl
Enable consideration of DNS TTL by using the received TTL-values from the DNS server(s). This option cannot be used in combination with the option -dnsperloop.
dnsfixttl <seconds>
Enable DNS TTL by using a fixed TTL-value of seconds for all DNS resolves. This option cannot be used in combination with the option -dnsperloop.
dnsperloop
Perform new DNS resolves for each executed loop. All resolves are stable within the same loop (no consideration of DNS TTL within a loop). This option cannot be used in combination with the options -dnsenattl or -dnsfixttl.
dnsstatistic
Causes statistical data about DNS resolutions to be measured and displayed in the load test result, using ZebraTester's own DNS stack on the load generators. Note: there is no need to use this option if any other, more specific DNS option is enabled, because all other DNS options also implicitly cause statistical data about DNS resolutions to be measured. If you use this option without any other DNS option, the (own) DNS stack on the load generators will communicate with the operating system's default configured DNS servers, but without considering the "hosts" file.
tz <value>
Time zone (see Application Reference Manual)
annotation <text>
Comment about the test-run
Startup delay per user in milliseconds
downlinkBandwidth
Downlink bandwidth per user in kilobits per second (0 = unlimited)
uplinkBandwidth
Uplink bandwidth per user in kilobits per second (0 = unlimited)
requestTimeout
Request timeout per URL call in seconds
maxErrorSnapshots
Limits the number of error snapshots taken during load test execution (0 = unlimited). Negative value: maximum memory in megabytes used to store all error snapshots, counted over all Exec Agents (recommended). Positive value: maximum number of error snapshots per URL, per Exec Agent (not recommended).
statisticSamplingInterval
Statistic sampling interval in seconds
percentilePageSamplingPercent
Additional sampling rate per Web page in percent (0..100)
percentileUrlSamplingPercent
Additional sampling rate per URL call in percent (0..100)
percentileUrlSamplingPercentAddOption
Additional URL sampling options per executed URL call (numeric value):
0: no options
1: all URL performance details (network connect time, request transmit time, …)
2: request header
3: request content (form data)
4: request header & request content
5: response header
6: response header & response content
7: all, but without response content
8: all (full URL snapshot)
debugOptions
Debug options (string value):
"-dl": debug loops (including var handler)
"-dh": debug headers & loops
"-dc": debug content & loops
"-dC": debug cookies & loops
"-dK": debug keep-alive & loops
"-dssl": debug SSL handshake & loops
additionalOptions
Additional options (string)
sslOptions
SSL/HTTPS options (string value):
"all": automatic SSL protocol detection (TLS preferred)
"tls": SSL protocol fixed to TLS
"v3": SSL protocol fixed to v3
"v2": SSL protocol fixed to v2
testRunAnnotation
Annotation for this test-run (string)
userInputFields
Label, variable name, and the default value of User Input Fields
Field
Description
Save as template
Stores all load test input parameters additionally inside an XML template. Later, this template can be used to rerun (repeat) the same load test.
Execute Test From
Selects the Exec Agent or the Exec Agent Cluster from which the load test will be executed.
Apply Execution Plan
Optionally, an Execution Plan can be used to control the number of users during the load test. The dropdown list shows the titles of all formally valid Execution Plan files (*.exepl files) located in the current Project Navigator directory. Note that the titles of invalid Execution Plan files are not shown. If an Execution Plan is selected, the following input parameters are disabled: Number of Concurrent Users, Load Test Duration, Max. Loops per User, and Startup Delay per User.
Number of Concurrent Users
The number of simulated, concurrent users.
Load Test Duration
The planned duration of the load test job. If this time has elapsed, all simulated users will complete their current loop (repetition of web surfing session) before the load test ends. Thus the load test often runs a little bit longer than the specified test duration.
Max. Loops per User
This limits the number of web surfing session repetitions (loops) per simulated user. The load test stops if the limit has been reached for each simulated user.
Note: this parameter can be combined with the parameter "Load Test Duration". Whichever limit is reached first stops the load test.
Item
Description
<Exec Agent Name> or <Cluster Name>
The name of the Exec Agent, or of the Exec Agent Cluster, that executes the load test job.
Job <number>
Unique job ID (unique per Exec Agent, or unique cluster job ID).
Real-Time Comment
If real-time comments are entered during test execution, these comments are later displayed inside all time-based diagrams of the load test result detail menu.
Job Parameter
The name of the load test program and the program arguments - test input parameter.
Diagram
Description
Web Transaction Rate
The Web Transaction Rate Diagram shows the actual number of (successfully) completed URL calls per second, counted over all simulated users. By clicking on this diagram, the Response Time Overview Diagrams are shown.
Total Passed URL Calls - The total number of passed URL calls since the load test job was started.
Total Failed URL Calls - The total number of failed URL calls since the load test job was started.
HTTP Keep-Alive Efficiency (%) - The efficiency, in percent, of how often a network connection to the webserver was successfully re-used instead of a new network connection being created. This (floating) average value is calculated since the load test job was started.
AV Web Trans. Rate (URL calls/sec) - The (floating) average number of (successfully) completed URL calls per second, calculated since the load test job was started.
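The keep-alive efficiency in the legend above follows directly from its definition: the share of calls that re-used an existing connection relative to all calls. A quick worked sketch:

```python
def keep_alive_efficiency(reused, opened):
    """HTTP keep-alive efficiency in percent: how often an existing
    network connection was re-used instead of a new one being opened
    (worked example of the definition above)."""
    total = reused + opened
    return 100.0 * reused / total if total else 0.0
```

For example, if 90 URL calls re-used a connection and 10 calls opened a new one, the keep-alive efficiency is 90%.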
Session Failures / Ignored Errors
The Session Failures / Ignored Errors Diagram shows the actual number of non-fatal errors (yellow bars) and the number of fatal errors (red bars = failed sessions) counted over all simulated users.
Total Passed Loops - The total number of passed loops (repetitions of web surfing sessions) since the load test was started.
Total Failed Loops - The total number of failed loops (repetitions of web surfing sessions) since the load test was started.
Σ User's Think Time per Loop (sec): The total user's think time in seconds for one loop per simulated user.
Session Time per Loop (sec): The average session time for one loop per simulated user. This value is the sum of "the average response time of all URLs and all user's think times" per completed loop.
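The session time per loop defined above is a plain sum, which a one-line sketch makes concrete:

```python
def session_time_per_loop(url_response_times_s, think_times_s):
    """Session time for one loop: the sum of the response times of
    all URLs plus all of the user's think times (worked example of
    the definition above)."""
    return sum(url_response_times_s) + sum(think_times_s)
```

For example, three URLs with response times of 1.0, 2.0, and 0.5 seconds plus think times of 3.0 and 3.5 seconds yield a session time of 10 seconds per loop.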
The Number of Users / Waiting Users Diagram shows the total number of currently simulated users (red bars) and the actual number of users who are waiting for a response from the webserver (purple bars). The users waiting for a response are a subset of the currently simulated users.
By clicking on this diagram, the Statistical Overview Diagrams are shown.
Users Waiting For Response - the actual number of users waiting for a response from the web server, compared to ("of") the total number of currently simulated users.
TCP Socket Connect Time (ms) - The time in milliseconds (per URL call) to open a new network connection to the webserver.
AV Network Throughput (Mbit/s) - The total network traffic generated by this load test job, measured in megabits per second. This (floating) average value is calculated since the load test job was started.
Total Transmitted Bytes - The total number of transmitted bytes, measured since the load test job was started.
Response Time (drop-down list)
Selects the period of time, from the current time back into the past, for which response times are shown in the diagrams.
Time Bars (drop-down list)
Selects whether the bars inside the diagrams are shown as average values or as maximum values. Please note that there is only a difference between the maximum values and the average values if multiple measured samples of the response time fall inside the same pixel (inside the same displayed bar).
Diagram
Description
The tables at the right side of the diagrams contain the response times for all URLs of the web page. Also, these response times are either average values or max. values, depending on the selection in the Time Bars drop-down list. However, these values are calculated since the load test was started and always "accurately" measured, which means that they do not depend on the value chosen for the "Additional Sampling Rate per Page Call."
You can click on a URL response time to show the corresponding URL Response Time Diagram.
On the left side inside the diagram, the web page's average response time is shown as red-colored text, calculated since the load test was started. But depending on the selected period, this value may not be displayed in every case. On the right side inside the diagram, the last measured value is shown.
Response Time (drop-down list)
Selects the period of time, from the current time back into the past, for which response times are shown inside the diagram.
Time Bars (drop-down list)
Selects whether the bars inside the diagram are shown as average values or as maximum values. Please note that there is only a difference between the maximum values and the average values if multiple measured samples of the response time fall inside the same pixel (inside the same displayed bar).
Total Passed URL Calls
the total number of passed calls for this URL.
Total Failed URL Calls
the total number of failed calls for this URL.
Average Size (Req. + Resp.)
the average size of the transmitted + received data per URL call.
Max. Response Time
the maximum response time ever measured.
Min. Response Time
the minimum response time ever measured.
Av. TCP Socket Connect Time
The average time to open a new network connection to the webserver, measured for this URL. A blank or [---] instead of a value means that a new network connection was never opened for this URL, because HTTP keep-alive (re-use of cached network connections) was always successful. The additional percentage value shown in brackets on the left-hand side displays how often a new network connection was opened to the web server, compared to how often this was not necessary. This percentage value is also called the reverse keep-alive efficiency.
Av. Request Transmit Time
The average time to transmit the HTTP request header + (optionally) the HTTP request content data (form data or file upload data) to the web server, measured after the network connection was already established.
All failed URL Calls
Shows all errors about failed URL calls (non-fatal and fatal errors).
Session Failures only
Shows only fatal errors about failed URL calls (session failures).
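The reverse keep-alive efficiency mentioned under "Av. TCP Socket Connect Time" can be illustrated with a small Python calculation. The connection counts below are hypothetical, chosen only to show how the bracketed percentage is derived:

```python
# Hypothetical counts for one URL, illustrating the percentage shown
# in brackets next to "Av. TCP Socket Connect Time".
new_connections = 25        # times a new TCP connection had to be opened
keep_alive_reuses = 475     # times a cached connection was reused instead

total_calls = new_connections + keep_alive_reuses

# Reverse keep-alive efficiency: how often a new connection was needed,
# relative to all URL calls.
reverse_keep_alive_efficiency = 100.0 * new_connections / total_calls
print(f"{reverse_keep_alive_efficiency:.1f}%")  # 5.0%
```

A low percentage indicates that HTTP keep-alive is working well for this URL.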
Concurrent Users
The total number of simulated users.
Users Waiting For Response
The number of users who are waiting for a response from the webserver.
Session Failures
The number of failed sessions - which is the same as the number of fatal errors.
Session Time per User - per Loop
The session time for one loop per simulated user. This value is the sum of the response times of all URLs and all of the user's think times per successfully completed loop.
Web Transaction Rate
The number of successfully completed URL calls per second, measured over all simulated users.
Completed Loops per Minute
The number of successfully completed loops (sessions) per minute, measured over all simulated users.
TCP Socket Connect Time
The time in milliseconds (per URL call) to open a new network connection to the webserver.
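The relationship between session time and loop throughput described above can be sketched in Python. All measurement values below are hypothetical and serve only to illustrate the arithmetic:

```python
# Hypothetical per-loop measurements for one simulated user.
url_response_times_ms = [120, 340, 85, 210]   # response time of each URL call
think_times_ms = [2000, 1500, 3000]           # user's think times between pages

# Session time per loop = sum of all URL response times + all think times.
session_time_ms = sum(url_response_times_ms) + sum(think_times_ms)
print(session_time_ms)  # 7255

# With this session time, one user completes this many loops per minute.
loops_per_minute = 60000 / session_time_ms
print(round(loops_per_minute, 2))  # 8.27
```

Multiplying the per-user loop rate by the number of concurrent users gives an estimate of the overall "Completed Loops per Minute" value, assuming all loops succeed.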
Display Cluster Jobs
Shows all Exec Agent Cluster jobs.
Display Exec Agent Jobs of
Allows selecting the Exec Agent for which a list of all load test jobs is displayed.
Clean-Up: Delete All Non-Running Jobs
Deletes all jobs except running and scheduled jobs.
Clean-Up: Delete Old Completed Jobs
Deletes all completed jobs except the newest one. This button is only shown if at least two jobs have been completed.
Item
Description
Job
Each job has its own unique ID, which was automatically assigned when the job was defined. However, the ID is unique only per Exec Agent. Cluster jobs have their own, separate IDs (their own enumeration counter).
[Search Icon]
Allows acquiring the statistic result file (*.prxres) of an already completed load test job, or reconnects to the temporary statistics of a load test job that is still running, or allows canceling the schedule of the job.
[Delete Icon]
Deletes all data files of a completed load test job. Note that you must first acquire the statistic result file (*.prxres) of the job before you delete all of its files; otherwise, the job results are lost.
Date
Displays the date and time when the job was defined or completed, or, for scheduled jobs, the planned time when the job will be started.
State
Displays the current job state: configured (ready to run), scheduled, running, or completed. The state "???" means that the job data is corrupted; you should delete all jobs with the state "???" because they delay the display of all jobs in this list.
Load Test Program & Arguments
Displays the name of the load test program and the arguments of the load test program.
Argument / Parameter
Meaning
-u <number>
Number of concurrent users
-d <seconds>
Planned test duration in seconds (0 = unlimited)
-t <seconds>
Request timeout per URL call in seconds
-sdelay <milliseconds>
Startup delay between creating concurrent users, in milliseconds
-maxloops <number>
Max. number of loops (repetitions of the web surfing session) per user (0 = unlimited)
-downlink <Kbps>
Network bandwidth limitation per concurrent user, in kilobits per second, for the downlink (web server to web browser)
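As a sketch of how these arguments fit together, the following Python snippet assembles a hypothetical command line for a compiled load test program. The program name `MyLoadTest` and the parameter values are illustrative assumptions, not taken from the product itself:

```python
# Hypothetical sketch: assembling the job parameters described above
# into a single command line. "MyLoadTest" is a placeholder name.
params = {
    "-u": 10,           # concurrent users
    "-d": 300,          # planned test duration in seconds (0 = unlimited)
    "-t": 60,           # request timeout per URL call in seconds
    "-sdelay": 100,     # startup delay between users in milliseconds
    "-maxloops": 0,     # max loops per user (0 = unlimited)
    "-downlink": 2000,  # downlink bandwidth limit in Kbps
}

args = " ".join(f"{flag} {value}" for flag, value in params.items())
print(f"java MyLoadTest {args}")
# java MyLoadTest -u 10 -d 300 -t 60 -sdelay 100 -maxloops 0 -downlink 2000
```

In practice these values are normally set through the Exec Agent job definition GUI rather than typed by hand.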
Attribute Name
Description
loadTestProgramPath
Absolute file path to compiled load test program (*.class) or load test program ZIP archive
startFromExecAgentName
Name of the Exec Agent on which the load test is started (empty value if cluster job)
startFromClusterName
Name of the Exec Agent Cluster on which the load test is started (empty value if no cluster job)
concurrentUsers
Number of concurrent users
testDuration
Planned test duration in seconds (0 = unlimited)
loopsPerUser
Number of planned loops per user (0 = unlimited)
Pacing
Av. Response Header Wait Time
Network Throughput
Released from GUI(IP)
-uplink <Kbps>
Network bandwidth limitation per concurrent user, in kilobits per second, for the uplink (web browser to web server)
startupDelayPerUser
Startup delay between creating concurrent users, in milliseconds
<?xml version="1.0" encoding="UTF-8"?>
<loadTestTemplate>
<proxySnifferVersion>V5.5-F</proxySnifferVersion>
<loadTestProgramPath>/Applications/ZebraTester/MyTests/CL_Demo_FF_Demo.class</loadTestProgramPath>
<startFromExecAgentName></startFromExecAgentName>
<startFromClusterName>Cluster 1</startFromClusterName>
<isPureJUnitLoadTest>false</isPureJUnitLoadTest>
<executionPlanFilePath></executionPlanFilePath>
<concurrentUsers>1</concurrentUsers>
<testDuration>60</testDuration>
<loopsPerUser>0</loopsPerUser>
<pacingPerLoop>0</pacingPerLoop>
<startupDelayPerUser>200</startupDelayPerUser>
<downlinkBandwidth>0</downlinkBandwidth>
<uplinkBandwidth>0</uplinkBandwidth>
<requestTimeout>60</requestTimeout>
<maxErrorSnapshots>-20</maxErrorSnapshots>
<statisticSamplingInterval>15</statisticSamplingInterval>
<percentilePageSamplingPercent>100</percentilePageSamplingPercent>
<percentileUrlSamplingPercent>20</percentileUrlSamplingPercent>
<percentileUrlSamplingPercentAddOption>0</percentileUrlSamplingPercentAddOption>
<debugOptions></debugOptions>
<additionalOptions></additionalOptions>
<sslOptions>all</sslOptions>
<pmaTemplateFileName></pmaTemplateFileName>
<testRunAnnotation>test</testRunAnnotation>
<userAgent>Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko</userAgent>
<browserLan>en</browserLan>
<enableBrowserCache>true</enableBrowserCache>
<checkNewVersion>true</checkNewVersion>
<browserCacheOptions></browserCacheOptions>
<userInputFields/>
</loadTestTemplate>
Warning: capturing additional URL data takes much memory and also uses much CPU. Therefore, the test duration should not exceed 15 minutes if you use one of these add-options in combination with a 100% sampling rate per URL call. Reducing the sampling rate to 10% may allow a load test duration of up to 30 minutes.
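If you generate or edit such load test templates programmatically, the settings can be read back with a standard XML parser. A minimal Python sketch, using a trimmed stand-in for the full template shown above:

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for a loadTestTemplate file; the real file contains
# many more elements (see the full example above).
template = """<loadTestTemplate>
  <concurrentUsers>1</concurrentUsers>
  <testDuration>60</testDuration>
  <loopsPerUser>0</loopsPerUser>
  <requestTimeout>60</requestTimeout>
</loadTestTemplate>"""

root = ET.fromstring(template)

# Element names match the attribute table above.
users = int(root.findtext("concurrentUsers"))
duration = int(root.findtext("testDuration"))
print(users, duration)  # 1 60
```

The same approach works in reverse: build the element tree in code and serialize it to produce a template file.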
Note 1: TLS SNI (Server Name Indication) may need to be disabled if a DNS translation table is used.
Note 2: The HTTP request header field "Host" is not updated, so you could call the web server with the wrong hostname. You have to ZIP the DNS translation file together with the compiled class of the script. To automate the ZIP step, it is recommended to declare the DNS translation file as an external resource (without adding it to the CLASSPATH).
-dnssrv <IP-name-server-1>[,<IP-name-server-N>]
Causes the load test job to use specific (own) DNS server(s) to resolve hostnames rather than the DNS library of the underlying operating system. When using this option, at least one IP address of a DNS server must be specified. Multiple DNS servers can be configured, separated by commas. If a resolved DNS hostname contains multiple IP addresses, the stressed web servers are called in round-robin order (user 1 uses resolved IP address no. 1, user 2 uses resolved IP address no. 2, and so on).
-dnsenattl
Enables consideration of DNS TTL by using the TTL values received from the DNS server(s). This option cannot be combined with -dnsperloop. Note: when using this option, the resolved IP addresses (and therefore the stressed web servers) may change inside a simulated user's executed loop at any time, suddenly from one URL call to the next.
-dnsfixttl <seconds>
Enables DNS TTL with a fixed TTL value of <seconds> for all DNS resolves. This option cannot be combined with -dnsperloop.
-dnsperloop
Performs new DNS resolves for each executed loop. All resolves are stable within the same loop (no consideration of DNS TTL within a loop). This option cannot be combined with -dnsenattl or -dnsfixttl. Note: when using this option, the default or configured DNS servers are stressed more than usual because each executed loop of each simulated user triggers one or more DNS queries.
-dnsstatistic
Causes statistical data about DNS resolutions to be measured and displayed in the load test result, using a DNS stack on the load generators. Note: there is no need to use this option if any other, more specific DNS option is enabled, because all other DNS options also implicitly cause statistical data about DNS resolutions to be measured. If you use this option without any other DNS option, the (own) DNS stack on the load generators communicates with the operating system's default DNS servers, but without considering the "hosts" file.
-dnsdebug
Causes debug information about the DNS cache and DNS resolves to be written to the stdout file (*.out) of the load test job.
-enableIPv6 [<network-interface-name>]
Enables IPv6 support only for load test execution (IPv4 disabled). Optionally, you can also provide the IPv6 network interface name of the load generator(s), for example "eno".
-enableIPv6v4 [<network-interface-name>]
Enables both IPv6 and IPv4 support for load test execution (IPv6 is tried first; if it fails, IPv4 is tried). Optionally, you can also provide the IPv6 network interface name of the load generator(s), for example "eno".
-mtpu <number>
Configures how many threads per simulated user are used to process URLs in parallel (simultaneously). Note: this value applies only to URLs that have been configured to be executed in parallel.
-nosdelayCluster
For cluster jobs, causes the Startup Delay per User to be applied per Exec Agent job instead of across all simulated users of the cluster job. This achieves a faster ramp-up of load.
-setuseragent "<text>"
Replaces the recorded value of the HTTP request header field User-Agent with a new value. The new value is applied to all executed URL calls.
-noECC
Disables elliptic curve cryptography (ECC).
-sslcache <seconds>
Alters the timeout of the user-related SSL cache. The default value is 300 seconds. A value of 0 (zero) disables the SSL cache.
-sslrandom <type>
Sets the type of random generator used for SSL handshakes. Possible options are "fast", "iaik" (default), or "java".
-sslcmode
Applies SSL/HTTPS compatibility workarounds for deficient SSL servers. You may try this option if you constantly get the error type "Network Connection aborted by Server" for all URL calls.
-nosni
Disables support for TLS Server Name Indication (SNI).
-snicritical
Sets the TLS SNI extension as critical (default: non-critical).
-tlssessiontickets
Sets TLS to use session tickets for session resumption (non-critical).
-iaikLast
Adds the IAIK security provider at the last position (instead of the default, IAIK at first position). Note: adding the IAIK security provider at the last position may have the side effect that weak or short cipher keys are used.
-tz <timezone>
Sets an alternative time zone to be used by the script. The default time zone is equal to the selection made when installing ZebraTester, or, if modified subsequently, the value set in the menu. Possible time zone values are described in chapter 6 of the Application Reference Manual.
-Xbootclasspath/a:<path>
Specifies, for the load test job, a path of JAR archives and ZIP archives to append to the default bootstrap classpath.
-Xbootclasspath/p:<path>
Specifies, for the load test job, a path of JAR archives and ZIP archives to prepend in front of the default bootstrap classpath.
-Xmx<megabytes>
Specifies the size of the Java memory in megabytes for the load test job. Do not enter a space or a colon between "-Xmx" and the value. Note: this option can only be used if the corresponding Exec Agent(s) support it, meaning the Exec Agent(s) were started with the option -enableJobOverrideJavaMemory.
-nostdoutlog
Disables writing any data to the stdout file of the load test job.
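Since several of the DNS options above are mutually exclusive, a launcher script might validate an option list before submitting the job. The following Python sketch is not part of ZebraTester; it only encodes the combination rules stated above:

```python
# Hypothetical helper: reject option combinations that the documentation
# above declares invalid (-dnsperloop vs. -dnsenattl / -dnsfixttl).
def validate_dns_options(opts):
    if "-dnsenattl" in opts and "-dnsperloop" in opts:
        raise ValueError("-dnsenattl cannot be combined with -dnsperloop")
    if "-dnsfixttl" in opts and "-dnsperloop" in opts:
        raise ValueError("-dnsfixttl cannot be combined with -dnsperloop")
    return opts

# A valid combination passes through unchanged.
print(validate_dns_options(["-dnssrv", "8.8.8.8", "-dnsstatistic"]))
```

An invalid combination such as `["-dnsenattl", "-dnsperloop"]` would raise a `ValueError` instead.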
Cache URLs with HTML Content: when enabled, ZebraTester also caches the HTML resources. You can decrease the memory footprint of each virtual user by unchecking this option.
Simulate a new user each loop: when enabled, ZebraTester will create a new cache per loop.