Scripted Check
Introduction
Why Use a Scripted Check?
Scripted Check Overview
Creating an Example Scripted Check
1. Scripting the Check
Import Libraries (as needed)
Set the URL Request
Add JSON format
Expanding the Returned values
Advanced JSON
About Adding JSON Return Values
Adding More Values
2. Uploading the Check to a Repository
3. Uploading and Running the Check in ASM
Open ASM
Add Script via Run Python.
Creating a Run Python Check, Step 1
Run Python Step 2
Step 3 Interval, Thresholds & Monitor Groups
Confirm Your Check
Check Created
Check Details Page
Check Results
Drill down
Messaging via JSON
4. Interpreting the Check Results in ASM
Open ASM
Select the Target check for the API
Postman Results of Standard API Check Endpoint
Apica API for Generic Results
Postman Results of Generic API Check Endpoint
The Result Object
Appendix
Adding a Custom Python or NodeJS Module to ApicaNet for use with Scripted Checks
Testing a Script on the Executing Agent Itself
Introduction
Apica's "Scripted Checks" are ASM checks written in several scripting languages. So, instead of needing a custom scripting tool or a proprietary scripting format, developers and monitoring teams can use familiar languages to create their custom monitoring scripts and metrics for the long-term understanding of their applications.
Video versions of this guide are available on the Apica Systems YouTube account here: https://www.youtube.com/playlist?list=PL7P4sd6wT60B5JAxU3l3Rzjhf7v01lqQz
Why Use a Scripted Check?
If you are in a DevOps shop, you can use the DevOps toolchain you may already have in place. If you have the resources to write and edit Java, Python, or JavaScript, you can create a long-term global script for ASM without needing a unique scripting tool or proxy setup. If you know which URLs you want to monitor, this makes you less reliant on proprietary scripting solutions and frees these scripting languages from being limited to testing application performance from a single QA/developer machine.
Scripted Check Overview
The term “Scripted Check” refers to checks that run scripts customers create on their own and upload to the ASM Platform for monitoring. Currently, Java, Python, JavaScript, and AWS Lambda scripts are supported within ASM. Scripts may be stored on an HTTP server or in a GitHub repository. When the script is downloaded for execution, you must specify either the HTTP URL or the GitHub repo URL.
Note: this page will cover only the GitHub repository method of script uploading and storage.
Set up GitHub to store the scripts.
Script the check (in Java, Python, or JavaScript)
Upload the Script into GitHub.
Create the ASM Check to any one of Apica's global agents on the Apica Monitoring Platform.
Collect, compare and analyze the ASM Check Results OR send the results to integrated systems that use ASM as a data source.
Creating an Example Scripted Check
The following guide utilizes Python code, but the workflow can apply to Java checks, JavaScript checks, or any other Scripted check type.
1. Scripting the Check
Next is a simple example of writing your script and running it in ASM.
We will code a straightforward Python check for use in ASM. The check will call a URL that we specify and return the response received from that URL.
Add JSON format
What JSON format does Apica’s back-end system expect? Apica’s back-end is based on MongoDB, which allows an expandable result format: you can include almost anything in the JSON, and it becomes part of the result.
We start with the start and end times. These become the start and end times of your check in ASM (they show up in the check Result view in ASM).
Set start_time = time.time() and end_time = time.time().
Note: we set start_time before our URL call and end_time at the end of our URL call. This measures the time it takes to call the URL.
Set a message. Our message is “URL call returned status code” plus the returned status code as a string. The value that you see in the JSON format will be the value of the result.
This is the main value that you will see. Usually it is the duration, but it could be anything. To show this, let's make it the status code, because that is what we say in the message, so we set it to response.status_code.
After running this, we have our JSON output; by itself, a valid result.
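Putting these pieces together, here is a minimal sketch of what such a check could look like. It assumes the URL is passed as the first command-line argument (matching the Script Arguments setting used later in this guide), uses the key names discussed above, and emits the result by printing the JSON to standard output (an assumption about how the agent collects it); adapt it to your own needs:

import json
import sys
import time

import requests

url = sys.argv[1]  # e.g. http://example.com, passed in via Script Arguments

start_time = time.time()          # before the URL call
response = requests.get(url)
end_time = time.time()            # after the URL call

result = {
    "start_time": start_time,
    "end_time": end_time,
    "message": "URL call returned status code " + str(response.status_code),
    "value": response.status_code,
}

print(json.dumps(result))         # emit the JSON result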
Expanding the Returned values
Let's expand this a little bit; we have an expandable JSON format, so let's give ourselves more content and data.
How many headers do we have here?
What is the length of the returned content?
Add these lines below the "value" field to return the response header count and the size of the content.
"header_count": Len(response. headers),
"content_size": len(response.content)
Although simple, the above is a perfectly valid example of a Python Scripted Check. It uses Python standard libraries and the 'requests' library, included in the Apica Scripted Check Private Agent installation.
Apica's Scripted Checks are very flexible; if your script requires additional Python libraries, you may simply add those libraries to your Scripted Check Private Agent.
Advanced JSON
Some additional points on the previous steps.
Adding More Values
A very powerful concept that Apica supports with Scripted Checks: Add more fields to add more values.
Let’s capture the headers coming out of our response by creating another field called 'headers'. This will be an actual inner JSON object that contains our headers.
Add "headers": dict(response.headers) and rerun the check. The result shows our headers as JSON inside this field.
Anything that JSON supports is supported in this result format. So you can add lists, inner dictionaries, null values, integers, booleans, etc. to this JSON.
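For instance, the result dictionary from the earlier example could grow along these lines (illustrative only; the extra fields are examples, not required names):

result = {
    "start_time": start_time,
    "end_time": end_time,
    "message": "URL call returned status code " + str(response.status_code),
    "value": response.status_code,
    "header_count": len(response.headers),
    "content_size": len(response.content),
    "headers": dict(response.headers),                   # inner JSON object
    "redirect_urls": [r.url for r in response.history],  # a list
    "notes": None,                                       # null values work too
}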
Test your check locally before uploading to a repository which is linked to ASM. If you have a private agent with the necessary software installed and are planning to run the script on that agent, it is possible to test the script locally on the agent before uploading to your repository. See https://apica-kb.atlassian.net/wiki/spaces/ASMDOCS/pages/2135393547/Scripted+Check#Testing-a-Script-on-the-Executing-Agent-Itself for more information.
2. Uploading the Check to a Repository
When you have finished writing the check and testing locally, upload the check to a repository which has been linked to ASM. See Manage Repository Profiles for instructions on linking a repository for use within ASM. A repository can be added directly via the check creation wizard, when editing the check via the Edit Check page, or within the “Manage > Repository Profiles” page on the top ASM navigation bar.
3. Uploading and Running the Check in ASM
When the script is complete and uploaded to your repository profile, you are ready to create a Scripted check in ASM in order to utilize the script.
You MUST select either “+1 Stockholm, Sweden, [amazon]” or a private agent with the correct software installed when selecting an agent location. See “Run Python Step 2” for more information.
Run Python Step 2
Configure this check
Resource URL/Github URL
Resource Auth Type
Resource Auth
Resource Path
Secondary Resource
Script Runner
Script Arguments
Location
Note that the agent will delete any local files you create after running your script. Any sensitive data written to local files during script execution is deleted at the end of execution.
Resource URL/Github URL: This answers the question, "Where do we find your script?" It can be an HTTP download link or a link to your GitHub repository. For this example, go to your repository and copy+paste the URL here, ending with the branch (master/main). Ours is main.
Enter the URL where this script resides. In this example, it resides in a GitHub repo at https://github.com/[username]/NewTestRepository/main
Resource Auth Type: The type of resource authorization that will be needed, GitHub or HTTP. This example uses GitHub, but if your file is on an HTTP server, you could use HTTP as the type.
Resource Auth: Resource authorization is required. The authorization header allows you to download resources.
It's a basic authorization header when your resource authorization type is HTTP.
If you have an HTTP server with no protection, you may do it that way, but Apica does not recommend it because it's not secure.
If your auth type is GitHub, use this form:
<USERNAME>:<TOKEN>
Remember, the token is the Personal Access Token that we created back in the first step (it can also be empty if your repository is public).
Example if your username is foobar:
foobar:ghp_JlvGv7PGTrAzI2LWVIQZDhRthYBBQI1TGl0J
Resource Auth Best Practice
To set the Resource Auth, remember that it is a hidden field, so you won't be able to see anything you type there. Apica recommends assembling the value somewhere you can see it: start with your username (without the email domain), append a colon ':', and finally add the Personal Access Token.
The assembled resource authorization then looks like this and is ready to copy into the field:
foobar:ghp_JlvGv7PGTrAzI2LWVIQZDhRthYBBQI1TGl0J
Resource Path: This is the path inside your repository to the script you want to run. Our example script is at the base level of the repository, so enter main.py
Secondary Resource: If your script requires any sort of additional files, you can use this secondary resource to download another file. However, you can also start your script off by downloading the file directly: That way, you can use any sort of security you want to protect it. For example, you could have a secondary resource, like a certificate protected by OAuth: Your script could go through the whole OAuth process and then use the local file.
In this example, the secondary resource will be blank because it is unnecessary.
It is possible to reference subfolders from a base directory using the “Secondary Resource” field. For instance, if your use case requires a “/python/main.py” file and main.py depends on a module defined in /python/modules, you can specify /python, and the check runner will recognize the module because it is able to “search” the /python folder for secondary resources.
For example, if “local_module_sample.py” depends on a subfolder in /python, you can specify the project like so:
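For illustration, a hypothetical layout for that case could look like the following (file names other than main.py and local_module_sample.py are made up), with /python entered as the Secondary Resource:

/python
    main.py
    local_module_sample.py
    modules/
        helper_module.py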
Script Runner: Python is pre-selected (as the only choice).
Script Arguments: Anything entered here is passed to your script as command-line arguments. Enter http://example.com so that this argument is passed to our script.
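Inside the script, that argument arrives as an ordinary command-line argument, for example (matching the sketch earlier in this guide):

import sys

url = sys.argv[1]  # "http://example.com" from the Script Arguments field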
Location: you must use either the “+1 Sweden, Stockholm, [amazon]” location OR a private agent with the necessary software installed to run the check! Not all check locations have the necessary software required to run the check.
Check Created
Uncheck Enable Failover (which is checked by default) because we don't want to have that enabled right now, as this is just for demo purposes.
Set the max attempts to 1 because we want the check to fail quickly for the test.
Click Save.
Apica generally recommends these settings for testing because the default behavior can take too long. If Max Attempts remains at three and the Attempt Pause for each attempt is 30 seconds, your test check could wait up to 90 seconds if it is failing. These defaults don't help when you are just trying to debug something; it's better to know that your check failed from the beginning.
Click the Check Details button in the upper right as we're ready to run our check.
Drill down
Drilling into these results, we see the Result value (ms) is 200. Although the typical value for a result is the number of milliseconds it took to respond, we specified in our JSON that the value would be the response status code, so the 200 is displayed in its place. The number of Attempts is shown as 1, and beneath the result code is the JSON that we specified:
Any metrics data that you want to record can be kept for later data mining.
Results are stored for 13 months, so you will have this data over a long time. It is a powerful tool for building customized results and even retrieving them in your own front end.
In the next section, we will review how to retrieve your check information through the API.
4. Interpreting the Check Results in ASM
After creating our new check, using a Python script that we uploaded into GitHub, we know that the script presents the HTTP status code of the URL called as the value of the result in ASM. Next, we will use the ASM API to get information about this check.
200 is the last status code of the URL. This is nice, but it is just a raw number without data or context, and no JSON is included. It could be used for a small script that pulls the last result of your check and perhaps tests it for something.
A better API endpoint is the Checks Generic Check ID Results API endpoint.
This API endpoint looks up the results for checks that present a result type of generic. 'Generic' checks are those with the expandable JSON result format we saw earlier.
Generic type Checks: Run Python, Run JavaScript, Run Java, and (when released) Run Azure Cloud, Run Lambda, etc.
Postman Results of Generic API Check Endpoint
In Postman, using this API endpoint:
https://api-asm1.apica.io/dev/Checks/generic/43454/results?auth_ticket=18FFE***-****-****-****-****0DCO
Instead of the earlier (for comparison):
https://api-asm1.apica.io/dev/Checks/49454/lastvalue?auth_ticket=18FFE***-****-****-****-****0DCO
Set a filter with a range
Return the most recent results
Return results that fall between two defined millisecond values, answering, for example, "What results came in between 1.2 and 2.3 seconds?"
Define a period to query (between two UTC stamps)
Return specific result IDs.
This is a POST endpoint.
Note the JSON results that are returned. You may need to use these in some other API call to look up even more information. In this example, we're just going to use the most recent because that is the simplest and easiest to show.
What you choose to do next with these metrics is all up to your needs.
You could create a script that scrapes this URL every once in a while, looks up the last hour of results, and parses the JSON for the data that you need (a sketch follows below).
You could even create another check that would read this information and then crunch the data to present other results, e.g., the average size of the headers or content length.
There is much more, only limited by your use cases.
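As a rough illustration of that idea, here is a minimal Python sketch against the generic results endpoint shown above. The request body for filters and the exact shape of the returned JSON are not reproduced in this guide, so the empty body and the generic result handling below are assumptions to adapt to your own checks:

import requests

CHECK_ID = 43454                   # replace with your own check ID
AUTH_TICKET = "YOUR_AUTH_TICKET"   # replace with your own auth ticket

url = f"https://api-asm1.apica.io/dev/Checks/generic/{CHECK_ID}/results"

# POST endpoint; filter options (most recent, millisecond range, UTC period,
# specific result IDs) go in the request body -- see the ASM API documentation
# for the exact field names. An empty body is used here as a simple default.
response = requests.post(url, params={"auth_ticket": AUTH_TICKET}, json={})
response.raise_for_status()

results = response.json()
# Assuming a list of result objects, each carrying the custom JSON produced
# by the scripted check (e.g. header_count, content_size, headers).
for result in results:
    print(result)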
Review:
We've scripted our check (in this example, Python).
We've uploaded a script to GitHub.
We’ve created our ASM check using a script in a GitHub repository.
We've run our check in ASM and viewed the results.
We've pulled the results via the API, including the custom JSON.
Appendix
Adding a Custom Python or NodeJS Module to ApicaNet for use with Scripted Checks
Adding a custom Python or NodeJS module to your private Browser agent is very simple and should take less than 5 minutes. This guide assumes you have administrator access to your agent or have an operations team that will perform these steps for you.
Determine the modules you need to install and log in to the private Browser agent.
The worker runs apicanet in a chroot shell. This means you cannot simply run the following commands but must enter a chroot shell. To do this, run the following command:
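Based on the testing steps at the end of this guide (and assuming the default /opt/asm-browser-agent installation path), that command is likely:

cd /opt/asm-browser-agent
./chroot_shell.sh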
You should now be in a chroot shell. From here, you may interact with pip3 and npm (package managers for python 3.5 and nodejs).
Install the necessary packages by running the following commands:
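For example (the package names below are placeholders; install whichever modules your script actually needs):

pip3 install some-python-module
npm install some-node-module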
Your packages are now installed and ready to use. If you wish, you can even test your script by opening a new shell instance and copying your script to /opt/asm-browser-agent/embedded/. You may then run your script in the chroot environment by using “node“ or “python3”. The script you placed in /opt/asm-browser-agent/embedded should be in the root folder of the chroot environment.
Tip: It’s a good idea to have 2 terminal windows open when testing; otherwise, it can be difficult to switch between the chroot and non-chroot windows in order to update and test your script. Alternatively, you can use scp (the secure copy command) to copy from your local computer to /opt/apicanet-worker/embedded/.
Testing a Script on the Executing Agent Itself
If you are running a script from a private agent, it is possible to run it locally to ensure that all packages are installed in the correct location and that no syntax errors, etc. are present. It is an excellent step when troubleshooting Python checks which are not running correctly.
Copy the script onto /opt/asm-browser-agent/embedded on the agent itself.
cd into /opt/asm-browser-agent.
Run ./chroot_shell.sh (you can see this shell script if you run ls).
The script you copied into /opt/asm-browser-agent/embedded should be in the root folder. Run ls to verify.
You can run the script from the chroot shell using “node” or “python3”. Use the output to verify that the check is working without issue.
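As a concrete example, the sequence might look like this on the agent (the script name and argument are placeholders):

cp my_check.py /opt/asm-browser-agent/embedded/
cd /opt/asm-browser-agent
./chroot_shell.sh
ls                                    # my_check.py should appear in the chroot root folder
python3 my_check.py http://example.com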