Today’s fast-moving technology demands accurate automation when building, testing, and delivering new software releases.
Not only must we keep pace with our competitors; we must also avoid the potentially disastrous results of delivering an inferior product to the market. CI/CD (Continuous Integration/Continuous Deployment) pipelines have evolved to make this possible.
All too often, however, we concentrate on testing functional quality, but don’t give enough priority to ensuring new releases won’t degrade performance.
Including performance analysis tools such as a Java memory analyzer in the pipeline can go a long way towards preventing problems in production.
Java Memory Analyzer Tools
There are several excellent tools designed to parse a heap dump, extracting valuable information. Popular utilities include HeapHero and Eclipse MAT. They’re generally used for troubleshooting memory-related issues, but they can also be used in CI/CD to check various aspects of memory usage before allowing a build to succeed.
In CI/CD, our objective is to fully automate the entire process, and, if possible, eliminate human intervention.
Eclipse MAT includes a command-line script, ParseHeapDump.sh (ParseHeapDump.bat on Windows), which extracts information from the dump and outputs a series of reports in HTML format. While useful, this still requires human effort to check the reports, and cannot be said to be fully automated.
With HeapHero, on the other hand, REST APIs allow us to automatically extract criteria from a heap dump, and include this information in the build script’s decision as to whether to pass or fail the build.
The CI/CD Pipeline
In early computing systems, a new release was a major event, involving many hours of work to build, test and deploy the product. Software changes relating to a large number of change requests, bug fixes and new features were lumped together as a single event. These days, most software vendors find it more convenient to release frequent builds, and automate as much as possible of the testing and building processes.
Each build must include tests to ensure the software is always release-ready.
Most of the major repositories, such as GitHub, include facilities for automating the building and testing tasks. Independent CI/CD software, such as Jenkins, has also become popular.
Including the results of performance testing in the build criteria can proactively pick up warning signs that may not be apparent from conventional testing. Analyzing heap dumps, thread dumps and garbage collection logs lets us collect statistics and key performance indicators that give insights into whether the new release is healthy.
Heap dumps can answer questions such as:
- Has memory usage increased by more than a reasonable percentage since the last build?
- Is the heap size still within the constraints of the target device or container?
- Are too many objects waiting for finalization?
- Is the class count significantly higher than the last build?
- Are the JVM configuration options still adequate?
It’s simple to include HeapHero’s REST API in a build script. This returns comprehensive heap analysis information in JSON format. We can then use a JSON parser such as Linux’s jq utility to extract key information, and compare it to supplied thresholds. For more complex systems, we can programmatically compare the JSON data to previous results, or carry out deeper inspections of the extract.
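As a sketch of the comparison-with-previous-results idea, the snippet below contrasts the current build’s heap usage with the previous build’s and fails if growth exceeds a percentage threshold. The two JSON files here are inline stand-ins for saved API responses, and the 10% threshold is an arbitrary example; the field path follows the key used later in this article.

```shell
#!/bin/bash
# Sketch: fail the build if heap usage grew too much since the last build.
# Sample data standing in for two builds' saved API responses.
printf '{"troubleshootReport":{"overview":{"usedHeapSize":1500000}}}' > prev.json
printf '{"troubleshootReport":{"overview":{"usedHeapSize":1600000}}}' > curr.json

# Extract the used heap size from each report (-r emits the raw value).
PREV=$(jq -r '.troubleshootReport.overview.usedHeapSize' prev.json)
CURR=$(jq -r '.troubleshootReport.overview.usedHeapSize' curr.json)

# Integer percentage growth between builds.
GROWTH=$(( (CURR - PREV) * 100 / PREV ))
echo "Heap growth: ${GROWTH}%"

# Fail (exit 1) if growth exceeds the 10% threshold.
if [ "$GROWTH" -gt 10 ]; then
    echo "Failing build: heap usage grew by ${GROWTH}%"
    exit 1
fi
```

In a real pipeline, prev.json would be an artifact archived by the previous build rather than generated inline.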
Included in the JSON is a URL pointing to a full heap dump report, so if any problems are highlighted, we can fail the build and raise an alert to have the report inspected manually.
HeapHero’s sister products, GCeasy and fastThread, can also be included in CI/CD pipelines to monitor garbage collection logs and thread dumps respectively.
Using HeapHero in the CI/CD Pipeline
The first step is to establish the URL and credentials to access the REST API. For yCrash users, the simplest way to do this is to access HeapHero from the yCrash dashboard, and choose HeapHero > REST API from the row of tools at the top, as shown in the image.

Fig: Section of the yCrash Dashboard
This displays the following:

Fig: URL and API Key
Note that the API key has been blanked out in the image for confidentiality reasons.
Keep a record of both the URL and the API key for use when calling the API. This only needs to be done once, before the first call.
Alternatively, contact HeapHero support to obtain the information.
There are several ways we can then use the API, including:
- From Windows PowerShell using the Invoke-WebRequest command, using the -InFile argument to specify the location of the heap dump, and -OutFile to direct the output to a file;
- Using the Linux curl command, using the --data-binary argument to specify the location of the heap dump, and the -o argument to direct the output to a file;
- Using an API testing tool such as Postman’s Newman command line tool;
- From within a program.
For example, the Linux curl command may look like this:
curl -X POST --data-binary @FSD5_dump.hprof \
  "https://example.com/analyze-hd-api?apiKey=MY_API_KEY" \
  --header "Content-Type: application/octet-stream" -o FSD5.jsn.txt
We would substitute our own API key for MY_API_KEY.
The contents of the resulting JSON file are described in the HeapHero API documentation.
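The exact layout of the response is defined by HeapHero; as a rough sketch based only on the two keys used later in this article, a heavily trimmed response might look like:

```json
{
  "troubleshootReport": {
    "overview": {
      "usedHeapSize": 1600000
    }
  },
  "webReport": "https://example.com/reports/FSD5"
}
```

Consult the API documentation for the full set of fields before relying on any particular path.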
For more information, you may like to read this article: Heap Dump Analysis API. You may also be interested in Jenkins Pipeline Integration with GC REST API.
A Simple Worked Example
Let’s assume we’re using a Jenkins pipeline that calls the script callrest.sh.
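The Jenkins side can be minimal. As a sketch, a declarative Jenkinsfile might contain a stage like the following (the stage names, build step, and workspace layout are assumptions for illustration):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'   // hypothetical build step
            }
        }
        stage('Heap analysis gate') {
            steps {
                // callrest.sh exits with code 1 when the heap check
                // fails, which marks the Jenkins build as failed.
                sh './callrest.sh'
            }
        }
    }
}
```

Because Jenkins fails a stage on any non-zero exit code, the script needs no Jenkins-specific logic of its own.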
We’ve written a small application that is to be run on a tiny device, and we know it won’t work if the heap size exceeds 2MB. We therefore want to fail the build if the heap grows beyond this size.
The callrest.sh script:
- Stores the REST API’s URL and API key in a variable named RESTURL;
- Uses the curl command to send an HTTP request to the API to parse the heap dump file FSD5.hprof, storing the output in the file FSD5.jsn.txt;
- Sets the variable HEAPSIZE by invoking the jq command to extract the value for the key troubleshootReport.overview.usedHeapSize from the JSON file. This contains the actual heap usage;
- Sets the variable REPORTURL to the value for the key webReport, which contains the URL of the full heap report;
- If HEAPSIZE is greater than 2MB, it displays a message to this effect, displays the URL of the heap report, and exits with code 1 in order to tell Jenkins to fail the build.
The bash script to achieve this looks like the following.
#!/bin/bash
RESTURL="https://example.com/analyze-hd-api?apiKey=abcdefghijklmnopqrstuvwxyz-1234-5678-9012"

# Upload the heap dump; the JSON analysis is written to FSD5.jsn.txt.
curl -X POST --data-binary @FSD5_dump.hprof "$RESTURL" --header "Content-Type: application/octet-stream" -o FSD5.jsn.txt

# Extract the used heap size and report URL (-r strips the JSON quotes).
HEAPSIZE=$(jq -r '.troubleshootReport.overview.usedHeapSize' FSD5.jsn.txt)
REPORTURL=$(jq -r '.webReport' FSD5.jsn.txt)

# Fail the build (exit 1) if heap usage exceeds 2MB.
if [ "$HEAPSIZE" -gt 2000000 ]; then
    echo "Failing Build: Heap Size Too Large: $HEAPSIZE"
    echo "Report URL: $REPORTURL"
    exit 1
fi
This is a very elementary example, but the script can easily be adapted for more complex uses.
Conclusion
To make sure we always release quality software, it’s important to include performance criteria in the CI/CD pipeline.
HeapHero provides a simple way to include a Java memory analyzer in a build script. We can also include GC log and thread dump analysis using the GCeasy and fastThread tools.
This means we can proactively guard against performance issues and system crashes in the live system.