Jenkins, a popular CI/CD platform, is used for several critical operations in an organization, such as building applications, running automated tests, and deploying to pre-production and production environments. If Jenkins is down, engineers’ productivity is severely hampered. Thus, major organizations take extra care to keep Jenkins up 24 x 7.
While Java has 9 types of OutOfMemoryError, Jenkins is susceptible to 8 of them. In this blog series, we systematically walk through each of those 8 types, helping you identify, diagnose, and fix them. This post covers one of them.
Occasionally Jenkins can experience java.lang.OutOfMemoryError: Direct buffer memory, which can disrupt Jenkins availability entirely. In this post, let’s discuss what ‘java.lang.OutOfMemoryError: Direct buffer memory’ means, how to isolate its root cause quickly, what its temporary and permanent fixes are, and, even better, how to prevent it from happening.
Immediate Stabilization Steps – OutOfMemoryError Direct buffer memory in Jenkins
When Jenkins experiences ‘java.lang.OutOfMemoryError: Direct buffer memory’, here are the options you can take to stabilize Jenkins immediately (basically, first aid):
1. Restart the JVM: When ‘java.lang.OutOfMemoryError: Direct buffer memory’ happens in Jenkins, it puts the JVM into an unstable state. It’s dangerous to keep running Jenkins in this state, as it can result in erroneous behavior. Thus, it’s highly recommended to restart the JVM so that it comes back with a clean slate.
2. Increase Direct buffer memory size: ‘java.lang.OutOfMemoryError: Direct buffer memory’ happens in Jenkins due to a lack of space in the Direct buffer memory region of the JVM. Thus, increase the Direct buffer memory region size by passing the following argument to your JVM:
-XX:MaxDirectMemorySize=<size> Sets the upper limit for the Direct buffer memory region
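For example, to cap the region at 1 GB (an illustrative value; size it to your workload) when launching Jenkins from the WAR file, the flag can be passed as shown below. Note that on HotSpot JVMs, when this flag is not set, the limit typically defaults to the maximum heap size (-Xmx).

    java -XX:MaxDirectMemorySize=1g -jar jenkins.war

If Jenkins runs as a service or in a container, add the flag to the JVM arguments in the corresponding service or container configuration instead.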
Why Does OutOfMemoryError Direct buffer memory Happen in Jenkins?
To better understand OutOfMemoryError Direct buffer memory, we first need to understand the different JVM memory regions. Here is a video clip that gives a good introduction to the different JVM memory regions. But in a nutshell, the JVM has the following memory regions:
Fig: JVM Memory Regions
- Young Generation: Newly created application objects are stored in this region.
- Old Generation: Application objects that live for a longer duration are promoted from the Young Generation to the Old Generation. Basically, this region holds long-lived objects.
- Metaspace: Class definitions, method definitions, and other metadata required to execute your program are stored in the Metaspace region. This region was added in Java 8. Before that, metadata definitions were stored in PermGen. Since Java 8, PermGen has been replaced by Metaspace.
- Threads: Each application thread requires a thread stack. This region holds the space allocated for thread stacks, which contain method call information and local variables.
- Code Cache: This region holds the compiled native code (machine code) of methods, stored for efficient execution.
- Direct Buffer: ByteBuffer objects are used by modern frameworks (e.g., Spring WebClient) for efficient I/O operations. Direct byte buffers are stored in this region (see the sketch after this list).
- GC (Garbage Collection): Memory required for automatic garbage collection to work is stored in this region.
- JNI (Java Native Interface): Memory for interacting with native libraries and code written in other languages is stored in this region.
- misc: Areas specific to certain JVM implementations or configurations, such as internal JVM structures or reserved memory spaces, are classified as the ‘misc’ region.
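To make the Direct Buffer region concrete, here is a minimal sketch contrasting a heap buffer with a direct buffer; only the latter is counted against the Direct buffer memory region discussed above.

    import java.nio.ByteBuffer;

    public class BufferRegions {
        public static void main(String[] args) {
            // Backed by a byte[] on the Java heap (Young/Old Generation).
            ByteBuffer heapBuffer = ByteBuffer.allocate(4096);

            // Backed by native memory outside the heap, counted against
            // the -XX:MaxDirectMemorySize limit.
            ByteBuffer directBuffer = ByteBuffer.allocateDirect(4096);

            System.out.println("heap buffer isDirect: " + heapBuffer.isDirect());     // false
            System.out.println("direct buffer isDirect: " + directBuffer.isDirect()); // true
        }
    }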

Fig: ‘java.lang.OutOfMemoryError: Direct buffer memory’
java.lang.OutOfMemoryError: Direct buffer memory occurs in Jenkins when direct buffer allocations exceed the limit set by -XX:MaxDirectMemorySize. Direct buffer memory sits outside the JVM heap and is used for high-throughput I/O operations. In a Jenkins environment, plugins or pipeline processes that rely on NIO-based operations, such as artifact streaming, log handling, or network-intensive build steps, can breach this limit by continuously allocating direct byte buffers without releasing them in time.
This video covers OutOfMemoryError: Direct buffer memory from a Java perspective; the same underlying concepts apply to Jenkins.
Root Causes of OutOfMemoryError Direct buffer memory in Jenkins
‘java.lang.OutOfMemoryError: Direct buffer memory’ in Jenkins is potentially caused by the following reasons:
- Memory Leak due to Buggy Code: If the Jenkins application or the plugins you use do not properly release direct buffers after use, those buffers can accumulate over time and eventually exhaust the available direct buffer memory.
- High Rate of Allocation: If your Jenkins application or plugins allocate direct buffers at a very high rate and do not release them promptly, they can quickly consume the available memory. A minimal sketch of this kind of leak is shown below.
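Below is a minimal, hypothetical sketch (not actual Jenkins or plugin code) of the leak pattern described above: direct buffers are allocated continuously and kept reachable, so their off-heap memory can never be reclaimed.

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectBufferLeak {
        // Holding references prevents the buffers (and their off-heap
        // memory) from ever being garbage collected.
        private static final List<ByteBuffer> LEAKED = new ArrayList<>();

        public static void main(String[] args) {
            while (true) {
                // Each call reserves 1 MB outside the JVM heap.
                LEAKED.add(ByteBuffer.allocateDirect(1024 * 1024));
            }
        }
    }

Running this with a small limit, e.g. ‘java -XX:MaxDirectMemorySize=64m DirectBufferLeak’, reproduces ‘java.lang.OutOfMemoryError: Direct buffer memory’ within seconds.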
How to Diagnose the OutOfMemoryError Direct buffer memory Problem in Jenkins (Step-by-Step)
It is very easy to identify the root cause of ‘OutOfMemoryError: Direct buffer memory’. The error is printed in the Jenkins standard error log (or console), along with the stack trace of the code that caused it. For example, you will see this type of stack trace printed:
Exception in thread "main" java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:695)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
    at com.buggyapp.oom.OOMDirectBuffer.main(OOMDirectBuffer.java:11)
It clearly points out that the problem originates on line #11 of OOMDirectBuffer.java. Equipped with this information, one can easily isolate the root cause of the problem and fix it.
Solutions for OutOfMemoryError Direct buffer memory in Jenkins
Here are the potential solutions to address java.lang.OutOfMemoryError: Direct buffer memory in Jenkins:
- Identify & Fix the Memory Leak in Jenkins: Using the diagnostic steps described in the section above, find the objects leaking direct buffers and fix them (one common fix pattern is sketched after this list).
- Remove Recently Added Plugins: Whenever you add new plugins, they occupy additional memory, and poorly implemented, memory-inefficient plugins can leak direct buffers as well. Remove the recently added plugins, restart the JVM, and see whether Jenkins stabilizes.
- Revert to Previous Jenkins Installation: If you have recently upgraded to the latest Jenkins version and the Direct buffer memory OutOfMemoryError started to surface after the upgrade, consider reverting to the previous Jenkins installation.
- Increase Direct Buffer Size: If the OutOfMemoryError surfaced due to an increase in traffic volume, then increase the JVM’s Direct Buffer Memory region size (-XX:MaxDirectMemorySize), as shown in the first-aid section above.
- Upgrade to Java 17 (or above): Enhancements were made in Java 17 to use the Direct Buffer Memory region more effectively. Thus, if you happen to be running Jenkins on a Java version older than 17, upgrade it. Here is a case study that showcases the performance optimization of the Direct Buffer Memory region in Java 17.
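As a companion to the ‘Identify & Fix the Memory Leak’ step, here is a minimal, hypothetical sketch of one common fix pattern: instead of allocating a fresh direct buffer per operation (and accidentally retaining it), allocate one long-lived buffer per thread and reuse it. The class and method names here are illustrative, not from Jenkins itself.

    import java.nio.ByteBuffer;

    public class DirectBufferReuse {
        // One reusable 1 MB direct buffer per thread caps off-heap usage
        // at (buffer size x thread count) instead of growing unboundedly.
        private static final ThreadLocal<ByteBuffer> BUFFER =
                ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(1024 * 1024));

        static void process(byte[] chunk) {
            ByteBuffer buf = BUFFER.get();
            buf.clear();                                    // reset position/limit for reuse
            buf.put(chunk, 0, Math.min(chunk.length, buf.capacity()));
            buf.flip();                                     // prepare the buffer for reading
            // ... hand 'buf' to a channel or other I/O call here ...
        }
    }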
How to Prevent OutOfMemoryError Direct buffer memory in Jenkins
Before you upgrade to a new release of Jenkins or install a new Jenkins plugin in the production environment, you might be studying the following key metrics in your performance lab:
- CPU Utilization
- Memory Utilization
- Response Time of key transactions
These are wonderful metrics that highlight the performance characteristics of the new release. However, several performance problems build up slowly over time; for example, in most applications an OutOfMemoryError happens only after the application has run for more than a week. In the performance lab, we don’t run such long endurance tests.
The above-mentioned metrics are reactive indicators that don’t reveal problems silently lurking in the environment. We recommend studying the Micro-Metrics below, along with the above reactive indicators, in the performance lab before certifying the release. These Micro-Metrics are good at predicting/forecasting performance problems even when they are present only at a minute scale.
- GC Behavior Pattern: Detects memory leaks, poor GC configuration, or excessive object promotion causing GC pauses.
- Object Creation Rate: Identifies allocation surges that can trigger frequent GCs or memory pressure.
- GC Throughput: Highlights apps spending too much time in GC instead of doing useful work, which can lead to CPU spikes or slowdowns.
- GC Pause Time: Surfaces stop-the-world GC events affecting responsiveness or causing thread backlogs.
- Thread Patterns: Flags CPU spikes, thread starvation, bursty load, and thread buildup from backend slowness.
- Thread States: Detects BLOCKED, DEADLOCKED, or WAITING threads due to DB chattiness, config limits, or locking.
- Thread Pool Behavior: Identifies thread exhaustion, request rejections, or poor pooling thresholds in backend services.
- TCP/IP Connection Count & States: Catches backend connection leaks, TIME_WAIT surges, or slow/unresponsive downstream services.
- Error Trends in Application Logs: Detects hidden runtime errors, JDBC leaks, logging misconfigurations, or disk issues.
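Along with these Micro-Metrics, the Direct buffer memory region itself can be observed from inside the JVM. Here is a minimal sketch using the standard BufferPoolMXBean (part of java.lang.management, available since Java 7); wiring its output into your monitoring system is assumed to be part of your setup.

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.List;

    public class DirectBufferMonitor {
        public static void main(String[] args) {
            List<BufferPoolMXBean> pools =
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
            for (BufferPoolMXBean pool : pools) {
                // The "direct" pool covers ByteBuffer.allocateDirect();
                // the "mapped" pool covers memory-mapped files.
                System.out.printf("%s pool: count=%d, used=%d bytes, capacity=%d bytes%n",
                        pool.getName(), pool.getCount(),
                        pool.getMemoryUsed(), pool.getTotalCapacity());
            }
        }
    }

A steadily climbing ‘used’ value for the direct pool, without corresponding drops, is an early warning of the leak patterns discussed above.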
The yCrash tool facilitates reporting these Micro-Metrics, which will unearth several performance problems well in advance, before they silently surface in production. You can find details on how to source and study these Micro-Metrics through yCrash here.
Business Impact & ROI
Isolating and fixing OutOfMemoryError in Jenkins can have a considerable business impact on your organization:
- Engineering Time Savings: yCrash dramatically reduces the time engineers spend analyzing Heap Dumps and pinpointing root causes in complex, multi-threaded applications.
- Suppose your organization analyzes 10 incidents per month, and each analysis traditionally takes around 40 hours.
- With a Performance Engineer’s hourly rate at USD $100, yCrash can save approximately $480,000 annually (10 incidents x 40 hours/incident x $100/hour x 12 months) by automating root cause analysis and reducing troubleshooting time.
- Rapid Deployments & Increased Productivity: yCrash minimizes prolonged Jenkins downtime that can lead to delayed deployments, degraded engineer productivity, and reputational damage to the organization. By quickly diagnosing issues, yCrash helps prevent such large-scale impacts, protecting revenue and brand reputation.
- Protection from Escalated Operational Consequences: Certain Jenkins outages can have severe repercussions, including organizational changes or job losses. yCrash’s rapid problem isolation capabilities prevent such disruptions, allowing teams to resolve issues before they escalate to crisis levels. By maintaining operational continuity and team stability, yCrash supports a steady, resilient organizational environment and protects against the high-stakes impacts that can result from unmanaged production outages.
Conclusion
Jenkins is the backbone of your organization’s CI/CD pipeline. Keeping it stable is not optional; it is essential. Even though ‘java.lang.OutOfMemoryError: Direct buffer memory’ is not a common error, it can hurt the availability of your entire Jenkins platform when it surfaces. Hopefully this post has shed enough light on how to troubleshoot this problem effectively and efficiently.

Share your Thoughts!