The java.lang.OutOfMemoryError is one of the most dreaded runtime errors in Java. It indicates that the Java Virtual Machine (JVM) cannot allocate an object because it has run out of memory, and the Garbage Collector cannot reclaim enough space.
However, simply increasing the Heap Size (-Xmx) is not always the solution.
In fact, there are 9 distinct types of OutOfMemoryErrors, each with its own root cause ranging from Heap exhaustion and Metaspace limits to Native Thread issues and Container (Docker) restrictions. To fix the crash permanently, you must first identify which type you are facing. This guide covers every scenario, complete with causes and solutions.
To better understand OutOfMemoryError, we first need to understand the different JVM memory regions. In a nutshell, the JVM has the following memory regions:

- Young Generation: Newly created application objects are stored in this region.
- Old Generation: Application objects that live for a longer duration are promoted from the Young Generation to the Old Generation. Basically, this region holds long-lived objects.
- Metaspace: Class definitions, method definitions, and other metadata required to execute your program are stored in the Metaspace region. This region was added in Java 8; before that, metadata definitions were stored in PermGen, which Metaspace replaced.
- Threads: Each application thread requires a thread stack. Thread stacks, which contain method call information and local variables, are stored in this region.
- Code Cache: Compiled native code (machine code) produced by the JIT compiler for efficient execution of methods is stored in this region.
- Direct Buffer: ByteBuffer objects, used by modern frameworks (e.g., Spring WebClient) for efficient I/O operations, are stored in this region.
- GC (Garbage Collection): Memory required for automatic garbage collection to work is stored in this region.
- JNI (Java Native Interface): Memory for interacting with native libraries and code written in other languages is stored in this region.
- misc: Areas specific to certain JVM implementations or configurations, such as internal JVM structures or reserved memory spaces, are classified as ‘misc’ regions.
Types of OutOfMemoryErrors
1. OutOfMemoryError: Java Heap Space

When more objects are created in the ‘Heap’ (i.e., Young and Old Generation) region than the allocated memory limit (i.e., ‘-Xmx’) allows, the JVM will throw ‘java.lang.OutOfMemoryError: Java heap space’.
What are the Common Causes of “Java Heap Space”?
This error is triggered by the JVM under the following circumstances:
- Increase in Traffic Volume: When there is a spike in traffic volume, more objects are created in memory. When more objects are created than the allocated memory limit allows, the JVM will throw ‘OutOfMemoryError: Java heap space’.
- Memory Leak due to Buggy Code: A bug in the code can cause the application to inadvertently retain references to objects that are no longer needed. This leads to a buildup of unused objects in memory, eventually exhausting the available heap space and resulting in OutOfMemoryError. A minimal sketch of such a leak follows this list.
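Here is that sketch (class and field names are illustrative): a static collection keeps growing, so nothing it references can ever be garbage collected. Run it with a small heap (e.g., -Xmx64m) and it will eventually fail with ‘OutOfMemoryError: Java heap space’.

import java.util.ArrayList;
import java.util.List;

public class HeapLeakDemo {
    // Static references live for the lifetime of the class, so entries are never collected.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void main(String[] args) {
        while (true) {
            CACHE.add(new byte[1024 * 1024]); // 1 MB per iteration, never released
        }
    }
}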
What are the Solutions for “Java Heap Space”?
Following are the potential solutions to fix Java Heap Space error:
- Fix Memory Leak: Analyze memory leaks or inefficient memory usage patterns using the approach given in this post. Ensure that objects are properly dereferenced when they are no longer needed to allow them to be garbage collected.
- Increase Heap Size: If the OutOfMemoryError surfaced due to an increase in traffic volume, then increase the JVM heap size (-Xmx) to allocate more memory to the JVM. However, be cautious not to allocate too much memory, as it can lead to longer garbage collection pauses and potential performance issues.
2. OutOfMemoryError: GC Overhead Limit Exceeded

When a Java process is spending more than 98% of its time doing garbage collection, recovering less than 2% of the heap, and has been doing so for the last 5 (a compile-time constant) consecutive garbage collections, then ‘java.lang.OutOfMemoryError: GC overhead limit exceeded’ gets thrown.
NOTE: When your application runs out of heap memory, the JVM may throw either ‘java.lang.OutOfMemoryError: Java heap space’ or ‘java.lang.OutOfMemoryError: GC overhead limit exceeded’. Although the messages differ, both indicate the same underlying issue i.e., the Java heap has been fully saturated and cannot accommodate further allocations.
In practice, these two errors are often raised interchangeably depending on how the JVM detects the condition. However, an important point to remember is that the troubleshooting process does not change: whether you see “heap space” or “GC overhead limit exceeded,” the artifacts you need to capture, the diagnosis approach you should follow, and the resolution strategies remain exactly the same.
What are the Common Causes of “java.lang.OutOfMemoryError: GC overhead limit exceeded”?
This error is triggered by the JVM under the following circumstances:
- Increase in Traffic Volume: When there is a spike in traffic volume, more objects are created in memory. When more objects are created than the allocated memory limit allows, the JVM will throw ‘OutOfMemoryError: GC overhead limit exceeded’.
- Memory Leak due to Buggy Code: A bug in the code can cause the application to inadvertently retain references to objects that are no longer needed. This leads to a buildup of unused objects in memory, eventually exhausting the available heap space and resulting in OutOfMemoryError.
What are the Solutions for “java.lang.OutOfMemoryError: GC overhead limit exceeded”?
Following are the potential solutions to fix GC overhead limit exceeded error:
- Fix Memory Leak: Analyze memory leaks or inefficient memory usage patterns using the approach given in this post. Ensure that objects are properly dereferenced when they are no longer needed to allow them to be garbage collected.
- Increase Heap Size: If the OutOfMemoryError surfaced due to an increase in traffic volume, then increase the JVM heap size (-Xmx) to allocate more memory to the JVM. However, be cautious not to allocate too much memory, as it can lead to longer garbage collection pauses and potential performance issues.
3. OutOfMemoryError: Requested Array Size

‘java.lang.OutOfMemoryError: Requested array size exceeds VM limit’ occurs when your application attempts to create an array that exceeds the maximum allowable size imposed by the JVM, which is approximately Integer.MAX_VALUE (2,147,483,647) elements; on HotSpot, the practical cap is a few elements lower to leave room for the array header. Even if you have sufficient heap memory available, this error will still be thrown if you try to create an array larger than the imposed size limit.
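A minimal sketch that reproduces the error on typical HotSpot JVMs (the class name is illustrative): the requested length is within int range but above the VM's per-array cap, so the allocation fails no matter how large -Xmx is.

public class ArrayLimitDemo {
    public static void main(String[] args) {
        // Requesting Integer.MAX_VALUE elements exceeds HotSpot's per-array limit,
        // so this fails with 'Requested array size exceeds VM limit' regardless of heap size.
        int[] tooBig = new int[Integer.MAX_VALUE];
        System.out.println(tooBig.length);
    }
}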
What causes “java.lang.OutOfMemoryError: Requested array size exceeds VM limit”?
JVM throws ‘OutOfMemoryError: Requested array size exceeds VM limit’ under the following conditions:
- Parsing/Loading Large Files: Trying to load or parse very large files (e.g., reading an entire file into a byte array) without chunking can push array size beyond the safe limits.
- Data Structure Pre-Allocation: Some frameworks or poorly written utilities may try to pre-allocate massive arrays assuming they will be used to store large datasets in-memory, which may not be practical.
- Incorrect Calculations for Array Size: A bug or miscalculation in the code, such as multiplying large values, can cause the array size to exceed the valid integer range.
What are the Solutions for ‘OutOfMemoryError: Requested array size exceeds VM limit’?
Following are the potential solutions to fix Requested array size exceeds VM limit error:
- Increase Heap Size: You can increase the maximum heap size (-Xmx) when running your Java application. This allows more memory to be allocated to your application, potentially allowing larger arrays to be created. However, be cautious as increasing heap size may have performance implications and may not always be a feasible solution, especially in memory-constrained environments.
- Reduce array size: Try to see whether you can reduce the array size. If creating such a large array is unavoidable, consider alternative approaches such as using different data structures or processing data in smaller chunks.
4. OutOfMemoryError: Metaspace

When more class definitions and method definitions are created in the ‘Metaspace’ region than the allocated Metaspace memory limit (i.e., ‘-XX:MaxMetaspaceSize’) allows, the JVM will throw ‘java.lang.OutOfMemoryError: Metaspace’.
What are the Common Causes of “java.lang.OutOfMemoryError: Metaspace”?
This error is triggered by the JVM under the following circumstances:
- Creating a large number of dynamic classes: Your application uses scripting languages such as Groovy, or Java Reflection, to create new classes at runtime (a minimal sketch follows this list).
- Loading a large number of classes: Either your application itself has a lot of classes, or it uses a lot of 3rd-party libraries/frameworks which have a lot of classes in them.
- Loading a large number of class loaders: Your application is loading a lot of class loaders.
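Here is that sketch (class names are illustrative): the loop defines a new proxy class in a fresh class loader on each iteration and retains it, steadily filling Metaspace. It uses the long-standing (now deprecated, but still functional) Proxy.getProxyClass API; run it with a low limit such as -XX:MaxMetaspaceSize=64m to reproduce ‘OutOfMemoryError: Metaspace’ quickly.

import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class MetaspaceLeakDemo {
    public static void main(String[] args) {
        List<Class<?>> retained = new ArrayList<>();
        while (true) {
            // A fresh class loader forces generation of a brand-new proxy class.
            ClassLoader loader = new ClassLoader() { };
            Class<?> proxyClass = Proxy.getProxyClass(loader, Runnable.class);
            retained.add(proxyClass); // retaining the class pins its loader and metadata in Metaspace
        }
    }
}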
What are the Solutions for ‘OutOfMemoryError: Metaspace’?
Following are the potential solutions to fix Metaspace error:
- Increase Metaspace Size: If OutOfMemoryError surfaced due to increase in number of classes loaded, then increase the JVM’s Metaspace size (-XX:MetaspaceSize and -XX:MaxMetaspaceSize). This solution is sufficient to fix most of the ‘OutOfMemoryError: Metaspace’ errors, because memory leaks rarely happen in the Metaspace region.
- Fix Memory Leak: Analyze memory leaks in your application using the approach given in this post. Ensure that class definitions are properly dereferenced when they are no longer needed to allow them to be garbage collected.
5. OutOfMemoryError: Permgen Space

When more class definitions and method definitions are created in the ‘Permgen space’ region than the allocated Permgen memory limit (i.e., ‘-XX:MaxPermSize’) allows, the JVM will throw ‘java.lang.OutOfMemoryError: PermGen space’.
What are the Common Causes of “java.lang.OutOfMemoryError: Permgen space”?
This error is triggered by the JVM under the following circumstances:
- Creating a large number of dynamic classes: Your application uses scripting languages such as Groovy, or Java Reflection, to create new classes at runtime.
- Loading a large number of classes: Either your application itself has a lot of classes, or it uses a lot of 3rd-party libraries/frameworks which have a lot of classes in them.
- Loading a large number of class loaders: Your application is loading a lot of class loaders.
What are the Solutions for ‘OutOfMemoryError: Permgen space’?
Following are the potential solutions to fix Permgen space error:
- Increase Permgen Size: If the OutOfMemoryError surfaced due to an increase in the number of classes loaded, then increase the JVM’s Permgen size (-XX:PermSize and -XX:MaxPermSize). This solution is sufficient to fix most ‘OutOfMemoryError: PermGen space’ errors, because memory leaks rarely happen in the Permgen region.
- Fix Memory Leak: Analyze memory leaks in your application using the approach given in this post. Ensure that class definitions are properly dereferenced when they are no longer needed to allow them to be garbage collected.
6. OutOfMemoryError: Unable to create new native threads

When the application attempts to create more threads than the available RAM (native memory) capacity can accommodate, the JVM will throw ‘java.lang.OutOfMemoryError: unable to create new native thread’.
What are the Common Causes of Java’s “Unable to create new native threads” Error?
This error is triggered by the JVM under the following circumstances:
- Thread Leak due to Buggy Code: A bug in the code can cause the application to inadvertently create a lot of new threads. This leads to a buildup of unused threads, eventually exhausting the available native memory and resulting in OutOfMemoryError (a minimal sketch follows this list).
- Lack of RAM capacity: When there is a lack of RAM capacity in the container/device in which the application is running.
- More processes in Memory: When other processes are running on the container/device, they leave less room for threads to be created in native memory.
- Kernel Limit: By default, the kernel sets a limit on the number of threads each process can create. The error surfaces when the application creates more threads than the allowed kernel limit.
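Here is that sketch (the class name is illustrative): each thread parks forever, so its stack is never released, and the process eventually exhausts native memory or hits the kernel's thread limit.

public class ThreadLeakDemo {
    public static void main(String[] args) {
        long count = 0;
        while (true) {
            new Thread(() -> {
                try {
                    Thread.sleep(Long.MAX_VALUE); // park forever, holding the thread stack
                } catch (InterruptedException ignored) {
                }
            }).start();
            System.out.println("Threads created: " + ++count);
        }
    }
}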
What are the Solutions for ‘OutOfMemoryError: Unable to create new native threads’?
Following are the potential solutions to fix Unable to create new native threads error:
- Fix Thread Leak: Analyze the thread dump of your application and identify the leaking threads. Instrument a fix to ensure that threads are properly terminated after they complete executing their tasks.
- Increase RAM capacity: Try to run your application on a container/device which has larger RAM capacity.
- Reduce other processes: Terminate (or move) other processes that are running on the container/device, so that there is more room for the java application to create new threads.
- Reduce thread stack size: When you reduce the thread stack size (using the -Xss JVM argument), your application can create a larger number of threads within the same amount of memory. However, be cautious when you pursue this option, as reducing the thread stack size can result in StackOverflowError.
- Change the kernel's per-process thread limit: By default, the kernel sets a limit on the number of threads each process can create. If the OutOfMemoryError is happening because of this limit, then you can increase it using the ‘ulimit -u’ setting.
7. OutOfMemoryError: Direct buffer memory

When more objects are created in the Direct Buffer region than the allocated direct buffer memory limit (i.e., ‘-XX:MaxDirectMemorySize’) allows, the JVM will throw ‘java.lang.OutOfMemoryError: Direct buffer memory’.
What are the Common Causes of “Direct buffer memory” OutOfMemoryError in Java?
This error is triggered by the JVM under the following circumstances:
- Memory Leak due to Buggy code: If your application is not properly releasing direct buffers after use, they can accumulate over time and eventually exhaust the available direct buffer memory.
- High Rate of Allocation: If your application is allocating direct buffers at a very high rate and not releasing them promptly, it can quickly consume the available memory.
- Switching from Spring RestTemplate to WebClient: Spring Boot is a popular framework for Java enterprise applications. One common method of integrating with internal or external applications is through RestTemplate APIs. Modern versions of Spring advocate using the Java NIO-based WebClient for better performance. While the NIO-based WebClient delivers better performance, it shifts object creation from the heap memory region to the Direct Buffer region. Thus, when you make this shift, it can create memory pressure in the Direct Buffer region (a minimal sketch follows this list).
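Here is that sketch (the class name is illustrative): each iteration allocates 1 MB of native memory outside the heap and retains it. With a cap such as -XX:MaxDirectMemorySize=128m, the JVM soon throws ‘OutOfMemoryError: Direct buffer memory’.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferLeakDemo {
    public static void main(String[] args) {
        List<ByteBuffer> retained = new ArrayList<>();
        while (true) {
            // allocateDirect() consumes native memory, not heap memory.
            retained.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MB, never released
        }
    }
}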
What are the Solutions for ‘OutOfMemoryError: Direct buffer memory’?
Following are the potential solutions to fix Direct buffer memory error:
- Fix Memory Leak: Analyze memory leak in the Direct Buffer memory region, ensure that objects are properly dereferenced when they are no longer needed to allow them to be garbage collected.
- Increase Direct Buffer size: If the OutOfMemoryError surfaced due to an increase in traffic volume, then increase the JVM’s Direct Buffer memory region size (-XX:MaxDirectMemorySize).
- Upgrade to Java 17 (or above): Enhancements have been made in Java 17 to use the Direct Buffer memory region more effectively. Thus, if you happen to be running on a version lower than Java 17, upgrade. Here is a case study that showcases the performance optimization of the Direct Buffer memory region in Java 17.
8. OutOfMemoryError: Kill Process or Sacrifice Child

When there is a lack of RAM capacity in the container/device, the kernel will terminate memory-consuming processes to free up RAM. If the terminated process turns out to be a Java application, this surfaces as ‘java.lang.OutOfMemoryError: Kill process (java) or sacrifice child’.
What are the Common Causes of “java.lang.OutOfMemoryError: Kill process or sacrifice child”?
This error is triggered under the following circumstances:
- More processes in the device: When a lot of other processes are running on the container/device, it leaves less memory for the Java application to run.
- Initial and Max Heap size set to different values: If the initial heap size (i.e., -Xms) is configured at a lower value than the max heap size (i.e., -Xmx), then the Java application’s memory footprint will grow at runtime. If there is a lack of RAM capacity during that growth, the kernel will terminate the Java application, producing this error.
- Native Memory region growing: Even when the initial and max heap sizes are set to the same value, the native memory region of the JVM can grow at runtime. If native memory is growing and there is a lack of RAM capacity, then the kernel can terminate the Java application, producing this error.
What are the Solutions for ‘OutOfMemoryError: Kill process (java) or sacrifice child’?
Following are the potential solutions to fix Kill process (java) or sacrifice child error:
- Increase RAM capacity: Try to run your application on a container/device which has larger RAM capacity.
- Reduce other processes: Terminate (or move) other processes that are running on the container/device, so that there is enough memory for the Java application.
- Set initial Heap and Max Heap to the same value: When you set the initial heap size (i.e., -Xms) and max heap size (-Xmx) to the same value, the JVM is allocated the maximum heap size right at startup, so its memory allocation will not grow or shrink at runtime (see the example after this list). The kernel typically terminates applications that are constantly demanding more memory; a fixed allocation makes it less likely that the kernel will terminate the Java application mid-flight.
- Fix Leak in Native Memory: Sometimes there could be a leak in the Native Memory as well. There could be a thread leak, or direct buffer leak – which can cause increased memory consumption. Instrument proper fix to arrest those leaks.
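For example, a launch command with matching initial and max heap sizes looks like this (the heap size and jar name are illustrative):

java -Xms4g -Xmx4g -jar my-app.jar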
9. OutOfMemoryError: reason stack_trace_with_native_method

‘OutOfMemoryError: reason stack_trace_with_native_method’ happens when the Java Virtual Machine (JVM) encounters a situation where it is unable to allocate sufficient memory for the execution stack of a thread containing native method invocations.
NOTE: ‘java.lang.OutOfMemoryError: reason stack_trace_with_native_method’ occurs only if your application uses JNI (Java Native Interface) to interact with native libraries. Most applications don’t use JNI.
What are the Common Causes of “OutOfMemoryError: reason stack_trace_with_native_method”?
This error is triggered by the JVM under the following circumstances:
- Heavy Usage of Native Methods: Native methods are functions implemented in languages other than Java, such as C or C++. Excessive usage of native methods can lead to increased memory consumption and stack overflow issues.
- Recursive Native Method Calls: Recursive calls to native methods can result in a rapidly growing call stack, eventually exhausting the available stack memory allocated to the thread.
What are the Solutions for ‘OutOfMemoryError: reason stack_trace_with_native_method’?
Following are the potential solutions to fix reason stack_trace_with_native_method error:
- Analyze Stack Traces: Examine the stack traces provided in the error logs to identify the sequence of native method invocations leading to the error. This can help pinpoint the source of the problem and guide troubleshooting efforts.
- OS native tools: You might need to use the following Operating System native tools to diagnose the issue (example invocations follow this list):
- a) DTrace: A powerful dynamic tracing framework available on certain Unix-like operating systems (e.g., macOS, FreeBSD). DTrace allows you to observe system behavior in real-time, making it useful for analyzing performance and diagnosing memory issues.
- b) pmap: This command-line utility provides detailed information about the memory mappings of a process, including memory usage and allocation. By examining the memory map of a Java process, you can identify memory-intensive areas and potential memory leaks.
- c) pstack: Used for printing the call stack of a running process, pstack can help identify the sequence of function calls leading to stack overflow errors caused by recursive native method invocations. Analyzing the call stack can reveal patterns or bottlenecks that contribute to memory exhaustion.
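For example, on Linux/Unix systems the latter two are invoked as follows (replace <pid> with the Java process ID; pstack availability varies by distribution):

pmap -x <pid>    # per-mapping memory usage of the process
pstack <pid>     # native call stacks of all threads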
Conclusion
Understanding the different types of OutOfMemoryErrors (OOME) in Java is essential for diagnosing and resolving memory issues effectively. Each type of OOME points to a specific area of concern within your application, from heap space limitations to GC overhead, and more. By familiarizing yourself with these errors, you can better anticipate potential problems and implement preventive measures to ensure your applications run smoothly.
To recap, we have explored the nine types of OOME in Java, drawing parallels to the nine planets in our solar system. With this knowledge, you are now better equipped to handle memory-related issues and maintain the stability and performance of your Java applications.
FAQ
Q.1 – What is the difference between Heap Space and Metaspace OutOfMemoryError?
JVM memory has two parts:
a. Heap Memory which includes Young Generation and Old Generation
Heap Memory is where your application objects live—things like Customer, ArrayList, HashMap, etc. When this space runs out and the Garbage Collector can’t reclaim memory, the JVM throws: ‘OutOfMemoryError: Java heap space‘
b. Native Memory which includes Metaspace, Threads, Code Cache, Garbage Collector (GC) metadata, Direct Buffers, JNI allocations, and more
Metaspace, on the other hand, resides in Native Memory. It’s where the JVM stores class metadata—definitions of classes loaded by your application. If your app loads too many third-party JARs or generates new classes dynamically at runtime (common with frameworks like Spring, Hibernate, or CGLIB), it can exhaust Metaspace. When that happens, you’ll see: ‘OutOfMemoryError: Metaspace‘
Q.2 – How do I increase the Java heap size to prevent OutOfMemoryError?
You can increase the Java heap size by passing the -Xmx JVM argument when starting your application. Example: If you want to set the heap size to 6 GB, you can pass the JVM argument: ‘-Xmx6g’.
If you are running in containers you can either pass the above mentioned ‘-Xmx’ argument or you can also pass ‘-XX:MaxRAMPercentage’ to allocate a percentage of the container’s memory to the JVM heap. You can learn more here: Best Practices – Java Memory Arguments for Containers
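For instance, the two styles look like this (the jar name is illustrative):

java -Xmx6g -jar my-app.jar
java -XX:MaxRAMPercentage=75.0 -jar my-app.jar    # heap sized as 75% of available (container) memory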
Note: After updating these arguments, you’ll need to restart the JVM for the new settings to take effect.
Q.3 – What are common signs of a Java memory leak that might lead to OutOfMemoryError?
Before your application throws an OutOfMemoryError, it usually gives you a few warning signs. If you catch them early, you can prevent downtime and performance issues. Here’s what you should keep an eye on:
1. Gradual Memory Increase Over Time: Have you noticed your application’s heap memory slowly creeping up, even after garbage collection? In a healthy setup, memory usage goes up and down in a “saw-tooth” pattern. But if your memory keeps growing and doesn’t drop back down after GC, it could be a sign of a memory leak. Eventually, Full GCs might run more often but free up very little memory, meaning objects are being held when they shouldn’t be.
2. CPU Spikes: Is your CPU usage suddenly spiking without warning? As your app begins to leak memory, the JVM responds by triggering garbage collection more often. GC is a CPU-heavy process—it scans memory to identify and remove unused objects. That added workload can cause sharp spikes in CPU usage, sometimes pushing it all the way to 100%.
3. Degraded Response Time and Timeouts: If your app is suddenly slow or timing out, frequent GC pauses might be the reason. While GC is running, your application’s threads can’t do much, they’re paused. This can delay transactions, trigger failed health checks, or even cause user requests to time out. The app is technically up, but it’s not able to respond the way it should.
4. OutOfMemoryError: If memory runs out and the JVM can’t recover enough space, it will throw an OutOfMemoryError. The exact message, like “Java heap space” or “GC overhead limit exceeded”, depends on which part of memory is exhausted. When you see this, it’s usually too late to avoid impact.
If any of this sounds familiar, it might be time to take a much closer look. You can learn more in this post: Symptoms of Memory Leak
Q.4 – Can the choice of Garbage Collector affect OutOfMemoryError?
When there is a hole in the bottom of a bucket, no matter how much water you pour in, it will drain out; the water will not stay in the bucket. Similarly, when there is a memory leak in the application, no GC algorithm can prevent the application from experiencing OutOfMemoryError. However, the GC algorithm can influence the type of OutOfMemoryError the JVM throws.
Different GC algorithms are designed with distinct priorities, some for low pause times, others for high throughput, and some for managing very large heaps. These design choices play a big role in how efficiently memory reclamation happens and how your application behaves under memory pressure.
If you use G1GC and haven’t tuned region sizes, you might run into ‘OutOfMemoryError: Java heap space’, because G1 can leave memory fragmented if regions are too small or object promotion fails.
If you use CMS (deprecated) and concurrent cycles can’t finish in time, you may see ‘OutOfMemoryError: GC overhead limit exceeded’: the collector is spending too much time trying (and failing) to reclaim memory.
In short, while OutOfMemoryError usually happens due to insufficient memory or memory leaks, the type of Garbage Collector you use can affect how soon it surfaces, what form it takes, and whether the application can recover.
Q.5 – What is Native Memory in the context of Java OutOfMemoryError?
Native Memory refers to the memory that the JVM allocates outside of the Java heap. It’s part of the system memory used for various internal JVM operations and native components. Key internal regions of Native Memory are:
Metaspace: Metaspace is a region in native memory where the JVM stores class metadata, including class names, methods, fields, annotations, and constant pools. Every time your application loads a new class, either from third-party libraries or through dynamic generation (e.g., proxies, CGLIB), it consumes space in Metaspace. If the number of loaded classes keeps increasing and Metaspace reaches its configured or system-imposed limit, the JVM will throw an OutOfMemoryError: Metaspace.
Threads: Each Java thread requires its own native memory stack. When your application creates too many threads, especially in systems with limited native memory or in containers with memory limits, the operating system may be unable to allocate memory for new thread stacks. In such cases, the JVM throws OutOfMemoryError: unable to create new native thread. This isn’t a heap issue but a native memory exhaustion scenario due to thread overprovisioning.
Direct ByteBuffers: Direct ByteBuffers are created outside the regular Java heap using ByteBuffer.allocateDirect(). They are often used for high-performance I/O operations. But remember, these buffers consume native memory, and their usage is not tracked by the garbage collector. If the total memory used by direct buffers exceeds the limit defined by -XX:MaxDirectMemorySize, the JVM throws an OutOfMemoryError: Direct buffer memory.
Code Cache: The Code Cache is a part of native memory region where the JVM stores machine code created by the Just-In-Time (JIT) compiler. In applications with a large codebase or frequent dynamic compilation, this cache can grow substantially over time, which can contribute to native memory pressure. If the Code Cache becomes full, you may see degraded performance due to disabled JIT compilation or rare cases of ‘OutOfMemoryError: CodeCache is full’. You can configure its size using -XX:ReservedCodeCacheSize.
JVM Internal Data Structures: The JVM also uses native memory for various internal subsystems such as garbage collector metadata, safepoint structures, and other runtime bookkeeping. These usually don’t throw a direct OutOfMemoryError, but they still add to the overall native memory usage. In memory-constrained environments like containers, the OS may forcibly shut down the JVM, when combined memory usage (heap + native) exceeds the allowed limit, resulting in OutOfMemoryError: Kill Process or Sacrifice Child.
JNI Allocations: JNI (Java Native Interface) allows Java code to interact with native libraries. These libraries can allocate memory independently of the JVM, which won’t be tracked or reclaimed by the garbage collector. If JNI memory usage grows unchecked—due to leaks or large native buffers—it can exhaust native memory. Depending on how the native code handles memory failures, this may cause a crash or lead to an unspecified OutOfMemoryError.
In a nutshell, native memory is a critical part of your application’s memory footprint. If you ignore it and tune only the heap size, you’re still at risk of OutOfMemoryError, just from a different angle.
Q.6 – How can I analyze a Heap Dump to diagnose OutOfMemoryError?
Memory leak isolation is often made to appear complicated. But in most business applications, you can isolate leaks by following these simple steps:
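If you don’t yet have a heap dump, you can capture one from a running JVM with jmap (replace <pid> with the process ID), or configure the JVM to write one automatically on OOM (the jar name is illustrative):

jmap -dump:live,format=b,file=heap.hprof <pid>
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heap.hprof -jar my-app.jar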
1. Upload Your Heap Dump: Go to the HeapHero website. You can drag and drop your .hprof file right into the interface, or, if you’re automating things, use their REST API. Whether it’s a Java or Android heap dump, HeapHero will have a detailed, interactive report ready within seconds to a couple of minutes.
2. Review ML-Based Problem Detection: HeapHero uses built-in machine learning to scan your heap dump for common issues—like memory leaks, duplicate objects, inefficient data structures, and more. These issues are surfaced right at the top of your report. Just click any of them to get a closer look: you’ll see which classes or objects are involved, how much memory they’re using, and what’s keeping them alive.
3. View High-Level Metrics: Before getting deep into the reports, take a quick glance at the summary metrics:
- Total memory usage
- Object count
- Number of duplicate strings
- Leak suspects
This gives you a high-level understanding of what’s going on under the hood.
4. Take a closer look at the Dominator Tree: Next, go to the Dominator Tree section, the most important section in the report. It shows which objects are holding on to the most memory (retained size). The objects retaining the most memory are usually the ones responsible for the leak, so investigate the top nodes in this section. Use the Incoming and Outgoing References views to identify what is keeping these objects alive and what they contain that is causing the memory leak.
5. Use Specialized Views: If you wish to go further, HeapHero has you covered with focused views for specific issues:
- Class Histogram for a sorted breakdown of object counts by class. If a particular class is creating too many instances, this section will be helpful to spot it.
- Duplicate Objects to catch repeated strings or values
- GC Roots to trace memory chains
- Unreachable Objects for spotting memory waste
- Leak Suspects to zoom in on likely culprits
These views can help you narrow down specific pain points without having to sift through everything manually.
6. Share and Collaborate: Once your report is ready, you can generate a secure, shareable link for your teammates. And if you’re working in a security-sensitive environment, HeapHero also offers on-premise deployments, so your data never leaves your firewall.
Q.7 – What are some best practices for preventing OutOfMemoryError in Java applications?
You can follow the best practices mentioned below to prevent OutOfMemoryError in your Java applications. To learn about them in detail, you may read this post: Best practices for preventing Java OutOfMemoryError
1. Garbage Collection Study (During Performance Lab Testing): In performance lab testing, it’s important to study Garbage Collection behaviour under different workloads. Anomalies in GC behaviour, such as repeated Full GC cycles, degradation in GC throughput, or a rising heap usage trend, are early and clear indicators of an impending OutOfMemoryError.
2. Heap Dump Analysis (During Performance Lab Testing): Review heap dumps under test load using tools like HeapHero or Eclipse MAT to detect inefficient memory usage and leaking objects. This allows you to fix memory-heavy code paths before they cause OutOfMemoryErrors in production.
3. CI/CD Integration of GC & Heap Dump Study: By adding GC and heap dump analysis to your CI/CD pipeline, you can spot memory issues early, before they turn into production problems. Running automated checks after load or integration tests helps you catch leaks and inefficiencies when they’re easiest to fix.
4. Code Reviews: If you want to prevent memory bloat before it turns into a runtime headache, we’d recommend you to make memory-focused checks a part of your code reviews and look out for static leaks, misused ThreadLocals, and collections that can grow unchecked. Also try to use smart patterns like streaming, pagination, and immutable objects to keep your memory usage in check. This helps you avoid in-memory collections or caches that grow without limits, unless you have a clear eviction strategy in place.
5. Long Running Endurance Tests: Most OutOfMemoryErrors don’t happen instantly; the application has to run for a long duration for the memory leak to build up and result in OutOfMemoryError. Thus, in the performance lab, conduct long-running endurance tests with realistic workloads to spot slow memory leaks and observe how GC behaves over time. These extended tests often catch issues that quick test runs miss.
6. Track Memory Metrics in Production: Keep an eye on key memory metrics like heap usage, GC pause times, GC Throughput and Metaspace growth through real-time dashboards and alerts. Catching unusual memory patterns early can help you steer clear of sudden memory crashes.
Q.8 – How does the Metaspace OutOfMemoryError relate to class loading?
The Metaspace region in JVM’s native memory holds class definitions and ClassLoader metadata. Every time your application loads a new class, it’s stored in this region.
If your application does any of the following:
- Loads a large number of third-party JAR files
- Uses Java Reflection extensively, generating many classes at runtime
- Creates dynamic classes (like proxies, lambdas, or CGLIB-based classes)
…it will cause a rapid increase in the number of class definitions stored in Metaspace.
If the allocated Metaspace size (-XX:MaxMetaspaceSize) isn’t sufficient to hold these classes, the JVM will eventually throw: OutOfMemoryError: Metaspace
To understand this error in detail and how to avoid it, refer to: OutOfMemoryError: Metaspace.
Q.9 – Are OutOfMemoryErrors always fatal to a Java application?
OutOfMemoryError is often fatal, but not always; it depends on where and how the error occurs.
When Is It Fatal?
Main application threads throw the error: If the error occurs while a core thread is allocating memory (e.g., adding to a list, creating objects), the JVM might not be able to continue. You’ll likely see the application crash, hang, or behave unpredictably.
Garbage Collector fails: If the JVM throws the error during GC, it may panic and shut down, because it assumes it can no longer recover memory.
Native memory gets exhausted: Once class loading fails due to Metaspace exhaustion or native memory runs out, critical JVM operations may halt.
When It May Not Be Fatal?
Caught in a try-catch block: Technically, OutOfMemoryError is a subclass of Error, so it can be caught (see the sketch after this list). But recovery is usually unsafe unless you’re doing something minimal (like logging or releasing a reference).
Background or auxiliary thread hits OOME: If the error happens in a non-critical thread and the application handles it gracefully, the app might survive — though this is rare and risky.
Specific errors like GC overhead limit exceeded: Sometimes you can configure the JVM to not halt on certain OOMEs. Example: -XX:-UseGCOverheadLimit disables that check. But continuing execution often leads to deeper instability.
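Here is that sketch (the class name is illustrative): OutOfMemoryError can technically be caught, but the handler should do as little as possible.

public class CatchOomeDemo {
    public static void main(String[] args) {
        try {
            byte[] huge = new byte[Integer.MAX_VALUE - 2]; // almost certainly fails on a default-sized heap
            System.out.println(huge.length);
        } catch (OutOfMemoryError e) {
            // Keep the handler minimal: log and release references only.
            System.err.println("Allocation failed: " + e.getMessage());
        }
    }
}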
Once an OutOfMemoryError occurs, your JVM is likely in an unstable state. Even if it doesn’t crash immediately, you can’t trust the application to behave correctly anymore. The best approach is to capture heap dump, restart the JVM, and investigate the root cause.
Q.10 – How do containerization (like Docker) and OutOfMemoryError interact?
Containerization platforms like Docker introduce another layer of complexity when it comes to managing memory, and they directly influence how and when OutOfMemoryError occurs in Java applications.
How do Docker memory limits work with the JVM? When you run a Java application inside a Docker container, the container might have less memory than the physical machine. If you don’t explicitly configure the JVM to respect the container’s memory limit, it might assume it can use all of the host’s memory. This mismatch often leads to memory overuse, triggering an abrupt termination or OutOfMemoryError.
Prior to Java 10 (and Java 8u191), the JVM doesn’t detect container limits at all. From those versions onward, container awareness is controlled by the following flag, which is enabled by default:
-XX:+UseContainerSupport # For Java 8u191+ and Java 10+
Common Errors in Containerized Java Apps are:
OutOfMemoryError: Java heap space: This happens when your app uses more heap than the assigned -Xmx value and GC can’t free up memory fast enough. In a container, this often indicates that the heap is under-provisioned or that the JVM is not respecting the container’s memory limits. To learn more about causes & solutions, you may refer to this post: OutOfMemoryError: Java heap space
OutOfMemoryError: Metaspace: Similar to heap issues, if too many classes are loaded and Metaspace usage exceeds the -XX:MaxMetaspaceSize value, this error may occur, especially in apps with frequent redeployments or dynamic proxies. To learn more about causes & solutions, you may refer to this post: Java OutOfMemoryError: Metaspace
OutOfMemoryError: unable to create new native thread: This error happens when the container’s limited memory can’t support creating more application threads. This is common when thread pools aren’t properly bounded. To learn more about causes & solutions, you may refer to this post: Java OutOfMemoryError: Unable to Create New Native Threads
Killed / OOMKilled (no stacktrace): This is not a Java exception but a message from the container runtime (like Docker or Kubernetes). It means that your app exceeded the container’s memory limit, and the OS forcibly shut down the process. To learn how to troubleshoot this error, you may refer to this post: Java OutOfMemoryError: Kill Process or Sacrifice Child
Here are a few best practices to prevent OOM in containers (a combined example follows the list):
1. Use container-aware memory settings: -XX:MaxRAMPercentage, -XX:InitialRAMPercentage, and -XX:MaxMetaspaceSize
2. Set up -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=/mount/heap so you can retrieve dumps even if OOM happens
3. Monitor cgroup metrics to correlate JVM behavior with container memory limits
4. Use lightweight base images and tune direct memory with -XX:MaxDirectMemorySize
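Putting these together, a container-aware launch command might look like this (sizes and jar name are illustrative; the dump path comes from the practice above):

java -XX:MaxRAMPercentage=75.0 \
     -XX:MaxMetaspaceSize=256m \
     -XX:MaxDirectMemorySize=256m \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/mount/heap \
     -jar my-app.jar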
In a nutshell, if you’re not container-aware when tuning JVM memory, you’re inviting OutOfMemoryError, or worse, a silent OOMKilled. If you want to avoid unexpected crashes and catch issues early, we recommend that you always configure your JVM to respect container boundaries, monitor native memory usage, and proactively analyze GC and heap behavior.

Comments

If I’m running a small Java application, what’s a safe default heap size to start with, and how do I decide when to increase it?
Great question, Annya!
For small Java apps, a safe starting point is usually 256–512 MB (-Xmx256m or -Xmx512m).
From there:
1. Monitor GC logs and heap usage under normal and peak load.
2. If you see frequent GC with little memory being freed, or the app slows under expected traffic, it’s time to increase.
3. Always adjust gradually and validate with load tests to avoid over-allocating.
If the JVM is killed by the OS with OOMKilled (no stack trace), how can we confirm whether it was a heap issue, a native memory issue, or the container memory limit?
Great question, Sunitha!
When the JVM is killed by the OS (OOMKilled) without a stack trace, you can narrow it down by checking:
1. Heap issue → Look at -Xmx vs actual usage in GC logs/metrics.
2. Native memory issue → Check off-heap allocations, direct buffers, threads.
3. Container memory limit → Inspect Docker/K8s logs (kubectl describe pod) to see if it hit the container’s cgroup limit.
Correlating these with system monitoring tools (top, dmesg, container logs) usually points to the culprit.
For Requested array size exceeds VM limit, how can we identify whether it’s caused by parsing large files or by faulty array size calculations?
Great question, Jessy!
For “Requested array size exceeds VM limit”, you can usually differentiate it this way:
1. Parsing large files → You’ll see array allocations proportional to file size in heap dumps or logs.
2. Faulty array size calculations → Arrays are requested with unrealistic sizes (e.g., Integer overflow or unbounded logic).
Heap dump analysis tools like HeapHero or Eclipse MAT can quickly show which code path triggered the allocation.
Can Garbage Collector tuning alone prevent OOM errors, or do we always need application-level fixes?
Good question, Santhosh!
GC tuning can delay OOM errors by reclaiming memory more efficiently, but it can’t fix the root cause if the app is holding on to too many objects or allocating excessively. In most cases, you’ll need a mix of application-level fixes (e.g., leak prevention, better data structures) along with sensible GC tuning for stability.
Which OutOfMemoryError would be the most common one when we’re using container runtimes like Docker or Kubernetes?
Great question, Deepikha!
In containerized environments (Docker/K8s), the most common OOM is when the container’s memory limit is hit. This often shows up as OutOfMemoryError: Java heap space or simply OOMKilled by the orchestrator. Careful alignment of -Xmx with container limits (and leaving room for non-heap memory) is key to avoiding this.
This is one of the best articles I have ever read in the JVM space.
I have one question about the OOM error related to direct buffer memory:
Which version of Spring demonstrates this issue?
Thanks a lot, Unni — glad you found the article useful!
Regarding direct buffer memory OOMs: this isn’t tied to a specific Spring version, but rather to how frameworks (including Spring) use NIO/direct buffers. It often happens when apps allocate buffers off-heap (e.g., Netty or JDBC drivers) without releasing them properly. Tuning -XX:MaxDirectMemorySize and ensuring proper cleanup usually helps.