Java memory leaks not only pose a serious threat to application availability, but they are also tricky to diagnose. In this post, we give a detailed overview of Java memory leaks: what causes them, what their symptoms are, and how to troubleshoot them effectively. We'll also go one step further and explain how to catch them early, during the development phase itself. Let's start with the symptoms:
Symptoms of Java Memory Leak
When a Java application suffers from a memory leak, it shows the following symptoms:
- Gradual Memory Increase Over Time
- CPU Spike
- Degraded Response Time/Timeouts
- OutOfMemoryError
Let’s try to understand these symptoms in detail:
Why Are Memory Leaks Hard to Troubleshoot?
When a memory leak occurs, the symptoms appear very late. An application may run fine for days or even weeks before suddenly crashing. By the time you see the failure, the code responsible for the leak may have been executed millions of times, making root-cause analysis extremely difficult.
On top of that, memory leaks are hard to reproduce outside production. Logs and monitoring tools often look normal until the JVM runs out of memory, leaving you with limited data and a lot of guesswork.
This combination of rarity, delayed symptoms, and poor reproducibility makes JVM memory leaks among the hardest production issues to solve.
If you want to build the skills needed to diagnose and fix these problems with confidence, you can refer to the JVM Performance Master Class, which covers real-world memory leak patterns, heap dump analysis, and practical troubleshooting techniques.
Note: For a detailed walkthrough on generating and analyzing Java heap dumps to detect memory leaks, you may find this guide useful: Analyzing Java Heap Dumps for Memory Leak Detection.
1. Gradual Memory Increase Over Time

Fig: Memory consumption pattern of a healthy application
Above is the memory consumption pattern of a healthy application. You can notice that memory consumption grows until it reaches a peak, at which point a Full Garbage Collection event (the red triangle) is triggered. After that, memory consumption drops all the way back down. You can see this saw-tooth pattern repeating continuously.
Now let's review the memory consumption pattern of an application suffering from a memory leak.

Fig: Gradual Memory Increase in the application suffering from Memory Leak
In the above graph, you can observe that when the first Full Garbage Collection event ran, memory usage dropped to 4 GB. But as time progresses, memory usage keeps gradually increasing and never drops back to 4 GB, despite Full GC events. Toward the right end of the graph, Full Garbage Collection events run repeatedly with no memory reclamation at all. This is the classic pattern indicating that an application is suffering from a memory leak.
2. CPU Spike
When a memory leak worsens, the garbage collector starts to run frequently to recover memory (as you can see at the right end of the graph in the previous section). As you might be aware, garbage collection is a CPU-intensive operation: it must scan a large number of objects in memory and traverse their reference hierarchies all the way to the GC roots to determine whether each object still has active references, and then evict all unreferenced objects. This requires a lot of CPU cycles. When a memory leak worsens, garbage collection has to run continuously, and the application's CPU consumption can climb to 100%. Thus it's a golden rule: whenever a memory leak worsens, CPU consumption will rise toward 100%.
3. Degraded Response Time/Timeouts
As mentioned in the previous section, when a memory leak worsens, the garbage collector starts to run frequently to recover memory. Besides consuming a lot of CPU cycles, the garbage collector also pauses your application threads. It does not allow application threads to run during collection, because they would keep modifying object references in memory and interfere with the collector's work. While application threads are paused, no customer transactions are processed.
This means that if garbage collection runs continuously, application threads are paused continuously. As a result, the application's overall transaction response time degrades, transactions start to time out, and application health checks begin to fail.
4. OutOfMemoryError
When a memory leak becomes severe, no more objects can be created in memory. At this point, the JVM throws java.lang.OutOfMemoryError. There are 9 types of OutOfMemoryError; based on the JVM memory region in which the leak is happening, the JVM throws the appropriate type.
Java Memory Model
To better understand Java memory leaks, we first need to understand the JVM memory model. Here is a video clip that gives a good introduction to the different JVM memory regions. In a nutshell, the JVM has the following memory regions:

Fig: JVM Memory Regions
- Young Generation: Newly created application objects are stored in this region.
- Old Generation: Application objects that are living for longer duration are promoted from the Young Generation to the Old Generation. Basically, this region holds long-lived objects.
- Metaspace: Class definitions, method definitions and other metadata that are required to execute your program are stored in the Metaspace region. This region was added in Java 8. Before that metadata definitions were stored in the PermGen. Since Java 8, PermGen was replaced by Metaspace.
- Threads: Each application thread requires a thread stack. The space allocated for thread stacks, which contain method call information and local variables, belongs to this region.
- Code Cache: Compiled native code (machine code) of methods is stored in this region for efficient execution.
- Direct Buffer: ByteBuffer objects, used by modern frameworks (e.g., Spring WebClient) for efficient I/O operations, are stored in this region.
- GC (Garbage Collection): Memory required for automatic garbage collection to do its work is allocated in this region.
- JNI (Java Native Interface): Memory for interacting with native libraries and code written in other languages is allocated in this region.
- misc: Areas specific to certain JVM implementations or configurations, such as internal JVM structures or reserved memory spaces, are classified as 'misc' regions.
What is the difference between Heap Memory Leak & Native Memory Leak?
If the memory leak happens in the Java heap memory region (i.e., Young/Old Gen), then it's called a heap memory leak. If the memory leak happens in the native memory region (i.e., Metaspace, Threads, Code Cache, Direct Buffer, GC, JNI, misc), then it's called a native memory leak.
Java Heap Memory Leaks
In our Java applications, we declare Java Collection objects such as HashMap, ArrayList, TreeSet, ... as static or member variables, and we add records to these collection objects. Sometimes, due to a bug in the program or a spike in traffic volume, we keep adding records continuously without removing them. Thus, the collection object's size keeps growing over time. Once its size grows beyond the allocated heap size (i.e., the -Xmx size), the JVM throws OutOfMemoryError. This is the most common memory leak pattern. Some real-world examples are caching without expiry, storing user actions/error logs for analytics, and session management in web applications.
Let's review the sample program below, which simulates the memory leak we just described:
1: public class MapManager {
2:
3:    private HashMap&lt;Object, Object&gt; myMap = new HashMap&lt;&gt;();
4:
5:    public void grow() {
6:
7:       long counter = 0;
8:       while (true) {
9:
10:          myMap.put("key" + counter, "Large stringgggggggggggggggggggggggggggg"
11:                + "gggggggggggggggggggggggggggggggggggggggg" + counter);
12:          counter++;
13:       }
14:    }
15: }
Here in line #3, a ‘HashMap’ object is created. In line #10, records are added to this ‘HashMap’ object. In line #8, the ‘while’ condition is specified as ‘true’. (Note: Developers typically do not explicitly use ‘true’ as the condition; in real-world cases, such behavior occurs due to a logical error or missing termination condition.) Because of this, the thread will continuously add records to the ‘HashMap’, causing the memory leak. When the size of the ‘HashMap’ exceeds the JVM’s heap memory limit, the JVM will throw a ‘java.lang.OutOfMemoryError’.
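One way to fix this kind of leak is to cap the collection's size. The sketch below (a hypothetical `BoundedMapManager`, not part of the original program) uses `LinkedHashMap.removeEldestEntry()` to evict the oldest record once a limit is reached, so memory stays bounded no matter how many records are added:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedMapManager {

    private static final int MAX_ENTRIES = 1000; // illustrative limit

    // LinkedHashMap evicts the eldest entry whenever the map
    // exceeds MAX_ENTRIES, keeping memory consumption flat.
    private final Map<Object, Object> myMap =
            new LinkedHashMap<Object, Object>(16, 0.75f, false) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };

    public void put(Object key, Object value) {
        myMap.put(key, value);
    }

    public int size() {
        return myMap.size();
    }
}
```

Even if a buggy loop keeps calling `put()`, the map's size never grows beyond the configured limit.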
How to troubleshoot Java Heap Memory Leaks?
It’s a two-step process:
1. Capture Heap Dump: You need to capture a heap dump from the application right before the JVM throws OutOfMemoryError. In this post, 8 options to capture the heap dump are discussed. You might choose the option that fits your needs. My favorite option is to pass the -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=&lt;FILE_PATH_LOCATION&gt; JVM arguments to your application at startup. Example:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/tmp/heapdump.bin
When you pass the above arguments, JVM will generate a heap dump and write it to ‘/opt/tmp/heapdump.bin’ whenever OutOfMemoryError is thrown.
What is Heap Dump?
A heap dump is a snapshot of your application's memory. It contains detailed information about the objects and data structures present in memory: what objects are present, what they reference, who references them, what actual customer data they store, how much space they occupy, and whether they are eligible for garbage collection. Heap dumps provide valuable insights into an application's memory usage patterns, helping developers identify and resolve memory-related issues.
2. Analyze Heap Dump: Once a heap dump is captured, you need to use tools like HeapHero, JHat, … to analyze the dumps.
How to analyze Heap Dump?
In this section let’s discuss how to analyze heap dump using the HeapHero tool.
HeapHero is available in two modes:
1. Cloud: You can upload the dump to the HeapHero cloud and see the results.
2. On-Prem: You can register here, get HeapHero installed on your local machine, and then do the analysis.
Note: I prefer the on-prem installation of the tool over the cloud edition, because heap dumps tend to contain sensitive information (such as SSNs, credit card numbers, VAT numbers, ...), and you may not want the dump to be analyzed in an external location.
Once the tool is installed, upload your heap dump to it. The tool will analyze the dump and generate a report. Let's review the report generated for the above program.

From the above screenshot, you can see HeapHero reporting that a problem has been detected in the application; it points out that the ‘MapManager’ class is occupying 99.96% of overall memory. The tool also provides an overall overview of the memory (heap size, class count, object count, ...).

Above is the ‘Largest Objects’ section from HeapHero's heap dump analysis report. This section shows the largest objects present in the application. In the majority of cases, the top 2-3 largest objects are responsible for the memory leak. Let's see the important information provided in this ‘Largest Objects’ section.

If you look at #4 and #5, you can see the actual data present in memory. Equipped with this information, you know what the largest objects in the application are and what values they hold. If you want to see who created, or is holding on to references of, the largest objects, you can use the tool's ‘Incoming References’ feature. When this option is selected, it displays all the objects that reference the ‘MapManager’ class.

From this report you can notice that MapManager is referenced by Object2, which is in turn referenced by Object1, which in turn is referenced by MemoryLeakDemo.
Common Java Heap Memory Leaks
The following section summarizes common Java heap memory leak causes, with concise examples to help identify them quickly in code.
1. Ever-Growing Data Structure
Sometimes objects are continuously added to collections such as HashMap or ArrayList without proper cleanup. Over time, these collections grow unboundedly and cause memory leaks.
Example:
list.add(obj); // never removed
2. Unclosed Resources (Session, Connection…)
Certain critical resources, such as database connections, HTTP sessions, or file streams, may not be properly closed in all execution paths. When they remain unclosed, the internal references they hold cannot be garbage collected. Accumulation of unclosed resources will eventually cause a memory leak.
Connection conn = ds.getConnection(); // not closed
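The standard fix is try-with-resources (e.g., `try (Connection conn = ds.getConnection()) { ... }`), which guarantees `close()` runs on every execution path. Since no live DataSource is available here, the sketch below demonstrates the pattern with a hypothetical `TrackedResource` that records whether it was closed:

```java
public class TrackedResource implements AutoCloseable {

    private boolean closed = false;

    public void use() {
        if (closed) throw new IllegalStateException("resource already closed");
    }

    @Override
    public void close() {
        closed = true; // release underlying native/OS resources here
    }

    public boolean isClosed() {
        return closed;
    }

    // try-with-resources closes the resource even if use() throws
    public static TrackedResource openAndUse() {
        TrackedResource leakedRef;
        try (TrackedResource r = new TrackedResource()) {
            leakedRef = r;
            r.use();
        }
        return leakedRef; // returned only so callers can verify it was closed
    }
}
```

The same shape applies to JDBC connections, file streams, and any other `AutoCloseable`.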
3. Static Field Holding References
Objects referenced by static fields remain in memory for the entire JVM lifetime. If these static fields point to large collections or object graphs, they will not be reclaimed by the Garbage Collector.
static Map cache = new HashMap<>();
4. ThreadLocal Not Cleared
Values stored in ThreadLocal persist as long as the underlying thread lives. In thread-pool-based servers, this can cause objects to remain in memory much longer than intended if remove() is not called. For more details, refer to Memory Leak due to uncleared ThreadLocal.
threadLocal.set(obj); // no remove()
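In a thread pool, the worker thread outlives the request, so `remove()` should be called once the work is done, typically in a `finally` block. A minimal sketch (the request-handling method is hypothetical):

```java
public class ThreadLocalCleanup {

    private static final ThreadLocal<byte[]> REQUEST_BUFFER = new ThreadLocal<>();

    // Simulates one request handled on a pooled thread: the buffer is
    // cleared in finally, so the pooled thread does not keep the
    // request payload alive after the request completes.
    public static void handleRequest() {
        REQUEST_BUFFER.set(new byte[1024]);
        try {
            // ... process the request using REQUEST_BUFFER.get() ...
        } finally {
            REQUEST_BUFFER.remove();
        }
    }

    public static boolean bufferCleared() {
        return REQUEST_BUFFER.get() == null;
    }
}
```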
5. Wrong equals() and hashCode() Implementations
When keys of Java Collection objects (such as Map or Set) are mutated after insertion, the corresponding entries can no longer be looked up, yet they remain retained in the collection. When keys are continuously mutated, a large set of such orphaned entries accumulates, eventually causing a memory leak in the application. For more details, refer to Memory Leak Due To Mutable Keys in Java Collections.
Map&lt;User, String&gt; map = new HashMap&lt;&gt;();
User user = new User("abc@ycrash.io");
map.put(user, "data");
user.email = "xyz@ycrash.io"; // key mutated
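The effect can be reproduced with a small self-contained sketch (the `User` class below is hypothetical): once the key is mutated, its hash code changes, so the entry can no longer be found, yet it still occupies the map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class MutableKeyDemo {

    static class User {
        String email;
        User(String email) { this.email = email; }

        @Override
        public boolean equals(Object o) {
            return o instanceof User && Objects.equals(email, ((User) o).email);
        }

        @Override
        public int hashCode() {
            return Objects.hash(email);
        }
    }

    public static Map<User, String> buildAndMutate() {
        Map<User, String> map = new HashMap<>();
        User user = new User("abc@ycrash.io");
        map.put(user, "data");
        user.email = "xyz@ycrash.io"; // key mutated after insertion
        return map;
    }

    // The entry is still retained in the map...
    public static int retainedEntryCount() {
        return buildAndMutate().size();
    }

    // ...but a lookup with the original key no longer finds it,
    // because the stored key's hash was computed at insertion time.
    public static boolean lookupWithOriginalKeySucceeds() {
        return buildAndMutate().get(new User("abc@ycrash.io")) != null;
    }
}
```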
6. Listener and Callback Not Unregistered
Objects registered as event listeners or callbacks are referenced by the publisher. If listeners are not explicitly unregistered after use, they and the objects they reference will accumulate in the heap, causing memory leaks.
EventBus.register(this); // missing: EventBus.unregister(this);
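A sketch with a hypothetical minimal registry illustrates the mechanics: the publisher holds a strong reference to every registered listener until it is explicitly unregistered:

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerRegistry {

    private final List<Runnable> listeners = new ArrayList<>();

    public void register(Runnable listener) {
        listeners.add(listener); // publisher now keeps the listener alive
    }

    public void unregister(Runnable listener) {
        listeners.remove(listener); // drop the reference so it can be GC'd
    }

    public int count() {
        return listeners.size();
    }
}
```

Real event buses work the same way; forgetting the `unregister` call leaves every listener (and its whole object graph) reachable forever.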
7. Wrong Cache Implementation
Sometimes caches are configured without proper eviction policies, with very long TTLs (time-to-live), or with overly large size limits, which lets the cache grow indefinitely, ultimately causing a memory leak in the application.
cache.put(key, value); // no eviction
8. Holding Large Objects Longer Than Needed
Large objects such as byte arrays, request payloads, heap dumps, or in-memory files are sometimes held in memory longer than necessary, which causes spikes in heap consumption.
byte[] bigData = new byte[100_000_000];
9. Accidental Memory Retention via Lambdas & Closures
Lambdas and anonymous functions may capture references to large objects. When these lambdas are stored in executors, schedulers, or queues, they can unintentionally prolong object lifetimes.
byte[] big = new byte[100_000_000];
executor.submit(() -> process(big)); // big cannot be GC’d
It can be fixed by capturing only what is required:
int size = big.length;
executor.submit(() -> processSize(size));
Tools to Troubleshoot Memory Leaks
In order to troubleshoot memory leaks, engineers have the following tools at their disposal:
| Tool | Description |
| --- | --- |
| Garbage Collection Log | The GC log is a rich source of information that gives a detailed overview of what's happening in JVM heap memory: how many objects are created, how many are reclaimed, whether a memory leak trend is observed, ... You can enable and analyze the GC log using tools like GCeasy, HP JMeter, IBM GC & Memory Visualizer, and Google Garbage Cat. It's widely used in production environments to study memory leaks; some organizations also study GC logs in performance labs to forecast memory problems. |
| Heap Dump Analyzer | Heap dump analysis is the most effective strategy for diagnosing production memory leaks. When a memory leak surfaces in production, engineers capture a heap dump from the application and analyze it using heap dump analysis tools such as HeapHero, Eclipse MAT, or JHat. With a heap dump analyzer, one can introspect the largest objects in memory, what objects they hold, and what object(s) keep them alive, which helps identify the root cause of a memory leak fairly quickly. |
| Java Profilers | Java profilers like Java Mission Control, JProfiler, YourKit, Java VisualVM, and the NetBeans Profiler are also used to analyze memory leaks. They provide metrics like live object count by class, top memory consumers, and allocation hotspots. However, profilers add overhead to the JVM, so they are typically used in pre-production environments only. |
| Static Code Analysis Tools | Static code analysis tools like SpotBugs, SonarQube, PMD, Error Prone (by Google), and IntelliJ IDEA inspections are used during the development phase to detect memory-leak-prone coding patterns. However, a memory leak is largely a run-time behaviour that depends on real workloads, object lifecycles, and reference chains, which static analysis can't fully model. Thus static code analysis is not very effective at diagnosing real memory leaks in production. It serves more as a basic sanity check, helping developers catch obvious coding issues such as unclosed resources, uncleared ThreadLocals, ... |
Java Native Memory Leaks
Java applications typically suffer from 3 common types of native memory leaks:
1. Metaspace Memory Leak
Metaspace stores class metadata in native memory. If your application dynamically loads classes (common in frameworks like Spring, Hibernate, OSGi, or servlet containers), and those classes aren’t unloaded, metaspace can keep growing until it triggers an OutOfMemoryError: Metaspace. To learn more about this error and strategies to resolve it, please read OutOfMemoryError: Metaspace.
This type of memory leak is caused by:
a. Creating a large number of dynamic classes: If your application uses scripting languages like Groovy, or Java Reflection, to create new classes at runtime.
b. Loading a large number of classes: Either your application itself has a lot of classes, or it uses a lot of 3rd-party libraries/frameworks that contain many classes.
c. Loading a large number of class loaders: Your application loads a lot of class loaders.
Learn about a real-world case study of metaspace memory leak from here: Troubleshooting Microservice’s OutOfMemoryError: Metaspace
2. Threads Leak
Each thread consumes native memory for its stack (default: 1 MB on most OSes). If your app keeps creating threads and doesn’t shut them down properly, it will eventually hit the error: OutOfMemoryError: unable to create new native thread. You can learn more about this error and strategies to resolve it in this post: OutOfMemoryError: Unable to create new native threads.
This error is triggered by the JVM under following circumstances:
a. Thread Leak due to Buggy Code: Due to a bug in the code, an application can inadvertently create a lot of new threads, leading to a buildup of unused threads in memory, eventually exhausting the available native memory and resulting in OutOfMemoryError.
b. Lack of RAM capacity: When the container/device in which the application is running lacks sufficient RAM.
c. More processes in Memory: When other processes running on the container/device consume memory, less room is left for new threads to be created in native memory.
d. Kernel Limit: By default, the kernel sets a limit on the number of threads each process can create; the error occurs when the application creates more threads than this limit allows.
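The usual fix for case (a) is to reuse a bounded pool of worker threads instead of creating a new thread per task. A minimal sketch (pool size and timeout are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PooledWorker {

    // A fixed pool caps native thread (and thread-stack) consumption,
    // unlike calling 'new Thread(task).start()' inside a loop.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public void submitTasks(int count) {
        for (int i = 0; i < count; i++) {
            pool.submit(() -> { /* do work */ });
        }
    }

    public boolean shutdownAndWait() {
        pool.shutdown(); // stop accepting new tasks; workers exit when idle
        try {
            return pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

Here, 1,000 tasks run on just 4 native threads, and a proper shutdown releases them all.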
Learn about a real-world case study of thread leak from here: How a Major Financial Institution Resolved Middleware Outage.
3. Direct Buffer Memory Leak
Direct buffers (ByteBuffer.allocateDirect()) are allocated in off-heap memory. They aren’t managed by the JVM’s garbage collector in the same way as heap objects, and their cleanup relies on GC + finalize() or Cleaner mechanisms. This can lead to memory not being reclaimed fast enough — or never at all — causing: OutOfMemoryError: Direct buffer memory. This typically happens when:
a. Memory Leak due to Buggy code: If your application is not properly releasing direct buffers after use, they can accumulate over time and eventually exhaust the available direct buffer memory.
b. High Rate of Allocation: If your application is allocating direct buffers at a very high rate and not releasing them promptly, it can quickly consume the available memory.
c. Switching from Spring RestTemplate to WebClient: Spring Boot is a popular framework for Java enterprise applications. One common method of integrating with internal or external applications is through RestTemplate APIs. Modern versions of Spring advocate using the Java NIO-based WebClient for better performance. While the NIO-based WebClient delivers better performance, it shifts object creation from the heap memory region to the Direct Buffer region. Thus, when you make this shift, it can create memory pressure in the Direct Buffer region.
To learn more about this error and strategies to resolve it, please read OutOfMemoryError: Direct buffer memory.
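As a quick reference, `ByteBuffer.allocateDirect()` is what places a buffer off-heap; reusing such buffers across requests (rather than allocating fresh ones per request), together with capping the region via `-XX:MaxDirectMemorySize`, keeps direct memory bounded. A minimal sketch:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {

    // Allocates one off-heap buffer. Reusing a single buffer instead of
    // allocating per request keeps Direct Buffer consumption flat;
    // -XX:MaxDirectMemorySize caps the region as a safety net.
    public static ByteBuffer allocate(int bytes) {
        return ByteBuffer.allocateDirect(bytes);
    }
}
```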
Conclusion
We hope you will use the information we shared in this post about Java memory leaks in your organization and build memory-leak-free applications 😊
