Memory analysis is usually conducted only when an application suffers from serious problems such as OutOfMemoryError. In this post, let's discuss how memory gets wasted, why this wastage goes unnoticed, how you can proactively identify it, and what the benefits of fixing it are.
How Is Memory Wasted?
In enterprises, memory is wasted in 3 different forms:
1. Over Allocation of RAM
2. Over Allocation of JVM Memory
3. Inefficient Coding Practices
Let’s review them in detail in this section.
1. Over Allocation of RAM
This is one of the most common forms of memory wastage. Often we run on a device/container that has 64GB of RAM, while our application actually uses only 20GB. This over-allocation is quite common in several enterprises.
How to Identify the Over Allocation of RAM? You can use operating system performance monitoring tools such as 'top'. When you issue the 'top' command, it reports various metrics such as load average and CPU times. However, you want to focus on these two metrics:
a. Overall RAM capacity of the device
b. Actual memory consumption by all the processes running on the device.
If there is a large gap between #a and #b (i.e., overall RAM capacity and actual memory consumption), it's a clear indication that memory is overallocated.
Note 1: You might want to monitor metric #b over a 24-hour period on a weekday, so that the device sees both high and low traffic volume. If #b is consistently lower than #a, it's a clear indication that RAM capacity is overallocated.
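If you prefer to sample these two metrics from within a JVM process rather than from 'top', the HotSpot-specific OperatingSystemMXBean exposes them. Here is a minimal sketch; it assumes a HotSpot-based JDK 14 or later (older JDKs expose the same values under getTotalPhysicalMemorySize()/getFreePhysicalMemorySize()):

```java
import java.lang.management.ManagementFactory;

public class RamUsageCheck {
    // com.sun.management.OperatingSystemMXBean is a HotSpot extension;
    // it is available on OpenJDK and Oracle JDK builds.
    static com.sun.management.OperatingSystemMXBean os() {
        return (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
    }

    public static void main(String[] args) {
        long totalBytes = os().getTotalMemorySize(); // metric #a: overall RAM capacity
        long freeBytes  = os().getFreeMemorySize();  // unused RAM on the device
        long usedBytes  = totalBytes - freeBytes;    // metric #b: actual consumption
        System.out.printf("RAM total: %d MB, used: %d MB (%.0f%% utilized)%n",
                totalBytes >> 20, usedBytes >> 20, 100.0 * usedBytes / totalBytes);
    }
}
```

Running this periodically over the 24-hour window mentioned above gives you the #a vs. #b comparison without leaving the JVM.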
Note 2: You can also use tools like yCrash, which will capture the above-mentioned system-level metrics and report any overallocation, as shown in the figure below.

Fig: RAM Over Allocation pointed by yCrash tool
2. Over Allocation of JVM Memory
You might have allocated your Java application's memory as 10GB (i.e., -Xmx), while the actual demand is only 2GB. Just because you have overallocated your JVM's memory, your application will start to consume more of it; that doesn't mean your application needs all 10GB. Think of it like this: you give your teenage son $100 to spend when his actual need is only $20. Just because you gave him $100, he will end up spending all of it.
How to Identify the Over Allocation of JVM Memory? The best strategy to identify overallocation of JVM memory is to study the garbage collection behavior of the application, which is best done by analyzing GC logs. You may follow the steps given in this post to study the GC behavior of your application.
Here is a GC analysis report of an application whose memory is overallocated. In this application, the JVM's heap size is set to 28GB, whereas the heap size after GC events never exceeds 7GB. If your application doesn't have any GC problems (such as poor GC throughput, consecutive full GC pauses, or long GC pauses) and the heap size after GC events is considerably lower than the allocated heap size, then you can definitely lower the heap size. GC log analysis tools such as the yCrash GCeasy tool automatically detect such overallocation and report it to you as a warning.
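The same "heap size after GC vs. allocated heap size" comparison can be spot-checked from inside the JVM with the standard MemoryMXBean. This is a rough sketch, not a substitute for full GC log analysis; the 25% threshold below is an illustrative assumption, not a yCrash rule:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapHeadroomCheck {
    // Returns heap usage after requesting a GC, so 'used' approximates live data.
    static MemoryUsage heapAfterGc() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        memory.gc(); // like System.gc(): a hint to collect, not a guarantee
        return memory.getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage heap = heapAfterGc();
        long usedMb = heap.getUsed() >> 20;
        long maxMb  = heap.getMax() >> 20; // roughly the -Xmx value; -1 if undefined
        System.out.println("Heap after GC: " + usedMb + " MB used of " + maxMb + " MB max");
        if (maxMb > 0 && usedMb * 4 < maxMb) {
            System.out.println("Live data is well under 25% of max; heap may be overallocated.");
        }
    }
}
```

A one-off reading like this is only meaningful if taken during peak traffic; GC logs remain the authoritative source because they cover the whole day.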

Fig: JVM Memory Over Allocation pointed by yCrash GCeasy tool
3. Inefficient Coding Practices
Even when you rightly size your RAM and JVM heap, your application can still waste memory due to inefficient coding practices. These inefficiencies are often unintentional but have a significant impact, leading to excessive object creation, heap bloat, and increased GC activity. Here are a few examples of inefficient coding practices.
- a) Autoboxing: Autoboxing occurs when primitive types like int, long, or double are automatically converted to their wrapper objects (Integer, Long, Double) by the compiler. While this might seem harmless, it leads to extra object creation and heap memory usage. For example, using List<Integer> instead of int[] can create thousands of unnecessary Integer objects, especially in tight loops or large collections, impacting both memory and CPU due to increased GC overhead.
- b) Duplicate Objects: Applications often create multiple instances of the same object even when one shared instance would suffice. For example, repeatedly constructing identical configuration objects, date formatters, or enums can lead to object duplication across the heap. These duplicates increase memory usage unnecessarily and are often the result of missing caching, poor object reuse strategies, or unintentional instantiations in hot paths.
- c) Duplicate Strings: Duplicate strings are a silent memory killer, especially in web applications that process repetitive inputs like usernames, URLs, or request headers. Java stores each String object separately unless explicitly interned. If your application reads repeated values from external sources (logs, databases, requests) without interning or caching them, it leads to thousands of identical string instances, bloating the heap and reducing GC efficiency.
- d) Inefficient Collections: Using collection classes like ArrayList, HashMap, or HashSet without specifying an initial capacity causes them to resize dynamically as elements are added. This resizing involves creating new internal arrays and copying elements, which consumes additional memory and CPU cycles. Over-sized collections or using the wrong data structure (e.g., LinkedList where ArrayList would suffice) can further compound memory inefficiencies.
- e) Object Headers: Every object in Java carries a memory overhead in the form of an object header, typically 12 or 16 bytes depending on JVM architecture. When an application creates millions of small objects (e.g., wrappers for primitives, tiny beans, or tuples), the header overhead becomes significant. In some cases, object headers alone can account for a large portion of heap usage, making it essential to consolidate objects where possible or use memory-efficient data structures.
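The autoboxing point (a) above can be sketched as follows. Both methods compute the same sum, but the boxed version allocates up to a million Integer objects (each with its own object header, tying back to point (e)), while the primitive version allocates a single int[]:

```java
import java.util.ArrayList;
import java.util.List;

public class AutoboxingDemo {
    // Boxed version: each add() autoboxes an int into an Integer heap object.
    static long sumBoxed(int n) {
        List<Integer> values = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            values.add(i); // autoboxing: int -> Integer
        }
        long sum = 0;
        for (Integer v : values) {
            sum += v; // unboxing: Integer -> int
        }
        return sum;
    }

    // Primitive version: one int[], no per-element object headers.
    static long sumPrimitive(int n) {
        int[] values = new int[n];
        for (int i = 0; i < n; i++) {
            values[i] = i;
        }
        long sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumBoxed(1_000_000));     // prints 499999500000
        System.out.println(sumPrimitive(1_000_000)); // prints 499999500000
    }
}
```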
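The duplicate-strings point (c) can be demonstrated in a few lines. Each new String(...) below allocates a distinct heap object even though the contents are identical; intern() collapses them to one canonical copy:

```java
public class StringDedupDemo {
    public static void main(String[] args) {
        // Simulate reading the same header value from two different requests.
        String first = new String("application/json");
        String second = new String("application/json");
        System.out.println(first == second);                   // false: two copies on the heap
        System.out.println(first.intern() == second.intern()); // true: one canonical copy
        // Note: intern() has its own costs. For high-volume services, an
        // application-level cache or the G1 flag -XX:+UseStringDeduplication
        // is often a better fit than interning everything.
    }
}
```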
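And the inefficient-collections point (d) comes down to giving the collection a capacity hint when the size is known. The presizedMap helper below is a hypothetical name for illustration; the 0.75 divisor is HashMap's default load factor:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectionSizingDemo {
    // Sizes a HashMap so that adding 'expected' entries never triggers a rehash:
    // HashMap resizes once size exceeds capacity * loadFactor (default 0.75).
    static <K, V> Map<K, V> presizedMap(int expected) {
        return new HashMap<>((int) Math.ceil(expected / 0.75));
    }

    public static void main(String[] args) {
        int n = 10_000;
        // Without a hint, this list starts small and re-copies its backing
        // array repeatedly as it grows, churning garbage along the way.
        List<Integer> resized = new ArrayList<>();
        // With a hint, the backing array is allocated once.
        List<Integer> presized = new ArrayList<>(n);
        Map<String, Integer> map = presizedMap(n);
        for (int i = 0; i < n; i++) {
            resized.add(i);
            presized.add(i);
            map.put("key-" + i, i);
        }
        System.out.println(resized.size() + " " + presized.size() + " " + map.size());
    }
}
```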
You need to use JVM memory profilers (VisualVM, JProfiler, YourKit) to isolate such inefficient code in your application and fix it. However, fixing such coding inefficiencies is an intrusive change that requires refactoring critical logic, so it should be done with proper planning and thorough testing.
While addressing 'Inefficient Coding Practices' is tricky and intrusive, the first two strategies, fixing 'Over Allocation of RAM' and 'Over Allocation of JVM Memory', are non-intrusive, low-risk approaches. So you should get started with them first.
How to catch such Memory Drains?
You need to use the yCrash tool, which swiftly identifies all of the above memory drains. You can follow the instructions given here to run the yCrash tool.
Note: It's recommended to capture the yCrash metrics in your production environment, since memory consumption is heavily dependent on traffic volume and a performance lab environment hardly mimics production behavior. Capturing yCrash metrics adds almost zero overhead.
Benefits of Proactive Memory Analysis
In this section, we will highlight the key benefits of stopping the above-mentioned memory wastage.
1. Lower Cloud or Data Center Costs
In the 1970s, 1 byte of memory cost USD $1. A photograph taken on a modern cell phone occupies about 1MB of space. That means loading a single modern-day photograph into memory in the 1970s would have cost USD $1 million. Yes, memory was that expensive back in the day. By those standards the cost of memory has come down dramatically; however, I would still argue that memory is not cheap. There are 4 primary computing resources:
a. CPU
b. Memory
c. Network
d. Storage
Among these resources, memory is the bottleneck for most enterprise applications: before CPU, network, or storage gets saturated, memory saturates first. In other words, while the other 3 resources are under-utilized or only partially utilized, we end up commissioning more and more servers just because memory is saturated. Thus an increase in memory consumption directly translates to an increase in computing cost.
2. Faster Response Times Due To Reduced GC Pauses
When you allocate less memory to your application (or optimize your code to be memory efficient), your application creates fewer objects. With fewer objects, garbage collection time automatically reduces, and when garbage collection pause times go down, the application's overall response time improves.
3. Improved Application Availability
By profiling and fixing inefficient memory-consuming code, you reduce the risk of your application running into serious memory problems such as long GC pauses, consecutive GCs, and OutOfMemoryError. These changes will ultimately improve the application's availability.
Conclusion
Memory waste isn't always obvious, but it's almost always present. Overallocated RAM, oversized JVM heaps, and inefficient coding practices quietly erode performance and inflate costs. However, with minimal effort you can address these inefficiencies. Without making any intrusive changes, you can improve your application's responsiveness and lower your infrastructure bills. Start with the simple wins, measure the impact, and build from there. Wish you the best!

Share your Thoughts!