Best Practices for Writing Memory-Efficient Java Code

Imagine your JVM heap as a busy restaurant kitchen at dinner time. Orders keep coming in, cooks are plating food constantly, and counter space is scarce. If the cooks start preparing dishes nobody ordered, or nobody clears the finished plates, the kitchen gets crowded. At some point every part of the operation slows down, and eventually it may stop working altogether. This is exactly what happens with Java memory.

Writing memory-efficient Java code is not only about avoiding an OutOfMemoryError. It also determines how often the JVM requests memory from the operating system, how often the garbage collector runs, how long its pauses last, how much throughput your application can sustain, and how much the system costs to run.

Every time you create an object, it is like putting another plate on the counter. Every time you retain a reference to an object, it is like leaving a plate on the table. If you keep creating objects without pause, or hold on to them for too long, the heap gets crowded until the system slows down or stops working.

In this article we will go over seven best practices for writing memory-efficient Java code, with examples to help you build systems that handle heavy load, process large volumes of data, or run for long periods of time. Following them will help you keep your JVM heap stable and your garbage collector under control.

Understanding Java Memory Regions

Before you start writing memory-efficient Java code, you need to know where the JVM stores memory. Managing memory is not just about using less of it. It’s about controlling how often you create objects, how long they live, and where they are kept.

Fig: JVM Memory Region

Heap

All objects created using the new keyword are allocated in the Heap, which is managed by the garbage collector. The heap is typically divided into the Young Generation, where all objects are created, and the Old Generation, where long-lived objects are promoted. High allocation rates increase minor GC frequency, while excessive object retention pushes more objects into the Old Generation, resulting in longer and more expensive major GC pauses.
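To make the generational behavior concrete, here is an illustrative sketch (the class and method names are ours, not a standard API). The loop churns out short-lived strings that die young, while a small fraction is retained and may eventually be promoted:

```java
import java.util.ArrayList;
import java.util.List;

public class GenerationDemo {
    // Simulate a workload: many short-lived objects, a few retained survivors.
    static List<String> retainEveryThousandth(int total) {
        List<String> retained = new ArrayList<>(); // long-lived: candidates for promotion
        for (int i = 0; i < total; i++) {
            String temp = "order-" + i;            // short-lived: dies young in a cheap minor GC
            if (i % 1000 == 0) {
                retained.add(temp);                // the few objects we keep alive
            }
        }
        return retained;
    }

    public static void main(String[] args) {
        // 100,000 allocations, only 100 survivors held beyond the loop.
        System.out.println(retainEveryThousandth(100_000).size());
    }
}
```

Run it with GC logging enabled (for example -Xlog:gc on JDK 9+) to watch the minor GC count rise with the allocation rate.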

Stack

Each thread has its own Stack that stores method call frames. These frames hold, among other things, local variables. If the variables are Java primitives, the variable itself is stored here. If they are objects, the frame holds a reference to the object. The actual objects referenced reside in the Heap. Stack memory is automatically reclaimed when a method completes, making it fast and efficient, with no garbage collection overhead.

Native Memory

Beyond the heap, the JVM also uses Native Memory. This includes Metaspace (class metadata), thread stacks, direct buffers, the code cache, and JNI allocations. Native memory is not managed by the garbage collector in the same way as heap memory, and uncontrolled growth here can lead to failures such as OutOfMemoryError: Metaspace or OutOfMemoryError: Direct buffer memory.

Understanding these regions makes memory-efficient Java code practical. Once you know where objects are allocated, how long they live, and how they are reclaimed, memory behavior becomes predictable.

Now, think of the JVM as a busy restaurant kitchen.

  • The Heap is like the area where food is cooked.
    • The Young Generation is like a counter where they quickly store food that people will eat immediately. 
    • The Old Generation is like a storage room where supplies that stay in use for a long time are kept.
  • The Stack is like the area where the chef is working. It is temporary. They clean it up automatically.
  • Native Memory is like equipment that’s outside the kitchen. The kitchen staff does not take care of it but if it gets too full it can shut down the whole kitchen.

If you make too much food too fast, the kitchen staff has to clean up more often. If you leave food out for too long, it takes the staff longer to clean up and it costs more. If you use up all the storage space the whole kitchen comes to a stop. 

That is how JVM memory problems start.

Best Practice #1 – Avoid Unnecessary Object Creation

Don’t Cook Dishes Nobody Ordered

In the kitchen analogy, every object you create is a dish placed on the counter. Unnecessary dishes crowd the workspace and force the cleaning staff to work harder. In the JVM, that cleaning staff is the garbage collector.

Object creation in Java is heavily optimized, but it is not free. Every allocation increases pressure on the Young Generation. A high allocation rate leads to frequent minor GC cycles, higher CPU usage, and latency variability under load.

Consider the following example:

public String formatUser(String name) {
    return new String("User: " + name);
}

Here, new String() creates an unnecessary additional object. The JVM already handles string concatenation efficiently. A better approach would be:

public String formatUser(String name) {
    return "User: " + name;
}

When working inside loops, repeated string concatenation creates multiple intermediate objects. Using StringBuilder reduces temporary allocations:

StringBuilder sb = new StringBuilder();
for (String name : names) {
    sb.append("User: ").append(name).append("\n");
}

Reducing unnecessary allocations lowers GC frequency and stabilizes application performance under load.

Best Practice #2 – Choose the Right Data Structures

Use Stackable Trays, Not Individual Plates

In the kitchen, how you stack your plates matters. Stacking them on trays takes far less space than scattering individual plates across the counters, and the clutter of loose plates slows down service.

The same principle applies in the JVM: different data structures have different memory footprints. Choosing the right structure (or the wrong one) can make a huge difference in object count, per-element overhead, and the amount of work the garbage collector has to do.

Here’s a simple example. A LinkedList allocates a separate node object for every element, and each node carries references to its previous and next neighbors:

List<String> list = new LinkedList<>();

In contrast, ArrayList uses a contiguous array, which improves memory density and cache locality:

List<String> list = new ArrayList<>();

If you already know the expected size of the collection, pre-sizing it avoids repeated resizing and array copying:

List<String> list = new ArrayList<>(1000);

Since many collection types grow by 50 to 100 percent each time they are resized (ArrayList grows by half, HashMap doubles), resizing can waste a significant amount of space and copying work.
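The same pre-sizing idea applies to maps. Here is a sketch (the class and helper are our own, assuming the default 0.75 load factor) that sizes a HashMap so it never resizes while being filled:

```java
import java.util.HashMap;
import java.util.Map;

public class PreSizing {
    // Build a map for a known number of entries without intermediate resizes.
    static Map<Integer, String> buildPreSized(int expected) {
        // HashMap resizes when size exceeds capacity * loadFactor (0.75 by default),
        // so divide the expected size by the load factor to pick the capacity.
        int capacity = (int) (expected / 0.75f) + 1;
        Map<Integer, String> map = new HashMap<>(capacity);
        for (int i = 0; i < expected; i++) {
            map.put(i, "value-" + i);
        }
        return map;
    }

    public static void main(String[] args) {
        System.out.println(buildPreSized(1000).size());
    }
}
```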

Selecting the correct structure reduces the object count, improves CPU cache utilization, and prevents unnecessary heap growth.

Best Practice #3 – Prefer Primitives Over Wrapper Classes

Avoid Over-Packaging Small Ingredients

Imagine a kitchen where the staff use an individual box to store each small spice jar. That does not seem like a good idea, does it? The same thing happens in Java when you use wrapper objects where primitives would do.

Wrapper classes, such as Integer, Long and Double, are full objects. Each one holds a value plus an object header. In most JVMs the header occupies 12 bytes, whereas the int value inside an Integer occupies only 4. All that overhead adds up fast.

For example:

List<Integer> numbers = new ArrayList<>();
numbers.add(10);

Each Integer instance consumes far more memory than a primitive int. When working with large collections of numeric data, primitives are more memory-efficient:

int[] numbers = new int[1000];

While collections require wrapper types, performance-critical sections of code should prefer primitive arrays or specialized primitive collections. In large-scale systems, the difference between primitive and wrapper usage can reduce the heap size by hundreds of megabytes.
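As a rough sketch of the difference (class and method names are ours), here is the same computation over both representations. The int[] stores 4 bytes per value, while the List<Integer> stores a reference per slot plus a separate Integer object per element:

```java
import java.util.List;

public class PrimitiveVsWrapper {
    static long sumPrimitives(int[] values) {
        long sum = 0;
        for (int v : values) sum += v; // no boxing, no extra objects
        return sum;
    }

    static long sumWrappers(List<Integer> values) {
        long sum = 0;
        for (Integer v : values) sum += v; // each element is unboxed on access
        return sum;
    }

    public static void main(String[] args) {
        int[] primitives = {1, 2, 3, 4, 5};
        List<Integer> wrappers = List.of(1, 2, 3, 4, 5); // each element boxed to an Integer
        System.out.println(sumPrimitives(primitives) == sumWrappers(wrappers));
    }
}
```

The results are identical; only the heap footprint differs, and the gap grows with collection size.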

Best Practice #4 – Avoid Memory Leaks by Nullifying References

Clear the Tables After Guests Leave

Imagine that in the kitchen, the plates aren’t cleared when the guests leave. Space slowly fills up, and then boom… there’s no room to operate.

The same thing happens in Java. When objects are no longer needed but are still referenced, memory leaks happen. The garbage collector can only reclaim objects that are unreachable. If references remain, the heap keeps growing.

Consider this example:

public class Cache {
    private static List<Object> cache = new ArrayList<>();

    public static void add(Object obj) {
        cache.add(obj);
    }
}

If objects are continuously added and never removed, heap usage grows indefinitely. In such cases, bounded caches or eviction strategies are necessary. For example:

Map<String, Object> cache = new LinkedHashMap<String, Object>(100, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
        return size() > 100;
    }
};

Nullifying references in local scopes can also make objects eligible for garbage collection sooner, when appropriate. Preventing unintended retention ensures long-term heap stability.
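Here is a hypothetical sketch of that local-scope pattern (loadReport, summarize, and doSlowUnrelatedWork are made-up placeholders). Clearing the reference once the large buffer has served its purpose lets the GC reclaim it even though the method keeps running:

```java
public class ReleaseEarly {
    static byte[] loadReport()        { return new byte[1024 * 1024]; } // large allocation
    static int summarize(byte[] data) { return data.length; }
    static void doSlowUnrelatedWork() { /* stands in for long-running work */ }

    static int process() {
        byte[] report = loadReport();   // 1 MB sits on the heap
        int summary = summarize(report);
        report = null;                  // the buffer is now eligible for GC...
        doSlowUnrelatedWork();          // ...even while the method keeps running
        return summary;
    }

    public static void main(String[] args) {
        System.out.println(process());
    }
}
```

In short methods the JIT usually tracks liveness on its own; explicit nulling mainly pays off in long-lived scopes, such as loops that run for minutes or fields of long-lived objects.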

Best Practice #5 – Use Lazy Initialization

Don’t Pre-Cook Meals No One Ordered

The statement says it all. Do not prepare dishes before customers arrive, because it is a waste of space and ingredients. The same principle applies to object creation in Java.

Eager initialization allocates memory upfront, even if the object is never used. In large systems with optional components, this increases baseline heap usage and startup time.

Eager initialization:

private List<String> heavyList = loadData();

Lazy initialization:

private List<String> heavyList;

public List<String> getHeavyList() {
    if (heavyList == null) {
        heavyList = loadData();
    }
    return heavyList;
}

This approach ensures that expensive objects are created only when required. In large systems with multiple optional components, lazy loading significantly reduces baseline heap consumption.
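One caveat: a plain null check like the one above is not thread-safe. If several threads may call the getter, the initialization-on-demand holder idiom gives lazy, thread-safe initialization for free; this sketch uses a made-up Config class and a stand-in loadData():

```java
import java.util.List;

public class Config {
    // The nested class is loaded, and HEAVY initialized, only on first access.
    // Class loading guarantees this happens exactly once, with no explicit locking.
    private static class Holder {
        static final List<String> HEAVY = loadData();
    }

    static List<String> loadData() {
        return List.of("a", "b", "c"); // stands in for an expensive load
    }

    public static List<String> getHeavyList() {
        return Holder.HEAVY;
    }
}
```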

Best Practice #6 – Reuse Objects with Object Pooling

Reuse Industrial Equipment; Not Disposable Cups

Let me explain what I mean. In a kitchen, you reuse ovens and heavy equipment, not disposable cups or napkins. The same logic applies to object pooling in Java.

Resource-hungry objects, such as database connections, threads and network connections are like the heavy equipment in the kitchen. We re-use them, rather than getting a new one every time.

A classic use case is database connection pooling. The snippet below maintains a pool that creates a fixed number of connections up front, hands them out to other parts of the application on request, and takes them back when they are no longer needed so they can be reused.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ConnectionPool {
    private String jdbcUrl, username, password; // connection settings, assigned elsewhere
    private int poolSize;                       // fixed number of pooled connections
    private BlockingQueue<Connection> pool;

    private void initiate() throws SQLException {
        pool = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++) {
            pool.add(createConnection());
        }
    }

    private Connection createConnection() throws SQLException {
        return DriverManager.getConnection(jdbcUrl, username, password);
    }

    public Connection getConnection() throws InterruptedException {
        return pool.take();
    }

    public void releaseConnection(Connection connection) {
        if (connection != null) {
            pool.offer(connection);
        }
    }
}

Connections are expensive to create, so pooling improves both performance and memory stability. However, pooling simple objects like String or small DTOs usually provides no benefit and increases complexity.

The key is understanding allocation cost before introducing pooling.
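To show the pattern without a database, here is a minimal generic pool built on the same BlockingQueue idea (SimplePool and its method names are our own). The try/finally block guarantees the instance goes back to the pool:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class SimplePool<T> {
    private final BlockingQueue<T> pool;

    // Fill the pool up front with a fixed number of reusable instances.
    public SimplePool(int size, Supplier<T> factory) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(factory.get());
        }
    }

    public T take() throws InterruptedException { return pool.take(); }
    public void release(T item)                 { pool.offer(item); }

    public static void main(String[] args) throws InterruptedException {
        SimplePool<StringBuilder> buffers = new SimplePool<>(2, StringBuilder::new);
        StringBuilder sb = buffers.take();
        try {
            sb.setLength(0);         // reset state left over from the previous user
            sb.append("pooled");
        } finally {
            buffers.release(sb);     // always return the instance, even on failure
        }
        System.out.println(sb);
    }
}
```

Resetting pooled objects before reuse is essential; stale state is the classic pooling bug.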

Best Practice #7 – Profile and Monitor Memory Usage

Install Cameras in the Kitchen

Whether you’re running a kitchen or a Java application, you need constant monitoring. In the kitchen, the cameras you installed will tell you why service slowed down, because you can review orders, track inventory, and watch the workflow. The same principle applies to Java memory.

Memory-efficient Java code is not based on intuition; it is based on data. You will need to continuously monitor the allocation rate, heap usage trends, GC frequency and pause times, and object retention patterns.

You can start by enabling JVM options that write GC logs and dump the heap on OutOfMemoryError:

-Xlog:gc*:file=gc.log
-XX:+HeapDumpOnOutOfMemoryError

(The -Xlog:gc* form of GC logging applies to JDK 9 and later; on JDK 8, use -XX:+PrintGCDetails -Xloggc:gc.log instead.)

Heap dumps and GC logs reveal important information, such as allocation rate, retained size, and pause behavior.
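For a quick in-process view, the standard Runtime API reports heap figures without any external tooling; the exact numbers vary by JVM and by moment:

```java
public class HeapSnapshot {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory(); // bytes currently in use
        long max  = rt.maxMemory();                     // the -Xmx ceiling
        System.out.printf("used=%d MB, max=%d MB%n",
                used / (1024 * 1024), max / (1024 * 1024));
    }
}
```

This is no substitute for GC logs and heap dumps, but it is handy for sanity checks and lightweight health endpoints.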

Tools such as HeapHero, GCeasy, and yCrash help analyze memory behavior and detect leaks or GC inefficiencies in production environments.

Optimization should always follow data, not intuition.

Memory Diagnostics Tools Worth Knowing

You can write memory-efficient Java code, but without the right tools, you’re still guessing. Memory behavior becomes clear only when you observe allocation trends, retained objects, and GC activity in real time or through post-mortem analysis.

Here are some tools worth knowing:

  • HeapHero: Automatically analyzes heap dumps and highlights memory leak suspects, large object graphs, and retention paths without manual deep-dive effort.
  • GCeasy: Converts verbose GC logs into easy-to-understand reports with insights on pause times, allocation rate, and GC tuning issues.
  • yCrash: Correlates heap dumps, GC logs, thread dumps, and system metrics to accelerate root cause analysis in production environments.
  • VisualVM: Monitors heap usage and threads, and allows heap dump capture during development. It’s a free tool that was bundled with the JDK (as jvisualvm) through JDK 8, and is now distributed separately on GitHub.
  • JConsole: A lightweight JDK tool that provides a quick overview of heap usage, GC activity, and thread metrics.

Together, these tools move memory optimization from reactive debugging to proactive engineering.

Conclusion

Writing memory-efficient Java code is not about making every single line of code perfect. It is about being careful with how you create new objects, how long they stay in memory and what you do with them.

Think of it like a kitchen. The kitchen does not get messy because of one plate. It gets messy when you make many dishes that you do not need, when you leave the dirty plates out and when nobody is paying attention to what is going on. The same thing happens inside the Java system.

When you create objects without restraint, the system has to work harder to clean them up. When you choose the wrong data structure, you use more memory than necessary. Layers of wrapper objects add overhead you barely notice. Eager initialization reserves memory that may never be used. And creating resource-hungry objects anew each time, instead of pooling and reusing them, wastes both time and space. Slowly the memory fills up and causes problems.

Each of these seems like a small decision on its own. Added together, they make the difference between Java code that runs smoothly and code that crashes during peak hours.

Writing memory-efficient code is not something you do after you have a problem. It is something you plan for from the start and keep an eye on all the time. Keep these three things in mind as you write code:

  1. You should create objects only when they’re needed, and release them when they’ve finished their job.
  2. You should keep the number of objects under control.
  3. You should measure how your code is doing all the time.

That is how you build Java systems that are stable and perform well.
