SpringBoot is a widely used framework for building Java-based web applications and has a significant presence in enterprise software development. It powers large-scale microservices and standalone applications. Most SpringBoot applications embed a web server and follow a distributed architecture consisting of three main types of application components:
- Backend: API services
- Backend: Event processors
- Frontend: Web-MVC applications
SpringBoot Apps – Performance Issues
Modern SpringBoot applications can experience significant performance bottlenecks and potential failures stemming from the following four underlying categories:
- Memory issues
- Thread issues
- CPU spikes
- System issues
These performance issues can have a severe impact on various aspects of the service, including:
- SLA impacts resulting from service unavailability
- Cost impacts due to high resource usage
- Customer impacts in the form of application slow response times
- Operational impacts that may require increased support requirements
Performance Monitoring and Tuning
Application performance tuning mantra: ‘If you can’t measure it, you can’t improve it.’
The case study below demonstrates how to measure, monitor, and tune SpringBoot application performance. We will be using yCrash, a widely used tool that provides comprehensive performance analysis and monitoring for SpringBoot applications.
Here’s what we did to get started:
Step 1: Install yCrash
We installed the yCrash application using the documentation given below:

Step 2: Sample Application
Performance monitoring can be performed on any SpringBoot app. We used our own SpringBoot (Buggy) APIs to simulate the performance issues:
https://github.com/balad-tier1App/SpringBoot-Buggy-API.git

Once everything was ready, we started the performance monitoring and tuning process. Let's look in detail at what the yCrash toolset helped us identify and at possible solutions for those issues.
1. Memory Issues
Modern applications tend to have high memory usage, and SpringBoot applications are no exception, particularly with their embedded web servers. The commonly identified memory issues include:
- Metaspace/PermGen issues
- Memory leaks
Both of these issues can lead to significant application slowness, with out-of-memory errors affecting application uptime.
To debug memory issues in a SpringBoot application, enable GC logging and heap dump capture with these JVM arguments:
-Xloggc:gc.log
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=heapdump.hprof
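Note that -Xloggc applies to JDK 8 and earlier. On JDK 9 and later, GC logging moved to unified logging; a minimal equivalent, assuming a JDK 9+ runtime, is:
-Xlog:gc*:file=gc.log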
Connect the application to the yCrash service to identify and tune memory issues.
1.1 Metaspace (or PermGen) Memory Issues
SpringBoot uses a significant amount of additional space for its dependencies and embedded servers, which can increase Metaspace usage. Metaspace is where the JVM stores class metadata. If it is not optimized or configured correctly, it can lead to out-of-memory errors or low throughput due to continuous garbage collection.
Root Cause Analysis
Metaspace issues can be identified through GC analysis as follows:
- Successive Full GC operations, potentially impeding throughput and causing application slowdowns, as illustrated in Fig 3.
- Inadequate MetaSpace reclamation, as illustrated in Fig 4, leading to elevated memory usage or potential out-of-memory scenarios.


Solutions
Recommendation | Description |
---|---|
Remove unused dependencies | Most SpringBoot projects have unwanted dependencies included. Analyse dependencies (for example, using ‘mvn dependency:tree’) and remove unused dependencies to save Metaspace. |
Increase MaxMetaspace allocation | After performance testing and profiling, increase the maximum Metaspace size to match your application's requirements, for example, -XX:MaxMetaspaceSize=128m, to avoid out-of-memory or GC throughput issues. |
Avoid overusing reflection | Reflection in Java allows you to inspect and manipulate classes at runtime. However, heavy use of reflection can generate and load additional classes at runtime, increasing Metaspace usage and potentially leading to out-of-memory issues. |
1.2 Memory Leak Issues
A memory leak occurs when an application unintentionally retains references to objects it no longer needs. It is a common issue that can lead to high memory usage and application crashes.
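As an illustration, here is a minimal sketch of one of the most common leak patterns (the class and method names are hypothetical): objects accumulate in a static, application-scoped collection and are never removed, so the garbage collector can never reclaim them.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyRegistry {

    // A static collection lives as long as the class is loaded,
    // so everything added here is never garbage collected.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest() {
        // Simulates per-request state that is cached but never evicted.
        CACHE.add(new byte[1024 * 1024]); // 1 MB per call
    }

    public static void main(String[] args) {
        // Each call grows the heap until an OutOfMemoryError occurs.
        while (true) {
            handleRequest();
        }
    }
}
```

In a heap dump, a leak like this typically shows up as a single collection dominating the retained heap.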
Root Cause Analysis
Analysing the garbage collection (GC) and the heap can provide a strong indication of any memory leak problems and help identify incorrect memory settings for the GC.
In the figure below, yCrash presents GC-related concerns, encompassing potential out-of-memory challenges and issues with GC throughput.

Solutions
Recommendation | Description |
---|---|
Use the correct GC settings | It is essential to monitor a SpringBoot application's memory usage and performance and adjust its GC settings, including heap sizes and algorithms: minimum and maximum heap size (-Xms512m -Xmx1024m); parallel GC (-XX:+UseParallelGC); concurrent GC (-XX:+UseConcMarkSweepGC, deprecated since JDK 9 and removed in JDK 14, so prefer the default G1 collector on modern JDKs). |
Close resource connections | Always close resources such as streams, connections, and files once you are done using them. |
Avoid unwanted object creation | Don’t create an object if it’s unnecessary, especially in loops or methods that are frequently called. |
Use null | If a long-lived reference (such as a static field or a collection entry) no longer needs its object, set it to null so the object becomes eligible for garbage collection. |
Use soft, weak references | If an object is cacheable or recreatable, consider using SoftReference or WeakReference. Soft references are cleared by the garbage collector when memory runs low, and weak references are cleared once no strong references remain (see the sketch after this table). |
Memory-efficient data structures | Collections such as LinkedList or HashMap can cause memory leaks if misused, for example as unbounded caches or with mutable keys. Always choose the right data structure for your needs. |
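To make the soft-reference recommendation concrete, here is a minimal sketch of a cache whose values the JVM may reclaim under memory pressure (the SoftCache class is illustrative, not part of any library):

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SoftCache<K, V> {

    // Values are held via SoftReference, so the GC may clear them
    // when the heap runs low instead of throwing OutOfMemoryError.
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get(); // null if cleared by GC
    }
}
```

Note that entries whose referents have been cleared still occupy map slots; production caches typically pair this pattern with a ReferenceQueue or periodic cleanup, or use a purpose-built caching library.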
2. Thread Issues
The majority of SpringBoot applications are multi-threaded and require careful analysis. Below are the common issues identified with multi-threaded SpringBoot applications.
- Deadlocks
- Blocked Threads
Thread dump analysis is required to debug any thread-related issues in applications. To capture a thread dump, run:
jstack <pid>
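On newer JDKs, the jcmd utility offers an equivalent:
jcmd <pid> Thread.print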
Or use a visual root cause analysis tool such as yCrash, which comprehensively and automatically identifies and highlights issues.
2.1 Deadlocks
A thread deadlock in Java occurs when two or more threads are unable to proceed with their execution because each thread is waiting for a resource or a lock that is held by another thread.
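Here is a minimal sketch of the classic two-lock deadlock (the class and lock names are illustrative): each thread acquires one lock and then waits forever for the other.

```java
public class DeadlockDemo {

    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            synchronized (LOCK_A) {
                sleep(100); // give t2 time to grab LOCK_B
                synchronized (LOCK_B) { // t1 waits here forever
                    System.out.println("t1 acquired both locks");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (LOCK_B) {
                sleep(100); // give t1 time to grab LOCK_A
                synchronized (LOCK_A) { // t2 waits here forever
                    System.out.println("t2 acquired both locks");
                }
            }
        });
        t1.start();
        t2.start();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Acquiring locks in a single consistent order across all threads breaks the circular wait condition and prevents this deadlock.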
Root Cause Analysis
A deadlock can occur only when all four of the following conditions hold simultaneously:
- No pre-emption: Resources cannot be forcibly taken away from a thread; they can only be released voluntarily by the thread holding them.
- Circular wait: A cycle must exist in the resource allocation graph, where each thread in the cycle is waiting for a resource held by another thread in the cycle.
- Mutual exclusion: At least one resource must be held in a mutually exclusive mode.
- Hold and wait: A thread holds at least one resource and is waiting to acquire others held by different threads.

Solutions
Recommendation | Description |
---|---|
Timeouts | Implement timeouts when acquiring locks (see the sketch after this table). |
Concurrency utilities | Use the java.util.concurrent package, which provides mechanisms to manage thread synchronization more safely. |
Avoid nested locks | Minimize the use of nested locks, which can lead to deadlocks. |
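As a sketch of the timeout recommendation, assuming a ReentrantLock guards the shared state (the class and method names are hypothetical):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLocking {

    private final ReentrantLock lock = new ReentrantLock();

    public boolean updateSafely() throws InterruptedException {
        // Wait at most 2 seconds for the lock instead of blocking forever.
        if (lock.tryLock(2, TimeUnit.SECONDS)) {
            try {
                // ... critical section ...
                return true;
            } finally {
                lock.unlock();
            }
        }
        // Lock not acquired: back off, log, or retry rather than deadlock.
        return false;
    }
}
```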
2.2 Blocked Threads
Blocked threads are waiting for resources that have not become available, due to any of the following conditions:
- IO blocking: Waiting for I/O operations, such as database calls or other external resources.
- Locked: Waiting to acquire a lock or on a wait/notify condition.
- Sleeping: Waiting for a resource held by a sleeping thread.
Root Cause Analysis
The yCrash toolset displays blocked threads, providing comprehensive information, including stack traces and the details needed to optimize or resolve these blocking issues.

Solutions
Recommendation | Description |
---|---|
Timeouts and thread interruption | When dealing with potentially long-running operations, use timeouts to prevent threads from blocking indefinitely (see the sketch after this table). |
Asynchronous programming | In some cases, asynchronous programming techniques help avoid blocking issues. |
Resource management | Ensure that resources such as database connections, file handles, and network sockets are managed correctly. |
Thread pools | When dealing with thread creation and management, consider using thread pools provided by the Java Executor framework. |
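Combining the timeout and thread pool recommendations above, here is a minimal sketch using the Executor framework (the pool size and timeout values are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedCall {

    // A fixed-size pool bounds concurrency instead of creating
    // a new thread per request.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

    public static String callWithTimeout() throws Exception {
        Future<String> future = POOL.submit(() -> {
            // Potentially slow operation, e.g. a remote call.
            return "result";
        });
        try {
            // Fail fast instead of letting the caller block indefinitely.
            return future.get(3, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the worker thread
            throw e;
        }
    }
}
```

Bounding the pool size also prevents unbounded thread creation, which is a common cause of thread exhaustion under load.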
3. CPU Spikes
In addition to CPU spikes caused by memory and thread issues, CPU spikes in Java SpringBoot applications can occur for various reasons, including:
- Inefficient algorithms or code.
- Infinite loops or runaway recursion.
- Concurrency issues.
- Misconfigured thread pools.
Root Cause Analysis
Identifying any CPU spikes in applications requires real-time CPU usage monitoring and Java thread analysis.
When we ran the yCrash report, it detected a CPU spike in the SpringBoot application, reaching 94%.

The thread analysis shows a single thread consuming 40% of the CPU, indicating a potential issue.

Solutions
Recommendation | Description |
---|---|
Review unintended loops | Review your code for unintended infinite loops or inner loops, and replace busy waiting with proper synchronization mechanisms such as wait/notify (see the sketch after this table) or use Thread.sleep() with reasonable intervals. |
Profile application and refactor | Profile your application using profiling tools to identify performance bottlenecks. Once identified, refactor or optimize the code to improve efficiency. |
Optimise external resource calls | Calls to external services or resources that experience delays or failures can lead to CPU spikes. Use caching, timeouts, and retries where applicable. |
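As a concrete instance of the first recommendation, here is a minimal sketch that replaces a CPU-burning busy wait with wait/notify (the class and flag names are hypothetical):

```java
public class ReadySignal {

    private final Object monitor = new Object();
    private boolean ready = false;

    // A busy-waiting alternative would spin: while (!ready) {} -- burning CPU.
    public void awaitReady() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {
                monitor.wait(); // releases the monitor and parks the thread
            }
        }
    }

    public void markReady() {
        synchronized (monitor) {
            ready = true;
            monitor.notifyAll(); // wakes any threads parked in awaitReady()
        }
    }
}
```

A spinning loop keeps a CPU core at 100%; a thread parked in wait() consumes no CPU until notifyAll() wakes it.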
4. System Issues
SpringBoot applications rely on the system and the container they run on, and any bottleneck in system resources has a considerable impact on the application. The system-level areas that need monitoring and resource planning include:
- System Process/Memory usage
- Disk space
- Network bottlenecks
- Continuous Log Monitoring
4.1 System Monitoring
Comprehensively monitoring underlying system issues requires a 360-degree view of the system.

Root Cause Analysis
The analyzer highlights the system bottlenecks impacting the SpringBoot service as outlined below.

Solutions
Recommendation | Description |
---|---|
Hardware/Resources Planning | Evaluate the hardware and server configuration to ensure it meets the application’s requirements. Consider scaling horizontally (adding more servers) or vertically (increasing CPU and memory resources) as needed. |
Disk usage/archive policy | Monitor disks using thresholds as part of a maintenance policy that includes both clean-up and backup procedures. |
Network connection monitoring | Monitor network connections and raise alerts when traffic fails over to alternate links; otherwise, the SpringBoot application may degrade significantly due to connection retries. |
Log monitoring | Continuously monitor logs for exceptions caused by system resource constraints. |
Conclusion
In this article, we reviewed common performance issues in SpringBoot applications, used yCrash to identify their root causes, and discussed possible solutions for each type of performance issue.