# How to Fix Java OutOfMemoryError: Java Heap Space

Your Java application crashes with this memory error:

```text
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.base/java.util.Arrays.copyOf(Arrays.java:3512)
    at java.base/java.util.Arrays.copyOf(Arrays.java:3481)
    at java.base/java.util.ArrayList.grow(ArrayList.java:237)
    at java.base/java.util.ArrayList.add(ArrayList.java:455)
    at com.myapp.DataProcessor.loadData(DataProcessor.java:42)
    at com.myapp.Main.main(Main.java:15)
```

This error means your application has exhausted the allocated heap memory. The JVM cannot allocate more objects because the heap is full and garbage collection cannot free enough space.

## Understanding Heap Memory

The Java heap is divided into generations:

- **Young Generation**: new objects are allocated here
- **Old Generation**: long-lived objects that survive several collections are promoted here
- **Metaspace**: class metadata (separate from the heap)

When `OutOfMemoryError: Java heap space` occurs, the heap (typically the Old Generation) is full and even a full GC cannot reclaim enough space to satisfy the allocation.
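You can inspect how the running JVM divides its heap using the standard `java.lang.management` API. A minimal sketch (pool names such as "G1 Eden Space" depend on the active collector):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class HeapInspector {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // getMax() may return -1 if no explicit maximum is defined
        System.out.printf("heap used=%d MB, max=%d MB%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));

        // Per-pool breakdown; pool names vary by collector
        // (e.g. "G1 Eden Space", "G1 Old Gen" under G1GC)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-25s type=%s used=%d KB%n",
                    pool.getName(), pool.getType(), pool.getUsage().getUsed() / 1024);
        }
    }
}
```

Watching the Old Generation pool grow toward its maximum while full GCs run is the classic signature of this error.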

## Diagnosis Steps

### Step 1: Check Current Memory Settings

```bash
# Check JVM flags for a running process
jcmd <pid> VM.flags

# Or using jinfo
jinfo -flags <pid>
```

The output shows the current heap settings in bytes (here, 256 MB initial and 4 GB maximum):

```text
-XX:InitialHeapSize=268435456 -XX:MaxHeapSize=4294967296
```

### Step 2: Enable GC Logging

Before the next crash, enable detailed GC logging. On Java 9 and later, use the unified logging syntax:

```bash
java -Xlog:gc*:file=gc.log:time,uptime,level,tags -Xmx4g -jar myapp.jar
```

For Java 8:

```bash
java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log -Xmx4g -jar myapp.jar
```

Analyze the GC log:

```bash
# Look for full GC cycles near the crash
# (Java 9+ unified logs label these "Pause Full" instead of "Full GC")
grep "Full GC" gc.log | tail -20
```

### Step 3: Capture Heap Dump on OOM

Configure the JVM to capture a heap dump when OOM occurs:

```bash
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof -Xmx4g -jar myapp.jar
```

When the error occurs, you'll see:

```text
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/heapdump.hprof ...
Heap dump file created [1823456789 bytes in 12.345 secs]
```
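A dump can also be triggered programmatically through the HotSpot-specific `HotSpotDiagnosticMXBean`. A sketch; note that this API lives in `com.sun.management` and is not part of the Java SE standard:

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagnostics =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Unique path per process; dumpHeap refuses to overwrite an existing file
        String path = "/tmp/manual-dump-" + ProcessHandle.current().pid() + ".hprof";
        new File(path).delete();
        // live=true dumps only reachable objects (forces a full GC first)
        diagnostics.dumpHeap(path, true);
        System.out.println("dump written to " + path);
    }
}
```

This is handy for dumping on demand from an admin endpoint instead of waiting for the OOM to occur.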

### Step 4: Analyze the Heap Dump

Use Eclipse Memory Analyzer (MAT) or VisualVM:

```bash
# If using jcmd on a running process
jcmd <pid> GC.heap_dump /tmp/heapdump.hprof
```

In MAT:

1. Open the `.hprof` file
2. Run the "Leak Suspects Report"
3. Check the "Dominator Tree" for the largest retained objects

## Solutions

### Solution 1: Increase Heap Size

The quick fix - allocate more memory:

```bash
# Set both initial and max heap to 4GB
java -Xms4g -Xmx4g -jar myapp.jar

# For containers, use percentage-based sizing (Java 10+)
java -XX:MaxRAMPercentage=75.0 -jar myapp.jar
```

**Important:** don't increase the heap blindly. Determine the application's actual memory requirement through heap-dump and GC-log analysis first.
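One rough way to estimate that requirement is to measure the live set: heap usage right after a full collection approximates the data the application actually keeps alive. A sketch, with the caveat that `System.gc()` is only a hint:

```java
public class LiveSetEstimate {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // System.gc() is a hint; it usually triggers a full collection
        // unless -XX:+DisableExplicitGC is set
        System.gc();
        long liveBytes = rt.totalMemory() - rt.freeMemory();
        System.out.printf("approx. live set: %d MB (max heap %d MB)%n",
                liveBytes / (1024 * 1024), rt.maxMemory() / (1024 * 1024));
    }
}
```

A common rule of thumb is to start with `-Xmx` at roughly 1.5x the live set observed under peak load, then validate with GC logs.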

### Solution 2: Fix Memory Leaks

Common leak patterns and their fixes:

**Unclosed Resources:**

```java
// BAD - Connection not closed
public void loadData() throws SQLException {
    Connection conn = dataSource.getConnection();
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT * FROM large_table");
    // Process results...
    // Forgot to close resources!
}

// GOOD - try-with-resources closes everything, even on exceptions
public void loadData() throws SQLException {
    try (Connection conn = dataSource.getConnection();
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT * FROM large_table")) {
        // Process results...
    }
}
```

**Static Collections Growing Forever:**

```java
// BAD - Static cache never cleared
public class Cache {
    private static final Map<String, Object> CACHE = new HashMap<>();

    public static void put(String key, Object value) {
        CACHE.put(key, value); // Grows forever!
    }
}

// GOOD - Use a bounded cache with eviction
public class Cache {
    private static final int MAX_SIZE = 10_000;
    private static final LinkedHashMap<String, Object> CACHE =
        new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                return size() > MAX_SIZE;
            }
        };
}
```

**Listener Registration Without Removal:**

```java
// BAD - Listeners never deregistered
public class EventBus {
    private static final List<EventListener> listeners = new ArrayList<>();

    public static void register(EventListener listener) {
        listeners.add(listener);
    }
}

// GOOD - Provide deregistration and use WeakReference
public class EventBus {
    private static final List<WeakReference<EventListener>> listeners = new ArrayList<>();

    public static void register(EventListener listener) {
        listeners.add(new WeakReference<>(listener));
    }

    public static void deregister(EventListener listener) {
        listeners.removeIf(ref -> listener.equals(ref.get()));
    }
}
```

### Solution 3: Stream Large Datasets

Don't load entire datasets into memory:

```java
// BAD - Loads everything into memory
public List<Order> getAllOrders() {
    return orderRepository.findAll(); // 10 million orders in memory!
}

// GOOD - Stream results, processing one at a time
public Stream<Order> streamAllOrders() {
    return orderRepository.streamAll();
}

// Or use pagination
public Page<Order> getOrders(int page, int size) {
    return orderRepository.findAll(PageRequest.of(page, size));
}
```

### Solution 4: Tune Garbage Collection

For applications with specific memory patterns:

```bash
# G1GC (the default since Java 9) - good for most apps
java -XX:+UseG1GC -Xmx8g -XX:MaxGCPauseMillis=200 -jar myapp.jar

# For very large heaps (> 16 GB)
java -XX:+UseZGC -Xmx32g -jar myapp.jar

# For low-latency requirements (generational ZGC, Java 21+)
java -XX:+UseZGC -XX:+ZGenerational -Xmx16g -jar myapp.jar
```

### Solution 5: Fix Batch Processing Memory Issues

```java
// BAD - Accumulates the entire file before processing
public void processLargeFile(String path) throws IOException {
    List<Record> records = Files.lines(Paths.get(path))
        .map(this::parseRecord)
        .collect(Collectors.toList()); // All in memory!
    records.forEach(this::process);
}

// GOOD - Stream one record at a time (and close the stream)
public void processLargeFile(String path) throws IOException {
    try (Stream<String> lines = Files.lines(Paths.get(path))) {
        lines.map(this::parseRecord).forEach(this::process);
    }
}

// EVEN BETTER - Process in fixed-size batches to bound memory use
public void processLargeFile(String path) throws IOException {
    List<Record> batch = new ArrayList<>(1000);
    try (Stream<String> lines = Files.lines(Paths.get(path))) {
        lines.map(this::parseRecord).forEach(record -> {
            batch.add(record);
            if (batch.size() >= 1000) {
                processBatch(batch);
                batch.clear(); // Release references so GC can reclaim the batch
            }
        });
    }
    if (!batch.isEmpty()) {
        processBatch(batch);
    }
}
```

## Verification

Monitor memory after fixes:

```bash
# Use jstat for GC statistics, sampling every 1000 ms
jstat -gc <pid> 1000

# Output columns:
# S0C S1C S0U S1U EC EU OC OU MC MU CCSC CCSU YGC YGCT FGC FGCT CGC CGCT GCT
```

Key metrics to watch:

- **OU** (Old generation Used): should stabilize, not grow continuously
- **FGC** (Full GC Count): should not increase frequently
- **FGCT** (Full GC Time): high values indicate memory pressure
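The same counters jstat reports can also be read in-process via `GarbageCollectorMXBean`, for example to export them to a metrics system. A sketch; bean names such as "G1 Young Generation" vary by collector:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // getCollectionCount()/getCollectionTime() return -1 if unsupported
            System.out.printf("%s: collections=%d, total time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

A steadily climbing count on the old-generation collector after your fix is a sign the leak is still present.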

## Quick Reference

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| OOM on startup | Max heap too small for the working set | Increase `-Xmx` |
| OOM after running for hours | Memory leak | Analyze a heap dump |
| OOM during batch job | Data not streamed | Use pagination/streaming |
| Frequent full GCs | Heap too small or a leak | Increase heap or fix the leak |
| Long GC pauses | Large old generation | Use G1GC or ZGC |

The key insight: increasing the heap is a temporary fix. Lasting solutions involve finding and fixing memory leaks, or restructuring code to use less memory.