The Problem

Jenkins is sluggish, builds are failing randomly, or the service crashes entirely. You check the logs and see the telltale signs:

```
java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3332)
    at hudson.model.RunMap.onLoad(RunMap.java:150)

# Or
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:688)

# Or
java.lang.OutOfMemoryError: Metaspace
    at java.lang.ClassLoader.defineClass1(Native Method)
```

These are the three main memory errors you'll encounter, and each requires a different approach.

Diagnosing Memory Issues

Before making changes, understand your current memory situation:

```bash
# Check current JVM settings for Jenkins
# (the -- stops grep from treating the -Xmx pattern as an option)
ps aux | grep jenkins | grep -o -- '-Xmx[^ ]*'
ps aux | grep jenkins | grep -o -- '-Xms[^ ]*'

# Check total system memory
free -h

# Check Java process memory usage (5 samples, 1 second apart)
jstat -gc $(pgrep -f jenkins.war) 1s 5

# Check memory pressure
grep -i mem /proc/meminfo
```

Jenkins provides a built-in memory monitor. Navigate to Manage Jenkins > System Information and look for:

- `Runtime.getRuntime().maxMemory()` - Maximum heap
- `Runtime.getRuntime().totalMemory()` - Current heap allocation
- `Runtime.getRuntime().freeMemory()` - Free memory in current allocation

The real indicator is the ratio freeMemory() / totalMemory(). If it stays below 20% even right after a full garbage collection, you need more heap.
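As a back-of-the-envelope check, you can compute that ratio from the two byte values on the System Information page; a minimal sketch (the helper name and example numbers are made up):

```shell
# Hypothetical helper: given freeMemory() and totalMemory() in bytes
# (copied from the System Information page), print the free ratio as
# an integer percentage.
heap_free_percent() {
  local free_bytes=$1 total_bytes=$2
  echo $(( free_bytes * 100 / total_bytes ))
}

# Example: 300 MB free out of a 2048 MB current allocation
heap_free_percent $(( 300 * 1024 * 1024 )) $(( 2048 * 1024 * 1024 ))   # prints: 14
```

A result of 14% is below the 20% threshold, so this instance would warrant a larger heap.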

Solution 1: Increase Heap Size

The most common fix is simply giving Jenkins more memory.

For systemd installations:

```bash
sudo systemctl edit jenkins
```

Add memory settings:

```ini
[Service]
# Give Jenkins 4GB of heap, start with 512MB
Environment="JAVA_OPTS=-Xmx4g -Xms512m"

# For very large instances (8GB+)
# Environment="JAVA_OPTS=-Xmx8g -Xms2g"
```

Apply:

```bash
sudo systemctl daemon-reload
sudo systemctl restart jenkins
```

For init.d installations:

Edit /etc/default/jenkins (Debian/Ubuntu) or /etc/sysconfig/jenkins (RHEL/CentOS):

```bash
# Find the JAVA_ARGS line and modify it
JAVA_ARGS="-Xmx4g -Xms512m"
```
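If you prefer to script the edit, a sed substitution works. This sketch runs against a scratch copy rather than the real file, and the existing JAVA_ARGS content (a lone headless flag) is an assumption:

```shell
# Demonstrate the substitution on a scratch copy of the config
# (point sed at /etc/default/jenkins or /etc/sysconfig/jenkins for real)
cfg=$(mktemp)
printf 'JAVA_ARGS="-Djava.awt.headless=true"\n' > "$cfg"

# Replace the whole JAVA_ARGS line, keeping the existing headless flag
sed 's/^JAVA_ARGS=.*/JAVA_ARGS="-Xmx4g -Xms512m -Djava.awt.headless=true"/' "$cfg"
```

Check the printed output before writing it back; dropping an existing flag (like the headless one) is a common way to break a headless server.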

For Docker:

```bash
docker run -d \
  -e JAVA_OPTS="-Xmx4g -Xms512m" \
  -p 8080:8080 \
  jenkins/jenkins:lts
```

Or with Docker Compose:

```yaml
version: '3'
services:
  jenkins:
    image: jenkins/jenkins:lts
    environment:
      - JAVA_OPTS=-Xmx4g -Xms512m
    ports:
      - "8080:8080"
```

For Kubernetes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jenkins
spec:
  containers:
  - name: jenkins
    image: jenkins/jenkins:lts
    env:
    - name: JAVA_OPTS
      value: "-Xmx4g -Xms512m"
    resources:
      limits:
        memory: "6Gi"
      requests:
        memory: "4Gi"
```
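Note that the container limit (6Gi) is deliberately larger than the heap (4g): the JVM also needs room for metaspace, thread stacks, and off-heap buffers. On Java 10+ you can instead let the JVM derive the heap from the container limit with `-XX:MaxRAMPercentage` (e.g. `JAVA_OPTS=-XX:MaxRAMPercentage=50.0`). A quick sanity check of what a percentage buys you, using the 6 GiB limit from the manifest above:

```shell
# With a 6 GiB container limit, MaxRAMPercentage=50 yields a ~3 GiB heap,
# leaving the other half for metaspace, threads, and off-heap buffers
limit_mib=$(( 6 * 1024 ))
heap_mib=$(( limit_mib * 50 / 100 ))
echo "${heap_mib} MiB"   # prints: 3072 MiB
```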

How Much Memory Does Jenkins Need?

The answer depends on your workload:

| Instance Size | Jobs     | Executors | Recommended Heap |
|---------------|----------|-----------|------------------|
| Small         | < 50     | 2-4       | 2-4 GB           |
| Medium        | 50-200   | 4-8       | 4-8 GB           |
| Large         | 200-1000 | 8-16      | 8-16 GB          |
| Enterprise    | 1000+    | 16+       | 16-32 GB         |

Also consider:

- Number of plugins (each adds memory overhead)
- Build history retained
- Concurrent builds
- Pipeline complexity
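The sizing table can be folded into a quick helper for provisioning scripts; the thresholds below simply mirror the table and are a starting point, not a rule:

```shell
# Map a job count to the table's recommended heap range
recommended_heap() {
  local jobs=$1
  if   [ "$jobs" -lt 50 ];   then echo "2-4 GB"
  elif [ "$jobs" -le 200 ];  then echo "4-8 GB"
  elif [ "$jobs" -le 1000 ]; then echo "8-16 GB"
  else                            echo "16-32 GB"
  fi
}

recommended_heap 120   # prints: 4-8 GB
```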

Solution 2: Fix Metaspace Issues

If you're seeing OutOfMemoryError: Metaspace, the issue isn't heap - it's class metadata:

```ini
# Add metaspace limits to JAVA_OPTS
Environment="JAVA_OPTS=-Xmx4g -Xms512m -XX:MaxMetaspaceSize=512m -XX:MetaspaceSize=128m"
```

Metaspace issues often indicate:

- Too many plugins loaded
- Memory leak in a plugin
- Excessive class loading (dynamic pipelines)

Solution 3: Tune Garbage Collection

For large instances, the default G1GC might not be optimal. Switch to a tuned G1 configuration:

```ini
Environment="JAVA_OPTS=-Xmx8g -Xms2g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=4 -XX:ConcGCThreads=2 -XX:InitiatingHeapOccupancyPercent=35"
```

Key flags explained:

- `MaxGCPauseMillis=200` - Target 200ms max pause time
- `ParallelGCThreads=4` - Threads for stop-the-world parallel GC work
- `ConcGCThreads=2` - Threads for concurrent marking
- `InitiatingHeapOccupancyPercent=35` - Start a concurrent GC cycle when the heap is 35% full

Enable GC logging to monitor effectiveness:

```ini
Environment="JAVA_OPTS=-Xmx4g -Xlog:gc*:file=/var/log/jenkins/gc.log:time,level,tags:filecount=5,filesize=10m"
```
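Once the log exists, you can check whether pauses actually stay under the MaxGCPauseMillis target. A sketch that extracts pause durations from unified-logging lines, demonstrated against two inlined sample lines (your gc.log content will differ in detail):

```shell
# Two sample unified-GC-log lines standing in for /var/log/jenkins/gc.log
gclog=$(mktemp)
cat > "$gclog" <<'EOF'
[2024-01-01T10:00:00.000+0000][info][gc] GC(12) Pause Young (Normal) (G1 Evacuation Pause) 512M->128M(4096M) 35.123ms
[2024-01-01T10:00:05.000+0000][info][gc] GC(13) Pause Young (Normal) (G1 Evacuation Pause) 600M->130M(4096M) 240.481ms
EOF

# Longest recorded pause in ms; compare against the 200ms target
grep -o '[0-9.]*ms$' "$gclog" | sed 's/ms$//' | sort -n | tail -1   # prints: 240.481
```

Here the worst pause (240ms) exceeds the 200ms target, which would justify further tuning.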

Solution 4: Detect Memory Leaks

If memory usage keeps growing despite adequate heap, you might have a leak. Install the Memory Monitor Plugin and watch the trend over time.

For deeper analysis, take a heap dump:

```bash
# Find Jenkins Java PID
JENKINS_PID=$(pgrep -f jenkins.war)

# Create heap dump
jmap -dump:live,format=b,file=/tmp/jenkins-heap.hprof $JENKINS_PID

# Download and analyze with Eclipse MAT or VisualVM
```

Common leak sources:

- Pipelines that don't clean up temporary files
- Plugins with known memory issues (check plugin issue trackers)
- Large archived artifacts in builds

Solution 5: Reduce Memory Footprint

If you can't add more memory, reduce Jenkins' needs:

Reduce build history:

```groovy
// In Pipeline
options {
    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '5'))
}
```
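To see which jobs' histories are actually worth discarding, rank the builds directories by disk usage. This sketch assumes the default on-disk layout ($JENKINS_HOME/jobs/&lt;job&gt;/builds) and demonstrates against a scratch tree, so it's safe to try anywhere:

```shell
# Rank build-history directories by size (KiB), biggest first
biggest_build_dirs() {
  local home=$1
  du -sk "$home"/jobs/*/builds 2>/dev/null | sort -rn | head -10
}

# Demo against a scratch layout; point it at /var/lib/jenkins
# (or your $JENKINS_HOME) for real
home=$(mktemp -d)
mkdir -p "$home/jobs/app-a/builds" "$home/jobs/app-b/builds"
dd if=/dev/zero of="$home/jobs/app-a/builds/log" bs=1024 count=64 2>/dev/null
biggest_build_dirs "$home"
```

The jobs at the top of the list are the best candidates for a tighter buildDiscarder policy.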

Clean old builds via script console (Manage Jenkins > Script Console):

```groovy
Jenkins.instance.getAllItems(Job.class).each { job ->
    println "Processing ${job.fullName}"
    job.getBuilds().each { build ->
        if (build.number < job.nextBuildNumber - 50) {
            println "Deleting ${build.fullDisplayName}"
            build.delete()
        }
    }
}
```

Disable unused plugins: Navigate to Manage Jenkins > Plugins > Installed, and disable plugins you don't need.

Verifying the Fix

After making changes:

```bash
# Restart Jenkins
sudo systemctl restart jenkins

# Watch memory usage
watch -n 1 'jstat -gc $(pgrep -f jenkins.war)'

# Check for memory errors in logs
grep -i "OutOfMemoryError" /var/log/jenkins/jenkins.log | tail -20

# Monitor via web UI:
# Navigate to Manage Jenkins > System Information
```

Run some builds and verify:

- No OutOfMemoryError in logs
- Build times are consistent
- UI response is snappy
- Memory usage stabilizes (not constantly growing)
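The log check lends itself to a small probe you can drop into monitoring or a cron job; a sketch, demonstrated against an inlined sample log (point it at /var/log/jenkins/jenkins.log in practice):

```shell
# Count OutOfMemoryError occurrences in a log file; 0 means healthy
# (|| true keeps the function's exit status clean when grep finds nothing)
oom_count() {
  grep -ci "OutOfMemoryError" "$1" || true
}

# Sample log standing in for the real Jenkins log
log=$(mktemp)
printf 'INFO  build #42 finished: SUCCESS\nSEVERE java.lang.OutOfMemoryError: Java heap space\n' > "$log"

oom_count "$log"   # prints: 1
```

Any nonzero count after your changes means the fix hasn't fully landed and it's time to revisit the earlier solutions.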