The Problem

Builds start failing with cryptic errors, the UI becomes unresponsive, or Jenkins won't start at all. The logs tell the real story:

```
java.io.IOException: No space left on device
	at java.io.FileOutputStream.writeBytes(Native Method)
	at hudson.FilePath.write(FilePath.java:2156)

# Or during a build
ERROR: Failed to archive artifacts: No space left on device
Build step 'Archive the artifacts' changed build result to FAILURE

# Or at startup
SEVERE: Failed to load Jenkins
java.io.IOException: /var/lib/jenkins/config.xml: No space left on device
```

Jenkins is a storage hog. Between build artifacts, logs, workspace files, and plugin data, disk space can disappear quickly. Let's get your instance running again.

Immediate Recovery

First, check the actual disk situation:

```bash
# Check disk usage
df -h | grep -E '(Filesystem|jenkins|home)'

# Check Jenkins home directory size
du -sh /var/lib/jenkins/

# Find the largest directories
du -h /var/lib/jenkins/ --max-depth=1 | sort -hr | head -20
```

Typical output:

```
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdf       100G   95G  5.0G  95% /var/lib/jenkins

8.5G  /var/lib/jenkins/jobs
6.2G  /var/lib/jenkins/workspace
5.1G  /var/lib/jenkins/builds
3.8G  /var/lib/jenkins/logs
2.1G  /var/lib/jenkins/plugins
```
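The jobs directory usually dominates, so before deleting anything it helps to see which jobs own the space. The same `du | sort` pipeline works one level deeper; a sketch, demonstrated against a throwaway tree standing in for `/var/lib/jenkins/jobs` (the job names are placeholders):

```shell
# Build a stand-in jobs tree so the pipeline can be run safely anywhere.
JOBS_DIR="$(mktemp -d)"
mkdir -p "$JOBS_DIR/big-job/builds" "$JOBS_DIR/small-job/builds"
head -c 1048576 /dev/zero > "$JOBS_DIR/big-job/builds/log"   # ~1 MiB
head -c 1024    /dev/zero > "$JOBS_DIR/small-job/builds/log" # ~1 KiB

# The technique: per-job disk usage, largest first.
# On a real controller: du -sh /var/lib/jenkins/jobs/* | sort -hr | head -20
du -s "$JOBS_DIR"/* | sort -nr | head -20
```

The largest consumers surface at the top, which tells you where build discarders or manual cleanup will pay off most.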

Quick Wins: Free Space Fast

1. Clean old workspaces:

```bash
# List workspaces
ls -la /var/lib/jenkins/workspace/

# Remove workspaces for jobs that no longer exist
cd /var/lib/jenkins/workspace
for dir in */; do
    job_name="${dir%/}"
    if [ ! -d "/var/lib/jenkins/jobs/$job_name" ]; then
        echo "Removing orphan workspace: $job_name"
        rm -rf "$dir"
    fi
done
```

2. Clean tmp directories:

```bash
# Check tmp usage
du -sh /var/lib/jenkins/tmp/

# Clear tmp (Jenkins must be stopped)
sudo systemctl stop jenkins
sudo rm -rf /var/lib/jenkins/tmp/*
sudo systemctl start jenkins
```

3. Remove old agent jars:

```bash
# Check for old agent jar versions
ls -la /var/lib/jenkins/remoting/

# Keep only the most recent jar (-r avoids running rm with no arguments)
cd /var/lib/jenkins/remoting/
ls -t | tail -n +2 | xargs -r rm -f
```

4. Clean build records via script console:

Go to Manage Jenkins > Script Console and run:

```groovy
// Delete all builds older than 30 days
def cutoff = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000
Jenkins.instance.getAllItems(Job.class).each { job ->
    // Snapshot the list first so deleting doesn't disturb the iteration
    def builds = job.getBuilds().byTimestamp(0, cutoff).toList()
    println "Deleting ${builds.size()} builds from ${job.fullName}"
    builds.each { it.delete() }
}
```
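If the controller won't start at all, the Script Console isn't available, but similar pruning can be approximated on disk with `find`. A sketch run against a throwaway tree mimicking the `jobs/<job>/builds/<n>` layout (on a real system, stop Jenkins first; it re-indexes builds on restart):

```shell
# Stand-in layout: jobs/<job>/builds/<buildNumber>
JOBS="$(mktemp -d)"
mkdir -p "$JOBS/app/builds/1" "$JOBS/app/builds/2"
touch -d '40 days ago' "$JOBS/app/builds/1"   # simulate an old build

# Dry run: list build directories untouched for 30+ days before deleting anything.
# On a real (stopped) controller:
#   find /var/lib/jenkins/jobs/*/builds -mindepth 1 -maxdepth 1 -type d -mtime +30
find "$JOBS"/*/builds -mindepth 1 -maxdepth 1 -type d -mtime +30
# When the list looks right, append: -exec rm -rf {} +
```

Prefer the Script Console version when Jenkins is running, since it keeps job metadata consistent.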

5. Delete old artifacts:

```groovy
// Script Console - remove artifacts but keep build records
Jenkins.instance.getAllItems(Job.class).each { job ->
    job.getBuilds().each { build ->
        def artifacts = build.getArtifactsDir()
        if (artifacts.exists() && build.number < job.nextBuildNumber - 20) {
            println "Deleting artifacts from ${build.fullDisplayName}"
            artifacts.deleteDir()
        }
    }
}
```
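To gauge how much archived-artifact data exists before deleting any of it, the archive directories can be summed from the shell; in the standard layout they live at `jobs/<job>/builds/<n>/archive`. A sketch against a throwaway tree (the job and build names are placeholders):

```shell
# Stand-in for jobs/<job>/builds/<n>/archive
BUILDS="$(mktemp -d)"
mkdir -p "$BUILDS/web/builds/7/archive" "$BUILDS/web/builds/8"
head -c 4096 /dev/zero > "$BUILDS/web/builds/7/archive/app.jar"

# On a real controller:
#   find /var/lib/jenkins/jobs -type d -name archive -exec du -sh {} + | sort -hr | head
find "$BUILDS" -type d -name archive -exec du -s {} + | sort -nr
```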

Systematic Cleanup via Jenkins Configuration

Prevent disk issues from recurring with proper configuration.

Configure Build Discarders

For each job (or set as default):

```groovy
// In a Declarative Pipeline
options {
    buildDiscarder(logRotator(
        numToKeepStr: '20',        // Keep the last 20 builds
        artifactNumToKeepStr: '5', // Keep artifacts for the last 5 builds
        daysToKeepStr: '30'        // Discard builds older than 30 days
    ))
}
```

In Freestyle jobs, enable Discard old builds in the job's General configuration section.

Set as system default in Manage Jenkins > System > Global Build Discarder.

Configure Log Rotation

Jenkins logs can grow unbounded. Configure rotation:

```bash
# For systemd installations
sudo systemctl edit jenkins
```

Add:

```ini
[Service]
StandardOutput=journal
StandardError=journal
# Rate-limit journal logging (size caps are set in /etc/systemd/journald.conf, e.g. SystemMaxUse=)
LogRateLimitIntervalSec=30s
LogRateLimitBurst=10000
```
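For packaged installs that still write a flat log file, a logrotate rule keeps it bounded. A sketch, assuming the common /var/log/jenkins/jenkins.log location (adjust the path to match your install):

```
# /etc/logrotate.d/jenkins
/var/log/jenkins/jenkins.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate rotates in place, so Jenkins keeps writing without a restart.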

You can also add a custom log recorder in Jenkins to keep verbose logging scoped and out of the main log:

Go to Manage Jenkins > System Log > Add new recorder and configure:

```
Name: Jenkins Log
Logger: (the package or class to capture, e.g. hudson.model.Run)
Level: INFO
```

Clean Plugin Caches

Plugins can accumulate cached data:

```bash
# Check plugin data
du -sh /var/lib/jenkins/plugins/*/META-INF/

# Clear specific plugin caches (example for workflow plugins)
rm -rf /var/lib/jenkins/plugins/workflow-*/META-INF/cache/
```

Adding More Storage

If cleanup isn't enough, you need more space.

Option 1: Expand Existing Volume

For cloud instances:

```bash
# AWS example - after expanding the EBS volume in the console
sudo growpart /dev/xvdf 1
sudo resize2fs /dev/xvdf1   # ext4; for XFS use: sudo xfs_growfs /var/lib/jenkins

# Verify
df -h /var/lib/jenkins
```

Option 2: Move Jenkins to Larger Volume

```bash
# Stop Jenkins
sudo systemctl stop jenkins

# Create and mount the new volume
sudo mkfs.ext4 /dev/xvdg
sudo mkdir /mnt/jenkins-new
sudo mount /dev/xvdg /mnt/jenkins-new

# Copy data, preserving permissions and ownership (-z compression buys nothing locally)
sudo rsync -a /var/lib/jenkins/ /mnt/jenkins-new/

# Update fstab
echo "/dev/xvdg /var/lib/jenkins ext4 defaults 0 0" | sudo tee -a /etc/fstab

# Unmount the staging mount, then remount the new volume at the Jenkins home
sudo umount /mnt/jenkins-new
sudo mount /var/lib/jenkins

# Start Jenkins
sudo systemctl start jenkins
```

Option 3: Offload Artifacts

Use external artifact storage:

```groovy
// Archive locally with fingerprinting...
archiveArtifacts artifacts: 'build/**/*', fingerprint: true

// ...or upload to S3 instead (Pipeline: AWS Steps plugin; bucket name is an example)
s3Upload bucket: 'my-artifacts', path: "builds/${BUILD_NUMBER}/", includePathPattern: 'build/**/*'
```

Monitoring Disk Usage

Set up alerts before you run out of space:

```bash
#!/bin/bash
# /usr/local/bin/check-jenkins-disk.sh

THRESHOLD=80
JENKINS_HOME="/var/lib/jenkins"

# -P keeps each filesystem on one line, so $5 (Use%) is reliable
USAGE=$(df -P "$JENKINS_HOME" | awk 'NR==2 {print $5}' | tr -d '%')

if [ "$USAGE" -gt "$THRESHOLD" ]; then
    echo "WARNING: Jenkins disk usage at ${USAGE}%"
    # Send alert (email, Slack, PagerDuty, etc.)
    exit 1
fi
exit 0
```

Add to cron:

```bash
# /etc/cron.d/jenkins-disk-check
*/30 * * * * root /usr/local/bin/check-jenkins-disk.sh
```

Built-in Disk Usage Monitor

Install the Disk Usage Plugin and navigate to Manage Jenkins > Disk Usage for detailed breakdowns:

  • Per-job artifact sizes
  • Per-build log sizes
  • Workspace sizes
  • Trends over time

Verifying Recovery

After cleanup:

```bash
# Check available space
df -h /var/lib/jenkins

# Verify Jenkins is healthy
sudo systemctl status jenkins

# Run a test build
curl -X POST http://localhost:8080/job/test-job/build

# Check build logs for I/O errors
grep -i "no space left" /var/log/jenkins/jenkins.log | tail -10
```

Then check in the UI:

  • Build history loads quickly
  • Artifact download works
  • No errors in the system log
  • Workspace cleanup runs after builds