# Docker Disk Quota Exceeded: Complete Resolution Guide
You're trying to pull an image, create a container, or write data, and Docker throws an error about disk quota being exceeded. This can happen at different levels—system disk full, storage driver quota, or container-specific limits.
The error might appear as:
```
failed to create shim: OCI runtime create failed: disk quota exceeded
```

Or:

```
no space left on device
```

Or in container logs:

```
write error: disk quota exceeded
```

Let's diagnose exactly where the quota issue originates and fix it.
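As a quick triage aid, the error text itself often narrows down which level is at fault. A minimal sketch (the helper name and messages are hypothetical, not part of any Docker tooling):

```bash
# Hypothetical helper: route a Docker error message to the likely culprit.
classify_quota_error() {
  case "$1" in
    *"no space left on device"*)
      echo "host disk full: start with df -h" ;;
    *"disk quota exceeded"*)
      echo "storage or project quota: check xfs_quota and storage-opts" ;;
    *)
      echo "unclear: run docker system df and df -h" ;;
  esac
}

classify_quota_error "failed to create shim: OCI runtime create failed: disk quota exceeded"
```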
## Diagnosing the Quota Issue
### Check System Disk Space
Start with the basics—is the host disk full?
```bash
df -h
```

Look for the mount point used by Docker:

```bash
df -h /var/lib/docker
```

If Docker's data root is in a non-default location, find and check it:

```bash
docker info | grep "Docker Root Dir"
df -h $(docker info | grep "Docker Root Dir" | awk '{print $4}')
```

### Check Docker Disk Usage
See how much space Docker is using:
```bash
docker system df
```

This shows:

- Images: space used by images
- Containers: space used by container writable layers
- Local Volumes: space used by named volumes
- Build Cache: space used by the build cache

For a detailed breakdown:

```bash
docker system df -v
```

### Check Storage Driver Quotas
If you're using overlay2 with XFS or a filesystem with project quotas:
```bash
docker info | grep "Storage Driver"
```

For overlay2 on XFS with `pquota`, check project quotas:

```bash
xfs_quota -x -c 'report -h' /var/lib/docker
```

For Btrfs:

```bash
btrfs qgroup show -r /var/lib/docker
```

### Check Container Size Limits

Some containers might have explicit size limits:

```bash
docker inspect <container_name> --format '{{.HostConfig.StorageOpt}}'
```

### Check Volume Limits
Volumes might be on a different filesystem with its own quota:
```bash
docker volume inspect <volume_name>
```

Check the actual mount point:

```bash
df -h $(docker volume inspect <volume_name> --format '{{.Mountpoint}}')
```

## Common Causes and Fixes
### Cause 1: Host Disk Full
The most common cause—the physical or virtual disk is out of space.
Symptoms:
```
ERROR: failed to register layer: Error processing tar file: no space left on device
```
Fix: Clean up Docker resources:
```bash
# Remove unused data
docker system prune -a

# More aggressive - removes everything not currently in use, including volumes
docker system prune -a --volumes

# Remove specific items
docker image prune -a    # remove unused images
docker container prune   # remove stopped containers
docker volume prune      # remove unused volumes
```
Free system space:
```bash
# Clean package caches (Ubuntu/Debian)
sudo apt clean
sudo apt autoclean

# Clean journal logs
sudo journalctl --vacuum-size=500M

# Find large files
sudo du -h --max-depth=1 /var | sort -hr | head -20
```
### Cause 2: Storage Driver Quota Exceeded
When using overlay2 on XFS with project quotas, Docker enforces per-container limits.
Symptoms:
```
failed to create shim: OCI runtime create failed: disk quota exceeded
```
Fix: Check and increase storage quota:
```bash
# Check current storage settings
docker info | grep -i "storage"

# View XFS project quotas
xfs_quota -x -c 'report -h' /var/lib/docker
```
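In the report, the project whose used blocks have reached its hard limit is the one hitting the quota. A sketch of spotting it with awk, run here on illustrative sample data (the exact column layout can vary between xfs_quota versions, so treat this as an assumption):

```bash
# Sample mimicking `xfs_quota -x -c 'report -N -p'` output
# (columns: project, used, soft, hard, warn, grace; values in 1 KiB blocks).
report='#101 52428800 0 52428800 0 [--------]
#102 1024000 0 52428800 0 [--------]'

# Flag projects whose usage has reached a non-zero hard limit.
echo "$report" | awk '$4 > 0 && $2 >= $4 { print $1, "is at its hard limit" }'
```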
To increase limits, edit `/etc/docker/daemon.json`:
```json
{
  "storage-opts": [
    "overlay2.size=50G"
  ]
}
```

Then restart Docker:

```bash
sudo systemctl restart docker
```

Note: changing this requires removing existing containers.
### Cause 3: Container Base Size Limit
The default base size for containers might be too small.
Fix: Increase the default container size in the daemon configuration:
```json
{
  "storage-opts": [
    "overlay2.size=20G"
  ]
}
```

For devicemapper (older setups):

```json
{
  "storage-opts": [
    "dm.basesize=20G"
  ]
}
```

### Cause 4: Specific Container Size Limit
A container might have been created with explicit size limits.
Fix: Recreate the container with a larger limit:
```bash
docker run --storage-opt size=50G <image>
```

In Docker Compose:

```yaml
services:
  myapp:
    image: myimage:latest
    storage_opt:
      size: "50G"
```

### Cause 5: Volume Quota Exceeded
The volume might be on a filesystem with quotas.
Symptoms:

- Error occurs when writing to a mounted volume
- The container itself has space, but volume writes fail
Fix: Check volume location:
```bash
docker volume inspect <volume_name> --format '{{.Mountpoint}}'
df -h $(docker volume inspect <volume_name> --format '{{.Mountpoint}}')
```

If the volume is on XFS with quotas:

```bash
xfs_quota -x -c 'report -h' <mount_point>
```

Move the volume to a filesystem with more space, or increase the quota.
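If moving the data is the answer, a throwaway container can copy one volume into another. A sketch with hypothetical volume names; by default it only prints the commands (set `DRY_RUN=` with Docker available to actually run them):

```bash
#!/bin/sh
# Migrate data from a full volume to a new, larger one (names are hypothetical).
SRC=${SRC:-myvolume}
DST=${DST:-bigvolume}
DRY_RUN=${DRY_RUN:-echo}   # default: just print the commands

$DRY_RUN docker volume create "$DST"
$DRY_RUN docker run --rm -v "$SRC:/from" -v "$DST:/to" alpine \
  sh -c 'cp -a /from/. /to/'
```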
### Cause 6: Build Cache Full
The build cache can consume significant space during multi-stage builds.
Symptoms:
```
failed to export image: failed to create blob: disk quota exceeded
```
Fix: Clean build cache:
```bash
# Prune all build cache
docker builder prune -a

# Prune cache older than 24 hours
docker builder prune --filter "until=24h"
```
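To stop the cache from simply refilling, the prune can be scheduled. A hypothetical wrapper (e.g. run daily from cron) with a configurable retention window; the docker lines are shown commented out so the sketch is safe to run anywhere:

```bash
#!/bin/sh
# Hypothetical daily cleanup, e.g. via cron: 0 3 * * * /usr/local/bin/docker-cleanup.sh
RETAIN_HOURS=${RETAIN_HOURS:-168}   # keep the last 7 days by default
FILTER="until=${RETAIN_HOURS}h"
echo "pruning Docker data older than ${RETAIN_HOURS}h"

# Real usage (requires Docker):
#   docker builder prune -f --filter "$FILTER"
#   docker image prune -f --filter "$FILTER"
```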
## Preventing Quota Issues
### Configure Automatic Cleanup
Cap log size and container layer size in `/etc/docker/daemon.json` so they can't grow unbounded:
```json
{
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-opts": [
    "overlay2.size=20G"
  ]
}
```

### Use a Dedicated Docker Volume
Create a separate volume or partition for Docker:
```bash
# Create a new logical volume for Docker (assumes volume group vg0 exists)
sudo lvcreate -L 100G -n docker vg0
sudo mkfs.xfs /dev/vg0/docker

# Mount with project quotas enabled
sudo mount -o pquota /dev/vg0/docker /var/lib/docker
```
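To keep the `pquota` option across reboots, the mount also needs an `/etc/fstab` entry (device path matching the example volume above):

```
/dev/vg0/docker  /var/lib/docker  xfs  defaults,pquota  0 0
```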
### Monitor Disk Usage
Set up monitoring:
```bash
# Check Docker disk usage
docker system df

# Script to alert when the Docker partition exceeds a threshold
THRESHOLD=90
USAGE=$(df -P /var/lib/docker | awk 'NR==2 {gsub("%","",$5); print $5}')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
  echo "Warning: Docker disk usage at ${USAGE}%"
fi
```
### Implement Log Rotation
Container logs can fill disk space rapidly:
Configure in `/etc/docker/daemon.json`:

```json
{
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
```

Per-container log settings:

```bash
docker run --log-opt max-size=10m --log-opt max-file=5 <image>
```

## Verification Steps
After applying fixes:
1. Check available space:

   ```bash
   docker system df
   df -h /var/lib/docker
   ```

2. Test container creation:

   ```bash
   docker run --rm hello-world
   ```

3. Verify volume writes:

   ```bash
   docker run --rm -v myvolume:/data alpine sh -c "echo test > /data/test.txt && cat /data/test.txt"
   ```

4. Monitor for recurrence:

   ```bash
   watch -n 60 'docker system df && echo "---" && df -h /var/lib/docker'
   ```
Disk quota issues are manageable with proper monitoring and cleanup procedures. The key is identifying whether the quota is at the system level, storage driver level, or container level, then applying the appropriate fix.