# Docker Build Failed in CI
## Common Error Patterns

Docker build failures in CI typically show one of these messages:

- `ERROR: failed to solve: failed to compute cache key`
- `Cannot connect to the Docker daemon at unix:///var/run/docker.sock`
- `denied: requested access to the resource is denied`
- `failed to register layer: Error processing tar file`
- `COPY failed: file not found in build context`

## Root Causes and Solutions
### 1. Docker Daemon Not Available

The CI runner cannot connect to the Docker daemon.
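A quick preflight in the job script confirms this failure mode before digging into runner configuration. A sketch; the `docker_required` helper name is illustrative, not a real CLI command:

```shell
# Fail fast with a clear message when the daemon is unreachable.
# docker_required is an illustrative helper name, not a docker built-in.
docker_required() {
  if ! docker info >/dev/null 2>&1; then
    echo "Cannot connect to the Docker daemon (DOCKER_HOST=${DOCKER_HOST:-unix:///var/run/docker.sock})" >&2
    return 1
  fi
}

# Usage in a CI step:
#   docker_required && docker build -t myapp .
```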
Solution:
For GitHub Actions, run the job in a container that can reach the host's Docker daemon:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker:24
      options: --privileged
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
    steps:
      - uses: actions/checkout@v4   # check out the repo so the build context exists
      - name: Build image
        run: docker build -t myapp .
```

Or use the official Docker actions:
```yaml
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: myapp:latest
```
For Jenkins with a Docker agent:

```groovy
pipeline {
    agent {
        docker {
            image 'docker:24'
            args '-v /var/run/docker.sock:/var/run/docker.sock --privileged'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t myapp .'
            }
        }
    }
}
```

### 2. Dockerfile Build Context Issues
Files referenced by `COPY` or `ADD` are outside the build context, or the path is wrong.
Solution:
Verify the build context:

```yaml
# GitHub Actions
- name: Build
  uses: docker/build-push-action@v5
  with:
    context: .            # current directory
    file: ./Dockerfile    # Dockerfile path
```

Common Dockerfile errors:
```dockerfile
# WRONG - file outside the build context
COPY ../src /app

# CORRECT - file inside the build context
COPY src /app

# WRONG - path doesn't exist in the context
COPY package.json /app/

# CORRECT - verify the paths exist before building
COPY ./package.json ./package-lock.json /app/
```
Test the Dockerfile locally:

```bash
# Build with verbose output
docker build --progress=plain -t myapp .

# Check what the build context excludes
ls -la .dockerignore
cat .dockerignore
```
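A small script can also catch the `COPY failed: file not found in build context` case before the build runs, by checking that every local path named in a `COPY` or `ADD` instruction exists. This is a rough sketch, not part of Docker itself; it assumes one source path per instruction and skips multi-stage `COPY --from` lines:

```shell
# check_copy_paths: verify that source paths in COPY/ADD instructions exist.
# Illustrative helper; assumes single-source instructions and ignores
# COPY --from lines (which reference another build stage, not the context).
check_copy_paths() {
  dockerfile="$1"
  grep -E '^(COPY|ADD) ' "$dockerfile" | grep -v -e '--from' |
  while read -r instr src rest; do
    if [ ! -e "$src" ]; then
      echo "missing in build context: $src" >&2
      exit 1   # exits the pipeline subshell; the function returns 1
    fi
  done
}

# Usage: check_copy_paths Dockerfile && docker build -t myapp .
```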
### 3. Registry Authentication Failure

The pipeline cannot push to the container registry.
Solution:
For Docker Hub:
```yaml
# GitHub Actions
- name: Login to Docker Hub
  uses: docker/login-action@v3
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Build and push
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: username/myapp:latest
```
For AWS ECR:
```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789012:role/my-role
    aws-region: us-east-1

- name: Login to Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v2

- name: Build, tag, and push image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: myapp
    IMAGE_TAG: ${{ github.sha }}
  run: |
    docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
    docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
```
For GCR:
```yaml
- name: Authenticate to Google Cloud
  uses: google-github-actions/auth@v2
  with:
    credentials_json: ${{ secrets.GCP_CREDENTIALS }}

- name: Configure Docker
  run: gcloud auth configure-docker gcr.io

- name: Build and push
  run: |
    docker build -t gcr.io/my-project/myapp:${{ github.sha }} .
    docker push gcr.io/my-project/myapp:${{ github.sha }}
```
### 4. Layer Caching Not Working

The build cache is never hit, so every build runs from scratch and is slow.
Solution:
Enable BuildKit caching:
```yaml
# GitHub Actions with Docker Buildx
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Cache Docker layers
  uses: actions/cache@v4
  with:
    path: /tmp/.buildx-cache
    key: ${{ runner.os }}-buildx-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-buildx-

- name: Build with cache
  uses: docker/build-push-action@v5
  with:
    context: .
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
```
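One known caveat with the `type=local` cache: `cache-to` writes to a fresh `/tmp/.buildx-cache-new` directory, and swapping it over the old cache keeps the cached layers from growing without bound across runs. A follow-up step like this is the commonly documented workaround:

```yaml
- name: Move cache
  run: |
    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache
```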
Optimize Dockerfile for caching:
```dockerfile
# BAD - source changes often and invalidates the npm install layer
COPY . /app
RUN npm install

# GOOD - layers that change less frequently come first
COPY package*.json /app/
RUN npm install
COPY . /app/
RUN npm run build
```
### 5. Out of Disk Space

The CI runner's disk fills up during the build.
Solution:
Clean up before building:

```yaml
steps:
  - name: Free disk space
    run: |
      sudo rm -rf /usr/share/dotnet
      sudo rm -rf /opt/ghc
      sudo rm -rf /usr/local/lib/android
      docker system prune -af --volumes

  - name: Build image
    run: docker build -t myapp .
```
Use multi-stage builds to reduce image size:
```dockerfile
# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Production stage (smaller)
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```
### 6. Network Issues During Build

Packages cannot be downloaded during the build.
Solution:
Use a proxy or mirror:

```dockerfile
# Use an npm mirror
RUN npm config set registry https://registry.npmmirror.com && \
    npm install

# Use an apt mirror
RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list && \
    apt-get update && apt-get install -y package
```
Or pre-download dependencies on the runner so the build itself needs no network access:

```yaml
steps:
  - name: Download dependencies
    run: npm ci   # node_modules now sits in the build context

  - name: Build image
    run: docker build -t myapp .
```
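For the pre-download approach to work, the Dockerfile has to copy the vendored `node_modules` instead of running `npm install`, and `.dockerignore` must not exclude it. A minimal sketch, assuming a prior CI step ran `npm ci` in the context root:

```dockerfile
FROM node:18
WORKDIR /app
COPY package*.json ./
# Copy dependencies already downloaded on the runner; no network needed here.
# Note: node_modules must NOT be listed in .dockerignore for this to work.
COPY node_modules ./node_modules
COPY . .
RUN npm run build
```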
### 7. Permission Denied in Build

The container user lacks the permissions it needs.
Solution:
Handle permissions in the Dockerfile:

```dockerfile
# Create the app directory with the right owner and run as a non-root user
FROM node:18
WORKDIR /app
RUN chown -R node:node /app
USER node
COPY --chown=node:node . .
RUN npm install
```

For rootless Docker:
```yaml
# GitHub Actions
- name: Build with rootless Docker
  run: |
    docker build --build-arg USER_ID=$(id -u) -t myapp .
```

### 8. Base Image Not Found
The base image cannot be pulled.
Solution:
Check that the image exists:

```bash
docker pull node:18
docker pull node:18-alpine
```

Pin a specific version:
```dockerfile
# BAD - latest may break or stop existing
FROM my-org/base:latest

# GOOD - pin a specific version
FROM my-org/base:v1.2.3

# GOOD - use a digest for an exact match
FROM my-org/base@sha256:abc123...
```
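Transient registry outages also surface as pull failures; a small retry wrapper in the CI script can smooth these over. A sketch; the `retry` helper name is illustrative, not a Docker or CI built-in:

```shell
# retry: run a command up to N times with a growing pause between attempts.
# Illustrative helper for transient failures such as flaky image pulls.
retry() {
  attempts="$1"; shift
  i=1
  while ! "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    sleep "$i"
    i=$((i + 1))
  done
}

# Usage: retry 3 docker pull node:18-alpine
```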
## Dockerfile Best Practices

### Layer Order for Caching
```dockerfile
# 1. Base image (changes least)
FROM node:18-alpine

# 2. System packages
RUN apk add --no-cache git curl

# 3. Dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

# 4. Application code (changes most)
COPY . .
RUN npm run build

# 5. Runtime configuration
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
### Minimize Layers
```dockerfile
# BAD - multiple layers
RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y curl
RUN apt-get clean

# GOOD - single layer
RUN apt-get update && \
    apt-get install -y git curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
```
### Use .dockerignore
```
# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
.env
.env.*
*.md
test/
tests/
*.test.js
Dockerfile*
.dockerignore
```

## Debugging Commands
```bash
# Build with verbose output and no cache
docker build --progress=plain --no-cache -t myapp .

# Debug a specific stage
docker build --target builder -t myapp-builder .

# Inspect the image
docker inspect myapp:latest

# Check the image size
docker images myapp:latest

# Run the container interactively
docker run -it --rm myapp:latest sh

# Check container logs
docker logs container-id
```
## Quick Reference
| Error | Solution |
|---|---|
| Daemon not available | Mount docker.sock or use privileged |
| Permission denied | Add --privileged or fix permissions |
| Registry denied | Configure credentials |
| Cache not working | Use BuildKit and cache actions |
| Out of space | Prune docker system, multi-stage |
| COPY failed | Check build context and .dockerignore |
## Prevention Tips

1. Use multi-stage builds for smaller images
2. Order Dockerfile layers for optimal caching
3. Pin base image versions, not `latest`
4. Use `.dockerignore` to reduce the build context
5. Enable BuildKit for better caching
6. Test the Dockerfile locally before pushing
## Related Articles
- [Jenkins Build Failed](#)
- [GitHub Actions Workflow Failed](#)
- [Kubernetes Deployment Failed in CI](#)