Your pod won't start, and the error says `configmap "my-config" not found`. Depending on how the ConfigMap is consumed, the pod typically shows a `CreateContainerConfigError` status (environment variable references) or sits in `ContainerCreating` (volume mounts). Either way, the pod is waiting for a ConfigMap that doesn't exist where Kubernetes expects it. This is a common issue when deploying applications that rely on configuration data, and it usually comes down to a naming, namespace, or creation-order problem.

Understanding ConfigMap Errors

ConfigMaps store configuration data that pods can consume as environment variables, command-line arguments, or configuration files in volumes. When a pod references a ConfigMap that doesn't exist, Kubernetes prevents the pod from starting. The error can appear at pod creation time or when the kubelet tries to mount the volume.

There are two modes for handling missing ConfigMaps: the default strict mode (pod won't start) and optional mode (pod starts without the ConfigMap).
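The difference shows up directly in the pod spec. A minimal sketch of the two modes for a ConfigMap volume (`app-config` is a placeholder name):

```yaml
# Default (strict): the pod won't start until app-config exists
volumes:
  - name: config
    configMap:
      name: app-config

# Optional: if app-config is missing, the volume is mounted empty and the pod starts
volumes:
  - name: config
    configMap:
      name: app-config
      optional: true
```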

Diagnosis Commands

Start by checking if the ConfigMap exists:

```bash
# Check if ConfigMap exists in the namespace
kubectl get configmap -n namespace
kubectl get configmap my-config -n namespace

# Get detailed ConfigMap information
kubectl describe configmap my-config -n namespace

# Check for the ConfigMap across all namespaces
kubectl get configmap -A | grep my-config
```

Check the pod error:

```bash
# Get pod status
kubectl get pods -n namespace

# Describe the failing pod
kubectl describe pod pod-name -n namespace

# Check events for ConfigMap errors
kubectl get events -n namespace --field-selector involvedObject.name=pod-name

# Look for specific error messages (-E enables the | alternation)
kubectl describe pod pod-name -n namespace | grep -E -A 5 -i "configmap|not found"
```

Check how the pod references the ConfigMap:

```bash
# Get pod spec to see ConfigMap references
kubectl get pod pod-name -n namespace -o yaml | grep -A 20 -i configmap

# Check environment variable references
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.containers[*].envFrom}'
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.containers[*].env}'
```

Common Solutions

Solution 1: Create the Missing ConfigMap

The most straightforward fix is to create the ConfigMap if it doesn't exist:

```bash
# Check if it exists
kubectl get configmap my-config -n namespace

# Create from literal values
kubectl create configmap my-config -n namespace --from-literal=key1=value1 --from-literal=key2=value2

# Create from file
kubectl create configmap my-config -n namespace --from-file=config.properties

# Create from directory
kubectl create configmap my-config -n namespace --from-file=path/to/config/dir

# Create from env file
kubectl create configmap my-config -n namespace --from-env-file=config.env
```

Create from YAML manifest:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  namespace: my-namespace
data:
  database_host: "postgres.default.svc.cluster.local"
  database_port: "5432"
  config.json: |
    {
      "setting1": "value1",
      "setting2": "value2"
    }
```

Apply the manifest:

```bash
kubectl apply -f configmap.yaml
```

Solution 2: Fix Namespace Mismatch

ConfigMaps are namespace-scoped. The pod and ConfigMap must be in the same namespace:

```bash
# Check pod namespace (a resource name can't be combined with --all-namespaces)
kubectl get pods -A | grep pod-name

# Check ConfigMap namespace
kubectl get configmap -A | grep my-config
```

If the ConfigMap exists in a different namespace, you have two options:

Option 1: Create the ConfigMap in the pod's namespace:

```bash
# Copy the ConfigMap to the correct namespace.
# Strip server-set metadata so the apply succeeds (requires jq).
kubectl get configmap my-config -n source-namespace -o json \
  | jq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp)' \
  | kubectl apply -n target-namespace -f -
```

Option 2: Reference the ConfigMap from another namespace. This is not supported: a pod's ConfigMap references always resolve within the pod's own namespace. Copy the ConfigMap into that namespace (as in Option 1), or use a sync tool or an external configuration system instead.

Solution 3: Fix ConfigMap Name Mismatch

The name in the pod spec must exactly match the ConfigMap name:

```bash
# Check ConfigMap names
kubectl get configmap -n namespace

# Check what the pod is looking for
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.containers[*].env[*].valueFrom.configMapKeyRef.name}'
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.volumes[*].configMap.name}'
```

Fix the reference:

```yaml
# Pod spec with wrong ConfigMap name
containers:
  - name: app
    env:
      - name: DATABASE_HOST
        valueFrom:
          configMapKeyRef:
            name: my-config-wrong  # Wrong name
            key: database_host

# Fix: update to the correct name
containers:
  - name: app
    env:
      - name: DATABASE_HOST
        valueFrom:
          configMapKeyRef:
            name: my-config  # Correct name
            key: database_host
```

Solution 4: Fix Key Name Mismatch

The key referenced in the pod must exist in the ConfigMap:

```bash
# Check ConfigMap keys
kubectl get configmap my-config -n namespace -o jsonpath='{.data}' | jq .

# Check what key the pod expects (-i because describe prints "ConfigMap")
kubectl describe pod pod-name -n namespace | grep -i -A 5 "configmap"
```

Fix the key reference:

```yaml
# The ConfigMap has this key (check with: kubectl get configmap my-config -n namespace -o yaml)
data:
  database_host: "postgres"  # Note: underscore, not hyphen

# Pod referencing the wrong key
env:
  - name: DB_HOST
    valueFrom:
      configMapKeyRef:
        name: my-config
        key: database-host  # Wrong key

# Fix: use the correct key
env:
  - name: DB_HOST
    valueFrom:
      configMapKeyRef:
        name: my-config
        key: database_host  # Correct key
```

Solution 5: Mark ConfigMap as Optional

If the ConfigMap isn't always required, mark it as optional:

```yaml
# For environment variables
env:
  - name: OPTIONAL_CONFIG
    valueFrom:
      configMapKeyRef:
        name: optional-config
        key: some-key
        optional: true  # Pod will start without this ConfigMap

# For volume mounts
volumes:
  - name: config-volume
    configMap:
      name: optional-config
      optional: true  # Pod will start without this ConfigMap
```

Solution 6: Fix Deployment Order

Use init containers or proper deployment ordering if ConfigMaps are created dynamically:

```yaml
# Init container that waits for the ConfigMap. busybox does not ship kubectl,
# so use an image that does, and bind a ServiceAccount allowed to get configmaps.
initContainers:
  - name: wait-for-config
    image: bitnami/kubectl:latest
    command: ['sh', '-c', 'until kubectl get configmap my-config; do sleep 1; done']
```

Or use Helm hooks:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
```

Solution 7: Check for ConfigMap Updates

For pods already running, ConfigMap updates need propagation:

```bash
# A ConfigMap mounted as a volume updates automatically (within ~1 minute)
# Environment variables do NOT update automatically

# Force pod restart to pick up new ConfigMap
kubectl rollout restart deployment/my-deployment -n namespace

# Or delete pods to recreate with new config
kubectl delete pods -l app=myapp -n namespace
```
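If the deployment is managed by Helm, a common companion pattern (from the Helm "Automatically Roll Deployments" tip) is to hash the rendered ConfigMap template into a pod annotation, so a config change triggers the rollout on its own. A sketch, assuming the template file is named configmap.yaml in your chart:

```yaml
# In the Deployment template: the annotation changes whenever the rendered
# ConfigMap changes, forcing a rolling restart of the pods
spec:
  template:
    metadata:
      annotations:
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```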

Solution 8: Debug with Describe

Use describe to see the exact error:

```bash
# See detailed error
kubectl describe pod pod-name -n namespace

# Common errors:
# - configmap "my-config" not found
# - couldn't find key database_host in ConfigMap default/my-config
# - failed to sync configmap cache
```

Verification

After fixing the issue:

```bash
# Verify ConfigMap exists
kubectl get configmap my-config -n namespace

# Verify pod starts successfully
kubectl get pods -n namespace -l app=myapp

# Check pod consumes ConfigMap correctly
kubectl exec -it pod-name -n namespace -- env | grep DATABASE_HOST

# If mounted as volume
kubectl exec -it pod-name -n namespace -- cat /etc/config/database_host

# Check events for success
kubectl get events -n namespace --sort-by='.lastTimestamp'
```

ConfigMap Reference Patterns

As Environment Variables

```yaml
containers:
  - name: app
    env:
      - name: DATABASE_HOST
        valueFrom:
          configMapKeyRef:
            name: my-config
            key: database_host
    # Or load all keys as env vars
    envFrom:
      - configMapRef:
          name: my-config
```

As Volume Mount

```yaml
volumes:
  - name: config-volume
    configMap:
      name: my-config
containers:
  - name: app
    volumeMounts:
      - name: config-volume
        mountPath: /etc/config
```

With Subpath

```yaml
volumes:
  - name: config-volume
    configMap:
      name: my-config
      items:
        - key: config.json
          path: config.json
containers:
  - name: app
    volumeMounts:
      - name: config-volume
        mountPath: /etc/app/config.json
        subPath: config.json  # Mount the single file without shadowing the whole directory
```

ConfigMap Not Found Causes Summary

| Cause | Check Command | Solution |
| --- | --- | --- |
| ConfigMap not created | `kubectl get cm` | Create the ConfigMap |
| Wrong namespace | `kubectl get cm -A` | Create in correct namespace |
| Name mismatch | `kubectl describe pod` | Fix reference name |
| Key mismatch | `kubectl get cm -o yaml` | Fix key reference |
| Wrong deployment order | Events show timing | Use init containers or hooks |
| Controller cache issue | Restart pod | `kubectl rollout restart` |

Prevention Best Practices

- Create ConfigMaps before the deployments that depend on them.
- Use consistent naming conventions for ConfigMaps.
- Consider `optional: true` for non-critical configuration.
- Make CI/CD pipelines create ConfigMaps before pods.
- Use Helm or Kustomize for dependency management.
- Verify ConfigMap creation in deployment scripts.
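Kustomize's configMapGenerator covers several of these points at once: it creates the ConfigMap with the rest of the manifests and appends a content hash to its name, rewriting every reference so pods can never point at a missing or stale ConfigMap. A minimal sketch (file and key names are illustrative):

```yaml
# kustomization.yaml
resources:
  - deployment.yaml   # references "my-config"; the name is rewritten to my-config-<hash>
configMapGenerator:
  - name: my-config
    literals:
      - database_host=postgres.default.svc.cluster.local
      - database_port=5432
```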

ConfigMap not found errors are almost always straightforward: the ConfigMap either doesn't exist, lives in the wrong namespace, or has a different name than expected. `kubectl describe pod` will tell you exactly which ConfigMap and key are causing the problem.