You updated a ConfigMap with new configuration values, but your pods continue using the old configuration. The application doesn't reflect the changes even though `kubectl get configmap` shows the updated values. ConfigMap update propagation is one of the more confusing aspects of Kubernetes because the behavior varies based on how the ConfigMap is consumed.

Understanding ConfigMap Update Behavior

ConfigMaps can be consumed as environment variables or mounted as files. The update propagation behavior differs significantly: environment variables are static and never update automatically, while mounted files update eventually but the application must detect and reload them. Many applications don't handle config file changes without restart.
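The "updates eventually" behavior for mounted files is easier to see with the kubelet's symlink mechanism: files live in a hidden timestamped directory and are reached through a `..data` symlink that the kubelet repoints when the ConfigMap changes. The following is a local simulation that needs no cluster; directory names like `..v1` are illustrative stand-ins for the kubelet's timestamped directories, not what it actually writes:

```shell
# Local simulation of a ConfigMap volume mount. The visible file is a symlink
# chain through "..data"; swapping "..data" to a new directory updates what a
# fresh read of the path sees, which is why the app must re-open the file.
mount=$(mktemp -d)

mkdir "$mount/..v1"
echo "log_level=info" > "$mount/..v1/app.conf"
ln -s "..v1" "$mount/..data"
ln -s "..data/app.conf" "$mount/app.conf"

before=$(cat "$mount/app.conf")

# "ConfigMap update": write a new directory, then repoint ..data
# (the real kubelet performs this swap atomically).
mkdir "$mount/..v2"
echo "log_level=debug" > "$mount/..v2/app.conf"
ln -sfn "..v2" "$mount/..data"

after=$(cat "$mount/app.conf")   # a re-read now returns the new content
rm -rf "$mount"
```

A process that opened the file before the swap still holds the old inode; only a re-open of the path picks up the new content, which is exactly why "the application must detect and reload".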

Diagnosis Commands

Check ConfigMap content:

```bash
# Verify ConfigMap is updated
kubectl get configmap config-name -n namespace -o yaml

# Check specific values
kubectl get configmap config-name -n namespace -o jsonpath='{.data}'
```

Check pod configuration:

```bash
# See how ConfigMap is used
kubectl get pod pod-name -n namespace -o yaml | grep -A 20 configMap

# Check if mounted as volume
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.volumes}'

# Check if used as env vars
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.containers[0].envFrom}'
```

Verify mounted content in pod:

```bash
# Check mounted file content
kubectl exec pod-name -n namespace -- cat /path/to/config/file

# Compare with ConfigMap
kubectl get configmap config-name -n namespace -o yaml
```

Common Solutions

Solution 1: Fix Environment Variable Updates

Environment variables from ConfigMaps don't update automatically:

```yaml
# ConfigMap as env vars - NEVER updates automatically
spec:
  containers:
  - name: app
    envFrom:
    - configMapRef:
        name: app-config
```

This is by design: environment variables are injected when the container is created and remain fixed for the life of the process.
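The same rule applies to any Unix process: its environment is copied at creation, so later changes to the source never reach it. A minimal local sketch, where `CONFIG_VALUE` is a hypothetical variable standing in for one injected via `configMapRef`:

```shell
# A background shell plays the "running pod": it forks with a copy of the
# current environment, so re-exporting the variable afterwards (the
# "ConfigMap update") cannot affect it. Only a new process sees the new value.
out=$(mktemp)

export CONFIG_VALUE=old
sh -c 'echo "$CONFIG_VALUE"' > "$out" &   # "pod" starts with old env
pid=$!

export CONFIG_VALUE=new                   # "ConfigMap update"
wait "$pid"

running_sees=$(cat "$out")                      # still "old"
restarted_sees=$(sh -c 'echo "$CONFIG_VALUE"')  # a "restarted pod" sees "new"
rm -f "$out"
```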

To update, you must restart pods:

```bash
# Rollout restart deployment
kubectl rollout restart deployment/deployment-name -n namespace

# Or delete pods to recreate with new env
kubectl delete pods -l app=app-name -n namespace
```

Use immutable ConfigMaps for env-based config if you want explicit update control; an immutable ConfigMap cannot be edited, so every change means deleting and recreating it, then restarting pods:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
immutable: true
data:
  key: value
```

Solution 2: Fix Volume Mount Updates

Mounted ConfigMaps update, but with delays:

```yaml
# ConfigMap mounted as volume - UPDATES automatically (eventually)
spec:
  volumes:
  - name: config
    configMap:
      name: app-config
  containers:
  - name: app
    volumeMounts:
    - name: config
      mountPath: /etc/app
```
Updates typically propagate within about a minute with default settings (the kubelet sync period plus its ConfigMap cache TTL). Verify:

```bash
# Watch for the update inside the pod
kubectl exec pod-name -n namespace -- watch cat /etc/app/config-key

# Force immediate update by restarting kubelet on the node (not recommended normally)
# Or check the kubelet sync interval
```

Configure optional behavior:

```yaml
spec:
  volumes:
  - name: config
    configMap:
      name: app-config
      optional: true  # Pod starts even if ConfigMap missing
```

Solution 3: Fix SubPath Mount Issues

SubPath mounts don't receive updates:

```yaml
# Using subPath - DOES NOT update automatically
spec:
  volumes:
  - name: config
    configMap:
      name: app-config
  containers:
  - name: app
    volumeMounts:
    - name: config
      mountPath: /etc/app/config.yaml
      subPath: config.yaml  # This breaks update propagation!
```
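The reason subPath goes stale: it resolves the volume's `..data` symlink once at container start and binds the concrete file behind it, so later symlink swaps by the kubelet never reach the container's view. A local simulation, using a plain `cp` to model the one-time bind (a real bind mount needs root); file names and values are illustrative:

```shell
# Simulate a ConfigMap volume with its ..data symlink, then compare a full
# mount (follows the symlink on every read) with a subPath-style one-time
# resolution (modeled as a copy taken at "container start").
vol=$(mktemp -d)
mkdir "$vol/..v1"
echo "replicas: 2" > "$vol/..v1/config.yaml"
ln -s "..v1" "$vol/..data"

# "Container start": subPath pins the file that exists right now
subpath_file=$(mktemp)
cp "$vol/..data/config.yaml" "$subpath_file"

# "ConfigMap update": kubelet writes a new dir and swaps the symlink
mkdir "$vol/..v2"
echo "replicas: 5" > "$vol/..v2/config.yaml"
ln -sfn "..v2" "$vol/..data"

full_mount=$(cat "$vol/..data/config.yaml")  # sees the update
subpath_view=$(cat "$subpath_file")          # stuck on the old content
rm -rf "$vol" "$subpath_file"
```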

Remove subPath to get updates:

```yaml
# Mount entire ConfigMap directory (updates work)
spec:
  containers:
  - name: app
    volumeMounts:
    - name: config
      mountPath: /etc/app  # Mount whole volume, not subPath
```

```bash
# Or use a symbolic link workaround inside the container
kubectl exec pod-name -n namespace -- ln -sf /etc/config-mount/app.conf /etc/app/app.conf
```

Solution 4: Implement Config Reload in Application

Even when files update, applications may not reload:

```bash
# Check if application supports config reload
kubectl exec pod-name -n namespace -- ps aux | grep app

# Send SIGHUP to PID 1 to trigger a reload, if the app supports it
kubectl exec pod-name -n namespace -- kill -HUP 1

# Or use inotify tools (if the image allows installing packages;
# the loop needs a shell, so wrap it in sh -c)
kubectl exec pod-name -n namespace -- apt-get install -y inotify-tools
kubectl exec pod-name -n namespace -- sh -c \
  'while inotifywait -e modify /etc/app/config.yaml; do reload-app; done'
```

Add reload mechanism to application:

```yaml
# Use sidecar to watch config changes
spec:
  containers:
  - name: app
    volumeMounts:
    - name: config
      mountPath: /etc/app
  - name: config-reloader
    image: kiwigrid/k8s-config-watcher
    volumeMounts:
    - name: config
      mountPath: /etc/app
    env:
    - name: CONFIG_FILE_PATH
      value: "/etc/app/config.yaml"
    - name: RELOAD_COMMAND
      value: "curl http://localhost:8080/reload"
```
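At its core, such a sidecar is just a change-detection loop around a reload hook. A minimal sketch using a checksum poll; `config_changed` and its state file are hypothetical helpers, and a real sidecar would wrap this in a `while`/`sleep` loop calling its reload command:

```shell
# Hypothetical helper: succeeds (exit 0) when the file's checksum differs from
# the one recorded in the state file, recording the new checksum either way.
config_changed() {
  file=$1; state=$2
  cur=$(cksum < "$file")
  prev=$(cat "$state")
  printf '%s\n' "$cur" > "$state"
  [ "$cur" != "$prev" ]
}

cfg=$(mktemp); state=$(mktemp)
echo "log_level=info" > "$cfg"

# A real sidecar would run:
#   while :; do config_changed "$cfg" "$state" && reload-app; sleep 10; done
config_changed "$cfg" "$state" && r1=reload || r1=skip  # baseline run fires once
config_changed "$cfg" "$state" && r2=reload || r2=skip  # unchanged -> skip
echo "log_level=debug" > "$cfg"
config_changed "$cfg" "$state" && r3=reload || r3=skip  # changed -> reload
rm -f "$cfg" "$state"
```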

Solution 5: Use ConfigMap Reloader

Install automatic reloader:

```bash
# Deploy configmap-reloader
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml

# Annotation triggers restart on ConfigMap change
kubectl annotate deployment deployment-name \
  configmap.reloader.stakater.com/reload="app-config" -n namespace
```

Or set the Reloader annotations directly in the manifest:

```yaml
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "app-config"
    secret.reloader.stakater.com/reload: "app-secret"
```

Solution 6: Fix ConfigMap Reference Issues

Pods may reference the wrong ConfigMap:

```bash
# Check which ConfigMap the pod references
kubectl get pod pod-name -n namespace -o yaml | grep configMapRef

# Verify the ConfigMap exists
kubectl get configmap -n namespace
```

Fix ConfigMap reference:

```yaml
spec:
  containers:
  - name: app
    env:
    - name: CONFIG_KEY
      valueFrom:
        configMapKeyRef:
          name: correct-config-name  # Fix name if wrong
          key: config-key
```

Solution 7: Handle ConfigMap Deletion Prevention

ConfigMaps may be deleted while pods reference them:

```bash
# Check if ConfigMap exists
kubectl get configmap app-config -n namespace

# Pod behavior if the ConfigMap is deleted:
# - Volume mount: new containers fail to start; running pods keep the last-synced files
# - Env vars: new containers fail to start unless the reference is optional
```

Use optional references for safety:

```yaml
spec:
  containers:
  - name: app
    env:
    - name: CONFIG_KEY
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: config-key
          optional: true  # Won't fail if ConfigMap missing
```

Solution 8: Force Pod Recreation

When you need immediate config update:

```bash
# Annotate to trigger recreation
kubectl patch deployment deployment-name -n namespace \
  -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'$(date -Iseconds)'"}}}}}'
```

Or scale down and up:

```bash
# Scale to zero
kubectl scale deployment deployment-name --replicas=0 -n namespace

# Wait for pods to be deleted
kubectl get pods -n namespace -l app=app-name -w

# Scale back up
kubectl scale deployment deployment-name --replicas=3 -n namespace
```

Verification

Verify ConfigMap updates work:

```bash
# Update ConfigMap
kubectl edit configmap app-config -n namespace

# Wait for propagation (volume mounts)
sleep 60
kubectl exec pod-name -n namespace -- cat /etc/app/config-key

# Or restart pods for env var update
kubectl rollout restart deployment/deployment-name -n namespace
kubectl rollout status deployment/deployment-name -n namespace -w

# Check new pods have the new config
kubectl exec new-pod-name -n namespace -- env | grep CONFIG
```

ConfigMap Update Behavior Summary

| Method | Updates Automatically | Required Action |
|---|---|---|
| `envFrom` | No | Restart pods |
| `env` + `valueFrom` | No | Restart pods |
| Volume mount | Yes (~1 min) | App must reload |
| Volume mount + `subPath` | No | Restart pods |
| Immutable ConfigMap | Never | Delete, recreate, restart pods |

Prevention Best Practices

- Use volume mounts for configs that need updates.
- Implement config reload in the application.
- Use Reloader for automatic pod restarts.
- Avoid subPath for mounted ConfigMaps.
- Document which configs require restarts.
- Test config updates in development.
- Consider immutable ConfigMaps for stable configs.

ConfigMap updates don't propagate to environment variables - you must restart pods. Volume mounts update automatically but applications must detect and reload the changes. Understand these behaviors to implement proper config management.