What's Actually Happening
A Kubernetes CronJob misses its scheduled execution time and does not create Jobs when expected. Scheduled tasks silently fail to run, causing automation failures, backup gaps, or monitoring lapses.
The Error You'll See
No Jobs created:
```bash
$ kubectl get jobs -l cronjob-name=my-cronjob
No resources found in default namespace.

$ kubectl get cronjob my-cronjob
NAME         SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
my-cronjob   */5 * * * *   False     0        <none>          1h
```
CronJob events show missed:
```bash
$ kubectl describe cronjob my-cronjob
Events:
  Type    Reason  Age  From                Message
  ----    ------  ---  ----                -------
  Normal  Missed  5m   cronjob-controller  Missed scheduled time for ...
```
Jobs not starting:
```bash
# Label selectors do not support wildcards; use the exact Job name
$ kubectl get pods -l job-name=my-cronjob-12345
No resources found in default namespace.
```
Why This Happens
1. Wrong timezone - Schedule written in local time but interpreted as UTC
2. Concurrency policy blocking - `concurrencyPolicy: Forbid` and the previous Job is still running
3. Starting deadline missed - The run is more than `startingDeadlineSeconds` behind schedule
4. Cron syntax error - Invalid schedule format
5. Suspended CronJob - `spec.suspend: true`
6. Controller issues - The CronJob controller (part of kube-controller-manager) is not running
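For reference, here is a minimal manifest showing where each of these knobs lives (names and values are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/5 * * * *"        # cause 4: must be valid cron syntax
  timeZone: "Etc/UTC"            # cause 1: UTC unless set (Kubernetes 1.27+)
  suspend: false                 # cause 5: true blocks all new Jobs
  concurrencyPolicy: Forbid      # cause 2: Forbid skips runs while one is active
  startingDeadlineSeconds: 300   # cause 3: late runs past this are skipped
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: busybox
              command: ["sh", "-c", "date"]
```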
Step 1: Check CronJob Status
```bash
# Get CronJob details
kubectl get cronjob my-cronjob -o yaml

# Check schedule
kubectl get cronjob my-cronjob -o jsonpath='{.spec.schedule}'

# Check if suspended
kubectl get cronjob my-cronjob -o jsonpath='{.spec.suspend}'

# Check last schedule time
kubectl get cronjob my-cronjob -o jsonpath='{.status.lastScheduleTime}'

# Check active jobs
kubectl get cronjob my-cronjob -o jsonpath='{.status.active}'

# Describe CronJob for events
kubectl describe cronjob my-cronjob

# List Jobs created by this CronJob (Jobs inherit the CronJob's name as a prefix;
# a label selector only works if your jobTemplate sets that label)
kubectl get jobs | grep my-cronjob
```
Step 2: Verify Cron Schedule Syntax
```bash
# Check current schedule
kubectl get cronjob my-cronjob -o jsonpath='{.spec.schedule}'

# Cron format: minute hour day-of-month month day-of-week
# Examples:
#   */5 * * * *  - Every 5 minutes
#   0 * * * *    - Every hour
#   0 0 * * *    - Every day at midnight
#   0 0 * * 0    - Every Sunday at midnight
#   0 9 * * 1-5  - Weekdays at 9am
# Common mistake:
#   5 * * * *    - At minute 5 of every hour (wrong if you meant every 5 minutes)
#   */5 * * * *  - Every 5 minutes (correct)

# Check timezone (schedules are interpreted as UTC unless spec.timeZone is set)
kubectl get cronjob my-cronjob -o jsonpath='{.spec.timeZone}'

# For Kubernetes 1.27+ (where timeZone is stable), in the CronJob spec:
#   spec:
#     timeZone: "America/New_York"
#     schedule: "0 9 * * *"   # 9am New York time

# Without timeZone the schedule is UTC:
# "0 9 * * *" fires at 09:00 UTC, which is 4am EST
```
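Because the timezone mismatch is the most common cause, it helps to check what UTC hour a local-time schedule actually corresponds to. A small sketch, assuming GNU `date` and installed tzdata (the helper name is illustrative):

```shell
# Convert a local wall-clock time to the UTC hour Kubernetes will use
# when spec.timeZone is not set. Requires GNU date (coreutils).
local_to_utc_hour() {
  local tz="$1" local_time="$2"   # e.g. America/New_York, "2024-01-15 09:00"
  # Interpret the time in the given zone, then print the hour in UTC
  date -u -d "TZ=\"$tz\" $local_time" +%H
}

local_to_utc_hour America/New_York "2024-01-15 09:00"   # winter (EST, UTC-5)
local_to_utc_hour America/New_York "2024-07-15 09:00"   # summer (EDT, UTC-4)
```

Note the daylight-saving difference: without `spec.timeZone`, a fixed UTC schedule drifts by an hour relative to local time twice a year.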
Step 3: Check Concurrency Policy
```bash
# Check concurrency policy
kubectl get cronjob my-cronjob -o jsonpath='{.spec.concurrencyPolicy}'

# Values:
#   Allow (default) - Multiple Jobs can run simultaneously
#   Forbid          - Skip the new Job if the previous one is still running
#   Replace         - Replace the running Job with the new one

# If using Forbid, check whether a previous Job is still running
kubectl get jobs | grep my-cronjob

# Check job status
kubectl describe job my-cronjob-12345

# If a previous Job is stuck, subsequent runs are skipped
kubectl get pods -l job-name=my-cronjob-12345

# Fix: delete the stuck Job
kubectl delete job my-cronjob-12345

# Or change the concurrency policy
kubectl patch cronjob my-cronjob -p '{"spec":{"concurrencyPolicy":"Allow"}}'
```
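The policy's effect at each scheduled tick can be sketched as a small decision function (an illustrative model, not the real controller code):

```shell
# Given the concurrencyPolicy and the number of still-active Jobs,
# what does the controller do at the scheduled time?
concurrency_decision() {
  local policy="$1" active="$2"
  if [ "$active" -eq 0 ]; then
    echo "start"; return    # nothing running: always start
  fi
  case "$policy" in
    Allow)   echo "start" ;;    # run alongside the active Job
    Forbid)  echo "skip" ;;     # this is the "missed" run
    Replace) echo "replace" ;;  # delete the active Job, start a new one
    *)       echo "invalid" ;;
  esac
}

concurrency_decision Forbid 1   # why Forbid + a stuck Job causes missed runs
```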
Step 4: Check Starting Deadline Seconds
```bash
# Check starting deadline
kubectl get cronjob my-cronjob -o jsonpath='{.spec.startingDeadlineSeconds}'

# startingDeadlineSeconds controls how late a Job may still start.
# If the CronJob controller was down, runs past the deadline are counted as missed.
# Example: with a value of 100, if the controller resumes 200 seconds after the
# scheduled time, that run is skipped (missed deadline).

# Increase the starting deadline
kubectl patch cronjob my-cronjob -p '{"spec":{"startingDeadlineSeconds":300}}'

# Or remove the limit (the Job starts whenever the controller recovers)
kubectl patch cronjob my-cronjob --type json -p '[{"op":"remove","path":"/spec/startingDeadlineSeconds"}]'

# Default: no deadline, but if more than 100 consecutive schedules are missed,
# the controller stops creating Jobs and emits a warning event
```
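The deadline check reduces to simple arithmetic, sketched here with made-up epoch values (the helper name is illustrative):

```shell
# A run is skipped when "now" is more than startingDeadlineSeconds
# past the scheduled time.
run_is_skipped() {
  local scheduled_epoch="$1" now_epoch="$2" deadline="$3"
  local late=$(( now_epoch - scheduled_epoch ))
  if [ "$late" -gt "$deadline" ]; then
    echo "skipped"
  else
    echo "started"
  fi
}

run_is_skipped 1000 1200 100   # 200s late vs 100s deadline
run_is_skipped 1000 1050 100   # 50s late: still within the deadline
```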
Step 5: Check if CronJob is Suspended
```bash
# Check suspend status
kubectl get cronjob my-cronjob -o jsonpath='{.spec.suspend}'

# If true, the CronJob is suspended and no new Jobs will be created

# Unsuspend the CronJob
kubectl patch cronjob my-cronjob -p '{"spec":{"suspend":false}}'

# Or in the manifest:
#   spec:
#     suspend: false
kubectl apply -f cronjob.yaml

# Verify (SUSPEND column should show False)
kubectl get cronjob my-cronjob
```
Step 6: Check CronJob Controller Health
```bash
# The CronJob controller runs inside kube-controller-manager

# Check kube-controller-manager status
kubectl get pods -n kube-system -l component=kube-controller-manager

# Check controller-manager logs
kubectl logs -n kube-system kube-controller-manager-xxx | grep -i cronjob

# On a control-plane node, check the process is running
ps aux | grep kube-controller-manager

# For managed Kubernetes (EKS, GKE, AKS) the control plane is not visible;
# check your provider's status page instead

# On kubeadm clusters kube-controller-manager is a static pod; restart it by
# moving its manifest out and back on the control-plane node:
mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/
sleep 10
mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
```
Step 7: Manual Trigger for Testing
```bash
# Create a Job manually from the CronJob template
kubectl create job --from=cronjob/my-cronjob manual-job-001

# Check the Job was created
kubectl get job manual-job-001

# Check job logs
kubectl logs -l job-name=manual-job-001

# If the manual Job works but scheduled runs don't, the job spec is fine;
# suspect the schedule, timezone, or the CronJob controller

# Delete the test Job
kubectl delete job manual-job-001

# Watch for the next scheduled Job
kubectl get jobs -w
```
Step 8: Fix Missed Job Execution
```bash
# If runs were missed and you need to catch up:

# Option 1: Create manual Jobs for the missed runs
kubectl create job --from=cronjob/my-cronjob catchup-job-1
kubectl create job --from=cronjob/my-cronjob catchup-job-2

# Option 2: Temporarily tighten the schedule
kubectl patch cronjob my-cronjob -p '{"spec":{"schedule":"*/1 * * * *"}}'
# Wait for catch-up, then restore the original schedule
kubectl patch cronjob my-cronjob -p '{"spec":{"schedule":"*/5 * * * *"}}'

# Option 3: Keep more Job history for auditing
kubectl patch cronjob my-cronjob -p '{"spec":{"successfulJobsHistoryLimit":10}}'

# Check history
kubectl get jobs | grep my-cronjob
```
Step 9: Add Proper Timezone Configuration
```yaml
# For Kubernetes 1.27+, use the timeZone field
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  timeZone: "America/New_York"   # Specify timezone
  schedule: "0 9 * * *"          # 9am in the specified timezone
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: job-image

# For older Kubernetes, handle the timezone inside the job script:
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: job-image
              command:
                - /bin/sh
                - -c
                - |
                  # Only do the work at 9am local time
                  current_hour=$(TZ=America/New_York date +%H)
                  if [ "$current_hour" -eq 9 ]; then
                    echo "Running at 9am local time"
                    # Run job
                  fi

# Or set the timezone via an environment variable on the container:
              env:
                - name: TZ
                  value: "America/New_York"
```
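The pre-1.27 workaround above can be factored into a testable helper that takes the instant as a parameter (assumes GNU `date` and tzdata; the function name is illustrative):

```shell
# Decide whether to run based on the hour in a target timezone at a given
# instant (epoch seconds), mirroring the in-container hour check.
should_run_at_local_hour() {
  local tz="$1" want_hour="$2" epoch="$3"
  local hour
  hour=$(TZ="$tz" date -d "@$epoch" +%H)
  hour=$((10#$hour))   # strip any leading zero before comparing
  if [ "$hour" -eq "$want_hour" ]; then
    echo "run"
  else
    echo "skip"
  fi
}

# 2024-01-15 14:00 UTC (epoch 1705327200) is 09:00 in America/New_York (EST)
should_run_at_local_hour America/New_York 9 1705327200
```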
Step 10: Monitor CronJob Execution
```bash
# Monitor CronJob events
kubectl get events --field-selector involvedObject.kind=CronJob -w

# Create a monitoring script
# (assumes your jobTemplate labels Jobs with cronjob=<name>)
cat << 'EOF' > monitor_cronjob.sh
#!/bin/bash
CRONJOB=my-cronjob

echo "=== CronJob Status ==="
kubectl get cronjob $CRONJOB

echo ""
echo "=== Recent Jobs ==="
kubectl get jobs -l cronjob=$CRONJOB --sort-by=.metadata.creationTimestamp | tail -5

echo ""
echo "=== Last Job Pod Status ==="
LAST_JOB=$(kubectl get jobs -l cronjob=$CRONJOB --sort-by=.metadata.creationTimestamp -o jsonpath='{.items[-1].metadata.name}')
kubectl get pods -l job-name=$LAST_JOB

echo ""
echo "=== Recent Events ==="
kubectl get events --field-selector involvedObject.name=$CRONJOB | tail -10
EOF

chmod +x monitor_cronjob.sh

# Set up an alert for missed schedules (requires kube-state-metrics).
# Prometheus rule; the 3600s threshold assumes the job runs at least hourly:
#   - alert: CronJobMissedSchedule
#     expr: |
#       time() - kube_cronjob_status_last_schedule_time > 3600
#     for: 1m
#     labels:
#       severity: warning
#     annotations:
#       summary: "CronJob {{ $labels.cronjob }} has not been scheduled recently"
```
Kubernetes CronJob Schedule Checklist
| Check | Command | Expected |
|---|---|---|
| Schedule syntax | get cronjob -o yaml | Valid cron format |
| Suspend status | jsonpath .spec.suspend | false |
| Concurrency policy | jsonpath .spec.concurrencyPolicy | Appropriate for the workload |
| Last schedule time | describe cronjob | Recent timestamp |
| Recent jobs | get jobs | Jobs created on schedule |
| Controller health | get events | No Missed events |
Verify the Fix
```bash
# After fixing CronJob schedule issues:

# 1. Watch for the CronJob to trigger at the correct time
kubectl get jobs -w
# Should create a Job at the scheduled time

# 2. Verify the Job completes
kubectl get job my-cronjob-12345
# COMPLETIONS should show 1/1

# 3. Check job logs
kubectl logs -l job-name=my-cronjob-12345
# Should show the job's execution output

# 4. Monitor CronJob events
kubectl get events --field-selector involvedObject.kind=CronJob
# No "Missed" events

# 5. Verify the schedule is correct
kubectl get cronjob my-cronjob -o jsonpath='{.spec.schedule}'
# Should match the intended schedule

# 6. Check the timezone if applicable
kubectl get cronjob my-cronjob -o jsonpath='{.spec.timeZone}'
# Should be the intended timezone
```
Related Issues
- [Fix Kubernetes CronJob Failed](/articles/fix-kubernetes-cronjob-failed)
- [Fix Kubernetes CronJob Not Scheduling](/articles/fix-kubernetes-cronjob-not-scheduling)
- [Fix Kubernetes Job Not Completing](/articles/fix-kubernetes-job-not-completing)