What's Actually Happening
You're trying to run terraform apply or terraform plan, but the operation fails immediately because Terraform detects that the state file is locked by another process. Sometimes the state file itself is corrupted - it has invalid JSON, missing resources, or references to resources that no longer exist. Other times, a previous Terraform run crashed or was interrupted, leaving behind a stale lock file.
Terraform uses state locking to prevent concurrent modifications that could corrupt the state file. When a run starts, it acquires a lock. If that run crashes or is terminated without releasing the lock, subsequent runs can't proceed. Corrupted state files are even more dangerous - they can cause Terraform to try to delete resources it shouldn't, or fail to track resources it created.
The Error You'll See
Terraform state lock and corruption issues manifest in several ways:
```bash
# State file locked error
$ terraform apply

Error: Error acquiring the state lock: 2 errors occurred:
	* ConditionalCheckFailedException: The conditional check failed.
	* ConditionalCheckFailedException: Item does not exist

Error: State locked by another operation

A Terraform process has locked this state. Lock Info:
  ID:        abc123-def456-ghi789
  Path:      terraform.tfstate
  Operation: OperationTypeApply
  Who:       user@hostname
  Version:   1.5.7
  Created:   2026-04-08 10:15:23.456789 UTC
  Info:      Running apply on main.tf

Terraform will not proceed without a lock. If this is an error, you may
release the lock using 'terraform force-unlock'.
# State file corrupted - invalid JSON
$ terraform plan

Error: Failed to load state: invalid character ',' looking for beginning of object key string

Error: state file "terraform.tfstate" is invalid: json: cannot unmarshal string into Go value of type tfstate.State
# State file missing resources
$ terraform plan

Error: Provider produced inconsistent result after apply

When applying changes to module.example.aws_instance.web, provider
"provider[\"registry.terraform.io/hashicorp/aws\"]" produced an unexpected
new value: Root resource was present, but now absent.

This is a bug in the provider. Please report this to the provider maintainer.
# Remote state backend lock error (S3)
$ terraform apply

Error: Error acquiring the state lock: Error retrieving lock metadata:
AccessDenied: Access Denied
	status code: 403, request id: ABC123, host id: DEF456

# Terraform state refresh hangs indefinitely
$ terraform refresh
# (hangs forever - no output)
# State file references deleted resource
$ terraform plan

Error: Provider "aws" failed to retrieve resource "aws_instance.example" during refresh: instance does not exist

# Backend configuration mismatch
$ terraform init

Error: Backend configuration changed since last init

The backend configuration has changed from "s3" to "local". This requires
running 'terraform init -reconfigure' to update the state backend.

# DynamoDB lock table missing (for S3 backend)
$ terraform apply

Error: Error acquiring the state lock: ResourceNotFoundException:
Requested resource not found
	Item: terraform-state-lock-myproject
```
Additional symptoms:
- terraform apply hangs without producing any output
- State file size is 0 bytes or very small
- JSON parsing errors when manually inspecting terraform.tfstate
- Resources show in AWS console but not in Terraform state
- Terraform tries to recreate resources that already exist
- Multiple team members can't run Terraform simultaneously
- CI/CD pipeline Terraform steps fail with lock errors
Why This Happens
1. Previous Terraform Run Crashed: A terraform apply or plan process was interrupted - killed, crashed due to memory issues, or terminated by a CI/CD timeout. The process died without releasing the lock it acquired, leaving a stale lock that blocks all subsequent operations.
2. State File Manually Modified: Someone edited terraform.tfstate by hand and introduced JSON syntax errors, removed resources incorrectly, or changed resource IDs that don't match reality. Manual state editing is risky and often causes corruption.
3. Network Interruption During State Write: Terraform was writing state to a remote backend (S3, Terraform Cloud, etc.) when connectivity dropped. The write was incomplete, leaving a partially written, invalid state file.
4. Backend Migration Without Reconfigure: The backend configuration in Terraform code was changed (from local to S3, or to a different S3 bucket) but terraform init -reconfigure wasn't run. Terraform tries to use the new backend while the state is still in the old location.
5. Concurrent Terraform Runs: Two users or CI jobs ran Terraform simultaneously against the same state. One acquired the lock properly; the second detected the lock and failed. Occasionally race conditions cause partial corruption.
6. DynamoDB Lock Table Deleted or Missing: For S3 backends that use DynamoDB for locking, the lock table was deleted manually or never created. Terraform can't acquire a lock because the table doesn't exist.
7. State File Deleted or Moved: The state file (terraform.tfstate or remote state) was accidentally deleted or moved. Terraform thinks the infrastructure doesn't exist and will try to recreate everything.
8. Provider Version Incompatibility: After upgrading Terraform or a provider, the state file format doesn't match what the new version expects because the state schema changed between versions.
Step 1: Identify Lock Type and State Backend Configuration
First, determine what backend you're using and where the state is stored.
```bash
# Check Terraform version
terraform version

# Check backend configuration in your Terraform code
grep -A20 "backend" main.tf
grep -A20 "backend" backend.tf

# Common backends:
# - local: terraform.tfstate in working directory
# - s3: state in S3 bucket, lock in DynamoDB table
# - remote/cloud: state in Terraform Cloud
# - http: state via HTTP endpoint
# - consul: state in Consul KV store

# Check current backend configuration (from .terraform):
grep -A10 "backend" .terraform/terraform.tfstate

# If backend is local:
ls -la terraform.tfstate
ls -la terraform.tfstate.backup
ls -la .terraform.tfstate.lock.info

# If backend is S3:
grep -A20 'backend "s3"' main.tf

# Example S3 backend config:
# terraform {
#   backend "s3" {
#     bucket         = "my-terraform-state"
#     key            = "prod/terraform.tfstate"
#     region         = "us-east-1"
#     encrypt        = true
#     dynamodb_table = "terraform-locks"
#   }
# }

# Check S3 bucket for state file:
aws s3 ls s3://my-terraform-state/prod/
aws s3 cp s3://my-terraform-state/prod/terraform.tfstate /tmp/state-check.json

# Check DynamoDB lock table:
aws dynamodb describe-table --table-name terraform-locks

# List items in lock table (shows current locks):
aws dynamodb scan --table-name terraform-locks

# For Terraform Cloud backend:
terraform login
# Then check workspace state at app.terraform.io

# Check if there's a local lock file (local backend):
cat .terraform.tfstate.lock.info 2>/dev/null || echo "No local lock file"

# Check lock metadata in DynamoDB (S3 backend):
aws dynamodb get-item --table-name terraform-locks \
  --key '{"LockID":{"S":"my-terraform-state/prod/terraform.tfstate"}}'
```
Document your backend type, state location, and lock mechanism.
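The backend lookup above can be wrapped in a small helper. This is a minimal sketch, assuming the backend cache Terraform writes to `.terraform/terraform.tfstate` during `terraform init`; the demo file below is fabricated for illustration.

```shell
#!/usr/bin/env bash
# Print the backend type and location Terraform is currently initialized with,
# reading the backend cache that `terraform init` writes.

detect_backend() {
  local cache="${1:-.terraform/terraform.tfstate}"
  if [ ! -f "$cache" ]; then
    echo "local (no backend cache found - state is likely ./terraform.tfstate)"
    return 0
  fi
  local type
  type=$(jq -r '.backend.type // "local"' "$cache")
  case "$type" in
    s3)
      # Summarize the S3 backend settings in one line
      jq -r '"s3 - bucket: \(.backend.config.bucket), key: \(.backend.config.key), lock table: \(.backend.config.dynamodb_table // "none")"' "$cache"
      ;;
    *)
      echo "$type"
      ;;
  esac
}

# Demo with a fabricated backend cache (not a real project):
mkdir -p /tmp/tf-demo/.terraform
cat > /tmp/tf-demo/.terraform/terraform.tfstate << 'EOF'
{"backend":{"type":"s3","config":{"bucket":"my-terraform-state","key":"prod/terraform.tfstate","dynamodb_table":"terraform-locks"}}}
EOF
detect_backend /tmp/tf-demo/.terraform/terraform.tfstate
```

In a real working directory you would call `detect_backend` with no argument.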
Step 2: Inspect Current Lock and Determine Lock Owner
Find out what process owns the lock and whether it's still active.
```bash
# For local backend, check lock file:
cat .terraform.tfstate.lock.info

# Example output:
# {"ID":"abc123","Operation":"OperationTypeApply","Info":"","Who":"user@host",
#  "Version":"1.5.7","Created":"2026-04-08T10:15:23Z","Path":"terraform.tfstate"}

# Extract lock ID:
LOCK_ID=$(jq -r '.ID' .terraform.tfstate.lock.info)
echo "Lock ID: $LOCK_ID"

# Extract who created the lock:
LOCK_OWNER=$(jq -r '.Who' .terraform.tfstate.lock.info)
echo "Lock owner: $LOCK_OWNER"

# If the lock's Info field records a pid, check whether that process still runs:
LOCK_PID=$(jq -r '.Info' .terraform.tfstate.lock.info | grep -o 'pid=[0-9]*' | cut -d= -f2)
ps aux | grep "$LOCK_PID"

# For S3 backend, check the DynamoDB lock:
aws dynamodb get-item --table-name terraform-locks \
  --key '{"LockID":{"S":"my-terraform-state/prod/terraform.tfstate"}}' \
  --output json > /tmp/lock-info.json

jq '.Item' /tmp/lock-info.json

# Extract lock info:
LOCK_ID=$(jq -r '.Item.LockID.S' /tmp/lock-info.json)
LOCK_DIGEST=$(jq -r '.Item.Digest.S' /tmp/lock-info.json)

# Check who created the lock:
aws dynamodb scan --table-name terraform-locks \
  --filter-expression "LockID = :lockid" \
  --expression-attribute-values '{":lockid":{"S":"my-terraform-state/prod/terraform.tfstate"}}'

# For Terraform Cloud:
# Check runs in the workspace UI or via API:
curl -H "Authorization: Bearer $TF_TOKEN" \
  https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/runs

# Check if there's an in-progress run blocking the state:
# In Terraform Cloud UI: workspace > Runs > look for "running" or "pending" status
```
Step 3: Verify Lock is Stale (Process Not Running)
Confirm that the process holding the lock is no longer active before force-unlocking.
```bash
# For local backend:
# Check if any Terraform process is still running:
ps aux | grep terraform | grep -v grep

# If the lock info recorded a pid, check that specific process:
LOCK_PID=$(jq -r '.Info' .terraform.tfstate.lock.info | grep -o 'pid=[0-9]*' | cut -d= -f2)
[ -n "$LOCK_PID" ] && ps -p "$LOCK_PID"

# If the PID from the lock is not in the process list, the lock is stale

# Compare the lock creation time with the current time:
LOCK_TIME=$(jq -r '.Created' .terraform.tfstate.lock.info)
echo "Lock created: $LOCK_TIME"
date -u

# If the lock was created hours or days ago, it's likely stale

# For S3 backend:
# Check the DynamoDB lock item (its Info field includes a Created timestamp):
aws dynamodb get-item --table-name terraform-locks \
  --key '{"LockID":{"S":"my-terraform-state/prod/terraform.tfstate"}}' \
  --output json | jq '.Item'

# For Terraform Cloud:
# Check if there's an active run:
terraform workspace show
# Visit the workspace in the UI to check run status
# If no active run exists, the lock is stale

# Additional checks:
# Is Terraform running in CI/CD? Check pipeline status:
# - GitHub Actions: Actions tab
# - Jenkins: build history
# - CircleCI: workflow status

# Ask team members if anyone is currently running terraform apply

# Check for zombie Terraform processes (stat 'Z'):
ps aux | grep terraform | grep -v grep | awk '{print $2}' \
  | xargs -r ps -o pid,stat,etime,cmd -p
```
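The staleness check can be made mechanical by parsing the lock's `Created` timestamp and comparing it to the current time. A minimal sketch; the 60-minute threshold and the sample lock file are assumptions, and GNU `date -d` is used (substitute `gdate` on macOS):

```shell
#!/usr/bin/env bash
# Decide whether a Terraform lock is stale based on its Created timestamp.

lock_is_stale() {
  local lock_file="$1" threshold_minutes="${2:-60}"
  local created created_epoch now_epoch age_minutes
  created=$(jq -r '.Created' "$lock_file")
  # GNU date parses the RFC3339 timestamp Terraform writes
  created_epoch=$(date -d "$created" +%s)
  now_epoch=$(date -u +%s)
  age_minutes=$(( (now_epoch - created_epoch) / 60 ))
  echo "Lock age: ${age_minutes} minutes"
  [ "$age_minutes" -gt "$threshold_minutes" ]
}

# Demo with a fabricated lock file dated well in the past:
cat > /tmp/sample.lock.info << 'EOF'
{"ID":"abc123","Operation":"OperationTypeApply","Who":"user@host","Version":"1.5.7","Created":"2020-01-01T00:00:00Z","Path":"terraform.tfstate"}
EOF
if lock_is_stale /tmp/sample.lock.info 60; then
  echo "Lock is stale - safe to consider force-unlock"
fi
```

In a real incident you would point `lock_is_stale` at `.terraform.tfstate.lock.info` (or at the `Info` JSON extracted from DynamoDB).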
Step 4: Force Unlock the State
After confirming the lock is stale, force unlock it.
```bash
# For local backend:
# Method 1: Use terraform force-unlock (recommended):
LOCK_ID=$(jq -r '.ID' .terraform.tfstate.lock.info)
terraform force-unlock "$LOCK_ID"

# Terraform will ask for confirmation:
# "Do you really want to force-unlock?"
# Type "yes" to confirm

# Method 2: Delete the lock file manually (less safe):
rm .terraform.tfstate.lock.info

# Verify the lock is removed:
ls -la .terraform.tfstate.lock.info
# Should not exist

# For S3 backend with DynamoDB:
# The lock ID lives in the Info attribute of the DynamoDB item (the Digest
# attribute is the state checksum, not the lock ID). It is also printed in
# Terraform's lock error message.
LOCK_ITEM=$(aws dynamodb get-item --table-name terraform-locks \
  --key '{"LockID":{"S":"my-terraform-state/prod/terraform.tfstate"}}' \
  --output json)

LOCK_ID=$(echo "$LOCK_ITEM" | jq -r '.Item.Info.S | fromjson | .ID')

# Use terraform force-unlock:
terraform force-unlock -force "$LOCK_ID"

# Or manually delete the DynamoDB lock item:
aws dynamodb delete-item --table-name terraform-locks \
  --key '{"LockID":{"S":"my-terraform-state/prod/terraform.tfstate"}}'

# Verify the lock is removed:
aws dynamodb get-item --table-name terraform-locks \
  --key '{"LockID":{"S":"my-terraform-state/prod/terraform.tfstate"}}'
# Should return no item

# For Terraform Cloud:
# Use the lock ID shown in the error message:
terraform force-unlock LOCK_ID
# Or unlock via the UI: Workspace > Settings > Locking > "Force unlock"
# Or cancel the stuck run: Workspace > Runs > click stuck run > "Cancel Run"

# For Consul backend:
# Delete the lock via the Consul KV API:
curl -X DELETE http://consul-server:8500/v1/kv/terraform-state-lock/myproject

# For HTTP backend:
# Depends on the server implementation - check its API docs
```
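Because force-unlocking a live lock can corrupt state, it helps to gate the command behind an age check. A minimal sketch of a guarded wrapper: it only *prints* the unlock command (never runs it), and the sample lock file, path, and threshold are assumptions for the demo. GNU `date -d` is assumed.

```shell
#!/usr/bin/env bash
# Guarded force-unlock: emit the unlock command only if the lock file exists
# and is older than a threshold, so a human can review before executing.

safe_unlock_cmd() {
  local lock_file="${1:-.terraform.tfstate.lock.info}" min_age_minutes="${2:-60}"
  [ -f "$lock_file" ] || { echo "No lock file - nothing to unlock"; return 1; }
  local id created age
  id=$(jq -r '.ID' "$lock_file")
  created=$(jq -r '.Created' "$lock_file")
  age=$(( ( $(date -u +%s) - $(date -d "$created" +%s) ) / 60 ))
  if [ "$age" -lt "$min_age_minutes" ]; then
    echo "Lock is only ${age}m old - refusing (someone may still be running)"
    return 1
  fi
  echo "terraform force-unlock $id"
}

# Demo with a fabricated, clearly stale lock:
cat > /tmp/stale.lock.info << 'EOF'
{"ID":"abc123-def456","Created":"2020-01-01T00:00:00Z","Who":"user@host"}
EOF
safe_unlock_cmd /tmp/stale.lock.info 60
```

Piping the printed command into a shell is deliberately left to the operator.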
Step 5: Check State File Validity and Integrity
Verify the state file itself is valid JSON and not corrupted.
```bash
# For local backend:
# Check the state file exists and has content:
ls -la terraform.tfstate
wc -c terraform.tfstate

# Validate JSON syntax:
jq . terraform.tfstate > /dev/null && echo "JSON valid" || echo "JSON INVALID"
# If jq fails, the state file is corrupted

# Check state file structure:
jq 'keys' terraform.tfstate
# Should include: version, terraform_version, serial, lineage, outputs, resources

# Check the version number (should be 3 or 4 for modern Terraform):
jq '.version' terraform.tfstate

# Check resources exist:
jq '.resources | length' terraform.tfstate

# List all resource types:
jq -r '.resources[].type' terraform.tfstate | sort | uniq -c

# List all resource addresses:
jq -r '.resources[] | "\(.type).\(.name)"' terraform.tfstate | head -20

# For S3 backend:
# Download and check the state:
aws s3 cp s3://my-terraform-state/prod/terraform.tfstate /tmp/state-check.tfstate
jq . /tmp/state-check.tfstate > /dev/null && echo "Valid" || echo "INVALID"

# Check the state file size (should not be 0):
aws s3 ls s3://my-terraform-state/prod/terraform.tfstate
# If the size is 0, the state file is empty/corrupted

# Check the backup state if one exists:
aws s3 ls s3://my-terraform-state/prod/terraform.tfstate.backup

# For Terraform Cloud: download the current state version (via the UI or API)
# to /tmp/tfc-state.tfstate, then validate it:
jq . /tmp/tfc-state.tfstate > /dev/null && echo "Valid" || echo "INVALID"

# If the state is invalid JSON, you need to recover it
```
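The checks above can be bundled into one reusable function. A minimal sketch; the tiny state file in the demo is fabricated, and the function only covers the three cheapest checks (valid JSON, supported version, non-empty resources).

```shell
#!/usr/bin/env bash
# Basic sanity checks for a Terraform state file.

check_state() {
  local f="$1"
  # 1. Must parse as JSON
  jq -e . "$f" > /dev/null 2>&1 || { echo "FAIL: invalid JSON"; return 1; }
  # 2. State format version must be one Terraform understands
  local version count
  version=$(jq -r '.version' "$f")
  case "$version" in
    3|4) ;;
    *) echo "FAIL: unexpected state version: $version"; return 1 ;;
  esac
  # 3. An empty resources list usually means a truncated or fresh state
  count=$(jq '.resources | length' "$f")
  [ "$count" -gt 0 ] || { echo "WARN: state has no resources"; return 1; }
  echo "OK: version $version, $count resource(s)"
}

# Demo with a fabricated minimal state file:
cat > /tmp/sample.tfstate << 'EOF'
{"version":4,"terraform_version":"1.5.7","serial":12,"lineage":"aaaa-bbbb","outputs":{},
 "resources":[{"mode":"managed","type":"aws_instance","name":"web","instances":[]}]}
EOF
check_state /tmp/sample.tfstate
```

Point `check_state` at `terraform.tfstate` (or a downloaded copy) before and after any recovery attempt.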
Step 6: Recover Corrupted State File
If the state file is corrupted, restore from backup or reconstruct it.
```bash
# Option 1: Restore from backup file (local backend):
ls -la terraform.tfstate.backup

# Check the backup is valid:
jq . terraform.tfstate.backup > /dev/null && echo "Backup valid" || echo "Backup invalid"

# If the backup is valid, restore it:
cp terraform.tfstate.backup terraform.tfstate

# Option 2: Restore from S3 backup (S3 backend):
aws s3 ls s3://my-terraform-state/prod/ --recursive | grep backup

# Download the backup:
aws s3 cp s3://my-terraform-state/prod/terraform.tfstate.backup ./terraform.tfstate.backup

# Validate, then restore:
jq . terraform.tfstate.backup > /dev/null && cp terraform.tfstate.backup terraform.tfstate

# Upload the restored state:
aws s3 cp terraform.tfstate s3://my-terraform-state/prod/terraform.tfstate

# Option 3: Restore a previous state version from Terraform Cloud:
# List state versions via the API:
curl -H "Authorization: Bearer $TF_TOKEN" \
  "https://app.terraform.io/api/v2/state-versions?filter%5Bworkspace%5D%5Bname%5D=my-workspace&filter%5Borganization%5D%5Bname%5D=my-org" \
  | jq '.data[] | {id, serial: .attributes.serial}'

# Download a previous valid state version:
curl -H "Authorization: Bearer $TF_TOKEN" \
  "https://app.terraform.io/api/v2/state-versions/$PREV_STATE_ID/download" \
  > terraform.tfstate

# Option 4: Reconstruct state by importing existing resources:
# Start from an empty state:
terraform init
rm -f terraform.tfstate

# List all resources in your infrastructure (manually):
# AWS: check the console or use the CLI
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId'
aws s3 ls
aws rds describe-db-instances --query 'DBInstances[].DBInstanceIdentifier'

# Import each resource:
terraform import aws_instance.web i-1234567890abcdef0
terraform import aws_s3_bucket.data my-bucket-name
terraform import aws_db_instance.main my-db-instance

# List every resource address declared in your Terraform code:
grep -rh '^resource "' *.tf | awk -F'"' '{print $2 "." $4}'

# Import resources systematically from an address/ID list:
while read -r addr id; do
  terraform import "$addr" "$id"
done < resources-to-import.txt

# Option 5: Manual JSON repair (advanced):
# If corruption is minor, locate the JSON error and write a fixed copy:
python3 -c "
import json, sys
try:
    data = json.load(open('terraform.tfstate'))
    json.dump(data, open('terraform.tfstate.fixed', 'w'), indent=2)
except json.JSONDecodeError as e:
    print(f'Error at line {e.lineno}, column {e.colno}: {e.msg}')
    sys.exit(1)
"

# If the JSON was repairable, use the fixed version:
mv terraform.tfstate.fixed terraform.tfstate
```
Step 7: Fix Missing Resources in State
Handle resources that Terraform thinks exist but don't, or vice versa.
```bash
# Find resources in state that don't exist in the provider:
terraform plan -refresh-only 2>&1 | grep "does not exist"

# For each missing resource, remove it from state:
terraform state rm aws_instance.deleted_instance

# Or target a resource inside a module:
terraform state rm 'module.web.aws_instance.main'

# List all resources in state:
terraform state list

# Check each resource exists:
terraform state list | while read -r addr; do
  echo "Checking: $addr"
  terraform state show "$addr" 2>&1 | head -5
done

# Remove EC2 instances from state that no longer exist:
terraform state list | grep "aws_instance" | while read -r addr; do
  ID=$(terraform state show "$addr" | grep ' id ' | head -1 | awk '{print $3}')
  if aws ec2 describe-instances --instance-ids "$ID" 2>&1 | grep -q "InvalidInstanceID"; then
    echo "Removing $addr (instance doesn't exist)"
    terraform state rm "$addr"
  fi
done

# Import resources that exist but aren't in state:
# Find untracked instances by comparing AWS to Terraform:
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --output text \
  | tr '\t' '\n' | while read -r id; do
  grep -q "$id" terraform.tfstate || echo "Instance $id not in state"
done

# Import missing resources:
terraform import aws_instance.untracked i-12345678

# Fix module paths (resource moved into a module):
terraform state mv aws_instance.web module.web.aws_instance.web

# Fix a resource that was renamed:
terraform state mv aws_instance.old_name aws_instance.new_name
```
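To spot tracking gaps in bulk, the two address lists - what the config declares and what the state tracks - can be diffed directly. A minimal sketch using fabricated sample files (`comm` requires both lists sorted; data-source and module addresses are ignored for simplicity):

```shell
#!/usr/bin/env bash
# Diff resource addresses declared in .tf files against those in the state.
# Addresses only in config are candidates for `terraform import`;
# addresses only in state are candidates for `terraform state rm`.

mkdir -p /tmp/tf-diff && cd /tmp/tf-diff

# Fabricated config: declares two resources
cat > main.tf << 'EOF'
resource "aws_instance" "web" {}
resource "aws_s3_bucket" "data" {}
EOF

# Fabricated state: tracks one declared resource plus one orphan
cat > terraform.tfstate << 'EOF'
{"version":4,"resources":[
 {"mode":"managed","type":"aws_instance","name":"web","instances":[]},
 {"mode":"managed","type":"aws_instance","name":"old","instances":[]}]}
EOF

# Build sorted address lists from both sides
grep -rh '^resource "' *.tf | awk -F'"' '{print $2 "." $4}' | sort > /tmp/config-addrs.txt
jq -r '.resources[] | select(.mode=="managed") | "\(.type).\(.name)"' terraform.tfstate \
  | sort > /tmp/state-addrs.txt

echo "--- In config but not in state (import these):"
comm -23 /tmp/config-addrs.txt /tmp/state-addrs.txt
echo "--- In state but not in config (review / state rm):"
comm -13 /tmp/config-addrs.txt /tmp/state-addrs.txt
```

In a real project, run the same two pipelines against your actual `.tf` files and a fresh `terraform state pull` output.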
Step 8: Reinitialize Backend After Configuration Changes
If backend config changed, reconfigure Terraform to use the correct backend.
```bash
# Check current backend vs configured backend:
jq '.backend' .terraform/terraform.tfstate
grep -A10 "backend" main.tf

# If a mismatch is detected:
terraform init -reconfigure

# Or, to migrate state to the new backend:
terraform init -migrate-state
# This will:
# 1. Copy state from the old backend to the new backend
# 2. Update .terraform/terraform.tfstate

# For S3 backend changes:
# If the bucket name changed:
terraform init -reconfigure \
  -backend-config="bucket=new-terraform-state" \
  -backend-config="key=prod/terraform.tfstate"

# If the DynamoDB table changed:
terraform init -reconfigure \
  -backend-config="dynamodb_table=new-locks-table"

# Verify the backend is configured correctly:
terraform init
# Should say: "Successfully configured the backend"

# Check the state location:
terraform state list | head -5
# Should list your resources
```

For Terraform Cloud migration, add a cloud block to your config:

```
terraform {
  cloud {
    organization = "my-org"
    workspaces {
      name = "my-workspace"
    }
  }
}
```

Then run:

```bash
terraform init -reconfigure
```

Step 9: Verify State Consistency
After fixing state issues, verify that state matches reality.
```bash
# Run a refresh to update state with current infrastructure:
terraform refresh
# (On modern Terraform, prefer: terraform apply -refresh-only)

# Or plan with refresh (the default):
terraform plan -refresh=true

# Check for drift:
terraform plan -detailed-exitcode
# Exit codes:
# 0 = No changes, state matches infrastructure
# 1 = Error
# 2 = Changes detected (drift exists)

# If the exit code is 2, there's drift:
terraform plan
# Review changes carefully

# Check outputs are correct:
terraform output

# Verify each important resource:
terraform state show aws_instance.web
terraform state show aws_db_instance.main

# Compare state attributes to the actual resource:
terraform state show aws_instance.web | grep "instance_type"
aws ec2 describe-instances --instance-ids i-12345 \
  | jq '.Reservations[].Instances[].InstanceType'
# Should match!

# Validate the state file after all fixes:
jq '.serial' terraform.tfstate
# The serial should have incremented after changes

# Create a fresh backup of the fixed state:
cp terraform.tfstate terraform.tfstate.fixed-backup
aws s3 cp terraform.tfstate s3://my-terraform-state/prod/terraform.tfstate.fixed-backup

# Test that Terraform can now plan:
terraform plan
# Should produce a plan without errors

# Verify no lock exists:
ls .terraform.tfstate.lock.info 2>/dev/null || echo "No lock"
aws dynamodb scan --table-name terraform-locks
# Should show no locks
```
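In CI, the three exit codes of `terraform plan -detailed-exitcode` map naturally to pipeline decisions. A sketch of that branching; the stub call at the bottom substitutes fixed exit codes for a real plan run so the logic can be exercised without infrastructure:

```shell
#!/usr/bin/env bash
# Branch on the exit code of `terraform plan -detailed-exitcode`:
# 0 = in sync, 1 = error, 2 = drift detected.

handle_plan_exit() {
  case "$1" in
    0) echo "in-sync" ;;            # nothing to do
    2) echo "drift-detected" ;;     # e.g. open a ticket or require approval
    *) echo "plan-error"; return 1 ;;  # fail the pipeline
  esac
}

# Real usage would be:
#   terraform plan -detailed-exitcode; handle_plan_exit $?
# Stubbed demo:
handle_plan_exit 0
handle_plan_exit 2
handle_plan_exit 1 || echo "would fail the pipeline here"
```

Treating drift (exit 2) differently from errors (exit 1) is the point of the flag: a drift-only run can still proceed to review rather than hard-failing.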
Step 10: Implement State Lock Monitoring and Auto-Recovery
Set up monitoring to detect stuck locks and automate recovery.
```bash
# Create a script to detect stale locks:
mkdir -p scripts
cat > scripts/check-terraform-lock.sh << 'EOF'
#!/bin/bash
# Terraform State Lock Health Check

STATE_BUCKET="${STATE_BUCKET:-my-terraform-state}"
STATE_KEY="${STATE_KEY:-prod/terraform.tfstate}"
LOCK_TABLE="${LOCK_TABLE:-terraform-locks}"
LOCK_TIMEOUT_MINUTES="${LOCK_TIMEOUT_MINUTES:-60}"

echo "Checking Terraform state lock status..."

# Check DynamoDB lock (double quotes so the variables expand)
LOCK_INFO=$(aws dynamodb get-item --table-name "$LOCK_TABLE" \
  --key "{\"LockID\":{\"S\":\"${STATE_BUCKET}/${STATE_KEY}\"}}" \
  --output json 2>/dev/null)

if [ -z "$LOCK_INFO" ] || [ "$(echo "$LOCK_INFO" | jq '.Item | length')" -eq 0 ]; then
  echo "✓ No lock detected"
  exit 0
fi

echo "⚠ Lock detected!"
LOCK_ID=$(echo "$LOCK_INFO" | jq -r '.Item.Info.S | fromjson | .ID')
LOCK_INFO_RAW=$(echo "$LOCK_INFO" | jq -r '.Item.Info.S')

# The lock's Info JSON includes a Created timestamp; compare it to the
# current time to decide whether the lock exceeds LOCK_TIMEOUT_MINUTES.

# Alert that a lock exists
echo "Lock ID: $LOCK_ID"
echo "Lock info: $LOCK_INFO_RAW"

# Optionally force unlock stale locks
# Uncomment to auto-unlock:
# terraform force-unlock -force "$LOCK_ID"

exit 1
EOF

chmod +x scripts/check-terraform-lock.sh

# Run the check:
scripts/check-terraform-lock.sh
```
```bash
# Create an auto-recovery script:
cat > scripts/terraform-state-recovery.sh << 'EOF'
#!/bin/bash
# Terraform State Recovery Script

set -e

BACKUP_DIR="${BACKUP_DIR:-/tmp/terraform-backups}"
STATE_FILE="${STATE_FILE:-terraform.tfstate}"
S3_BUCKET="${S3_BUCKET:-}"
S3_KEY="${S3_KEY:-}"

mkdir -p "$BACKUP_DIR"

echo "=== Terraform State Recovery ==="

# 1. Back up the current state
if [ -f "$STATE_FILE" ]; then
  cp "$STATE_FILE" "$BACKUP_DIR/$STATE_FILE.broken-$(date +%Y%m%d-%H%M%S)"
fi

# 2. Clear locks
echo "Clearing locks..."
rm -f .terraform.tfstate.lock.info

if [ -n "$S3_BUCKET" ] && [ -n "$S3_KEY" ]; then
  # Double quotes so the variables expand inside the key JSON
  aws dynamodb delete-item --table-name terraform-locks \
    --key "{\"LockID\":{\"S\":\"${S3_BUCKET}/${S3_KEY}\"}}" 2>/dev/null || true
fi

# 3. Check state validity
if [ -f "$STATE_FILE" ]; then
  if jq . "$STATE_FILE" > /dev/null 2>&1; then
    echo "✓ State file JSON is valid"
  else
    echo "✗ State file JSON is invalid"
    # Try a backup
    if [ -f "$STATE_FILE.backup" ]; then
      echo "Restoring from backup..."
      cp "$STATE_FILE.backup" "$STATE_FILE"
    elif [ -n "$S3_BUCKET" ]; then
      echo "Downloading previous state from S3..."
      aws s3 cp "s3://$S3_BUCKET/$S3_KEY.backup" "$STATE_FILE"
    fi
  fi
fi

# 4. Reinitialize
terraform init -reconfigure

# 5. Verify state
terraform state list | head -5

echo "=== Recovery Complete ==="
EOF

chmod +x scripts/terraform-state-recovery.sh
# Add to CI/CD before Terraform steps (GitHub Actions):
```

```yaml
- name: Check Terraform Lock
  run: scripts/check-terraform-lock.sh

- name: Terraform Init
  run: terraform init

- name: Terraform Plan
  run: terraform plan
```

```bash
# Set up monitoring in Prometheus/Nagios:
# Create a metrics exporter for Terraform lock status

# Cron job to check locks periodically:
(crontab -l; echo "*/10 * * * * /path/to/scripts/check-terraform-lock.sh") | crontab -
```
Checklist for Fixing Terraform State Lock and Corruption
| Step | Action | Command | Status |
|---|---|---|---|
| 1 | Identify lock type and backend | `grep -A20 "backend" main.tf` | ☐ |
| 2 | Inspect current lock and owner | `cat .terraform.tfstate.lock.info` | ☐ |
| 3 | Verify lock is stale | `ps aux \| grep terraform` | ☐ |
| 4 | Force unlock the state | `terraform force-unlock LOCK_ID` | ☐ |
| 5 | Check state file validity | `jq . terraform.tfstate` | ☐ |
| 6 | Recover corrupted state | Restore backup or import resources | ☐ |
| 7 | Fix missing resources | `terraform state rm` / `terraform import` | ☐ |
| 8 | Reinitialize backend | `terraform init -reconfigure` | ☐ |
| 9 | Verify state consistency | `terraform plan -detailed-exitcode` | ☐ |
| 10 | Implement lock monitoring | Create check script and cron job | ☐ |
Verify the Fix
After fixing state issues, verify Terraform is working properly:
```bash
# 1. No lock exists
ls .terraform.tfstate.lock.info 2>/dev/null && echo "LOCK EXISTS" || echo "✓ No lock"
aws dynamodb scan --table-name terraform-locks | jq '.Items | length'
# Should be 0

# 2. State file is valid JSON
jq . terraform.tfstate > /dev/null && echo "✓ Valid JSON" || echo "✗ Invalid"

# 3. Terraform init succeeds
terraform init
# Should say: "Successfully configured the backend"

# 4. State list works
terraform state list | wc -l
# Should show a resource count > 0

# 5. Terraform refresh succeeds
terraform refresh
# Should update state without errors

# 6. Terraform plan works
terraform plan
# Should produce a plan or say "No changes"

# 7. State matches infrastructure (no drift)
terraform plan -detailed-exitcode
# Exit code should be 0 or 2 (not 1/error)

# 8. Resources exist in the provider
terraform state list | head -5 | while read -r addr; do
  terraform state show "$addr" | grep "id"
done
# Compare to the actual resources in the AWS console

# 9. Outputs are available
terraform output
# Should show configured outputs

# 10. Apply test (dry run)
terraform plan -out=test.tfplan
# Should succeed; then discard:
rm test.tfplan
```
Related Issues
- [Fix Terraform Provider Plugin Cache Corrupt](/articles/fix-terraform-provider-plugin-cache-corrupt) - Provider cache issues
- [Fix AWS Lambda Snapstart Not Working](/articles/fix-aws-lambda-snapstart-not-working) - AWS Lambda issues
- [Fix Kubernetes Namespace Terminating](/articles/fix-kubernetes-namespace-terminating) - Kubernetes resource issues
- [Fix Ansible Playbook Syntax Error](/articles/fix-ansible-playbook-syntax-error) - Ansible configuration issues
- [Fix Prometheus Remote Write Failing](/articles/fix-prometheus-remote-write-failing) - Monitoring state issues
- [Fix Git Clone Failed Repository Not Found](/articles/fix-git-clone-failed-repository-not-found) - Git remote issues
- [Fix S3 Bucket Permission Denied](/articles/fix-s3-bucket-permission-denied) - S3 access issues