The `null_resource` in Terraform provides a way to run provisioners or trigger actions when other resources change. Improper configuration, however, leads to a variety of errors.
## Understanding Null Resource Errors
Common null_resource errors include:
```
Error: null_resource triggers changed - resource will be replaced
Error: Provisioner failed: connection refused
Error: null_resource already exists
Error: Cycle detected involving null_resource
```
## Issue 1: Unwanted Trigger Updates
Small changes in trigger values cause null_resource recreation.
Error Scenario:
```
# null_resource.configure will be replaced
-/+ resource "null_resource" "configure" {
      ~ triggers = {
          ~ "config_hash" = "abc123" -> "def456"
        }
    }
```
Root Cause:

```hcl
resource "null_resource" "configure" {
  triggers = {
    # Any change triggers recreation
    config_content = file("config.txt") # File content changes trigger recreation
    timestamp      = timestamp()        # Always changes!
  }

  provisioner "local-exec" {
    command = "apply-config.sh"
  }
}
```
Solution:
Use stable triggers:

```hcl
resource "null_resource" "configure" {
  triggers = {
    # Use hash instead of content
    config_hash = filesha256("config.txt")

    # Use explicit triggers, not timestamp
    instance_id = aws_instance.web.id
  }

  provisioner "local-exec" {
    command = "apply-config.sh"
  }
}

# Only trigger when you want to run
resource "null_resource" "manual_trigger" {
  triggers = {
    run_number = var.run_number # Explicitly controlled
  }

  provisioner "local-exec" {
    command = "./deploy.sh ${var.run_number}"
  }
}
```
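When the trigger needs to cover more than one file, the per-file hashes can be combined into a single stable value. A sketch, assuming an illustrative `config/` directory under the module:

```hcl
resource "null_resource" "configure_dir" {
  triggers = {
    # Combine per-file hashes into one deterministic directory hash;
    # sort() fixes the ordering of the fileset results
    dir_hash = sha256(join("", [
      for f in sort(fileset(path.module, "config/**")) : filesha256("${path.module}/${f}")
    ]))
  }

  provisioner "local-exec" {
    command = "apply-config.sh"
  }
}
```

The resource is recreated only when some file under `config/` actually changes, not on every plan.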
## Issue 2: Provisioner Not Running
null_resource provisioner doesn't execute when expected.
Error Example:
```
# No provisioner execution after apply
# null_resource created but script not run
```

Root Cause:

```hcl
resource "null_resource" "setup" {
  # With no triggers, the resource is created once and
  # provisioners only run on create
  provisioner "local-exec" {
    command = "setup.sh"
  }
}

# After the initial apply, changes to dependencies don't re-run the provisioner
```
Solution:
Add triggers tied to dependencies:

```hcl
resource "null_resource" "setup" {
  # Trigger on relevant changes
  triggers = {
    instance_id       = aws_instance.web.id
    security_group_id = aws_security_group.web.id
    user_data_hash    = sha256(var.user_data)
  }

  provisioner "local-exec" {
    command = "./setup.sh ${aws_instance.web.public_ip}"
  }

  # Ensure it runs after dependencies
  depends_on = [
    aws_instance.web,
    aws_security_group.web,
  ]
}
```
Use triggers for targeted execution:

```hcl
variable "force_setup" {
  type    = string
  default = "initial"
}

resource "null_resource" "setup" {
  triggers = {
    force_run = var.force_setup # Change this variable to force re-run
  }

  provisioner "local-exec" {
    command = "./setup.sh"
  }
}
```
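On Terraform 1.2 and later, the `replace_triggered_by` lifecycle argument is an alternative to the triggers map for tying re-runs to another resource's lifecycle. A sketch:

```hcl
resource "null_resource" "setup" {
  # Recreate this resource (and re-run its provisioners)
  # whenever the referenced attribute changes
  lifecycle {
    replace_triggered_by = [aws_instance.web.id]
  }

  provisioner "local-exec" {
    command = "./setup.sh"
  }
}
```

Unlike triggers, `replace_triggered_by` accepts direct references to managed resources, so the intent is visible in the plan output.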
## Issue 3: Multiple Provisioner Execution Order

Provisioners in separate resources run in the wrong order or in parallel. (Within a single resource, provisioners run sequentially in declaration order; ordering problems arise across resources, which Terraform may create concurrently.)
Error Example:
```
Error: Script failed - prerequisite script not yet complete
```

Root Cause:

```hcl
# Two independent resources with no dependency between them;
# Terraform may create them concurrently
resource "null_resource" "download_config" {
  provisioner "local-exec" {
    command = "download-config.sh" # Should run first
  }
}

resource "null_resource" "deploy" {
  provisioner "local-exec" {
    command = "deploy-app.sh" # Should run second, but has no ordering guarantee
  }
}
```
Solution:
Use separate null_resources for ordering:

```hcl
resource "null_resource" "download_config" {
  triggers = {
    config_version = var.config_version
  }

  provisioner "local-exec" {
    command = "download-config.sh"
  }
}

resource "null_resource" "deploy_app" {
  depends_on = [null_resource.download_config]

  triggers = {
    config_version = var.config_version
    app_version    = var.app_version
  }

  provisioner "local-exec" {
    command = "deploy-app.sh"
  }
}

# Chain as needed
resource "null_resource" "verify_deployment" {
  depends_on = [null_resource.deploy_app]

  provisioner "local-exec" {
    command = "verify-deployment.sh"
  }
}
```
## Issue 4: Connection Failures on Remote Provisioners
null_resource cannot connect to target resources.
Error Example:
```
Error: Failed to connect to remote host during provisioner
```
Solution:
Handle connection timing:

```hcl
resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = "t3.micro"
}

# Wait for instance to be fully ready
resource "null_resource" "wait_for_web" {
  provisioner "local-exec" {
    command = <<-EOT
      while ! nc -z ${aws_instance.web.public_ip} 22; do
        sleep 5
      done
      sleep 30 # Additional wait for SSH to be ready
    EOT
  }
}

resource "null_resource" "configure_web" {
  depends_on = [null_resource.wait_for_web]

  provisioner "remote-exec" {
    inline = ["sudo apt-get update"]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/id_rsa")
      host        = aws_instance.web.public_ip
    }
  }
}
```
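A connection block also accepts a `timeout` argument (default five minutes), during which Terraform retries the connection itself; this often removes the need for a manual polling loop. A sketch:

```hcl
resource "null_resource" "configure_web" {
  depends_on = [aws_instance.web]

  provisioner "remote-exec" {
    inline = ["sudo apt-get update"]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/id_rsa")
      host        = aws_instance.web.public_ip
      timeout     = "5m" # Keep retrying the SSH connection for up to 5 minutes
    }
  }
}
```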
## Issue 5: Cycle Detection with Null Resources
Circular dependencies involving null_resources.
Error Example:
```
Error: Cycle: aws_instance.web -> null_resource.configure -> aws_instance.web
```
Solution:
Break the cycle by restructuring:

```hcl
# Problem: instance depends on null_resource, null_resource depends on instance
resource "aws_instance" "web" {
  depends_on = [null_resource.prepare] # Creates cycle
}

resource "null_resource" "prepare" {
  depends_on = [aws_instance.web] # Creates cycle
}

# Solution: remove the bidirectional dependency
resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = "t3.micro"
}

resource "null_resource" "prepare" {
  depends_on = [aws_instance.web] # Only one direction

  provisioner "remote-exec" {
    connection {
      host = aws_instance.web.public_ip
    }
    inline = ["prepare-instance.sh"]
  }
}
```
## Issue 6: Destroy Provisioner Issues
Provisioners fail during destroy when resources are gone.
Error Example:
```
Error: Provisioner failed on destroy
Connection refused - resource already destroyed
```
Solution:
Handle destroy failures gracefully. Note that since Terraform 0.13, destroy-time provisioners may only reference `self`, so any values they need must be stashed in `triggers`; also, `on_failure` belongs on the provisioner block, not inside `connection`:

```hcl
resource "null_resource" "cleanup" {
  # Stash values the destroy provisioner needs in triggers
  triggers = {
    resource_name = var.resource_name
  }

  provisioner "local-exec" {
    when    = destroy
    command = "./cleanup.sh ${self.triggers.resource_name}"

    # Don't fail if cleanup fails
    on_failure = continue
  }
}

# Or use remote-exec with connection handling
resource "null_resource" "cleanup_remote" {
  triggers = {
    host = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    when       = destroy
    on_failure = continue # Don't fail the destroy if the connection fails

    inline = ["rm -rf /tmp/application"]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/id_rsa")
      host        = self.triggers.host
    }
  }
}
```
Carry destroy context through triggers (destroy provisioners cannot reference other resources or variables directly):

```hcl
resource "null_resource" "deprovision" {
  triggers = {
    instance_id = aws_instance.web.id
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<-EOT
      # Destroy provisioners may only reference self
      ./deprovision.sh ${self.triggers.instance_id}
    EOT

    environment = {
      STATE_FILE = path.root
    }
  }
}
```
## Issue 7: Terraform 0.12+ vs Older Versions
Behavior differences between Terraform versions.
Error Example:
```
Error: triggers attribute not supported in this Terraform version
```
Solution:
Use version-appropriate syntax:

```hcl
# Terraform 0.12+ syntax
resource "null_resource" "modern" {
  triggers = {
    key1 = "value1"
    key2 = sha256(var.content)
  }
}

# For older versions (0.11), use a different approach,
# or upgrade Terraform for modern features
terraform {
  required_version = ">= 0.12.0"
}
```
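It can also help to pin the null provider itself so behavior is consistent across machines. A sketch; the version constraint is illustrative:

```hcl
terraform {
  required_version = ">= 0.12.0"

  # Pin the null provider explicitly
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = ">= 3.0"
    }
  }
}
```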
## Better Alternatives to Null Resources
Consider replacing null_resource with better options:
```hcl
# Instead of null_resource for delays, use time_sleep
resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = "t3.micro"
}

# Use for waiting on async operations
resource "time_sleep" "wait_for_instance" {
  depends_on = [aws_instance.web]

  create_duration = "60s"
}

# Instead of provisioners, use cloud-init
resource "aws_instance" "web" {
  ami           = var.ami
  instance_type = "t3.micro"

  user_data = <<-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y nginx
  EOF
}

# Instead of local-exec, use AWS Lambda
resource "aws_lambda_invocation" "run_script" {
  function_name = aws_lambda_function.configure.function_name

  input = jsonencode({
    instance_ip = aws_instance.web.public_ip
  })

  triggers = {
    instance_id = aws_instance.web.id
  }
}
```
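On Terraform 1.4 and later, the built-in `terraform_data` resource covers most null_resource use cases without requiring the null provider; `triggers_replace` plays the role of the `triggers` map. A sketch:

```hcl
# terraform_data is built into Terraform 1.4+; no provider needed
resource "terraform_data" "setup" {
  # Replace this resource (and re-run provisioners) when the value changes
  triggers_replace = [aws_instance.web.id]

  provisioner "local-exec" {
    command = "./setup.sh"
  }
}
```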
## Verification Steps
Check null_resource status:
```bash
terraform state list | grep null_resource
terraform state show null_resource.configure
```
Debug provisioner execution:
```bash
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-debug.log # Optional: capture logs to a file
terraform apply
```
## Prevention Best Practices
1. Use stable, hash-based triggers instead of timestamps
2. Chain null_resources explicitly for ordered execution
3. Add `on_failure = continue` for destroy provisioners
4. Use `depends_on` to ensure prerequisites are ready
5. Consider alternatives like time_sleep, cloud-init, or Lambda
6. Avoid bidirectional dependencies
7. Test provisioners in isolation before integrating