What's Actually Happening
ExternalDNS is running but not creating or updating DNS records. Services and Ingresses don't get corresponding DNS entries.
The Error You'll See
```bash
$ kubectl logs -n externaldns deploy/externaldns
No matching records for service default/my-service
No matching records for ingress default/my-ingress
```
Provider error:

```bash
Error: failed to list records: AccessDenied
```

Owner mismatch:

```bash
Error: record "myapp.example.com" is not owned by this ExternalDNS
```

Annotation missing:

```bash
Skipping service default/my-service: no external-dns annotation
```

Why This Happens
1. Annotation missing - the Service/Ingress lacks the external-dns annotation
2. Wrong source type - the source type (service, ingress) is not configured
3. Provider credentials - invalid AWS/Azure/GCP credentials
4. Domain filter - the hostname's domain is not in the filter list
5. Owner ID mismatch - the record is owned by a different ExternalDNS instance
6. RBAC issues - ExternalDNS lacks permissions to list/watch sources
Step 1: Check ExternalDNS Pod Status
```bash
# Check ExternalDNS deployment:
kubectl get deploy -n externaldns

# Check pod status:
kubectl get pods -n externaldns

# View pod logs:
kubectl logs -n externaldns deploy/externaldns

# Check recent logs:
kubectl logs -n externaldns deploy/externaldns --tail=50

# Describe pods:
kubectl describe pods -n externaldns

# Check events:
kubectl get events -n externaldns

# Check ExternalDNS resource usage:
kubectl top pods -n externaldns

# Check container args:
kubectl get deploy externaldns -n externaldns -o yaml | grep -A20 args
```
Step 2: Verify Annotations on Sources
```bash
# Check Service annotations:
kubectl get svc my-service -n default -o yaml | grep -A5 annotations

# Required annotation:
# annotations:
#   external-dns.alpha.kubernetes.io/hostname: myapp.example.com

# Add the annotation:
kubectl annotate svc my-service -n default \
  external-dns.alpha.kubernetes.io/hostname=myapp.example.com

# Check Ingress annotations:
kubectl get ingress my-ingress -n default -o yaml | grep -A5 annotations

# For Ingress, the hostname from spec.rules[].host is used by default.
# Or add the annotation explicitly:
kubectl annotate ingress my-ingress -n default \
  external-dns.alpha.kubernetes.io/hostname=myapp.example.com

# Set a TTL annotation (seconds):
kubectl annotate svc my-service -n default \
  external-dns.alpha.kubernetes.io/ttl=300

# Multiple hostnames (comma-separated; --overwrite replaces an existing value):
kubectl annotate svc my-service -n default --overwrite \
  external-dns.alpha.kubernetes.io/hostname=myapp.example.com,api.example.com

# Remove the annotation (to skip this Service):
kubectl annotate svc my-service -n default \
  external-dns.alpha.kubernetes.io/hostname-
```
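For Ingress sources, no annotation is required when `spec.rules[].host` is set; a minimal example (the hostname, Ingress name, and backend Service are placeholders):

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
spec:
  rules:
    # ExternalDNS (with --source=ingress) reads this host and creates
    # a record pointing at the Ingress controller's address
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
EOF
```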
Step 3: Check Source Configuration
```bash
# Check ExternalDNS args for source:
kubectl get deploy externaldns -n externaldns -o yaml | grep source

# Valid sources:
# --source=service
# --source=ingress
# --source=service --source=ingress   (repeat the flag for multiple)
# --source=fake                       (for testing)

# Add a source if missing. Note: kubectl has no "set args" subcommand;
# edit the Deployment's container args directly:
kubectl edit deploy externaldns -n externaldns
# then under spec.template.spec.containers[0].args add:
#   - --source=service
#   - --source=ingress

# Check namespace filter:
kubectl get deploy externaldns -n externaldns -o yaml | grep namespace

# Restrict to a specific namespace:
# --namespace=default

# Check label filter:
kubectl get deploy externaldns -n externaldns -o yaml | grep label-filter

# Only process labeled sources:
# --label-filter=external-dns=true

# Add the label to the Service:
kubectl label svc my-service -n default external-dns=true
```
Step 4: Verify Domain Filter
```bash
# Check domain filter in args:
kubectl get deploy externaldns -n externaldns -o yaml | grep domain-filter

# ExternalDNS only manages domains matching the filter:
# --domain-filter=example.com
# --domain-filter=sub.example.com

# Check the Service hostname domain:
kubectl get svc my-service -n default -o yaml | grep hostname

# The hostname must match the domain filter:
# annotation: myapp.example.com  domain-filter: example.com  ✓
# annotation: myapp.other.com    domain-filter: example.com  ✗ (skipped)

# Add a domain filter (edit the Deployment args; kubectl has no "set args"):
kubectl edit deploy externaldns -n externaldns
#   - --domain-filter=example.com

# Multiple domains (repeat the flag):
# --domain-filter=example.com --domain-filter=other.com

# Regex filter:
# --regex-domain-filter=.*\.example\.com$

# To manage all zones the provider exposes, remove the
# --domain-filter/--regex-domain-filter args from the Deployment.
```
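The domain-filter check above amounts to a suffix match on the hostname. A small shell sketch of the rule (a hypothetical helper for reasoning about matches, not part of ExternalDNS itself):

```shell
# matches_domain_filter HOSTNAME FILTER
# Succeeds when HOSTNAME equals FILTER or is a subdomain of it,
# mirroring how --domain-filter decides whether a hostname is managed.
matches_domain_filter() {
  local hostname="$1" filter="$2"
  [[ "$hostname" == "$filter" || "$hostname" == *".$filter" ]]
}

matches_domain_filter myapp.example.com example.com && echo "managed"
matches_domain_filter myapp.other.com example.com || echo "skipped"
```

Note the subdomain test requires a literal dot before the filter, so `notexample.com` does not match a filter of `example.com`.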
Step 5: Check Provider Configuration
```bash
# Check provider args:
kubectl get deploy externaldns -n externaldns -o yaml | grep provider

# Valid providers include:
# --provider=aws        (Route53)
# --provider=google     (Cloud DNS)
# --provider=azure      (Azure DNS)
# --provider=cloudflare
# --provider=rfc2136    (BIND)

# AWS Route53 credentials:
kubectl get secret -n externaldns | grep aws

# Inspect the AWS credentials secret:
kubectl get secret aws-credentials -n externaldns -o yaml

# AWS credentials via IAM:
# If running on EKS, use IAM Roles for Service Accounts:
kubectl get sa externaldns -n externaldns -o yaml | grep -A2 annotations
# annotations:
#   eks.amazonaws.com/role-arn: arn:aws:iam::xxx:role/ExternalDNSRole

# Azure DNS credentials:
kubectl get secret azure-credentials -n externaldns -o yaml

# Cloudflare API token:
kubectl get secret cloudflare-api-token -n externaldns -o yaml

# Check provider/zone messages in the logs:
kubectl logs -n externaldns deploy/externaldns | grep -iE "provider|zone"
```
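Putting the flags from Steps 3-5 together, an illustrative (not authoritative) container spec for the ExternalDNS Deployment; the image tag, domain, and owner ID are placeholders to adjust for your setup:

```yaml
containers:
  - name: externaldns
    # Pin to the version you actually run; this tag is illustrative
    image: registry.k8s.io/external-dns/external-dns:v0.14.0
    args:
      - --source=service
      - --source=ingress
      - --provider=aws
      - --domain-filter=example.com
      - --txt-owner-id=my-cluster
      - --policy=upsert-only   # create/update only, never delete records
```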
Step 6: Fix AWS Route53 Credentials
```bash
# List Route53 hosted zones:
aws route53 list-hosted-zones

# Verify the zone exists:
aws route53 list-hosted-zones --query "HostedZones[?Name=='example.com.']"

# Decode the stored credentials:
kubectl get secret aws-credentials -n externaldns \
  -o jsonpath='{.data.aws_access_key_id}' | base64 -d
kubectl get secret aws-credentials -n externaldns \
  -o jsonpath='{.data.aws_secret_access_key}' | base64 -d

# Test the credentials directly:
AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=xxx aws route53 list-hosted-zones

# Create the credentials secret:
kubectl create secret generic aws-credentials -n externaldns \
  --from-literal=aws_access_key_id=AKIAxxx \
  --from-literal=aws_secret_access_key=xxx

# Expose the secret to the container as AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY environment variables; the AWS provider uses
# the standard SDK credential chain. Optionally restrict zone types:
# --aws-zone-type=public

# For EKS with IAM roles:
kubectl annotate sa externaldns -n externaldns \
  eks.amazonaws.com/role-arn=arn:aws:iam::xxx:role/ExternalDNSRole

# IAM policy for ExternalDNS:
cat <<'EOF' > externaldns-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "*"
    }
  ]
}
EOF
```
Step 7: Fix Owner ID Mismatch
```bash
# Check the owner ID:
kubectl get deploy externaldns -n externaldns -o yaml | grep txt-owner-id

# The owner ID identifies this ExternalDNS instance:
# --txt-owner-id=my-cluster

# If the owner ID doesn't match, ExternalDNS won't modify existing records.

# Check TXT records for ownership:
aws route53 list-resource-record-sets --hosted-zone-id Zxxx \
  --query "ResourceRecordSets[?Type=='TXT']"

# Look for a TXT record carrying the owner ID:
# myapp.example.com TXT "heritage=external-dns,external-dns/owner=my-cluster"

# Solutions:

# Option 1: Use the same owner ID as the existing records
# (set --txt-owner-id=existing-owner-id in the Deployment args):
kubectl edit deploy externaldns -n externaldns

# Option 2: Delete the stale TXT records so this instance can take over
# (the Value must match the existing record exactly or the delete fails):
aws route53 change-resource-record-sets --hosted-zone-id Zxxx \
  --change-batch '{
    "Changes": [{
      "Action": "DELETE",
      "ResourceRecordSet": {
        "Name": "myapp.example.com",
        "Type": "TXT",
        "TTL": 300,
        "ResourceRecords": [{"Value": "\"heritage=external-dns\""}]
      }
    }]
  }'

# Option 3: Give this instance a distinct TXT prefix so its ownership
# records don't collide with another instance's:
# --txt-owner-id=my-cluster --txt-prefix=my-cluster-

# Option 4: Run instances side by side by restricting each to annotated
# sources (label-selector syntax over annotations):
# --annotation-filter=external-dns.alpha.kubernetes.io/hostname
```
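The ownership value is a comma-separated list of `key=value` pairs. A small shell helper (hypothetical, for inspection only) can pull out the owner ID so you can compare it against `--txt-owner-id`:

```shell
# txt_owner VALUE
# Prints the owner ID embedded in an ExternalDNS ownership TXT value,
# e.g. "heritage=external-dns,external-dns/owner=my-cluster" -> my-cluster
txt_owner() {
  echo "$1" | tr ',' '\n' | sed -n 's|^external-dns/owner=||p'
}

txt_owner "heritage=external-dns,external-dns/owner=my-cluster"
# prints: my-cluster
```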
Step 8: Check RBAC Permissions
```bash
# Check the ExternalDNS service account:
kubectl get sa externaldns -n externaldns

# Check the ClusterRole:
kubectl get clusterrole externaldns -o yaml

# Required permissions:
# - services   (get, list, watch)
# - ingresses  (get, list, watch)
# - endpoints  (get, list, watch)

# Check the ClusterRoleBinding:
kubectl get clusterrolebinding externaldns -o yaml

# Create RBAC if missing:
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: externaldns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: externaldns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: externaldns
subjects:
  - kind: ServiceAccount
    name: externaldns
    namespace: externaldns
EOF

# Verify the permissions:
kubectl auth can-i list services \
  --as=system:serviceaccount:externaldns:externaldns
```
Step 9: Enable Dry-Run Mode
```bash
# Test without making changes (add --dry-run to the Deployment args;
# kubectl has no "set args" subcommand):
kubectl edit deploy externaldns -n externaldns
#   - --dry-run

# Check what would be created:
kubectl logs -n externaldns deploy/externaldns --tail=100 | grep -iE "create|update"

# Verify the planned changes look right. The log shows lines like:
# Create record: myapp.example.com A 1.2.3.4
# Update record: myapp.example.com A 1.2.3.5

# Remove the --dry-run arg after testing.

# Watch records being created:
kubectl logs -n externaldns deploy/externaldns -f | grep -i "changes"
```
Step 10: ExternalDNS Verification Script
```bash
# Create a verification script:
cat <<'EOF' > /usr/local/bin/check-externaldns.sh
#!/bin/bash
NS=${1:-"externaldns"}

echo "=== ExternalDNS Pods ==="
kubectl get pods -n "$NS"

echo ""
echo "=== ExternalDNS Args ==="
kubectl get deploy externaldns -n "$NS" -o yaml | grep -A20 args

echo ""
echo "=== Provider Secret ==="
kubectl get secret -n "$NS" | grep -E "aws|azure|cloudflare|google"

echo ""
echo "=== Annotated Services ==="
kubectl get svc -A -o json | jq '.items[]
  | select(.metadata.annotations["external-dns.alpha.kubernetes.io/hostname"])
  | {name: .metadata.name, ns: .metadata.namespace,
     hostname: .metadata.annotations["external-dns.alpha.kubernetes.io/hostname"]}'

echo ""
echo "=== Annotated Ingresses ==="
kubectl get ingress -A -o json | jq '.items[]
  | select(.metadata.annotations["external-dns.alpha.kubernetes.io/hostname"])
  | {name: .metadata.name, ns: .metadata.namespace,
     hostname: .metadata.annotations["external-dns.alpha.kubernetes.io/hostname"]}'

echo ""
echo "=== Recent Logs ==="
kubectl logs -n "$NS" deploy/externaldns --tail=20

echo ""
echo "=== Events ==="
kubectl get events -n "$NS" --sort-by='.lastTimestamp' | tail -10

echo ""
echo "=== Recommendations ==="
echo "1. Add the external-dns annotation to Services/Ingresses"
echo "2. Verify the domain filter matches the hostname domain"
echo "3. Check provider credentials are valid"
echo "4. Ensure the source type is configured (service, ingress)"
echo "5. Verify the owner ID matches existing TXT records"
echo "6. Check RBAC permissions for the service account"
echo "7. Test with dry-run mode first"
EOF

chmod +x /usr/local/bin/check-externaldns.sh

# Usage:
/usr/local/bin/check-externaldns.sh externaldns
```
ExternalDNS Checklist
| Check | Expected |
|---|---|
| Annotations present | hostname annotation on Service |
| Source configured | --source=service or ingress |
| Domain filter | Hostname domain in filter |
| Provider credentials | Valid and in secret |
| Owner ID | Matches TXT record owner |
| RBAC | Can list/watch services |
| Zone accessible | Route53/CloudDNS zone exists |
Verify the Fix
```bash
# After fixing ExternalDNS issues:

# 1. Check ExternalDNS logs (should show record updates):
kubectl logs -n externaldns deploy/externaldns --tail=20

# 2. Verify the DNS record (should return an A record):
dig myapp.example.com

# 3. Check the ownership TXT record (shows heritage=external-dns):
dig TXT myapp.example.com

# 4. Test DNS resolution (should resolve to the service IP):
nslookup myapp.example.com

# 5. Verify in the provider console (record exists):
aws route53 list-resource-record-sets --hosted-zone-id Zxxx

# 6. Check ExternalDNS status (no errors, watching sources):
kubectl logs -n externaldns deploy/externaldns -f
```
Related Issues
- [Fix DNS Resolution Failed](/articles/fix-dns-resolution-failed)
- [Fix Kubernetes Ingress Not Routing Traffic](/articles/fix-kubernetes-ingress-not-routing-traffic)
- [Fix Cloudflare 521 Web Server Down](/articles/fix-cloudflare-521-web-server-down)