Introduction
When Prometheus scrapes a target and receives a connection refused error, the target is marked as DOWN. Connection refused means nothing accepted the TCP connection at the scraped address: the exporter process is not running, is listening on a different port, or is blocked by a firewall or network policy. Missing scrape data creates gaps in monitoring dashboards and can prevent alert rules from firing correctly.
Symptoms
- Prometheus targets page shows target status as DOWN with `connection refused`
- `up` metric for the target is 0
- Grafana panels depending on the target's metrics show no data
- Prometheus logs show `scrape failed: Get "http://target:9100/metrics": dial tcp: connection refused`
- Alert rules depending on the target's metrics stop evaluating or fire incorrectly
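A quick way to enumerate every target that failed its last scrape is to query the `up` metric through the Prometheus HTTP API. A minimal sketch, assuming Prometheus is reachable at `localhost:9090` and `jq` is installed:

```bash
# List job and instance for every target whose last scrape failed (up == 0).
# localhost:9090 is an assumption; point this at your Prometheus server.
curl -s 'http://localhost:9090/api/v1/query?query=up==0' \
  | jq -r '.data.result[].metric | "\(.job) \(.instance)"'
```

Each output line names one down target, which tells you where to run the connectivity checks below.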
Common Causes
- Exporter process crashed or was not started after system reboot
- Exporter listening on a different port than Prometheus expects
- Firewall or Kubernetes NetworkPolicy blocking access to the exporter port
- Exporter bound to localhost only, not accessible from Prometheus pod or server
- TLS configured on the exporter but Prometheus scraping over HTTP
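The last two causes can be distinguished quickly from the shell. A sketch, where `target-host` and port 9100 are placeholders for your exporter:

```bash
# Localhost-only binding: a 127.0.0.1:9100 listen address is unreachable from
# other hosts; *:9100 or 0.0.0.0:9100 accepts connections on all interfaces.
ss -tlnp | grep ':9100'

# TLS mismatch: if plain HTTP fails but HTTPS answers, the exporter has TLS
# enabled and the scrape config needs scheme: https.
curl -s  http://target-host:9100/metrics  | head -1
curl -sk https://target-host:9100/metrics | head -1
```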
Step-by-Step Fix
1. **Check if the exporter process is running:** Verify the exporter is active.

   ```bash
   systemctl status node_exporter
   # Or for containerized exporters
   docker ps | grep node-exporter
   ```

2. **Verify the exporter is listening on the expected port:** Check the listening port.

   ```bash
   ss -tlnp | grep 9100
   # Should show: LISTEN 0 128 *:9100
   ```

3. **Test connectivity from the Prometheus server:** Verify network reachability.

   ```bash
   curl -s http://target-host:9100/metrics | head -5
   # For Kubernetes
   kubectl exec -n monitoring prometheus-0 -- curl -s http://target-pod:9100/metrics | head -5
   ```

4. **Fix firewall rules or network policies:** Allow Prometheus to reach the exporter.

   ```bash
   # UFW example
   ufw allow from prometheus-ip to any port 9100
   # Kubernetes NetworkPolicy
   kubectl apply -f - <<EOF
   apiVersion: networking.k8s.io/v1
   kind: NetworkPolicy
   metadata:
     name: allow-prometheus-scrape
   spec:
     podSelector:
       matchLabels:
         app: node-exporter
     ingress:
     - from:
       - namespaceSelector:
           matchLabels:
             name: monitoring
       ports:
       - port: 9100
   EOF
   ```

5. **Restart the exporter and verify scrape recovery:** Bring the exporter back online.

   ```bash
   systemctl restart node_exporter
   # Verify in Prometheus UI: http://prometheus:9090/targets
   ```
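Recovery can also be confirmed from the command line instead of the UI, via the Prometheus targets API. A sketch, where `localhost:9090` and the `target-host:9100` instance label are assumptions to adjust for your setup (`jq` required):

```bash
# Print scrape health and last error for one instance; shows "up" once recovered.
curl -s 'http://localhost:9090/api/v1/targets?state=active' \
  | jq -r '.data.activeTargets[]
           | select(.labels.instance == "target-host:9100")
           | "\(.health) \(.lastError)"'
```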
Prevention
- Deploy exporters as systemd services with `Restart=always` for automatic recovery
- Use Prometheus service discovery (Kubernetes, EC2, Consul) to keep target lists current
- Monitor the `up` metric and alert when critical targets go down
- Include exporter health checks in deployment pipelines before marking deployments as healthy
- Document expected ports and paths for all exporters in a runbook
- Test network connectivity between Prometheus and all exporters after any network configuration change
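The first prevention item can be applied without editing a packaged unit file by using a systemd drop-in. A sketch, where the `node_exporter` unit name is an assumption to match against your exporter:

```bash
# Create a drop-in that makes systemd restart the exporter on any exit
# (unit name node_exporter is an assumption; adjust for your exporter).
sudo mkdir -p /etc/systemd/system/node_exporter.service.d
sudo tee /etc/systemd/system/node_exporter.service.d/restart.conf >/dev/null <<'EOF'
[Service]
Restart=always
RestartSec=5
EOF
sudo systemctl daemon-reload
sudo systemctl restart node_exporter
```

`Restart=always` covers crashes as well as clean exits; `RestartSec=5` avoids a tight restart loop if the exporter fails immediately on startup.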