Introduction

When Prometheus scrapes a target and receives a connection refused error, the target is marked as DOWN. A refused connection means nothing is accepting connections on that address and port: the exporter process is not running, it is listening on a different port or interface, or a firewall or network policy is rejecting the connection. Missing scrape data creates gaps in monitoring dashboards and can prevent alert rules from firing correctly.
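
For context, the examples below assume a node_exporter-style target scraped over plain HTTP on port 9100; a minimal scrape configuration sketch, where the job name and hostname are placeholders:

```yaml
# prometheus.yml (fragment) -- job name and hostname are placeholders
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['target-host:9100']
```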

Symptoms

  • Prometheus targets page shows target status as DOWN with connection refused
  • up metric for the target is 0
  • Grafana panels depending on the target's metrics show no data
  • Prometheus logs show scrape failed: Get "http://target:9100/metrics": dial tcp: connection refused
  • Alert rules depending on the target's metrics stop evaluating or fire incorrectly
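
Because rules that reference the dead target's metrics stop producing results, it helps to alert on the up metric directly; a minimal alerting-rule sketch, with group name, duration, and labels as illustrative choices:

```yaml
# alert-rules.yml (fragment) -- names and the 5m threshold are illustrative
groups:
  - name: target-availability
    rules:
      - alert: TargetDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Scrape target {{ $labels.instance }} has been down for 5 minutes"
```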

Common Causes

  • Exporter process crashed or was not started after system reboot
  • Exporter listening on a different port than Prometheus expects
  • Firewall or Kubernetes NetworkPolicy blocking access to the exporter port
  • Exporter bound to localhost only, not accessible from Prometheus pod or server
  • TLS configured on the exporter but Prometheus scraping over HTTP
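
The localhost-only binding in the causes above can be spotted in ss output; a small sketch using a canned sample line so it runs anywhere (on a real host, pipe the output of ss -tlnp into the same awk filter):

```bash
# Canned `ss -tlnp` sample line; on a real host use: ss -tlnp | awk '...'
sample='LISTEN 0 128 127.0.0.1:9100 0.0.0.0:* users:(("node_exporter",pid=812,fd=3))'

# Column 4 is the local address:port; anything bound to 127.0.0.1 or [::1]
# is unreachable from a remote Prometheus server.
echo "$sample" | awk '$4 ~ /^(127\.0\.0\.1|\[::1\])/ {print "loopback-only:", $4}'
```

If the exporter is bound to loopback, restart it listening on all interfaces, for example via node_exporter's --web.listen-address=":9100" flag.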

Step-by-Step Fix

  1. Check that the exporter process is running:

     ```bash
     systemctl status node_exporter
     # Or, for containerized exporters
     docker ps | grep node-exporter
     ```

  2. Verify the exporter is listening on the expected port:

     ```bash
     ss -tlnp | grep 9100
     # Should show a line like: LISTEN 0 128 *:9100
     ```

  3. Test connectivity from the Prometheus server:

     ```bash
     curl -s http://target-host:9100/metrics | head -5
     # For Kubernetes
     kubectl exec -n monitoring prometheus-0 -- curl -s http://target-pod:9100/metrics | head -5
     ```

  4. Fix firewall rules or network policies so Prometheus can reach the exporter:

     ```bash
     # UFW example
     ufw allow from prometheus-ip to any port 9100

     # Kubernetes NetworkPolicy
     kubectl apply -f - <<EOF
     apiVersion: networking.k8s.io/v1
     kind: NetworkPolicy
     metadata:
       name: allow-prometheus-scrape
     spec:
       podSelector:
         matchLabels:
           app: node-exporter
       ingress:
       - from:
         - namespaceSelector:
             matchLabels:
               name: monitoring
         ports:
         - port: 9100
     EOF
     ```

  5. Restart the exporter and verify scrape recovery:

     ```bash
     systemctl restart node_exporter
     # Then confirm target status in the Prometheus UI: http://prometheus:9090/targets
     ```
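
Recovery can also be confirmed from the command line through Prometheus's /api/v1/targets endpoint; a sketch using a canned response so it runs offline (assumes jq is installed; on a live server, replace the sample with curl -s http://prometheus:9090/api/v1/targets):

```bash
# Canned /api/v1/targets response; live equivalent:
#   curl -s http://prometheus:9090/api/v1/targets
sample='{"data":{"activeTargets":[{"labels":{"instance":"target-host:9100"},"health":"up"}]}}'

# Print each active target's instance label and scrape health;
# "up" means the last scrape succeeded.
echo "$sample" | jq -r '.data.activeTargets[] | "\(.labels.instance) \(.health)"'
```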

Prevention

  • Deploy exporters as systemd services with Restart=always for automatic recovery
  • Use Prometheus service discovery (Kubernetes, EC2, Consul) to keep target lists current
  • Monitor the up metric and alert when critical targets go down
  • Include exporter health checks in deployment pipelines before marking deployments as healthy
  • Document expected ports and paths for all exporters in a runbook
  • Test network connectivity between Prometheus and all exporters after any network configuration change
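
The first prevention point can look like this as a unit file; a sketch only, with the binary path and service user as assumptions:

```ini
# /etc/systemd/system/node_exporter.service -- path and user are assumptions
[Unit]
Description=Prometheus Node Exporter
After=network-online.target
Wants=network-online.target

[Service]
User=node_exporter
ExecStart=/usr/local/bin/node_exporter --web.listen-address=":9100"
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now node_exporter so the exporter starts automatically after reboots.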