The Problem

Prometheus logs show label conflict errors, and metrics are being dropped or overwritten:

```bash
level=warn ts=2026-04-04T05:30:18.456Z caller=scrape.go:1234 msg="Label conflict detected" component="scrape manager" target=http://10.0.0.5:9090/metrics err="duplicate label: \"job\""
level=error ts=2026-04-04T05:30:18.457Z caller=scrape.go:1235 msg="Error adding sample" err="label name \"__name__\" is reserved"
```

Label conflicts occur when:

- Exposed metrics contain labels that conflict with Prometheus internal labels
- Relabeling creates duplicate labels
- Labels have invalid names or values
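To see what a conflicting exposition looks like, the snippet below writes a hypothetical metrics file (the metric name, sample values, and `/tmp` path are made up for illustration) and runs the same duplicate-label grep used in the diagnosis steps:

```shell
# Hypothetical scrape output with a duplicate `job` label on one series
cat > /tmp/bad_metrics.txt <<'EOF'
http_requests_total{job="myapp",method="get",job="other"} 42
http_requests_total{method="post"} 7
EOF

# Count series carrying a duplicated job label - prints 1 here
grep -cE '\{.*job=.*job=' /tmp/bad_metrics.txt
```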

Diagnosis

Check for Label Issues

```bash
# Check specific target metrics
curl -s http://target:9090/metrics | grep -E '^[a-zA-Z_:][a-zA-Z0-9_:]*\{'

# Look for duplicate labels in output
curl -s http://target:9090/metrics | grep -E '\{.*job=.*job='
```

Prometheus Metrics for Label Issues

```promql
# Check for label conflicts (if metric exists)
prometheus_target_scrapes_sample_duplicate_label_total

# Rate of label conflicts
rate(prometheus_target_scrapes_sample_duplicate_label_total[5m])
```

Identify Problem Labels

```bash
# List all metric names known to the Prometheus server
curl -s http://localhost:9090/api/v1/label/__name__/values | jq '.data[]' | head -20

# Check for repeated reserved labels
# (backreferences like \1 in grep -E are a GNU extension)
curl -s http://target:9090/metrics | grep -E '\{.*(__name__|job|instance|up)=.*\1='
```

Solutions

1. Fix Duplicate Labels from Target

Metrics exposed by targets may include labels Prometheus already adds:

```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'myapp'
    # With the default honor_labels: false, a `job` label exposed by the
    # target is automatically renamed to `exported_job`, so the
    # Prometheus-added job label is preserved
    metric_relabel_configs:
      # Drop the renamed duplicate if it is not needed
      - action: labeldrop
        regex: 'exported_job'
```
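If the target's own labels should win instead, Prometheus's standard `honor_labels` option avoids the rename entirely; a minimal sketch:

```yaml
scrape_configs:
  - job_name: 'myapp'
    # With honor_labels: true, labels exposed by the target override
    # the server-attached job/instance labels instead of being
    # renamed to exported_*
    honor_labels: true
```

Use this with care: it lets a misbehaving target overwrite labels you rely on for routing and alerting.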

2. Handle Label Name Conflicts

Invalid label names cause rejections:

```yaml
scrape_configs:
  - job_name: 'myapp'
    # __meta_* labels exist only during target relabeling, so these
    # rules belong in relabel_configs, not metric_relabel_configs
    relabel_configs:
      # Copy a discovery label into a valid, non-reserved name
      - source_labels: [__meta_kubernetes_pod_label_version]
        target_label: version
        action: replace

      # Remaining __-prefixed labels are dropped automatically after
      # target relabeling; an explicit drop is optional
      - action: labeldrop
        regex: '__meta_kubernetes_.+'
```

3. Resolve Relabeling Conflicts

When relabeling creates duplicates:

```yaml
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Order matters - set instance first
      - source_labels: [__meta_kubernetes_pod_ip]
        target_label: instance
        action: replace
        regex: '(.+)'
        replacement: '${1}:9090'

      # Then set job, which won't conflict
      - source_labels: [__meta_kubernetes_namespace]
        target_label: job
        action: replace

      # Remove the conflicting pod label; do not drop __address__,
      # since Prometheus still needs it to connect to the target
      - action: labeldrop
        regex: '__meta_kubernetes_pod_label_job'
```

4. Fix Label Value Conflicts

Different label values for the same label name can be consolidated:

```yaml
scrape_configs:
  - job_name: 'myapp'
    metric_relabel_configs:
      # Consolidate label values
      - source_labels: [status]
        target_label: status
        action: replace
        regex: '(success|ok|completed)'
        replacement: 'success'

      - source_labels: [status]
        target_label: status
        action: replace
        regex: '(error|fail|failed)'
        replacement: 'error'
```
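The effect of the two consolidation rules can be previewed outside Prometheus; this sketch replays the same regexes with sed on sample status values (the values themselves are illustrative):

```shell
# Replay the consolidation regexes on sample status values
for v in success ok completed error fail failed pending; do
  printf '%s -> ' "$v"
  printf '%s\n' "$v" |
    sed -E 's/^(success|ok|completed)$/success/; s/^(error|fail|failed)$/error/'
done
# "ok" and "completed" map to "success", "fail" and "failed" to
# "error", and unmatched values such as "pending" pass through
```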

5. Drop Conflicting Metrics

When conflicts can't be resolved:

```yaml
scrape_configs:
  - job_name: 'myapp'
    metric_relabel_configs:
      # Drop metrics with conflicting labels
      # (source_labels values are joined with ';' before matching)
      - action: drop
        source_labels: [__name__, job]
        regex: 'problematic_metric;.+'

      # Or keep only specific metrics
      - action: keep
        source_labels: [__name__]
        regex: '(http_requests_total|process_cpu_seconds_total)'
```

6. Use labelmap for Kubernetes Labels

Avoid conflicts with Kubernetes label mapping:

```yaml
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Map specific labels, avoiding conflicts
      - action: labelmap
        regex: '__meta_kubernetes_pod_label_(app|component|tier)'

      # Handle label values with special characters
      - source_labels: [__meta_kubernetes_pod_label_version]
        target_label: version
        action: replace
        regex: '([^/]+)(/.*)?'
        replacement: '${1}'
```
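The version-trimming regex above keeps everything before the first `/`; here is a quick sed preview on made-up values:

```shell
# Keep only the part before the first slash, as the relabel rule does
for v in "1.2.3" "1.2.3/extra" "v2/beta/1"; do
  printf '%s -> ' "$v"
  printf '%s\n' "$v" | sed -E 's|^([^/]+)(/.*)?$|\1|'
done
# 1.2.3 and 1.2.3/extra both become 1.2.3; v2/beta/1 becomes v2
```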

Verification

Test relabeling configuration:

```bash
# Use promtool to check the config
promtool check config prometheus.yml

# Inspect the labels of a specific target after relabeling
curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[] | select(.labels.job == "myapp") | .labels'
```

Verify no conflicts in logs:

```bash
# Check for label conflicts
journalctl -u prometheus --since "1 hour ago" | grep -i "label conflict"

# Check for duplicate labels
journalctl -u prometheus --since "1 hour ago" | grep -i "duplicate"
```

Label Naming Best Practices

Follow these rules to avoid conflicts:

  1. Reserved labels: never use `__name__`, `job`, `instance`, or labels starting with `__`
  2. Naming convention: use snake_case for label names
  3. Consistency: the same metric should have consistent labels across all instances
  4. Cardinality: keep the number of label value combinations reasonable
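Rules 1 and 2 can be checked mechanically; this is a small sketch (the `valid_label` helper is hypothetical, not part of any Prometheus tooling) that applies the label-name grammar `[a-zA-Z_][a-zA-Z0-9_]*` and rejects reserved `__`-prefixed names:

```shell
# Hypothetical helper: accept a label name only if it matches the
# Prometheus label grammar and is not reserved (__-prefixed)
valid_label() {
  case "$1" in
    __*) return 1 ;;  # reserved for internal use
  esac
  printf '%s' "$1" | grep -qE '^[a-zA-Z_][a-zA-Z0-9_]*$'
}

valid_label "pod_name" && echo "pod_name: ok"
valid_label "__name__" || echo "__name__: reserved"
valid_label "1bad"     || echo "1bad: invalid"
```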

```yaml
# Good label configuration: these rules reference __meta_* labels,
# which only exist during target relabeling (relabel_configs)
relabel_configs:
  # Proper label transformation
  - source_labels: [__meta_kubernetes_pod_name]
    target_label: pod
    action: replace

  - source_labels: [__meta_kubernetes_namespace]
    target_label: namespace
    action: replace

  # Drop internal labels after use
  - action: labeldrop
    regex: '__meta_kubernetes_.+'
```

Prevention

Add alerts for label issues:

```yaml
groups:
  - name: label_alerts
    rules:
      - alert: LabelConflictDetected
        expr: rate(prometheus_target_scrapes_sample_duplicate_label_total[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Label conflicts detected in scrapes"
          description: "Target {{ $labels.instance }} has label conflicts"

      - alert: HighLabelCount
        expr: count by (job) ({__name__=~".+"}) > 50000
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "High label count for job {{ $labels.job }}"
```