What's Actually Happening

OpenTelemetry traces are not being exported to the backend. No traces appear in Jaeger, Tempo, or other observability platforms.

The Error You'll See

```bash
$ kubectl logs myapp

ERROR: Failed to export traces: connection refused
```

Why This Happens

  1. Wrong exporter endpoint in the application configuration
  2. No network path from the application to the collector
  3. Authentication failure (missing or invalid API key/headers)
  4. Collector pod down or crash-looping
  5. Sampling rate too low, so few or no spans are ever exported
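Before walking the steps below, a quick sketch that prints the exporter-related environment the SDK will actually see (the fallback labels are assumptions about SDK defaults):

```shell
#!/bin/sh
# Print the OTel exporter environment; unset values fall back to a label.
echo "Endpoint: ${OTEL_EXPORTER_OTLP_ENDPOINT:-<not set>}"
echo "Sampler:  ${OTEL_TRACES_SAMPLER:-<SDK default: parentbased_always_on>}"
echo "Headers:  ${OTEL_EXPORTER_OTLP_HEADERS:-<none>}"
```

If the endpoint prints `<not set>`, the SDK is falling back to its default (typically `localhost`), which explains a `connection refused` inside a pod.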

Step 1: Check Collector Status

```bash
kubectl get pods -n monitoring -l app=otel-collector
kubectl logs -n monitoring deployment/otel-collector
```

Step 2: Check Endpoint Configuration

```bash
# In the application environment:
# gRPC exporter: port 4317, no path
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
# OTLP/HTTP exporter: port 4318, with the signal-specific path
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://otel-collector:4318/v1/traces
```

Note: mixing these up is a common cause of silent export failures. The gRPC endpoint (4317) takes no URL path; the per-signal `/v1/traces` path belongs to the OTLP/HTTP endpoint (4318).
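To confirm what the running pod actually sees, rather than what the manifest says, dump its environment (the deployment name `myapp` is an assumption):

```shell
kubectl exec deployment/myapp -- env | grep '^OTEL_'
```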

Step 3: Test Connectivity

```bash
nc -zv otel-collector 4317
curl -X POST http://otel-collector:4318/v1/traces \
  -H "Content-Type: application/json" -d '{}'
```
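If the port is reachable, you can go one step further and send a synthetic trace with `telemetrygen` from opentelemetry-collector-contrib; the flags below are sketched from memory, so verify them against your installed version:

```shell
telemetrygen traces --otlp-insecure \
  --otlp-endpoint otel-collector:4317 \
  --traces 1
```

If the synthetic trace shows up in your backend but application traces don't, the problem is on the SDK side (endpoint, sampling, or auth), not the collector.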

Step 4: Check Collector Config

```yaml
# config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: tempo:4317
    tls:
      insecure: true  # gRPC exporters default to TLS; required for a plaintext backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```
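To confirm spans are reaching the collector at all, a common trick is to add the `debug` exporter (named `logging` in older collector releases) alongside the real one; a sketch of the relevant fragment:

```yaml
exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp, debug]
```

Spans then appear in the collector's own logs, which splits the problem in half: app-to-collector versus collector-to-backend.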

Step 5: Check Sampling

```bash
# Check the sampling rate
echo $OTEL_TRACES_SAMPLER
echo $OTEL_TRACES_SAMPLER_ARG

# Set to always sample for testing:
export OTEL_TRACES_SAMPLER=always_on
```

Step 6: Check Authentication

```bash
# If using an API key:
export OTEL_EXPORTER_OTLP_HEADERS="api-key=xxx"
```
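To verify the key is accepted independently of the SDK, you can send an empty OTLP/HTTP request with the same header and check the status code (the header name `api-key` is an assumption; match your vendor's documentation):

```shell
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST http://otel-collector:4318/v1/traces \
  -H "Content-Type: application/json" \
  -H "api-key: xxx" \
  -d '{}'
```

A 401 or 403 here points at authentication; a 2xx means the credentials are fine and the problem is elsewhere.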

Step 7: Enable Debug Logging

```bash
export OTEL_LOG_LEVEL=debug
export OTEL_CPP_LOG_LEVEL=debug
```

Step 8: Check Resource Attributes

```bash
# Without service.name, traces show up as "unknown_service" in most UIs
export OTEL_RESOURCE_ATTRIBUTES=service.name=myapp,service.version=1.0
```

Step 9: Restart Application

```bash
kubectl rollout restart deployment/myapp
```

Step 10: Verify Traces

```bash
# Check in the Jaeger/Tempo UI, or query by trace ID:
curl http://tempo:3200/api/traces/<trace-id>
```
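If you don't have a trace ID handy, Tempo's search API can confirm that traces for your service are arriving at all (the endpoint and tag syntax below are assumptions for recent Tempo versions; check your Tempo docs):

```shell
curl "http://tempo:3200/api/search?tags=service.name%3Dmyapp&limit=5"
```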
  • [Fix OpenTelemetry Collector Exporter Timeout](/articles/fix-opentelemetry-collector-exporter-timeout)
  • [Fix Tempo Ingestion Rate Limit](/articles/fix-tempo-ingestion-rate-limit)