What's Actually Happening
OpenTelemetry traces are not being exported to the backend. No traces appear in Jaeger, Tempo, or other observability platforms.
The Error You'll See
```bash
$ kubectl logs myapp
ERROR: Failed to export traces: connection refused
```
Why This Happens
1. Wrong exporter endpoint configured in the application
2. Network connectivity problems between the application and the collector
3. Authentication failure (missing or invalid credentials)
4. Collector is down or crash-looping
5. Sampling rate set too low, so traces exist but are dropped before export
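Cause 1 is the most common: the OTLP gRPC port (4317) takes no URL path, while OTLP/HTTP (4318) expects `/v1/traces`. Mixing the two produces exactly the "connection refused" or silent-drop symptoms above. A quick triage sketch; the helper name and heuristics here are illustrative, not part of any OpenTelemetry tooling:

```shell
#!/usr/bin/env bash
# check_otlp_endpoint: flag the classic port/path mismatch in an OTLP endpoint.
# Hypothetical helper -- adapt the rules to your environment.
check_otlp_endpoint() {
  local ep="$1"
  case "$ep" in
    *:4317/v1/traces*)         echo "MISMATCH: 4317 is gRPC; drop /v1/traces or use port 4318" ;;
    *:4318)                    echo "WARN: OTLP/HTTP usually needs the /v1/traces path" ;;
    *:4317*|*:4318/v1/traces*) echo "OK: $ep" ;;
    *)                         echo "UNKNOWN: $ep (non-default port?)" ;;
  esac
}

check_otlp_endpoint "http://otel-collector:4317/v1/traces"  # MISMATCH: ...
check_otlp_endpoint "http://otel-collector:4317"            # OK: ...
```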
Step 1: Check Collector Status
```bash
kubectl get pods -n monitoring -l app=otel-collector
kubectl logs -n monitoring deployment/otel-collector
```

Step 2: Check Endpoint Configuration
```bash
# In the application config. Port 4317 is OTLP/gRPC (no URL path);
# port 4318 is OTLP/HTTP, which uses the /v1/traces path.
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://otel-collector:4318/v1/traces
```

Step 3: Test Connectivity
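Minimal container images often ship without `nc` or `curl`. Bash's built-in `/dev/tcp` pseudo-device can still probe a port; a sketch, with the helper name being hypothetical:

```shell
#!/usr/bin/env bash
# probe_port: test TCP reachability using bash's /dev/tcp, no extra tools needed.
probe_port() {
  local host="$1" port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "reachable: $host:$port"
  else
    echo "unreachable: $host:$port"
  fi
}

# Prints "reachable: ..." or "unreachable: ..." depending on your cluster.
probe_port otel-collector 4317
```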
```bash
# gRPC port:
nc -zv otel-collector 4317
# HTTP port:
curl -X POST -H 'Content-Type: application/json' -d '{}' http://otel-collector:4318/v1/traces
```

Step 4: Check Collector Config
```yaml
# config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: tempo:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```
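To confirm spans are actually arriving at the collector (as opposed to being lost between collector and backend), you can temporarily add the `debug` exporter, which prints received spans to the collector's stdout. A sketch against the config above; on collector versions before roughly v0.86 this exporter was named `logging`:

```yaml
# Temporary diagnostic: print received spans to collector stdout.
exporters:
  otlp:
    endpoint: tempo:4317
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp, debug]
```

If spans show up in `kubectl logs` for the collector but not in the backend, the problem is on the collector-to-backend leg, not in your application.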
Step 5: Check Sampling
```bash
# Check sampling rate
echo $OTEL_TRACES_SAMPLER
echo $OTEL_TRACES_SAMPLER_ARG

# Set to always sample for testing:
OTEL_TRACES_SAMPLER=always_on
```
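With ratio-based sampling (`traceidratio`), only a fraction of traces is exported, which under low traffic can look exactly like a broken pipeline. A back-of-the-envelope check; the ratio and request count below are example values:

```shell
# Expected exported traces = requests * sampling ratio.
# At a 1% ratio and 200 requests, only ~2 traces are exported -- easy to miss.
ratio=0.01
requests=200
awk -v r="$ratio" -v n="$requests" 'BEGIN { printf "expected traces: %d\n", n * r }'
# → expected traces: 2
```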
Step 6: Check Authentication
```bash
# If using an API key:
OTEL_EXPORTER_OTLP_HEADERS="api-key=xxx"
```

Step 7: Enable Debug Logging
```bash
OTEL_LOG_LEVEL=debug
OTEL_CPP_LOG_LEVEL=debug
```

Step 8: Check Resource Attributes
```bash
OTEL_RESOURCE_ATTRIBUTES=service.name=myapp,service.version=1.0
```

Step 9: Restart Application
```bash
kubectl rollout restart deployment/myapp
```

Step 10: Verify Traces
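With debug logging enabled, the SDK typically prints the trace IDs it exports; one of these can be fed into the backend query shown next. A sketch for pulling a trace ID out of log output; the sample log line and its format are illustrative, as the exact wording varies by SDK:

```shell
#!/usr/bin/env bash
# Extract a trace ID (32 lowercase hex chars) from a debug log line.
log_line="exported span trace_id=4bf92f3577b34da6a3ce929d0e0e4736"
trace_id=$(echo "$log_line" | grep -oE '[0-9a-f]{32}' | head -n1)
echo "trace id: $trace_id"
# Then query the backend, e.g.:
# curl "http://tempo:3200/api/traces/$trace_id"
```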
```bash
# Check in the Jaeger/Tempo UI, or query Tempo directly:
curl http://tempo:3200/api/traces/<trace-id>
```

Related Issues
- [Fix OpenTelemetry Collector Exporter Timeout](/articles/fix-opentelemetry-collector-exporter-timeout)
- [Fix Tempo Ingestion Rate Limit](/articles/fix-tempo-ingestion-rate-limit)