What's Actually Happening
Grafana dashboard panels fail to render when their queries take longer than the configured timeout to execute. The panel shows an error message instead of the visualization.
The Error You'll See
Panel error:

```
Panel rendering timeout
Timeout error: request timed out after 30 seconds
```

Dashboard error:

```
Failed to fetch dashboard data
Query timeout exceeded
```

Browser console:

```
Error: Timeout exceeded
POST http://grafana:3000/api/ds/query 504 Gateway Timeout
```

Why This Happens
1. Slow Prometheus queries - large time ranges, high cardinality
2. Too many panels - dashboards with many simultaneous queries
3. Panel timeout too low - the default 30s timeout is insufficient
4. Grafana resources - CPU/memory limits too low
5. Network latency - slow connection to the data source
6. Data source overload - the backend cannot handle concurrent queries
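Cause 1 is easy to confirm from a shell: time the panel's query and compare the duration against the panel timeout. A minimal sketch, with `sleep 2` standing in for the actual query command (swap in the curl from Step 1):

```shell
#!/bin/sh
# Sketch: time a command and compare it against the panel timeout.
# `sleep 2` is a stand-in for the real query (e.g. the curl in Step 1).
TIMEOUT_S=30

start=$(date +%s)
sleep 2                              # replace with your query command
elapsed=$(( $(date +%s) - start ))

echo "query took ${elapsed}s (panel timeout: ${TIMEOUT_S}s)"
if [ "$elapsed" -ge "$TIMEOUT_S" ]; then
  echo "query exceeds the panel timeout"
fi
```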
Step 1: Check Panel Query
```bash
# In Grafana UI:
# Panel > Edit > Query inspector

# Check the query duration:
# Query time: 45s (exceeds 30s timeout)

# Copy the query and test it directly:
curl 'http://prometheus:9090/api/v1/query_range?query=<panel-query>&start=<start>&end=<end>&step=<step>'

# Check query complexity
# Simple query:
#   up
# Complex query (slower):
#   sum(rate(http_requests_total{job="api"}[5m])) by (endpoint)
#     / sum(rate(http_requests_total[5m])) by (endpoint)

# Test with a reduced time range
# Change the dashboard time range from 7d to 1h
```
Step 2: Adjust Panel Timeout
```bash
# In Grafana panel settings:
# Panel > Edit > Query options

# Increase the timeout:
#   Query timeout: 60s        # default: 30s

# Reduce data points:
#   Max data points: 500      # default: varies by panel size

# Set a minimum step:
#   Min step: 30s             # prevents high-frequency queries

# In the Grafana configuration (grafana.ini):
#   [dataproxy]
#   timeout = 60              # global data source timeout

# Or per data source:
# Configuration > Data sources > Prometheus > Settings > Timeout: 60s

# Restart Grafana
systemctl restart grafana-server
```
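Max data points and Min step interact: Grafana derives a panel's query step roughly from the time range divided by Max data points, floored at Min step. That arithmetic can be sketched with a hypothetical `effective_step` helper (Grafana also snaps to round intervals, so real values differ slightly):

```shell
#!/bin/sh
# Sketch: approximate the query step Grafana will use for a panel.
# effective_step = max(ceil(range / max_data_points), min_step)
# Illustrative arithmetic only, not Grafana's exact algorithm.

effective_step() {
  range_s=$1; max_points=$2; min_step_s=$3
  step=$(( (range_s + max_points - 1) / max_points ))  # ceiling division
  [ "$step" -lt "$min_step_s" ] && step=$min_step_s
  echo "$step"
}

# 7-day range, 500 data points, 30s Min step:
effective_step $(( 7 * 24 * 3600 )) 500 30   # -> 1210 (about 20m per point)
```

Raising Max data points or shrinking Min step makes queries heavier; the sketch shows why a 30d range with a tiny step produces far more samples than the data source can return in time.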
Step 3: Optimize Panel Query
```bash
# Use the $__interval variable for dynamic resolution:
#   rate(http_requests_total[$__interval])

# Use $__rate_interval for rate calculations:
#   rate(http_requests_total[$__rate_interval])

# Add label filters early, instead of filtering after aggregation:
#   sum(rate(http_requests_total{job="api"}[5m])) by (endpoint)

# Use recording rules for pre-computed metrics:
#   http_requests:rate5m:by_job
# instead of:
#   sum(rate(http_requests_total[5m])) by (job)

# Reduce the dashboard time range:
#   From: Last 30 days  ->  Last 7 days

# For long time ranges, use a larger step (panel query options):
#   Min step: 1h for a 30d range
#   Min step: 5m for a 7d range
```
Step 4: Reduce Dashboard Panels
```bash
# Check the dashboard panel count:
curl -s 'http://grafana:3000/api/dashboards/uid/<uid>' | jq '.dashboard.panels | length'

# If > 20 panels, consider splitting:
# - Create multiple dashboards
# - Use rows to collapse panels
# - Remove unused panels

# Use repeated panels for efficiency:
# Panel > Edit > Repeat options
#   Repeat by variable: $job    # creates a panel per job value

# Disable auto-refresh for heavy dashboards:
# Dashboard settings > Auto refresh: Off
# Or increase the refresh interval: 5m instead of 30s

# Many simultaneous queries overload the data source:
curl -s 'http://grafana:3000/api/ds/query' -X POST \
  -H 'Content-Type: application/json' -d '{"queries": [...]}'
```
Step 5: Check Grafana Resources
```bash
# Check the Grafana process
ps aux | grep grafana

# Check memory usage
free -h

# Check the Grafana configuration
# (escape the brackets; a bare [server] is a grep character class)
grep -A 5 '\[server\]' /etc/grafana/grafana.ini

# For systemd:
cat /etc/systemd/system/grafana-server.service
# Increase limits in the unit file:
#   MemoryLimit=4G
#   CPUQuota=200%

# For Docker:
docker stats grafana
docker update --memory 4g --cpus 2 grafana

# For Kubernetes (container spec):
#   resources:
#     limits:
#       memory: 4Gi
#       cpu: 2
#     requests:
#       memory: 2Gi
#       cpu: 1

# Check Grafana logs
journalctl -u grafana-server | grep -i timeout
```
Step 6: Optimize Data Source Connection
```bash
# Check the Prometheus connection
curl 'http://prometheus:9090/api/v1/query?query=up'

# Check network latency
ping prometheus
mtr prometheus

# Test a range query from the Grafana server
# (query_range needs Unix timestamps or RFC 3339 times, plus a step)
curl "http://prometheus:9090/api/v1/query_range?query=up&start=$(date -d '1 hour ago' +%s)&end=$(date +%s)&step=60"

# Check Prometheus query load
curl -s 'http://prometheus:9090/metrics' | grep prometheus_engine_queries

# If Prometheus is overloaded:
# increase Prometheus resources or reduce scrape targets

# Use a Prometheus caching layer:
# install Thanos Query Frontend or Cortex for query caching

# Configure Grafana data source query caching:
# (Grafana Enterprise feature)
```
Step 7: Use Dashboard Variables
```bash
# Add filter variables to reduce query scope:
# Dashboard > Settings > Variables > Add variable

# Job variable:
#   Name: job
#   Type: Query
#   Query: label_values(up, job)

# Add the filter to queries:
#   rate(http_requests_total{job="$job"}[5m])

# With multi-select (allow multiple job selections), use a regex match:
#   rate(http_requests_total{job=~"$job"}[5m])

# Add a time-resolution variable:
#   Name: resolution
#   Type: Custom
#   Values: 5m,15m,1h,6h

# Use it in queries:
#   rate(http_requests_total[$resolution])
```
Step 8: Check Grafana Performance
```bash
# Check Grafana's internal metrics (Prometheus text format, not JSON)
curl -s 'http://grafana:3000/metrics' | grep grafana_http_request_duration

# Enable internal metrics in grafana.ini:
#   [metrics]
#   enabled = true

# Key metrics to check:
#   grafana_http_request_duration_seconds
#   grafana_api_response_time_seconds

# Check the Grafana database
# For SQLite:
ls -la /var/lib/grafana/grafana.db
du -h /var/lib/grafana/grafana.db

# For PostgreSQL/MySQL, check the connection pool size in grafana.ini:
#   [database]
#   max_open_conn = 20
#   max_idle_conn = 10

# Check the session count
# Admin > Users > Sessions
```
Step 9: Implement Query Caching
```bash
# Use Prometheus recording rules for pre-computed metrics
cat << 'EOF' > /etc/prometheus/recording_rules.yml
groups:
  - name: dashboard_metrics
    rules:
      - record: dashboard:http_requests:rate5m
        expr: sum(rate(http_requests_total[5m])) by (job, endpoint)
      - record: dashboard:cpu_usage:rate5m
        expr: sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)
EOF

# Use Grafana query caching (Enterprise):
# Configuration > Data sources > Enable caching

# For OSS, use an external cache:
# install and configure a query-cache proxy

# Enable response compression in grafana.ini:
#   [server]
#   enable_gzip = true
```
Step 10: Monitor Grafana Health
```bash
# Provision a Grafana self-monitoring dashboard:
cat << 'EOF' > /etc/grafana/provisioning/dashboards/grafana-health.json
{
  "dashboard": {
    "title": "Grafana Health",
    "panels": [
      {
        "title": "Query Duration",
        "type": "graph",
        "targets": [
          {"expr": "histogram_quantile(0.9, rate(grafana_http_request_duration_seconds_bucket{path=\"/api/ds/query\"}[5m]))"}
        ]
      },
      {
        "title": "Error Rate",
        "type": "graph",
        "targets": [
          {"expr": "rate(grafana_http_request_duration_seconds_count{status=\"500\"}[5m])"}
        ]
      }
    ]
  }
}
EOF

# Set up an alert for Grafana query timeouts:
curl 'http://grafana:3000/api/alerts' -X POST -H 'Content-Type: application/json' -d '{
  "name": "Grafana Query Timeout",
  "message": "Dashboard queries timing out",
  "conditions": [...]
}'

# Watch Grafana logs for timeouts and errors (note -E for alternation):
journalctl -u grafana-server -f | grep -iE "timeout|error"
```
Grafana Panel Timeout Checklist
| Check | Location | Expected |
|---|---|---|
| Panel timeout | Panel > Query options | > query duration |
| Query time | Query inspector | < timeout |
| Panel count | Dashboard | < 20 panels |
| Data points | Panel settings | Reasonable limit |
| Grafana memory | ps aux | < 80% limit |
| Prometheus load | /metrics | Not overloaded |
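The first rows of the checklist can be folded into a quick triage script. The numbers below are placeholders you would fill in from the query inspector and the dashboard itself:

```shell
#!/bin/sh
# Sketch: checklist triage. Replace the placeholder values with the
# figures you read from the query inspector and the dashboard.
QUERY_TIME_S=45     # Query inspector: query duration
PANEL_TIMEOUT_S=30  # Panel > Query options: timeout
PANEL_COUNT=24      # panels on the dashboard

if [ "$QUERY_TIME_S" -lt "$PANEL_TIMEOUT_S" ]; then
  echo "query time: OK"
else
  echo "query time: FAIL - raise the timeout or optimize the query"
fi

if [ "$PANEL_COUNT" -le 20 ]; then
  echo "panel count: OK"
else
  echo "panel count: FAIL - consider splitting the dashboard"
fi
```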
Verify the Fix
```bash
# After optimizing panel settings:

# 1. Open the problematic dashboard
#    All panels render successfully

# 2. Check the query inspector
#    Panel > Edit > Query inspector
#    Query time: 5s, Timeout: 60s

# 3. Test with the original time range
#    Dashboard time: Last 30 days
#    Panels render within the timeout

# 4. Check Grafana logs
journalctl -u grafana-server | grep -i timeout
#    Expect no timeout errors

# 5. Monitor Grafana metrics
curl -s 'http://grafana:3000/metrics' | grep grafana_http_request_duration
#    Request latency should be reasonable

# 6. Test multiple dashboards
#    Open several dashboards simultaneously; all render correctly
```
Related Issues
- [Fix Grafana Dashboard Not Loading](/articles/fix-grafana-dashboard-not-loading)
- [Fix Prometheus Query Timeout](/articles/fix-prometheus-query-timeout)
- [Fix Grafana Datasource Error](/articles/fix-grafana-datasource-error)