What's Actually Happening
MySQL queries execute slowly, taking seconds or minutes to complete instead of milliseconds. Performance degrades over time or with increasing data volume, affecting application responsiveness.
The Error You'll See
Slow query in logs:
```bash
$ tail /var/log/mysql/mysql-slow.log
# Time: 2026-04-16T01:04:00.000000Z
# User@Host: app[app] @ localhost []
# Query_time: 15.234567  Lock_time: 0.000123  Rows_sent: 100  Rows_examined: 5000000
SET timestamp=1744765440;
SELECT * FROM orders WHERE customer_id = 123 AND status = 'pending';
```
Application timeout:

```
# Application error:
mysql.connector.errors.DatabaseError: 2013 (HY000): Lost connection to MySQL server during query
```

EXPLAIN shows full scan:
```sql
mysql> EXPLAIN SELECT * FROM orders WHERE customer_id = 123;
```

| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | SIMPLE | orders | ALL | NULL | NULL | NULL | NULL | 5000000 | 10.00 | Using where |
Why This Happens
1. Missing indexes - the query scans the entire table
2. Inefficient query - poorly written SQL
3. Large result set - returning too many rows
4. Suboptimal joins - joins without proper indexes
5. Outdated statistics - MySQL chooses the wrong plan
6. Insufficient memory - buffer pool too small
Step 1: Identify Slow Queries
```sql
-- Enable slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';

-- Check slow queries (the table is populated only when log_output includes 'TABLE')
SELECT * FROM mysql.slow_log ORDER BY start_time DESC LIMIT 10;

-- Use Performance Schema (MySQL 5.7+); timer columns are in picoseconds
SELECT DIGEST_TEXT,
       COUNT_STAR,
       AVG_TIMER_WAIT/1000000000 AS avg_latency_ms,
       SUM_ROWS_EXAMINED,
       SUM_ROWS_SENT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY AVG_TIMER_WAIT DESC
LIMIT 10;

-- Check current running queries
SHOW PROCESSLIST;

-- Kill a slow query if needed (use the Id column from SHOW PROCESSLIST)
KILL QUERY 12345;
```
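If the Performance Schema is unavailable, the slow log itself can be mined with a few lines of scripting. A minimal Python sketch, assuming the default slow-log format shown earlier (`parse_slow_log` is an illustrative helper, not a MySQL tool):

```python
import re

def parse_slow_log(text):
    """Extract (query_time, statement) pairs from mysqld slow-log text,
    worst queries first."""
    entries = []
    qt = None
    for line in text.splitlines():
        m = re.search(r"Query_time:\s*([\d.]+)", line)
        if m:
            qt = float(m.group(1))
        elif qt is not None and line and not line.startswith(("#", "SET timestamp")):
            entries.append((qt, line.strip()))
            qt = None
    return sorted(entries, reverse=True)

log = """\
# Time: 2026-04-16T01:04:00.000000Z
# User@Host: app[app] @ localhost []
# Query_time: 15.234567  Lock_time: 0.000123  Rows_sent: 100  Rows_examined: 5000000
SET timestamp=1744765440;
SELECT * FROM orders WHERE customer_id = 123 AND status = 'pending';
"""
print(parse_slow_log(log))
```

For real logs, tools like `mysqldumpslow` or `pt-query-digest` do this aggregation far more thoroughly.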
Step 2: Analyze Execution Plan
```sql
-- Basic EXPLAIN
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;

-- Extended EXPLAIN (deprecated in 5.7; plain EXPLAIN includes this output in 8.0)
EXPLAIN EXTENDED SELECT * FROM orders WHERE customer_id = 123;
SHOW WARNINGS;

-- EXPLAIN with JSON format (MySQL 5.7+)
EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 123;

-- EXPLAIN ANALYZE (MySQL 8.0+) actually executes the query
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 123;

-- Key indicators to check:
-- type: system > const > eq_ref > ref > range > index > ALL
-- key: which index is used (NULL = no index)
-- rows: estimated rows examined
-- Extra: "Using filesort" and "Using temporary" are warning signs
```
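The access-type ranking in those comments can double as a triage rule when scanning many plans. A small Python sketch (the `range` threshold is an assumption; tune it to your workload):

```python
# Access types from best to worst, per the EXPLAIN notes above.
ACCESS_TYPES = ["system", "const", "eq_ref", "ref", "range", "index", "ALL"]

def needs_attention(access_type, threshold="range"):
    """Flag execution plans whose access type is worse than the threshold."""
    return ACCESS_TYPES.index(access_type) > ACCESS_TYPES.index(threshold)

print(needs_attention("ref"))  # fast point lookup, fine
print(needs_attention("ALL"))  # full table scan, investigate
```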
Step 3: Add Missing Indexes
```sql
-- Check existing indexes
SHOW INDEX FROM orders;

-- Create an index for the WHERE clause
CREATE INDEX idx_customer_id ON orders(customer_id);

-- Create a composite index for multiple columns
CREATE INDEX idx_customer_status ON orders(customer_id, status);

-- Column order matters in (customer_id, status):
--   customer_id = X AND status = Y  -> uses both columns
--   customer_id = X                 -> uses the leftmost column
--   status = Y                      -> cannot use the index

-- Create a covering index (includes all columns the query needs)
CREATE INDEX idx_covering ON orders(customer_id, status, total_amount);

-- Index for ORDER BY
CREATE INDEX idx_date ON orders(created_at);

-- For ORDER BY with LIMIT
CREATE INDEX idx_date_customer ON orders(created_at, customer_id);

-- Check index usage after creation
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;
-- type should be ref, key should show the index name
```
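The leftmost-prefix rule above is easy to get wrong for wider indexes. As a rough mental model in Python (column names are illustrative, and this simplification ignores range predicates and index skip-scan):

```python
def index_usable(index_cols, where_cols):
    """A composite index serves equality predicates only on a leftmost
    prefix of its column list; return the usable prefix."""
    usable = []
    for col in index_cols:
        if col not in where_cols:
            break  # first gap ends the usable prefix
        usable.append(col)
    return usable

idx = ["customer_id", "status"]
print(index_usable(idx, {"customer_id", "status"}))  # both columns used
print(index_usable(idx, {"customer_id"}))            # leftmost prefix only
print(index_usable(idx, {"status"}))                 # index unusable
```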
Step 4: Optimize Query Structure
```sql
-- AVOID: SELECT * (fetches all columns, more I/O)
SELECT * FROM orders WHERE customer_id = 123;

-- BETTER: select only the needed columns
SELECT id, customer_id, total_amount FROM orders WHERE customer_id = 123;

-- AVOID: functions on indexed columns (index not used!)
SELECT * FROM orders WHERE YEAR(created_at) = 2026;

-- BETTER: range query
SELECT * FROM orders
WHERE created_at >= '2026-01-01' AND created_at < '2027-01-01';

-- AVOID: LIKE with a leading wildcard (full scan!)
SELECT * FROM customers WHERE name LIKE '%john%';

-- BETTER: prefix match
SELECT * FROM customers WHERE name LIKE 'john%';

-- AVOID: OR across different columns (may not use either index)
SELECT * FROM orders WHERE customer_id = 123 OR status = 'pending';

-- BETTER: UNION (each branch can use its own index)
SELECT * FROM orders WHERE customer_id = 123
UNION
SELECT * FROM orders WHERE status = 'pending';

-- AVOID: IN subqueries in WHERE
SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE status = 'active');

-- BETTER: JOIN
SELECT o.*
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE c.status = 'active';
```
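The `YEAR()`-to-range rewrite above is mechanical enough to script when cleaning up many queries. A tiny Python sketch (`year_to_range` is a hypothetical helper; the column name is illustrative):

```python
def year_to_range(year, column="created_at"):
    """Rewrite YEAR(column) = <year> as an index-friendly,
    half-open date range."""
    return f"{column} >= '{year}-01-01' AND {column} < '{year + 1}-01-01'"

print(year_to_range(2026))
# created_at >= '2026-01-01' AND created_at < '2027-01-01'
```

The half-open form (`>=` plus `<`) avoids off-by-one trouble with timestamps like `2026-12-31 23:59:59.999`.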
Step 5: Optimize JOINs
```sql
-- Check join execution
EXPLAIN
SELECT o.*, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE c.status = 'active';

-- Ensure join columns are indexed
-- (customers.id is the primary key, so it is already indexed)
CREATE INDEX idx_orders_customer ON orders(customer_id);

-- Join order matters:
--   start with the smallest filtered table
--   filter early

-- Use explicit join conditions
SELECT o.*, c.name
FROM orders o
INNER JOIN customers c ON o.customer_id = c.id
WHERE c.status = 'active';

-- Avoid CROSS JOIN unless intentional.
-- Use STRAIGHT_JOIN to force join order (only if the optimizer chooses poorly)
SELECT STRAIGHT_JOIN o.*, c.name
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE c.status = 'active';
```
Step 6: Optimize Subqueries
```sql
-- Avoid correlated subqueries (executed once per outer row)
SELECT * FROM orders o
WHERE total_amount > (
  SELECT AVG(total_amount) FROM orders WHERE customer_id = o.customer_id
);

-- Better: use a JOIN against a derived table
SELECT o.*
FROM orders o
JOIN (
  SELECT customer_id, AVG(total_amount) AS avg_amount
  FROM orders
  GROUP BY customer_id
) a ON o.customer_id = a.customer_id
WHERE o.total_amount > a.avg_amount;

-- Use EXISTS instead of IN for large subqueries
SELECT * FROM orders o
WHERE EXISTS (
  SELECT 1 FROM customers c
  WHERE c.id = o.customer_id AND c.status = 'active'
);
```
Step 7: Partition Large Tables
```sql
-- Check whether the table is large enough to justify partitioning
SELECT table_name,
       table_rows,
       data_length/1024/1024  AS data_mb,
       index_length/1024/1024 AS index_mb
FROM information_schema.tables
WHERE table_schema = 'mydb';

-- Partition by range (e.g., by year).
-- Note: the partitioning column must be part of every unique key,
-- including the primary key.
ALTER TABLE orders PARTITION BY RANGE (YEAR(created_at)) (
  PARTITION p2023 VALUES LESS THAN (2024),
  PARTITION p2024 VALUES LESS THAN (2025),
  PARTITION p2025 VALUES LESS THAN (2026),
  PARTITION pfuture VALUES LESS THAN MAXVALUE
);

-- Queries benefit from partition pruning
SELECT * FROM orders WHERE created_at >= '2025-01-01';
-- Scans only p2025 and pfuture, not the older partitions

-- Check partition usage (MySQL 5.7+ shows a partitions column by default;
-- EXPLAIN PARTITIONS is the older syntax)
EXPLAIN SELECT * FROM orders WHERE created_at >= '2025-01-01';
```
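Partition pruning follows directly from the `VALUES LESS THAN` bounds. A Python sketch of how a row's year maps to a partition in the layout above (a simplified model of RANGE partitioning, not MySQL's actual implementation):

```python
import bisect

# (name, VALUES LESS THAN bound) pairs, mirroring the ALTER TABLE above
PARTITIONS = [("p2023", 2024), ("p2024", 2025),
              ("p2025", 2026), ("pfuture", float("inf"))]

def pick_partition(year, partitions=PARTITIONS):
    """RANGE partitioning places a row in the first partition whose
    upper bound is strictly greater than the partitioning value."""
    names = [name for name, _ in partitions]
    uppers = [upper for _, upper in partitions]
    return names[bisect.bisect_right(uppers, year)]

print(pick_partition(2025))  # p2025
print(pick_partition(2030))  # pfuture
```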
Step 8: Update Table Statistics
```sql
-- Check last analyze
SHOW TABLE STATUS LIKE 'orders';

-- Update statistics
ANALYZE TABLE orders;

-- Analyze all tables in a database (from the shell, not the SQL prompt):
--   mysqlcheck -u root -p --analyze mydb

-- Force index usage if the optimizer chooses wrong
SELECT * FROM orders FORCE INDEX (idx_customer_id)
WHERE customer_id = 123;

-- Or ignore a specific index
SELECT * FROM orders IGNORE INDEX (idx_date)
WHERE customer_id = 123 AND created_at > '2025-01-01';
```
Step 9: Optimize Server Configuration
```sql
-- Check buffer pool size
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- Should be roughly 70-80% of RAM on a dedicated DB server

-- Increase buffer pool (dynamic since MySQL 5.7.5)
SET GLOBAL innodb_buffer_pool_size = 4294967296; -- 4GB

-- To persist across restarts, in my.cnf:
-- [mysqld]
-- innodb_buffer_pool_size = 4G

-- Check query cache (MySQL 5.7 only; removed in 8.0)
SHOW VARIABLES LIKE 'query_cache%';

-- Enable query cache only if measurably beneficial;
-- it often hurts write-heavy workloads
SET GLOBAL query_cache_size = 67108864;
SET GLOBAL query_cache_type = ON;

-- Check sort buffer
SHOW VARIABLES LIKE 'sort_buffer_size';

-- Increase for large sorts (the default is 256KB; this buffer is
-- allocated per session, so keep it modest)
SET GLOBAL sort_buffer_size = 2097152; -- 2MB

-- Check join buffer
SHOW VARIABLES LIKE 'join_buffer_size';
```
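The 70-80% rule of thumb is easy to turn into a quick sanity check. A Python sketch (the 75% and 25% fractions are assumptions; always leave headroom for connections, temporary tables, and the OS):

```python
def buffer_pool_target(total_ram_gb, dedicated=True):
    """Rough innodb_buffer_pool_size target in GB: ~75% of RAM on a
    dedicated DB host, far less on a shared one."""
    fraction = 0.75 if dedicated else 0.25
    return round(total_ram_gb * fraction, 1)

print(buffer_pool_target(16))                   # dedicated 16GB host -> 12.0
print(buffer_pool_target(16, dedicated=False))  # shared host -> 4.0
```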
Step 10: Monitor Query Performance
```bash
# Create monitoring script
cat << 'EOF' > monitor_mysql_queries.sh
#!/bin/bash
mysql -e "
SELECT DIGEST_TEXT AS query,
       COUNT_STAR AS executions,
       ROUND(AVG_TIMER_WAIT/1000000000, 2) AS avg_ms,
       ROUND(MAX_TIMER_WAIT/1000000000, 2) AS max_ms,
       SUM_ROWS_EXAMINED AS rows_scanned
FROM performance_schema.events_statements_summary_by_digest
ORDER BY AVG_TIMER_WAIT DESC
LIMIT 20;
"

echo ""
echo "=== Current Running Queries ==="
mysql -e "SHOW PROCESSLIST" | grep -v Sleep
EOF

chmod +x monitor_mysql_queries.sh

# Monitor with MySQL Enterprise Monitor or PMM
# Key metrics:
# - Queries per second
# - Average query latency
# - Full table scans
# - Slow query count

# Set up alerts for slow queries
# In Prometheus/Grafana:
# mysql_global_status_slow_queries > 100
```
MySQL Slow Query Checklist
| Check | Command | Expected |
|---|---|---|
| Execution plan | EXPLAIN | type: ref/range, not ALL |
| Index used | EXPLAIN key | Index name shown |
| Rows examined | EXPLAIN rows | Reasonable for result |
| Buffer pool hit | SHOW STATUS | Buffer pool hit > 99% |
| Slow queries | slow_log | Few or none |
| Table stats | SHOW TABLE STATUS | Up to date |
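The buffer-pool hit rate in the checklist is derived from two real status counters: `Innodb_buffer_pool_read_requests` (logical reads) and `Innodb_buffer_pool_reads` (reads that missed the pool and went to disk). In Python:

```python
def hit_rate(read_requests, disk_reads):
    """Buffer pool hit rate (%): the fraction of logical reads
    served from memory rather than disk."""
    return 100 * (1 - disk_reads / read_requests)

# Example counter values (made up): 1M logical reads, 5k disk reads
print(round(hit_rate(1_000_000, 5_000), 2))  # 99.5
```

Anything sustained below ~99% on a read-heavy workload usually means the buffer pool is too small for the working set.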
Verify the Fix
```bash
# After adding indexes and optimizing queries

# 1. Check EXPLAIN output
mysql -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 123"
# Expect type: ref, key: idx_customer_id

# 2. Measure query time (SQL_NO_CACHE bypasses the 5.7 query cache)
time mysql -e "SELECT SQL_NO_CACHE * FROM orders WHERE customer_id = 123"
# Should be < 100ms

# 3. Check slow query log
tail /var/log/mysql/mysql-slow.log
# Fewer or no new entries

# 4. Monitor query performance
./monitor_mysql_queries.sh
# Lower avg_ms

# 5. Test with production-like load
sysbench oltp_read_only run
# Improved throughput

# 6. Verify buffer pool hit rate
mysql -e "SHOW STATUS LIKE 'Innodb_buffer_pool_read%'"
# Innodb_buffer_pool_reads should be tiny relative to read_requests
```
Related Issues
- [Fix MySQL Deadlock Detected](/articles/fix-mysql-deadlock-detected)
- [Fix MySQL Connection Refused](/articles/fix-mysql-connection-refused)
- [Fix MySQL Lock Wait Timeout](/articles/fix-mysql-lock-wait-timeout)