# MySQL Slow Queries: Query Optimization Guide
Your application feels sluggish. Pages load slowly, and you suspect MySQL is the culprit. The slow query log shows queries taking seconds or even minutes. Let's diagnose and optimize these performance bottlenecks.
## Step 1: Enable the Slow Query Log
First, ensure slow query logging is enabled:
```sql
-- Check current settings
SHOW VARIABLES LIKE 'slow_query_log';
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'slow_query_log_file';
```

Enable it:
```sql
-- Enable slow query log
SET GLOBAL slow_query_log = 'ON';

-- Set threshold (log queries taking longer than 2 seconds)
SET GLOBAL long_query_time = 2;

-- Also log queries not using indexes
SET GLOBAL log_queries_not_using_indexes = 'ON';
```
For permanent configuration, add to `/etc/mysql/my.cnf`:

```ini
[mysqld]
slow_query_log = 1
long_query_time = 2
slow_query_log_file = /var/log/mysql/mysql-slow.log
log_queries_not_using_indexes = 1
```

Restart MySQL:

```bash
sudo systemctl restart mysql
```

## Step 2: Identify Slow Queries
View recent slow queries:
```bash
# Read the slow query log
sudo tail -100 /var/log/mysql/mysql-slow.log
```

Use `mysqldumpslow` to summarize:
```bash
# Top 10 queries by execution time
mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log

# Top 10 queries by count
mysqldumpslow -s c -t 10 /var/log/mysql/mysql-slow.log

# Top 10 queries by lock time
mysqldumpslow -s l -t 10 /var/log/mysql/mysql-slow.log
```
Query the Performance Schema (MySQL 5.7+):

```sql
-- Make sure statement instrumentation is enabled
UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME LIKE 'events_statements%';

-- Find the slowest queries (timer columns are in picoseconds)
SELECT
    DIGEST_TEXT,
    COUNT_STAR AS executions,
    ROUND(SUM_TIMER_WAIT / 1000000000000, 2) AS total_time_sec,
    ROUND(AVG_TIMER_WAIT / 1000000000000, 4) AS avg_time_sec,
    ROUND(MAX_TIMER_WAIT / 1000000000000, 4) AS max_time_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 20;
```
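The conversion factor above is easy to get wrong: Performance Schema `TIMER_WAIT` columns report picoseconds, so seconds require dividing by 10^12, not 10^9. A one-line helper makes the arithmetic explicit:

```python
# Performance Schema TIMER_WAIT columns are reported in picoseconds.
PICOSECONDS_PER_SECOND = 1_000_000_000_000

def timer_to_seconds(timer_wait_ps):
    """Convert a Performance Schema timer value (picoseconds) to seconds."""
    return timer_wait_ps / PICOSECONDS_PER_SECOND

# A query that accumulated 4.5e12 ps of wait time ran for 4.5 seconds total.
print(timer_to_seconds(4_500_000_000_000))  # → 4.5
```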
Find currently running queries:
```sql
SHOW PROCESSLIST;

-- Or, for more detail
SELECT id, user, host, db, TIME AS duration_sec, state, info AS query
FROM information_schema.processlist
WHERE command != 'Sleep'
ORDER BY time DESC;
```
## Step 3: Analyze Query Execution Plans
For any slow query, use EXPLAIN to understand its execution:
```sql
EXPLAIN SELECT u.name, o.total
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.email LIKE '%gmail.com'
AND o.created_at > '2026-01-01';
```

For more detailed output (MySQL 5.6+):

```sql
EXPLAIN FORMAT=JSON SELECT ...;
```

Key columns to examine:
| Column | What to Look For |
|---|---|
| type | Avoid ALL (full table scan). Look for eq_ref, ref, range |
| key | Which index is actually being used (NULL = no index) |
| rows | Estimated rows examined (high numbers indicate problems) |
| Extra | "Using filesort" or "Using temporary" can be slow |
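The checks in the table can also be automated against `EXPLAIN FORMAT=JSON` output. The sketch below walks a plan's JSON tree and flags the same warning signs; it handles only simple plans (real plans nest joins, subqueries, and unions more deeply), but the keys it inspects (`access_type`, `table_name`, `using_filesort`, `using_temporary_table`) are the ones MySQL emits:

```python
import json

def find_issues(plan_json):
    """Recursively scan EXPLAIN FORMAT=JSON output for common problem signs."""
    issues = []

    def walk(node):
        if isinstance(node, dict):
            if node.get("access_type") == "ALL":
                issues.append(f"full table scan on {node.get('table_name', '?')}")
            if node.get("using_filesort"):
                issues.append("filesort")
            if node.get("using_temporary_table"):
                issues.append("temporary table")
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)

    walk(json.loads(plan_json))
    return issues

# Trimmed-down plan fragment for illustration.
sample = '{"query_block": {"table": {"table_name": "orders", "access_type": "ALL"}}}'
print(find_issues(sample))  # → ['full table scan on orders']
```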
Example analysis:
```sql
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;
```

```text
+----+-------------+--------+------+---------------+------+---------+------+--------+----------+-------------+
| id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows   | filtered | Extra       |
+----+-------------+--------+------+---------------+------+---------+------+--------+----------+-------------+
|  1 | SIMPLE      | orders | ALL  | NULL          | NULL | NULL    | NULL | 500000 |    10.00 | Using where |
+----+-------------+--------+------+---------------+------+---------+------+--------+----------+-------------+
```

This shows a full table scan (`type: ALL`) with no index used (`key: NULL`) across an estimated 500,000 rows. This query needs an index.
## Step 4: Create Appropriate Indexes
Basic index creation:
```sql
-- Single-column index
CREATE INDEX idx_customer_id ON orders(customer_id);

-- Composite index (column order matters!)
CREATE INDEX idx_customer_date ON orders(customer_id, created_at);

-- Covering index (includes all selected columns)
CREATE INDEX idx_customer_covering ON orders(customer_id, created_at, total);
```
After creating indexes, verify they're used:
```sql
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;
```

Now you should see:

```text
type: ref
key: idx_customer_id
rows: 50 (much lower)
```

Index best practices:
1. Index columns used in WHERE, JOIN, and ORDER BY clauses.
2. Put the most selective column first in a composite index.
3. Use covering indexes to avoid table lookups.
4. Don't over-index: every index slows down writes.
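Selectivity, the ratio of distinct values to total rows, is the usual yardstick behind rule 2: a more selective column narrows the search further. A toy sketch of the calculation (column data invented for illustration; in practice you would compare `COUNT(DISTINCT col)` against `COUNT(*)`):

```python
def selectivity(values):
    """Rough selectivity estimate: distinct values / total rows."""
    return len(set(values)) / len(values)

# Invented sample columns.
status_col = ["pending", "shipped", "pending", "shipped"]  # low cardinality
customer_col = [101, 102, 103, 104]                        # high cardinality

print(selectivity(status_col))    # → 0.5
print(selectivity(customer_col))  # → 1.0
# customer_id is far more selective, so it usually belongs before status
# in a composite index when both are filtered with equality.
```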
Check existing indexes:
```sql
SHOW INDEX FROM orders;

-- Or, in more detail
SELECT table_name, index_name, column_name, seq_in_index, non_unique
FROM information_schema.statistics
WHERE table_schema = 'mydb'
ORDER BY table_name, index_name, seq_in_index;
```
## Step 5: Optimize Query Patterns
### Avoid SELECT *
```sql
-- Bad
SELECT * FROM users WHERE id = 123;

-- Good
SELECT id, name, email FROM users WHERE id = 123;
```
### Use LIMIT for Large Result Sets
```sql
-- Bad: retrieves all matching rows
SELECT * FROM logs WHERE level = 'error';

-- Good: limits results
SELECT * FROM logs WHERE level = 'error' LIMIT 1000;
```
### Optimize LIKE Queries
```sql
-- Bad: a leading wildcard prevents index use
SELECT * FROM users WHERE email LIKE '%gmail.com';

-- Better: a trailing wildcard can use an index
SELECT * FROM users WHERE email LIKE 'john%';

-- Best: full-text search (requires a FULLTEXT index on the column)
SELECT * FROM users WHERE MATCH(profile) AGAINST('developer');
```
### Optimize JOINs
```sql
-- Ensure join columns are indexed
CREATE INDEX idx_user_id ON orders(user_id);

-- Use explicit JOIN conditions
SELECT u.name, COUNT(o.id) AS order_count
FROM users u
INNER JOIN orders o ON u.id = o.user_id
WHERE u.created_at > '2026-01-01'
GROUP BY u.id
ORDER BY order_count DESC;
```
### Avoid Functions on Indexed Columns
```sql
-- Bad: wrapping the column in a function prevents index use
SELECT * FROM orders WHERE DATE(created_at) = '2026-04-01';

-- Good: an equivalent range query uses the index
SELECT * FROM orders
WHERE created_at >= '2026-04-01' AND created_at < '2026-04-02';
```
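The rewrite works because `DATE(created_at) = X` is equivalent to the half-open range `[X, X + 1 day)`, and the range form keeps the comparison on the raw column so the index stays usable. The boundary arithmetic can be sketched as:

```python
from datetime import date, timedelta

def day_range(day: date):
    """Return the half-open range [day, day + 1) equivalent to DATE(col) = day."""
    return day, day + timedelta(days=1)

start, end = day_range(date(2026, 4, 1))
print(start, end)  # → 2026-04-01 2026-04-02
# SQL equivalent: WHERE created_at >= start AND created_at < end
```

Using a half-open range (`<` on the upper bound, not `<=`) avoids off-by-one issues with sub-second timestamps at the end of the day.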
### Use UNION Instead of OR
```sql
-- Bad: OR can prevent index use
SELECT * FROM orders WHERE status = 'pending' OR status = 'processing';

-- Good: UNION lets each branch use the index
SELECT * FROM orders WHERE status = 'pending'
UNION
SELECT * FROM orders WHERE status = 'processing';

-- Even better: an IN clause
SELECT * FROM orders WHERE status IN ('pending', 'processing');
```
### Optimize Subqueries
```sql
-- Bad: a dependent subquery may be evaluated once per outer row
SELECT * FROM users u
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.user_id = u.id);

-- Good: a JOIN is often more efficient (DISTINCT removes the duplicate
-- rows the join produces for users with multiple orders)
SELECT DISTINCT u.*
FROM users u
INNER JOIN orders o ON u.id = o.user_id;
```
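The `DISTINCT` in the JOIN version is not optional: a join produces one row per matching order, while `EXISTS` only tests membership. A toy illustration of the two semantics using plain lists (sample data invented):

```python
users = [(1, "Ana"), (2, "Bo")]
orders = [(10, 1), (11, 1)]  # (order_id, user_id): Ana has two orders, Bo has none

# EXISTS semantics: each user appears at most once.
with_exists = [u for u in users if any(o[1] == u[0] for o in orders)]

# Plain join semantics: one row per (user, order) pair, so Ana is
# duplicated until DISTINCT collapses the result.
joined = [u for u in users for o in orders if o[1] == u[0]]
distinct = list(dict.fromkeys(joined))  # DISTINCT, preserving order

print(with_exists)  # → [(1, 'Ana')]
print(joined)       # → [(1, 'Ana'), (1, 'Ana')]
print(distinct)     # → [(1, 'Ana')]
```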
## Step 6: Analyze and Optimize Tables
Check table statistics:
```sql
SHOW TABLE STATUS LIKE 'orders'\G
```

Update statistics:

```sql
ANALYZE TABLE orders;
```

Optimize fragmented tables:

```sql
OPTIMIZE TABLE orders;
```

Note: `OPTIMIZE TABLE` locks the table. For large tables, use pt-online-schema-change or run it during a maintenance window.
Check for table fragmentation:
```sql
SELECT
    table_name,
    ROUND(data_length / 1024 / 1024, 2) AS data_mb,
    ROUND(index_length / 1024 / 1024, 2) AS index_mb,
    ROUND(data_free / 1024 / 1024, 2) AS free_mb,
    ROUND(data_free / (data_length + index_length) * 100, 2) AS frag_pct
FROM information_schema.tables
WHERE table_schema = 'mydb'
ORDER BY data_free DESC;
```

## Step 7: Configure Query Cache (MySQL 5.7 and earlier)
The query cache was deprecated in MySQL 5.7.20 and removed entirely in MySQL 8.0. For earlier versions:
```ini
[mysqld]
query_cache_type = 1
query_cache_size = 64M
query_cache_limit = 2M
```

For MySQL 8.0+, focus on proper indexing instead.
## Step 8: Monitor Query Performance Over Time
Set up regular monitoring:
```sql
-- Create a monitoring view (timer columns are in picoseconds)
CREATE VIEW slow_query_monitor AS
SELECT
    DIGEST_TEXT,
    COUNT_STAR AS executions,
    ROUND(SUM_TIMER_WAIT / 1000000000000, 2) AS total_sec,
    ROUND(AVG_TIMER_WAIT / 1000000000000, 4) AS avg_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 20;
```

Track query performance:

```sql
-- Compare digest stats before and after optimization
SELECT * FROM slow_query_monitor;
```

## Step 9: Kill Long-Running Queries
When a query runs too long:
```sql
-- Find the query
SHOW PROCESSLIST;

-- Kill it (use the process ID from the list)
KILL 12345;

-- Or generate KILL statements for all queries running longer than 60 seconds
SELECT CONCAT('KILL ', id, ';')
INTO OUTFILE '/tmp/kill_queries.sql'
FROM information_schema.processlist
WHERE time > 60 AND command = 'Query';

SOURCE /tmp/kill_queries.sql;
```
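The statement-generation step is plain string work over processlist rows; a sketch of the same filter-and-format logic, with rows as `(id, command, time_sec)` tuples (sample values invented):

```python
def kill_statements(rows, max_seconds=60):
    """Build KILL statements for queries running past a threshold.

    rows: iterable of (process_id, command, time_sec) tuples, as reported
    by the processlist. Sleeping connections are left alone.
    """
    return [f"KILL {pid};" for pid, command, time_sec in rows
            if command == "Query" and time_sec > max_seconds]

# Invented sample processlist rows.
rows = [(12345, "Query", 95), (12346, "Sleep", 300), (12347, "Query", 5)]
print(kill_statements(rows))  # → ['KILL 12345;']
```

Filtering on `command = 'Query'` matters: idle connections accumulate large `time` values while sleeping, and killing them would only churn the application's connection pool.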
Set a maximum execution time (MySQL 5.7+; applies to SELECT statements only):

```sql
-- Session-wide timeout, in milliseconds
SET SESSION max_execution_time = 30000; -- 30 seconds

-- Or as a per-query optimizer hint
SELECT /*+ MAX_EXECUTION_TIME(30000) */ * FROM large_table;
```
## Real-World Optimization Example
Slow query:
```sql
SELECT u.name, u.email, COUNT(o.id) AS orders, SUM(o.total) AS spent
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.status = 'active'
AND u.created_at > '2025-01-01'
GROUP BY u.id
ORDER BY spent DESC
LIMIT 100;
```

EXPLAIN shows:

- Full table scan on users
- Full table scan on orders
- Using temporary; Using filesort
Optimization steps:
```sql
-- Add indexes
CREATE INDEX idx_user_status_created ON users(status, created_at);
CREATE INDEX idx_order_user_total ON orders(user_id, total);

-- Rewrite query (COUNT never returns NULL, but SUM does for users
-- with no orders, so only spent needs COALESCE)
SELECT u.id, u.name, u.email,
       COUNT(o.id) AS orders,
       COALESCE(SUM(o.total), 0) AS spent
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.status = 'active'
AND u.created_at > '2025-01-01'
GROUP BY u.id
ORDER BY spent DESC
LIMIT 100;
```
Result: Query time reduced from 15 seconds to 0.05 seconds.
## Verification
After optimization:
```sql
-- Check query time
SELECT ...; -- Your optimized query

-- Compare in the performance schema
SELECT DIGEST_TEXT, AVG_TIMER_WAIT
FROM performance_schema.events_statements_summary_by_digest
WHERE DIGEST_TEXT LIKE '%your query%';
```
## Optimization Checklist
- [ ] Enable slow query log
- [ ] Identify slowest queries
- [ ] Run EXPLAIN on each slow query
- [ ] Create appropriate indexes
- [ ] Rewrite inefficient query patterns
- [ ] Update table statistics
- [ ] Monitor for improvements
- [ ] Set up ongoing monitoring
## Quick Reference
```sql
-- Enable slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;

-- Find slow queries (sys schema)
SELECT * FROM sys.statements_with_errors_or_warnings;
SELECT * FROM sys.statements_with_runtimes_in_95th_percentile;

-- Explain a query
EXPLAIN FORMAT=JSON SELECT ...;

-- Create an index
CREATE INDEX idx_name ON table_name(column_name);

-- Update table statistics
ANALYZE TABLE table_name;

-- Kill a query
KILL process_id;
Query optimization is iterative. Start with the slowest queries, measure before and after, and focus on high-impact changes. Proper indexing solves most performance problems.