## Introduction
MySQL deadlock and lock wait timeout errors occur when concurrent transactions compete for locks on the same resources, causing transactions to block or to exceed the lock wait timeout. A deadlock happens when two or more transactions each hold a lock the other needs, creating a circular dependency that MySQL resolves by rolling back one of them. A lock wait timeout occurs when a transaction waits longer than innodb_lock_wait_timeout for a lock held by another transaction. Common causes include transactions updating rows in inconsistent order, gap locks from range queries blocking inserts, next-key locks under REPEATABLE READ isolation, long-running transactions holding locks, missing indexes forcing table scans that lock far more rows than necessary, foreign key constraint checks acquiring locks on related tables, UPDATE/DELETE statements whose WHERE clause does not use an index, and high concurrency on hot rows (counter increments, popular records). Fixing these errors requires understanding InnoDB locking behavior, transaction isolation levels, lock types (record, gap, next-key), deadlock detection, and query optimization. This guide provides production-proven troubleshooting for MySQL deadlocks and lock wait timeouts across MySQL 5.7, 8.0, and MariaDB.
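The circular dependency described above is exactly what InnoDB's deadlock detector looks for: a cycle in the wait-for graph of transactions. A minimal conceptual sketch (illustrative only, not InnoDB's actual implementation):

```python
# Conceptual sketch of deadlock detection: model which transaction waits
# on which, then look for a cycle in that wait-for graph.
# Illustrative only -- not InnoDB's actual algorithm.

def find_deadlock(waits_for):
    """waits_for maps a transaction id to the transaction it is waiting on.
    Returns the transactions forming a cycle, or None if there is none."""
    for start in waits_for:
        seen = []
        current = start
        while current in waits_for:
            if current in seen:
                # Cycle found: these transactions wait on each other forever
                return seen[seen.index(current):]
            seen.append(current)
            current = waits_for[current]
    return None

# A waits on B and B waits on A: the classic AB-BA deadlock
print(find_deadlock({"A": "B", "B": "A"}))  # ['A', 'B']
# A waits on B, B waits on C, C is not waiting: no deadlock
print(find_deadlock({"A": "B", "B": "C"}))  # None
```

When InnoDB finds such a cycle it picks a victim (typically the transaction that has done the least work) and rolls it back with ERROR 1213.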
## Symptoms
- ERROR 1213 (40001): Deadlock found when trying to get lock
- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
- Transaction rolled back automatically
- Application retry logic triggered frequently
- Specific tables or queries cause frequent deadlocks
- Performance degradation under concurrent load
- SHOW ENGINE INNODB STATUS shows deadlock information
- Performance Schema shows lock waits
- Transactions take longer than expected to complete
- Application logs show "try restarting transaction" errors
## Common Causes
- Inconsistent update order across transactions
- Gap locks from range queries
- Missing indexes causing full table scans
- Long-running transactions holding locks
- Foreign key constraint lock conflicts
- Hot row contention (counters, sequences)
- High isolation level (REPEATABLE READ default)
- Implicit locks from non-indexed WHERE clauses
- Batch operations locking many rows
- Autocommit disabled without explicit commits
## Step-by-Step Fix
### 1. Diagnose deadlock and lock waits
Check recent deadlocks:
```sql
-- View latest deadlock information
SHOW ENGINE INNODB STATUS\G

-- Look for the section:
-- ------------------------
-- LATEST DETECTED DEADLOCK
-- ------------------------
-- It shows:
-- - Transactions involved
-- - Locks held by each transaction
-- - Locks requested by each transaction
-- - Victim transaction (rolled back)
-- - SQL statements causing the deadlock
-- Example deadlock output:
-- *** (1) TRANSACTION:
-- TRANSACTION 12345, ACTIVE 0 sec starting index read
-- mysql tables in use 1, locked 1
-- LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s)
-- MySQL thread id 10, OS thread handle 1234, query id 5678 localhost root updating
-- UPDATE accounts SET balance = balance - 100 WHERE id = 1
--
-- *** (1) HOLDS THE LOCK(S):
-- RECORD LOCKS space id 1 page no 3 n bits 72 index PRIMARY of table test.accounts
-- trx id 12345 lock_mode X
-- Record lock, heap no 4 PHYSICAL RECORD: n_fields 4
--
-- *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
-- RECORD LOCKS space id 1 page no 5 n bits 72 index PRIMARY of table test.accounts
-- trx id 12345 lock_mode X lock_rec waiting
--
-- *** (2) TRANSACTION:
-- ... (similar info for second transaction)
--
-- *** WE ROLL BACK TRANSACTION (1)
```
Check current lock waits:
```sql
-- MySQL 8.0+: innodb_lock_waits was removed from information_schema;
-- use the sys schema view (backed by performance_schema.data_lock_waits)
SELECT waiting_trx_id, waiting_pid, waiting_query,
       blocking_trx_id, blocking_pid, blocking_query
FROM sys.innodb_lock_waits;

-- MySQL 5.7: query information_schema directly
SELECT r.trx_id AS waiting_trx_id,
       r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query AS waiting_query,
       b.trx_id AS blocking_trx_id,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query AS blocking_query
FROM information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;

-- Check running transactions
SELECT * FROM information_schema.innodb_trx\G

-- Key columns:
-- trx_state: RUNNING, LOCK WAIT, ROLLING BACK
-- trx_started: when the transaction started
-- trx_mysql_thread_id: thread ID to KILL if needed
-- trx_query: current SQL statement
```
Monitor locks:
```sql
-- MySQL 8.0+: inspect InnoDB locks directly
SELECT engine_transaction_id, thread_id,
       object_schema, object_name, index_name,
       lock_type, lock_mode, lock_status, lock_data
FROM performance_schema.data_locks;

-- View table locks
SHOW OPEN TABLES WHERE In_use > 0;

-- Show process list
SHOW FULL PROCESSLIST;

-- Look for:
-- - State: "updating", "Locked"
-- - Time: long-running queries
-- - Info: SQL statement
```
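Because SHOW ENGINE INNODB STATUS returns one large text blob, it helps to pull out just the deadlock section for logging or alerting. A sketch assuming the section markers shown above (the exact layout varies slightly across MySQL and MariaDB versions):

```python
# Extract the LATEST DETECTED DEADLOCK section from SHOW ENGINE INNODB
# STATUS output. Assumes the section markers shown above; exact layout
# varies slightly across versions.

def extract_deadlock_section(status_text):
    lines = status_text.splitlines()
    start = next((i for i, line in enumerate(lines)
                  if "LATEST DETECTED DEADLOCK" in line), None)
    if start is None:
        return None  # no deadlock recorded since server start
    section = []
    for line in lines[start:]:
        if line.startswith("TRANSACTIONS"):  # next top-level section begins
            break
        section.append(line)
    return "\n".join(section).strip()
```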
### 2. Fix transaction ordering
Standardize update order:
```sql
-- DEADLOCK SCENARIO:
-- Transaction A:
--   UPDATE accounts SET balance = balance - 100 WHERE id = 1;
--   UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- Transaction B:
--   UPDATE accounts SET balance = balance - 50 WHERE id = 2;
--   UPDATE accounts SET balance = balance + 50 WHERE id = 1;
-- Result: A holds id=1 and wants id=2; B holds id=2 and wants id=1 = DEADLOCK

-- FIX: Always update in the same order (e.g., by primary key)
-- Transaction A:
--   UPDATE accounts SET balance = balance - 100 WHERE id = 1;
--   UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- Transaction B:
--   UPDATE accounts SET balance = balance + 50 WHERE id = 1;
--   UPDATE accounts SET balance = balance - 50 WHERE id = 2;
-- Both update id=1 first, then id=2 = NO DEADLOCK
```
Application-level locking:
```python
# Python example: acquire locks in consistent order
def transfer_money(conn, from_id, to_id, amount):
    # Sort IDs so every transaction acquires locks in the same order
    lock_order = sorted([from_id, to_id])

    conn.start_transaction()
    cursor = conn.cursor()

    # Lock both rows in sorted order
    for account_id in lock_order:
        cursor.execute(
            "SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
            (account_id,),
        )
        cursor.fetchall()

    # Both rows are locked now, so the update order no longer matters
    cursor.execute(
        "UPDATE accounts SET balance = balance - %s WHERE id = %s",
        (amount, from_id),
    )
    cursor.execute(
        "UPDATE accounts SET balance = balance + %s WHERE id = %s",
        (amount, to_id),
    )

    conn.commit()
```
Use SELECT ... FOR UPDATE:
```sql
-- Lock rows before updating to ensure a consistent order
START TRANSACTION;

-- Lock rows in primary key order
SELECT * FROM accounts WHERE id IN (1, 2, 3) ORDER BY id FOR UPDATE;

-- Now update in the same order
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

COMMIT;
```
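The same sorted-order rule applies to plain application-level locks. In this illustrative sketch with Python threading locks (the names are hypothetical), every thread takes the lower-keyed lock first, so the AB-BA cycle can never form:

```python
import threading

# Locks keyed by account id, standing in for row locks
locks = {1: threading.Lock(), 2: threading.Lock()}
balances = {1: 500, 2: 500}

def transfer(from_id, to_id, amount):
    first, second = sorted((from_id, to_id))  # consistent global lock order
    with locks[first], locks[second]:
        balances[from_id] -= amount
        balances[to_id] += amount

# Two opposite-direction transfers: an AB-BA deadlock risk if each
# thread locked its own from_id first
t1 = threading.Thread(target=transfer, args=(1, 2, 100))
t2 = threading.Thread(target=transfer, args=(2, 1, 50))
t1.start(); t2.start()
t1.join(); t2.join()
print(balances)  # {1: 450, 2: 550}
```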
### 3. Reduce lock granularity
Use appropriate indexes:
```sql
-- Without an index: a full table scan locks every row it examines
UPDATE orders SET status = 'shipped' WHERE customer_id = 123;
-- Problem: with no index on customer_id, InnoDB scans and locks all rows

-- With an index: only matching rows are locked
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);

-- Verify the index is used
EXPLAIN UPDATE orders SET status = 'shipped' WHERE customer_id = 123;
-- Look for: type = 'ref' or 'range' (not 'ALL')
```
Avoid gap locks:
```sql
-- Gap locks occur with range queries under REPEATABLE READ

-- This acquires gap locks:
SELECT * FROM orders WHERE order_date > '2024-01-01' FOR UPDATE;
-- Locks: all rows with order_date > '2024-01-01' PLUS the gaps between them

-- An insert into that gap will block:
INSERT INTO orders (order_date, ...) VALUES ('2024-01-15', ...);
-- BLOCKED by the gap lock

-- FIX 1: Use a unique index lookup (no gap locks for unique equality matches)
SELECT * FROM orders WHERE order_id = 123 FOR UPDATE;
-- Only locks the specific row, no gap

-- FIX 2: Use READ COMMITTED isolation
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Gap locks are mostly disabled under READ COMMITTED
-- (still used for FK and duplicate-key checks)

-- FIX 3: Use point queries instead of ranges
SELECT * FROM orders WHERE order_id IN (1, 2, 3) FOR UPDATE;
```
Understand lock types:
```sql
-- Record lock: locks a specific index record
SELECT * FROM users WHERE id = 1 FOR UPDATE;

-- Gap lock: locks only the gap between index records, not the records

-- Next-key lock: record lock + the gap before it
-- (the REPEATABLE READ default for range scans)
SELECT * FROM users WHERE id > 1 AND id < 10 FOR UPDATE;
-- Locks: the matching records PLUS the gaps around them,
-- so inserts into the range are blocked

-- Check what locks your query acquires (MySQL 8.0+)
SELECT * FROM performance_schema.data_locks WHERE object_schema = 'your_db'\G
```
### 4. Optimize transaction duration
Keep transactions short:
```sql
-- BAD: long-running transaction holds locks too long
START TRANSACTION;
SELECT * FROM large_table WHERE ...;   -- takes 30 seconds
-- Process data in the application (20 seconds)
UPDATE large_table SET ... WHERE ...;  -- wants a lock, may deadlock
COMMIT;

-- GOOD: minimize the time between START TRANSACTION and COMMIT
START TRANSACTION;
SELECT * FROM large_table WHERE ... FOR UPDATE;  -- lock immediately
UPDATE large_table SET ... WHERE ...;            -- update while holding the lock
COMMIT;                                          -- release immediately

-- Process data outside the transaction if possible
SELECT * FROM large_table WHERE ...;   -- no lock
-- Process data
START TRANSACTION;
UPDATE large_table SET ... WHERE ...;  -- short transaction
COMMIT;
```
Avoid long SELECT ... FOR UPDATE:
```sql
-- Problem: holding locks while fetching a large result set
SELECT * FROM orders WHERE status = 'pending' FOR UPDATE;
-- With 10000 rows, all locks are held while the rows are processed

-- Solution: process in batches
-- (LOOP syntax must run inside a stored procedure)
SET @batch_size = 100;
SET @last_id = 0;

batch_loop: LOOP
    START TRANSACTION;

    SELECT GROUP_CONCAT(id) INTO @ids
    FROM (
        SELECT id FROM orders
        WHERE status = 'pending' AND id > @last_id
        ORDER BY id
        LIMIT 100  -- LIMIT cannot take a user variable in plain SQL
        FOR UPDATE
    ) t;

    IF @ids IS NULL THEN
        COMMIT;
        LEAVE batch_loop;
    END IF;

    -- Process this batch
    UPDATE orders SET status = 'processing' WHERE FIND_IN_SET(id, @ids);

    SET @last_id = (SELECT MAX(id) FROM orders WHERE FIND_IN_SET(id, @ids));

    COMMIT;
END LOOP;
```
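This batch pattern is often simpler to drive from the application than from a stored procedure. A sketch of the control flow with the database steps injected as callables (`fetch_batch` standing in for the SELECT ... FOR UPDATE, `process_batch` for the UPDATE and COMMIT; both names are hypothetical):

```python
def process_in_batches(fetch_batch, process_batch, batch_size=100):
    """Claim and process rows in small chunks so each transaction
    holds its row locks only briefly."""
    last_id = 0
    total = 0
    while True:
        ids = fetch_batch(last_id, batch_size)  # e.g. SELECT ... FOR UPDATE in a fresh txn
        if not ids:
            break
        process_batch(ids)                      # e.g. UPDATE ...; COMMIT releases the locks
        last_id = max(ids)                      # keyset pagination: resume after the last id
        total += len(ids)
    return total

# In-memory demo: 250 pending ids processed 100 at a time
pending = list(range(1, 251))
processed = []
fetch = lambda last, n: [i for i in pending if i > last][:n]
print(process_in_batches(fetch, processed.extend))  # 250
```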
### 5. Configure lock timeouts
Adjust lock wait timeout:
```sql
-- Check current timeout
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- Default: 50 seconds

-- Set session timeout
SET SESSION innodb_lock_wait_timeout = 30;   -- shorter, to fail fast
SET SESSION innodb_lock_wait_timeout = 120;  -- longer, for batch operations

-- Set global timeout (dynamic: affects new connections, no restart needed)
SET GLOBAL innodb_lock_wait_timeout = 50;

-- Deadlock detection (MySQL 5.7.15+ / 8.0)
SHOW VARIABLES LIKE 'innodb_deadlock_detect';
-- Default: ON (automatic deadlock detection)

-- Optionally disable detection for very high-concurrency workloads
-- (detection itself can become a bottleneck; rely on the timeout instead)
SET GLOBAL innodb_deadlock_detect = OFF;
```
Handle deadlock in application:
```python
import time

import MySQLdb
from MySQLdb import OperationalError

DEADLOCK = 1213   # ERROR 1213: deadlock found
LOCK_WAIT = 1205  # ERROR 1205: lock wait timeout exceeded

def execute_with_retry(cursor, query, params, max_retries=3):
    for attempt in range(max_retries):
        try:
            cursor.execute(query, params)
            return cursor.fetchall()
        except OperationalError as e:
            if e.args[0] == DEADLOCK:
                if attempt < max_retries - 1:
                    time.sleep(0.1 * (2 ** attempt))  # exponential backoff
                    continue
                raise
            elif e.args[0] == LOCK_WAIT:
                if attempt < max_retries - 1:
                    time.sleep(1)
                    continue
                raise
            else:
                raise

# Usage with explicit retry logic around a whole transaction
conn = MySQLdb.connect(...)
cursor = conn.cursor()

max_retries = 3
for attempt in range(max_retries):
    try:
        cursor.execute("START TRANSACTION")
        cursor.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                       (amount, from_id))
        cursor.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                       (amount, to_id))
        cursor.execute("COMMIT")
        break
    except OperationalError as e:
        cursor.execute("ROLLBACK")
        if e.args[0] == DEADLOCK and attempt < max_retries - 1:
            time.sleep(0.1 * (2 ** attempt))
            continue
        raise
```
### 6. Fix foreign key lock conflicts
Check foreign key constraints:
```sql
-- Show foreign keys for a table
SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME,
       REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = 'your_database'
  AND REFERENCED_TABLE_NAME IS NOT NULL;

-- Deadlock scenario with foreign keys:
-- Parent table: users (id)
-- Child table: orders (id, user_id)
--
-- Transaction A: DELETE FROM users WHERE id = 1;
--   - Locks the users row
--   - Locks the orders index for the FK check
-- Transaction B: INSERT INTO orders (user_id, ...) VALUES (1, ...);
--   - Locks the orders row
--   - Locks the users index for the FK check
-- DEADLOCK!

-- FIX: delete child records first
START TRANSACTION;
DELETE FROM orders WHERE user_id = 1;  -- delete children first
DELETE FROM users WHERE id = 1;        -- then delete the parent
COMMIT;
```
Optimize foreign key indexes:
```sql
-- Ensure FK columns are indexed
-- MySQL automatically indexes FK columns if no suitable index exists, but verify:
SHOW INDEX FROM orders WHERE Key_name LIKE '%ibfk%';

-- If missing, add an index
ALTER TABLE orders ADD INDEX idx_user_id (user_id);

-- For cascading deletes, ensure indexes exist on both sides
ALTER TABLE child_table ADD INDEX idx_parent_id (parent_id);
```
Consider disabling FK constraints (carefully):
```sql
-- For bulk operations, temporarily disable FK checks
SET FOREIGN_KEY_CHECKS = 0;

-- Perform the bulk operation
DELETE FROM users WHERE id IN (1, 2, 3);

-- Re-enable FK checks
SET FOREIGN_KEY_CHECKS = 1;

-- WARNING: only do this if you are certain data integrity is maintained
```
### 7. Reduce contention on hot rows
Handle counter increments:
```sql
-- BAD: high contention on a single row
UPDATE counters SET value = value + 1 WHERE name = 'page_views';
-- All transactions lock the same row

-- FIX 1: Use INSERT ... ON DUPLICATE KEY UPDATE
INSERT INTO counters (name, value) VALUES ('page_views', 1)
ON DUPLICATE KEY UPDATE value = value + 1;
-- Less locking overhead

-- FIX 2: Batch increments in the application, update periodically
-- (Python example below)

-- FIX 3: Use sharded counters
-- Split into multiple rows, one per shard (0..3), pick one at random per write
UPDATE counters SET value = value + 1
WHERE name = 'page_views' AND shard = FLOOR(RAND() * 4);
-- Read: SELECT SUM(value) FROM counters WHERE name = 'page_views';
```

FIX 2 in Python, accumulating counts locally and flushing every 100:

```python
import threading

class Counter:
    def __init__(self):
        self.local_count = 0
        self.lock = threading.Lock()

    def increment(self):
        with self.lock:
            self.local_count += 1
            if self.local_count >= 100:
                self.flush()

    def flush(self):
        cursor.execute(
            "UPDATE counters SET value = value + %s WHERE name = 'page_views'",
            (self.local_count,),
        )
        conn.commit()
        self.local_count = 0
```
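To see why sharding reduces contention, here is an in-memory sketch of the sharded-counter idea: each write picks one of four shards at random (so concurrent writers rarely touch the same row), and a read sums all shards:

```python
import random

NUM_SHARDS = 4
shards = [0] * NUM_SHARDS  # stands in for the four counter rows

def increment():
    # SQL equivalent: UPDATE counters SET value = value + 1
    #   WHERE name = 'page_views' AND shard = FLOOR(RAND() * 4)
    shards[random.randrange(NUM_SHARDS)] += 1

def read_total():
    # SQL equivalent: SELECT SUM(value) FROM counters WHERE name = 'page_views'
    return sum(shards)

for _ in range(1000):
    increment()
print(read_total())  # 1000 -- writes spread across shards, total preserved
```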
Use optimistic locking:
```sql
-- Add a version column
ALTER TABLE orders ADD COLUMN version INT DEFAULT 0;

-- Optimistic update pattern
-- Read the current state
SELECT id, status, version FROM orders WHERE id = 123;
-- Suppose this returns version = 5

-- Update with a version check
UPDATE orders SET status = 'shipped', version = version + 1
WHERE id = 123 AND version = 5;

-- Check affected rows:
-- 0 rows affected: another transaction updated the row; retry with the new version
-- 1 row affected: success
```
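The read-check-retry loop around this pattern is where mistakes usually creep in, so here is an in-memory simulation of it (a dict stands in for the orders table; all names are illustrative):

```python
# In-memory simulation of optimistic locking: the update succeeds only
# when the version still matches what was read; otherwise the caller
# re-reads the new version and retries.

rows = {123: {"status": "pending", "version": 5}}

def update_if_version(row_id, new_status, expected_version):
    """Mimics: UPDATE orders SET status = %s, version = version + 1
       WHERE id = %s AND version = %s  -- returns the affected-row count."""
    row = rows.get(row_id)
    if row is None or row["version"] != expected_version:
        return 0  # another writer updated the row first
    row["status"] = new_status
    row["version"] += 1
    return 1

def set_status_with_retry(row_id, new_status, max_retries=3):
    for _ in range(max_retries):
        current_version = rows[row_id]["version"]  # SELECT ... to read the version
        if update_if_version(row_id, new_status, current_version):
            return True
        # 0 rows affected: loop re-reads and retries with the new version
    return False

print(set_status_with_retry(123, "shipped"))  # True
print(rows[123])  # {'status': 'shipped', 'version': 6}
```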
### 8. Change isolation level
Check current isolation level:
```sql
-- Check session isolation
SELECT @@transaction_isolation;
-- MySQL 5.7 and earlier: SELECT @@tx_isolation; (removed in MySQL 8.0)

-- Common isolation levels:
-- REPEATABLE-READ (MySQL default)
-- READ-COMMITTED
-- READ-UNCOMMITTED
-- SERIALIZABLE
```
Use READ COMMITTED to reduce locking:
```sql
-- Set the session to READ COMMITTED
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Benefits:
-- - No gap locks for ordinary scans (only record locks)
-- - Reduces deadlock likelihood
-- - Better for high-concurrency workloads

-- Tradeoffs:
-- - Non-repeatable reads possible
-- - Phantom reads possible

-- Set globally (affects new connections)
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- From the application: run
-- SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
-- right after connecting, or set it in the connector's session options
```
### 9. Monitor and alert on locks
Create lock monitoring query:
```sql
-- Save this as a monitoring query (MySQL 5.7; on 8.0 use sys.innodb_lock_waits,
-- which exposes wait_age, waiting_query, blocking_pid, and more)
SELECT r.trx_id AS waiting_transaction,
       r.trx_mysql_thread_id AS waiting_thread,
       r.trx_started AS waiting_started,
       TIMESTAMPDIFF(SECOND, r.trx_started, NOW()) AS waiting_seconds,
       b.trx_id AS blocking_transaction,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query AS blocking_query,
       r.trx_query AS waiting_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
ORDER BY waiting_seconds DESC;

-- Kill long-blocking transactions (use carefully!)
-- KILL <blocking_thread_id>;
```
Set up Performance Schema monitoring:
```sql
-- Enable metadata lock instrumentation (on by default in MySQL 8.0)
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES'
WHERE NAME = 'wait/lock/metadata/sql/mdl';

-- Query current metadata locks
SELECT OBJECT_TYPE, OBJECT_SCHEMA, OBJECT_NAME,
       LOCK_TYPE, LOCK_DURATION, LOCK_STATUS,
       OWNER_THREAD_ID, SOURCE
FROM performance_schema.metadata_locks
WHERE OBJECT_SCHEMA NOT IN ('performance_schema', 'mysql');
```
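Whichever query feeds the monitor, the alert rule itself can stay simple: flag any blocking thread that has kept a waiter blocked past a threshold. A sketch (the tuple shape mirrors the monitoring query's waiting/blocking columns and is an assumption):

```python
def find_long_blockers(rows, threshold_seconds=30):
    """rows: iterable of (waiting_thread, blocking_thread, waiting_seconds)
    tuples, e.g. fetched from the lock-wait monitoring query.
    Returns blocking thread ids worth investigating (KILL candidates)."""
    return sorted({blocking for _, blocking, waited in rows
                   if waited >= threshold_seconds})

waits = [
    (10, 42, 55),  # thread 10 has waited 55s on thread 42
    (11, 42, 12),
    (12, 7, 31),
]
print(find_long_blockers(waits))  # [7, 42]
```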
### 10. Analyze deadlock patterns
Extract deadlock patterns from logs:
```sql
-- Parse INNODB STATUS for deadlock analysis
-- Run after a deadlock occurs
SHOW ENGINE INNODB STATUS\G

-- Look for patterns:
-- 1. Same tables involved repeatedly
-- 2. Same statements causing the deadlock
-- 3. Similar lock orders

-- Common patterns:
-- Pattern A: AB-BA deadlock (inconsistent update order)
--   Fix: standardize the update order
-- Pattern B: Gap lock deadlock
--   Fix: use READ COMMITTED or unique index lookups
-- Pattern C: Foreign key deadlock
--   Fix: delete children before parents
-- Pattern D: Index merge deadlock
--   Fix: add a covering index
```
Use sys schema for analysis:
```sql
-- MySQL 5.7+: use the sys schema
SELECT * FROM sys.innodb_lock_waits;

-- Show metadata (table-level) lock waits between sessions
SELECT * FROM sys.schema_table_lock_waits;

-- Find problematic queries
SELECT * FROM sys.statements_with_full_table_scans;
SELECT * FROM sys.statements_with_runtimes_in_95th_percentile;
```
## Prevention
- Always acquire locks in consistent order (e.g., by primary key)
- Keep transactions short with minimal time between statements
- Use appropriate indexes to reduce lock scope
- Consider READ COMMITTED isolation for high-concurrency workloads
- Implement retry logic with exponential backoff in applications
- Avoid gap locks by using unique index lookups when possible
- Batch large operations to reduce lock duration
- Monitor lock waits and deadlocks with alerting
- Document transaction patterns for complex operations
## Related Errors
- **ERROR 1213**: Deadlock found when trying to get lock
- **ERROR 1205**: Lock wait timeout exceeded
- **ERROR 1206**: The total number of locks exceeds the lock table size
- **ERROR 1005**: Can't create table (foreign key constraint)
- **ERROR 1451**: Cannot delete or update (foreign key constraint)