## Introduction
MySQL deadlock and lock wait timeout errors occur when two or more transactions are waiting for each other to release locks, creating a circular dependency that cannot be resolved without intervention. InnoDB detects deadlocks and rolls back one transaction (error 1213: Deadlock found when trying to get lock), while lock wait timeout occurs when a transaction waits longer than innodb_lock_wait_timeout seconds (error 1205: Lock wait timeout exceeded). These errors indicate concurrency issues that can cause data inconsistency, application failures, and degraded performance under load.
## Symptoms
- Application logs show `ERROR 1213 (40001): Deadlock found when trying to get lock`
- Application logs show `ERROR 1205 (HY000): Lock wait timeout exceeded`
- Transactions fail intermittently under concurrent load
- `SHOW ENGINE INNODB STATUS` shows a `LATEST DETECTED DEADLOCK` section
- Query performance degrades as lock wait times increase
- Issue appears during peak traffic, batch operations, or after schema changes
- Multiple transactions update the same tables in different order
## Common Causes
- Transactions update tables in inconsistent order (classic deadlock pattern)
- Missing indexes causing gap locks or table scans
- Long-running transactions holding locks for extended periods
- High concurrency with row-level lock contention
- Foreign key constraints creating implicit locks
- Transaction isolation level too strict (SERIALIZABLE)
- Batch operations locking many rows simultaneously
- Gap locks from range queries blocking inserts
## Step-by-Step Fix
### 1. Enable deadlock detection and logging
Configure MySQL to capture deadlock information:
```sql
-- Check current deadlock settings
SHOW VARIABLES LIKE 'innodb_deadlock_detect';
-- Deadlock detection is ON by default
-- (the innodb_deadlock_detect variable itself was added in MySQL 5.7.15)

SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- Default: 50 seconds
-- Reduce for faster failure: 10-30 seconds
-- Increase for long batch operations: 120+ seconds

SHOW VARIABLES LIKE 'innodb_status_output';
-- Enable periodic InnoDB status output to the error log
SET GLOBAL innodb_status_output = ON;

SHOW VARIABLES LIKE 'innodb_status_output_locks';
-- Enable lock information in status output
SET GLOBAL innodb_status_output_locks = ON;

-- Check error log for deadlock information
-- Location defined by log_error variable
SHOW VARIABLES LIKE 'log_error';
```
View latest deadlock:
```sql
-- Get detailed deadlock information
SHOW ENGINE INNODB STATUS\G

-- Look for section:
-- ------------------------
-- LATEST DETECTED DEADLOCK
-- ------------------------
-- *** (1) TRANSACTION:
-- TRANSACTION 123456789, ACTIVE 2 sec starting index read
-- mysql tables in use 1, locked 1
-- LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s)
-- MySQL thread id 100, OS thread handle 1234, query id 5678 updating
-- UPDATE table1 SET col1 = 1 WHERE id = 100

-- *** (2) TRANSACTION:
-- TRANSACTION 123456790, ACTIVE 1 sec starting index read
-- ...

-- *** WE ROLL BACK TRANSACTION (1)
-- This shows which transaction was rolled back
```
### 2. Analyze deadlock from error log
Parse deadlock details from MySQL error log:
```bash
# Location of error log
# Check my.cnf or mysql configuration
grep -i "deadlock" /var/log/mysql/error.log | tail -20

# Or find the log location from inside MySQL:
# SHOW VARIABLES LIKE 'log_error';

# Typical deadlock output:
# *** (1) TRANSACTION:
# TRANSACTION 123456, ACTIVE 0 sec updating or deleting
# mysql tables in use 1, locked 1
# LOCK WAIT 2 lock struct(s), heap size 1136, 1 row lock(s), undo log entries 1
# MySQL thread id 100, OS thread handle 140123456789, query id 12345 localhost root updating
# UPDATE orders SET status = 'shipped' WHERE order_id = 100

# *** (2) TRANSACTION:
# TRANSACTION 123457, ACTIVE 0 sec updating or deleting
# mysql tables in use 1, locked 1
# 2 lock struct(s), heap size 1136, 1 row lock(s), undo log entries 1
# MySQL thread id 101, OS thread handle 140123456790, query id 12346 localhost root updating
# UPDATE orders SET status = 'processing' WHERE order_id = 100

# *** WE ROLL BACK TRANSACTION (1)

# Key information to extract:
# - Which tables are involved
# - Which indexes are being used
# - What type of lock (X, S, gap, next-key)
# - Transaction isolation level
# - Which transaction was rolled back
```
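When deadlocks recur, the extraction step can be scripted. Below is a hypothetical helper, not part of any MySQL tooling, that pulls each transaction's statement and the rollback victim out of a `LATEST DETECTED DEADLOCK` section; the sample text is abbreviated from the output above:

```python
import re

SAMPLE = """\
*** (1) TRANSACTION:
TRANSACTION 123456, ACTIVE 0 sec updating or deleting
MySQL thread id 100, OS thread handle 140123456789, query id 12345 localhost root updating
UPDATE orders SET status = 'shipped' WHERE order_id = 100
*** (2) TRANSACTION:
TRANSACTION 123457, ACTIVE 0 sec updating or deleting
MySQL thread id 101, OS thread handle 140123456790, query id 12346 localhost root updating
UPDATE orders SET status = 'processing' WHERE order_id = 100
*** WE ROLL BACK TRANSACTION (1)
"""

def parse_deadlock(text):
    """Extract per-transaction statements and the rollback victim."""
    queries = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"\*\*\* \((\d+)\) TRANSACTION:", line)
        if m:
            current = int(m.group(1))
            continue
        if current and re.match(r"(UPDATE|DELETE|INSERT|SELECT)\b", line):
            queries[current] = line.strip()
    victim = re.search(r"\*\*\* WE ROLL BACK TRANSACTION \((\d+)\)", text)
    return queries, (int(victim.group(1)) if victim else None)

queries, victim = parse_deadlock(SAMPLE)
print(victim)      # 1
print(queries[1])  # the victim's UPDATE statement
```

Feed it the `LATEST DETECTED DEADLOCK` section from `SHOW ENGINE INNODB STATUS` or the error log; the statement pairs it reports usually point straight at the conflicting access order.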
### 3. Check current lock status
View active locks and waiting transactions:
```sql
-- MySQL 8.0+
-- View all locks
SELECT * FROM performance_schema.data_locks\G

-- View lock waits
SELECT * FROM performance_schema.data_lock_waits\G

-- (MySQL 5.7: use information_schema.innodb_locks
--  and information_schema.innodb_lock_waits instead)

-- Transaction state, access mode, and isolation level per thread
SELECT
  t.thread_id,
  t.processlist_id,
  x.state,
  x.access_mode,
  x.isolation_level
FROM performance_schema.threads t
JOIN performance_schema.events_transactions_current x
  ON t.thread_id = x.thread_id\G

-- Find blocking transactions
SELECT
  requesting_engine_transaction_id AS waiting_trx,
  requesting_engine_lock_id        AS waiting_lock,
  blocking_engine_transaction_id   AS blocking_trx,
  blocking_engine_lock_id          AS blocking_lock
FROM performance_schema.data_lock_waits;

-- Get transaction details
SELECT
  trx.trx_id,
  trx.trx_state,
  trx.trx_started,
  trx.trx_mysql_thread_id,
  trx.trx_query,
  TIMESTAMPDIFF(SECOND, trx.trx_started, NOW()) AS duration_seconds
FROM information_schema.innodb_trx trx
WHERE trx.trx_state IN ('LOCK WAIT', 'RUNNING')
ORDER BY trx.trx_started;

-- Kill a long-running transaction (use carefully)
-- KILL <trx_mysql_thread_id>;
```
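For intuition: what InnoDB's deadlock detector does with this wait information is cycle detection on the wait-for graph. A toy sketch of the idea, using made-up transaction ids rather than anything read from MySQL:

```python
def find_deadlock(waits):
    """waits maps a waiting trx id -> the trx id blocking it.
    Returns a list of trx ids forming a wait cycle, or None."""
    for start in waits:
        seen = []
        node = start
        while node in waits:
            if node in seen:
                return seen[seen.index(node):]  # the cycle itself
            seen.append(node)
            node = waits[node]
    return None

# Trx 1 waits on 2 and 2 waits on 1: the classic AB-BA deadlock
print(find_deadlock({1: 2, 2: 1}))  # [1, 2]
# Trx 3 waits on 4, which is not waiting on anyone: no deadlock
print(find_deadlock({3: 4}))        # None
```

When InnoDB finds such a cycle it rolls back one participant (usually the one that modified the fewest rows), which is the transaction named in `WE ROLL BACK TRANSACTION`.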
### 4. Identify problematic query patterns
Common deadlock patterns:
```sql
-- Pattern 1: AB-BA Deadlock (most common)
-- Transaction 1: UPDATE A -> UPDATE B
-- Transaction 2: UPDATE B -> UPDATE A

-- Example:
-- Transaction 1:
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- Locks row 1
UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- Waits for row 2

-- Transaction 2 (concurrent):
START TRANSACTION;
UPDATE accounts SET balance = balance - 50 WHERE id = 2;   -- Locks row 2
UPDATE accounts SET balance = balance + 50 WHERE id = 1;   -- Deadlock!

-- Fix: Always update in consistent order (e.g., by id)
-- Application code should enforce ordering

-- Pattern 2: Gap Lock Deadlock
-- Range locks create gaps that block inserts

-- Transaction 1:
START TRANSACTION;
SELECT * FROM orders WHERE customer_id = 10 FOR UPDATE;  -- Gap lock

-- Transaction 2:
START TRANSACTION;
INSERT INTO orders (customer_id, ...) VALUES (10, ...);  -- Blocked by gap

-- Transaction 1:
INSERT INTO orders (customer_id, ...) VALUES (10, ...);  -- Deadlock!

-- Fix: Use READ COMMITTED isolation level to reduce gap locks

-- Pattern 3: Foreign Key Deadlock
-- Implicit locks from foreign key checks

-- Parent table: categories(id)
-- Child table: products(category_id)

-- Transaction 1:
START TRANSACTION;
DELETE FROM categories WHERE id = 1;
-- Locks parent row, waits for child row locks (FK check)

-- Transaction 2:
START TRANSACTION;
UPDATE products SET name = 'New' WHERE category_id = 1;
-- Locks child row, waits for parent row lock (FK check)

-- Fix: Add index on the foreign key column
CREATE INDEX idx_category_id ON products(category_id);
```
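The Pattern 1 fix, consistent lock ordering, is a property of any lock manager, not just InnoDB. A minimal sketch with two plain `threading.Lock`s: because both workers acquire in sorted key order, no wait cycle can form even though they name the accounts in opposite order:

```python
import threading

locks = {1: threading.Lock(), 2: threading.Lock()}
results = []

def transfer(a, b, amount):
    # Acquire locks in sorted key order: whichever worker grabs the
    # first lock, the other simply waits behind it -- no cycle possible.
    first, second = sorted((a, b))
    with locks[first], locks[second]:
        results.append((a, b, amount))

t1 = threading.Thread(target=transfer, args=(1, 2, 100))
t2 = threading.Thread(target=transfer, args=(2, 1, 50))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(results))  # 2 -- both transfers complete, no deadlock
```

If each thread instead acquired `locks[a]` then `locks[b]`, the same two calls could deadlock exactly like the AB-BA SQL example above.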
### 5. Fix missing indexes causing excessive locking
Missing indexes cause table scans with more locks:
```sql
-- Check for full table scans in queries
EXPLAIN UPDATE orders SET status = 'shipped' WHERE customer_id = 100\G

-- Expected output with missing index:
--            id: 1
--   select_type: SIMPLE
--         table: orders
--          type: ALL        -- Full table scan!
-- possible_keys: NULL
--           key: NULL
--          rows: 100000     -- Scanning 100k rows
--         Extra: Using where

-- This locks ALL rows scanned, not just matching rows

-- Fix: Add index on WHERE clause columns
CREATE INDEX idx_customer_id ON orders(customer_id);

-- After adding index:
--  type: ref               -- Index lookup
--   key: idx_customer_id
--  rows: 10                -- Only 10 rows scanned
-- Extra: Using where

-- Check for missing foreign key indexes
SELECT
  TABLE_NAME,
  COLUMN_NAME,
  CONSTRAINT_NAME,
  REFERENCED_TABLE_NAME,
  REFERENCED_COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = DATABASE()
  AND REFERENCED_TABLE_NAME IS NOT NULL;

-- Verify FK columns have indexes
SHOW INDEX FROM orders WHERE Key_name NOT IN ('PRIMARY');

-- Add missing FK index
CREATE INDEX idx_fk_customer_id ON orders(customer_id);
```
### 6. Optimize transaction isolation level
Adjust isolation level to reduce locking:
```sql
-- Check current isolation level
SELECT @@transaction_isolation;
-- or for MySQL before 8.0:
SELECT @@tx_isolation;

-- Isolation levels and lock behavior:
-- READ UNCOMMITTED: No read locks, dirty reads possible
-- READ COMMITTED:   Row locks only, no gap locks (recommended for most cases)
-- REPEATABLE READ:  Row + gap locks (MySQL default, most deadlocks)
-- SERIALIZABLE:     Plain SELECTs take shared locks; lowest concurrency (avoid for OLTP)

-- Change isolation level (session)
SET SESSION transaction_isolation = 'READ-COMMITTED';

-- Change isolation level (global)
SET GLOBAL transaction_isolation = 'READ-COMMITTED';

-- Or in my.cnf:
-- [mysqld]
-- transaction-isolation = READ-COMMITTED

-- Note: READ-COMMITTED disables most gap locking
-- This prevents many deadlock patterns but allows phantom reads
-- For most applications this is an acceptable trade-off

-- Application-level configuration:
-- JDBC:       connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
-- SQLAlchemy: engine.execution_options(isolation_level="READ COMMITTED")
-- Django:     DATABASES['default']['OPTIONS']['isolation_level'] = 'READ COMMITTED'
```
### 7. Implement retry logic for deadlocked transactions
Handle deadlocks gracefully in application code:
```python
# Python with retry logic
import logging
import random
import time

import MySQLdb
from MySQLdb import OperationalError

logger = logging.getLogger(__name__)

# Assumes `connection` is an open MySQLdb connection created elsewhere

def execute_with_retry(query, params, max_retries=3):
    """Execute query with deadlock retry logic"""
    attempt = 0
    while attempt < max_retries:
        try:
            cursor = connection.cursor()
            cursor.execute(query, params)
            connection.commit()
            return cursor.fetchall()
        except OperationalError as e:
            error_code = e.args[0]
            if error_code == 1213:  # Deadlock
                attempt += 1
                if attempt >= max_retries:
                    raise
                # Exponential backoff with jitter
                wait_time = (2 ** attempt) + random.uniform(0, 1)
                time.sleep(wait_time)
                continue
            elif error_code == 1205:  # Lock wait timeout
                # Log and alert - this indicates a deeper issue
                logger.warning(f"Lock wait timeout: {query}")
                raise
            else:
                raise
    raise Exception("Unexpected retry exhaustion")


# Usage with transaction ordering
def transfer_money(from_account, to_account, amount):
    """Transfer money between accounts with consistent ordering"""
    # Always touch accounts in the same order (by id), and apply
    # the correct sign to each: debit the sender, credit the receiver
    for account in sorted((from_account, to_account)):
        delta = -amount if account == from_account else amount
        execute_with_retry(
            "UPDATE accounts SET balance = balance + %s WHERE id = %s",
            (delta, account),
        )
```
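The backoff-and-retry behavior of `execute_with_retry` can be exercised without a live database by injecting a flaky operation. This sketch substitutes a stub exception for MySQLdb's `OperationalError`; the shape of the loop is otherwise the same:

```python
import random
import time

class FakeDeadlock(Exception):
    """Stand-in for an OperationalError with code 1213."""

def with_deadlock_retry(operation, max_retries=3, base_delay=0.0):
    """Retry `operation` on deadlock, with exponential backoff plus jitter."""
    for attempt in range(1, max_retries + 1):
        try:
            return operation()
        except FakeDeadlock:
            if attempt == max_retries:
                raise  # retries exhausted, surface the deadlock
            # base_delay=0.0 keeps this demo instant; use ~0.1s in practice
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeDeadlock()  # deadlocks twice, then succeeds
    return "committed"

print(with_deadlock_retry(flaky))  # committed (on the third attempt)
```

Testing the retry path this way catches off-by-one mistakes (retrying one time too few, or swallowing the final exception) before they show up under production contention.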
Java with Spring:
```java
@Service
public class AccountService {

    @Transactional(
        isolation = Isolation.READ_COMMITTED,  // Reduce locking
        rollbackFor = {CannotAcquireLockException.class,
                       PessimisticLockingFailureException.class}
    )
    @Retryable(
        value = {CannotAcquireLockException.class},
        maxAttempts = 3,
        backoff = @Backoff(delay = 100, multiplier = 2, random = true)
    )
    public void transferMoney(Long fromAccount, Long toAccount, BigDecimal amount) {
        // Order accounts to prevent deadlock
        List<Long> sortedIds = Stream.of(fromAccount, toAccount)
            .sorted()
            .collect(Collectors.toList());

        // Lock in consistent order, applying debit/credit to the right account
        for (Long id : sortedIds) {
            if (id.equals(fromAccount)) {
                accountRepository.lockAndDebit(id, amount);
            } else {
                accountRepository.lockAndCredit(id, amount);
            }
        }
    }

    @Retryable
    public void processWithRetry(Runnable operation) {
        operation.run();
    }
}
```
### 8. Reduce transaction size and duration
Long transactions hold locks longer, increasing deadlock risk:
```sql
-- WRONG: Large batch in single transaction
START TRANSACTION;
UPDATE orders SET status = 'archived' WHERE created_at < '2025-01-01';
-- May lock thousands of rows for seconds
COMMIT;

-- CORRECT: Process in smaller batches
-- Batch script example
DELIMITER $$

CREATE PROCEDURE archive_orders_batch()
BEGIN
  DECLARE done INT DEFAULT FALSE;
  DECLARE batch_count INT DEFAULT 0;

  -- Process 100 rows at a time
  WHILE done = FALSE DO
    START TRANSACTION;

    UPDATE orders
    SET status = 'archived'
    WHERE created_at < '2025-01-01'
      AND status != 'archived'
    LIMIT 100;  -- Only 100 rows per transaction

    SET batch_count = ROW_COUNT();

    COMMIT;

    IF batch_count = 0 THEN
      SET done = TRUE;
    END IF;

    -- Small delay to reduce lock contention
    DO SLEEP(0.1);
  END WHILE;
END$$

DELIMITER ;

-- Run procedure
CALL archive_orders_batch();
```
Application-level batch processing:
```python
# Process large updates in batches (Django ORM)
import logging
import time

from django.db import transaction

logger = logging.getLogger(__name__)

def archive_old_orders(cutoff_date, batch_size=100):
    """Archive old orders in small batches"""
    while True:
        # Django cannot call .update() on a sliced queryset,
        # so fetch a batch of primary keys first
        ids = list(
            Order.objects
            .filter(created_at__lt=cutoff_date)
            .exclude(status='archived')
            .values_list('pk', flat=True)[:batch_size]
        )
        if not ids:
            break

        # Small transaction per batch
        with transaction.atomic():
            count = Order.objects.filter(pk__in=ids).update(status='archived')

        logger.info(f"Archived {count} orders, continuing...")

        # Small delay between batches
        time.sleep(0.1)
```
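The same batching loop, stripped of ORM specifics: a database-free sketch in which a hypothetical `apply_batch` callback stands in for the per-batch `UPDATE` and `COMMIT`:

```python
def process_in_batches(ids, batch_size, apply_batch):
    """Apply `apply_batch` to successive slices of `ids`,
    one short transaction's worth of work per call."""
    done = 0
    for start in range(0, len(ids), batch_size):
        batch = ids[start:start + batch_size]
        apply_batch(batch)  # real code: UPDATE ... WHERE id IN (...); COMMIT
        done += len(batch)
    return done

committed = []
total = process_in_batches(list(range(250)), 100, committed.append)
print(total)                        # 250
print([len(b) for b in committed])  # [100, 100, 50]
```

The key property is that each `apply_batch` call holds locks only for its own small slice, so concurrent writers wait milliseconds instead of the whole archive run.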
### 9. Monitor and alert on lock contention
Set up monitoring for early detection:
```sql
-- Create monitoring view
-- (MySQL 5.7; on 8.0 query sys.innodb_lock_waits instead)
CREATE OR REPLACE VIEW lock_monitor AS
SELECT
  r.trx_id              AS waiting_trx_id,
  r.trx_mysql_thread_id AS waiting_thread,
  r.trx_query           AS waiting_query,
  b.trx_id              AS blocking_trx_id,
  b.trx_mysql_thread_id AS blocking_thread,
  b.trx_query           AS blocking_query,
  TIMESTAMPDIFF(SECOND, r.trx_started, NOW()) AS wait_duration
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;

-- Query for monitoring
SELECT * FROM lock_monitor WHERE wait_duration > 10;

-- For Prometheus, use mysqld_exporter with custom queries
```
Prometheus alerting rules:
```yaml
groups:
  - name: mysql_locks
    rules:
      - alert: MySQLLockWaitTimeout
        expr: rate(mysql_global_status_innodb_lock_wait_timeout_total[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "MySQL lock wait timeout occurring"

      - alert: MySQLDeadlockDetected
        expr: rate(mysql_global_status_innodb_deadlocks_total[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "MySQL deadlocks detected"

      - alert: MySQLLongRunningTransaction
        expr: mysql_info_schema_innodb_trx_seconds_since_transaction_start > 60
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "MySQL transaction running > 60 seconds"
```
### 10. Use optimistic locking where appropriate
Reduce lock contention with optimistic locking:
```sql
-- Pessimistic locking (current behavior)
START TRANSACTION;
SELECT * FROM inventory WHERE product_id = 100 FOR UPDATE;
UPDATE inventory SET quantity = quantity - 1 WHERE product_id = 100;
COMMIT;

-- Optimistic locking (alternative)
-- Add version column
ALTER TABLE inventory ADD COLUMN version INT DEFAULT 0;

-- Read with version (no lock held)
SELECT id, quantity, version FROM inventory WHERE product_id = 100;
-- Returns: id=1, quantity=50, version=3

-- Update with version check
UPDATE inventory
SET quantity = quantity - 1,
    version = version + 1
WHERE product_id = 100
  AND version = 3;
-- If affected rows = 0, someone else modified the row
-- and the application handles the retry
```
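The version-checked `UPDATE` is a compare-and-swap. A small in-memory sketch of the same semantics, using a plain dict as the "row", shows why zero affected rows signals a lost race:

```python
def update_if_version(row, expected_version, delta):
    """Mimic: UPDATE ... SET quantity = quantity + delta, version = version + 1
    WHERE version = expected_version. Returns affected row count (0 or 1)."""
    if row["version"] != expected_version:
        return 0  # someone else committed first -- caller should re-read and retry
    row["quantity"] += delta
    row["version"] += 1
    return 1

row = {"quantity": 50, "version": 3}
print(update_if_version(row, 3, -1))  # 1 -- our write wins
print(update_if_version(row, 3, -1))  # 0 -- stale version, must re-read
print(row)                            # {'quantity': 49, 'version': 4}
```

No lock is ever held between the read and the write; contention costs a retry instead of a blocked transaction, which is why this suits read-heavy, low-conflict workloads.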
Optimistic locking in application:
```python
# Django optimistic locking
from django.db import transaction
from django.db.models import F

def purchase_item(product_id, quantity):
    """Purchase with optimistic locking"""
    max_retries = 3
    for attempt in range(max_retries):
        # Plain read - no row lock taken (select_for_update would be pessimistic)
        product = Product.objects.get(id=product_id)
        current_version = product.version

        # Check stock
        if product.quantity < quantity:
            raise ValueError("Insufficient stock")

        # Update only if version (and stock) are still unchanged
        with transaction.atomic():
            updated = Product.objects.filter(
                id=product_id,
                version=current_version,
                quantity__gte=quantity,  # guard against a racing purchase
            ).update(
                quantity=F('quantity') - quantity,
                version=current_version + 1,
            )

        if updated:
            return True

        # Version mismatch: another writer won the race
        if attempt == max_retries - 1:
            raise Exception("Purchase failed after retries")
```
## Prevention
- Always access tables in consistent order within transactions
- Keep transactions short and commit frequently
- Add indexes on foreign key columns
- Use READ COMMITTED isolation level for most workloads
- Implement retry logic for deadlock errors
- Process large updates in small batches
- Use optimistic locking for read-heavy workloads
- Monitor lock wait time and deadlock rate continuously
## Related Errors
- **ERROR 1213 (40001)**: Deadlock found when trying to get lock
- **ERROR 1205 (HY000)**: Lock wait timeout exceeded
- **ERROR 1206 (HY000)**: The total number of locks exceeds the lock table size
- **ERROR 3572 (HY000)**: Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set