# PostgreSQL Deadlock Detected: Resolution and Prevention

Your application logs are showing this error:

```
ERROR:  deadlock detected
DETAIL:  Process 12345 waits for ShareLock on transaction 123456; blocked by process 12346.
Process 12346 waits for ShareLock on transaction 123457; blocked by process 12345.
HINT:  See server log for query details.
CONTEXT:  while updating tuple (0,1) in relation "orders"
SQL state: 40P01
```

Deadlocks occur when two or more transactions hold locks that each other needs, creating a circular dependency. PostgreSQL detects this and kills one transaction to resolve the impasse.

## Understanding Deadlocks

A deadlock happens when:

1. Transaction A locks row X
2. Transaction B locks row Y
3. Transaction A tries to lock row Y (waits for B)
4. Transaction B tries to lock row X (waits for A)
5. Both transactions wait forever - deadlock!

PostgreSQL automatically detects deadlocks and terminates one transaction (the "victim") to allow the other to proceed.
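Internally, the detector builds a waits-for graph among blocked processes and aborts a transaction when it finds a cycle. The core idea can be sketched in a few lines of Python (a toy illustration of cycle detection, not PostgreSQL's actual implementation):

```python
def has_deadlock(waits_for):
    """Detect a cycle in a waits-for graph, where waits_for maps each
    process ID to the single process ID it is blocked on (or None).
    A row-lock waiter blocks on exactly one lock at a time, so each
    node has at most one outgoing edge and chain-following suffices."""
    for start in waits_for:
        seen = set()
        current = start
        # Follow the chain of waiters; revisiting a node means a cycle.
        while current is not None:
            if current in seen:
                return True
            seen.add(current)
            current = waits_for.get(current)
    return False

# The two-process example from the error above:
# 12345 waits on 12346, which waits back on 12345 -> deadlock
print(has_deadlock({12345: 12346, 12346: 12345}))  # True
print(has_deadlock({12345: 12346, 12346: None}))   # False
```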

## Immediate Response

### Identify the Deadlocked Queries

Check PostgreSQL logs for deadlock details:

```bash
# Find deadlock entries in logs
grep -i "deadlock" /var/log/postgresql/postgresql-16-main.log
```

Or enable more verbose logging:

```sql
-- Enable lock-wait logging (if not already)
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET deadlock_timeout = '1s';
SELECT pg_reload_conf();

-- View who is blocking whom right now
SELECT
    blocked_locks.pid AS blocked_pid,
    blocked_activity.usename AS blocked_user,
    blocking_locks.pid AS blocking_pid,
    blocking_activity.usename AS blocking_user,
    blocked_activity.query AS blocked_statement,
    blocking_activity.query AS blocking_statement,
    blocked_activity.application_name
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity
    ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
    ON blocking_locks.locktype = blocked_locks.locktype
    AND blocking_locks.database IS NOT DISTINCT FROM blocked_locks.database
    AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
    AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
    AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
    AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity
    ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;
```

### Find Blocking Queries Right Now

```sql
-- Queries running longer than 5 minutes (likely lock holders)
SELECT pid,
       now() - pg_stat_activity.query_start AS duration,
       query,
       state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '5 minutes';

-- Kill long-running transactions if needed
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '30 minutes';
```

## Common Deadlock Scenarios

### Scenario 1: Different Lock Order

The classic deadlock:

```sql
-- Transaction 1
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- Locks row 1

-- Meanwhile, Transaction 2
BEGIN;
UPDATE accounts SET balance = balance - 50 WHERE id = 2;   -- Locks row 2
UPDATE accounts SET balance = balance + 50 WHERE id = 1;   -- Waits for row 1

-- Back in Transaction 1
UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- Waits for row 2
-- DEADLOCK!
```

Solution: Always lock rows in a consistent order:

```sql
-- Both transactions lock in the same order
BEGIN;
-- Always lock accounts in ascending ID order
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

-- Or use explicit locking to guarantee the order
BEGIN;
SELECT * FROM accounts WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```
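The same rule can be enforced on the application side by canonicalizing the ID order before locking. A minimal sketch (the `lock_rows` helper and its DB-API-style cursor are illustrative, not a specific driver's API):

```python
def lock_order(ids):
    """Return IDs deduplicated and in canonical (ascending) order, so
    every transaction acquires its row locks in the same sequence."""
    return sorted(set(ids))

def lock_rows(cur, ids):
    """Lock account rows with SELECT ... FOR UPDATE in canonical order.
    `cur` is any DB-API cursor; the ORDER BY in the query is what
    actually controls the order rows are locked in."""
    ordered = lock_order(ids)
    cur.execute(
        "SELECT id FROM accounts WHERE id = ANY(%s) ORDER BY id FOR UPDATE",
        (ordered,),
    )
    return ordered

# Regardless of the order the caller passes, locks are taken ascending:
print(lock_order([3, 1, 2]))  # [1, 2, 3]
```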

### Scenario 2: Foreign Key Deadlocks

Problem: Foreign key checks take implicit share locks on the referenced parent rows:

```sql
-- Transaction 1
BEGIN;
INSERT INTO orders (customer_id) VALUES (1);  -- FK check locks customer 1

-- Transaction 2
BEGIN;
INSERT INTO orders (customer_id) VALUES (2);     -- FK check locks customer 2
UPDATE customers SET name = 'New' WHERE id = 1;  -- Waits for lock on customer 1

-- Transaction 1
UPDATE customers SET name = 'Old' WHERE id = 2;  -- Waits for lock on customer 2
-- DEADLOCK!
```

Solution: Lock parent rows explicitly:

```sql
BEGIN;
-- Lock parent rows first, in a consistent order
SELECT * FROM customers WHERE id IN (1, 2) ORDER BY id FOR SHARE;
-- Then insert
INSERT INTO orders (customer_id) VALUES (1);
INSERT INTO orders (customer_id) VALUES (2);
COMMIT;
```

### Scenario 3: Index Lock Deadlocks

Problem: Concurrent updates on indexed columns:

```sql
-- Transaction 1
BEGIN;
UPDATE users SET email = 'a@b.com' WHERE id = 1;  -- Locks index entry

-- Transaction 2
BEGIN;
UPDATE users SET email = 'c@d.com' WHERE id = 2;  -- Locks index entry
UPDATE users SET email = 'x@y.com' WHERE id = 1;  -- Waits for index lock

-- Transaction 1
UPDATE users SET email = 'z@w.com' WHERE id = 2;  -- Waits for index lock
-- DEADLOCK!
```

Solution: Avoid frequent updates to indexed columns, update rows in a consistent order, or defer unique-constraint checks:

```sql
BEGIN;
-- Defer checking of constraints declared DEFERRABLE until commit
SET CONSTRAINTS ALL DEFERRED;
UPDATE users SET email = 'a@b.com' WHERE id = 1;
-- Other updates...
COMMIT;  -- Constraints checked at commit time
```

### Scenario 4: Serializable Isolation Deadlocks

Problem: The SERIALIZABLE isolation level can abort transactions with serialization failures (SQLSTATE 40001, distinct from the 40P01 deadlock error):

```sql
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Transactions whose reads and writes overlap logically can fail with:
-- ERROR:  could not serialize access due to read/write dependencies among transactions
```

Solution: Retry the transaction:

```python
# Python example with retry logic
import time

import psycopg2

def run_serializable_transaction(conn, query_func, max_retries=3):
    # The isolation level must be set while no transaction is open
    conn.set_session(isolation_level='serializable')
    for attempt in range(max_retries):
        try:
            with conn.cursor() as cur:
                result = query_func(cur)
            conn.commit()
            return result
        except psycopg2.OperationalError as e:
            conn.rollback()
            if 'could not serialize' not in str(e) or attempt == max_retries - 1:
                raise
            time.sleep(0.1 * (2 ** attempt))  # Exponential backoff
    raise Exception("Max retries exceeded")
```

## Prevention Strategies

### 1. Consistent Lock Ordering

Always access resources in the same order:

```sql
-- Create a helper function that returns IDs in canonical order
CREATE OR REPLACE FUNCTION get_lock_order(ids integer[])
RETURNS integer[] AS $$
BEGIN
    RETURN array(SELECT unnest(ids) ORDER BY 1);
END;
$$ LANGUAGE plpgsql;

-- Use it; the ORDER BY on the locking SELECT is what actually
-- controls the order in which rows are locked
BEGIN;
SELECT * FROM accounts
WHERE id = ANY(get_lock_order(ARRAY[3, 1, 2]))
ORDER BY id
FOR UPDATE;
-- Now update in any order safely
UPDATE accounts SET balance = balance - 100 WHERE id = 3;
UPDATE accounts SET balance = balance + 50 WHERE id = 1;
UPDATE accounts SET balance = balance + 50 WHERE id = 2;
COMMIT;
```

### 2. Advisory Locks for Application Logic

Use PostgreSQL advisory locks for application-level coordination:

```sql
-- Acquire an exclusive advisory lock
SELECT pg_advisory_lock(12345);  -- Use a consistent lock ID
-- Do work
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- Release the lock
SELECT pg_advisory_unlock(12345);

-- Transaction-level lock (auto-released)
BEGIN;
SELECT pg_advisory_xact_lock(12345);
-- Do work
COMMIT;  -- Lock automatically released
```
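Advisory lock IDs are plain bigints, so a common pattern is to derive them deterministically from a resource name. A minimal sketch (the `advisory_key` helper is illustrative, not part of PostgreSQL or any driver):

```python
import hashlib

def advisory_key(name: str) -> int:
    """Derive a stable signed 64-bit key for pg_advisory_lock() from a
    human-readable resource name. The hash is deterministic across
    processes and hosts, so every client computes the same key for
    the same name."""
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    # Take the first 8 bytes as a signed big-endian integer,
    # matching PostgreSQL's bigint range.
    return int.from_bytes(digest[:8], "big", signed=True)

# Every caller locking "accounts:transfer" computes the same key, e.g.:
# cur.execute("SELECT pg_advisory_xact_lock(%s)",
#             (advisory_key("accounts:transfer"),))
```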

### 3. Reduce Transaction Scope

Keep transactions short and focused:

```sql
-- BAD: One big transaction
BEGIN;
-- Many operations...
INSERT INTO orders ...;
UPDATE inventory ...;
INSERT INTO audit_log ...;
UPDATE customer_stats ...;
COMMIT;

-- GOOD: Separate transactions where atomicity isn't required
INSERT INTO orders ...;     -- Auto-commit
INSERT INTO audit_log ...;  -- Auto-commit

-- If you need partial atomicity, use savepoints
BEGIN;
INSERT INTO orders ...;
SAVEPOINT after_order;
UPDATE inventory ...;
-- If the inventory update fails, you can roll back to the savepoint
ROLLBACK TO after_order;
COMMIT;
```

### 4. NOWAIT and SKIP LOCKED

Handle lock contention gracefully:

```sql
-- NOWAIT: Fail immediately if the lock is not available
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE NOWAIT;
-- ERROR:  could not obtain lock on row in relation "accounts"

-- SKIP LOCKED: Skip locked rows
BEGIN;
SELECT * FROM queue
WHERE status = 'pending'
LIMIT 10
FOR UPDATE SKIP LOCKED;
-- Returns only unlocked rows, skipping locked ones
-- Perfect for job queues
UPDATE queue SET status = 'processing' WHERE id = ANY(ARRAY[...]);
COMMIT;
```

### 5. Lower Isolation Level When Possible

```sql
-- Read Committed is the default and has fewer deadlock scenarios
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- For read-heavy workloads
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET TRANSACTION READ ONLY;
```

## Monitoring and Alerting

### Track Deadlocks Over Time

```sql
-- Enable deadlock monitoring
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET deadlock_timeout = '1s';
SELECT pg_reload_conf();

-- Check deadlock statistics
SELECT datname, deadlocks
FROM pg_stat_database
WHERE deadlocks > 0
ORDER BY deadlocks DESC;

-- Reset statistics
SELECT pg_stat_reset();
```

### Create Deadlock Alert Function

```sql
CREATE OR REPLACE FUNCTION check_deadlock_rate()
RETURNS TABLE(
    database text,
    deadlocks bigint,
    transactions bigint,
    deadlock_rate numeric
)
LANGUAGE plpgsql
AS $$
BEGIN
    RETURN QUERY
    SELECT
        datname::text,
        -- Qualify columns that share a name with an OUT parameter
        pg_stat_database.deadlocks,
        xact_commit + xact_rollback,
        CASE
            WHEN xact_commit + xact_rollback > 0
            THEN ROUND(pg_stat_database.deadlocks::numeric
                       / (xact_commit + xact_rollback) * 1000000, 2)
            ELSE 0
        END
    FROM pg_stat_database
    WHERE datname IS NOT NULL
    ORDER BY pg_stat_database.deadlocks DESC;
END;
$$;

-- Run the check (rate is deadlocks per million transactions)
SELECT * FROM check_deadlock_rate();
```

### Log Analysis Script

```bash
#!/bin/bash
# analyze_deadlocks.sh

LOG_FILE="/var/log/postgresql/postgresql-16-main.log"

# Count deadlocks today
echo "Deadlocks today: $(grep "$(date +%Y-%m-%d)" "$LOG_FILE" | grep -c -i deadlock)"

# Extract recent deadlock details
grep -i "deadlock detected" "$LOG_FILE" | tail -20

# Find which queries are involved
grep -B5 -A5 "deadlock detected" "$LOG_FILE" | grep "DETAIL"
```

## Application-Level Retry Logic

Implement retry logic in your application:

```javascript
// Node.js example
async function withRetry(operation, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (error.code === '40P01') {  // Deadlock error code
        console.log(`Deadlock detected, retry ${attempt + 1}/${maxRetries}`);
        await new Promise(resolve => setTimeout(resolve, 100 * (attempt + 1)));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded for deadlock recovery');
}

// Usage
await withRetry(async () => {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [100, 1]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [100, 2]);
    await client.query('COMMIT');
  } catch (e) {
    await client.query('ROLLBACK');
    throw e;
  } finally {
    client.release();
  }
});
```
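Whatever the language, adding a little random jitter to the backoff keeps colliding transactions from retrying in lockstep and deadlocking again. A small sketch of such a delay schedule (illustrative; any backoff scheme with jitter works):

```python
import random

def backoff_delays(max_retries=3, base=0.1, jitter=0.05, seed=None):
    """Compute exponential backoff delays (in seconds) with random
    jitter, so transactions that deadlocked against each other don't
    all retry at the same instant."""
    rng = random.Random(seed)
    return [base * (2 ** attempt) + rng.uniform(0, jitter)
            for attempt in range(max_retries)]
```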

## Verification

After implementing fixes, verify deadlocks are reduced:

```sql
-- Check recent deadlock count
SELECT deadlocks FROM pg_stat_database WHERE datname = current_database();

-- Compare before and after:
-- record a baseline, implement the fix, then compare after 24 hours
```

## Quick Reference

| Action | Command |
|---|---|
| Error code | `40P01` |
| Enable logging | `SET log_lock_waits = on;` |
| Set timeout | `SET deadlock_timeout = '1s';` |
| Current locks | `SELECT * FROM pg_locks WHERE NOT granted;` |
| Kill blocked transaction | `SELECT pg_terminate_backend(pid);` |
| Advisory lock | `SELECT pg_advisory_lock(id);` |
| Skip locked rows | `SELECT ... FOR UPDATE SKIP LOCKED;` |
| Fail if locked | `SELECT ... FOR UPDATE NOWAIT;` |