## Introduction

Phantom reads occur when a transaction re-executes a query and discovers new rows inserted by a concurrent committed transaction. Under MySQL's default REPEATABLE READ isolation, InnoDB prevents phantoms for plain SELECTs via its MVCC snapshot and for locking reads via gap (next-key) locks, but phantoms can still surface when a transaction mixes snapshot reads with locking reads or writes. Under PostgreSQL's REPEATABLE READ, phantom reads are prevented: the snapshot is taken at the transaction's first query, so inserts committed after that point stay invisible until the transaction ends.
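
A minimal two-session timeline of the anomaly under READ COMMITTED (where both engines allow phantoms), assuming the `transactions` table used throughout this guide:

```sql
-- Session A
BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT COUNT(*) FROM transactions
WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01';  -- e.g. 100

-- Session B, meanwhile:
--   INSERT INTO transactions (amount, created_at)
--   VALUES (49.99, '2026-04-15');
--   COMMIT;

-- Session A re-reads and sees the phantom row
SELECT COUNT(*) FROM transactions
WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01';  -- now 101
COMMIT;
```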

## Symptoms

- Financial report totals differ between consecutive runs within the same transaction
- Aggregated revenue counts don't match individual line item sums
- End-of-day reconciliation shows missing transactions
- Audit queries return inconsistent row counts when run twice in the same session

## Common Causes

- Using READ COMMITTED isolation for financial aggregate queries
- Concurrent batch inserts happening during long-running report generation
- Locking range reads under MySQL's REPEATABLE READ that don't acquire the gap locks needed to keep the range stable
- Report queries spanning multiple transactions instead of a single consistent snapshot (see the sketch below)
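
A minimal sketch of that last anti-pattern: with autocommit on, each statement gets its own snapshot, so the two aggregates can describe different sets of rows:

```sql
-- Anti-pattern: two autocommit statements, two snapshots
SELECT COUNT(*) FROM transactions
WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01';

-- A concurrent batch insert can commit here

SELECT SUM(amount) FROM transactions
WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01';
-- COUNT and SUM may no longer describe the same rows
```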

## Step-by-Step Fix

1. **Identify the current transaction isolation level**:

   ```sql
   -- MySQL
   SELECT @@GLOBAL.transaction_isolation, @@SESSION.transaction_isolation;

   -- PostgreSQL
   SHOW transaction_isolation;
   ```
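
   If the session default needs raising before applying the fixes below, both engines accept a session-level override; a minimal sketch:

   ```sql
   -- MySQL: applies to all subsequent transactions in this session
   SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;

   -- PostgreSQL: same effect, different syntax
   SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;
   ```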

2. **Use SERIALIZABLE isolation for financial reports** (PostgreSQL syntax; a MySQL equivalent follows):

   ```sql
   BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

   SELECT DATE(created_at) AS report_date,
          COUNT(*) AS transaction_count,
          SUM(amount) AS total_revenue
   FROM transactions
   -- Half-open range: BETWEEN '2026-04-01' AND '2026-04-30' would drop
   -- rows timestamped after midnight on April 30
   WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01'
   GROUP BY DATE(created_at);

   COMMIT;
   ```
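
   A minimal MySQL equivalent, where the level is set just before the transaction starts (plain `SET TRANSACTION` applies only to the next transaction):

   ```sql
   SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
   START TRANSACTION;

   SELECT DATE(created_at) AS report_date,
          COUNT(*) AS transaction_count,
          SUM(amount) AS total_revenue
   FROM transactions
   WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01'
   GROUP BY DATE(created_at);

   COMMIT;
   ```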

3. **Catch serialization failures and retry**:

   ```python
   import psycopg2
   from psycopg2.extensions import ISOLATION_LEVEL_SERIALIZABLE

   def generate_report(conn, start_date, end_date):
       max_retries = 3
       # Must be called while no transaction is open on the connection
       conn.set_isolation_level(ISOLATION_LEVEL_SERIALIZABLE)
       for attempt in range(max_retries):
           try:
               with conn.cursor() as cur:
                   cur.execute("""
                       SELECT COUNT(*), SUM(amount) FROM transactions
                       WHERE created_at >= %s AND created_at < %s
                   """, (start_date, end_date))
                   result = cur.fetchone()
               conn.commit()
               return result
           except psycopg2.Error as e:
               conn.rollback()
               if e.pgcode == '40001':  # serialization_failure
                   continue
               raise
       raise Exception("Failed after retries")
   ```
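
   The whole transaction is re-run from the top on SQLSTATE `40001`, which is safe here because the report is read-only; a transaction with side effects would also need its non-database work to be idempotent before retrying.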

4. **Use explicit table locks for critical aggregate operations** (PostgreSQL syntax):

   ```sql
   BEGIN;
   -- SHARE MODE allows concurrent reads but blocks concurrent writes
   LOCK TABLE transactions IN SHARE MODE;
   SELECT COUNT(*), SUM(amount) FROM transactions
   WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01';
   COMMIT;
   ```
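
   A rough MySQL analogue, assuming the report can run outside a transaction (`LOCK TABLES` implicitly commits any open transaction):

   ```sql
   -- READ lock: other sessions can read but not write
   LOCK TABLES transactions READ;
   SELECT COUNT(*), SUM(amount) FROM transactions
   WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01';
   UNLOCK TABLES;
   ```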
5. **Add covering indexes to reduce lock duration** (PostgreSQL partial index):

   ```sql
   CREATE INDEX idx_transactions_report
   ON transactions (created_at, amount)
   WHERE created_at >= '2026-01-01';
   ```
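
   To confirm the index actually covers the report query, check the plan for an index-only scan (this also depends on the table being vacuumed recently enough for the visibility map to help):

   ```sql
   EXPLAIN (ANALYZE, BUFFERS)
   SELECT COUNT(*), SUM(amount) FROM transactions
   WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01';
   -- Look for "Index Only Scan using idx_transactions_report"
   ```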

## Prevention

- Always use SERIALIZABLE isolation for financial reconciliation queries
- Schedule reports during low-write windows when possible
- Use materialized views refreshed on a schedule instead of real-time aggregation (see the sketch below)
- Implement idempotent report generation with versioned snapshots
- Monitor for serialization failures (SQLSTATE `40001`) in application logs and database error metrics
- Add application-level retry logic with exponential backoff for SERIALIZABLE transactions
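
A minimal materialized-view sketch of the scheduled-refresh approach; the view name `daily_revenue` is illustrative, and `REFRESH ... CONCURRENTLY` requires a unique index on the view:

```sql
CREATE MATERIALIZED VIEW daily_revenue AS
SELECT DATE(created_at) AS report_date,
       COUNT(*) AS transaction_count,
       SUM(amount) AS total_revenue
FROM transactions
GROUP BY DATE(created_at);

CREATE UNIQUE INDEX ON daily_revenue (report_date);

-- Run from a scheduler; CONCURRENTLY avoids blocking readers
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_revenue;
```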