## Introduction

MySQL limits the maximum size of a single communication packet with `max_allowed_packet`. When inserting large BLOBs, long TEXT fields, or bulk INSERT statements that exceed this limit, the connection is dropped with `ERROR 2006 (HY000): MySQL server has gone away` or `ERROR 1153 (HY000): Got a packet bigger than max_allowed_packet bytes`.

## Symptoms

- `Got a packet bigger than 'max_allowed_packet' bytes` error
- `MySQL server has gone away` after attempting a large INSERT
- Connection drops silently with no explicit error message
- `SHOW WARNINGS` shows `Data too long for column` for BLOB fields
- Replication fails with packet size errors on the replica

## Common Causes

- Inserting images, PDFs, or other files larger than `max_allowed_packet`
- Bulk INSERT statements with many rows exceeding the packet limit
- `max_allowed_packet` set differently on the primary and the replica
- Client library default packet size smaller than the server setting
- Replication events exceeding the replica's `max_allowed_packet`

## Step-by-Step Fix

1. **Check the current max_allowed_packet settings:**

```sql
-- Server settings
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'slave_max_allowed_packet';

-- Current value in bytes
SELECT @@global.max_allowed_packet;
```
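Before sending a large statement, you can compare its encoded size against the value returned by `SELECT @@global.max_allowed_packet`. A minimal sketch (the `fits_in_packet` helper and its 10% safety margin are assumptions for illustration, not part of any MySQL API):

```python
def fits_in_packet(statement: bytes, max_allowed_packet: int,
                   safety_margin: float = 0.1) -> bool:
    """Return True if the encoded statement fits under the server's
    max_allowed_packet (bytes), leaving some headroom for overhead."""
    return len(statement) <= max_allowed_packet * (1 - safety_margin)

# A ~70 MB INSERT against the 64 MB limit used in this guide:
stmt = b"INSERT INTO blobs (data) VALUES (" + b"x" * (70 * 1024 * 1024) + b")"
print(fits_in_packet(stmt, 67108864))  # False
```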

2. **Increase max_allowed_packet on both server and client:**

```sql
-- On the server (takes effect for new sessions; lost on restart)
SET GLOBAL max_allowed_packet = 67108864; -- 64MB

-- To persist across restarts, in my.cnf:
-- [mysqld]
-- max_allowed_packet = 64M
-- [mysqldump]
-- max_allowed_packet = 64M
```
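my.cnf accepts `K`/`M`/`G` size suffixes, so the `64M` above is the same 67108864 bytes passed to `SET GLOBAL`. A small helper (hypothetical, for illustrating the conversion) makes the equivalence explicit:

```python
_SUFFIXES = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3}

def packet_size_to_bytes(value: str) -> int:
    """Convert a my.cnf-style size such as '64M' to bytes."""
    value = value.strip().upper()
    if value and value[-1] in _SUFFIXES:
        return int(value[:-1]) * _SUFFIXES[value[-1]]
    return int(value)

print(packet_size_to_bytes('64M'))  # 67108864
```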

3. **For the client, specify the packet size in the connection:**

```bash
# Command-line client
mysql --max_allowed_packet=64M -u root -p

# mysqldump
mysqldump --max_allowed_packet=64M -u root -p mydb > backup.sql
```

```python
# Python (mysql-connector-python): the pure-Python connector splits
# oversized payloads into protocol packets itself, so the server-side
# max_allowed_packet is the limit that matters; connect normally.
import mysql.connector

conn = mysql.connector.connect(
    host='localhost',
    user='root',
    password='pass',
    database='mydb',
)
```
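In application code, the two error numbers from the introduction (1153 and 2006) can be detected and used to decide whether a failed statement should be split up and retried. A sketch, assuming `mysql.connector` exceptions that expose an `errno` attribute (which `mysql.connector.Error` does); the helper name is hypothetical:

```python
# Error numbers from the errors quoted in the introduction:
# 1153 = ER_NET_PACKET_TOO_LARGE, 2006 = CR_SERVER_GONE_ERROR
PACKET_ERRNOS = {1153, 2006}

def is_packet_size_error(errno: int) -> bool:
    """True if the MySQL error number suggests max_allowed_packet was hit."""
    return errno in PACKET_ERRNOS
```

In practice you would wrap `cursor.execute(...)` in `try/except mysql.connector.Error as err:` and check `is_packet_size_error(err.errno)` before retrying with smaller batches.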

4. **For very large BLOBs, store files outside the database:**

```python
import hashlib
import os

def store_file(cursor, filepath, metadata):
    with open(filepath, 'rb') as f:
        file_hash = hashlib.sha256(f.read()).hexdigest()

    # Store metadata in MySQL; keep the file itself on disk or in S3
    cursor.execute(
        "INSERT INTO files (hash, filepath, size, content_type, metadata) "
        "VALUES (%s, %s, %s, %s, %s)",
        (file_hash, filepath, os.path.getsize(filepath),
         'application/pdf', metadata)
    )
```

5. **Ensure the replica can handle the same packet sizes:**

```sql
-- On the replica
SET GLOBAL max_allowed_packet = 67108864;
-- (named replica_max_allowed_packet in MySQL 8.0.26+)
SET GLOBAL slave_max_allowed_packet = 67108864;

-- In the replica's my.cnf:
-- [mysqld]
-- max_allowed_packet = 64M
-- slave_max_allowed_packet = 64M
```

## Prevention

- Set `max_allowed_packet` to the same value on both primary and replica
- Store large files (images, documents) in object storage (S3) rather than BLOB columns
- If BLOBs are necessary, set `max_allowed_packet` to at least 2x the largest expected object
- Monitor `Aborted_clients` and `Aborted_connects`, which can indicate packet size issues
- Use chunked uploads for very large files (split into 1MB chunks)
- Set client-side `max_allowed_packet` in all application connection configurations
- Document the maximum BLOB/TEXT size supported by your MySQL configuration
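The chunked-upload approach mentioned above can be sketched as follows. This is a minimal illustration, not a complete implementation; the `file_chunks` table and the 1 MB chunk size are assumptions:

```python
CHUNK_SIZE = 1024 * 1024  # 1 MB, far below any sane max_allowed_packet

def iter_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (sequence_number, chunk) pairs covering the payload."""
    for seq, start in enumerate(range(0, len(data), chunk_size)):
        yield seq, data[start:start + chunk_size]

def store_chunked(cursor, file_id: str, data: bytes):
    # Each INSERT carries at most CHUNK_SIZE bytes, so no single
    # statement approaches the packet limit.
    for seq, chunk in iter_chunks(data):
        cursor.execute(
            "INSERT INTO file_chunks (file_id, seq, chunk) "
            "VALUES (%s, %s, %s)",
            (file_id, seq, chunk),
        )
```

Reassembly is then a `SELECT chunk FROM file_chunks WHERE file_id = %s ORDER BY seq` followed by joining the chunks client-side.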