# MySQL Packet Too Large: Max Allowed Packet

You encounter this error when working with large data:

```bash
ERROR 1153 (08S01): Got a packet bigger than 'max_allowed_packet' bytes
```

Or when loading a large dump file:

```bash
ERROR 2006 (HY000): MySQL server has gone away
```

These errors occur when MySQL receives a packet larger than its configured limit. This commonly happens with large BLOBs, long queries, or bulk inserts.

## Understanding max_allowed_packet

The max_allowed_packet setting controls the maximum size of a single MySQL packet:

- Default: 4MB in MySQL 5.7 (64MB in MySQL 8.0); the hard maximum is 1GB
- Applies to: INSERT data, query strings, BLOB values, LOAD_FILE() results
- Must be set on both client and server for uploads
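Because the whole statement travels as a single packet, a client can avoid the error entirely by checking payload size before sending. A minimal sketch (the 64MB limit is an assumed example value; in practice read the real one with SELECT @@max_allowed_packet):

```python
# Sketch: estimate whether a statement will exceed max_allowed_packet
# before sending it, instead of waiting for ERROR 1153.

def fits_in_packet(statement: str, limit_bytes: int) -> bool:
    """A MySQL statement is sent as one packet, so its encoded
    size must stay under max_allowed_packet."""
    return len(statement.encode("utf-8")) < limit_bytes

limit = 64 * 1024 * 1024  # assumed 64MB server setting
big_insert = "INSERT INTO logs (message) VALUES " + ",".join(
    f"('row {i}')" for i in range(1000)
)
print(fits_in_packet(big_insert, limit))  # a 1000-row insert fits easily
```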

## Step 1: Check Current Limits

Check server limit:

```sql
SHOW VARIABLES LIKE 'max_allowed_packet';
```

Check client limit (MySQL client):

```bash
mysql --help | grep max_allowed_packet
```

Check with different connections:

```sql
-- Current session
SELECT @@max_allowed_packet;

-- Global value
SELECT @@global.max_allowed_packet;

-- Session value
SELECT @@session.max_allowed_packet;
```

Common packet sizes:

```bash
4MB   = 4194304 bytes
16MB  = 16777216 bytes
32MB  = 33554432 bytes
64MB  = 67108864 bytes
128MB = 134217728 bytes
256MB = 268435456 bytes
512MB = 536870912 bytes
1GB   = 1073741824 bytes
```
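These values are just megabytes times 1024 * 1024; a one-line helper avoids typos when writing exact byte counts into SET GLOBAL statements:

```python
# Convert megabytes to the byte values max_allowed_packet expects.
def mb(n: int) -> int:
    return n * 1024 * 1024

print(mb(64))    # 67108864
print(mb(256))   # 268435456
print(mb(1024))  # 1073741824 (the 1GB hard maximum)
```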

## Step 2: Increase Server Limit

### Temporary (Runtime) Change

```sql
-- Set globally (affects new connections; requires appropriate privileges)
SET GLOBAL max_allowed_packet = 67108864;  -- 64MB
```

Note that the session value of max_allowed_packet is read-only, so `SET max_allowed_packet = ...` fails; set the global value and reconnect to pick it up.

This resets when MySQL restarts.

### Permanent Configuration

Edit /etc/mysql/my.cnf or /etc/mysql/mysql.conf.d/mysqld.cnf:

```ini
[mysqld]
max_allowed_packet = 64M
```

Restart MySQL:

```bash
sudo systemctl restart mysql
```

### Verify the Change

```sql
SHOW VARIABLES LIKE 'max_allowed_packet';
```

## Step 3: Increase Client Limit

The MySQL client also has a max_allowed_packet limit that affects sending data.

### Command Line

```bash
# Specify when connecting
mysql -u root -p --max-allowed-packet=64M

# Underscores and an exact byte count also work
mysql -u root -p --max_allowed_packet=67108864
```

### Configuration File

Add to ~/.my.cnf:

```ini
[client]
max_allowed_packet = 64M
```

### For mysqldump

```bash
mysqldump -u root -p --max_allowed_packet=512M database > dump.sql
```

Add to configuration:

```ini
[mysqldump]
max_allowed_packet = 512M
```

## Step 4: Handle Large Data Operations

### Large INSERT Statements

When inserting large datasets:

```sql
-- Problem: Single huge INSERT
INSERT INTO logs (message) VALUES 
('very long message 1...'),
('very long message 2...'),
-- thousands more rows
;
```

Solution 1: Increase packet size

```sql
SET GLOBAL max_allowed_packet = 256*1024*1024;  -- 256MB
-- Reconnect, then run your INSERT
```

Solution 2: Split into batches

```sql
-- Smaller batches
INSERT INTO logs (message) VALUES ('msg1'), ('msg2'), ... ('msg1000');
INSERT INTO logs (message) VALUES ('msg1001'), ('msg1002'), ... ('msg2000');
```

Solution 3: Use LOAD DATA

```sql
LOAD DATA INFILE '/path/to/data.csv' 
INTO TABLE logs 
FIELDS TERMINATED BY ',' 
LINES TERMINATED BY '\n';
```

### Large BLOB/CLOB Data

For storing large binary or text objects:

Check data size:

```sql
SELECT 
    id,
    LENGTH(blob_column) AS size_bytes,
    LENGTH(blob_column) / 1024 / 1024 AS size_mb
FROM my_table
ORDER BY size_bytes DESC
LIMIT 10;
```

Store in chunks:

```sql
-- If files are very large, consider storing in chunks
CREATE TABLE file_chunks (
    file_id INT,
    chunk_num INT,
    chunk_data LONGBLOB,
    PRIMARY KEY (file_id, chunk_num)
);
```

Or store file paths instead:

```sql
-- Store path to file instead of file content
ALTER TABLE documents ADD COLUMN file_path VARCHAR(255);
UPDATE documents SET file_path = '/var/files/doc_123.pdf' WHERE id = 123;
```

### Large Query Strings

For queries with very long IN clauses:

Problem:

```sql
SELECT * FROM products WHERE id IN (1, 2, 3, ..., 100000);
```

Solution 1: Use a temporary table

```sql
CREATE TEMPORARY TABLE temp_ids (id INT PRIMARY KEY);
INSERT INTO temp_ids VALUES (1), (2), (3), ... (100000);

SELECT p.*
FROM products p
INNER JOIN temp_ids t ON p.id = t.id;

DROP TEMPORARY TABLE temp_ids;
```

Solution 2: Use BETWEEN for ranges

```sql
-- Instead of thousands of IDs
SELECT * FROM products WHERE id BETWEEN 1 AND 100000;
```

Solution 3: Batch the queries

```sql
-- Process in batches
SELECT * FROM products WHERE id IN (1, 2, ..., 1000);
SELECT * FROM products WHERE id IN (1001, 1002, ..., 2000);
-- etc.
```
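Batching like this is easy to automate: split the ID list into fixed-size groups and issue one parameterized IN query per group. A sketch (the products table and the cursor call are illustrative):

```python
# Sketch: split a long ID list into fixed-size batches, one IN query each.
def id_batches(ids, batch_size=1000):
    for i in range(0, len(ids), batch_size):
        yield ids[i:i + batch_size]

ids = list(range(1, 2501))
for batch in id_batches(ids):
    placeholders = ",".join(["%s"] * len(batch))
    sql = f"SELECT * FROM products WHERE id IN ({placeholders})"
    # cursor.execute(sql, batch)  # hypothetical DB-API cursor

print(sum(1 for _ in id_batches(ids)))  # 3 batches: 1000 + 1000 + 500
```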

## Step 5: Fix "MySQL Server Has Gone Away"

This error often relates to packet size during imports:

Error during import:

```bash
ERROR 2006 (HY000): MySQL server has gone away
```

Solutions:

```bash
# Increase packet size for import
mysql -u root -p --max-allowed-packet=256M database < large_dump.sql

# Or connect interactively and raise the server limit first
mysql -u root -p
```

```sql
SET GLOBAL max_allowed_packet = 256*1024*1024;
SET GLOBAL net_buffer_length = 16384;
-- Reconnect so the new global limit applies to your session, then:
SOURCE /path/to/large_dump.sql;
```

For mysqldump output:

Regenerate the dump with appropriate settings:

```bash
mysqldump -u root -p --max_allowed_packet=512M --net-buffer-length=16384 database > dump.sql
```

## Step 6: Application Configuration

### PHP

```php
// The per-session value is read-only; raise the global limit instead
// (requires a privileged account; applies to new connections)
$mysqli = new mysqli('localhost', 'user', 'pass', 'db');
$mysqli->query("SET GLOBAL max_allowed_packet = 67108864"); // 64MB
```

For a permanent fix, set the limit in the server's my.cnf as shown in Step 2.

### Python (MySQL Connector)

```python
import mysql.connector

conn = mysql.connector.connect(
    host='localhost',
    user='user',
    password='pass',
    database='db',
)
# mysql.connector has no client-side packet-size option;
# the limit is enforced by the server, so raise it there (see Step 2).
```

### Python (PyMySQL)

```python
import pymysql

conn = pymysql.connect(
    host='localhost',
    user='user',
    password='pass',
    database='db',
    max_allowed_packet=67108864,  # client-side limit, 64MB
)
```

### Java (JDBC)

```java
// In the connection URL (Connector/J client-side limit)
String url = "jdbc:mysql://localhost/db?maxAllowedPacket=67108864";

// Or raise the server's global limit after connecting
// (requires a privileged account; applies to new connections)
Statement stmt = conn.createStatement();
stmt.execute("SET GLOBAL max_allowed_packet = 67108864");
```

### Node.js (mysql2)

```javascript
const mysql = require('mysql2');

const connection = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'pass',
  database: 'db',
});

// mysql2 follows the server's limit; raise it globally if needed
// (requires a privileged account; applies to new connections)
connection.query('SET GLOBAL max_allowed_packet = 67108864');
```

## Step 7: Related Settings

net_buffer_length: Initial buffer size for each connection; it grows as needed up to max_allowed_packet

```sql
SHOW VARIABLES LIKE 'net_buffer_length';
```

Set in configuration:

```ini
[mysqld]
net_buffer_length = 16K
```

connect_timeout: Seconds the server waits for a connect packet before giving up on the handshake

```ini
[mysqld]
connect_timeout = 60
```

wait_timeout: Seconds of inactivity before the server closes an idle connection

```ini
[mysqld]
wait_timeout = 28800
```

For replication:

```ini
[mysqld]
max_allowed_packet = 64M
# renamed to replica_max_allowed_packet in MySQL 8.0.26+
slave_max_allowed_packet = 64M
```

## Step 8: Monitor and Debug

Check current queries:

```sql
SELECT 
    id,
    user,
    host,
    db,
    LENGTH(info) AS query_length,
    state
FROM information_schema.processlist
WHERE LENGTH(info) > 1000000
ORDER BY query_length DESC;
```

Log large packets:

Add to configuration:

```ini
[mysqld]
log_error_verbosity = 3
# log_warnings applies to MySQL 5.7; it was removed in 8.0
log_warnings = 2
```

Check for packet errors in the server error log:

```bash
grep -i "max_allowed_packet" /var/log/mysql/error.log
```

Clients that disconnect after hitting the limit also show up in the aborted-connection counters:

```sql
SHOW GLOBAL STATUS LIKE 'Aborted_c%';
```

## Step 9: Best Practices for Large Data

Use streaming for large inserts:

```python
# Python example - batch inserts
import mysql.connector

conn = mysql.connector.connect(...)
cursor = conn.cursor()

batch_size = 1000
batch = []

for record in large_dataset:
    batch.append(record)
    if len(batch) >= batch_size:
        cursor.executemany(
            "INSERT INTO logs (col1, col2) VALUES (%s, %s)",
            batch
        )
        conn.commit()
        batch = []

# Insert remaining
if batch:
    cursor.executemany("INSERT INTO logs (col1, col2) VALUES (%s, %s)", batch)
    conn.commit()
```

Use LOAD DATA for bulk imports:

```sql
-- Faster than INSERT for large datasets
-- (requires the FILE privilege and a path allowed by secure_file_priv)
LOAD DATA INFILE '/path/to/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
```

Compress large data:

```sql
-- Compress before storing
INSERT INTO documents (id, compressed_data)
VALUES (1, COMPRESS('very long text...'));

-- Decompress when reading
SELECT UNCOMPRESS(compressed_data) FROM documents WHERE id = 1;
```
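You can also compress on the client so the packet itself is smaller. To my understanding, MySQL's COMPRESS() format is a 4-byte little-endian uncompressed length followed by zlib-deflated data; a sketch of a compatible client-side pair (verify the format against your server before relying on it):

```python
import struct
import zlib

# Client-side equivalent of MySQL's COMPRESS()/UNCOMPRESS(),
# assuming the 4-byte-length-prefix + zlib format described above.

def mysql_compress(data: bytes) -> bytes:
    if not data:
        return b""  # COMPRESS('') is the empty string
    return struct.pack("<I", len(data)) + zlib.compress(data)

def mysql_uncompress(blob: bytes) -> bytes:
    if not blob:
        return b""
    (length,) = struct.unpack("<I", blob[:4])
    out = zlib.decompress(blob[4:])
    assert len(out) == length  # stored length must match
    return out

payload = b"very long text..." * 1000
packed = mysql_compress(payload)
print(len(packed) < len(payload))           # True: repetitive text shrinks a lot
print(mysql_uncompress(packed) == payload)  # True: lossless round trip
```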

## Verification

After increasing the limit:

```sql
-- Verify setting
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Test with a large insert (the column must be LONGTEXT or LONGBLOB,
-- and the limit at least 64MB for this 60MB value)
INSERT INTO test_table (data) VALUES (REPEAT('x', 60000000));

-- Should succeed without error
```

## Quick Reference

```sql
-- Check current limit
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Temporarily increase (global; the session value is read-only)
SET GLOBAL max_allowed_packet = 67108864;  -- 64MB in bytes
```

Permanent (in my.cnf):

```ini
[mysqld]
max_allowed_packet = 64M
```

```bash
# Client with larger packet
mysql -u root -p --max-allowed-packet=64M

# mysqldump with larger packet
mysqldump -u root -p --max-allowed-packet=256M db > dump.sql

# Import with larger packet
mysql -u root -p --max-allowed-packet=256M db < dump.sql
```

The max_allowed_packet setting is crucial when working with large data. Set it appropriately on both server and client, and consider alternative approaches like batching or LOAD DATA for very large operations.