# PostgreSQL Connection Limit Exceeded - Troubleshooting Guide

The dreaded "FATAL: sorry, too many clients already" error means PostgreSQL has hit its connection limit. This typically indicates either a connection leak in your application or insufficient connection pooling. Let's diagnose and fix the root cause.

## Understanding Connection Limits

```bash
# Check the current connection limit (default is typically 100)
psql -U postgres -c "SHOW max_connections;"
```

```sql
-- View current connection count
SELECT count(*) AS total_connections FROM pg_stat_activity;

-- View connections by state
SELECT state, count(*) AS count
FROM pg_stat_activity
GROUP BY state
ORDER BY count DESC;

-- View connections by database
SELECT datname AS database, count(*) AS connections
FROM pg_stat_activity
GROUP BY datname
ORDER BY connections DESC;

-- View connections by user and application
SELECT usename AS username, application_name, client_addr, count(*) AS connections
FROM pg_stat_activity
GROUP BY usename, application_name, client_addr
ORDER BY connections DESC;
```

## Immediate Recovery

When you can't connect because the connection limit is exhausted, try these approaches:

### Method 1: Use Reserved Superuser Connections

```bash
# PostgreSQL reserves slots for superusers (superuser_reserved_connections, default 3)
psql -U postgres -c "SHOW superuser_reserved_connections;"

# Connect as a superuser to diagnose
psql -U postgres

# If even that fails, the reserved superuser slots are exhausted too;
# free connections at the operating-system level (Method 3) before retrying
```

### Method 2: Terminate Idle Connections

```sql
-- Terminate idle connections (be careful in production);
-- state_change marks when the session went idle, which is what we want to filter on
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND usename NOT IN ('postgres')
  AND state_change < now() - interval '5 minutes';

-- More aggressive: terminate all but the current session
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid()
  AND usename NOT IN ('postgres');
```

### Method 3: Kill via the Operating System

```bash
# List PostgreSQL processes
ps aux | grep postgres

# Identify idle connections
ps aux | grep "postgres:" | grep "idle"

# Kill a specific backend (last resort; SIGTERM lets it exit cleanly)
kill -TERM <pid>

# Never use kill -9 on a backend: it forces a full server restart and crash recovery

# Kill all idle-in-transaction backends (very aggressive)
pkill -f "idle in transaction"
```

## Diagnosing Connection Leaks

A connection leak occurs when applications open connections but fail to close them:

```sql
-- Find long-running connections
SELECT pid, usename, application_name, client_addr, state,
       query_start, now() - query_start AS duration, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;

-- Find connections waiting on something
SELECT pid, usename, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event IS NOT NULL
ORDER BY query_start;

-- Find idle-in-transaction connections (dangerous: they hold locks and block vacuum!)
SELECT pid, usename, application_name, state,
       query_start, now() - query_start AS idle_duration
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY query_start;
```
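The idle-in-transaction check can also be scripted for cron-style audits. A minimal sketch in Python; the row tuples and the 10-minute threshold are illustrative assumptions, and in practice you would populate the rows from `pg_stat_activity`:

```python
from datetime import timedelta

# Hypothetical snapshot rows: (pid, state, idle_duration) from pg_stat_activity
def pids_to_terminate(rows, max_idle=timedelta(minutes=10)):
    """Return pids of idle-in-transaction sessions idle longer than max_idle."""
    return [pid for pid, state, idle in rows
            if state == "idle in transaction" and idle > max_idle]

rows = [
    (101, "active", timedelta(seconds=2)),
    (102, "idle in transaction", timedelta(minutes=45)),
    (103, "idle in transaction", timedelta(minutes=3)),
]
print(pids_to_terminate(rows))  # → [102]
```

Each returned pid would then be passed to `pg_terminate_backend()` as in the SQL above.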

### Application Connection Leak Patterns

```python
# Python - BAD (connection leak)
def get_user(user_id):
    conn = psycopg2.connect(dsn)
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
    return cursor.fetchone()  # Connection never closed!

# Python - GOOD (connection always closed)
def get_user(user_id):
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
            return cursor.fetchone()
    finally:
        conn.close()  # Note: psycopg2's "with conn:" commits/rolls back but does NOT close
```
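Because psycopg2's connection context manager only manages the transaction, a small generic wrapper can guarantee the close. This is a sketch that works with any DB-API-style `connect` callable passed in; the demo uses a stand-in object so no database is needed:

```python
from contextlib import contextmanager

@contextmanager
def pg_connection(connect, dsn):
    """Open a connection and always close it, even if the body raises."""
    conn = connect(dsn)
    try:
        yield conn
    finally:
        conn.close()

# Demo with a fake connection object (no database required)
class FakeConn:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

fake = FakeConn()
try:
    with pg_connection(lambda dsn: fake, "dbname=test") as conn:
        raise RuntimeError("query failed")
except RuntimeError:
    pass
print(fake.closed)  # → True
```

In real code you would pass `psycopg2.connect` as the `connect` argument.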

```javascript
// Node.js - BAD (connection leak)
function getUser(userId) {
  const client = new Client(config);
  client.connect();
  return client.query('SELECT * FROM users WHERE id = $1', [userId]);
  // Client never ended!
}

// Node.js - GOOD (using async/await)
async function getUser(userId) {
  const client = new Client(config);
  try {
    await client.connect();
    const result = await client.query('SELECT * FROM users WHERE id = $1', [userId]);
    return result.rows[0];
  } finally {
    await client.end();
  }
}
```

## Connection Pooling Configuration

Without connection pooling, each application instance opens its own connections:

```bash
# Install PgBouncer (recommended pooler)
sudo apt-get install pgbouncer
```

```ini
; Configuration file: /etc/pgbouncer/pgbouncer.ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
```

### Connection Pool Settings Explained

```ini
; Pool mode options:
;   session     - one server connection per client connection (least efficient)
;   transaction - server connection released after each transaction (recommended)
;   statement   - server connection released after each statement (limitations apply)

; Size calculations:
;   max_client_conn   = max expected concurrent clients
;   default_pool_size = max connections to PostgreSQL per database/user pair
;                     = (target_concurrent_queries * avg_query_time) / avg_client_wait_time
;   Typical: 20-50 for most workloads

; Example for 100 concurrent clients, 25 ms avg query time, 100 ms wait budget:
;   default_pool_size = (100 * 0.025) / 0.1 = 25 connections to PostgreSQL
```
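The sizing formula above is simple enough to sanity-check in code. A sketch; the function name and the 100 ms default wait budget are illustrative assumptions, not part of PgBouncer:

```python
def pool_size(concurrent_clients: int, avg_query_time_s: float,
              avg_client_wait_s: float = 0.1) -> int:
    """Estimate default_pool_size from the guide's sizing formula."""
    return max(1, round((concurrent_clients * avg_query_time_s) / avg_client_wait_s))

# Matches the worked example: 100 clients, 25 ms queries, 100 ms wait budget
print(pool_size(100, 0.025))  # → 25
```

The result feeds directly into `default_pool_size` in pgbouncer.ini.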

## Increasing max_connections

If connection pooling isn't enough:

```bash
# Edit postgresql.conf
sudo nano /etc/postgresql/16/main/postgresql.conf
```

```ini
# Increase the connection limit
max_connections = 200

# Budget memory too: each backend is a separate process, and each can use
# up to work_mem per sort/hash operation (default work_mem is 4MB)
shared_buffers = 256MB   # revisit if max_connections increases significantly
```

### Calculate Memory Impact

```sql
-- Estimate worst-case work_mem usage (assumes work_mem is reported in MB)
SELECT
    current_setting('max_connections')::int AS max_conn,
    current_setting('work_mem') AS work_mem,
    (current_setting('max_connections')::int *
     replace(current_setting('work_mem'), 'MB', '')::int) AS total_work_mem_mb,
    (current_setting('max_connections')::int *
     replace(current_setting('work_mem'), 'MB', '')::int) / 1024 AS total_work_mem_gb;
```

For high connection counts, use PgBouncer rather than raising max_connections: each PostgreSQL connection is a separate server process, so performance degrades as connection counts grow.
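The same worst-case arithmetic as the SQL above, as a quick script. The parsing deliberately handles only the `MB` unit, matching the SQL's `replace()` trick:

```python
def work_mem_budget_mb(max_connections: int, work_mem: str) -> int:
    """Worst case: every backend runs one work_mem-sized sort/hash at once."""
    assert work_mem.endswith("MB"), "sketch only handles MB units"
    return max_connections * int(work_mem[:-2])

# 200 connections at the 4MB default work_mem
print(work_mem_budget_mb(200, "4MB"))  # → 800
```

Real usage is usually far below this ceiling, but the ceiling is what must fit in RAM alongside shared_buffers.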

## Connection Timeout Settings

```bash
# Set connection timeouts in postgresql.conf
sudo nano /etc/postgresql/16/main/postgresql.conf
```

```ini
# Connection-related timeouts
authentication_timeout = 1min                # Max time allowed for authentication
idle_in_transaction_session_timeout = 10min  # Kill idle transactions after 10 min
idle_session_timeout = 30min                 # Kill fully idle sessions (PostgreSQL 14+)
```

## Kill Idle Transactions Automatically

```sql
-- Set a global idle-transaction timeout (PostgreSQL 9.6+)
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
SELECT pg_reload_conf();

-- Or per session
SET idle_in_transaction_session_timeout = '5min';
```

## Monitoring Connections

```sql
-- Create a connection monitoring view
CREATE OR REPLACE VIEW connection_monitor AS
SELECT pid, datname AS database, usename AS username, application_name,
       client_addr, state, wait_event_type, wait_event,
       query_start, now() - query_start AS duration, query
FROM pg_stat_activity
WHERE datname IS NOT NULL
ORDER BY query_start;

-- Quick health check
SELECT state, count(*) AS count,
       string_agg(DISTINCT usename, ', ') AS users
FROM pg_stat_activity
GROUP BY state;

-- Find connections per client IP (client_addr is already of type inet)
SELECT client_addr, count(*) AS connections,
       array_agg(DISTINCT usename) AS users
FROM pg_stat_activity
GROUP BY client_addr
ORDER BY connections DESC;
```

## Setting Connection Limits Per User/Database

```sql
-- Limit connections per user
ALTER USER webapp CONNECTION LIMIT 50;
ALTER USER reporting CONNECTION LIMIT 10;
ALTER USER admin CONNECTION LIMIT 5;

-- Limit connections per database
ALTER DATABASE production CONNECTION LIMIT 100;
ALTER DATABASE staging CONNECTION LIMIT 50;

-- Check limits
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolconnlimit <> -1;

SELECT datname, datconnlimit FROM pg_database WHERE datconnlimit <> -1;
```

## Best Practices

1. Use connection pooling: PgBouncer or built-in pooling in frameworks
2. Monitor connection counts: set up alerts at 80% of max_connections
3. Set idle timeouts: prevent abandoned connections
4. Use connection limits per user/database: keep one app from consuming all connections
5. Close connections properly: always use try/finally or context managers
6. Size work_mem appropriately: more connections means a lower per-connection work_mem budget
7. Monitor for leaks: regularly audit connection durations
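Item 2 in the list above is straightforward to script. A minimal sketch; the 80% threshold is the one suggested here, and in practice the inputs would come from `count(*)` on `pg_stat_activity` and `SHOW max_connections`:

```python
def connection_alert(current: int, max_connections: int,
                     threshold: float = 0.8) -> bool:
    """True when connection usage reaches the alert threshold (80% by default)."""
    return current >= max_connections * threshold

print(connection_alert(85, 100))  # → True
print(connection_alert(60, 100))  # → False
```

Wire the boolean into whatever alerting channel you already use (Nagios check, Prometheus gauge, cron email).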

Connection limit issues are almost always a sign of either missing connection pooling or application bugs. The fix is rarely to increase max_connections; it's to fix the leak or add proper pooling.