## Introduction

When PostgreSQL reaches its `max_connections` limit, new connections are rejected with `FATAL: sorry, too many clients already`. This is a common production issue caused by connection leaks, insufficient pooling, or a `max_connections` setting too low for the application's concurrency requirements.

## Symptoms

- `FATAL: sorry, too many clients already` in application logs
- `SELECT count(*) FROM pg_stat_activity` equals `max_connections`
- New application instances fail to start because they cannot connect
- `pg_stat_activity` shows many connections in the `idle` state
- Connection pool reports `connection refused` from PostgreSQL

## Common Causes

- Application not releasing connections back to the pool after use
- `max_connections` set too low for the number of application pods/instances
- Long-running idle transactions holding connections
- Connection leak in application code (connections opened but never closed)
- Multiple applications sharing the same database without connection limits
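The leak pattern behind the first and fourth causes — connections acquired but never returned — can be sketched with a minimal stand-in pool. This is an illustration only; `TinyPool` is a hypothetical class, and a real application would use a library such as `psycopg_pool` or HikariCP:

```python
from contextlib import contextmanager

class TinyPool:
    """Minimal stand-in for a connection pool (illustration only)."""
    def __init__(self, size):
        self.size = size          # mirrors max_connections on the server
        self.in_use = 0

    def acquire(self):
        if self.in_use >= self.size:
            # analogous to PostgreSQL's "too many clients already"
            raise RuntimeError("too many clients already")
        self.in_use += 1
        return object()           # stand-in for a connection

    def release(self, conn):
        self.in_use -= 1

    @contextmanager
    def connection(self):
        conn = self.acquire()
        try:
            yield conn
        finally:
            self.release(conn)    # always returned, even on error

# Leaky pattern: acquire without release exhausts the pool.
leaky = TinyPool(size=2)
leaky.acquire()
leaky.acquire()
try:
    leaky.acquire()
except RuntimeError as e:
    print(e)                      # too many clients already

# Safe pattern: a context manager guarantees the release.
safe = TinyPool(size=2)
for _ in range(10):
    with safe.connection():
        pass                      # connection goes back every iteration
print(safe.in_use)                # 0 -- no leak
```

The same discipline applies with real drivers: always release (or `close()`) in a `finally` block or context manager, never on the happy path only.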

## Step-by-Step Fix

1. **Check current connection usage:**

   ```sql
   -- Connections grouped by state
   SELECT state, count(*) AS connections
   FROM pg_stat_activity
   GROUP BY state
   ORDER BY connections DESC;

   -- Show current max_connections
   SHOW max_connections;
   ```

2. **Identify and terminate idle connections:**

   ```sql
   -- Find idle connections older than 10 minutes
   SELECT pid, usename, application_name, client_addr,
          now() - state_change AS idle_duration,
          query
   FROM pg_stat_activity
   WHERE state = 'idle'
     AND state_change < now() - interval '10 minutes'
   ORDER BY idle_duration DESC;

   -- Terminate idle connections
   SELECT pg_terminate_backend(pid)
   FROM pg_stat_activity
   WHERE state = 'idle'
     AND state_change < now() - interval '10 minutes'
     AND pid != pg_backend_pid();
   ```

3. **Increase `max_connections` if needed:**

   ```sql
   -- Requires a server restart to take effect
   ALTER SYSTEM SET max_connections = 500;

   -- Also increase shared_buffers proportionally (also requires restart)
   ALTER SYSTEM SET shared_buffers = '4GB';
   ```
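Before raising `max_connections`, it is worth a back-of-envelope memory check: `shared_buffers` is allocated once, but each backend can use up to `work_mem` per sort or hash operation. A rough sketch, with illustrative values (not recommendations):

```python
# Back-of-envelope worst-case memory after raising max_connections.
# All values below are illustrative assumptions.
shared_buffers_gb = 4.0    # shared once across all backends
work_mem_mb = 4.0          # per sort/hash operation, per backend
max_connections = 500

# Crude worst case: every backend runs one work_mem-sized operation.
per_backend_gb = work_mem_mb / 1024
worst_case_gb = shared_buffers_gb + max_connections * per_backend_gb
print(f"{worst_case_gb:.2f} GB")   # 5.95 GB
```

Real usage is higher (per-backend overhead, multiple work_mem allocations per query), so treat this as a floor when checking the host's RAM.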

4. **Deploy PgBouncer for connection pooling:**

   ```ini
   ; /etc/pgbouncer/pgbouncer.ini
   [databases]
   production = host=localhost port=5432 dbname=production

   [pgbouncer]
   listen_port = 6432
   pool_mode = transaction
   max_client_conn = 2000
   default_pool_size = 50
   min_pool_size = 10
   reserve_pool_size = 5
   ```
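In `pool_mode = transaction`, many client connections are multiplexed over a small set of server connections, so the settings above trade a large `max_client_conn` for a modest server-side footprint. A quick sanity check of that arithmetic (assuming the `max_connections = 500` set earlier; pool size here applies per database/user pair):

```python
# Server-side footprint of the PgBouncer settings above.
max_client_conn = 2000        # client connections PgBouncer will accept
default_pool_size = 50        # server connections per database/user pair
reserve_pool_size = 5         # extra server connections under load
server_max_connections = 500  # assumed PostgreSQL max_connections

peak_server_conns = default_pool_size + reserve_pool_size
print(peak_server_conns)                     # 55
print(max_client_conn / default_pool_size)   # 40.0 clients per server conn
assert peak_server_conns < server_max_connections
```

If several databases or users share the instance, multiply the pool size by the number of database/user pairs when checking against `max_connections`.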

5. **Set per-user connection limits:**

   ```sql
   ALTER ROLE app_user CONNECTION LIMIT 100;
   ALTER ROLE analytics_user CONNECTION LIMIT 20;
   ```

## Prevention

- Always use a connection pooler (e.g. PgBouncer or Pgpool-II) in production
- Set `idle_in_transaction_session_timeout = '5min'` to close idle-in-transaction sessions automatically
- Set `statement_timeout` on application roles to prevent runaway queries
- Monitor connection count and alert at 80% of `max_connections`
- Use connection limits per role to prevent a single application from exhausting all connections
- Size application-level connection pools deliberately (pool_size = max_connections / app_instances)
- Monitor `pg_stat_activity` for connection leaks in application code
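The pool-sizing rule above can be refined to leave headroom for admin access and the 80% alert threshold. A small sketch with illustrative values (`reserved_for_admin` approximates `superuser_reserved_connections` plus monitoring; all numbers are assumptions):

```python
# Per-instance pool sizing with headroom (sketch of the rule above).
max_connections = 500
app_instances = 8
reserved_for_admin = 10   # superuser_reserved_connections, monitoring, etc.
headroom = 0.8            # stay under the 80% alert threshold

usable = int((max_connections - reserved_for_admin) * headroom)
pool_size_per_instance = usable // app_instances
print(pool_size_per_instance)   # 49
```

Recompute this whenever the instance count changes — autoscaling that adds pods without shrinking per-pod pools is a common way to blow past `max_connections`.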