# PostgreSQL Too Many Connections: Max Connections Limit

Your application is throwing errors and users can't connect:

```bash
FATAL:  sorry, too many clients already
FATAL:  remaining connection slots are reserved for non-replication superuser connections
```

PostgreSQL has a hard limit on concurrent connections. When you hit this limit, new connections are refused. Let's diagnose and fix this systematically.

## Understanding the Connection Limit

PostgreSQL's max_connections parameter sets the maximum number of concurrent connections. The default is typically 100.

Check your current limits:

```sql
-- Current max_connections setting
SHOW max_connections;

-- Connections currently in use
SELECT count(*) FROM pg_stat_activity;

-- Available slots
SELECT setting::int AS max_connections,
       (SELECT count(*) FROM pg_stat_activity) AS current_connections,
       setting::int - (SELECT count(*) FROM pg_stat_activity) AS available_slots
FROM pg_settings
WHERE name = 'max_connections';
```

## Immediate Resolution

### Free Up Connections Now

If you're locked out, first try to connect as superuser (reserved slots exist for this):

```bash
# Superusers have reserved connections
psql -U postgres -d postgres

# If that fails, use single-user mode
postgres --single -D /var/lib/postgresql/data postgres
```

Once connected, find and terminate problematic connections:

```sql
-- View all connections
SELECT pid, usename, application_name, client_addr, state, query_start, query
FROM pg_stat_activity
ORDER BY query_start;

-- Find idle connections (state_change marks when the backend went idle;
-- query_start only marks when its last query began)
SELECT pid, usename, application_name, client_addr, state, state_change,
       now() - state_change AS idle_duration
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY idle_duration DESC;

-- Terminate a specific connection
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid = 12345;

-- Terminate all connections idle for more than 10 minutes
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < now() - interval '10 minutes'
  AND pid <> pg_backend_pid();

-- Terminate all connections from a specific user
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE usename = 'problematic_user'
  AND pid <> pg_backend_pid();
```

### Increase Connection Limit (Temporary Fix)

To allow more connections, modify postgresql.conf:

```sql
-- Check current value
SHOW max_connections;

-- View config file location
SHOW config_file;
```

Edit the configuration:

```conf
# postgresql.conf
max_connections = 200
```

Important: Each connection uses memory. Ensure you have enough:

```sql
-- Check shared_buffers and work_mem
SHOW shared_buffers;
SHOW work_mem;

-- Rough upper bound:
-- peak memory ≈ shared_buffers + (max_connections × work_mem × sorts/hashes per query)
-- Note: maintenance_work_mem applies to maintenance commands (VACUUM, CREATE INDEX),
-- not to every connection, so don't multiply it by max_connections.
```

After editing, restart PostgreSQL:

```bash
sudo systemctl restart postgresql
```

## Finding the Root Cause

### Analyze Connection Sources

```sql
-- Connections by user
SELECT usename, count(*) AS connection_count
FROM pg_stat_activity
GROUP BY usename
ORDER BY connection_count DESC;

-- Connections by application
SELECT application_name, count(*) AS connection_count
FROM pg_stat_activity
GROUP BY application_name
ORDER BY connection_count DESC;

-- Connections by client IP
SELECT client_addr, count(*) AS connection_count
FROM pg_stat_activity
WHERE client_addr IS NOT NULL
GROUP BY client_addr
ORDER BY connection_count DESC;

-- Connections by database
SELECT datname, count(*) AS connection_count
FROM pg_stat_activity
GROUP BY datname
ORDER BY connection_count DESC;
```

### Identify Connection Leaks

```sql
-- Find connections idle for extended periods
-- (state_change marks when the backend entered its current state)
SELECT pid, usename, application_name, client_addr, state, state_change,
       now() - state_change AS idle_time
FROM pg_stat_activity
WHERE state IN ('idle', 'idle in transaction', 'idle in transaction (aborted)')
  AND state_change < now() - interval '5 minutes'
ORDER BY idle_time DESC;

-- Find transactions left open too long (xact_start is the transaction start time)
SELECT pid, usename, state, xact_start,
       now() - xact_start AS transaction_duration
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY transaction_duration DESC;
```

### Check for Prepared Transactions

```sql
-- Prepared transactions hold locks and other resources until resolved
SELECT * FROM pg_prepared_xacts;

-- If you find old prepared transactions, commit or roll back by gid
-- COMMIT PREPARED 'transaction_id';
-- ROLLBACK PREPARED 'transaction_id';
```
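If many stale prepared transactions pile up, you can generate the cleanup statements from the gids returned by `pg_prepared_xacts`. A minimal sketch (the helper name is ours, not a PostgreSQL API):

```python
def rollback_prepared_statements(gids):
    """Build ROLLBACK PREPARED statements for a list of gids
    (e.g. fetched from pg_prepared_xacts). Single quotes in a gid
    are doubled, per SQL string-literal escaping rules."""
    return ["ROLLBACK PREPARED '{}';".format(gid.replace("'", "''"))
            for gid in gids]
```

Review the generated statements before running them; a prepared transaction may still be wanted by a distributed-transaction coordinator.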

## Per-User Connection Limits

PostgreSQL allows setting connection limits per user:

```sql
-- Set user connection limit
ALTER USER appuser WITH CONNECTION LIMIT 20;

-- Note: connection limits are not enforced for superusers
ALTER USER admin WITH CONNECTION LIMIT 10;

-- Set per-database limit
ALTER DATABASE mydb WITH CONNECTION LIMIT 50;

-- View current limits
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolconnlimit >= 0;
```

## Long-Term Solution: Connection Pooling

Increasing max_connections is a band-aid. Connection pooling is the proper solution.

### Option 1: PgBouncer (Recommended)

PgBouncer sits between your application and PostgreSQL, managing a pool of connections:

```bash
# Install on Ubuntu/Debian
sudo apt install pgbouncer

# Install on RHEL/CentOS
sudo yum install pgbouncer
```

Configuration (/etc/pgbouncer/pgbouncer.ini):

```ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3

[users]
appuser = max_user_connections=50
```
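Choosing `default_pool_size` is workload-dependent; a widely cited starting heuristic (popularized by the PostgreSQL wiki and HikariCP's sizing guidance) is roughly twice the CPU core count plus the effective disk spindle count. A sketch of that rule of thumb, under the assumption that it is only a starting point to tune from:

```python
def suggested_pool_size(cpu_cores, effective_spindles=1):
    """Starting-point heuristic for a server-side pool size:
    (cores * 2) + effective spindle count. SSDs typically count
    as a small spindle number. Tune from here under real load."""
    return cpu_cores * 2 + effective_spindles
```

For example, an 8-core box with one SSD suggests a pool of around 17 backend connections, far below a typical `max_client_conn` of 500.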

Create userlist.txt:

```bash
# Generate a userlist entry from the stored password hash
sudo -u postgres psql -c "SELECT '\"appuser\" \"' || passwd || '\"' FROM pg_shadow WHERE usename = 'appuser';"

# Or write the entry manually
echo '"appuser" "SCRAM-SHA-256$..."' | sudo tee /etc/pgbouncer/userlist.txt
```

Start PgBouncer:

```bash
sudo systemctl enable pgbouncer
sudo systemctl start pgbouncer
```

Connect through PgBouncer:

```bash
psql -h localhost -p 6432 -U appuser -d mydb
```

Update your application's connection string:

```bash
postgresql://appuser:password@localhost:6432/mydb
```

### Option 2: Built-in Application Pooling

Many frameworks support connection pooling:

Node.js (node-postgres):

```javascript
const { Pool } = require('pg');

const pool = new Pool({
  host: 'localhost',
  database: 'mydb',
  user: 'appuser',
  password: 'password',
  max: 20,                       // Maximum connections in pool
  idleTimeoutMillis: 30000,      // Close idle connections after 30s
  connectionTimeoutMillis: 2000,
});
```

Python (psycopg2):

```python
import psycopg2
from psycopg2 import pool

connection_pool = psycopg2.pool.ThreadedConnectionPool(
    minconn=5,
    maxconn=20,
    host='localhost',
    database='mydb',
    user='appuser',
    password='password'
)
```
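Pools only prevent leaks if every borrowed connection is returned, including on error paths. One way to enforce that is a small context manager around the pool's `getconn`/`putconn` pair (the wrapper below is our own sketch, not part of psycopg2):

```python
from contextlib import contextmanager

@contextmanager
def pooled_connection(pool):
    """Borrow a connection from a psycopg2-style pool and always
    return it, even if the body raises. This prevents the slow
    connection leaks that eventually exhaust max_connections."""
    conn = pool.getconn()
    try:
        yield conn
    finally:
        pool.putconn(conn)
```

Usage: `with pooled_connection(connection_pool) as conn: ...` and the connection goes back to the pool when the block exits.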

Java (HikariCP):

```java
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost/mydb");
config.setUsername("appuser");
config.setPassword("password");
config.setMaximumPoolSize(20);
config.setMinimumIdle(5);
config.setIdleTimeout(30000);
HikariDataSource dataSource = new HikariDataSource(config);
```

## Monitoring Connections

### Set Up Connection Monitoring

```sql
-- Create a view for connection monitoring
CREATE VIEW connection_stats AS
SELECT datname,
       usename,
       application_name,
       client_addr,
       state,
       count(*) OVER (PARTITION BY datname) AS db_connections,
       count(*) OVER (PARTITION BY usename) AS user_connections,
       count(*) OVER () AS total_connections,
       (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max_connections
FROM pg_stat_activity;

-- Query it
SELECT * FROM connection_stats WHERE state = 'active';
```

### Create an Alert Function

```sql
CREATE OR REPLACE FUNCTION check_connection_limit()
RETURNS TABLE(alert_level text, current_count int, max_allowed int)
LANGUAGE plpgsql
AS $$
BEGIN
    RETURN QUERY
    SELECT
        CASE
            WHEN count(*) > (SELECT setting::int * 0.9 FROM pg_settings WHERE name = 'max_connections') THEN 'CRITICAL'
            WHEN count(*) > (SELECT setting::int * 0.75 FROM pg_settings WHERE name = 'max_connections') THEN 'WARNING'
            ELSE 'OK'
        END,
        count(*)::int,
        (SELECT setting::int FROM pg_settings WHERE name = 'max_connections')
    FROM pg_stat_activity;
END;
$$;

-- Run the check
SELECT * FROM check_connection_limit();
```
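If you poll from an external monitoring script instead of inside the database, the same 75%/90% thresholds can be mirrored in application code. A small sketch (the function name and default fractions are ours, chosen to match the SQL function above):

```python
def connection_alert_level(current, max_connections,
                           warn_frac=0.75, crit_frac=0.9):
    """Classify connection usage with the same thresholds as the
    check_connection_limit() SQL function: above 90% of
    max_connections is CRITICAL, above 75% is WARNING."""
    if current > max_connections * crit_frac:
        return 'CRITICAL'
    if current > max_connections * warn_frac:
        return 'WARNING'
    return 'OK'
```

Feed it the results of `SELECT count(*) FROM pg_stat_activity;` and `SHOW max_connections;` on each polling cycle.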

## Configuration Tuning

When increasing max_connections, adjust related settings:

```conf
# postgresql.conf

# Connection limit
max_connections = 200

# Reserved for superusers (default 3)
superuser_reserved_connections = 5

# work_mem is allocated per sort/hash operation, so many concurrent
# connections multiply total RAM usage
work_mem = 4MB                  # Lower if many connections
maintenance_work_mem = 64MB     # Used by VACUUM, CREATE INDEX, etc.

# Shared memory
shared_buffers = 256MB          # Commonly ~25% of RAM

# Connection keepalive settings
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 5
```

Calculate memory requirements:

```
Approximate peak memory usage:
≈ shared_buffers + (max_connections × work_mem × sort/hash operations per query)
  + per-backend overhead

Note: maintenance_work_mem is used by maintenance commands (VACUUM,
CREATE INDEX), not by every connection, so it is not multiplied by
max_connections.

Example with max_connections=200, work_mem=4MB, shared_buffers=256MB,
assuming one work_mem allocation per connection:
= 256MB + (200 × 4MB)
= 256MB + 800MB
≈ ~1GB, plus a few MB of per-backend overhead
```
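The same estimate can be scripted when comparing candidate settings. A back-of-envelope sketch; the one-allocation-per-connection default and the ~5MB per-backend overhead are rough assumptions, not measured values:

```python
def estimate_peak_memory_mb(max_connections, work_mem_mb, shared_buffers_mb,
                            work_mem_ops_per_conn=1,
                            per_backend_overhead_mb=5):
    """Rough peak-RAM estimate for a PostgreSQL instance.
    work_mem can be allocated several times per query (once per
    sort/hash node), so work_mem_ops_per_conn is workload-dependent;
    the per-backend overhead figure is a rough assumption."""
    return (shared_buffers_mb
            + max_connections * work_mem_mb * work_mem_ops_per_conn
            + max_connections * per_backend_overhead_mb)
```

With `max_connections=200`, `work_mem=4MB`, `shared_buffers=256MB` this gives about 2GB including overhead, which is why raising `max_connections` without pooling quickly becomes a memory problem.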

## Verification Steps

After implementing fixes:

```bash
# Check max_connections is applied
psql -U postgres -c "SHOW max_connections;"

# Monitor connections over time
watch -n 5 'psql -U postgres -c "SELECT count(*) FROM pg_stat_activity;"'

# Check PgBouncer is accepting connections
psql -h localhost -p 6432 -U appuser -d mydb -c "SELECT 1;"

# Verify the connection pool is capping per-user connections
psql -U postgres -c "SELECT usename, count(*) FROM pg_stat_activity GROUP BY usename;"
```

## Quick Reference

| Action | Command |
|--------|---------|
| Check max connections | `SHOW max_connections;` |
| Count current connections | `SELECT count(*) FROM pg_stat_activity;` |
| Find idle connections | `SELECT * FROM pg_stat_activity WHERE state='idle';` |
| Terminate connection | `SELECT pg_terminate_backend(pid);` |
| Set user limit | `ALTER USER name WITH CONNECTION LIMIT n;` |
| Reload config | `SELECT pg_reload_conf();` |
| Restart PostgreSQL | `sudo systemctl restart postgresql` |