Introduction

urllib3's MaxRetryError — which the Python requests library surfaces as requests.exceptions.ConnectionError (or RetryError) — occurs when every retry attempt for a request fails. It commonly appears in high-concurrency applications that create many sessions without configuring pool sizes, or when the target server responds slowly and holds connections open longer than normal. The default connection pool size of 10 per host is easily exceeded in threaded or async environments, causing requests to queue up, open throwaway connections instead of reusing pooled ones, and eventually time out.
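
The defaults mentioned above can be checked directly: requests exposes them as module-level constants in requests.adapters. A quick sketch, assuming a standard requests install:

```python
# Inspect the library defaults behind the behavior described above.
# requests.adapters defines these as module-level constants.
from requests.adapters import DEFAULT_POOLSIZE, DEFAULT_RETRIES

print(DEFAULT_POOLSIZE)  # 10 - connections per pool (and number of pools)
print(DEFAULT_RETRIES)   # 0 - no retries at the adapter level by default
```

So out of the box there are no automatic retries at the transport layer, and at most 10 pooled connections per host.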

Symptoms

```bash
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.example.com', port=443): Max retries exceeded with url: /v1/data (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9a2c3d4b80>: Failed to establish a new connection: [Errno 11] Resource temporarily unavailable'))
```

Or connection pool warnings:

```bash
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f9a2c3d4b80>: Failed to establish a new connection: Connection refused
```

High memory usage from leaked connections:

```python
import psutil

process = psutil.Process()
print(process.memory_info().rss / 1024 / 1024)
# Memory grows from 50MB to 500MB+ as connections accumulate
```

Common Causes

  • Creating a new Session per request: Each Session opens its own connection pool
  • Pool size too small for concurrency: Default pool_maxsize=10 per host exceeded by the thread pool
  • No timeout configured: Connections held indefinitely on slow servers
  • Not closing response bodies: Response content not read, connection not returned to pool
  • DNS resolution failure: Hostname cannot be resolved, all retries exhausted
  • Server connection refused: Target service down or firewall blocking

Step-by-Step Fix

Step 1: Configure session with proper pool size and retries

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def create_session():
    session = requests.Session()

    # Configure retry strategy with exponential backoff
    retry_strategy = Retry(
        total=3,
        backoff_factor=1,
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["GET", "POST", "PUT", "DELETE"],
    )

    # Configure adapter with larger pool and retries
    adapter = HTTPAdapter(
        max_retries=retry_strategy,
        pool_connections=50,
        pool_maxsize=50,
    )

    session.mount("https://", adapter)
    session.mount("http://", adapter)

    return session
```
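To see the retry strategy work without hitting a real API, the sketch below stands up a throwaway local HTTP server that returns 503 twice before succeeding. FlakyHandler and the port-0 server are test scaffolding, not part of the fix, and allowed_methods assumes urllib3 >= 1.26:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

class FlakyHandler(BaseHTTPRequestHandler):
    """Test scaffolding: respond 503 twice, then 200."""
    hits = 0

    def do_GET(self):
        type(self).hits += 1
        if type(self).hits < 3:
            self.send_response(503)
            self.send_header("Content-Length", "0")
            self.end_headers()
        else:
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

server = HTTPServer(("127.0.0.1", 0), FlakyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

session = requests.Session()
retry = Retry(total=3, backoff_factor=0, status_forcelist=[503],
              allowed_methods=["GET"])
session.mount("http://", HTTPAdapter(max_retries=retry))

# The two 503s are retried transparently; the caller only sees the 200.
resp = session.get(f"http://127.0.0.1:{server.server_port}/", timeout=5)
print(resp.status_code, FlakyHandler.hits)  # 200 3

session.close()
server.shutdown()
```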

Step 2: Always close response bodies

```python
def safe_request(session, url, method="GET", **kwargs):
    """Ensure the response body is consumed and the connection released."""
    try:
        response = session.request(method, url, timeout=10, **kwargs)
        # Check the status before parsing: an error body may not be JSON
        response.raise_for_status()
        # Reading the body (here via .json()) returns the connection to the pool
        return response.json()
    except requests.exceptions.ConnectionError as e:
        # Connection pool issues - log and raise
        print(f"Connection error: {e}")
        raise
    except requests.exceptions.Timeout:
        print(f"Request timed out: {url}")
        raise
    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}")
        raise
```
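
One case the helper above does not cover is stream=True, where requests does not read the body for you. A sketch (stream_download is a hypothetical helper) that uses the response as a context manager so the connection is released either way:

```python
import requests

def stream_download(session, url, chunk_size=8192):
    """Hypothetical helper: stream a large body in chunks. The with-block
    guarantees the connection is released even if iteration fails."""
    chunks = []
    # Tuple timeout: 5s to connect, 30s between read chunks
    with session.get(url, stream=True, timeout=(5, 30)) as response:
        response.raise_for_status()
        for chunk in response.iter_content(chunk_size=chunk_size):
            chunks.append(chunk)
    return b"".join(chunks)
```

Without the with-block (or an explicit response.close()), a streamed connection that is never fully read stays checked out of the pool.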

Step 3: Use context manager for cleanup

```python
def fetch_data(urls, max_workers=10):
    """Fetch multiple URLs with proper connection management."""
    from concurrent.futures import ThreadPoolExecutor, as_completed

    session = create_session()
    results = {}

    def fetch_one(url):
        with session.get(url, timeout=10) as response:
            response.raise_for_status()
            return url, response.json()

    with session, ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {executor.submit(fetch_one, url): url for url in urls}
        for future in as_completed(futures):
            try:
                url, data = future.result()
                results[url] = data
            except Exception as e:
                print(f"Failed to fetch {futures[future]}: {e}")

    return results
```

Prevention

  • Always use requests.Session() instead of module-level requests.get()
  • Configure pool_maxsize to exceed your maximum concurrent connections
  • Set explicit timeouts on every request (connect + read timeout tuple)
  • Always consume response bodies with .json() or .content to release connections
  • Use session.close() or context managers when sessions go out of scope
  • Monitor connection pool stats with adapter.poolmanager.pools for debugging
  • Add circuit breaker logic (e.g., pybreaker) to stop hammering failing services
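
The pool-inspection bullet relies on urllib3 internals: adapter.poolmanager.pools is an internal cache whose shape may change between versions, so treat this as a debugging aid only. A sketch:

```python
import requests

session = requests.Session()
adapter = session.get_adapter("https://example.com")

# poolmanager.pools is urllib3's internal per-host pool cache
# (a RecentlyUsedContainer keyed by scheme/host/port).
# Internal API: fine for ad-hoc debugging, not for production logic.
pools = adapter.poolmanager.pools
print(f"active host pools: {len(pools)}")  # 0 before any request is made

session.close()
```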