Introduction
httpx is a modern HTTP client for Python with first-class async support, but when clients are not properly managed, connection pools fill up and new requests fail with ConnectTimeout or PoolTimeout. Unlike the synchronous requests library, httpx's AsyncClient requires explicit lifecycle management, typically via an async context manager. Creating a new client per request without closing it, or using a client after its async context has exited, causes resource leaks that consume file descriptors and eventually exhaust the connection pool.
Symptoms
```
httpx.PoolTimeout: Unable to acquire connection within 5.0 seconds
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1540
```

Or resource warnings:

```
ResourceWarning: unclosed <ssl.SSLSocket fd=15, family=2, type=1, proto=6>
ResourceWarning: unclosed transport <asyncio.sslproto._SSLProtocolTransport>
```

Or connection errors:

```
httpx.ConnectError: [Errno 104] Connection reset by peer
```

Common Causes
- Creating new client per request: Each client opens its own connection pool
- Client not closed: async client created without async context manager
- Connection limits too low: Default limits cannot handle concurrent requests
- Using sync code with async client: Mixing sync/async incorrectly
- Server closes idle connections: Keep-alive timeout shorter than client expects
- Response body not consumed: Response not fully read, connection not reusable
Step-by-Step Fix
Step 1: Use a shared client with an async context manager
```python
import asyncio

import httpx


class APIClient:
    def __init__(self, base_url: str, max_connections: int = 50):
        self.base_url = base_url
        self.limits = httpx.Limits(
            max_connections=max_connections,
            max_keepalive_connections=20,
            keepalive_expiry=30.0,  # Seconds to keep idle connections
        )
        self._client: httpx.AsyncClient | None = None

    async def __aenter__(self):
        self._client = httpx.AsyncClient(
            base_url=self.base_url,
            limits=self.limits,
            timeout=httpx.Timeout(10.0, connect=5.0),
        )
        return self

    async def __aexit__(self, *exc_info):
        await self._client.aclose()

    async def get(self, path: str, **kwargs):
        response = await self._client.get(path, **kwargs)
        response.raise_for_status()
        return response.json()


# Usage
async def main():
    async with APIClient("https://api.example.com") as client:
        data = await client.get("/users/123")
```
Step 2: Configure timeouts and connection limits
```python
limits = httpx.Limits(
    max_connections=100,           # Total connections
    max_keepalive_connections=25,  # Idle connections to keep
    keepalive_expiry=60.0,         # Expire idle connections after 60s
)

timeout = httpx.Timeout(
    timeout=30.0,  # General timeout
    connect=5.0,   # Connection establishment
    read=15.0,     # Reading response body
    write=10.0,    # Sending request body
    pool=5.0,      # Waiting for connection from pool
)

client = httpx.AsyncClient(limits=limits, timeout=timeout)
```
Step 3: Handle connection errors gracefully
```python
async def safe_request(client: httpx.AsyncClient, url: str, retries: int = 3):
    for attempt in range(retries):
        try:
            response = await client.get(url)
            response.raise_for_status()
            # Always consume the response body
            return response.json()
        except httpx.PoolTimeout:
            print(f"Pool timeout on attempt {attempt + 1}")
            await asyncio.sleep(2 ** attempt)
        except httpx.ConnectError as e:
            print(f"Connection error: {e}")
            await asyncio.sleep(2 ** attempt)
        except httpx.HTTPStatusError as e:
            print(f"HTTP error {e.response.status_code}")
            raise
    raise RuntimeError(f"Failed after {retries} retries")
```

Prevention
- Always use `async with httpx.AsyncClient()` for proper resource cleanup
- Configure `httpx.Limits` based on expected concurrent connections
- Set explicit timeouts for connect, read, write, and pool operations
- Consume response bodies completely to enable connection reuse
- Enable HTTP/2 with `httpx.AsyncClient(http2=True)` for multiplexed connections
- Monitor connection pool usage with `client._transport.pool` for debugging
- Use a single shared client across your application, not one per request