Introduction

The HikariCP error "Connection is not available, request timed out after 30000ms" occurs when every connection in the pool is in use and no connection becomes free (or can be created) within the configured connection-timeout. Unlike some pool errors that are transient, this error indicates a systemic problem: either the pool is too small for the workload, connections are being held too long (leaked), or the database itself has reached its maximum connection limit. In production, this error cascades quickly -- every thread waiting for a connection adds to the queue, and soon the entire application becomes unresponsive.

Symptoms

```text
java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
    at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:696)
    at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:197)
    at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
```

HikariCP warning logs:

```text
WARN  com.zaxxer.hikari.pool.PoolBase - HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@abc123
(Connection.setNetworkTimeout cannot be called on a closed connection)
```

Or the leak detection fires:

```text
WARN  com.zaxxer.hikari.pool.ProxyLeakTask - Connection leak detection triggered for com.zaxxer.hikari.pool.HikariProxyConnection@def456 on thread http-nio-8080-exec-5, stack trace follows
java.lang.Exception: Apparent connection leak detected
    at com.example.service.UserService.processOrders(UserService.java:45)
```

Common Causes

  • Pool size too small: Default maximumPoolSize=10 may be insufficient for high-concurrency applications
  • Connection leak: Code obtains a connection but never calls close(), permanently consuming pool slots
  • Long-running transactions: A single transaction holding a connection for minutes while doing external API calls
  • Database max_connections limit: HikariCP tries to create more connections than the database allows
  • Slow database queries: Queries that take seconds block connections, reducing effective pool capacity
  • Connection validation failures: Stale connections in the pool fail validation, reducing available connections
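
The dynamic shared by most of these causes can be sketched with a toy model in plain `java.util.concurrent` (no HikariCP involved; the class and numbers are illustrative): semaphore permits stand in for pool slots, and a leak is simply a permit that is never released.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PoolExhaustionDemo {

    // Borrow every slot from a "pool" and never return them, then see
    // whether one more borrower can get a slot within timeoutMs.
    static boolean tryBorrowAfterLeaks(int poolSize, long timeoutMs) throws InterruptedException {
        Semaphore pool = new Semaphore(poolSize);   // permits stand in for pool slots
        for (int i = 0; i < poolSize; i++) {
            pool.acquire();                         // leaked: close() is never called
        }
        // The next caller waits up to timeoutMs -- the analogue of connection-timeout
        return pool.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // prints "acquired: false" -- a fully leaked pool never recovers on its own
        System.out.println("acquired: " + tryBorrowAfterLeaks(2, 100));
    }
}
```

Once every slot is leaked or held, each later request waits the full timeout and then fails, which is exactly the cascade described above.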

Step-by-Step Fix

Step 1: Configure pool sizing correctly

```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 20
      minimum-idle: 10
      connection-timeout: 10000    # 10 seconds (fail fast)
      idle-timeout: 300000        # 5 minutes
      max-lifetime: 1200000       # 20 minutes
      leak-detection-threshold: 30000  # 30 seconds
```

A practical sizing formula for maximum-pool-size is an application of Little's law: (requests_per_second * average_query_time_ms) / 1000 + spare_connections. The first term is the number of connections busy at any instant; the spare connections absorb bursts. For most web applications, 20-50 is sufficient.
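
Worked through with hypothetical numbers (the workload figures below are illustrative, not from measurements):

```java
public class PoolSizing {

    // Sizing formula from above:
    // maximum-pool-size = (requests_per_second * avg_query_time_ms) / 1000 + spare
    static int poolSize(double requestsPerSecond, double avgQueryTimeMs, int spareConnections) {
        return (int) Math.ceil(requestsPerSecond * avgQueryTimeMs / 1000.0) + spareConnections;
    }

    public static void main(String[] args) {
        // Hypothetical workload: 200 req/s, 50 ms average query time, 5 spare
        System.out.println(poolSize(200, 50, 5));  // prints 15
    }
}
```

At 200 req/s and 50 ms per query, only about 10 connections are busy at once; doubling query time to 100 ms doubles the requirement, which is why slow queries exhaust a pool as surely as leaks do.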

Step 2: Enable and act on leak detection

```yaml
spring:
  datasource:
    hikari:
      leak-detection-threshold: 30000  # Warn after 30 seconds
```

When the warning fires, the stack trace shows exactly where the connection was obtained but not returned:

```text
java.lang.Exception: Apparent connection leak detected
    at com.example.repository.OrderRepository.findByUserId(OrderRepository.java:32)
    at com.example.service.OrderService.getOrders(OrderService.java:56)
```

Step 3: Fix connection leaks with try-with-resources

```java
// WRONG - connection not closed on exception
public List<Order> getOrders(Long userId) throws SQLException {
    Connection conn = dataSource.getConnection();
    PreparedStatement stmt = conn.prepareStatement("SELECT * FROM orders WHERE user_id = ?");
    stmt.setLong(1, userId);
    ResultSet rs = stmt.executeQuery();  // If this throws, conn is never closed
    // ...
}

// CORRECT - try-with-resources ensures cleanup
public List<Order> getOrders(Long userId) throws SQLException {
    String sql = "SELECT * FROM orders WHERE user_id = ?";
    try (Connection conn = dataSource.getConnection();
         PreparedStatement stmt = conn.prepareStatement(sql)) {
        stmt.setLong(1, userId);
        try (ResultSet rs = stmt.executeQuery()) {
            List<Order> orders = new ArrayList<>();
            while (rs.next()) {
                orders.add(mapOrder(rs));
            }
            return orders;
        }
    }
}
```

With Spring's JdbcTemplate or @Transactional, connections are automatically managed.
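
For comparison, the same lookup via JdbcTemplate -- a sketch assuming a Spring context supplies the template, and an `Order` class with an `(id, userId)` constructor (both hypothetical here). The template borrows the connection, runs the query, and returns the connection to the pool even when an exception is thrown, so this style cannot leak:

```java
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class OrderRepository {

    private final JdbcTemplate jdbcTemplate;

    public OrderRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<Order> getOrders(Long userId) {
        // Connection acquisition, row mapping, and release are all handled
        // inside query(); there is no close() for application code to forget.
        return jdbcTemplate.query(
                "SELECT * FROM orders WHERE user_id = ?",
                (rs, rowNum) -> new Order(rs.getLong("id"), rs.getLong("user_id")),
                userId);
    }
}
```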

Step 4: Monitor pool metrics

```java
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class HikariMetrics {

    private static final Logger log = LoggerFactory.getLogger(HikariMetrics.class);

    private final HikariDataSource dataSource;

    public HikariMetrics(HikariDataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Scheduled(fixedRate = 60000)
    public void logPoolStats() {
        HikariPoolMXBean poolBean = dataSource.getHikariPoolMXBean();
        log.info("HikariCP stats - Active: {}, Idle: {}, Waiting: {}, Total: {}",
                poolBean.getActiveConnections(),
                poolBean.getIdleConnections(),
                poolBean.getThreadsAwaitingConnection(),
                poolBean.getTotalConnections());
    }
}
```
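
If the service is a Spring Boot application with Actuator and Micrometer on the classpath, scheduled logging may be unnecessary: Hikari's pool statistics are published automatically as `hikaricp.connections.*` gauges (active, idle, pending) once the metrics endpoint is exposed. A minimal configuration sketch:

```yaml
management:
  endpoints:
    web:
      exposure:
        include: health,metrics
```

Here `hikaricp.connections.pending` mirrors `ThreadsAwaitingConnection`; alerting when it stays above zero catches pool pressure early.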

Prevention

  • Set connection-timeout to 10 seconds (not 30) to fail fast and avoid cascading timeouts
  • Enable leak-detection-threshold in staging environments to catch leaks before production
  • Never hold connections during external API calls or file processing
  • Use @Transactional for database operations instead of manual connection management
  • Monitor ThreadsAwaitingConnection metric -- any value > 0 indicates pool pressure
  • Set database max_connections to at least 1.5x the sum of all HikariCP maximum-pool-size values across services