What's Actually Happening

A query has run longer than the server's `max_execution_time` limit, so ClickHouse aborts it with error code 159 (`TIMEOUT_EXCEEDED`) before it can finish.

The Error You'll See

```bash
$ clickhouse-client --query "SELECT * FROM large_table"

Code: 159. DB::Exception: Timeout exceeded: elapsed 300.000000 seconds
```

Why This Happens

  1. Query too complex (large joins, heavy aggregations, full scans)
  2. Insufficient CPU, memory, or disk resources on the server
  3. `max_execution_time` set too low for the workload
  4. Primary key or data-skipping indexes don't match the query's filters
  5. Dataset too large to scan within the limit

Step 1: Check Query Log

```sql
SELECT *
FROM system.query_log
WHERE exception_code = 159
ORDER BY event_time DESC
LIMIT 10;
```
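For a more targeted view, the query log also records per-query resource usage, which helps identify why a query timed out. A sketch (the `ExceptionWhileProcessing` filter excludes the duplicate `QueryStart` rows the log also contains):

```sql
-- What were the timed-out queries actually doing?
SELECT
    query,
    query_duration_ms,
    read_rows,
    formatReadableSize(memory_usage) AS mem
FROM system.query_log
WHERE exception_code = 159
  AND type = 'ExceptionWhileProcessing'
ORDER BY event_time DESC
LIMIT 10;
```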

Step 2: Check Settings

```sql
SELECT name, value FROM system.settings WHERE name LIKE '%timeout%';
```

Step 3: Increase Timeout

```sql
SET max_execution_time = 600;
```

To make the change permanent, set it in the user's settings profile:

```xml
<!-- In users.xml, under <profiles><default> -->
<max_execution_time>600</max_execution_time>
```
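If partial results are acceptable, ClickHouse can stop at the time limit instead of raising error 159. A sketch using the `timeout_overflow_mode` setting (verify the behavior on your version before relying on it):

```sql
-- Return whatever has been computed when the limit is hit, instead of throwing
SET max_execution_time = 600;
SET timeout_overflow_mode = 'break';
```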

Step 4: Optimize Query

```sql
-- Use proper indexes, read only needed columns, filter with WHERE, avoid SELECT *
SELECT col1, col2
FROM large_table
WHERE date = today()
LIMIT 1000;
```
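To confirm the primary key is actually pruning data, `EXPLAIN` can show which parts and granules a query would read (syntax as in recent ClickHouse versions; `large_table` is this article's placeholder):

```sql
EXPLAIN indexes = 1
SELECT col1, col2 FROM large_table WHERE date = today();
```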

Step 5: Check Resources

```bash
free -m
top -bn1 | head -20
```

Step 6: Use MergeTree Engine

```sql
CREATE TABLE my_table (date Date, id UInt64)
ENGINE = MergeTree()
ORDER BY id;
```
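Before migrating, you can check which engine an existing table uses (`my_table` is a placeholder):

```sql
SELECT engine, sorting_key
FROM system.tables
WHERE database = currentDatabase() AND name = 'my_table';
```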

Step 7: Add Partitioning

```sql
CREATE TABLE my_table (date Date, id UInt64)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(date)
ORDER BY id;
```
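After partitioning, `system.parts` shows how the data is laid out on disk; a query filtered to one month should then touch only that partition's parts. A sketch:

```sql
SELECT
    partition,
    count() AS parts,
    formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE table = 'my_table' AND active
GROUP BY partition
ORDER BY partition;
```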

Step 8: Check Cluster Status

```bash
clickhouse-client --query "SELECT * FROM system.clusters"
```
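On a replicated setup, a lagging or read-only replica can also slow distributed queries past the timeout. A sketch using `system.replicas`:

```sql
SELECT database, table, is_readonly, absolute_delay
FROM system.replicas
WHERE absolute_delay > 0 OR is_readonly;
```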

Step 9: Monitor Performance

```sql
SELECT * FROM system.metrics;
SELECT * FROM system.events;
```
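Both tables are large, so filtering to a few counters relevant to query load keeps the output readable (exact metric names can vary by version):

```sql
SELECT metric, value FROM system.metrics
WHERE metric IN ('Query', 'TCPConnection', 'MemoryTracking');

SELECT event, value FROM system.events
WHERE event IN ('Query', 'SelectQuery', 'FailedQuery');
```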

Step 10: Run Long Queries in the Background

clickhouse-client has no async flag. Instead, launch the query in the background with a known `query_id`, then watch its progress from another session via `system.processes`:

```bash
# Start the query in the background with a known query_id
clickhouse-client --query_id "my-query" --query "SELECT ..." &

# From another session, check elapsed time and rows read so far
clickhouse-client --query "SELECT query_id, elapsed, read_rows FROM system.processes WHERE query_id = 'my-query'"
```
Related Articles

  • [Fix ClickHouse Query Memory Limit Exceeded](/articles/fix-clickhouse-query-memory-limit-exceeded)
  • [Fix ClickHouse Connection Refused](/articles/fix-clickhouse-connection-refused)