## What's Actually Happening
A query runs longer than the configured execution time limit (`max_execution_time`), so ClickHouse aborts it before it can finish.
## The Error You'll See
```bash
$ clickhouse-client --query "SELECT * FROM large_table"
Code: 159. DB::Exception: Timeout exceeded: elapsed 300.000000 seconds
```
## Why This Happens
1. The query is too complex (heavy joins, aggregations, or sorts)
2. The server is short on CPU or memory
3. The timeout is set too low for the workload
4. The table's `ORDER BY` key or data-skipping indexes don't match the query's filters, forcing full scans
5. The dataset is simply very large
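All of these factors collide with the same setting: `max_execution_time`. If you query over the HTTP interface (port 8123 by default), the limit can be raised for a single query by passing it as a URL parameter. A minimal sketch; the host, port, and query here are placeholder examples:

```python
from urllib.parse import urlencode

def build_http_query_url(query: str, host: str = "localhost",
                         port: int = 8123, **settings) -> str:
    """Build a ClickHouse HTTP-interface URL.

    Settings passed as URL parameters (e.g. max_execution_time)
    override the server defaults for this query only.
    """
    params = {"query": query, **settings}
    return f"http://{host}:{port}/?{urlencode(params)}"

url = build_http_query_url("SELECT count() FROM large_table",
                           max_execution_time=600)
```

Per-query overrides like this keep the cluster-wide default conservative while letting known-heavy jobs run longer.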
## Step 1: Check Query Log
```sql
SELECT *
FROM system.query_log
WHERE exception_code = 159
ORDER BY event_time DESC
LIMIT 10;
```

## Step 2: Check Settings
```sql
SELECT name, value FROM system.settings WHERE name LIKE '%timeout%';
```

## Step 3: Increase Timeout
For the current session:

```sql
SET max_execution_time = 600;
```

Or persistently, in the user profile (`users.xml`, not `config.xml` — `max_execution_time` is a user-level setting):

```xml
<profiles>
    <default>
        <max_execution_time>600</max_execution_time>
    </default>
</profiles>
```

## Step 4: Optimize Query
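The anti-patterns this step targets can also be caught mechanically before a query ever runs. A rough lint sketch; the heuristics are simplistic string checks, purely illustrative:

```python
def lint_query(sql: str) -> list[str]:
    """Flag common patterns that make ClickHouse queries slow.

    Purely heuristic: a real check should inspect EXPLAIN output.
    """
    upper = f" {sql.upper()} "
    warnings = []
    if "SELECT *" in upper:
        warnings.append("SELECT * reads every column; list only what you need")
    if " WHERE " not in upper:
        warnings.append("no WHERE clause; the query scans the whole table")
    if " LIMIT " not in upper:
        warnings.append("no LIMIT; consider capping the result set")
    return warnings

print(lint_query("SELECT * FROM large_table"))
```

Running this in CI or a query gateway catches the worst offenders cheaply.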
```sql
-- Filter on the table's ORDER BY / primary key columns where possible
-- Prefer explicit columns over SELECT *
-- Filter with WHERE so ClickHouse can prune data
-- Cap the result set with LIMIT
SELECT col1, col2
FROM large_table
WHERE date = today()
LIMIT 1000;
```

## Step 5: Check Resources
```bash
free -m
top -bn1 | head -20
```

## Step 6: Use MergeTree Engine
```sql
CREATE TABLE my_table (date Date, id UInt64)
ENGINE = MergeTree()
ORDER BY id;
```

## Step 7: Add Partitioning
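The DDL below partitions by `toYYYYMM(date)`. When a maintenance script later needs to prune or drop partitions, the same key can be computed client-side; a small helper mirroring that function:

```python
from datetime import date

def to_yyyymm(d: date) -> int:
    """Compute the partition key produced by ClickHouse's toYYYYMM(date)."""
    return d.year * 100 + d.month

# Rows from March 2024 land in partition 202403.
assert to_yyyymm(date(2024, 3, 5)) == 202403
```

For example, a cleanup job can build `ALTER TABLE my_table DROP PARTITION 202403` from this value instead of hard-coding partition names.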
```sql
CREATE TABLE my_table (date Date, id UInt64)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(date)
ORDER BY id;
```

## Step 8: Check Cluster Status
```bash
clickhouse-client --query "SELECT * FROM system.clusters"
```

## Step 9: Monitor Performance
```sql
SELECT * FROM system.metrics;
SELECT * FROM system.events;
```

## Step 10: Use Async Mode
Launch the query in the background with a known `--query_id`, then watch it via `system.processes` (which lists currently running queries):

```bash
clickhouse-client --query_id "my-query" --query "SELECT ..." &
clickhouse-client --query "SELECT query_id, elapsed FROM system.processes WHERE query_id = 'my-query'"
```

## Related Issues
- [Fix ClickHouse Query Memory Limit Exceeded](/articles/fix-clickhouse-query-memory-limit-exceeded)
- [Fix ClickHouse Connection Refused](/articles/fix-clickhouse-connection-refused)