Introduction
Cloudflare Workers run in a V8 isolate environment with strict resource limits. On the free plan, each worker request gets a 10ms CPU time budget. When a script exceeds this limit, the request is terminated and Cloudflare returns Error 1102 (Worker exceeded resource limits); an unhandled script error surfaces as the closely related Error 1101 (Worker threw a JavaScript exception). Even on paid plans, which have higher limits, inefficient code patterns can cause workers to hit CPU limits during traffic spikes, leading to intermittent failures.
Symptoms
- Cloudflare returns Error 1102 (or Error 1101) for worker requests
- Worker logs show `Uncaught Error: CPU time limit exceeded`
- Worker works for simple requests but fails for complex inputs
- Performance degrades under load as the CPU budget is consumed faster
- `wrangler tail` shows worker execution time approaching the limit
Common Causes
- Heavy computation in the worker (regex on large strings, JSON parsing of large bodies)
- Synchronous operations that block the event loop
- Large payload processing without streaming
- Inefficient regex with catastrophic backtracking
- Too many sequential subrequests without parallelization
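To make the backtracking cause concrete, here is a minimal sketch you can run anywhere: a nested quantifier like `(a+)+b` against a near-miss input forces the regex engine through an exponential number of split attempts. The 26-character input length is arbitrary; each extra character roughly doubles the time.

```javascript
// Catastrophic backtracking demo: for N 'a's and no 'b',
// /(a+)+b/ explores roughly 2^N ways to partition the run before failing.
const evil = /(a+)+b/

const input = 'a'.repeat(26) // no 'b', so the match is doomed to fail

const start = Date.now()
const result = evil.test(input)
const elapsed = Date.now() - start

console.log(result, `${elapsed}ms`) // false, and noticeably slow
```

On a typical machine this takes well over the entire 10ms request budget, even though the input is only 26 bytes.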
Step-by-Step Fix
1. Profile worker execution time:

   ```bash
   wrangler tail
   # Shows execution time for each request
   # Look for requests approaching the 10ms limit
   ```
2. Add timing instrumentation to the worker. Note that in Workers, `Date.now()` is frozen during CPU-bound execution and only advances after I/O, so this measures wall time between awaits rather than pure CPU time:

   ```javascript
   addEventListener('fetch', event => {
     const startTime = Date.now()
     event.respondWith(handleRequest(event.request, startTime))
   })

   async function handleRequest(request, startTime) {
     // Your logic here
     const elapsed = Date.now() - startTime
     console.log(`Worker executed in ${elapsed}ms`)
     // If approaching limit, log a warning
     if (elapsed > 8) {
       console.warn(`Approaching CPU limit: ${elapsed}ms`)
     }
     return new Response('OK')
   }
   ```
3. Optimize regex operations:

   ```javascript
   // BEFORE: Potentially catastrophic regex
   const match = largeBody.match(/(a+)+b/)

   // AFTER: Use more specific patterns
   const match = largeBody.match(/a{1,100}b/)

   // Or use string methods instead of regex where possible
   const index = largeBody.indexOf('target')
   ```
4. Use async/parallel operations instead of sequential:

   ```javascript
   // BEFORE: Sequential subrequests
   const res1 = await fetch(url1)
   const res2 = await fetch(url2) // Waits for res1 to complete

   // AFTER: Parallel subrequests
   const [res1, res2] = await Promise.all([
     fetch(url1),
     fetch(url2)
   ])
   ```
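One caveat with `Promise.all`: it rejects as soon as any subrequest fails and discards the rest. When partial results are acceptable, `Promise.allSettled` keeps whatever succeeded. A sketch using plain promises as stand-ins for the `fetch` calls, so it runs anywhere:

```javascript
// Promise.all would reject here; Promise.allSettled reports every
// outcome, so the successful subrequest is still usable.
async function fetchWithPartialResults() {
  const subrequests = [
    Promise.resolve('body-1'),                 // stand-in for fetch(url1)
    Promise.reject(new Error('upstream 500')), // stand-in for fetch(url2)
  ]
  const results = await Promise.allSettled(subrequests)
  return results
    .filter(r => r.status === 'fulfilled')
    .map(r => r.value)
}

fetchWithPartialResults().then(ok => console.log(ok)) // logs ['body-1']
```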
5. Stream large responses instead of buffering:

   ```javascript
   // BEFORE: Buffer entire response (expensive for large bodies)
   const body = await response.text()
   const processed = body.replace(/old/g, 'new')

   // AFTER: Stream through a TransformStream
   // (note: a match split across two chunks will be missed)
   const decoder = new TextDecoder()
   const encoder = new TextEncoder()
   const transformer = new TransformStream({
     transform(chunk, controller) {
       const text = decoder.decode(chunk, { stream: true })
       controller.enqueue(encoder.encode(text.replace(/old/g, 'new')))
     }
   })
   return new Response(response.body.pipeThrough(transformer))
   ```
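A streaming transform like the one above misses a match that straddles two chunks. A sketch that holds back a short tail of text between chunks closes that gap. It is correct for `'old'` → `'new'` because the replacement cannot combine with later text to re-form a match; a fully general solution needs more care than this.

```javascript
// Replace 'from' with 'to' across chunk boundaries by holding back
// the last (from.length - 1) characters until more text arrives.
function makeReplacer(from, to) {
  const decoder = new TextDecoder()
  const encoder = new TextEncoder()
  let tail = '' // text that might contain the start of a split match
  return new TransformStream({
    transform(chunk, controller) {
      const text = tail + decoder.decode(chunk, { stream: true })
      const replaced = text.replaceAll(from, to)
      const keep = Math.min(from.length - 1, replaced.length)
      tail = replaced.slice(replaced.length - keep)
      controller.enqueue(encoder.encode(replaced.slice(0, replaced.length - keep)))
    },
    flush(controller) {
      // Emit whatever was held back once the stream ends.
      controller.enqueue(encoder.encode(tail))
    }
  })
}

// Usage against a Response body:
// return new Response(response.body.pipeThrough(makeReplacer('old', 'new')))
```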
6. Upgrade to a paid plan if optimization is not sufficient:

   Paid plans offer higher CPU limits. Monitor usage in the Cloudflare dashboard and upgrade if your worker consistently needs more than 10ms after optimization.
Prevention
- Benchmark worker execution time with representative test data before deployment
- Use `wrangler dev` to profile worker performance locally
- Implement CPU time monitoring and alerting in worker code
- Avoid processing request/response bodies larger than 1MB in workers
- Use Cloudflare's built-in features (caching, transforms) instead of custom workers where possible
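The first prevention point can be scripted: run the handler repeatedly over representative payloads and report the worst-case time before deploying. A minimal sketch, where `benchmark` and `demoHandler` are illustrative names rather than part of any Cloudflare API:

```javascript
// Time an async handler over representative inputs; return the
// slowest single run so worst-case behavior is what gets judged.
async function benchmark(handler, inputs, runs = 20) {
  let worst = 0
  for (const input of inputs) {
    for (let i = 0; i < runs; i++) {
      const start = performance.now()
      await handler(input)
      worst = Math.max(worst, performance.now() - start)
    }
  }
  return worst
}

// Trivial stand-in for a worker's request handler.
async function demoHandler(body) {
  return body.toUpperCase()
}

benchmark(demoHandler, ['small', 'x'.repeat(100000)]).then(worst => {
  console.log(`worst case: ${worst.toFixed(2)}ms`)
  // Flag anything near the 10ms free-plan budget before it ships.
  if (worst > 8) console.warn('dangerously close to the CPU limit')
})
```

Wiring this into CI with payloads captured from production traffic catches regressions before they become Error 1102s.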