Introduction
Dispatchers.IO uses a shared thread pool with a default limit of 64 threads (or the number of CPU cores, whichever is larger). When more than 64 coroutines simultaneously perform blocking I/O operations, new coroutines must wait for a thread to become available. This causes cascading delays, request timeouts, and degraded app performance.
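The cap is easy to observe on a plain JVM. A minimal sketch (assuming only kotlinx-coroutines on the classpath; the 50 ms sleep stands in for blocking I/O, and `countIoWorkerThreads` is an illustrative name, not a library function) that records how many distinct worker threads serve 100 blocking coroutines:

```kotlin
import kotlinx.coroutines.*
import java.util.Collections

// Launches `jobs` blocking coroutines on Dispatchers.IO and returns the
// number of distinct worker threads that actually ran them
fun countIoWorkerThreads(jobs: Int): Int = runBlocking {
    val names = Collections.synchronizedSet(mutableSetOf<String>())
    coroutineScope {
        repeat(jobs) {
            launch(Dispatchers.IO) {
                names += Thread.currentThread().name
                Thread.sleep(50) // Hold the thread, as blocking I/O would
            }
        }
    }
    names.size
}
```

With the default configuration the count never exceeds max(64, CPU cores), no matter how many coroutines are launched.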
Symptoms
- Network requests queue up and time out
- Database queries take much longer than expected
- App UI stalls while main-thread code waits on results from queued IO work
- Thread dump shows many threads in `WAITING` state
- Works with few concurrent requests but degrades under load
Example scenario:
```kotlin
// 100 concurrent file reads will starve the IO dispatcher
coroutineScope {
    repeat(100) { i ->
        launch(Dispatchers.IO) {
            // Only 64 of these can run simultaneously
            val content = File("data_$i.txt").readText()
            process(content)
        }
    }
}
// The remaining 36 coroutines queue until a thread frees up
```
Common Causes
- Launching too many concurrent IO coroutines
- Using `Dispatchers.IO` for CPU-intensive work
- Blocking operations (synchronous HTTP clients, file I/O) holding threads
- Not limiting concurrency in parallel operations
- Nested IO dispatchers multiplying thread usage
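To make the second cause concrete, here is a sketch (`hashOnIo` and `hashOnDefault` are illustrative names, not from the original) of the same CPU-bound hash placed on the wrong and the right dispatcher:

```kotlin
import kotlinx.coroutines.*
import java.security.MessageDigest

// Anti-pattern: CPU-bound hashing ties up an IO thread that blocking I/O needs
suspend fun hashOnIo(data: ByteArray): ByteArray = withContext(Dispatchers.IO) {
    MessageDigest.getInstance("SHA-256").digest(data)
}

// Better: Dispatchers.Default is sized to the CPU core count for exactly this work
suspend fun hashOnDefault(data: ByteArray): ByteArray = withContext(Dispatchers.Default) {
    MessageDigest.getInstance("SHA-256").digest(data)
}
```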
Step-by-Step Fix
1. Limit concurrency with a Semaphore:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit
import java.io.File

suspend fun processManyFiles(files: List<File>) = coroutineScope {
    val semaphore = Semaphore(permits = 10) // Max 10 concurrent reads
    files.map { file ->
        async(Dispatchers.IO) {
            semaphore.withPermit {
                val content = file.readText()
                process(content)
            }
        }
    }.awaitAll()
}
```
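Since kotlinx.coroutines 1.6, the same bound can be expressed without an explicit semaphore via `limitedParallelism`, which creates a constrained view over `Dispatchers.IO`. A sketch under that assumption (the `process` callback parameter is a stand-in added for illustration):

```kotlin
import kotlinx.coroutines.*
import java.io.File

// A view of Dispatchers.IO that runs at most 10 tasks in parallel;
// coroutines beyond the limit queue without consuming IO threads
@OptIn(ExperimentalCoroutinesApi::class)
val limitedIo = Dispatchers.IO.limitedParallelism(10)

suspend fun processManyFilesLimited(files: List<File>, process: (String) -> Unit) = coroutineScope {
    files.map { file ->
        async(limitedIo) { process(file.readText()) }
    }.awaitAll()
}
```

Unlike replacing `Dispatchers.IO` everywhere, the view only bounds the coroutines dispatched to it; other IO users keep the full pool.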
2. Use chunked processing for bounded parallelism:

```kotlin
import kotlinx.coroutines.*
import java.io.File

suspend fun processManyFiles(files: List<File>) = coroutineScope {
    files.chunked(10).forEach { chunk ->
        // Process 10 files at a time
        chunk.map { file ->
            async(Dispatchers.IO) {
                file.readText()
            }
        }.awaitAll()
    }
}
```
3. Use Flow for controlled concurrency:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*
import java.io.File

@OptIn(ExperimentalCoroutinesApi::class)
suspend fun processManyFiles(files: List<File>) {
    files.asFlow()
        .flatMapMerge(concurrency = 10) { file ->   // At most 10 reads in flight
            flow { emit(file.readText()) }.flowOn(Dispatchers.IO)
        }
        .collect { content -> process(content) }
}
```
4. Convert blocking code to suspending:

```kotlin
import kotlinx.coroutines.*
import okhttp3.*
import java.io.File
import java.io.IOException
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException

// Blocking: holds a thread for the whole read
fun readFileSync(path: String): String {
    return File(path).readText()
}

// Suspending wrapper: still occupies an IO thread while reading,
// but the pool is bounded and the caller can suspend
suspend fun readAsync(path: String): String = withContext(Dispatchers.IO) {
    File(path).readText()
}

// For truly async I/O, bridge a callback API such as OkHttp's enqueue:
suspend fun fetchAsync(client: OkHttpClient, url: String): String =
    suspendCancellableCoroutine { cont ->
        val call = client.newCall(Request.Builder().url(url).build())
        cont.invokeOnCancellation { call.cancel() } // Propagate coroutine cancellation
        call.enqueue(object : Callback {
            override fun onResponse(call: Call, response: Response) {
                response.use { cont.resume(it.body?.string() ?: "") }
            }
            override fun onFailure(call: Call, e: IOException) {
                cont.resumeWithException(e)
            }
        })
    }
```
5. Monitor thread pool usage:

```kotlin
// Dispatchers.IO does not expose its executor (casting it to
// ExecutorCoroutineDispatcher fails), so count worker threads by the
// naming pattern they share with Dispatchers.Default instead
fun logThreadPoolStatus() {
    val workerThreads = Thread.getAllStackTraces().keys.count {
        it.name.startsWith("DefaultDispatcher-worker-")
    }
    Log.d("Threading", "Active dispatcher worker threads: $workerThreads (IO limit: 64)")
}
```
Prevention
- Limit concurrency with `Semaphore` for parallel I/O operations
- Use `flowOn` and `buffer` for controlled stream processing
- Chunk large collections before processing in parallel
- Prefer async APIs over blocking APIs where available
- Monitor thread pool utilization in production
- Use `Dispatchers.IO` only for blocking I/O, not CPU work (use `Dispatchers.Default`)
- Configure OkHttp connection pool size to match expected concurrency
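The last point can be sketched as a client configuration (assumes OkHttp 4.x; the pool sizes are illustrative, not recommendations):

```kotlin
import okhttp3.ConnectionPool
import okhttp3.Dispatcher
import okhttp3.OkHttpClient
import java.util.concurrent.TimeUnit

// Keep OkHttp's own concurrency limits in line with the coroutine-side limits
val client: OkHttpClient = OkHttpClient.Builder()
    .connectionPool(ConnectionPool(maxIdleConnections = 20, keepAliveDuration = 5, timeUnit = TimeUnit.MINUTES))
    .dispatcher(Dispatcher().apply {
        maxRequests = 64        // Total in-flight async calls
        maxRequestsPerHost = 10 // Per-host cap, matching the semaphore limit above
    })
    .build()
```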