Introduction

Dispatchers.IO is backed by a shared thread pool whose parallelism defaults to 64 threads (or the number of CPU cores, whichever is larger; tunable via the `kotlinx.coroutines.io.parallelism` system property). When more than 64 coroutines perform blocking I/O simultaneously, the rest queue until a thread frees up. If every thread is blocked on a long-running operation (database queries, network calls, file I/O), the entire application appears frozen because no threads are available to make progress.
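
To make the numbers concrete, here is a minimal, runnable sketch in which a 2-thread pool stands in for the 64-thread IO pool (the pool size and the 100 ms sleep are illustrative stand-ins, not part of the real dispatcher):

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

// Illustrative stand-in: a 2-thread pool plays the role of the 64-thread IO pool.
// Four coroutines each block a thread for 100 ms; only two can run at a time,
// so the batch completes in two waves (~200 ms) instead of one (~100 ms).
fun measureStarvation(): Long {
    val pool = Executors.newFixedThreadPool(2).asCoroutineDispatcher()
    val start = System.nanoTime()
    runBlocking {
        List(4) {
            launch(pool) { Thread.sleep(100) } // blocking call holds the thread
        }.joinAll()
    }
    pool.close()
    return (System.nanoTime() - start) / 1_000_000 // elapsed milliseconds
}

fun main() {
    println("4 blocking tasks on 2 threads took ${measureStarvation()} ms")
}
```

Scale the same arithmetic up to 64 threads and thousands of queued coroutines, and the wait for a free thread becomes an apparent freeze.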

Symptoms

  • Coroutines using Dispatchers.IO hang indefinitely
  • Application works with few concurrent requests but fails under load
  • Thread dump shows all IO threads blocked
  • withContext(Dispatchers.IO) does not return
  • CPU usage low but no work is being done

Debug thread count:

```kotlin
// Check current IO dispatcher thread count
fun printIOThreadInfo() {
    val threads = Thread.getAllStackTraces().keys.filter {
        it.name.startsWith("DefaultDispatcher-worker")
    }
    println("Active IO threads: ${threads.size}")
    threads.forEach { t ->
        println("  ${t.name}: ${t.state} - ${t.stackTrace.firstOrNull()?.className}")
    }
}
```

Common Causes

  • More than 64 concurrent blocking I/O operations
  • Blocking operations that do not release threads (infinite waits)
  • Database connection pool smaller than coroutine count
  • Long-running file operations occupying IO threads
  • Mixing blocking CPU-intensive work with IO dispatcher
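
For the last cause, the usual fix is to keep CPU-bound work on Dispatchers.Default so it cannot occupy threads that blocking I/O needs. A minimal sketch (`hashRecords` is a hypothetical name, not from the article):

```kotlin
import kotlinx.coroutines.*
import java.security.MessageDigest

// Hypothetical example: CPU-bound hashing belongs on Dispatchers.Default,
// not Dispatchers.IO, so it does not crowd out blocking I/O operations.
suspend fun hashRecords(records: List<ByteArray>): List<String> =
    withContext(Dispatchers.Default) { // CPU-bound -> Default, not IO
        records.map { bytes ->
            MessageDigest.getInstance("SHA-256")
                .digest(bytes)
                .joinToString("") { "%02x".format(it) }
        }
    }

fun main() = runBlocking {
    println(hashRecords(listOf("a".toByteArray())).first())
}
```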

Step-by-Step Fix

1. **Limit concurrency to match the IO thread pool**:

```kotlin
// WRONG - unlimited concurrency, can exhaust IO threads
suspend fun processAllUrls(urls: List<String>): List<Result> = coroutineScope {
    urls.map { url ->
        async(Dispatchers.IO) {
            fetchUrl(url) // 1000 concurrent fetches, only 64 threads!
        }
    }.awaitAll()
}

// CORRECT - limit concurrent operations
suspend fun processAllUrls(urls: List<String>): List<Result> = coroutineScope {
    urls.chunked(50).flatMap { chunk ->
        chunk.map { url ->
            async(Dispatchers.IO) { fetchUrl(url) }
        }.awaitAll()
    }
}

// Or use a semaphore for fine-grained control
suspend fun processWithSemaphore(urls: List<String>): List<Result> {
    val semaphore = Semaphore(50) // Max 50 concurrent operations
    return coroutineScope {
        urls.map { url ->
            async(Dispatchers.IO) {
                semaphore.acquire()
                try {
                    fetchUrl(url)
                } finally {
                    semaphore.release()
                }
            }
        }.awaitAll()
    }
}
```
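
A Flow-based alternative is also possible: `flatMapMerge` caps how many inner flows are collected at once (note it is still marked `@FlowPreview`). This is a sketch; `fetchUrl` here is a local stand-in for real blocking I/O:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Sketch: flatMapMerge limits concurrency at the flow level, so at most
// `concurrency` fetches run on Dispatchers.IO at any moment.
@OptIn(FlowPreview::class)
suspend fun processAsFlow(urls: List<String>): List<String> =
    urls.asFlow()
        .flatMapMerge(concurrency = 50) { url ->
            flow { emit(fetchUrl(url)) }.flowOn(Dispatchers.IO)
        }
        .toList()

fun fetchUrl(url: String): String = "response:$url" // stand-in for real I/O

fun main() = runBlocking {
    println(processAsFlow(listOf("a", "b", "c")))
}
```

Unlike the chunked version, `flatMapMerge` does not preserve input order; results arrive as each fetch completes.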

2. **Use proper blocking coroutine wrappers**:

```kotlin
// For blocking operations that hold threads for a long time
suspend fun longRunningIO() = withContext(Dispatchers.IO.limitedParallelism(10)) {
    // limitedParallelism restricts this block to max 10 threads
    // This prevents it from starving other IO operations
    performLongRunningDatabaseQuery()
}

// kotlinx.coroutines 1.6+ - limit parallelism of the IO dispatcher
val limitedIO = Dispatchers.IO.limitedParallelism(32)

suspend fun balancedWork() {
    coroutineScope {
        // Use limited IO for heavy operations
        launch(limitedIO) { heavyIOOperation1() }
        launch(limitedIO) { heavyIOOperation2() }
        // Other coroutines can still use the full IO dispatcher
        launch(Dispatchers.IO) { lightIOOperation() }
    }
}
```
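
It helps to know that `limitedParallelism` returns a *view* of the dispatcher: it creates no new threads, it only caps how many shared workers the view may use at once. A runnable sketch (the counter example is illustrative) showing that a view limited to 1 serializes all work sent through it:

```kotlin
import kotlinx.coroutines.*

// Sketch: a parallelism-1 view runs tasks one at a time, so this unguarded
// counter never races even though the tasks may run on different IO workers.
@OptIn(ExperimentalCoroutinesApi::class)
suspend fun countWithSingleLane(times: Int): Int {
    val lane = Dispatchers.IO.limitedParallelism(1)
    var counter = 0 // safe: only ever mutated from the single-lane view
    coroutineScope {
        repeat(times) {
            launch(lane) { counter++ } // max one task at a time, no locks needed
        }
    }
    return counter
}

fun main() = runBlocking {
    println(countWithSingleLane(10_000)) // every increment is observed
}
```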

3. **Configure a thread pool for specific needs**:

```kotlin
// Create a custom dispatcher for database operations
val dbDispatcher = Executors.newFixedThreadPool(10).asCoroutineDispatcher()

// Use the custom dispatcher for database work
suspend fun queryDatabase() = withContext(dbDispatcher) {
    // Limited to 10 threads, does not compete with general IO
    database.executeQuery()
}

// Remember to close the dispatcher when done
// (usually in Application.onDestroy or DI cleanup);
// asCoroutineDispatcher() already returns an ExecutorCoroutineDispatcher
dbDispatcher.close()
```
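
When the custom dispatcher is only needed for one batch of work, `use` can scope its lifetime, since `ExecutorCoroutineDispatcher` implements `Closeable`. A sketch (the squaring task stands in for real database work):

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

// Sketch: `use` closes the dispatcher (shutting down its thread pool) when
// the batch finishes, even if the batch throws.
fun runBatch(): Int =
    Executors.newFixedThreadPool(4).asCoroutineDispatcher().use { dispatcher ->
        runBlocking {
            (1..10).map { n ->
                async(dispatcher) { n * n } // stand-in for real database work
            }.awaitAll().sum()
        }
    } // pool is shut down here

fun main() {
    println(runBatch())
}
```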

Prevention

  • Limit concurrent IO operations with chunked or Semaphore
  • Use limitedParallelism() to cap thread usage per operation type
  • Use custom dispatchers for specific resource types (database, network)
  • Monitor thread pool utilization in production
  • Set connection pool sizes to match coroutine concurrency limits
  • Use Dispatchers.IO only for actual I/O, not CPU-intensive computation
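
For the monitoring point, one lightweight approach is a periodic logger built on the same thread-enumeration trick shown earlier. This is an illustrative sketch (the function name and log format are not from any library):

```kotlin
import kotlinx.coroutines.*

// Sketch of a production monitor: periodically counts dispatcher worker
// threads and how many are blocked or waiting, so pool saturation shows up
// in logs before users see a freeze.
fun CoroutineScope.launchThreadPoolMonitor(periodMs: Long = 30_000) = launch {
    while (isActive) {
        val workers = Thread.getAllStackTraces().keys.filter {
            it.name.startsWith("DefaultDispatcher-worker")
        }
        val blocked = workers.count {
            it.state == Thread.State.BLOCKED || it.state == Thread.State.WAITING
        }
        println("dispatcher workers=${workers.size} blocked-or-waiting=$blocked")
        delay(periodMs)
    }
}

fun main() = runBlocking {
    val monitor = launchThreadPoolMonitor(periodMs = 100)
    delay(250) // let it log a few samples
    monitor.cancelAndJoin()
}
```

Cancel the monitor job on shutdown; `delay` suspends rather than blocks, so the monitor itself consumes no pool thread between samples.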