Introduction

Logback's AsyncAppender buffers log events in a bounded BlockingQueue before handing them off to a background thread for actual I/O. When the queue fills up faster than the background thread can drain it -- during a log storm, slow disk I/O, or when the disk is full -- the calling thread either blocks (default behavior) or drops log messages. In high-throughput production environments, a full queue can cause application threads to block on logging, creating a cascading latency spike across the entire service.
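The core mechanism can be seen with a plain `ArrayBlockingQueue` (a minimal sketch of the handoff pattern, not Logback's actual implementation):

```java
// Illustrative: events go into a bounded queue; a worker thread normally drains
// them. With no consumer running, the queue fills and the producer must either
// block (put) or drop (offer) -- the same choice AsyncAppender faces.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncHandoffDemo {
    public static void main(String[] args) {
        // Tiny capacity to force overflow quickly.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        System.out.println(queue.offer("event-1")); // true: accepted
        System.out.println(queue.offer("event-2")); // true: accepted

        // Queue is full. offer() returns false immediately (neverBlock-style drop);
        // put() would instead block the calling thread, which is the default behavior.
        System.out.println(queue.offer("event-3")); // false: dropped
    }
}
```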

Symptoms

Application threads block during high log volume:

```
"http-nio-8080-exec-42" #42 daemon prio=5 os_prio=0 tid=0x00007f9c2c1a3d90 nid=0x1a2b waiting on condition
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:353)
    at ch.qos.logback.classic.AsyncAppender.append(AsyncAppender.java:133)
```

Or log messages are silently dropped:

```
14:23:45,123 |-WARN in ch.qos.logback.classic.AsyncAppender[ASYNC] -
Queue is full, dropping log event. Consider increasing queueSize or setting neverBlock=true.
```

Memory spike from large queue:

```
java.lang.OutOfMemoryError: Java heap space
    at java.util.concurrent.ArrayBlockingQueue.<init>(ArrayBlockingQueue.java:162)
    at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders
```

Common Causes

  • Default queue size of 256 too small: High log throughput fills the queue in milliseconds
  • Slow disk I/O: Background thread cannot drain the queue fast enough
  • Log storm: An error condition generates thousands of log entries per second
  • neverBlock=false (default): Application threads block when the queue is full
  • Queue size too large: Setting queueSize to millions causes OOM when messages accumulate
  • Misunderstood discard policy: By default, once remaining capacity falls below discardingThreshold (queueSize/5), incoming TRACE, DEBUG, and INFO events are discarded; Logback does not drop the oldest queued events
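To see how quickly the default queue saturates, a rough back-of-envelope calculation (the burst rate here is illustrative, not a measurement):

```java
public class QueueFillTime {
    public static void main(String[] args) {
        int queueSize = 256;            // Logback's default queueSize
        int eventsPerSecond = 100_000;  // illustrative burst rate during a log storm
        double fillMillis = 1000.0 * queueSize / eventsPerSecond;
        System.out.println(fillMillis + " ms to fill if the worker thread stalls");
        // prints: 2.56 ms to fill if the worker thread stalls
    }
}
```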

Step-by-Step Fix

Step 1: Configure async appender with appropriate queue size

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <fileNamePattern>logs/app-%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
      <maxFileSize>100MB</maxFileSize>
      <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>1024</queueSize>                       <!-- Default is 256 -->
    <discardingThreshold>0</discardingThreshold>      <!-- 0 = never discard any event -->
    <neverBlock>false</neverBlock>                    <!-- true = drop events instead of blocking -->
    <maxFlushTime>1000</maxFlushTime>                 <!-- Max ms to wait for queue drain on shutdown -->
    <includeCallerData>false</includeCallerData>      <!-- true = includes caller line numbers but is expensive -->
    <appender-ref ref="FILE"/>
  </appender>

  <root level="INFO">
    <appender-ref ref="ASYNC"/>
  </root>
</configuration>
```

Step 2: Use neverBlock for latency-sensitive applications

```xml
<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>2048</queueSize>
    <neverBlock>true</neverBlock>  <!-- Drop events instead of blocking app threads -->
    <discardingThreshold>512</discardingThreshold>  <!-- Shed TRACE/DEBUG/INFO when fewer than 512 slots remain -->
    <appender-ref ref="FILE"/>
</appender>
```

With neverBlock=true, application threads never wait on logging: when the queue is full, new events are dropped outright. discardingThreshold kicks in earlier; once remaining capacity falls below it, events at INFO level and below are discarded so that WARN and ERROR events keep their place in the queue.

Step 3: Use Logstash encoder for structured async logging

```xml
<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <queueSize>4096</queueSize>
  <neverBlock>true</neverBlock>
  <appender-ref ref="LOGSTASH"/>
</appender>

<appender name="LOGSTASH" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>logs/app.json</file>
  <encoder class="net.logstash.logback.encoder.LogstashEncoder">
    <includeCallerData>false</includeCallerData>
  </encoder>
  <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <fileNamePattern>logs/app-%d{yyyy-MM-dd}.%i.json.gz</fileNamePattern>
    <maxFileSize>100MB</maxFileSize>
    <maxHistory>7</maxHistory>
  </rollingPolicy>
</appender>
```

Prevention

  • Set queueSize to 1024-4096 for most production workloads
  • Use neverBlock=true for latency-sensitive applications, false for log-critical applications
  • Monitor queue utilization: AsyncAppender exposes getNumberOfElementsInQueue() and getRemainingCapacity(), which can be polled programmatically or exported to your metrics system
  • Rely on discardingThreshold to shed TRACE/DEBUG/INFO events first; WARN and ERROR are never discarded by the threshold (though with neverBlock=true they can still be dropped on a completely full queue)
  • Use includeCallerData=false for better performance (caller data is expensive to collect)
  • Add a WARN-level appender that is NOT async to ensure critical errors are always logged
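The last point can be sketched in configuration (the appender name CRITICAL_FILE is hypothetical; ThresholdFilter is Logback's built-in level filter):

```xml
<!-- Synchronous appender wired directly to the root logger, so WARN+ events
     bypass the async queue entirely and are written even if ASYNC is dropping -->
<appender name="CRITICAL_FILE" class="ch.qos.logback.core.FileAppender">
    <file>logs/critical.log</file>
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>WARN</level>
    </filter>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>

<root level="INFO">
    <appender-ref ref="ASYNC"/>
    <appender-ref ref="CRITICAL_FILE"/>
</root>
```

The trade-off is that WARN and ERROR events pay synchronous I/O cost, which is usually acceptable given their low volume.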