Introduction

Go's defer statement does not execute until the surrounding function returns. When defer is used inside a loop, each deferred call accumulates and only executes when the function exits. A loop that opens files, creates database connections, or allocates memory with defer will exhaust file descriptors, database connections, or memory before the function returns. This is one of the most insidious Go gotchas because the code looks correct -- each resource has a matching defer -- but the timing of cleanup is wrong.

Symptoms

```bash
open /tmp/data/file_1024.dat: too many open files
```

Or:

```bash
runtime: out of memory
# Defer accumulated thousands of large allocations
```

Or database connection pool exhaustion:

```bash
dial tcp 127.0.0.1:5432: connect: connection refused
# All connections still open because defer hasn't run
```

Common Causes

  • Defer in a for loop: deferred calls accumulate until the enclosing function returns
  • os.Open in a loop with defer f.Close(): every file stays open until the end of the function
  • sql.Rows opened in a loop with defer rows.Close(): result sets (and their pooled connections) stay open between iterations
  • defer resp.Body.Close() in a loop: HTTP response bodies and their connections accumulate
  • Large allocations with a deferred release: memory cannot be reclaimed until the function exits
  • Deferred closure over the loop variable: before Go 1.22, every deferred closure saw the final iteration's value

Step-by-Step Fix

Step 1: Use explicit cleanup instead of defer in loops

```go
// WRONG: defer accumulates in the loop
func processFilesWrong(dir string) error {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return err
    }
    for _, entry := range entries {
        f, err := os.Open(filepath.Join(dir, entry.Name()))
        if err != nil {
            return err
        }
        defer f.Close() // All files stay open until the function returns!
        processFile(f)
    }
    return nil
}

// CORRECT: explicit cleanup in the loop
func processFilesCorrect(dir string) error {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return err
    }
    for _, entry := range entries {
        f, err := os.Open(filepath.Join(dir, entry.Name()))
        if err != nil {
            return err
        }
        if err := processFile(f); err != nil {
            f.Close()
            return err
        }
        f.Close() // Close immediately after use
    }
    return nil
}
```

Step 2: Use anonymous function for defer scope

```go
func processFilesWithDefer(dir string) error {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return err
    }
    for _, entry := range entries {
        // The anonymous function gives each iteration its own defer scope
        if err := func() error {
            f, err := os.Open(filepath.Join(dir, entry.Name()))
            if err != nil {
                return err
            }
            defer f.Close() // Runs when the anonymous function returns
            return processFile(f)
        }(); err != nil {
            return err
        }
    }
    return nil
}
```

Step 3: Limit concurrent resource usage

```go
func processFilesConcurrent(dir string, maxOpen int) error {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return err
    }
    sem := make(chan struct{}, maxOpen) // Semaphore bounds open files
    errChan := make(chan error, len(entries))
    var wg sync.WaitGroup

    for _, entry := range entries {
        wg.Add(1)
        sem <- struct{}{} // Acquire semaphore
        go func(e os.DirEntry) {
            defer wg.Done()
            defer func() { <-sem }() // Release semaphore

            f, err := os.Open(filepath.Join(dir, e.Name()))
            if err != nil {
                errChan <- err
                return
            }
            defer f.Close() // Safe here: the goroutine exits per file

            if err := processFile(f); err != nil {
                errChan <- err
            }
        }(entry)
    }

    wg.Wait()
    close(errChan)

    for err := range errChan {
        if err != nil {
            return err
        }
    }
    return nil
}
```

Prevention

  • Never use defer in a loop that iterates more than a few times
  • Use explicit Close() calls in loops, or wrap iteration in anonymous functions
  • Limit concurrent resource usage with semaphores (buffered channels)
  • Monitor file descriptor count with lsof -p <pid> (the process's PID, not the shell's) during development
  • Use runtime/debug.FreeOSMemory() for long-running loops with heavy allocations
  • Add resource limit checks to tests that process large datasets
  • Use static analysis tools like go-critic that flag defer in loop patterns