Introduction
When `defer resp.Body.Close()` is called inside a loop, the deferred calls do not execute until the enclosing function returns. If the loop processes many HTTP requests, every response body stays open simultaneously, eventually exhausting the process's file descriptors. This is one of the most subtle resource-leak patterns in Go code.
Symptoms
- Errors like `Get "https://api.example.com/data": dial tcp: too many open files`
- Works for small batches, fails for large batches
- File descriptor count grows with each iteration
- `lsof -p <pid>` shows hundreds of open socket connections
- `defer` stack grows linearly with loop count
```go
// WRONG - ALL response bodies stay open until function returns
func fetchAll(urls []string) ([]string, error) {
	var results []string
	for _, url := range urls {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close() // Not executed until ALL URLs processed!

		data, _ := io.ReadAll(resp.Body)
		results = append(results, string(data))
	}
	return results, nil
}
// 1000 URLs = 1000 open connections simultaneously
```
Common Causes
- Batch processing multiple HTTP endpoints
- Scraping or data aggregation over many URLs
- Health check loops monitoring many services
- Bulk API calls to microservices
- Testing with many mock endpoints
Step-by-Step Fix
1. Wrap loop body in anonymous function:

```go
func fetchAll(urls []string) ([]string, error) {
	var results []string
	for _, url := range urls {
		result, err := func() (string, error) {
			resp, err := http.Get(url)
			if err != nil {
				return "", err
			}
			defer resp.Body.Close() // Executed when anonymous function returns

			data, err := io.ReadAll(resp.Body)
			if err != nil {
				return "", err
			}
			return string(data), nil
		}()
		if err != nil {
			return nil, fmt.Errorf("failed to fetch %s: %w", url, err)
		}
		results = append(results, result)
	}
	return results, nil
}
```
2. Explicit close without defer:

```go
func fetchAll(urls []string) ([]string, error) {
	var results []string
	for _, url := range urls {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}

		data, err := io.ReadAll(resp.Body)
		resp.Body.Close() // Explicit, immediate close
		if err != nil {
			return nil, err
		}
		results = append(results, string(data))
	}
	return results, nil
}
```
3. Concurrent fetching with worker pool:

```go
func fetchAllConcurrent(urls []string, workers int) ([]string, error) {
	type result struct {
		data string
		err  error
	}

	urlCh := make(chan string, len(urls))
	resultCh := make(chan result, len(urls))

	// Worker goroutines
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for url := range urlCh {
				resp, err := http.Get(url)
				if err != nil {
					resultCh <- result{err: err}
					continue
				}
				data, err := io.ReadAll(resp.Body)
				resp.Body.Close() // Close immediately in worker
				if err != nil {
					resultCh <- result{err: err}
				} else {
					resultCh <- result{data: string(data)}
				}
			}
		}()
	}

	// Send URLs
	for _, url := range urls {
		urlCh <- url
	}
	close(urlCh)

	// Wait and collect
	wg.Wait()
	close(resultCh)

	var results []string
	for r := range resultCh {
		if r.err != nil {
			return nil, r.err
		}
		results = append(results, r.data)
	}
	return results, nil
}
```
Prevention

- Use static analysis: `go vet` and third-party linters flag some defer-in-loop patterns
- Never use `defer` inside loops that process more than a few items
- Set `ulimit -n` to a low value during testing to catch leaks early
- Monitor file descriptor count in production: `ls /proc/<pid>/fd | wc -l`
- Use an `http.Client` with connection pool limits to bound concurrent connections
- Consider `errgroup.Group` with bounded concurrency for parallel HTTP calls:

```go
g, ctx := errgroup.WithContext(ctx)
g.SetLimit(10) // Max 10 concurrent requests
```