Your application crashes with "Too many open files" or accept() fails with EMFILE. High-traffic servers, databases, and proxies are particularly prone to hitting file descriptor limits. Linux has multiple layers of limits, and understanding how to adjust them is essential for production systems.

Understanding File Descriptor Limits

Linux limits the number of file descriptors (which include files, sockets, pipes, and other I/O resources) that a process can open. There are two types of limits:

- Soft limit: the current limit, which a process can raise up to the hard limit
- Hard limit: the maximum ceiling, which can't be raised without root
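Both limits can also be inspected and adjusted from inside a process. A minimal sketch using Python's standard `resource` module (the module and functions are real stdlib API; printed values vary by system):

```python
import resource

# RLIMIT_NOFILE governs this process's file descriptor count
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

# Raising the hard limit itself requires root (CAP_SYS_RESOURCE);
# without that, setrlimit fails
if hard != resource.RLIM_INFINITY:
    try:
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard + 1))
    except (ValueError, OSError) as e:
        print(f"cannot raise hard limit: {e}")
```

Raising the soft limit at startup this way is a common pattern for servers that expect many connections.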

Typical Error Messages

```bash
Too many open files
accept(): Too many open files
socket(): Too many open files
EMFILE (Process file descriptor table full)
Cannot open file: Too many open files
```

Common symptoms:

- Application crashes or becomes unresponsive
- New connections fail
- Unable to open new files
- Database connection errors
- Web server 500 errors under load
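Applications can detect this condition explicitly: the failing syscall returns `EMFILE` (per-process table full) or `ENFILE` (system-wide table full), which Python surfaces as `OSError`. A hedged sketch (the `open_or_shed` helper name is illustrative, not a standard API):

```python
import errno

def open_or_shed(path):
    """Open a file, mapping fd-exhaustion errors to a distinct signal
    so the caller can shed load instead of crashing."""
    try:
        return open(path, "rb")
    except OSError as e:
        if e.errno in (errno.EMFILE, errno.ENFILE):
            # Descriptor table is full: back off, close idle
            # descriptors, or reject the request.
            return None
        raise  # unrelated errors still propagate

f = open_or_shed("/dev/null")
```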

Diagnosing File Descriptor Issues

Check Current Limits

```bash
# View all limits for current shell
ulimit -a

# View specific limits
ulimit -n    # Soft limit for open files
ulimit -Hn   # Hard limit for open files
ulimit -Sn   # Soft limit (same as -n)

# View in a specific format
ulimit -a -S   # All soft limits
ulimit -a -H   # All hard limits

# Check limits for a running process
cat /proc/<pid>/limits

# Example for nginx
cat /proc/$(cat /var/run/nginx.pid)/limits

# Check current file descriptor usage
ls /proc/<pid>/fd | wc -l

# Find the processes with the most open files
lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head
```

Check System-Wide Limits

```bash
# System-wide file descriptor limit
cat /proc/sys/fs/file-max

# Current system-wide usage
cat /proc/sys/fs/file-nr   # Format: allocated unused maximum

# Per-user limits
grep -v "^#" /etc/security/limits.conf

# Check systemd user slice limits
systemctl show user-1000.slice | grep LimitNOFILE
```

Check Process-Specific Limits

```bash
# Find process PID
pidof nginx
pgrep -f "java.*myapp"

# Check process limits
cat /proc/<pid>/limits | grep "open files"

# Check current file descriptors
ls /proc/<pid>/fd | wc -l

# List all open files for a process
lsof -p <pid>

# List network connections for a process (-a ANDs the filters;
# without it, lsof ORs them and shows every network file on the system)
lsof -a -i -p <pid>
```

Solutions

Solution 1: Temporary Limit Increase (Current Shell)

```bash
# Increase soft limit (up to hard limit)
ulimit -n 65535

# This only affects the current shell and its children
# Verify
ulimit -n

# Run a command with higher limits
ulimit -n 65535 && ./my-application
```

Solution 2: Permanent User Limits

Edit /etc/security/limits.conf:

```bash
# Edit limits configuration
sudo nano /etc/security/limits.conf

# Add entries for specific users or groups:
# <domain> <type> <item> <value>

# For all users
*           soft    nofile    65535
*           hard    nofile    65535

# For specific user
myuser      soft    nofile    100000
myuser      hard    nofile    100000

# For a group
@developers soft    nofile    65535
@developers hard    nofile    65535

# For root (the * wildcard does not match root)
root        soft    nofile    65535
root        hard    nofile    100000
```

You may also need to create a file in /etc/security/limits.d/:

```bash
# Create a custom limits file
sudo nano /etc/security/limits.d/nofile.conf

# Add the same format entries
*    soft    nofile    65535
*    hard    nofile    65535
```

After editing, the user must log out and log back in for changes to take effect.

Solution 3: Fix PAM Session Configuration

Limits won't apply if PAM doesn't load the session module:

```bash
# Check if the PAM limits module is enabled
grep pam_limits /etc/pam.d/*

# Should see:
# session required pam_limits.so

# If not present, add to common-session
sudo nano /etc/pam.d/common-session
# Add:
session required pam_limits.so

# For SSH specifically
sudo nano /etc/pam.d/sshd
# Ensure:
session required pam_limits.so

# Also check sshd_config
sudo nano /etc/ssh/sshd_config
# Ensure:
UsePAM yes
```

Solution 4: Systemd Service Limits

For services managed by systemd, limits must be set in the service file:

```bash
# Check current service limits
systemctl show nginx | grep LimitNOFILE

# Edit service to increase limits
sudo systemctl edit nginx --full

# Add or modify in the [Service] section:
[Service]
LimitNOFILE=65535
# To set soft and hard limits separately, use the soft:hard syntax
# (there is no LimitNOFILESoft= directive):
# LimitNOFILE=65535:100000

# Or use a drop-in directory
sudo systemctl edit nginx
# Add:
[Service]
LimitNOFILE=65535

# Apply changes
sudo systemctl daemon-reload
sudo systemctl restart nginx

# Verify
systemctl show nginx | grep LimitNOFILE
cat /proc/$(pidof nginx)/limits | grep "open files"
```

For all systemd services globally:

```bash
# Edit global systemd configuration
sudo nano /etc/systemd/system.conf

# Uncomment and set:
DefaultLimitNOFILE=65535

# Also in user.conf for user services
sudo nano /etc/systemd/user.conf
DefaultLimitNOFILE=65535

# Reload systemd
sudo systemctl daemon-reexec
```

Solution 5: Increase System-Wide Limits

```bash
# Check current system limit
cat /proc/sys/fs/file-max

# Increase temporarily
sudo sysctl -w fs.file-max=1000000

# Make permanent
sudo nano /etc/sysctl.conf
# Add or modify:
fs.file-max = 1000000

# Apply
sudo sysctl -p

# Verify
cat /proc/sys/fs/file-max
```

Solution 6: Application-Specific Configuration

Many applications have their own file descriptor settings:

Nginx:

```nginx
# /etc/nginx/nginx.conf
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
}
```

Apache:

```apache
# ServerLimit and MaxRequestWorkers affect file usage
# In /etc/apache2/mods-enabled/mpm_prefork.conf or similar:
MaxRequestWorkers 1000
```

MySQL/MariaDB:

```ini
# /etc/mysql/my.cnf or /etc/my.cnf
[mysqld]
open_files_limit = 65535
# Also set in systemd override:
# LimitNOFILE=65535
```

PostgreSQL:

```bash
# systemd override
sudo systemctl edit postgresql

[Service]
LimitNOFILE=65535

# Also in postgresql.conf: reserve connection slots for superusers
# so admin access survives connection exhaustion
superuser_reserved_connections = 3
```

Redis:

```bash
# /etc/redis/redis.conf
maxclients 10000

# systemd override
sudo systemctl edit redis

[Service]
LimitNOFILE=65535
```

Elasticsearch:

```bash
# For package installs, set in /etc/default/elasticsearch
# (or /etc/sysconfig/elasticsearch on RPM systems):
MAX_OPEN_FILES=65535

# systemd override
sudo systemctl edit elasticsearch

[Service]
LimitNOFILE=65535
```

Node.js:

```bash
# Set limit before starting
ulimit -n 65535 && node app.js

# Or in a systemd service
[Service]
LimitNOFILE=65535
ExecStart=/usr/bin/node /app/server.js
```

Solution 7: Docker Container Limits

```bash
# Set ulimit for a container
docker run --ulimit nofile=65535:65535 myimage

# In docker-compose.yml:
#   ulimits:
#     nofile:
#       soft: 65535
#       hard: 65535

# Set the default in the Docker daemon (/etc/docker/daemon.json):
# {
#   "default-ulimits": {
#     "nofile": { "Name": "nofile", "Hard": 65535, "Soft": 65535 }
#   }
# }

# Restart Docker
sudo systemctl restart docker
```

Solution 8: Fix Leaking File Descriptors

If your application is leaking file descriptors:

```bash
# Find processes with many open files
lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head

# Check a specific process
lsof -p <pid> | wc -l

# Find deleted but still open files
lsof +L1

# Find network connections
lsof -i

# Find open files by user
lsof -u username

# Find files opened by a specific command
lsof -c nginx

# Monitor file descriptor usage over time
watch -n 1 'ls /proc/<pid>/fd | wc -l'
```
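The same counting approach works from inside a process: on Linux, `/proc/self/fd` lists a process's own descriptors, so a regression test can assert that a code path does not leak. A minimal sketch (Linux-only; the `fd_count` helper name is illustrative):

```python
import os

def fd_count():
    # Each entry in /proc/self/fd is one open descriptor of this process
    return len(os.listdir("/proc/self/fd"))

baseline = fd_count()

# Leak: descriptors that are opened but never closed accumulate
leaked = [open("/dev/null") for _ in range(10)]
assert fd_count() == baseline + 10

# Fix: close deterministically (or use a `with` context manager,
# which closes the file even if an exception is raised)
for f in leaked:
    f.close()
assert fd_count() == baseline
```

Running such a check in CI catches leaks long before they surface as EMFILE in production.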

Checking What's Using File Descriptors

```bash
# List all open files for a process
lsof -p <pid>

# Count by type (column 5 is TYPE: REG, IPv4, unix, ...)
lsof -p <pid> | awk '{print $5}' | sort | uniq -c | sort -rn

# Show only TCP connections
lsof -i TCP

# Show only listening sockets
lsof -i -P -n | grep LISTEN

# Show files opened from a specific directory
lsof +D /var/log

# Show open files by port
lsof -i :80
lsof -i :443
```

Troubleshooting in Production

When you hit limits in production:

```bash
# Quick temporary fix (only helps if the system-wide limit is
# exhausted; per-process EMFILE needs the process limit raised)
sudo sysctl -w fs.file-max=1000000

# For a specific process, if you can restart it
systemctl restart nginx

# Check if the restart helped
cat /proc/$(pidof nginx)/limits

# Monitor in real time
watch -n 1 'cat /proc/sys/fs/file-nr'
```

Verification

After making changes:

```bash
# Verify user limits (after re-login)
ulimit -n
ulimit -Hn

# Verify system limits
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr

# Verify process limits
cat /proc/<pid>/limits | grep "open files"

# Verify systemd service limits
systemctl show nginx | grep LimitNOFILE

# Test by opening many files
# WARNING: Run with caution
python3 -c "
import os
files = []
try:
    for i in range(100000):
        files.append(open('/dev/null', 'r'))
    print(f'Opened {len(files)} files')
except Exception as e:
    print(f'Failed at {len(files)} files: {e}')
"

# Check current usage
lsof | wc -l
cat /proc/sys/fs/file-nr
```

Quick Reference

```bash
# View limits
ulimit -n    # Soft limit
ulimit -Hn   # Hard limit
ulimit -a    # All limits

# Set temporary limit
ulimit -n 65535

# Set permanent limit
# Edit /etc/security/limits.conf:
# * soft nofile 65535
# * hard nofile 65535

# System-wide limit
sudo sysctl -w fs.file-max=1000000

# Systemd service limit
# Edit service: LimitNOFILE=65535

# Check current usage
lsof | wc -l
cat /proc/sys/fs/file-nr

# Find file descriptor leaks
lsof +L1               # Deleted but open files
lsof -p <pid> | wc -l  # Count for process
```