## Introduction

SQL Server transaction log files grow when log records cannot be truncated, typically because of uncommitted transactions, missing log backups, or long-running operations. When the log file grows, autogrow events cause I/O stalls, write operations slow down dramatically, and the database can become temporarily unavailable.

## Symptoms

- `WRITELOG` wait type appearing at the top of wait statistics
- Transaction log file growing by GBs in minutes
- `DBCC SQLPERF(logspace)` shows log space used at 99%+
- Error 9002: `The transaction log for database is full`
- Write operations timing out during autogrow events
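To confirm the `WRITELOG` symptom before digging further, the wait-stats DMV can be checked directly. A minimal sketch (values are cumulative since the last instance restart or wait-stats clear):

```sql
-- Cumulative WRITELOG waits since the last restart (or wait-stats clear)
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';
```

A sustained average wait of more than a few milliseconds per task is usually worth investigating; compare against a baseline captured when the system is healthy.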

## Common Causes

- Log backups not running, preventing log truncation in the FULL recovery model
- A long-running uncommitted transaction holding log records
- Large bulk operations (index rebuilds, bulk inserts) generating excessive log
- Log file autogrow configured with a percentage instead of a fixed size
- Too many Virtual Log Files (VLFs), causing slow recovery and slow growth

## Step-by-Step Fix

1. **Check log file usage and VLF count:**

   ```sql
   -- Log space usage
   DBCC SQLPERF(logspace);

   -- VLF count for the current database
   SELECT DB_NAME(database_id) AS database_name,
          COUNT(*) AS vlf_count
   FROM sys.dm_db_log_info(DB_ID())
   GROUP BY database_id;
   ```

2. **Check what is preventing log truncation:**

   ```sql
   SELECT name,
          log_reuse_wait_desc,
          recovery_model_desc
   FROM sys.databases
   WHERE name = 'mydb';

   -- Common values:
   -- LOG_BACKUP         - needs a log backup
   -- ACTIVE_TRANSACTION - long-running transaction
   -- REPLICATION        - replication not consuming the log
   ```

3. **Take a log backup to allow truncation:**

   ```sql
   -- In the FULL recovery model, back up the log
   BACKUP LOG mydb TO DISK = N'D:\Backups\mydb_log.trn'
   WITH COMPRESSION, STATS = 10;

   -- Then shrink the log file
   DBCC SHRINKFILE (mydb_log, 4096); -- Shrink to 4 GB (target size is in MB)
   ```

4. **Find and kill long-running transactions blocking truncation:**

   ```sql
   SELECT s.session_id,
          s.login_name,
          t.transaction_begin_time,
          DATEDIFF(minute, t.transaction_begin_time, GETDATE()) AS minutes_open,
          r.command,
          t.database_transaction_log_bytes_used / 1024 / 1024 AS log_mb_used
   FROM sys.dm_tran_database_transactions t
   JOIN sys.dm_tran_session_transactions st ON t.transaction_id = st.transaction_id
   JOIN sys.dm_exec_sessions s ON st.session_id = s.session_id
   LEFT JOIN sys.dm_exec_requests r ON r.session_id = s.session_id
   WHERE t.database_id = DB_ID('mydb')
   ORDER BY t.database_transaction_log_bytes_used DESC;

   -- After reviewing the offending session, terminate it with: KILL <session_id>;
   -- Note: killing a session rolls back its open transaction, which can itself take time.
   ```
5. **Pre-size the log file to avoid autogrow:**

   ```sql
   -- Set the log file to an appropriate size upfront
   ALTER DATABASE mydb
   MODIFY FILE (NAME = mydb_log, SIZE = 10GB, FILEGROWTH = 512MB);

   -- Avoid percentage-based growth
   ALTER DATABASE mydb
   MODIFY FILE (NAME = mydb_log, FILEGROWTH = 512MB);
   ```
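To confirm the new size and growth settings took effect, the file metadata can be inspected from inside the database. A sketch using `sys.database_files` (the `size` and `growth` columns are reported in 8 KB pages when growth is fixed-size):

```sql
-- Current size and growth settings for the log file
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files
WHERE type_desc = 'LOG';
```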

## Prevention

- Set up regular log backups (every 15 minutes for active databases)
- Pre-size log files to avoid autogrow during peak operations
- Use a fixed MB growth size (e.g. 512 MB) instead of a percentage
- Monitor `log_reuse_wait_desc` and alert when it is not `NOTHING`
- Keep the VLF count under 200 by sizing the log file appropriately
- Use the SIMPLE recovery model for non-critical databases that do not need point-in-time recovery
- Monitor `WRITELOG` waits in the DMVs and alert on sustained high values
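The `log_reuse_wait_desc` and VLF checks from the steps above can be combined into a single query suitable for a scheduled alert job. A sketch (the 200-VLF threshold is the guideline from this article, not a hard limit):

```sql
-- Flag databases where log truncation is blocked or the VLF count is high
SELECT d.name,
       d.log_reuse_wait_desc,
       d.recovery_model_desc,
       v.vlf_count
FROM sys.databases d
CROSS APPLY (SELECT COUNT(*) AS vlf_count
             FROM sys.dm_db_log_info(d.database_id)) v
WHERE d.log_reuse_wait_desc <> 'NOTHING'
   OR v.vlf_count > 200;
```

Any row returned is a candidate for the step-by-step fix above; an empty result means log truncation is healthy everywhere.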