Introduction

Write concern controls the level of acknowledgment MongoDB requires for write operations. When a write cannot achieve the specified write concern level, the operation returns an error even if the data was written to the primary; the write is not rolled back, only the acknowledgment guarantee is unmet. This protects durability but can surface as application errors when replica set members are unavailable or replication is lagging.

Symptoms

Write concern errors appear with specific error codes:

```text
# Write concern timeout
MongoWriteConcernError: Waiting for replication timed out
Error: writeConcernError: { code: 100, codeName: "CannotSatisfyWriteConcern" }

# Insufficient members
MongoServerError: Not enough data-bearing nodes to satisfy write concern
No replica set member available for write concern

# Member unavailable
MongoWriteConcernError: replica set member unavailable

# In driver response
{
  "writeConcernError": {
    "code": 100,
    "codeName": "CannotSatisfyWriteConcern",
    "errmsg": "Waiting for replication timed out"
  }
}

# In logs
{"msg":"Write concern failed","attr":{"w":"majority","wtimeout":5000}}
```

Common Causes

  1. Secondary members unavailable - Not enough members to satisfy w:majority
  2. High replication lag - Secondaries cannot catch up within the timeout
  3. wtimeout too short - Timeout expires before replication completes
  4. Network connectivity issues - Primary cannot reach secondaries
  5. Member in maintenance mode - Members temporarily in RECOVERING for resync or upgrade
  6. Arbiter configuration - Arbiters hold no data and cannot acknowledge writes
  7. Write concern too strict - A numeric w equal to the full member count fails as soon as one member is unavailable
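Most of these causes reduce to simple arithmetic: a replica set can acknowledge w:majority only while more than half of its voting members are reachable. A minimal sketch of that check in plain JavaScript (the helper names are mine, not a MongoDB API):

```javascript
// Majority of a replica set = floor(votingMembers / 2) + 1.
// w:"majority" is satisfiable only while at least that many voting,
// data-bearing members are healthy. (Hypothetical helpers for illustration.)
function majorityNeeded(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

function canSatisfyMajority(votingMembers, healthyVotingMembers) {
  return healthyVotingMembers >= majorityNeeded(votingMembers);
}

console.log(majorityNeeded(3));         // 2
console.log(canSatisfyMajority(3, 2));  // true: 2 of 3 is still a majority
console.log(canSatisfyMajority(3, 1));  // false: w:"majority" will time out
```

Note that in a primary-secondary-arbiter topology the arbiter votes but cannot acknowledge writes, so a single data-bearing member outage already makes w:majority unsatisfiable.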

Step-by-Step Fix

Step 1: Check Replica Set Status

Verify member availability:

```javascript
// Connect to the primary first: mongosh --host primary:27017

// Replica set status
rs.status()

// Key fields:
// - stateStr: PRIMARY/SECONDARY
// - health: 1 (healthy) or 0 (unhealthy)
// - optime: current replication position
// - lastHeartbeatMessage: connectivity status

// Count healthy data-bearing members (exclude arbiters)
let conf = rs.conf()
let healthyDataMembers = rs.status().members.filter(m =>
  m.health === 1 &&
  m.stateStr !== "ARBITER" &&
  !conf.members.find(c => c._id === m._id)?.arbiterOnly
).length
print("Healthy data-bearing members: " + healthyDataMembers)
```

Check replication lag:

```javascript
// Print lag information (rs.printSlaveReplicationInfo() before MongoDB 4.4)
rs.printSecondaryReplicationInfo()

// Or detailed, per member
let status = rs.status()
let primary = status.members.find(p => p.stateStr === "PRIMARY")
status.members.forEach(m => {
  if (m.stateStr === "SECONDARY") {
    let lag = primary.optimeDate - m.optimeDate
    print(m.name + " lag: " + (lag / 1000).toFixed(2) + " seconds")
  }
})
```

Step 2: Diagnose Write Concern Configuration

Check current write concern settings:

```javascript
// Cluster-wide default write concern
db.adminCommand({ getDefaultRWConcern: 1 })

// Write concern can also be set per operation, in the connection
// string (e.g. ?w=majority), or as a driver-side collection option;
// none of these are stored on the server per collection
```

Check the failing write operation:

```javascript
// Example write with w:majority
db.collection.insertOne(
  { data: "test" },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)
```
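A write concern failure arrives as a writeConcernError sub-document in the raw response, separate from ordinary write errors, and the write itself may still be durable on the primary. A sketch of inspecting that sub-document in plain JavaScript (helper names are mine; the codes are the ones shown under Symptoms):

```javascript
// Extract a writeConcernError from a raw command response, if present.
// The write may still have reached the primary, so callers should
// verify rather than blindly re-issue inserts.
function getWriteConcernError(response) {
  return response && response.writeConcernError ? response.writeConcernError : null;
}

// Heuristic: treat "timed out" messages as wtimeout expiry.
function isWtimeout(response) {
  const wce = getWriteConcernError(response);
  return wce !== null && /timed out/i.test(wce.errmsg || "");
}

const resp = {
  n: 1,
  writeConcernError: {
    code: 100,
    codeName: "CannotSatisfyWriteConcern",
    errmsg: "Waiting for replication timed out"
  }
};
console.log(getWriteConcernError(resp).codeName); // "CannotSatisfyWriteConcern"
console.log(isWtimeout(resp));                    // true
```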

Step 3: Resolve Member Availability Issues

If members are unhealthy:

```javascript
// Check why a member is down
rs.status().members.forEach(m => {
  if (m.health === 0) {
    print(m.name + " - " + m.lastHeartbeatMessage)
  }
})

// Common issues:
// - Network connectivity
// - Service crashed
// - Disk full
```

Restart unavailable members:

```bash
# On the unavailable member
sudo systemctl status mongod
sudo systemctl start mongod

# Check connectivity
mongosh --host primary:27017 --eval "rs.status()"
```

Step 4: Adjust Write Concern Settings

Use appropriate write concern:

```javascript
// For critical data - w:majority (recommended default)
db.collection.insertOne(
  { data: "critical" },
  { writeConcern: { w: "majority" } }
)

// For less critical data - w:1 (primary only)
db.collection.insertOne(
  { data: "log" },
  { writeConcern: { w: 1 } }
)

// For maximum durability - numeric w equal to the number of
// data-bearing members (e.g. w:3 on a three-member set).
// Only use when all members are healthy
db.collection.insertOne(
  { data: "audit" },
  { writeConcern: { w: 3 } }
)
```

Set default write concern:

```javascript
// Set the cluster-wide default
db.adminCommand({
  setDefaultRWConcern: 1,
  defaultWriteConcern: { w: "majority", wtimeout: 5000 }
})

// Verify
db.adminCommand({ getDefaultRWConcern: 1 })
```

Step 5: Adjust Timeout Settings

If timeout is too aggressive:

```javascript
// Increase the timeout for high-lag scenarios
db.collection.insertOne(
  { data: "test" },
  { writeConcern: { w: "majority", wtimeout: 30000 } } // 30 seconds
)

// No timeout (waits indefinitely - risky)
db.collection.insertOne(
  { data: "test" },
  { writeConcern: { w: "majority" } }
)
```

Step 6: Handle Maintenance Mode Members

Members being re-synced or upgraded:

```javascript
// Check member state
rs.status().members.forEach(m => {
  print(m.name + " - " + m.stateStr)
  // STARTUP2 = initial sync
  // RECOVERING = catching up
})

// Temporarily reduce write concern during maintenance
db.collection.insertOne(
  { data: "test" },
  { writeConcern: { w: 1 } } // primary only during maintenance
)

// Or wait for members to recover before using w:majority
```
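The "reduce during maintenance" advice can be encoded as a small policy function that picks w from the current healthy data-bearing member count. A sketch in plain JavaScript (the function and thresholds are illustrative, not MongoDB behavior):

```javascript
// Choose a write concern from the current topology: keep w:"majority"
// while a majority of data-bearing members can acknowledge, otherwise
// fall back to primary-only acknowledgment for the maintenance window.
function chooseWriteConcern(totalDataMembers, healthyDataMembers) {
  const majority = Math.floor(totalDataMembers / 2) + 1;
  if (healthyDataMembers >= majority) {
    return { w: "majority", wtimeout: 5000 };
  }
  return { w: 1 }; // degraded mode - remember to restore afterwards
}

console.log(chooseWriteConcern(3, 3)); // { w: "majority", wtimeout: 5000 }
console.log(chooseWriteConcern(3, 1)); // { w: 1 }
```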

Step 7: Handle Network Issues

Test connectivity:

```bash
# From the primary to each secondary
for host in secondary1 secondary2; do
  echo "=== $host ==="
  mongosh --host $host:27017 --eval "db.runCommand({ping:1})"
done

# Check firewall
sudo iptables -L -n | grep 27017
```

Fix network connectivity:

```bash
# Ensure bindIp includes replica set member IPs
grep bindIp /etc/mongod.conf

# Should be:
# net:
#   bindIp: localhost,192.168.1.10,192.168.1.11,192.168.1.12

# Firewall rules
sudo firewall-cmd --add-port=27017/tcp --permanent
sudo firewall-cmd --reload
```

Step 8: Handle j:true (Journal) Issues

Journal write concern requires primary journal:

```javascript
// w:1, j:true - primary writes to its on-disk journal before acknowledging
db.collection.insertOne(
  { data: "test" },
  { writeConcern: { w: 1, j: true } }
)

// On WiredTiger, journaling is always enabled; the serverStatus().dur
// section only appears on the legacy MMAPv1 engine
db.serverStatus().dur

// If journaling is disabled (rare, legacy configurations), j:true fails
```

Verification

Verify write concern working:

```javascript
// 1. All members healthy
rs.status().members.every(m => m.health === 1)

// 2. Replication lag acceptable
rs.printSecondaryReplicationInfo()
// Should show lag < 10 seconds

// 3. Test write with majority
let result = db.test.insertOne(
  { test: Date.now() },
  { writeConcern: { w: "majority", wtimeout: 10000 } }
)
printjson(result)
// Should have no writeConcernError

// 4. Check the write is visible on secondaries: connect to each
// secondary directly and query for the inserted _id, e.g.
//   mongosh --host secondary1:27017 --eval \
//     'db.getMongo().setReadPref("secondary"); db.test.findOne({ _id: ... })'
```
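Step 3 of the verification can be wrapped in a reusable check: a result counts as fully verified only when it is acknowledged and carries no writeConcernError. A sketch in plain JavaScript (hypothetical helper; note that the Node.js driver typically throws MongoWriteConcernError rather than attaching the field, so this models a raw command-style response):

```javascript
// Return true only for an acknowledged write result with no
// writeConcernError attached.
function writeFullyAcknowledged(result) {
  if (!result || result.acknowledged !== true) return false;
  return !result.writeConcernError;
}

console.log(writeFullyAcknowledged({ acknowledged: true, insertedId: 1 })); // true
console.log(writeFullyAcknowledged({
  acknowledged: true,
  writeConcernError: { code: 100, codeName: "CannotSatisfyWriteConcern" }
})); // false
```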

Check default write concern:

```javascript
db.adminCommand({ getDefaultRWConcern: 1 })
```

Common Pitfalls

  • w:majority during member failure - With 2 of 3 members healthy a majority (2) can still be acknowledged; losing a second member makes w:majority unsatisfiable
  • wtimeout too short for replication lag - High lag causes timeout before catchup
  • Numeric w with an arbiter - Arbiters hold no data and never count toward numeric write concerns
  • Forcing w:3 on 3-node set - Any member failure breaks writes
  • Not handling writeConcernError - Application should retry or log
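Several of these pitfalls can be caught up front by validating a requested numeric w against the set's data-bearing membership, with arbiters excluded. A sketch in plain JavaScript (hypothetical helper; `members` mimics rs.conf().members entries):

```javascript
// Reject numeric write concerns that the current topology can never
// satisfy: arbiters are excluded because they hold no data and cannot
// acknowledge writes.
function validateNumericW(w, members) {
  const dataBearing = members.filter(m => !m.arbiterOnly).length;
  if (typeof w === "number" && w > dataBearing) {
    throw new Error(
      "w:" + w + " can never be satisfied: only " +
      dataBearing + " data-bearing members");
  }
  return true;
}

// PSA topology: primary + secondary + arbiter = 2 data-bearing members.
const psa = [{ _id: 0 }, { _id: 1 }, { _id: 2, arbiterOnly: true }];
console.log(validateNumericW(2, psa)); // true
// validateNumericW(3, psa)           // throws: the arbiter does not count
```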

Best Practices

  • Use w:majority as default for production
  • Set reasonable wtimeout (5-30 seconds based on lag tolerance)
  • Monitor replica set health with alerts
  • Reduce write concern temporarily during maintenance windows
  • Handle writeConcernError gracefully in application code
  • Test write concern behavior during failover scenarios
  • Document write concern requirements per data type

Related

  • MongoDB Read Concern Error
  • MongoDB Replica Set Error
  • MongoDB Transaction Error
  • MongoDB Oplog Error