## Introduction

DynamoDB throttling (`ProvisionedThroughputExceededException`) occurs when your application exceeds the table's read or write capacity. This causes request failures, increased latency, and potential data loss or inconsistency if retries are not properly implemented.

## Symptoms

- Application receives: `An error occurred (ProvisionedThroughputExceededException) when calling the PutItem operation`
- The CloudWatch metric `ThrottledRequests` is greater than 0
- Increased latency for DynamoDB operations
- Application logs show retry-storm patterns

## Common Causes

- Hot partition: a single partition key receiving a disproportionate share of traffic
- A sudden traffic spike exceeding provisioned capacity
- `Scan` operations consuming large amounts of RCU
- Auto scaling reacting too slowly to traffic spikes
- Burst writes exhausting the table's burst capacity (DynamoDB retains up to roughly 300 seconds of unused throughput for bursts)
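A common mitigation for the hot-partition cause is write sharding: spreading writes for one logical key across several physical partition keys by appending a shard suffix. A minimal sketch (the helper name and shard count are illustrative assumptions, not part of any DynamoDB API):

```python
import random

# Assumption: 10 shards is enough to spread this key's traffic;
# tune NUM_SHARDS to your actual write rate.
NUM_SHARDS = 10

def sharded_partition_key(logical_key: str) -> str:
    """Append a random shard suffix, e.g. 'user123' -> 'user123#7',
    so writes for one hot logical key land on different partitions."""
    return f"{logical_key}#{random.randrange(NUM_SHARDS)}"
```

The trade-off is on the read side: a query for `user123` must now fan out across all shard suffixes (`user123#0` through `user123#9`) and merge the results.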

## Step-by-Step Fix

1. **Check throttling metrics**:

   ```bash
   aws cloudwatch get-metric-statistics \
     --namespace AWS/DynamoDB --metric-name ThrottledRequests \
     --dimensions Name=TableName,Value=my-table Name=Operation,Value=PutItem \
     --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ) \
     --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ) --period 300 --statistics Sum
   ```

2. **Increase provisioned capacity immediately**:

   ```bash
   aws dynamodb update-table --table-name my-table \
     --provisioned-throughput ReadCapacityUnits=200,WriteCapacityUnits=200
   ```
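To pick a sensible number rather than guessing, the new write capacity can be sized from the known WCU rule: one WCU covers one standard write of up to 1 KB per second, and larger items consume one unit per started kilobyte. A small sketch of that arithmetic (the function name is illustrative):

```python
import math

def required_wcu(writes_per_second: float, item_size_bytes: int) -> int:
    """One WCU = one standard write of up to 1 KB per second;
    an item larger than 1 KB consumes one unit per started KB."""
    units_per_item = math.ceil(item_size_bytes / 1024)
    return math.ceil(writes_per_second * units_per_item)
```

For example, 150 writes/s of 2.5 KB items needs `150 * 3 = 450` WCU, so the `WriteCapacityUnits=200` above would still throttle that workload.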
3. **Implement exponential backoff in application code**:

   ```python
   import time

   import boto3
   from botocore.exceptions import ClientError

   dynamodb = boto3.resource('dynamodb')
   table = dynamodb.Table('my-table')

   def put_with_retry(item, max_retries=5):
       for attempt in range(max_retries):
           try:
               return table.put_item(Item=item)
           except ClientError as e:
               if e.response['Error']['Code'] == 'ProvisionedThroughputExceededException':
                   delay = (2 ** attempt) * 0.1  # 0.1s, 0.2s, 0.4s, ...
                   time.sleep(delay)
               else:
                   raise
       raise Exception("Max retries exceeded")
   ```

4. **Enable auto scaling**:

   ```bash
   aws application-autoscaling register-scalable-target \
     --service-namespace dynamodb \
     --scalable-dimension dynamodb:table:WriteCapacityUnits \
     --resource-id table/my-table --min-capacity 5 --max-capacity 500
   ```
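Registering the scalable target by itself does not scale anything; a target-tracking scaling policy must also be attached (via `aws application-autoscaling put-scaling-policy` or the equivalent SDK call). A sketch of the arguments for boto3's `put_scaling_policy`, assuming a 70% utilization target and a hypothetical policy name:

```python
def write_scaling_policy_kwargs(table_name: str, target_pct: float = 70.0) -> dict:
    """Build put_scaling_policy arguments that keep consumed write capacity
    near target_pct of provisioned capacity for the given table."""
    return {
        "PolicyName": f"{table_name}-write-scaling",  # assumed name
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table_name}",
        "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_pct,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    }

# Usage (requires boto3 and AWS credentials):
# client = boto3.client("application-autoscaling")
# client.put_scaling_policy(**write_scaling_policy_kwargs("my-table"))
```

Repeat for `dynamodb:table:ReadCapacityUnits` if reads are also throttling.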

## Prevention

- Use on-demand capacity mode for unpredictable workloads
- Design partition keys to distribute traffic evenly
- Implement exponential backoff with jitter in all DynamoDB clients
- Use DAX for read-heavy workloads
- Batch writes using `BatchWriteItem` (up to 25 items per call)
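The 25-item limit on `BatchWriteItem` means a large write set has to be split client-side before sending. A minimal chunking sketch (the helper name is illustrative):

```python
def chunk_batches(items: list, batch_size: int = 25) -> list:
    """Split items into chunks of at most batch_size, since BatchWriteItem
    accepts no more than 25 put/delete requests per call."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
```

Each chunk would then be sent as one `batch_write_item` call; note that the API can return `UnprocessedItems`, which should themselves be retried with backoff and jitter.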