Introduction
After upgrading a message producer to a newer version, serialization schema incompatibility can cause messages to fail deserialization on the consumer side. This occurs when the producer introduces breaking changes -- such as removing fields, changing field types, or altering the serialization format -- that existing consumers cannot handle.
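As a concrete illustration, here is a minimal sketch of the failure mode: a consumer pinned to an old schema rejects payloads from a producer that dropped a field. All names (`consume_event`, `Event` fields `id`/`timestamp`) are hypothetical, and plain JSON stands in for the real wire format.

```python
# Hypothetical sketch: an old consumer breaks when the producer removes a
# field. JSON stands in for the actual serialization format.
import json

def consume_event(payload: bytes) -> dict:
    """Old consumer: requires every field its schema version declares."""
    event = json.loads(payload)
    for required in ("id", "timestamp"):  # fields the old schema expects
        if required not in event:
            raise ValueError(f"deserialization failed: missing field {required!r}")
    return event

# Producer v1 includes both fields; v2 removed "timestamp" (a breaking change).
ok = consume_event(b'{"id": "e1", "timestamp": 1700000000}')
try:
    consume_event(b'{"id": "e2"}')  # v2 payload
except ValueError as exc:
    print(exc)  # deserialization failed: missing field 'timestamp'
```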
Symptoms
- Consumers throw `SerializationException`, `InvalidProtocolBufferException`, or schema registry compatibility errors
- Messages land in dead letter queues with deserialization failure reasons
- Producer logs show schema registration rejected with backward compatibility violation
- Partial data loss where newer messages are unreadable by older consumer versions
- Error message: `Incompatible schema: BACKWARD compatibility check failed`
Common Causes
- Producer upgraded with a schema change that removes a required field consumers still expect
- Schema registry enforces backward compatibility but the new schema version breaks the contract
- Avro or Protobuf schema evolved without proper field numbering (Protobuf) or default values (Avro)
- JSON serialization changed field naming convention (snake_case to camelCase) without consumer update
- Schema version ID embedded in message headers does not match any registered version in the registry
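The naming-convention cause above can be sketched in a few lines: the consumer still reads the snake_case field name after the producer switched to camelCase. The field names (`user_id`/`userId`) and the helper `read_user_id` are illustrative, not from a real schema.

```python
# Hypothetical sketch of the snake_case -> camelCase breakage: the consumer
# was never updated, so v2 payloads look like missing data.
import json

def read_user_id(payload: bytes) -> str:
    """Consumer still reads the old snake_case field name."""
    record = json.loads(payload)
    if "user_id" not in record:
        raise KeyError("user_id")  # surfaces as partial data loss downstream
    return record["user_id"]

old = read_user_id(b'{"user_id": "u-1"}')  # v1 producer: works
try:
    read_user_id(b'{"userId": "u-2"}')     # v2 producer: breaks
except KeyError as exc:
    failed_field = str(exc)
```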
Step-by-Step Fix
1. Identify the incompatible schema version: Check the schema registry for the latest registered schema and its compatibility status.

   ```bash
   curl -s http://schema-registry:8081/subjects/my-topic-value/versions/latest | jq '.schema | fromjson'
   ```

2. Roll back the producer schema to the last compatible version: Revert the producer to the previous schema version while consumers are updated.

   ```bash
   curl -X POST http://schema-registry:8081/subjects/my-topic-value/versions \
     -H "Content-Type: application/vnd.schemaregistry.v1+json" \
     -d '{"schema": "{\"type\":\"record\",\"name\":\"Event\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"timestamp\",\"type\":\"long\"}]}"}'
   ```

3. Add default values for new fields to maintain backward compatibility: Ensure the updated schema includes defaults so older consumers can deserialize.

   ```json
   {
     "type": "record",
     "name": "Event",
     "fields": [
       {"name": "id", "type": "string"},
       {"name": "timestamp", "type": "long"},
       {"name": "newField", "type": ["null", "string"], "default": null}
     ]
   }
   ```

4. Deploy consumers with the new schema before upgrading producers: Follow a consumer-first deployment strategy.

   ```bash
   kubectl rollout restart deployment/consumer-service
   kubectl rollout status deployment/consumer-service --timeout=300s
   ```

5. Verify the schema compatibility mode is set correctly: Configure the schema registry subject to enforce backward compatibility.

   ```bash
   curl -X PUT http://schema-registry:8081/config/my-topic-value \
     -H "Content-Type: application/vnd.schemaregistry.v1+json" \
     -d '{"compatibility": "BACKWARD"}'
   ```
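The fix hinges on the rule the registry enforces: a field added to the new schema must carry a default, or old messages become unreadable. A minimal sketch of that check, using the `Event` schemas from the steps above; this approximates the BACKWARD rule for added fields and is not the registry's actual implementation.

```python
# Minimal sketch of a BACKWARD compatibility check: the new (reader) schema
# can read data written with the old schema only if every added field has a
# default. Approximation for illustration, not the registry's real code.
def is_backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    old_fields = {f["name"] for f in old_schema["fields"]}
    for field in new_schema["fields"]:
        if field["name"] not in old_fields and "default" not in field:
            return False  # new required field: old messages can't supply it
    return True

old = {"type": "record", "name": "Event", "fields": [
    {"name": "id", "type": "string"},
    {"name": "timestamp", "type": "long"},
]}
# Adding newField WITH a default keeps old data readable...
good = dict(old, fields=old["fields"] + [
    {"name": "newField", "type": ["null", "string"], "default": None},
])
# ...adding it WITHOUT a default breaks the contract.
bad = dict(old, fields=old["fields"] + [{"name": "newField", "type": "string"}])

print(is_backward_compatible(old, good))  # True
print(is_backward_compatible(old, bad))   # False
```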
Prevention
- Always deploy schema changes in a backward-compatible manner: consumers first, then producers
- Enable schema registry compatibility checks at the subject level before any producer deployment
- Use Avro or Protobuf with explicit field IDs and default values for all optional fields
- Implement integration tests that verify serialization/deserialization across schema versions
- Maintain a schema version matrix tracking which producer and consumer versions support which schema versions
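The cross-version integration test suggested above can be sketched as: serialize with the old schema, deserialize with the new one, and assert that defaults fill the gap. Plain JSON stands in for Avro to keep the sketch dependency-free; the field list and `deserialize` helper are hypothetical.

```python
# Sketch of a cross-version round-trip test: old-schema payloads must still
# deserialize under the new schema, with declared defaults applied.
import json

NEW_SCHEMA_FIELDS = [
    {"name": "id", "type": "string"},
    {"name": "timestamp", "type": "long"},
    {"name": "newField", "type": ["null", "string"], "default": None},
]

def deserialize(payload: bytes) -> dict:
    """Reader side: apply schema defaults for fields absent from old payloads."""
    record = json.loads(payload)
    for field in NEW_SCHEMA_FIELDS:
        if field["name"] not in record:
            if "default" not in field:
                raise ValueError(f"missing required field {field['name']!r}")
            record[field["name"]] = field["default"]
    return record

# Payload written by the OLD producer, before newField existed.
old_payload = json.dumps({"id": "e1", "timestamp": 1700000000}).encode()
event = deserialize(old_payload)
assert event["newField"] is None  # default applied; old data still readable
```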