Introduction

After upgrading a message producer to a newer version, serialization schema incompatibility can cause messages to fail deserialization on the consumer side. This occurs when the producer introduces breaking changes -- such as removing fields, changing field types, or altering the serialization format -- that existing consumers cannot handle.
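As a toy illustration (the field names are hypothetical), a producer that renames a JSON field from snake_case to camelCase silently breaks any consumer still reading the old name:

```shell
# Producer v2 now emits "eventId"; a consumer still reading "event_id" gets null.
echo '{"eventId":"abc123","timestamp":1700000000}' | jq -r '.event_id'
# prints "null"
```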

Symptoms

  • Consumers throw SerializationException, InvalidProtocolBufferException, or schema registry compatibility errors
  • Messages land in dead letter queues with deserialization failure reasons
  • Producer logs show schema registration rejected with backward compatibility violation
  • Partial data loss where newer messages are unreadable by older consumer versions
  • Error message: Incompatible schema: BACKWARD compatibility check failed
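To confirm the failure mode from the consumer side, grep the application log for the exception class. The log path and excerpt below are illustrative; point grep at your real log file:

```shell
# Hypothetical consumer log excerpt (replace with your actual log file).
cat > /tmp/consumer.log <<'EOF'
INFO  consumed offset=41 from my-topic-0
ERROR org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition my-topic-0 at offset 42
EOF
# Count deserialization failures
grep -c 'SerializationException' /tmp/consumer.log
# prints "1"
```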

Common Causes

  • Producer upgraded with a schema change that removes a required field consumers still expect
  • Schema registry enforces backward compatibility but the new schema version breaks the contract
  • Avro or Protobuf schema evolved without proper field numbering (Protobuf) or default values (Avro)
  • JSON serialization changed field naming convention (snake_case to camelCase) without consumer update
  • Schema version ID embedded in message headers does not match any registered version in the registry
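The missing-default cause can be caught with a quick local lint: flag any union-typed (optional) field in a candidate Avro schema that does not declare a default. The schema below is a hypothetical candidate, not a registry response:

```shell
# Hypothetical candidate schema with one optional field missing its default.
cat > /tmp/candidate-schema.json <<'EOF'
{
  "type": "record",
  "name": "Event",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "newField", "type": ["null", "string"]}
  ]
}
EOF
# Union-typed fields without a "default" key will break backward compatibility.
jq -r '.fields[] | select((.type | type == "array") and (has("default") | not)) | .name' /tmp/candidate-schema.json
# prints "newField"
```

Running a check like this in CI before registering a schema catches the problem before any producer ships it.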

Step-by-Step Fix

  1. Identify the incompatible schema version: Check the schema registry for the latest registered schema and its compatibility status.

     ```bash
     curl -s http://schema-registry:8081/subjects/my-topic-value/versions/latest | jq '.schema | fromjson'
     ```

  2. Roll back the producer schema to the last compatible version: Revert the producer to the previous schema version while consumers are updated.

     ```bash
     curl -X POST http://schema-registry:8081/subjects/my-topic-value/versions \
       -H "Content-Type: application/vnd.schemaregistry.v1+json" \
       -d '{"schema": "{\"type\":\"record\",\"name\":\"Event\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"timestamp\",\"type\":\"long\"}]}"}'
     ```

  3. Add default values for new fields to maintain backward compatibility: Ensure the updated schema includes defaults so older consumers can deserialize.

     ```json
     {
       "type": "record",
       "name": "Event",
       "fields": [
         {"name": "id", "type": "string"},
         {"name": "timestamp", "type": "long"},
         {"name": "newField", "type": ["null", "string"], "default": null}
       ]
     }
     ```

  4. Deploy consumers with the new schema before upgrading producers: Follow a consumer-first deployment strategy.

     ```bash
     kubectl rollout restart deployment/consumer-service
     kubectl rollout status deployment/consumer-service --timeout=300s
     ```

  5. Verify schema compatibility mode is set correctly: Configure the schema registry subject to enforce backward compatibility.

     ```bash
     curl -X PUT http://schema-registry:8081/config/my-topic-value \
       -H "Content-Type: application/vnd.schemaregistry.v1+json" \
       -d '{"compatibility": "BACKWARD"}'
     ```

Prevention

  • Always deploy schema changes in a backward-compatible manner: consumers first, then producers
  • Enable schema registry compatibility checks at the subject level before any producer deployment
  • Use Avro or Protobuf with explicit field IDs and default values for all optional fields
  • Implement integration tests that verify serialization/deserialization across schema versions
  • Maintain a schema version matrix tracking which producer and consumer versions support which schema versions
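A lightweight guard that backs the integration-test and version-matrix bullets: diff field names between the deployed schema and the candidate, and fail the build if anything was removed. File names and schemas here are illustrative; in practice the old schema would be fetched from the registry:

```shell
# Deployed schema vs. candidate schema (both hypothetical).
cat > /tmp/old-schema.json <<'EOF'
{"type":"record","name":"Event","fields":[{"name":"id","type":"string"},{"name":"timestamp","type":"long"}]}
EOF
cat > /tmp/new-schema.json <<'EOF'
{"type":"record","name":"Event","fields":[{"name":"id","type":"string"}]}
EOF
jq -r '.fields[].name' /tmp/old-schema.json | sort > /tmp/old-fields
jq -r '.fields[].name' /tmp/new-schema.json | sort > /tmp/new-fields
# Fields present in old but absent in new were removed (a breaking change).
comm -23 /tmp/old-fields /tmp/new-fields
# prints "timestamp"
```

In a CI step, a non-empty result from the final `comm` would fail the pipeline before the breaking producer is deployed.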