Introduction
The Kafka Schema Registry enforces compatibility rules to prevent breaking changes in message schemas. When a producer attempts to register a new schema version, the registry validates it against the compatibility level configured for the subject. If the new schema violates backward compatibility -- such as removing a field that existing consumers depend on -- the registration is rejected, blocking the producer deployment.
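To make the failure concrete, here is a minimal sketch (the record and field names are invented for illustration) of an evolution a registry in `BACKWARD` mode would reject: v2 adds a required field with no default, so a reader on v2 could not decode records written with v1.

```python
import json

# Hypothetical v1 schema already registered for the subject.
v1 = json.loads("""
{"type": "record", "name": "Order", "fields": [
  {"name": "order_id", "type": "string"}
]}
""")

# v2 adds a required field with no default. Under BACKWARD compatibility
# a v2 reader has no value to fill in for "currency" when decoding v1
# data, so the registry rejects the registration.
v2 = json.loads("""
{"type": "record", "name": "Order", "fields": [
  {"name": "order_id", "type": "string"},
  {"name": "currency", "type": "string"}
]}
""")

def fields_added_without_default(old, new):
    """Names of fields present only in `new` that lack a default value."""
    old_names = {f["name"] for f in old["fields"]}
    return [f["name"] for f in new["fields"]
            if f["name"] not in old_names and "default" not in f]

print(fields_added_without_default(v1, v2))  # -> ['currency']
```

Giving `currency` a `"default"` entry (or making it an optional union) is what turns this rejected change into an accepted one.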
Symptoms
- Schema registry returns HTTP 409 Conflict during schema registration
- Producer deployment fails with `IncompatibleSchemaException`
- Error message: `Schema being registered is incompatible with an earlier schema for subject`
- CI/CD pipeline fails at the schema validation step
- Rollback required to deploy the previous schema version
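For reference, the HTTP 409 in the first symptom comes from the registry's registration endpoint (`POST /subjects/{subject}/versions`). A sketch of the request a producer or CI job effectively makes; the registry URL and subject name are placeholders:

```python
import json
import urllib.request

def build_register_request(base_url, subject, schema_str):
    """Build (but do not send) the schema-registration request."""
    url = f"{base_url}/subjects/{subject}/versions"
    body = json.dumps({"schema": schema_str}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    )

req = build_register_request(
    "http://localhost:8081",  # placeholder registry URL
    "orders-value",           # placeholder subject
    '{"type": "string"}',
)
# Sending this with urllib.request.urlopen(req) returns the schema id on
# success; an incompatible schema instead yields HTTP 409 with a JSON body
# whose message matches the error quoted above.
```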
Common Causes
- New schema removes a field that existing consumers still reference
- Field type change from `int` to `string`, breaking binary compatibility in Avro
- Required field added without a default value, breaking backward compatibility
- Schema registry compatibility mode set to `BACKWARD` but the change is only `FORWARD` compatible
- Enum value removed or renamed in an incompatible way
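The first two causes can be caught mechanically by diffing field types between versions. A minimal sketch with invented names; note it is deliberately naive, treating any textual type difference as a change, whereas real Avro schema resolution permits some promotions (e.g. `int` to `long`):

```python
import json

def changed_field_types(old, new):
    """List (field, old_type, new_type) for fields whose declared type changed."""
    old_types = {f["name"]: f["type"] for f in old["fields"]}
    return [(f["name"], old_types[f["name"]], f["type"])
            for f in new["fields"]
            if f["name"] in old_types and old_types[f["name"]] != f["type"]]

v1 = json.loads('{"type": "record", "name": "User", "fields": '
                '[{"name": "user_id", "type": "int"}]}')
v2 = json.loads('{"type": "record", "name": "User", "fields": '
                '[{"name": "user_id", "type": "string"}]}')

print(changed_field_types(v1, v2))  # -> [('user_id', 'int', 'string')]
```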
Step-by-Step Fix
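The safest first step is to ask the registry itself whether the candidate schema is compatible, using the compatibility-test endpoint (`POST /compatibility/subjects/{subject}/versions/latest`), and only register when it answers yes. A sketch using only the standard library; the URL and subject below are placeholders:

```python
import json
import urllib.request

def build_compat_request(base_url, subject, schema_str):
    """Build the compatibility-test request against the latest version."""
    url = f"{base_url}/compatibility/subjects/{subject}/versions/latest"
    body = json.dumps({"schema": schema_str}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    )

def is_compatible(base_url, subject, schema_str):
    """Ask the registry whether the candidate schema is compatible."""
    req = build_compat_request(base_url, subject, schema_str)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("is_compatible", False)

# Typical remediation flow: test first, register only on success.
# if is_compatible("http://localhost:8081", "orders-value", candidate):
#     register(candidate)   # hypothetical helper wrapping POST /subjects/.../versions
# else:
#     fix the schema (e.g. add defaults for new fields) and retest
```

Running this check in CI before deployment converts the mid-deployment 409 into an early, actionable pipeline failure.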
Prevention
- Always deploy consumers before producers when evolving schemas (consumer-first strategy)
- Use Avro union types with defaults for all new fields to maintain backward compatibility
- Never remove fields from an active schema -- mark them as deprecated instead
- Integrate schema compatibility checks into CI/CD pipelines before deployment
- Use schema registry's test compatibility endpoint to validate changes before registration
- Maintain a schema change log documenting all evolution decisions and their compatibility impact
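The union-with-default rule above can be illustrated with a small example (field names are invented): adding `email` as `["null", "string"]` with default `null` lets a new reader decode old records by filling in the default, so the change passes a `BACKWARD` check.

```python
import json

v1 = json.loads('{"type": "record", "name": "Customer", "fields": '
                '[{"name": "id", "type": "string"}]}')

# Backward-compatible: the new field is an optional union with a default,
# so a v2 reader decoding v1 data fills "email" with null.
v2 = json.loads("""
{"type": "record", "name": "Customer", "fields": [
  {"name": "id", "type": "string"},
  {"name": "email", "type": ["null", "string"], "default": null}
]}
""")

def safe_additions_only(old, new):
    """True if every field added in `new` carries a default value."""
    old_names = {f["name"] for f in old["fields"]}
    return all("default" in f for f in new["fields"]
               if f["name"] not in old_names)

print(safe_additions_only(v1, v2))  # -> True
```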