Strategies for atomic writes to a database and an event queue

I had to design a mechanism for publishing a message atomically to both a transactional database (such as MySQL) and an event store (such as Kafka), the classic "dual write" problem. Thought I'd write about it!

Outline of strategies
  • Option-1: Write to the event store first; after a successful publish, use an event store subscriber to apply the same event to the RDBMS. If the RDBMS write fails, send a "compensating transaction" to the event store. Notes: implementing the compensating-transaction pattern can be complicated (a sketch of this compensate-on-failure flow follows the list).
  • Option-2: Write to the RDBMS first, then use change data capture (CDC) to stream committed changes from the database into the event store. Most RDBMSs have associated CDC tooling. Notes: CDC can add load to RDBMS resources; external clustered CDC solutions such as Red Hat's Debezium do exist, but may need non-trivial integration (a Debezium sketch appears below).
  • Option-3: Use an event store that supports distributed transactions with the RDBMS. Notes: Oracle's Advanced Queuing (AQ) is one such event store; because it runs inside the Oracle database itself, queue operations and table writes can share a transaction.
  • Option-4: Write the event to the RDBMS, and within the same transaction boundary also insert it into an "event table" in the RDBMS. Then drain the "event table" via an async process that publishes each event to the event store and, upon successful publish, deletes it from the table. Notes: this is the "transactional outbox" pattern, effectively a hand-rolled equivalent of CDC (a sketch appears below).
  • Option-5: Write to the event store first; after a successful publish, write to the RDBMS directly. The two writes are not part of the same transaction. If the RDBMS write fails, send a "compensating transaction" to the event store. Notes: same caveat as Option-1; compensating transactions can be complicated.
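
To make the compensate-on-failure flow of Options 1 and 5 concrete, here is a minimal sketch. It is illustrative only: sqlite3 stands in for MySQL, and `publish_event` is a hypothetical stub where a real Kafka producer would go.

```python
import json
import sqlite3

def publish_event(topic: str, event: dict) -> None:
    # Hypothetical stub; a real implementation would use a Kafka producer.
    print(f"published to {topic}: {json.dumps(event)}")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, amount INTEGER)")

def create_order(order: dict) -> None:
    # Step 1: publish to the event store first (Options 1 and 5).
    publish_event("orders", {"type": "OrderCreated", **order})
    try:
        # Step 2: apply the same change to the RDBMS. In Option-1 this runs
        # inside an event store subscriber; in Option-5 the app does it directly.
        with db:  # commits on success, rolls back on exception
            db.execute("INSERT INTO orders VALUES (?, ?)",
                       (order["id"], order["amount"]))
    except sqlite3.Error:
        # Step 3: the RDBMS write failed, so emit a compensating event that
        # semantically undoes the one already published.
        publish_event("orders", {"type": "OrderCreateCompensated",
                                 "id": order["id"]})

create_order({"id": "o-1", "amount": 42})
create_order({"id": "o-1", "amount": 99})  # duplicate key -> compensation fires
```

Note the window between steps 1 and 2: downstream consumers can observe the event before (or without) the database catching up, which is exactly why the compensating-transaction pattern gets complicated.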

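For Option-2 the work is mostly configuration, not code. Registering a Debezium MySQL connector against the Kafka Connect REST API might look like the following sketch; the host names, credentials, and table list are placeholders, and the exact config keys vary across Debezium versions.

```python
import json
import urllib.request

# Placeholder connector definition; adjust keys for your Debezium version.
connector = {
    "name": "orders-cdc",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql.internal",   # placeholder host
        "database.port": "3306",
        "database.user": "cdc_user",             # placeholder credentials
        "database.password": "cdc_password",
        "database.server.id": "184054",
        "topic.prefix": "shop",                  # events land on shop.<db>.<table>
        "table.include.list": "shop.orders",
    },
}

req = urllib.request.Request(
    "http://kafka-connect.internal:8083/connectors",  # Kafka Connect REST API
    data=json.dumps(connector).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(urllib.request.urlopen(req).read().decode())
```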
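
Finally, a minimal sketch of Option-4, the transactional outbox: the business row and the event row commit in one local transaction, and a separate drain loop publishes and deletes. Again sqlite3 stands in for MySQL and `publish_event` for a real producer.

```python
import json
import sqlite3

def publish_event(topic: str, event: dict) -> None:
    # Hypothetical stub; a real implementation would use a Kafka producer.
    print(f"published to {topic}: {json.dumps(event)}")

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id TEXT PRIMARY KEY, amount INTEGER);
    CREATE TABLE event_outbox (
        seq INTEGER PRIMARY KEY AUTOINCREMENT,
        topic TEXT NOT NULL,
        payload TEXT NOT NULL
    );
""")

def create_order(order: dict) -> None:
    # Business write and event write share one transaction boundary,
    # so either both commit or neither does.
    with db:
        db.execute("INSERT INTO orders VALUES (?, ?)",
                   (order["id"], order["amount"]))
        db.execute("INSERT INTO event_outbox (topic, payload) VALUES (?, ?)",
                   ("orders", json.dumps({"type": "OrderCreated", **order})))

def drain_outbox() -> None:
    # Async drainer: publish each pending event, then delete it.
    rows = db.execute(
        "SELECT seq, topic, payload FROM event_outbox ORDER BY seq").fetchall()
    for seq, topic, payload in rows:
        publish_event(topic, json.loads(payload))
        with db:  # delete only after a successful publish
            db.execute("DELETE FROM event_outbox WHERE seq = ?", (seq,))

create_order({"id": "o-1", "amount": 42})
drain_outbox()
```

If the drainer crashes between the publish and the delete, the event is re-published on the next pass, so this gives at-least-once delivery and consumers should be idempotent.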