This blog focuses on transactional and message delivery behavior, particularly as it relates to microservice architectures. There are of course numerous areas to compare MongoDB, PostgreSQL, and Kafka with the converged Oracle DB and Oracle Transactional Event Queues/AQ that are beyond the scope of this blog.
The Oracle database itself was first released in 1979 (PostgreSQL was released in 1997 and MongoDB in 2009). Oracle Advanced Queuing (AQ) is a messaging system included in every Oracle database edition and was first released in 2002 (Kafka was open-sourced by LinkedIn in 2011 and Confluent was founded in 2014). AQ sharded queues introduced partitioning in release 12c and are now called Transactional Event Queues (TEQ). TEQ supports partitioning, a Kafka Java client, and a number of other features for Kafka interoperability, as well as upcoming capabilities such as automated Saga support that will be covered in future blogs.
The Oracle database and AQ/TEQ have years of hardening behind them and at the same time have continuously evolved to support not only relational data and SQL (as PostgreSQL does) but also JSON document-based access (as MongoDB does) and other data types, such as XML, spatial, graph, time-series, and blockchain.
In doing so, Oracle provides an ideal Converged Database solution for microservices: it supports polyglot data models and polyglot programming languages, as well as native communication via both REST endpoints/APIs and, as we will discuss in more detail now, robust transactional messaging. This brings a number of advantages in administration, security, availability, and cost, while still providing the bounded context model prescribed by microservices, where data can be isolated at various levels (schema, Pluggable DB, Container DB, etc.). It is the best of both worlds.
While REST, gRPC, and other API-style communication is certainly widespread and has its place, more and more microservice systems use an event-driven architecture for communication, for a number of reasons including decoupling/adaptability, scalability, and fault tolerance (removing the need for retries, tolerating offline consumers, etc.). Relatedly, more data-driven systems are employing some form of event sourcing pattern. Changes to data (e.g., a SQL command to "insert order") are conveyed as events that describe the data change (e.g., an "orderPlaced" event) and are received by interested services. The data is thus sourced from the events, and event sourcing in general moves the source of truth for data to the event broker. This fits nicely with the decoupling paradigm of microservices.
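To make the event sourcing idea concrete, here is a minimal sketch (all names and event shapes are illustrative, not from any Oracle or Kafka API) in which a read model is rebuilt purely by replaying the event log, i.e., the events are the source of truth:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str       # e.g. "orderPlaced"
    payload: dict

@dataclass
class InventoryView:
    """A read model "sourced" entirely from the event stream."""
    stock: dict = field(default_factory=dict)

    def apply(self, event: Event) -> None:
        if event.kind == "stockAdded":
            item = event.payload["item"]
            self.stock[item] = self.stock.get(item, 0) + event.payload["qty"]
        elif event.kind == "orderPlaced":
            item = event.payload["item"]
            self.stock[item] = self.stock.get(item, 0) - event.payload["qty"]

# Replaying the log reconstructs current state; no table is read directly.
log = [
    Event("stockAdded", {"item": "widget", "qty": 10}),
    Event("orderPlaced", {"item": "widget", "qty": 3}),
]
view = InventoryView()
for e in log:
    view.apply(e)
print(view.stock)  # {'widget': 7}
```

Any interested service can build its own view the same way, which is what makes the pattern a natural fit for decoupled microservices.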
(Note that while the terms "message" and "event" can technically have somewhat different semantics, we are using them interchangeably in this context.)
It is very important to notice that event sourcing actually involves two operations: the data change being made and the communication/event of that data change. There is therefore a transactional consideration, and any inconsistency or failure causing a lack of atomicity between these two operations must be accounted for. This is an area where TEQ has an extremely significant and unique advantage: the messaging/eventing system is actually part of the database system itself and can therefore conduct both of these operations in the same local transaction, providing this atomicity guarantee. This is not something that can be done between Kafka and any database (such as PostgreSQL or MongoDB), which must instead use outbox patterns that directly read database logs and stream them as events via a connector, or rely on change data capture, polling, or other techniques. A further advantage of TEQ is that it allows arbitrary context to be included with messages, rather than the static context options of messages sent via Kafka.
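The failure mode at stake here is often called the dual-write problem. The following toy model (purely illustrative; the names and the in-memory "commit" are assumptions, not any real database API) contrasts two separate writes against a single atomic transaction when the process crashes between the data change and the event publish:

```python
# Toy model of the dual-write problem: with two separate systems
# (database + broker), a crash between the writes leaves them
# inconsistent; when the queue lives inside the database, both
# writes share one transaction and commit (or fail) together.

class Crash(Exception):
    pass

def dual_write(db, queue, crash_between=False):
    db.append({"order": 1})          # write 1: data change committed
    if crash_between:
        raise Crash()                # process dies before write 2
    queue.append("orderPlaced")      # write 2: event publish

def single_transaction(db, queue, crash_between=False):
    # Stage both operations, then commit them atomically.
    staged_db = db + [{"order": 1}]
    staged_q = queue + ["orderPlaced"]
    if crash_between:
        raise Crash()                # nothing has been committed yet
    db[:], queue[:] = staged_db, staged_q   # the "commit"

db, queue = [], []
try:
    dual_write(db, queue, crash_between=True)
except Crash:
    pass
print(len(db), len(queue))  # 1 0 -> order stored but no event sent

db, queue = [], []
try:
    single_transaction(db, queue, crash_between=True)
except Crash:
    pass
print(len(db), len(queue))  # 0 0 -> consistent: all or nothing
```

The outbox/CDC techniques mentioned above exist precisely to approximate the second behavior across two systems; with TEQ the local transaction gives it directly.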
This is in addition to the fact that TEQ provides exactly-once message delivery without requiring the developer to write idempotent microservices or explicitly implement a duplicate-consumer pattern. This is a significant advantage and, again, a perfect match for microservice architectures, where service implementations should be limited to domain-specific logic (thus opening up such development to domain experts) and cross-cutting concerns (such as duplicate message handling) should be moved out of the core service (which is one of the main advantages of service meshes and indeed the TEQ event mesh). It is also particularly useful when migrating monoliths to microservices, since monolithic apps may well not have been programmed to be idempotent (i.e., they were designed presuming calls were only made/sent once).
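For contrast, here is a sketch of the duplicate-consumer pattern a developer must hand-roll under at-least-once delivery; the message shape and names are illustrative assumptions, and in production the set of processed IDs would have to live in durable storage:

```python
# Under at-least-once delivery, a consumer may see the same message
# twice and must deduplicate it itself. This is the boilerplate that
# exactly-once delivery removes from the service code.

processed_ids = set()        # would be durable storage in production
inventory = {"widget": 10}

def handle(message):
    """Decrement stock at most once per message, even on redelivery."""
    if message["id"] in processed_ids:
        return               # duplicate: already applied, ignore
    inventory[message["item"]] -= message["qty"]
    processed_ids.add(message["id"])

msg = {"id": "m-1", "item": "widget", "qty": 2}
handle(msg)
handle(msg)   # broker redelivers the same message
print(inventory["widget"])  # 8, not 6
```

Note that this deduplication logic is pure cross-cutting plumbing: it has nothing to do with the order or inventory domain, which is exactly why the blog argues it should not live in the service at all.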
Looking more closely at the possible effects of lacking exactly-once message delivery and/or atomicity between the database and messaging operations, we can take an order-inventory flow as an example, where the following occurs...
A matrix of failures (occurring at three selected points in the flow) and recovery scenarios, comparing MongoDB, PostgreSQL, and Kafka against Oracle Converged DB with Transactional Event Queues/AQ, is shown in the following table.
Every concept and technology discussed in this blog is shown in the Building Data-Driven Microservices with Oracle Converged Database Workshop which can easily be set up and run in ~30 minutes! The source for the workshop can be found directly at https://github.com/oracle/microservices-datadriven (specific to this blog are setup, order-mongodb-kafka, inventory-postgres-kafka-inventory, order-oracleteq, and inventory-oracleteq).
Finally, the Developer Resource Center for Microservices landing page is a great resource for finding various microservice architecture materials.
Please feel free to provide any feedback here, on the workshop, on the GitHub repos, or directly. We are happy to hear from you.
Paul is an Architect and Developer Evangelist for Microservices and the Converged Database.
His focus includes data and transaction processing, service mesh and event mesh, observability, and polyglot data models, languages, and frameworks.
For the 18 years prior to this, he was the Transaction Processing Dev Lead for the mid-tier and microservices (including WebLogic and Helidon).
Creator of the workshop "Building Microservices with Oracle Converged Database" @ http://bit.ly/simplifymicroservices
Holds 20+ patents and has delivered numerous presentations and publications over the past 20+ years.