Introduction

Oracle Transactional Event Queues (TxEventQ) brings event streaming natively into the Oracle AI Database. Built on Advanced Queuing (AQ) but enhanced for scale, partitioning, and pub-sub semantics, TxEventQ eliminates the need for separate broker infrastructure in many architectures.

To make integration seamless for Kafka-based systems, Oracle offers two key options:

  1. Kafka Java Client for TxEventQ (OKafka) – a brokerless replacement path.

  2. Kafka Connect Source and Sink Connectors for TxEventQ – a bridge between Oracle AI Database and existing Kafka clusters.

Each approach serves a different need—either simplifying your stack or enabling hybrid interoperability.

Option A: Kafka Java Client for TxEventQ (OKafka)

What It Is

OKafka is a Kafka-compatible Java client that replaces the external Kafka broker with the Oracle AI Database itself. It implements familiar Kafka APIs such as Producer, Consumer, and Admin, but routes messages through JDBC and AQ to TxEventQ topics inside the database.
This allows existing Kafka applications to run with minimal changes—mostly configuration adjustments.

How It Works

In the OKafka path, Kafka topics map directly to TxEventQ queues.
Partitions correspond to database-level partitions, and offsets are maintained transactionally.
Every enqueue (produce) and dequeue (consume) operation participates in the same Oracle transaction, guaranteeing atomicity and exactly-once semantics.

Your producer and consumer code stays nearly identical; you only point bootstrap.servers to the Oracle listener, configure JDBC or wallet-based credentials, and use uppercase topic names.
The Oracle AI Database acts as both the event store and message broker, maintaining persistence, ordering, and recovery automatically.

Setup Basics

  1. Ensure you are using Oracle AI Database 21c or later, where TxEventQ is available.

  2. Create or grant a user with AQ privileges (DBMS_AQADM, DBMS_AQ, etc.).

  3. Set an appropriate STREAMS_POOL_SIZE and ensure the database listener is reachable.

  4. Add OKafka and Oracle JDBC/AQ libraries to your application classpath.

  5. Create a TxEventQ topic via DBMS_AQADM.CREATE_DATABASE_KAFKA_TOPIC and reference it from your producer.
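As a sketch of steps 2 and 5 above (user, password, and topic names are placeholders; the exact CREATE_DATABASE_KAFKA_TOPIC parameter list varies by release, so verify it against the DBMS_AQADM reference for your version):

```sql
-- As a DBA: create the application user and grant AQ privileges
-- (tequser/secret are placeholder credentials).
CREATE USER tequser IDENTIFIED BY secret;
GRANT CONNECT, RESOURCE TO tequser;
GRANT EXECUTE ON DBMS_AQADM TO tequser;
GRANT EXECUTE ON DBMS_AQ TO tequser;

-- As the application user: create a Kafka-compatible TxEventQ topic.
-- The second argument is assumed here to be the partition count; check
-- the procedure signature for your database release.
BEGIN
  DBMS_AQADM.CREATE_DATABASE_KAFKA_TOPIC('MY_TOPIC', 3);
END;
/
```

Note that the topic name is uppercase, matching the naming convention the OKafka client expects.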

Example Usage

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.oracle.okafka.clients.admin.AdminClient;
    import org.oracle.okafka.clients.producer.KafkaProducer;

    // Connection properties: the database listener takes the place of a Kafka broker.
    Properties props = new Properties();
    props.put("bootstrap.servers", "dbhost:1521");      // Oracle listener host:port
    props.put("oracle.service.name", "ORCLPDB1");       // PDB service name
    props.put("security.protocol", "PLAINTEXT");
    props.put("oracle.net.tns_admin", "/path/to/tns");  // directory with tnsnames.ora

    // Create a TxEventQ topic with 3 partitions.
    AdminClient admin = AdminClient.create(props);
    NewTopic topic = new NewTopic("MY_TOPIC", 3, (short) 1);
    admin.createTopics(Collections.singleton(topic)).all().get();

    // Produce a record; send() returns a Future, so get() blocks until the enqueue completes.
    KafkaProducer<String, String> producer =
        new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
    producer.send(new ProducerRecord<>("MY_TOPIC", "key", "value")).get();
    producer.close();
    admin.close();
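A consumer counterpart might look like the following sketch. It assumes OKafka's KafkaConsumer (org.oracle.okafka.clients.consumer) mirrors the standard Kafka consumer API and reuses the connection properties from the producer example; the group name is a placeholder and maps to a TxEventQ subscriber.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.oracle.okafka.clients.consumer.KafkaConsumer;

public class TxEventQConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "dbhost:1521");   // Oracle listener, not a Kafka broker
        props.put("oracle.service.name", "ORCLPDB1");
        props.put("security.protocol", "PLAINTEXT");
        props.put("oracle.net.tns_admin", "/path/to/tns");
        props.put("group.id", "MY_GROUP");               // placeholder consumer group
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer =
                 new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer())) {
            consumer.subscribe(Collections.singletonList("MY_TOPIC"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                        record.partition(), record.offset(), record.key(), record.value());
            }
            consumer.commitSync(); // the dequeue commits as part of the database transaction
        }
    }
}
```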

Benefits and Tradeoffs

Pros

  • Minimal code changes from existing Kafka clients.

  • Strong transactional guarantees for produce and consume.

  • Simpler stack—no external brokers to operate.

  • Lower latency for in-database workflows.

Cons

  • Scaling depends on database resources rather than a distributed broker.

  • Some Kafka APIs or features are limited.

  • Tight coupling between database and messaging infrastructure.

  • Observability differs from the native Kafka tooling model.


Option B: Kafka Connect Source and Sink Connectors for TxEventQ

What It Is

For teams that want to keep a Kafka cluster but integrate it with Oracle AI Database, the Kafka Connect connectors offer a practical bridge.
The Sink Connector moves data from Kafka topics into TxEventQ, while the Source Connector streams messages from TxEventQ topics back to Kafka.
This enables hybrid streaming, bidirectional synchronization, and gradual migration scenarios.

How It Works

You deploy the TxEventQ connector JAR into Kafka Connect workers.
The connector uses Oracle JMS and JDBC to interact with TxEventQ, mapping JMS messages to Kafka records.
Configuration files define database URL, wallet path, topic/queue names, and batching options.
Kafka Connect manages offsets, scaling, and retry logic.
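Deployment itself follows standard Kafka Connect conventions; for example, a distributed worker discovers the connector JAR through its plugin path (paths and topic names below are illustrative):

```
# connect-distributed.properties (excerpt)
plugin.path=/opt/kafka/plugins            # directory containing the TxEventQ connector JAR
bootstrap.servers=broker1:9092            # the external Kafka cluster
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
```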

Example Configurations

Sink Connector (Kafka → TxEventQ)

    {
      "name": "txeventq-sink",
      "config": {
        "connector.class": "com.oracle.database.messaging.txeventq.TxEventQSinkConnector",
        "topics": "MY_KAFKA_TOPIC",
        "oracle.url": "jdbc:oracle:thin:@dbhost:1521/ORCLPDB1",
        "oracle.user": "TEQUSER",
        "oracle.password": "secret",
        "queue.topic.name": "MY_TXQ_TOPIC"
      }
    }
    

Source Connector (TxEventQ → Kafka)

    {
      "name": "txeventq-source",
      "config": {
        "connector.class": "com.oracle.database.messaging.txeventq.TxEventQSourceConnector",
        "queue.topic.name": "MY_TXQ_TOPIC",
        "topics": "MY_KAFKA_TOPIC",
        "oracle.url": "jdbc:oracle:thin:@dbhost:1521/ORCLPDB1",
        "oracle.user": "TEQUSER",
        "oracle.password": "secret"
      }
    }
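Once a Connect worker is running, configurations like the ones above can be registered through Kafka Connect's standard REST API. This sketch assumes a worker listening on localhost:8083 and the JSON payloads saved to local files:

```shell
# Register the sink connector (Kafka -> TxEventQ).
curl -s -X POST -H "Content-Type: application/json" \
     --data @txeventq-sink.json \
     http://localhost:8083/connectors

# Register the source connector (TxEventQ -> Kafka).
curl -s -X POST -H "Content-Type: application/json" \
     --data @txeventq-source.json \
     http://localhost:8083/connectors

# Check a connector's status.
curl -s http://localhost:8083/connectors/txeventq-sink/status
```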

Benefits and Tradeoffs

Pros

  • Preserves existing Kafka workflows while introducing Oracle AI Database streaming.

  • Ideal for hybrid or phased adoption.

  • Leverages Kafka Connect’s ecosystem and monitoring.

  • Scales horizontally with worker tasks.

Cons

  • Additional latency through the Connect bridge.

  • Requires operating Kafka Connect infrastructure.

  • Limited transactional coupling between systems.

  • Needs version alignment across Kafka Connect, connector, and Oracle AI Database.


Choosing the Right Path

Aspect                     OKafka (Java Client)                      Kafka Connect Bridge
-------------------------  ----------------------------------------  ------------------------------------
Primary Goal               Replace Kafka with in-database messaging  Integrate or bridge Oracle and Kafka
Complexity                 Low – only database + client              Higher – requires Kafka Connect
Latency                    Lowest, no external hop                   Slightly higher
Transactional Integration  Full ACID within DB                       Best-effort across systems
Flexibility                Suited for DB-centric apps                Ideal for hybrid ecosystems
Scaling Model              Vertical (DB resources)                   Horizontal (Kafka workers)

Recommendation:
Use OKafka when consolidating workloads inside Oracle AI Database or when transactional consistency is paramount.
Use the Kafka Connect bridge when interoperating with external systems or migrating gradually from Kafka clusters.


Conclusion

Oracle TxEventQ provides two complementary ways to connect the database world with Kafka-style event streaming.
The OKafka client path brings simplicity and transactional depth for database-native microservices, while the Kafka Connect bridge offers interoperability for hybrid deployments.
Together, they give architects the freedom to choose between embedded and federated streaming—without compromising reliability or enterprise data integrity.


Learn more:

https://docs.oracle.com/en/database/oracle/oracle-database/23/adque/Kafka_cient_interface_TEQ.html

https://docs.oracle.com/en/database/oracle/oracle-database/21/adque/interoperability_TEQ_AQ.html

https://github.com/oracle/okafka

https://central.sonatype.com/artifact/com.oracle.database.messaging/txeventq-connector

https://docs.oracle.com/en/database/oracle/oracle-database/26/adque/rel-changes.html