On March 26, 2025, we celebrated the inauguration of the Oracle Innovation Center in Brazil, featuring an entire floor dedicated to a variety of technological experiments and provocations.

The Innovation Center features dedicated spaces for different market segments, such as banks, restaurants, telecom, engineering, healthcare, and more. Each of these spaces offers demonstrations or experiments, making extensive use of Oracle technologies—most of them leveraging artificial intelligence, machine learning, and analytics.

Guiding some of the presentations is an AI agent called Cora AI, designed to answer questions about the spaces and experiences, and to control robots and floor automation such as lights and blinds.

But how does it work?

CORA AI
Several guiding principles shaped the development of Cora AI from the outset. For example, it needed to interact via voice commands to answer questions about the Innovation Center and be able to control the floor: turning lights on and off, raising or lowering blinds, interacting with various robots, among other possible actions.

Given the scope, the project was divided into two major workstreams:

  • Integrations: all robots and automation elements needed to expose APIs, to be centrally managed through Oracle Integration Cloud (OIC).
  • Cora AI: would become the agent responsible for understanding conversations and commands and triggering the required integrations (tools) or answering questions via RAG (Retrieval-Augmented Generation) or tools.

Let’s focus on Cora AI! (Cora stands for Collaborative Oracle Assistant.)

HISTORY
Beyond LLMs, the project explored different Speech2Text and Text2Speech models—both Oracle Cloud and third-party services. This was expected within a space meant for experimentation and innovation. Another challenge, however, came from the microphones.

Initially, we envisioned interacting with Cora AI without handling any device. Early in the project, we used extremely sensitive ceiling microphones that captured virtually every sound. Presentations at the Innovation Center are dynamic, with many people involved, which made it unfeasible. Only one space, called Watch Tower—where there is stricter control over people’s positions—remained viable. It’s worth a visit; there, you can literally “talk to your data”!

At first, we didn’t want presenters to have to use cell phones to interact with Cora, so an intermediate solution was found: high-quality lapel microphones, which also allow audio control (muting/unmuting).

High Quality Lapel Microphones

With the audio input challenge solved, it was time to develop Cora AI!

ARCHITECTURE
As a living experiment, Cora AI has gone through several architectures over the span of a year. The first version used an Oracle service called ODA (Oracle Digital Assistant), a low-code/no-code platform. The initial solution followed this architecture:

Initial Cora AI solution using ODA, Generative AI, and OIC, all services on the Oracle OCI cloud.

Over time, we arrived at the current design (shown below), using the LangGraph framework and MCP servers to interface with various integrations and tools used to control robots and answer different questions.

New version, using LangGraph, MCP Server and other features.

The current solution enables natural conversations that control robots, experiments, and floor automation, and that answer questions about the space, in a setup that can easily be productized.

Next, let’s briefly discuss each architectural component.

LANGGRAPH
There are many frameworks for building AI agents; we wanted an open framework compatible with Oracle’s Generative AI service and multiple LLMs running inside Oracle Cloud. We chose LangGraph to manage complex flows, multiple agents, memory, and other needs. It functions as an orchestrator, deciding when to trigger tools, RAG, memory, or respond using OCI Generative AI’s pretrained models.

Cora AI’s backend implements a ReAct-style agent (Reason + Act) that resolves the user’s request iteratively:

  1. First, it performs a reasoning step to decide what to do next (Thought).
  2. Then, it executes an action by calling an available tool—such as answering an Innovation Center question, requesting a coffee, asking for a robot’s presence, or turning off the lights in the concept hotel room.
  3. Next, it receives the result of that action and incorporates it into the context (Observation).

This cycle repeats until the agent produces a final answer or reaches a stopping condition, such as a maximum number of steps.
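The cycle can be sketched in a few lines of plain Python. This is a hand-rolled illustration of the ReAct loop, not the actual Cora AI backend; the tool names and the stubbed decision function are hypothetical:

```python
# Minimal ReAct-style loop: Thought -> Action -> Observation, repeated
# until the model produces a final answer or hits the step limit.

# Toy tool registry (hypothetical tool names, for illustration only).
TOOLS = {
    "turn_off_lights": lambda room: f"lights off in {room}",
    "order_coffee": lambda table: f"coffee on its way to table {table}",
}

def fake_llm(context):
    """Stand-in for the real model: decides the next step from context."""
    if any("lights off" in observation for _, observation in context):
        return {"type": "final", "answer": "Done! The lights are off."}
    return {"type": "action", "tool": "turn_off_lights", "arg": "hotel room"}

def react_agent(question, max_steps=5):
    context = []  # list of (tool_name, observation) pairs
    for _ in range(max_steps):
        decision = fake_llm(context)          # Thought: decide what to do
        if decision["type"] == "final":
            return decision["answer"]
        tool = TOOLS[decision["tool"]]        # Action: call the chosen tool
        observation = tool(decision["arg"])   # Observation: capture the result
        context.append((decision["tool"], observation))
    return "Sorry, I couldn't finish that request."

print(react_agent("turn off the lights in the hotel room"))
```

In the real backend, LangGraph manages this loop as a graph of nodes, but the Thought/Action/Observation shape is the same.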

ReAct workflow from LangGraph
https://docs.langchain.com/oss/python/langgraph/workflows-agents

LLMs
We have the flexibility to use different models, but from the beginning we’ve used those running on OCI via the Generative AI service. Over a year, we experimented with several models: at first only a few were available, so we started with Cohere, then Llama, Grok, and currently GPT-4.1 mini. Given the experimental nature of the Innovation Center, this is subject to change, and swapping models is very simple thanks to Generative AI and LangGraph. Comprehensive testing is, of course, essential, given that Cora AI controls much of the automation and robotics in the space.

OCI Generative AI is the main service behind the Cora AI design, providing LLM inference inside the OCI cloud.

Integrations
The Oracle Innovation Center offers a variety of experiences across business areas. All experiences require integrations to some extent. To manage these, we rely on Oracle Integration Cloud (OIC) on OCI to orchestrate and govern them, enabling monitoring of calls, errors, and more.

Whenever someone asks Cora to “deliver a towel to the hotel room,” “turn on the room lights,” or even “order a coffee” (delivered by a robot to your table!), OIC orchestrates the integration.
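Under the hood, each such request maps to an OIC-managed integration endpoint. The sketch below illustrates the idea with made-up intent names and URLs (real OIC integrations expose their own REST endpoints):

```python
# Hypothetical mapping from Cora AI intents to OIC integration endpoints.
# The base URL and flow identifiers below are invented for illustration.
OIC_BASE = "https://example-oic.integration.ocp.oraclecloud.com/ic/api"

INTEGRATIONS = {
    "deliver_towel": f"{OIC_BASE}/integration/v1/flows/rest/TOWEL_DELIVERY/1.0/run",
    "room_lights_on": f"{OIC_BASE}/integration/v1/flows/rest/ROOM_LIGHTS/1.0/run",
    "order_coffee": f"{OIC_BASE}/integration/v1/flows/rest/COFFEE_ORDER/1.0/run",
}

def resolve_integration(intent):
    """Return the OIC endpoint the agent should invoke for a given intent."""
    try:
        return INTEGRATIONS[intent]
    except KeyError:
        raise ValueError(f"No integration registered for intent '{intent}'")

print(resolve_integration("order_coffee"))
```

Centralizing the dispatch this way is what lets OIC monitor every call and error in one place.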

For more about OIC in the Oracle Innovation Center, contact Jailton Junior!

This is Cora Robot, one of several devices in the innovation center managed by AI + Oracle Integration Cloud (OIC)

Wondering about MCP?

MCP SERVER
MCP (Model Context Protocol) standardizes communication between LLMs and external tools. The layers are kept separate, so the agent has no direct knowledge of the tools.

The LLMs gain new capabilities via MCP Server—such as performing calculations, triggering robots, answering specific questions, and anything else the tool’s implementation allows. Tools are defined on the MCP Server with detailed descriptions of their functions and parameters, allowing the LLM to decide which tool to use for a given task.
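To make this concrete, here is what a tool definition can look like. The descriptor below follows the JSON Schema shape MCP uses to advertise tools, but the tool itself (`set_room_lights`) is a made-up example, not one from the real Innovation Center server:

```python
# An MCP server advertises each tool with a name, a description, and a JSON
# Schema for its parameters -- this metadata is what lets the LLM decide
# which tool fits a given task. This descriptor is a hypothetical example.
TOOL_DESCRIPTOR = {
    "name": "set_room_lights",
    "description": "Turn the lights of a named room on or off.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "room": {"type": "string", "description": "Room identifier, e.g. 'hotel'"},
            "state": {"type": "string", "enum": ["on", "off"]},
        },
        "required": ["room", "state"],
    },
}

def handle_tool_call(name, arguments):
    """Toy dispatcher standing in for the MCP server's tool-call handler."""
    if name == "set_room_lights":
        return f"lights {arguments['state']} in room {arguments['room']}"
    raise ValueError(f"unknown tool: {name}")

print(TOOL_DESCRIPTOR["name"])
print(handle_tool_call("set_room_lights", {"room": "hotel", "state": "off"}))
```

Note how the description carries real weight here: the model only sees this metadata, never the implementation.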

As my friend Jailton says: “It’s time to emphasize the importance of documenting and describing program methods.” Now, we document methods for both humans and the AI model.

A dedicated MCP Server exists for the Innovation Center, and almost all tools are backed by integrations managed within Oracle Integration Cloud.

RAG
So far, our agent can answer questions about the Innovation Center and trigger robots and integrations that interact with Oracle systems or floor automation (lights, blinds, projections, etc.). But how do we keep it updated with new knowledge?

The OCI Generative AI pre-trained models aren’t aware of recent events or changes at the Innovation Center. To address this, we use Oracle Autonomous Database and an architecture called RAG, or Retrieval-Augmented Generation.

RAG works by augmenting a natural-language prompt with content retrieved from a specified vector store using semantic similarity search. This reduces hallucinations by grounding answers in specific, up-to-date content, and produces more relevant natural-language responses to our questions.

Documents can be incrementally stored in an Object Storage repository and are automatically submitted to an embedding model, indexed, and stored as vectors in Autonomous Database tables. They can then be searched using natural language. These documents can include information about floor maintainers, detailed explanations of new experiences, or any material for publication.
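At its core, the retrieval step is a nearest-neighbor search over embedding vectors. The toy sketch below uses tiny hand-made vectors and cosine similarity to show the idea; a real deployment would use an embedding model and Autonomous Database’s vector search instead:

```python
import math

# Toy "vector store": documents paired with hand-made 3-dimensional
# embeddings. Real embeddings come from an embedding model and are
# stored as vectors in database tables.
DOCS = [
    ("The Watch Tower space lets visitors talk to their data.", [0.9, 0.1, 0.0]),
    ("Robots deliver coffee and towels on the floor.",          [0.1, 0.9, 0.1]),
    ("Blinds and lights are controlled through integrations.",  [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding close to the "robots/coffee" direction:
print(retrieve([0.2, 0.95, 0.05])[0])
# The retrieved text is then prepended to the prompt sent to the LLM.
```

The same two steps — embed the question, fetch the nearest documents — happen on every query before the LLM sees the prompt.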

Example RAG Workflow

You can learn more about RAG at https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/retrieval-augmented-generation1.html

AVATAR
Until now, our agent was only accessed by voice command, with responses heard through the floor’s sound system. Then, we decided to give Cora a face: time to create an avatar.

There are several avatar creation tools on the market. After a careful evaluation of pros and cons, and considering the high cost of hyper-realistic human avatars, we decided to create our own. Ours doesn’t look human—a bit more like an android—but comes at an almost zero cost. We used two main tools: NVIDIA Audio2Face and Unreal Engine.

The avatar is exposed as a service: it receives the text to be spoken, synchronizes it with the Avatar, and sends it to the floor’s audio system.
Now we have both voice and face!
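Conceptually, the service is a small text-to-presence pipeline. The sketch below is purely illustrative: the function names are made up, and the real service drives NVIDIA Audio2Face and Unreal Engine rather than these stubs:

```python
# Illustrative pipeline for the avatar "speak" service: text in,
# synchronized audio + animation out. All functions are stand-ins.

def synthesize_speech(text):
    """Stand-in for Text2Speech: returns fake audio bytes."""
    return f"<audio:{text}>".encode()

def animate_avatar(audio):
    """Stand-in for Audio2Face lip-sync driving the Unreal Engine avatar."""
    return {"frames": len(audio), "synced": True}

def speak(text):
    """Entry point of the avatar service."""
    audio = synthesize_speech(text)      # 1. generate speech audio
    animation = animate_avatar(audio)    # 2. lip-sync the avatar to the audio
    return {"audio": audio, "animation": animation}  # 3. play on the floor system

result = speak("Welcome to the Oracle Innovation Center!")
print(result["animation"]["synced"])
```

Exposing this as a single service keeps the agent simple: it hands over text and the avatar handles voice, face, and audio routing.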

Cora’s avatar, displayed on transparent and opaque TVs in the Innovation Center.

ROBOTS
Extending Cora’s “soul” and “face,” we have several robots performing various tasks within the floor’s experiments.

The oldest, acquired before the Innovation Center was established, is also called Cora: a concierge robot that guides guests around the floor. She knows the floor plan, has obstacle-detection sensors, and is very polite, always saying “excuse me.”

Voice commands to Cora AI are translated into instructions for tools interfacing with the robot’s API, using Oracle Integration Cloud as the connectivity layer.

Other robots follow similar patterns: robots delivering towels to hotel rooms, making coffee, or delivering coffee; quadrupeds for simulated inspections in dangerous areas (all safe, of course, within the Innovation Center).

Robot delivering towels to a hotel room.

EVOLUTION
As we say at the Innovation Center, it’s a living environment in constant evolution.

Recently, Oracle Integration Cloud added features for MCP support—so OIC-managed services can now be exposed via MCP as well. This makes sense for corporate governance, where OIC excels.

This means that Rui Romanini wouldn’t need to code a whole MCP server in Python to expose Innovation Center capabilities to consuming agents—just configure them in OIC.

Many products and services—Oracle and third-party—are launched all the time. The innovation center’s role is to experiment, select, and recommend those relevant to business needs. As my friend Silvio Trasmonte says: “It has to address a real pain point!”

It’s important to note that although we’re an innovation space, we’re also concerned with supportability—everything approved in the evaluation stage must be kept fully functional during visits!

And finally, thank you to the Sinapse / Oracle Innovation Center team! Through teamwork, we were able to build an AI-based solution that delivers automation and information to Innovation Center visitors.

See you next time.

Our new team member… (powered by Oracle Gen AI)

To learn more


LangGraph
Make reliable AI agents:
https://www.langchain.com/langgraph

OIC
Integration platform—with features such as conversion of applications/endpoints into MCP servers:
https://www.oracle.com/br/integration/application-integration/

OCI Generative AI
Language models running within Oracle Cloud:
https://www.oracle.com/artificial-intelligence/generative-ai/generative-ai-service/