A free short course takes AI developers and engineers beyond prompt and context engineering to memory engineering, using Oracle AI Database as the agent memory core

“Memory turns a stateless LLM into an agent that learns over time. How to architect agentic memory is one of the most debated topics in AI right now. This course gives AI developers and engineers a comprehensive view of the most common memory patterns.”

Andrew Ng, Founder, DeepLearning.AI

Oracle and DeepLearning.AI today announced the release of Agent Memory: Building Memory-Aware Agents, a new short course now available on the DeepLearning.AI platform. The course teaches AI developers and engineers how to architect and implement memory systems that give agents persistence, continuity, and the ability to learn over time.

Most agents forget. Each new session starts from zero, accumulated context from previous interactions is discarded, and the agent has no mechanism to learn from what it has already done. As a result, AI developers often rely on workarounds: cramming everything into the context window, reloading conversation logs, or bolting on ad-hoc retrieval.

These approaches can work, but they don’t provide a clear mental model for how information should live inside the boundary of an agentic system. This course treats memory as a first-class citizen in AI agents and is built around that memory-first perspective.

“For the past few years, we have focused on prompt and context engineering to get the best results from a single LLM call. But engineering the right context for agents that need to work over days or weeks requires an effective memory system. This course takes that memory-first approach to building agents.”

Richmond Alake, AI Developer Experience Director, Oracle


Beyond Prompt Engineering

You’ve heard about prompt engineering. You’ve probably heard about context engineering. This course introduces the next layer: memory engineering, treating long-term memory as first-class infrastructure that is external to the model, persistent, and structured.

The course covers the full memory stack across five hands-on modules, built on LangChain, Tavily, and Oracle AI Database:

  • Why AI Agents Need Memory: Explore failure modes of stateless agents and the memory-first architecture used throughout the course.
  • Constructing the Memory Manager: Design persistent memory stores across memory types, model memory data for efficient retrieval, and implement a manager that orchestrates read, write, and retrieval operations during agent execution.
  • Scaling Agent Tool Use with Semantic Tool Memory: Treat tools as procedural memory, index them in a vector store, and retrieve only contextually relevant tools at inference time using semantic search.
  • Memory Operations: Extraction, Consolidation, and Self-Updating Memory: Build LLM-powered pipelines that extract structured facts from raw interactions, consolidate episodic memory into semantic memory, and implement write-back loops that let an agent autonomously update and resolve conflicts in its own knowledge base.
  • Building the Memory-Aware Agent: Assemble a stateful agent that initializes from long-term memory at startup, checkpoints intermediate reasoning states during execution, and persists learned context across sessions.
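To make the semantic tool memory pattern above concrete, here is a minimal, illustrative sketch of the core idea: store each tool with an embedded description, then retrieve only the most relevant tools for a task at inference time. The toy bag-of-words “embedding” stands in for a real embedding model and vector index, and the tool names are invented for illustration; the course’s actual implementation uses LangChain and Oracle AI Database.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. A production system would
    # use a dense embedding model and a vector store instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Procedural memory: each tool is indexed by an embedding of its description.
tools = {
    "get_weather": "look up the current weather forecast for a city",
    "send_email": "send an email message to a recipient",
    "search_flights": "search for airline flights between two airports",
}
tool_index = {name: embed(desc) for name, desc in tools.items()}

def retrieve_tools(task: str, k: int = 2) -> list[str]:
    """Return the k tools most semantically similar to the task."""
    q = embed(task)
    ranked = sorted(tool_index, key=lambda n: cosine(q, tool_index[n]),
                    reverse=True)
    return ranked[:k]

print(retrieve_tools("what is the weather forecast in Paris"))
```

The payoff of this pattern is that an agent with hundreds of registered tools only surfaces a handful of contextually relevant ones per LLM call, keeping the prompt small and tool selection accurate.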

“The patterns we cover here are not theoretical. AI developers and engineers will walk through real implementations: building memory stores, wiring up extraction pipelines, and handling contradictions in memory. You leave with working code you can adapt for your own production agents.”

Nacho Martinez, AI Developer Advocate, Oracle


Oracle AI Database as the Agent Memory Core

Oracle AI Database serves as the unified agent memory core throughout the course. Instead of treating a database as a passive store, the course demonstrates how Oracle AI Database functions as the active retrieval and persistence layer that makes each memory pattern work in production.

Oracle AI Database brings key retrieval strategies into a single engine, including vector search for semantic similarity and unstructured knowledge retrieval, graph traversal for relationship-aware reasoning across connected entities, and relational queries for structured, transactional memory that demands precision and consistency. This helps reduce complexity by avoiding separate systems for different data types.
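The value of converged retrieval can be sketched in a few lines: one store answering both exact, relational-style queries and fuzzy, semantic-style queries over the same memory records, rather than synchronizing a SQL database with a separate vector store. This is an illustrative toy only; the class, field names, and word-overlap scoring are assumptions for demonstration, not Oracle APIs.

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    user: str
    kind: str   # e.g. "preference" or "episode"
    text: str

class ConvergedStore:
    """Toy stand-in for a single engine serving multiple query modes."""

    def __init__(self) -> None:
        self.records: list[MemoryRecord] = []

    def write(self, rec: MemoryRecord) -> None:
        self.records.append(rec)

    def query_exact(self, user: str, kind: str) -> list[MemoryRecord]:
        # Relational-style retrieval: precise, filterable, consistent.
        return [r for r in self.records if r.user == user and r.kind == kind]

    def query_semantic(self, needle: str) -> list[MemoryRecord]:
        # Stand-in for vector search: rank records by word overlap.
        words = set(needle.lower().split())
        scored = [(len(words & set(r.text.lower().split())), r)
                  for r in self.records]
        return [r for s, r in sorted(scored, key=lambda p: -p[0]) if s > 0]

store = ConvergedStore()
store.write(MemoryRecord("ada", "preference", "prefers window seats on flights"))
store.write(MemoryRecord("ada", "episode", "booked a flight to Tokyo last week"))

print(store.query_exact("ada", "preference")[0].text)
print(store.query_semantic("seat preference for flights")[0].text)
```

Because both query paths read the same records, there is no second copy of memory to keep in sync, which is the complexity reduction the converged-engine approach is aiming at.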

The memory patterns taught in this course, such as semantic tool memory, self-updating memory, and memory consolidation, are the same patterns used to build production-grade agentic systems on Oracle AI Database. This course puts that architecture directly in the hands of AI developers and engineers.


Who This Course Is For

Agent Memory: Building Memory-Aware Agents is designed for:

  • AI developers and engineers building or evaluating agentic systems who need production-grade memory architecture
  • ML engineers integrating LLMs into multi-turn or multi-session workflows
  • Developers working with LangChain, LangGraph, or Tavily who want durable, structured memory
  • Technical leaders assessing Oracle AI Database for agent infrastructure at scale


Availability

Agent Memory: Building Memory-Aware Agents is available now on DeepLearning.AI. The course is free to access and requires no prior Oracle experience. Developers can enroll at https://www.deeplearning.ai/short-courses/agent-memory-building-memory-aware-agents/.


About Oracle AI Database

Oracle AI Database is a converged database platform built for AI workloads. It provides native vector search, graph traversal, relational retrieval, and the persistence infrastructure required for production agent memory systems, all in a single database engine. This removes the fragmented infrastructure that often bottlenecks AI innovation. Developers and enterprises worldwide use Oracle AI Database as the unified memory core for building and deploying intelligent, secure, memory-aware AI agents.