Your Fusion Applications hold a goldmine of enterprise data — ERP transactions, HCM records, and supply chain events. But what if you need that data for custom AI and ML workloads? That’s where the BICC connector comes in.

Oracle AI Data Platform Workbench lets you connect directly to raw Fusion data using the Business Intelligence Cloud Connector (BICC). A clean path from Fusion to a Spark-powered notebook where you can transform, enrich, and build on your enterprise data.

In this blog article, we walk through the end-to-end flow from configuring the connection to running your first Spark query against Fusion data, based on the official Oracle documentation and the 42-step interactive demo built by the Forward Deployed Engineering (FDE) team.

Interactive Demo: “Fusion Data in Oracle AI Data Platform with BICC” — 42 steps  ·  Launch Demo ↗


Where Does BICC Fit in the Architecture?

Oracle’s Fusion Integration reference architecture defines three options for getting Fusion data into Oracle AI Data Platform. The BICC path is Option 1 — the most direct route, designed for teams that need raw Fusion data in Oracle AI Data Platform Workbench without standing up a full Oracle Fusion AI Data Platform pipeline.

| Option | Path | Best For |
|---|---|---|
| ① BICC | Fusion → BICC → Object Storage → Oracle AI Data Platform notebook | Custom AI and ML, raw data access, data engineering |

AI Data Platform – Fusion Integration Reference Architecture  ·  ① (BICC) highlighted

This BICC connector option is the fastest path to value when you need Fusion data in Oracle AI Data Platform for use cases that Oracle Fusion AI Data Platform doesn’t cover — think custom ML model training, cross-source data enrichment, or building AI agents grounded in operational data.

The End-to-End Flow

BICC is Oracle’s native bulk extraction tool, included with every Fusion Applications subscription. It extracts data from prebuilt Public Virtual Objects (PVOs) — optimized views covering Oracle ERP, HCM, SCM Analytics, and more — and writes compressed CSV files to Oracle Cloud Infrastructure Object Storage. From there, Oracle AI Data Platform’s Spark engine picks them up natively.

🏢 Fusion Apps → ⚡ BICC Extract → 🪵 Object Storage → 📓 AI Data Platform Notebook → △ Delta / AI-ML

What Exactly Is BICC?

The Business Intelligence Cloud Connector is a built-in extraction framework within Oracle Fusion Applications. It provides pre-packaged data extracts called offerings, each containing a set of Public Virtual Objects (PVOs) that represent specific business data views. Think of offerings as curated packages — there are offerings for Financials, Procurement, HCM, Supply Chain, and more. Each PVO inside an offering maps to a specific database view optimized for bulk extraction.

BICC supports both full extracts (an initial load of all records) and incremental extracts (only data changed since the last run), making it suitable for both one-time migrations and ongoing data sync. Extracted data lands as zipped CSV files with an accompanying manifest file.

Prerequisites

| Requirement | Side | Details |
|---|---|---|
| Fusion Admin access | Fusion | Administrator permissions on the Fusion instance |
| BICC role | Fusion | ORA_ASM_APPLICATION_IMPLEMENTATION_ADMIN_ABSTRACT role or equivalent |
| Object Storage bucket | OCI | Bucket in the same compartment as your Oracle AI Data Platform Workbench |
| Bucket identifiers | OCI | Bucket name, namespace, hostname, and region |
| API key and user OCID | OCI | OCID of a user with an API key to access the bucket, plus the tenancy OCID |
| Oracle AI Data Platform Workbench | OCI | Active Workbench instance with a Spark compute cluster running |

Three Steps to Fusion Data in Oracle AI Data Platform

 
STEP 1  Create the BICC Connection to Oracle AI Data Platform Workbench

This step builds the bridge between Fusion and Oracle Cloud Infrastructure. You configure BICC’s external storage to point at your OCI Object Storage bucket so extracted data lands where Oracle AI Data Platform can read it.

→  Log into the BICC Console at https://<host>/biacm
→  Navigate to Configure External Storage → OCI Object Storage Connection tab
→  Enter the bucket name, namespace, region, OCI username, and auth token
→  Click Test Connection to validate connectivity
→  Save the configuration
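As a convenience, the storage fields the console asks for can be derived programmatically. A minimal sketch — the bucket, namespace, and region values below are hypothetical placeholders; the hostname follows OCI's documented native Object Storage endpoint pattern:

```python
# Minimal sketch: assemble the external-storage fields entered in the
# BICC console. Placeholder values are hypothetical, not from the demo.

def bicc_storage_settings(namespace: str, bucket: str, region: str) -> dict:
    """Return the Object Storage values BICC's external storage form expects."""
    return {
        "bucket": bucket,
        "namespace": namespace,
        "region": region,
        # Native OCI Object Storage API endpoint for the region
        "hostname": f"objectstorage.{region}.oraclecloud.com",
    }

settings = bicc_storage_settings("mytenancy-ns", "bicc-extracts", "eu-frankfurt-1")
print(settings["hostname"])  # objectstorage.eu-frankfurt-1.oraclecloud.com
```

Keeping these values in one place also makes it easier to double-check them against what you later configure in Oracle AI Data Platform Workbench.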

 
STEP 2  Add Fusion Data Sources and Schedule Extraction

Still in the BICC Console, define what data to extract and when.

→  Go to Manage Jobs → Add to create a new extraction job
→  Select the offerings and specific PVOs you need
→  Save the job definition
→  Go to Manage Job Schedules → Add to set a schedule
→  Set to run immediately (one-time) or on a recurring basis (daily, weekly, etc.)
→  Save and let the job execute

Once the job completes, verify the data in your OCI Object Storage bucket. BICC exports data as zipped CSV files — one per data store — accompanied by a manifest file.
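For a quick local spot-check of a downloaded extract, the zipped CSVs can be opened with the Python standard library. A sketch only, assuming the layout described above — one CSV per zip with a header row:

```python
import csv
import io
import zipfile

def read_bicc_extract(zip_path: str) -> list[dict]:
    """Read the CSV inside a BICC extract zip into a list of row dicts.

    Sketch: assumes the export layout described above -- one zipped CSV
    per data store, with a header row.
    """
    with zipfile.ZipFile(zip_path) as zf:
        # Find the CSV member (the zip may also carry metadata files)
        csv_name = next(n for n in zf.namelist() if n.lower().endswith(".csv"))
        with zf.open(csv_name) as member:
            text = io.TextIOWrapper(member, encoding="utf-8")
            return list(csv.DictReader(text))
```

This is only for eyeballing a sample file; for real workloads, let the Workbench connector in Step 3 do the parsing.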

 
💡 Pro tip: Audit the columns in each PVO and extract only what you need. By default, BICC extracts all columns, which can produce unnecessarily large files. Use the dedicated ExtractPVOs — they’re optimized for bulk extraction. Avoid using OTBI reporting PVOs for integration. The first full extract can be very large; subsequent incremental runs will be much lighter.

 
STEP 3  Read Fusion Data in an Oracle AI Data Platform Notebook

Now for the payoff. Oracle AI Data Platform Workbench ships with a built-in FUSION_BICC ingestion connector that handles the heavy lifting — it connects to your Fusion instance, fetches BICC-extracted data from external storage, and loads it directly into your Spark session. No manual CSV parsing or bucket path construction required.

→  Open a notebook, attach a Spark cluster, and use the aidataplatform Spark format with type FUSION_BICC.

The connector abstracts away bucket paths, CSV parsing, and file discovery. You point it at the Fusion service URL and the PVO datastore you need — Oracle AI Data Platform Workbench handles the rest. This is the recommended approach and the same pattern shown in the official Oracle AI Data Platform sample notebooks.

PYSPARK · BUILT-IN FUSION_BICC CONNECTOR RECOMMENDED
# Read Fusion data using the built-in BICC connector
df = (spark.read.format("aidataplatform")
     .option("type", "FUSION_BICC")
     .option("fusion.service.url", "<FUSION_URL>")
     .option("user.name", "<USERNAME>")
     .option("password", "<PASSWORD>")
     .option("schema", "<SCHEMA>")
     .option("fusion.external.storage", "IDL_CONNECTOR_BICC")
     .option("datastore", "FscmTopModelAM...SupplierExtractPVO")
     .load())

# Preview the data
df.show()
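If you need several PVOs, the read options can be collected in a small helper instead of repeating the chain for each one. A minimal sketch — the helper only assembles the same option names shown above, and the placeholder values are illustrative:

```python
def bicc_read_options(fusion_url: str, username: str, password: str,
                      schema: str, datastore: str,
                      external_storage: str = "IDL_CONNECTOR_BICC") -> dict:
    """Build the option map for the FUSION_BICC reader so multiple
    PVO datastores can be read with one helper."""
    return {
        "type": "FUSION_BICC",
        "fusion.service.url": fusion_url,
        "user.name": username,
        "password": password,
        "schema": schema,
        "fusion.external.storage": external_storage,
        "datastore": datastore,
    }

# In a Workbench notebook (placeholders as above):
# df = (spark.read.format("aidataplatform")
#       .options(**bicc_read_options("<FUSION_URL>", "<USERNAME>", "<PASSWORD>",
#                                    "<SCHEMA>", "FscmTopModelAM...SupplierExtractPVO"))
#       .load())
```

One dict per PVO keeps the credentials in a single place and makes it trivial to loop over a list of datastores.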

Once the data is in your DataFrame, write it to a managed Delta table to make it queryable across the platform:

PYSPARK · WRITE TO DELTA TABLE
# Write to a managed Delta table (medallion bronze layer)
(df.write.format("delta")
     .mode("overwrite")
     .saveAsTable("fusion_catalog.bronze.erp_transactions"))

From here, you are in the full Oracle AI Data Platform Workbench environment. Clean and transform the data using Spark. Write it into Delta tables following a medallion architecture (bronze → silver → gold). Train ML models. Feed it into GenAI agents. Connect BI tools like Oracle Analytics Cloud (OAC) via JDBC. The data is yours to work with.

What Can You Do Once the Data is in Oracle AI Data Platform?

Once Fusion data lands in Oracle AI Data Platform Workbench, you unlock capabilities that go beyond standard Oracle Fusion AI Data Platform analytics:

 
→  Custom ML/AI model training — train models on operational ERP, HCM, or SCM data using PySpark and Python libraries directly in Oracle AI Data Platform notebooks

→  Cross-source enrichment — join Fusion data with external sources already in your catalog (Object Storage, ADW, Kafka) for richer analytics

→  Medallion architecture — land BICC extracts as bronze, transform to silver, and curate into gold Delta tables with full ACID transactions and time travel

→  GenAI agent development — use Fusion data alongside OCI Generative AI foundation models to build conversational agents grounded on your business context

→  BI & reporting — connect Oracle Analytics Cloud, Tableau, or Power BI via JDBC to query Fusion data stored in Oracle AI Data Platform

→  Data sharing — share curated datasets with other teams or external partners using the built-in Delta Sharing protocol

 
🔑 Key takeaway: BICC is the fastest path from Fusion to AI Data Platform when you need raw operational data for custom data engineering, AI and ML, or analytics use cases that the standard Oracle Fusion AI Data Platform pipeline doesn’t cover. It requires minimal setup — just a bucket, a connection, and a notebook.

Resources

▶  Interactive Demo:  oracle.storylane.io/share/yx2k1grzsd9x

📄  Oracle AI Data Platform Fusion Docs:  docs.oracle.com/…/fusion-data-oracle-ai-data-platform

📘  BICC Extract Guide:  docs.oracle.com/…/biacc/index

💻  Oracle AI Data Platform Ingestion Samples (GitHub):  oracle-aidp-samples/…/Read_Only_Ingestion_Connectors.ipynb