Oracle Data Integrator (ODI)
This guide is based on Oracle Database 23ai and Oracle Data Integrator (ODI) 14c (14.1.2.0.0) running on Oracle Linux (64-bit).
ODI is a powerful ELT (Extract, Load, Transform) tool, used mainly for data integration and transformation. In this guide, we use ODI to query an Oracle database and ingest the data into Apica Ascent.
ODI Installation
Download and install ODI from the Oracle Data Integrator downloads page: https://www.oracle.com/middleware/technologies/data-integrator-downloads.html
Basic ODI Concepts
Before you dive in, here are a few key terms:
Repository: Where ODI metadata is stored (Master & Work repositories).
Topology: Configuration for your physical and logical data servers.
Designer: Where you define data models, mappings, and procedures.
Operator: Where you monitor and manage your ETL executions.
Agent: Executes the ODI scenarios or jobs.
Creating a Repository Connection
Start the ODI Studio (GUI):
$ODI_HOME/odi/studio/odi.sh
The initial configuration step is to create a Master Repository (stores global settings) and, optionally, a Work Repository (stores project-level data such as mappings and interfaces):
In the ODI UI, go to File -> New -> Create a New ODI Repository Login
Enter the repository connection details (login name, database user and password, JDBC driver, and URL); a quick connection check is sketched at the end of this section
Click Test to validate the connection, then click OK on the confirmation dialog
Click OK to save the connection
Once connected to the repository, you can switch to Topology and view the source and target Data Servers under the Physical Architecture
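Before creating the repository, you can optionally sanity-check the schema connection with a short Groovy script (Tools -> Groovy -> New Script). This is a minimal sketch, not part of the official setup; the JDBC URL, user, and password below are placeholder assumptions:

```groovy
// Quick connection check for the repository schema (placeholder values).
// The Oracle JDBC driver ships with ODI Studio, so no extra classpath setup is needed.
import groovy.sql.Sql

def url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"   // hypothetical host/service
def sql = Sql.newInstance(url, "ODI_MASTER", "your_password", "oracle.jdbc.OracleDriver")
assert sql.firstRow("SELECT 1 AS ok FROM dual").ok == 1
println "Repository schema connection OK"
sql.close()
```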
Create a Work Repository
Navigate to Topology -> Repositories -> Work Repository
Right-click and select New Work Repository
Fill in the required details at Step 1
At Step 2, enter the following details, then click Finish:
a. Name: e.g., WORKREP1
b. Password: set one for this Work Repository
c. Type: Development
At this point, ODI will create the necessary tables in the same database schema (or another one if specified).
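To confirm that the tables were created, note that ODI repository tables carry the SNP_ prefix, so a quick count from a Groovy script works as a sanity check. A minimal sketch with placeholder connection values:

```groovy
// Count the SNP_-prefixed ODI repository tables in the work repository schema.
import groovy.sql.Sql

def sql = Sql.newInstance("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",   // hypothetical host/service
                          "ODI_WORK", "your_password", "oracle.jdbc.OracleDriver")
// '_' is a single-character wildcard in LIKE, which is fine for a rough count.
def cnt = sql.firstRow("SELECT COUNT(*) AS cnt FROM user_tables WHERE table_name LIKE 'SNP_%'").cnt
println "ODI repository tables found: ${cnt}"
sql.close()
```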
Reconnect with the Work Repository
Click the Disconnect from Repository icon
Reconnect to the repository, this time selecting the Work Repository WORKREP1
Groovy Script Automation for data ingestion from Oracle DB to Apica Ascent
Navigate to Tools -> Groovy -> New Script
A sample script is attached above; it does the following (a minimal sketch also appears after this list):
Connects to an Oracle database
Retrieves data from the EMPLOYEES table one row at a time
Builds a JSON payload per row
Sends each JSON object to the Apica Ascent API endpoint using an HTTP POST
Retries up to 3 times if there's a failure
Logs any failures (with error reason and payload) into a local file called failed_rows.log
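Since the attachment itself isn't reproduced here, the following is a minimal Groovy sketch of that flow. The JDBC URL, credentials, Ascent endpoint URL, and the EMPLOYEES column list are assumptions; replace them with your own values:

```groovy
// Sketch: stream EMPLOYEES rows from Oracle to an Apica Ascent HTTP endpoint.
// All connection values below are placeholders, not values from this guide.
import groovy.sql.Sql
import groovy.json.JsonOutput

def dbUrl      = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"    // hypothetical host/service
def endpoint   = "https://ascent.example.com/v1/json_batch"    // hypothetical Ascent endpoint
def maxRetries = 3
def failLog    = new File("failed_rows.log")

def sql = Sql.newInstance(dbUrl, "HR", "your_password", "oracle.jdbc.OracleDriver")
try {
    // One row at a time: build a JSON payload and POST it.
    sql.eachRow("SELECT employee_id, first_name, last_name, email FROM EMPLOYEES") { row ->
        def payload = JsonOutput.toJson([
            employee_id: row.employee_id,
            first_name : row.first_name,
            last_name  : row.last_name,
            email      : row.email
        ])
        def sent = false
        def lastError = null
        // Retry the POST up to 3 times before giving up on the row.
        for (int attempt = 1; attempt <= maxRetries && !sent; attempt++) {
            try {
                def conn = new URL(endpoint).openConnection()
                conn.requestMethod = "POST"
                conn.setRequestProperty("Content-Type", "application/json")
                conn.doOutput = true
                conn.outputStream.withWriter("UTF-8") { it << payload }
                if (conn.responseCode in 200..299) {
                    sent = true
                } else {
                    lastError = "HTTP ${conn.responseCode}"
                }
            } catch (Exception e) {
                lastError = e.message
            }
        }
        // Log the failure reason and the payload so the row can be replayed later.
        if (!sent) {
            failLog << "${new Date()} | ${lastError} | ${payload}\n"
        }
    }
} finally {
    sql.close()
}
```

You can run this directly from the Groovy editor in ODI Studio; the Oracle JDBC driver is already on Studio's classpath.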
Execute the script
Validate that the data is now ingested into Apica Ascent
Navigate to the Ascent Explore page and verify the ingested data
Tip: To poll the Oracle DB for new rows and ingest them into Apica Ascent at regular intervals, you could use Change Data Capture (CDC) or an incremental-load technique (e.g., filtering on a timestamp column), and use an ODI Agent to schedule the scenario/package to run on that interval; a sketch of the timestamp approach follows.
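As an illustration of the timestamp-column approach, the sketch below keeps a high-water mark in a local file and selects only rows changed since the previous run. It assumes a LAST_UPDATED column on EMPLOYEES (an assumption, not part of the default schema); connection values are again placeholders:

```groovy
// Incremental poll (sketch): fetch only rows changed since the last run.
// Assumes EMPLOYEES has a LAST_UPDATED timestamp column (an assumption).
import groovy.sql.Sql
import java.sql.Timestamp

def stateFile = new File("last_run.ts")
def lastRun = Timestamp.valueOf(stateFile.exists() ? stateFile.text.trim()
                                                   : "1970-01-01 00:00:00")
def sql = Sql.newInstance("jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",   // hypothetical host/service
                          "HR", "your_password", "oracle.jdbc.OracleDriver")
def newest = lastRun
sql.eachRow("SELECT employee_id, last_updated FROM EMPLOYEES " +
            "WHERE last_updated > ? ORDER BY last_updated", [lastRun]) { row ->
    // Build and POST the JSON payload here, as in the script above.
    if (row.last_updated.after(newest)) { newest = row.last_updated }
}
sql.close()
stateFile.text = newest.toString()   // persist the high-water mark for the next run
```

Saved as an ODI procedure or scenario, a script like this can then be scheduled on the ODI Agent to run on your chosen interval.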