Tuesday Aug 07, 2012

OWB – ANSI and Oracle SQL code generation

There is a configuration property in OWB for switching between ANSI SQL and Oracle SQL code generation. It is under ‘Code generation options’ in the mapping configuration. The join condition is expressed in Oracle SQL join syntax, and OWB will reinterpret it if generating ANSI SQL.

You can change the value to false, generate the code and inspect it inline within the mapping editor.

The 11gR2 release of OWB has changes in the join component that allow you to express the join type in a logical manner, so you can indicate an outer join on a group, for example.
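For illustration (generic SQL rather than OWB-generated code, using simple DEPT/EMP tables), here is an outer join in Oracle join syntax and its ANSI equivalent:

-- Oracle join syntax: (+) marks the outer-joined side
select d.deptno, e.ename
from   dept d, emp e
where  d.deptno = e.deptno (+);

-- the equivalent ANSI syntax
select d.deptno, e.ename
from   dept d left outer join emp e
on     d.deptno = e.deptno;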

Friday Apr 06, 2012

ODI 11g – How to override SQL at runtime?

Following on from the posting some time back entitled ‘ODI 11g – Simple, Powerful, Flexible’, here we push the envelope even further. Rather than just having the SQL we override defined statically in the interface design, we will have it configurable via a variable… at runtime.

Imagine you have a well-defined interface shape that you want fulfilled, and that shape can be satisfied from a number of different sources - that is what this allows: one interface can consume data from many different places using variables. The cool thing is that, combined with ODI’s reference API, this can be fantastically flexible and useful.

When I use the variable as the option value and execute the top-level scenario that uses this temporary interface, I get prompted (or, to be correct, can get prompted) for the value of the variable.

Note I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is evaluated at runtime, the execution context will resolve it to the correct table name etc.
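For example, in a context where the SCOTT logical schema maps to a physical schema also named SCOTT, that expression would resolve at runtime to something like SCOTT.EMP, so the SQL override passed in below becomes:

select empno, deptno from SCOTT.EMP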

Each time I execute, I could use a different source provider (obviously there are some dependencies on KMs/technologies here). For example, in the following Groovy snippet the first execution queries EMP in the SCOTT model, and the second queries the OTHERS datastore in the BOB model.

m = new Properties();
// the DEMO.SQLSTR variable carries the SQL override; single-quoted Groovy strings keep the embedded double quotes intact
m.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@>');
s = new StartupParams(m);
runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

m2 = new Properties();
m2.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","OTHERS", "BOB","D")@>');
s2 = new StartupParams(m2);
runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);

You’ll need a patch to 11.1.1.6 for this type of capability. Thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for helping push the envelope!

Tuesday Jan 24, 2012

ODI Time Generation – SQL as a Source

Came across a nice use of the SQL as a Source KM from the earlier posting, where the source was a time dimension generator. The forum entry is here; the temporary interface is a data generator which, when nested, can be used in an interface to merge or load into a target.
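As a rough sketch of the idea (generic Oracle SQL, not the exact query from the forum entry), a time generator used as the SQL source could produce one row per day with a connect-by row generator:

select trunc(sysdate) + level - 1                      as day_date,
       to_char(trunc(sysdate) + level - 1, 'YYYYMMDD') as day_key,
       to_char(trunc(sysdate) + level - 1, 'YYYY')     as year_num,
       to_char(trunc(sysdate) + level - 1, 'MM')       as month_num
from   dual
connect by level <= 365

Nested as a temporary interface, this gives a set-based time dimension source without any physical staging table.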


Nice way to capture this within the tool and leverage different integration KMs on the target.

Wednesday Nov 02, 2011

Parallel Processing with DBMS_PARALLEL_EXECUTE

Here is another illustration of some of the powerful capabilities of the DBMS_PARALLEL_EXECUTE package in the Oracle database, carrying on from the earlier post here. One of the comments on that post asked how to insert into a differently named table within each chunk, where that insert can also perform parallel DML. This kind of scenario could be interesting for very high-end processing; the targets could be end-point tables, or tables that are prepared and then swapped in via partition exchange.

The image below shows a variation on the original post where, rather than inserting into a specific partition, you write into a specific table.

Your own chunking criteria can drive the whole process. The question was how to drive it from a table using SQL such as ‘select distinct level_key, level_key from chunk_table’ (level_key is deliberately selected twice, since each chunk’s start and end id are the same value), where chunk_table holds the level_key and the target table name. For example, it could contain the following data:

level_key   table_name
1           sales_level1
2           sales_level2
3           sales_level3
4           sales_level4

So the first chunk with level_key 1 will write the results to table sales_level1 etc.
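A minimal sketch of the driving table (the names match the listing above, but this DDL is illustrative rather than from the original post):

create table chunk_table (
  level_key  number,
  table_name varchar2(30)
);

insert into chunk_table values (1, 'sales_level1');
insert into chunk_table values (2, 'sales_level2');
insert into chunk_table values (3, 'sales_level3');
insert into chunk_table values (4, 'sales_level4');
commit;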

You can use the DBMS_PARALLEL_EXECUTE package as follows to create this functionality. The start/end values have to be of data type NUMBER, so you will have to look up the (target) table name inside your PL/SQL block within the statement provided in the RUN_TASK call.

This block has the query to determine the chunks .....

begin
   -- drop any previous task with the same name, ignoring the error if none exists
   begin
     DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
   exception when others then null;
   end;
   -- create the task and derive the chunks from the driving table;
   -- each row returned supplies a chunk's start id and end id (here the same level_key)
   DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME',
     sql_stmt => 'select distinct level_key, level_key from chunk_table',
     by_rowid => false);
end;
/

Then the next block will construct and process the task......

begin
   -- each chunk executes this anonymous block with its :start_id/:end_id values bound;
   -- start and end are the same level_key, so :start_id identifies the chunk
   DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',
     sql_stmt => 'declare
       s varchar2(16000);
       vstart_id number := :start_id;
       vend_id number := :end_id;   -- bound but not used (start = end per chunk)
       v_table_name varchar2(30);
     begin
       -- look up the target table for this level_key
       select table_name into v_table_name from chunk_table where level_key = vstart_id;
       s := ''insert into ''||v_table_name||'' select /*+ PARALLEL(STG, 8) */ colx from STAGING_TABLE STG
         where level_key = :vstart_id'';
       execute immediate s using vstart_id;
       commit;
     end;',
     language_flag => DBMS_SQL.NATIVE, parallel_level => 2 );
end;
/

The anonymous PL/SQL block can be any arbitrary code; you can see the target table name is retrieved, and the example does a parallel INSERT using the hint PARALLEL(STG, 8). Anyway, good to share.
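As a quick sanity check after the run (a generic data dictionary query, not from the original post), the DBMS_PARALLEL_EXECUTE views show how the chunks and the task fared:

-- per-chunk outcome for the task
select status, count(*)
from   user_parallel_execute_chunks
where  task_name = 'TASK_NAME'
group by status;

-- overall task status
select task_name, status
from   user_parallel_execute_tasks
where  task_name = 'TASK_NAME';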

Friday Jan 04, 2008

Advanced Aggregation


Thursday Sep 20, 2007

Leveraging XDB
