Parallel Unload to File
By David Allan-Oracle on Aug 12, 2009
Here we see how to construct a parallel unload mapping that leverages the Oracle Database parallel table function capabilities to write the results of a PLSQL ref cursor to file. The example also illustrates how the table function arguments are constructed within OWB. The format of the file is determined by the user in the mapping, via an expression defining the data to be written.
The example is a simple table-to-file scenario just to illustrate the basic mechanics, but the query/operator set can obviously be more complex. Below we see an example writing the contents of a SALES table to a file. The target is a PLSQL table function with a ref cursor parameter; the ref cursor produces the record to be written to the file (the filename and directory name are embedded in the function definition).
The record itself is a string built by concatenating the columns. Below, the fields are delimited by commas; if you want locale-specific expressions, or fields wrapped in quotes for example, you can do that here.
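For instance, the record expression might look something like the following sketch (the SALES column names and formats here are illustrative, not taken from the mapping above):

```sql
-- Hypothetical record expression: comma-delimited fields, with the
-- channel wrapped in quotes and the amount formatted explicitly
PRODUCT_ID || ',' ||
CUSTOMER_ID || ',' ||
'"' || CHANNEL_ID || '"' || ',' ||
TO_CHAR(AMOUNT_SOLD, 'FM999999990.00')
```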
The table function operator itself has details of the underlying PLSQL function used and an indicator that it is being used as a target in the mapping (you must set this; it's not smart enough to figure it out).
Now you see that the table function operator has two groups: one input group and one output group. There is a property which defines the type for each group, and if you have a mixture of ref cursor inputs and scalars you will need more than one input group. So I have one group (INGRP1) for the ref cursor which produces the record to be written; it is defined with a type of ref cursor, so the operators sourcing the attribute in this group will be input to the function via a PLSQL ref cursor.
The output group simply returns a table of scalars, so I made sure the group has that property set.
That's about it. When we generate the code we see a call to the table function ParallelUnload wrapped in a TABLE call, with the parameter passed in: a ref cursor containing the query representing the operators sourcing the attribute (the 'select blah from SALES' statement). Note also the PARALLEL hint, which I configured on the SALES table with 5 as the number of concurrent servers.
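The generated statement looks roughly like this (a sketch only; the exact column list and hint depend on your mapping and configuration):

```sql
select *
  from table(ParallelUnload(
         cursor(select /*+ PARALLEL(SALES, 5) */
                       PRODUCT_ID || ',' || CUSTOMER_ID || ',' || AMOUNT_SOLD
                  from SALES)));
```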
Running this performs the unload to file in parallel. The function itself is defined as follows:
create or replace type NumSet as table of number;
/
create or replace package RefCur_Unload as
  type Line is record (vc varchar2(32767));
  type MultiLine_cur is ref cursor return Line;
end RefCur_Unload;
/
create or replace function ParallelUnload
  (r RefCur_Unload.MultiLine_cur) return NumSet
  PIPELINED PARALLEL_ENABLE (PARTITION r BY ANY) is
  i             binary_integer := 0;
  vsid          number;
  rec2          RefCur_Unload.Line;
  out_file      utl_file.file_type;
  filename      varchar2(256) := 'dbunload';
  directoryname varchar2(256) := 'MY_DIR';
begin
  -- suffix the file name with the session id so each parallel slave writes its own file
  select sid into vsid from v$mystat where rownum = 1;
  filename := filename || vsid || '.dat';
  out_file := utl_file.fopen(directoryname, filename, 'w');
  loop
    fetch r into rec2;
    exit when r%notfound;
    i := i + 1;
    utl_file.put_line(out_file, rec2.vc);
  end loop;
  utl_file.fclose(out_file);
  pipe row (i);  -- return the number of rows written by this session
  return;
end ParallelUnload;
/
The function uses the PIPELINED and PARALLEL_ENABLE keywords to indicate its table function and parallel capabilities; the Oracle Database Data Warehousing Guide has more on this in the 'Loading and Transformation' chapter if you want to dive into more detail. Also, each parallel session of the function's execution writes its own data file (so the data files will be named dbunload<sessionid>.dat). There was an earlier post on table functions (here) as source and target operators which is also worth checking out; they are a useful way of adding some custom transformation code into the mapping design, all leveraging Oracle PLSQL.
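One setup prerequisite worth remembering: the MY_DIR directory object the function writes through must exist and be writable by the executing schema. A minimal sketch, assuming a hypothetical operating system path and grantee:

```sql
-- Run as a suitably privileged user; path and grantee are illustrative
create or replace directory MY_DIR as '/u01/app/unload';
grant read, write on directory MY_DIR to owb_rt_user;
```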