
Information, tips, tricks and sample code for Big Data Warehousing in an autonomous, cloud-driven world

Chunking of Updates

Jean-Pierre Dijcks
Master Product Manager

While I was looking at Edition-Based Redefinition I was puzzling over the performance of moving large data volumes across editions. So I started thinking about how to get that cross-edition trigger to do its work in parallel. Turns out I was thinking upside down... or inside out, maybe.

11g Release 2 introduces a new package called DBMS_PARALLEL_EXECUTE, and its RUN_TASK procedure lets me apply a cross-edition trigger to the chunked update statements. There is my answer to getting the redefinition of the data to happen in parallel. Eureka... we had already figured this out, it just took me a while to make the connection...
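
To make that concrete, here is a minimal sketch of the pattern, assuming a table SCOTT.CUSTOMERS and a forward cross-edition trigger named CUSTOMERS_FWD_XED (both names are made up for illustration). The update itself does nothing; its only job is to touch every row in a chunk so the cross-edition trigger fires, and RUN_TASK spreads those chunks over parallel job slaves:

DECLARE
  l_task VARCHAR2(30) := 'xed_upgrade_task';  -- hypothetical task name
  l_sql  CLOB;
BEGIN
  -- Create the task and split SCOTT.CUSTOMERS into rowid ranges
  -- of roughly 10,000 rows each.
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => l_task);
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => l_task,
    table_owner => 'SCOTT',
    table_name  => 'CUSTOMERS',
    by_row      => TRUE,
    chunk_size  => 10000);

  -- A do-nothing update: the real transformation happens in the
  -- forward cross-edition trigger that fires for each touched row.
  l_sql := 'UPDATE scott.customers
               SET cust_id = cust_id
             WHERE rowid BETWEEN :start_id AND :end_id';

  -- Run the chunks on 8 parallel job slaves, applying the
  -- cross-edition trigger to each chunk's update.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name                  => l_task,
    sql_stmt                   => l_sql,
    language_flag              => DBMS_SQL.NATIVE,
    apply_crossedition_trigger => 'CUSTOMERS_FWD_XED',  -- hypothetical trigger
    parallel_level             => 8);
END;
/

Note the :start_id and :end_id placeholders: DBMS_PARALLEL_EXECUTE binds each chunk's rowid boundaries into them, which is what turns one big statement into many small ones.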

If you are just interested in avoiding large rollback segments, or are looking for a way to do updates on data in small, controlled batches, I would say have a look at the documentation for DBMS_PARALLEL_EXECUTE. The nice thing about the chunking is that you get restartability and control over the update in smaller chunks, without having to build all of that infrastructure yourself.
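
For that plain chunked-update case, a sketch like the following shows the restart pattern (the table BIG_TABLE and its STATUS column are made up for illustration). Each chunk commits on its own, so undo stays small, and any chunks that fail can be retried with RESUME_TASK rather than redoing the whole update:

DECLARE
  l_task   VARCHAR2(30) := 'chunked_update';  -- hypothetical task name
  l_status NUMBER;
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => l_task);

  -- Chunk by rowid ranges of ~5,000 rows each.
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => l_task,
    table_owner => USER,
    table_name  => 'BIG_TABLE',     -- hypothetical table
    by_row      => TRUE,
    chunk_size  => 5000);

  -- Run the update over all chunks with 4 parallel job slaves.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => l_task,
    sql_stmt       => 'UPDATE big_table
                          SET status = ''PROCESSED''
                        WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  -- Restartability: only failed chunks are reprocessed; retry twice.
  l_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(l_task);
  FOR i IN 1 .. 2 LOOP
    EXIT WHEN l_status = DBMS_PARALLEL_EXECUTE.FINISHED;
    DBMS_PARALLEL_EXECUTE.RESUME_TASK(l_task);
    l_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(l_task);
  END LOOP;

  DBMS_PARALLEL_EXECUTE.DROP_TASK(l_task);
END;
/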

You can read some more here and here.
