Tuesday Nov 24, 2015

Little things to know about ... Oracle Partitioning (part one of hopefully many)

Oracle Partitioning is one of the most commonly used options on top of Enterprise Edition - if not the most often used one, which, as you can guess, is always a discussion in our buildings ;-)

Over the years Oracle Partitioning has matured significantly and become more powerful and flexible. But, as in real life, with power and flexibility there always come little things that are good to know (no, that's not an equivalent for complexity). So I am happy to see Connor McDonald just blogging about such a little detail around Interval Partitioning.

Check it out, it's worth it.

Monday Nov 23, 2015

PX Server Sets, Parallelizers, DFO Trees, Parallel Groups, ... Let's Talk About Terminology

In my last post, where I mentioned cases in which a SQL statement uses more PX servers than expected, I said: "There are rare cases when a statement uses more than DOP*2 number of PX servers depending on the plan shape." I started this post to talk about those cases, but then I thought we should clarify the terminology first. So let's go over the basic parallel execution terminology we use in the documentation and in the monitoring tools (V$ views, SQL Monitor, etc...).

DFO (Data Flow Operation), PX Server Set

A DFO is the basic unit of work carried out by PX servers in an execution plan. The set of PX servers working on a DFO is called a PX server set. The number of PX servers in a set is determined by the statement DOP.

Consider the following example.

SQL> explain plan for select /*+ parallel(2) */ * from sales;


SQL> select * from table(dbms_xplan.display);
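Your actual output will of course differ, but a representative plan shape for this statement (abridged, with the cost-related columns removed and illustrative table queue names) looks like this:

--------------------------------------------------------------------------
| Id  | Operation            | Name     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |        |      |            |
|   1 |  PX COORDINATOR      |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |          |  Q1,00 | PCWC |            |
|   4 |     TABLE ACCESS FULL| SALES    |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------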

As a general rule DFO boundaries are indicated by two columns in an execution plan: lines having PX SEND in the Operation column or TQ (Table Queue) names in the Name column indicate DFO boundaries. In this plan we have only one DFO, indicated by Line Id 2. This means the statement will use one PX server set with 2 PX servers, as the DOP is 2.

Only very basic statements can be executed with a single DFO; most statements are executed by multiple DFOs. The following plan shows 2 DFOs, which means 2 PX server sets.

SQL> explain plan for 
  2  select /*+ parallel(2) */ cust_id,count(*)
  3  from sales
  4  group by cust_id;


SQL> select * from table(dbms_xplan.display);
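Again abridged and illustrative, a representative plan shape is:

---------------------------------------------------------------------------
| Id  | Operation                | Name     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |        |      |            |
|   1 |  PX COORDINATOR          |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10001 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY         |          |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE           |          |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH        | :TQ10000 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY      |          |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR |          |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS FULL| SALES    |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------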

We can see this again by looking at the PX SEND operations, Line Ids 2 and 5. Since we have 2 DFOs this statement will use 2 PX server sets, and each set will have 2 PX servers as the DOP is 2. So this statement needs a total of 4 PX servers. One PX server set will perform Line Ids 5-8, the other set will perform Line Ids 2-4.

DFO Tree, Parallelizer

Any PX COORDINATOR in an execution plan is called a parallelizer. As the above example shows, there may be multiple DFOs under a parallelizer; these DFOs are grouped under a DFO tree, so the terms parallelizer and DFO tree are used interchangeably. In the above examples there is one parallelizer, meaning one DFO tree.

A DFO tree in the plan is carried out by at most 2 PX server sets. If there is only one DFO under the DFO tree, as in the first example, there will be only one PX server set. If there are two or more DFOs under the DFO tree, 2 PX server sets will be used; the number of PX server sets is limited to 2 per DFO tree. This is why most statements use 2 PX server sets, meaning they use DOP*2 number of PX servers. Let's look at a plan that has 3 DFOs.

SQL> explain plan for 
  2  select /*+ parallel(2) */ count(*)
  3  from sales s, customers c
  4  where s.cust_id=c.cust_id;


SQL> select * from table(dbms_xplan.display);
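A representative (abridged, illustrative) plan shape with its three PX SEND operations is:

-----------------------------------------------------------------------------
| Id  | Operation                  | Name      |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |        |      |            |
|   1 |  SORT AGGREGATE            |           |        |      |            |
|   2 |   PX COORDINATOR           |           |        |      |            |
|   3 |    PX SEND QC (RANDOM)     | :TQ10002  |  Q1,02 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE         |           |  Q1,02 | PCWP |            |
|   5 |      HASH JOIN BUFFERED    |           |  Q1,02 | PCWP |            |
|   6 |       PX RECEIVE           |           |  Q1,02 | PCWP |            |
|   7 |        PX SEND HASH        | :TQ10000  |  Q1,00 | P->P | HASH       |
|   8 |         PX BLOCK ITERATOR  |           |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL | CUSTOMERS |  Q1,00 | PCWP |            |
|  10 |       PX RECEIVE           |           |  Q1,02 | PCWP |            |
|  11 |        PX SEND HASH        | :TQ10001  |  Q1,01 | P->P | HASH       |
|  12 |         PX BLOCK ITERATOR  |           |  Q1,01 | PCWC |            |
|  13 |          TABLE ACCESS FULL | SALES     |  Q1,01 | PCWP |            |
-----------------------------------------------------------------------------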

Even though we have 3 DFOs there is only one DFO tree (one PX COORDINATOR), meaning this statement will need 2 PX server sets, or 4 PX servers as the DOP is 2.

Finding the Number of Parallelizers and PX Server Sets

You can find the number of PX server sets for a statement using the view v$pq_sesstat. If we run the last example statement above, here is what we see.

SQL> select /*+ parallel(2) */ count(*)
  2  from sales s, customers c
  3  where s.cust_id=c.cust_id;


SQL> select * from v$pq_sesstat;

STATISTIC                      LAST_QUERY SESSION_TOTAL     CON_ID
------------------------------ ---------- ------------- ----------
Queries Parallelized                    1             1          0
DML Parallelized                        0             0          0
DDL Parallelized                        0             0          0
DFO Trees                               1             1          0
Server Threads                          4             0          0
Allocation Height                       2             0          0
Allocation Width                        1             0          0
Local Msgs Sent                    387913        387913          0
Distr Msgs Sent                         0             0          0
Local Msgs Recv'd                  387913        387913          0
Distr Msgs Recv'd                       0             0          0
DOP                                     2             0          0
Slave Sets                              2             0          0

13 rows selected.

It shows that we had 1 parallelizer (indicated by the statistic DFO Trees), 2 PX server sets (indicated by the statistic Slave Sets), and 4 PX servers (indicated by the statistic Server Threads), and that the DOP was 2.

SQL Monitor shows the DFO trees and PX server sets in the Parallel tab of the SQL Monitor report as can be seen in the below screenshot.

It shows the PX server sets as "Parallel Set" and PX servers as "Parallel Server". It seems like we are doing everything to confuse the users by using different names everywhere. In this SQL Monitor report we see that we had 2 PX server sets and 4 PX servers.

Multiple Parallelizers

Some statements can have more than one parallelizer (DFO tree). Since each parallelizer can use 2 PX server sets these statements can use more than DOP*2 number of PX servers. You can identify such statements by looking at the plan as explained above. If the plan has multiple PX coordinators it means the statement has multiple parallelizers. The following plan has 2 parallelizers as indicated by the number of PX coordinators.

SQL> explain plan for
  2  select /*+ parallel(2) */ cust_id, (select max(cust_id) from customers)
  3  from customers
  4  order by cust_id;


SQL> select * from table(dbms_xplan.display);
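A representative (abridged, illustrative) plan shape with its two PX COORDINATOR operations is:

-----------------------------------------------------------------------------
| Id  | Operation                | Name      |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |           |        |      |            |
|   1 |  SORT AGGREGATE          |           |        |      |            |
|   2 |   PX COORDINATOR         |           |        |      |            |
|   3 |    PX SEND QC (RANDOM)   | :TQ20000  |  Q2,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE       |           |  Q2,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR   |           |  Q2,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL  | CUSTOMERS |  Q2,00 | PCWP |            |
|   7 |  PX COORDINATOR          |           |        |      |            |
|   8 |   PX SEND QC (ORDER)     | :TQ10001  |  Q1,01 | P->S | QC (ORDER) |
|   9 |    SORT ORDER BY         |           |  Q1,01 | PCWP |            |
|  10 |     PX RECEIVE           |           |  Q1,01 | PCWP |            |
|  11 |      PX SEND RANGE       | :TQ10000  |  Q1,00 | P->P | RANGE      |
|  12 |       PX BLOCK ITERATOR  |           |  Q1,00 | PCWC |            |
|  13 |        TABLE ACCESS FULL | CUSTOMERS |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------

Here the first DFO tree (the scalar subquery) has one DFO under it, the second one has two.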

Here is what v$pq_sesstat shows after running this statement.

STATISTIC                      LAST_QUERY SESSION_TOTAL     CON_ID
------------------------------ ---------- ------------- ----------
Queries Parallelized                    1             7          0
DML Parallelized                        0             0          0
DDL Parallelized                        0             0          0
DFO Trees                               2            10          0
Server Threads                          6             0          0
Allocation Height                       2             0          0
Allocation Width                        1             0          0
Local Msgs Sent                     11878       1190228          0
Distr Msgs Sent                         0             0          0
Local Msgs Recv'd                   11873       1190215          0
Distr Msgs Recv'd                       0             0          0
DOP                                     2             0          0
Slave Sets                              3             0          0

We had 2 parallelizers (DFO Trees), 3 PX server sets (Slave Sets) and 6 PX servers (Server Threads). There are cases where v$pq_sesstat shows incorrect information when the plan has multiple parallelizers, so do not rely on this information for plans with multiple parallelizers; I will talk about those cases in the next post.

SQL Monitor shows DFO trees as Parallel Group in the Parallel tab as can be seen in the below screenshot.

Again we see that we had 2 parallelizers and 3 PX server sets.

After covering the basic terminology we can now talk about multiple parallelizers in more detail and look at how they work. Stay tuned for the next post.

Friday Nov 20, 2015

Review of Data Warehousing and Big Data at #oow15


This year OpenWorld was bigger, more exciting and packed with sessions about the very latest technology and product features. Most importantly, both data warehousing and Big Data were at the heart of this year’s conference across a number of keynotes and a huge number of general sessions. Our hands-on labs were all completely full as people got valuable hands-on time with our most important new features. The key focus areas at this year’s conference were:
  • Database 12c for Data Warehousing
  • Big Data and the Internet of Things 
  • Cloud: from on-premises, to the Cloud, to running hybrid Cloud systems
  • Analytics and SQL, which continue to evolve to enable more and more sophisticated analysis. 
All these topics appeared across the main keynote sessions including live on-stage demonstrations of how many of our new features can be used to increase the performance and analytical capability of your data warehouse and big data management system - check out the on-demand videos for the keynotes and executive interviews....
[Read More]

Friday Oct 23, 2015

Performance Study: Big Data Appliance compared with DIY Hadoop

Over the past couple of months a team of Intel engineers has been working with our engineers on Oracle Big Data Appliance performance, especially on ensuring a BDA outperforms DIY Hadoop out of the box. The good news is that your BDA, as you know it today, is already 1.2x faster. We are now working to include many of the findings in BDA 4.3 and subsequent versions, so we are steadily expanding that 1.2x into a 2x out-of-box performance advantage. And that is all above and beyond the faster time to value a BDA delivers, as well as on top of the low cost you can get it for.

Read the full paper here.

But we thought we should add some color to all of this, and if you are at OpenWorld this year, come listen to Eric explain all of this in detail on Monday, October 26th at 2:45pm in Moscone West room 3000.

If you can't make it, here is a short dialog we had over the results, with both Eric and Lucy's take on the work they did and what they are up to next.

Q: What was the most surprising finding in tuning the system?

A: We were surprised how well the BDA performed right after its installation. Having worked for over 5 years on Hadoop, we understand it is a long iterative process to extract the best possible performance out of your hardware. BDA was a well-tuned machine and we were a little concerned we might not have much value to add... 

Q: Anything that you thought was exciting but turned out to be not such a big thing?

A: We were hoping for 5x gains from our work, but only got 2x... But, in all seriousness, we were hoping for better results from some of our memory and Java garbage collection tuning. Unfortunately they only resulted in marginal single-digit gains.

Q: What is next?

A: There is a long list of exciting new products coming from Intel in the coming year, such as hardware-accelerated compression, 3D XPoint, almost zero latency PCIe SSDs, and not to forget new processors. We are excited at the idea of tightly integrating them all with Big Data technologies! What is a better test bed than the BDA? A full software/hardware solution!

Looks like we have a lot of fun things to go work on and with, as well as of course looking into performance improvements for BDA in light of Apache Spark.

See you all at OpenWorld, or once again, read the paper here.

Wednesday Oct 14, 2015

Parallel Execution Questions at Oracle OpenWorld

For logistical reasons we do not have a dedicated Parallel Execution booth on the demo grounds at OpenWorld this year. Instead the PX development team will be at the SQL Analytics and Analytic Views booth on Oct 27, Tue, 2pm-5pm and Oct 28, Wed, 10:15am-1pm. If you have any questions about PX, or if you just want to chat with PX developers, find us at that booth. You can also post your questions beforehand as comments here and come talk about them at the booth.

Other than that my session on how to make people happy in a DW will be at Moscone South 307 on Oct 27, Tue, 12:15pm. Come and join us to talk about how the Oracle Database is nice to all of its users and treats them well.

Tuesday Oct 13, 2015

Big Data SQL 2.0 - Now Available

With the release of Big Data SQL 2.0 it is probably time to do a quick recap and introduce the marquee features of 2.0. The key goal of Big Data SQL is to expose data in its original format, stored within Hadoop and NoSQL databases, through high-performance Oracle SQL that is offloaded to storage-resident cells or agents. The architecture of Big Data SQL closely follows the architecture of Oracle Exadata Storage Server Software and is built on the same proven technology.

Retrieving Data

With data in HDFS stored in an undetermined format (schema on read), SQL queries require some constructs to parse and interpret the data so it can be processed in rows and columns. For this, Big Data SQL leverages the available Hadoop constructs, notably InputFormat and SerDe Java classes, optionally through Hive metadata definitions. Big Data SQL then layers the Oracle Big Data SQL Agent on top of this generic Hadoop infrastructure, as can be seen below.

Accessing HDFS data through Big Data SQL

Because Big Data SQL is based on Exadata Storage Server Software, a number of benefits are instantly available. Big Data SQL can not only retrieve data, but can also score Data Mining models at the individual agent, mapping model scoring to an individual HDFS node. Likewise, querying JSON documents stored in HDFS can be done with SQL directly and is executed on the agent itself.

Smart Scan

Within the Big Data SQL Agent, similar functionality exists as is available in Exadata Storage Server Software. Smart Scan applies the filters and column projections from a given SQL query on the data streaming from the HDFS DataNodes, reducing the data that flows to the database to fulfill that query. The benefits of Smart Scan for Hadoop data are even more pronounced than for Oracle Database, as tables are often very wide and very large. Because of the elimination of data at the individual HDFS node, queries across large tables are now possible within reasonable time limits, enabling data warehouse style queries to be spread across data stored in both HDFS and Oracle Database.

Storage Indexes

Storage Indexes - new in Big Data SQL 2.0 - provide the same benefits of IO elimination to Big Data SQL as they provide to SQL on Exadata. The big difference is that in Big Data SQL the Storage Index works on an HDFS block (on the BDA, 256MB of data) and spans 32 columns instead of the usual 8. The Storage Index is fully transparent to both Oracle Database and the underlying HDFS environment. As with Exadata, the Storage Index is a memory construct managed by the Big Data SQL software and invalidated automatically when the underlying files change.

Concepts for Storage Indexes

Storage Indexes work on data exposed via Oracle external tables using both the ORACLE_HIVE and ORACLE_HDFS types. Fields are mapped to these external tables, and the Storage Index is attached to the Oracle (not the Hive) columns, so that when a query references the column(s), the Storage Index kicks in when appropriate. In the current version, Storage Indexes do not support tables defined with Storage Handlers (e.g. HBase or Oracle NoSQL Database).
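As an illustration, a minimal external table over HDFS data could look like the following sketch (the table, column, directory object, and location are all hypothetical):

CREATE TABLE movie_log (click VARCHAR2(4000))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HDFS
  DEFAULT DIRECTORY DEFAULT_DIR
  LOCATION ('/user/oracle/moviework/applog/')
)
REJECT LIMIT UNLIMITED;

A query filtering on the Oracle column click can then benefit from the Storage Index when appropriate.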

Compound Benefits

The Smart Scan and Storage Index features deliver compound benefits. Where Storage Indexes reduce the IO done, Smart Scan then enacts row filtering and column projection on what remains. This latter step remains important as it reduces the data transferred between the systems.

To learn more about Big Data SQL, join us at Open World in San Francisco at the end of the month.

Friday Oct 09, 2015

Android App for Data Warehousing and Big Data at #oow15


A number of people have asked if I could make a smartphone app version of my usual OpenWorld web app (OpenWorld 2015 on your smartphone and tablet) which does not require a data or wifi connection. For Android users there is some good news: here is my new Android app for OpenWorld that installs on your smartphone and runs without needing a wifi connection or a mobile data connection.

[Read More]

Tuesday Oct 06, 2015

Online Calendar for Data Warehousing and Big Data Sessions at #oow15 now available


I have published a shared online calendar that contains all the most important data warehouse and big data sessions at this year’s OpenWorld. The following links will allow you to add this shared calendar to your own desktop calendar application or view it via a browser...


[Read More]

Thursday Oct 01, 2015

DOP Downgrades, or Avoid The Ceiling

We have talked before about how to find the reason for a DOP downgrade. Let's look at the most undesirable downgrade reason, "DOP downgrade due to insufficient number of processes", and why it is undesirable.

PX Server Pool

Oracle allocates PX servers to SQL statements from a pool of processes. The size of this pool is determined by the parameter parallel_max_servers. This parameter acts as the limit for the number of PX servers that can be spawned by the database instance, just like the processes parameter limits the total number of processes for the instance. When the number of PX servers in an instance reaches parallel_max_servers no more PX servers will be spawned. This makes sure that the system does not get overloaded with processes.
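To see the current limits on your instance you can, for example, query v$parameter:

SQL> select name, value from v$parameter
  2  where name in ('processes', 'parallel_max_servers');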

The single exception to this is the use of PX servers to access GV$ views in RAC. When you query a GV$ view, one PX server is allocated on each instance; these PX servers are not counted against parallel_max_servers, so they do not affect the number of PX servers that can be used by statements accessing other objects.

You can use the view V$PX_PROCESS to see the PX servers in your instance. This view also lists the PX servers used for queries accessing GV$ views in RAC, so you can use the predicate IS_GV='FALSE' to filter out those PX servers.

SQL> select server_name,status from v$px_process where is_gv='FALSE';

SERVER_NAME          STATUS
-------------------- ---------
P007                 IN USE
P001                 IN USE
P005                 IN USE
P004                 IN USE
P003                 IN USE
P000                 IN USE
P002                 IN USE
P006                 IN USE
P008                 AVAILABLE
P00A                 AVAILABLE
P00F                 AVAILABLE
P00D                 AVAILABLE
P009                 AVAILABLE
P00E                 AVAILABLE
P00B                 AVAILABLE
P00C                 AVAILABLE

16 rows selected.

Be aware that Oracle will spawn PX servers up to parallel_max_servers when needed; this view only lists the PX servers already spawned. So do not expect the number of PX servers in this view to be equal to parallel_max_servers. The PX servers currently being used have STATUS='IN USE'.

The following query uses v$px_process and parallel_max_servers to show the number of in-use PX servers and the number of available PX servers.

SELECT
  (SELECT COUNT(*) FROM v$px_process
   WHERE is_gv='FALSE' AND status='IN USE') inuse,
  (SELECT value -
     (SELECT COUNT(*) FROM v$px_process
      WHERE is_gv='FALSE' AND status='IN USE')
   FROM v$parameter
   WHERE name='parallel_max_servers') available
FROM dual;

INUSE AVAILABLE
----- ---------
    8        28

PX Server Allocation to Statements

When a SQL statement requires PX servers, these processes are allocated from the PX server pool. The number of PX servers required for a statement depends on the degree of parallelism (DOP) and the execution plan shape. Most statements are executed with DOP*2 number of PX servers because of the producer/consumer model. There are rare cases when a statement uses more than DOP*2 number of PX servers depending on the plan shape. Let's not go into the details of when this can happen, as I will cover these cases in another blog post soon.

Here is an example showing two queries running on an instance with parallel_max_servers=36. One query is running with DOP=8 and using 16 PX servers, the other query is running with DOP=4 and using 8 PX servers.

As you see in the picture, there are 12 PX servers not used by any statement; these are available for any statement.

Downgrades Due to PX Server Shortage

With the default configuration of the Oracle Database (specifically when parallel_degree_policy=MANUAL), when the required number of PX servers for a statement is higher than the number of available PX servers, the statement gets downgraded. This means the statement will run with fewer PX servers than it requires. When the statement is downgraded because there are not enough available PX servers you will see the reason "DOP downgrade due to insufficient number of processes".

Continuing with the same example, the following picture shows a new query submitted with DOP=8, which requires 16 PX servers.

Since there are only 12 available PX servers this statement gets downgraded from DOP=8 to DOP=6 and gets all 12 available PX servers.

Consider another query submitted at this time with DOP=8. Even though this query requires 16 PX servers it will be given none because there are no available PX servers left. This query gets downgraded to DOP=1 and runs serially.

Downgraded? So What?

DOP downgrades can have a huge negative impact on SQL performance. It is not easy to quantify this impact without testing your query with different DOPs, as it depends on the scalability of the statement and the downgrade percentage. If your statement gets downgraded from DOP=48 to DOP=36 maybe you can live with that, but it gets worse as the downgrade percentage gets higher. For example, if the last statement above would complete in 10 minutes with DOP=8, it will now finish in 80 minutes assuming linear scalability, as it will be using one process instead of eight. A statement cannot change its DOP after it starts, so even if the required number of PX servers become available right after execution starts, the statement will not be able to use them.

DOP downgrades due to PX server shortage also make your SQL performance unpredictable. The performance of a statement depends on the number of available PX servers when it is executed; there is no guarantee that it will get the required number of PX servers. This causes users to get widely varying response times for the same statement.

Parallel Statement Queuing to Prevent Downgrades

This brings us to the next post in which we will talk about the Parallel Statement Queuing feature introduced in 11.2. Parallel Statement Queuing prevents downgrades by limiting the number of concurrent parallel statements in the system. When the required number of PX servers for a statement is higher than the number of available PX servers it queues that statement until that number of PX servers become available. More details to follow in the next post.

Tuesday Sep 22, 2015

Optimizer Processing Rates for Auto DOP

In the previous post in this series we looked at how to configure and control Auto DOP and explained which initialization parameters are used for that. Now let's look at how HW characteristics and object statistics are used by the optimizer and how they impact the computation of the DOP.

The optimizer needs two things to compute the DOP for a statement, the amount of work to be done and the HW characteristics of the system.

Amount of work

For every operation in the plan the optimizer calculates the work in terms of the data size that operation processes; this is expressed in bytes and number of rows. The source of this information is object statistics, so, as with most anything else related to the optimizer, keeping object statistics up to date is very important for Auto DOP too.

Processing rates

Besides the amount of work, the optimizer also needs to know the HW characteristics of the system to understand how much time is needed to complete that amount of work. The HW characteristics describe how much work a single process can perform on that system; they are expressed as bytes per second and rows per second and are called processing rates. As they indicate a system's capability, higher rates mean you need fewer processes (which means a lower DOP) for the same amount of work; the more powerful a system is, the fewer resources you need to process the same statement in the same amount of time.

There are two kinds of processing rates, IO and CPU. Let's look at how they are used in versions 11.2 and 12.1.

Processing rates in 11.2

11.2 uses only the IO processing rate, which is expressed as IO megabytes per second. This rate shows how much IO a single process can perform per second. There are two methods to populate this value: you can run IO calibration, or you can manually insert or update the value in the RESOURCE_IO_CALIBRATE$ table as explained in MOS note 1269321.1. Both methods require an instance restart to take effect.
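As a sketch, the manual method from the MOS note boils down to something like this (shown with the 200MB/sec value recommended for Exadata below; an instance restart is still required afterwards):

SQL> delete from resource_io_calibrate$;

SQL> insert into resource_io_calibrate$
  2  values (current_timestamp, current_timestamp, 0, 0, 200, 0, 0);

SQL> commit;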

You can query DBA_RSRC_IO_CALIBRATE to see the value for this processing rate.
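For example:

SQL> select max_pmbps from dba_rsrc_io_calibrate;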


If you do not run IO calibration and have not manually inserted this value, Auto DOP will not be used and you will see a related note in the plan output.

- automatic DOP: skipped because of IO calibrate statistics are missing

The recommended value for Exadata is 200MB/sec. This is the value you will get if you use the Exadata dbm template when creating the database.

11.2 looks only at scan operations, using the object statistics and the IO processing rate to compute a DOP for each scan operation. The maximum of these operation DOPs is chosen as the statement DOP.

Note that this was a conscious decision to ease customers' first switch to Auto DOP. A common practice among users of manual DOP was to pick DOP values as object attributes based on object size; for example, a customer could choose to set all tables below 2GB to serial, tables between 2GB and 10GB to a DOP of 4, and all tables larger than 10GB to a DOP of 16 as the “default” for his or her system. Auto DOP to some extent mimics such a practice, albeit with more flexibility.

In the following example only plan line Id 8 affects the DOP, as it is the only scan operation.

You can use the ADVANCED format option of DBMS_XPLAN to see where the DOP comes from.
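For example (the plan body is elided here; the note shown is representative and SH.SALES is an illustrative object):

SQL> select * from table(dbms_xplan.display(format => 'ADVANCED'));

...
Note
-----
   - automatic DOP: Computed Degree of Parallelism is 3 because of scan of object SH.SALES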


It displays the object that drove the DOP decision.

Processing rates in 12.1

The most significant difference between 12.1 and 11.2 is that Auto DOP now takes more processing rates into account as it uses the CPU and IO cost of all operations.

12.1 uses the IO processing rate just like 11.2 did. The most important difference is that 12c does not require this value to be populated; it uses a default value of 200MB/sec when the value is not set by the user. If you already have this value set you can keep it as is. If you are on 12.1 and starting to use Auto DOP we recommend starting with the default value.

One of the drawbacks of Auto DOP in 11.2 was the lack of CPU costing, which could lead CPU-heavy statements to run with a DOP that was too low. Since each operation in a plan requires CPU as well, the optimizer needs to take the CPU operations into account to compute a DOP that reflects the real resource usage of a statement. That is why CPU processing rates were introduced in 12.1. There are basically two CPU rates used for Auto DOP, bytes per second and rows per second. These specify how much data a single process on the CPU can process per second.

All three of these processing rates are set to default values in 12.1, and they are exposed in the new view V$OPTIMIZER_PROCESSING_RATE.


OPERATION_NAME       DEFAULT_VALUE
-------------------- --------------------
IO_BYTES_PER_SEC     200.00000
CPU_BYTES_PER_SEC    1000.00000
CPU_ROWS_PER_SEC     1000000.00

These values indicate that a single process can scan 200MB/sec, process 1GB/sec on the CPU, or process 1,000,000 rows/sec. These are the default values and we recommend starting with them.

If you want to change these values you can use the new procedure DBMS_STATS.SET_PROCESSING_RATE to set them manually. In this case the values will be populated in the MANUAL_VALUE column of V$OPTIMIZER_PROCESSING_RATE. Unfortunately, because of a bug, the optimizer will not immediately start using the values you set manually; you need to set the parameter _optimizer_proc_rate_source to MANUAL for them to take effect. We will fix this bug so that manually set values are used immediately without changing any parameters.
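As a sketch, setting a rate manually and making the optimizer pick it up looks like this (the 2000 MB/sec value is just an example):

SQL> exec dbms_stats.set_processing_rate('CPU_BYTES_PER_SEC', '2000')

SQL> alter session set "_optimizer_proc_rate_source" = 'MANUAL';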

For the IO rate in 12.1 you can still use the old methods explained for 11.2, and the value you set will be reflected in this new view, but we recommend not using those methods and using this new procedure instead for both IO and CPU processing rates.

There is a simple reason for this: “overloading” the collection of IO statistics on a system with a direct impact on the calculation of Auto DOP turned out not to be the best decision we made, so we are going to fix it (we had cases where people ran IO calibration to see what their system was capable of and inadvertently changed their system's Auto DOP behavior).

When computing the DOP for a statement, unlike 11.2, 12.1 looks at all operations and also at the plan shape. It computes two separate DOPs for each operation, one based on IO rates and one based on CPU rates; these are called the IO DOP and the CPU DOP respectively. For the CPU DOP it uses both CPU_BYTES_PER_SEC and CPU_ROWS_PER_SEC and picks the one that gives the higher DOP. The higher of the IO DOP and the CPU DOP is chosen as the operation DOP. The optimizer then looks at the plan shape and calculates the statement DOP based on the operation DOPs.

Since 12.1 takes all operations into account rather than only IO operations, you can get higher DOPs in 12.1 compared to 11.2. Using the previous sample query, you now get a DOP of 20 in 12.1 compared to a DOP of 3 in 11.2.

So, what should I do now?

For users who are switching to Auto DOP, in 11.2 we recommend starting with a value of 200MB/sec for the IO rate.

In 12.1 we recommend not running IO calibration at all and starting with the default IO and CPU processing rates. Based on your testing you can change these values, increase them for lower DOPs, or decrease them for higher DOPs.

For users already using Auto DOP, just keep what you have if you are happy with your current DOPs. If you are on 11.2 and upgrading to 12.1, start with the enhanced cost model that includes CPU costing and leave the default CPU rates; there is no need to touch the IO rate as you already have it set from 11.2. Be aware that DOPs can change after the upgrade as the optimizer starts using the CPU processing rates in addition to the IO rate. From all we have seen so far this is a good thing: CPU-intensive queries will just pick a higher, more accurate DOP. However, do not take this as a blind rule: as with all upgrades, you need to test your workload to make sure you are satisfied with the DOP changes.

Wednesday Sep 16, 2015

OpenWorld 2015 on your smartphone and tablet

Most of you probably know that each year I publish a data warehouse guide in iBook and PDF format for OpenWorld which contains links to the latest data warehouse videos, a list of the most important sessions along with hands-on labs and profiles of the key presenters. For this year’s conference I have again made all this information available in an HTML web app that (should) run on most smartphones and tablets.

[Read More]

Thursday Sep 03, 2015

Oracle Big Data Lite 4.2.1 - Includes Big Data Discovery

We just released the Oracle Big Data Lite 4.2.1 VM. This VM provides many of the key big data technologies that are part of Oracle's big data platform. Along with all the great features of the previous version, Big Data Lite now adds Oracle Big Data Discovery 1.1:

The list of big data capabilities provided by the virtual machine continues to grow.  Here's a list of all the products that are pre-configured:

  • Oracle Enterprise Linux 6.6
  • Oracle Database 12c Release 1 Enterprise Edition - including Oracle Big Data SQL-enabled external tables, Oracle Multitenant, Oracle Advanced Analytics, Oracle OLAP, Oracle Partitioning, Oracle Spatial and Graph, and more.
  • Cloudera Distribution including Apache Hadoop (CDH5.4.0)
  • Cloudera Manager (5.4.0)
  • Oracle Big Data Discovery 1.1
  • Oracle Big Data Connectors 4.2
    • Oracle SQL Connector for HDFS 3.3.0
    • Oracle Loader for Hadoop 3.4.0
    • Oracle Data Integrator 12c
    • Oracle R Advanced Analytics for Hadoop 2.5.0
    • Oracle XQuery for Hadoop 4.2.0
  • Oracle NoSQL Database Enterprise Edition 12cR1 (3.3.4)
  • Oracle Big Data Spatial and Graph 1.0
  • Oracle JDeveloper 12c (12.1.3)
  • Oracle SQL Developer and Data Modeler 4.1
  • Oracle Data Integrator 12cR1
  • Oracle GoldenGate 12c
  • Oracle R Distribution 3.1.1
  • Oracle Perfect Balance 2.4.0
  • Oracle CopyToBDA 2.0 
Take it for a spin - and check out the tutorials and demos that are available from the Big Data Lite download page.

Monday Aug 10, 2015

Must-See Session Guide for Data Warehousing at #oow15

Your Must-See Guide for Data Warehousing and Big Data #OOW2015

There's so much to learn at Oracle OpenWorld - it provides more educational and networking opportunities than any other conference dedicated to Oracle business and technology users. To help you get the most from this year's event I have prepared a comprehensive guide which lists all the must-see data warehousing and big data sessions...

[Read More]

Friday Aug 07, 2015

System Statistics About DOP Downgrades

I want to take a detour from the Auto DOP series to talk about system statistics related to DOP downgrades. In earlier posts I talked about finding the downgrades for individual SQL statements in SQL Monitor and finding the reason for the downgrades.

Rather than looking at individual statements, if you are trying to find out how many statements are getting downgraded in your system, the obvious place to look is (g)v$sysstat, or AWR for historical data. The statistics available are listed in the documentation as:

Parallel operations not downgraded
Parallel operations downgraded to serial
Parallel operations downgraded 75 to 99 pct
Parallel operations downgraded 50 to 75 pct
Parallel operations downgraded 25 to 50 pct
Parallel operations downgraded 1 to 25 pct

The statistic names are self-explanatory so there is no need to describe each of them here.

One thing to be aware of: as I mentioned in the Auto DOP series, Database Resource Manager (DBRM) limits are integrated with Auto DOP. So, for statements using Auto DOP you will not see any downgrade numbers, as their DOP will already be adjusted according to DBRM; those statements will be counted under "Parallel operations not downgraded" unless they are downgraded for reasons other than DBRM. Here is an example showing this behavior.

With no DBRM plan and parallel_degree_limit set to CPU I get a DOP of 4 because of the parallel degree limit in my system.

With a plan that limits the DOP to 2 for OTHER_GROUPS, which my username is mapped to, I get a DOP of 2 because of the plan limit.



BEGIN
  -- a minimal sketch: a plan named plan1 that limits OTHER_GROUPS to a DOP of 2
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'plan1', COMMENT => 'DOP limit plan');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'plan1',
      GROUP_OR_SUBPLAN => 'OTHER_GROUPS', COMMENT => 'limit DOP to 2',
      PARALLEL_DEGREE_LIMIT_P1 => 2);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

alter system set resource_manager_plan='plan1'; 

When you run this statement with the plan enabled and look at v$sysstat you will see that "Parallel operations not downgraded" is incremented.

Even if you have Auto DOP enabled in your system or session, you can still have statements using manual DOP by using hints. Manual DOP is not integrated with DBRM, so the requested DOP will not be the same as the actual DOP if the DOP limit in your plan is less than the DOP in the hint. These statements will show up as downgrades in v$sysstat.

Here's the same query with a hint that specifies a specific (manual) DOP.

If you run this query you will see that the appropriate downgrade statistic is incremented. 

Since the requested DOP was 8 and the actual DOP was 2, I got a 75% downgrade as indicated by the statistics.

For all reasons other than DBRM, downgrade statistics will be incremented in case of downgrades for both Auto DOP and manual DOP.

Wednesday Aug 05, 2015

Configuring and Controlling Auto DOP

This is the second post in a series talking about Auto DOP. In the first post we talked about what Auto DOP is. Now, let's look at how you can configure and control Auto DOP.

How to enable/disable Auto DOP 

There are two ways to enable/disable Auto DOP: you can enable/disable it for the whole system or for a session using the initialization parameter parallel_degree_policy, or you can enable/disable it for specific SQL statements using hints.


parallel_degree_policy

This is the parameter used to enable/disable Auto DOP for the whole system or for a session.

The default value of this parameter is MANUAL, which disables Auto DOP. With this setting the DOP of a statement is determined by table/index decorations and hints. This does not mean that you cannot use Auto DOP at all, as you can still use hints to request Auto DOP for specific statements, as we will see below.

When you set this parameter to LIMITED, Auto DOP is applied to statements accessing tables/indexes decorated with the default DOP unless those statements set a specific DOP using hints. For all other statements manual DOP will be used.

When you set this parameter to AUTO, Auto DOP is applied to all statements regardless of the table/index DOP decorations unless those statements set a specific DOP using hints.


Regardless of what you set for parallel_degree_policy, you can request Auto DOP for SQL statements using the PARALLEL hint. The statement-level hints PARALLEL and PARALLEL(AUTO) instruct the optimizer to use Auto DOP. The difference is that the first one uses at least a DOP of 2, whereas the second one can choose serial execution depending on the optimizer's DOP computation.

Even if you enable Auto DOP session-wise or system-wise, statements using hints that request a specific DOP, like PARALLEL(integer), will run with that DOP, and Auto DOP will not be used for those statements; see the examples below.
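To summarize the hint variants (using an illustrative query):

select /*+ PARALLEL */ count(*) from sales;        -- Auto DOP, DOP of at least 2
select /*+ PARALLEL(AUTO) */ count(*) from sales;  -- Auto DOP, serial execution possible
select /*+ PARALLEL(8) */ count(*) from sales;     -- manual DOP of 8, Auto DOP not used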

Enabling Auto DOP by setting parallel_degree_policy to AUTO brings additional benefits like Parallel Statement Queuing and In-memory Parallel Execution, we will discuss these in future posts.

How to limit the DOP computed by Auto DOP

The DOP computed by the optimizer with Auto DOP can be quite high depending on the resource requirements of the statement. You can limit the DOP using the initialization parameter parallel_degree_limit or Database Resource Manager (DBRM).


parallel_degree_limit

This parameter limits the DOP that can be computed by the optimizer. After computing the DOP, the optimizer looks at this parameter, adjusts the DOP accordingly, and generates a plan based on the adjusted DOP.

The default value of this parameter is CPU, which means the maximum DOP you can get is the default DOP, parallel_threads_per_cpu * SUM(cpu_count). Even if you increase this parameter's value above the default DOP, the optimizer will still cap the DOP at the default DOP.

When this parameter is set to IO, the maximum DOP is determined by the IO calibration values: the total system IO bandwidth divided by the per-process IO throughput, which is DBA_RSRC_IO_CALIBRATE.MAX_MBPS / DBA_RSRC_IO_CALIBRATE.MAX_PMBPS. If these values have not been set or gathered, the maximum DOP will be the default DOP, as mentioned above.
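For example, you can compute that ceiling yourself if calibration values are present:

SQL> select max_mbps / max_pmbps as io_max_dop from dba_rsrc_io_calibrate;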

This is a system-wide limit, so it limits the DOP for all users in the system to the same value. You can set this parameter to an integer value if you want a global limit but for more fine-grained control on the DOP for different kinds of users we recommend leaving this parameter as default and using Database Resource Manager (DBRM) to limit the DOP.

This parameter has no effect on statements not using Auto DOP. 


Database Resource Manager (DBRM)

With DBRM you can map different users/applications to different consumer groups depending on business requirements, and set separate DOP limits for those consumer groups. The DBRM directive that specifies the DOP limit is parallel_degree_limit_p1. This directive is integrated with the optimizer, which means the optimizer will look at this value, adjust the computed DOP, and generate a plan accordingly.

How to set the threshold for Auto DOP 

As explained in the first post, for every SQL statement the optimizer first generates a serial plan and estimates the execution time. It compares the estimated time with a threshold and decides whether or not to use parallelism. This threshold is specified by the initialization parameter parallel_min_time_threshold.


parallel_min_time_threshold

This parameter specifies the minimum estimated execution time for a SQL statement to be considered for parallel execution. If the estimated time is less than this value the statement runs serially. If it is greater, the optimizer goes on to generate a parallel plan and compute the DOP for the statement.

The default value for this parameter is AUTO which means 10 seconds. You can decrease this value if you want more statements to run in parallel or increase it if you want fewer statements to run in parallel.
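For example, to consider parallelism only for statements estimated to run longer than 30 seconds:

SQL> alter session set parallel_min_time_threshold = 30;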

This parameter has no effect on statements not using Auto DOP. 

So, for any SQL statement that the optimizer decides to run in parallel with Auto DOP, it takes into account the initialization parameters, DBRM settings, system and object statistics, and HW characteristics to compute the DOP. 

We have looked at the initialization parameters you can use to control Auto DOP; in the next post we will look at how system and object statistics are used and what we mean by "HW characteristics".

UPDATE: The next post in this series is here


The Data Warehouse Insider is written by the Oracle product management team and sheds light on all things data warehousing and big data.

