Tuesday Sep 22, 2015

Optimizer Processing Rates for Auto DOP

In the previous post in this series we looked at how to configure and control Auto DOP and explained which initialization parameters are used for that. Now let's look at how HW characteristics and object statistics are used by the optimizer and how they impact the computation of the DOP.

The optimizer needs two things to compute the DOP for a statement: the amount of work to be done and the HW characteristics of the system.

Amount of work

For every operation in the plan the optimizer calculates the work in terms of the data size that operation processes; this is expressed as bytes and number of rows. The source of this information is object statistics, so as with most anything else related to the optimizer, keeping the object statistics up-to-date is very important for Auto DOP too.

Processing rates

Besides the amount of work the optimizer also needs to know the HW characteristics of the system to understand how much time is needed to complete that amount of work. These HW characteristics describe how much work a single process can perform on that system; they are expressed as bytes per second and rows per second and are called processing rates. Since they indicate a system's capability, the higher these rates are, the fewer processes (which means a lower DOP) you need for the same amount of work; the more powerful a system is, the fewer resources you need to process the same statement in the same amount of time.

There are two kinds of processing rates, IO and CPU. Let's look at how they are used in versions 11.2 and 12.1.

Processing rates in 11.2

11.2 uses only the IO processing rate, which is expressed as IO megabytes per second. This rate shows how much IO a single process can perform per second. There are two methods to populate this value: you can run IO calibration, or you can manually insert or update the value in the RESOURCE_IO_CALIBRATE$ table as explained in MOS note 1269321.1. Both of these methods require an instance restart to take effect.
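
For reference, here is a minimal sketch of the first method, running IO calibration with DBMS_RESOURCE_MANAGER.CALIBRATE_IO; the number of disks and the latency target below are placeholder values you would adjust for your own storage.

SET SERVEROUTPUT ON
DECLARE
  l_max_iops       PLS_INTEGER;
  l_max_mbps       PLS_INTEGER;
  l_actual_latency PLS_INTEGER;
BEGIN
  -- placeholder inputs: adjust num_physical_disks and max_latency to your system
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 12,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops || ' max_mbps=' || l_max_mbps);
END;
/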

You can query DBA_RSRC_IO_CALIBRATE to see the value for this processing rate.

select MAX_PMBPS from DBA_RSRC_IO_CALIBRATE;

If you do not run IO calibration and have not manually inserted this value, Auto DOP will not be used and you will see a related note in the plan output.

- automatic DOP: skipped because of IO calibrate statistics are missing

The recommended value for Exadata for this is 200MB/sec. This is the value you will get if you use the Exadata dbm template when creating the database.

11.2 looks at only scan operations and uses the object statistics and the IO processing rate to compute a DOP for each scan operation. The maximum of these operation DOPs is chosen as the statement DOP.

Note that this was a conscious decision to alleviate the first switch to Auto DOP for customers. A common practice of users of manual DOP was to pick DOP values as object attributes based on the object size; for example, a customer could choose to set all tables below 2GB to serial, tables between 2GB and 10GB to a DOP of 4, and all tables larger than 10GB to a DOP of 16 as “default” for his or her system. Auto DOP to some extent mimics such a practice, albeit with more flexibility.

In the following example only plan line id 8 affects the DOP as it is the only scan operation.

You can use the ADVANCED format option of DBMS_XPLAN to see where the DOP comes from.

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(format=>'advanced'));

It displays the object from which the DOP was derived.

Processing rates in 12.1

The most significant difference between 12.1 and 11.2 is that Auto DOP now takes more processing rates into account as it uses the CPU and IO cost of all operations.

12.1 uses the IO processing rate just like 11.2 did. The most important difference is that 12c does not require this value to be populated. It uses a default value of 200MB/sec when this value is not set by the user. If you already have this value set you can keep it as is. If you are in 12.1 and starting to use Auto DOP we recommend starting with the default value.

One of the drawbacks of Auto DOP in 11.2 was the lack of CPU costing, which could cause CPU-heavy statements to run with a DOP that was too low. Since each operation in a plan requires CPU as well, the optimizer needs to take the CPU cost of operations into account to compute a DOP that reflects the real resource usage of a statement. That is why CPU processing rates were introduced in 12.1. There are basically two CPU rates used for Auto DOP, bytes per second and rows per second; these specify how much data a single process can process on the CPU per second.

All three of these processing rates are set to default values in 12.1 and are stored in the new view V$OPTIMIZER_PROCESSING_RATE.

SQL> select OPERATION_NAME, DEFAULT_VALUE
  2  from V$OPTIMIZER_PROCESSING_RATE
  3  where OPERATION_NAME in ('IO_BYTES_PER_SEC','CPU_BYTES_PER_SEC', 'CPU_ROWS_PER_SEC');

OPERATION_NAME       DEFAULT_VALUE
-------------------- --------------------
IO_BYTES_PER_SEC     200.00000
CPU_BYTES_PER_SEC    1000.00000
CPU_ROWS_PER_SEC     1000000.00

These values indicate that a single process can scan 200MB/sec and can process 1GB/sec or 1,000,000 rows/sec on the CPU. These are the default values and we recommend starting with them.

If you want to change these values you can use the new procedure DBMS_STATS.SET_PROCESSING_RATE to set them manually. In this case the values will be populated in the MANUAL_VALUE column of V$OPTIMIZER_PROCESSING_RATE. Unfortunately, because of a bug the optimizer will not immediately start using the values you set manually; you need to set the parameter _optimizer_proc_rate_source to MANUAL for them to take effect. We will fix this bug so that manually set values are used immediately without changing any parameters.
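
As a hedged sketch (the positional arguments below, a rate name and a value, reflect my reading of the procedure; verify against the PL/SQL packages reference for your release), manually raising the CPU row rate and working around the bug mentioned above could look like this:

-- set the CPU row processing rate manually
EXEC DBMS_STATS.SET_PROCESSING_RATE('CPU_ROWS_PER_SEC', 2000000);

-- workaround until the bug is fixed: tell the optimizer to use the manual rates
ALTER SESSION SET "_optimizer_proc_rate_source" = 'MANUAL';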

For the IO rate in 12.1 you can still use the old methods explained for 11.2, and the value you set will be reflected in this new view, but we recommend not using those methods and using this new procedure instead for both IO and CPU processing rates.

There is a simple reason for this: “overloading” the collection of IO statistics on a system so that it directly impacts the calculation of Auto DOP turned out not to be the best decision we made, so we are going to fix this (we had cases where people ran IO calibration to see what their system was capable of and inadvertently changed their system’s Auto DOP behavior).

When computing the DOP for a statement, unlike 11.2, 12.1 looks at all operations and also the plan shape. It computes two separate DOPs for each operation, one based on the IO rates and the other based on the CPU rates; these are called the IO DOP and the CPU DOP respectively. For the CPU DOP it uses both CPU_BYTES_PER_SEC and CPU_ROWS_PER_SEC and picks the one that gives the higher DOP. The higher of the IO DOP and the CPU DOP is chosen as the operation DOP. The optimizer then looks at the plan shape and calculates the statement DOP based on the operation DOPs.

Since 12.1 takes into account all operations rather than only IO operations you can get higher DOPs in 12.1 compared to 11.2. Using the previous sample query you now get a DOP of 20 in 12.1 compared to a DOP of 3 in 11.2.

So, what should I do now?

For users who are switching to Auto DOP, in 11.2 we recommend starting with a value of 200MB/sec for the IO rate.

In 12.1 we recommend not running IO calibration at all and starting with the default IO and CPU processing rates. Based on your testing you can change these values, increase them for lower DOPs, or decrease them for higher DOPs.

For users already using Auto DOP, just keep what you have if you are happy with your current DOPs. If you are on 11.2 and upgrading to 12.1, start with the enhanced cost model that includes CPU costing and leave the default CPU rates; there is no need to touch the IO rate as you already have it set from 11.2. Be aware that the DOPs can change after the upgrade as the optimizer will start using the CPU processing rates in addition to the IO rate, which is a good thing from all we have seen so far; CPU-intensive queries will just pick a higher, more accurate DOP. However, do not take this as a blind rule: as with all upgrades, you need to test your workload to make sure you are satisfied with the DOP changes.

Wednesday Sep 16, 2015

OpenWorld 2015 on your smartphone and tablet

Most of you probably know that each year I publish a data warehouse guide in iBook and PDF format for OpenWorld which contains links to the latest data warehouse videos, a list of the most important sessions along with hands-on labs and profiles of the key presenters. For this year’s conference I have again made all this information available in an HTML web app that (should) run on most smartphones and tablets.


Thursday Sep 03, 2015

Oracle Big Data Lite 4.2.1 - Includes Big Data Discovery

We just released Oracle Big Data Lite 4.2.1 VM.  This VM provides many of the key big data technologies that are part of Oracle's big data platform.  Along with all the great features of the previous version, Big Data Lite now adds Oracle Big Data Discovery 1.1:

The list of big data capabilities provided by the virtual machine continues to grow.  Here's a list of all the products that are pre-configured:

  • Oracle Enterprise Linux 6.6
  • Oracle Database 12c Release 1 Enterprise Edition (12.1.0.2) - including Oracle Big Data SQL-enabled external tables, Oracle Multitenant, Oracle Advanced Analytics, Oracle OLAP, Oracle Partitioning, Oracle Spatial and Graph, and more.
  • Cloudera Distribution including Apache Hadoop (CDH5.4.0)
  • Cloudera Manager (5.4.0)
  • Oracle Big Data Discovery 1.1
  • Oracle Big Data Connectors 4.2
    • Oracle SQL Connector for HDFS 3.3.0
    • Oracle Loader for Hadoop 3.4.0
    • Oracle Data Integrator 12c
    • Oracle R Advanced Analytics for Hadoop 2.5.0
    • Oracle XQuery for Hadoop 4.2.0
  • Oracle NoSQL Database Enterprise Edition 12cR1 (3.3.4)
  • Oracle Big Data Spatial and Graph 1.0
  • Oracle JDeveloper 12c (12.1.3)
  • Oracle SQL Developer and Data Modeler 4.1
  • Oracle Data Integrator 12cR1 (12.1.3.0.1)
  • Oracle GoldenGate 12c
  • Oracle R Distribution 3.1.1
  • Oracle Perfect Balance 2.4.0
  • Oracle CopyToBDA 2.0 
Take it for a spin - and check out the tutorials and demos that are available from the Big Data Lite download page.

Monday Aug 10, 2015

Must-See Session Guide for Data Warehousing at #oow15






Your Must-See Guide for Data Warehousing and Big Data #OOW2015

There's so much to learn at Oracle OpenWorld - it provides more educational and networking opportunities than any other conference dedicated to Oracle business and technology users. To help you get the most from this year's event I have prepared a comprehensive guide which lists all the must-see data warehousing and big data sessions...


Friday Aug 07, 2015

System Statistics About DOP Downgrades

I want to take a detour from the Auto DOP series to talk about system statistics related to DOP downgrades. In earlier posts I talked about finding the downgrades for individual SQL statements in SQL Monitor and finding the reason for the downgrades.

Rather than individual statements, if you are trying to find out how many statements are getting downgraded in your system, the obvious place to look is (g)v$sysstat, or AWR for historical data. The statistics available are listed in the documentation as:

Parallel operations not downgraded
Parallel operations downgraded to serial
Parallel operations downgraded 75 to 99 pct
Parallel operations downgraded 50 to 75 pct
Parallel operations downgraded 25 to 50 pct
Parallel operations downgraded 1 to 25 pct
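
For example, a quick way to check these counters on your instance:

select name, value
from   v$sysstat
where  name like 'Parallel operations%';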

The statistic names are self-explanatory, so there is no need to describe each of them here.

One thing to be aware of is that, as I mentioned in the Auto DOP series, Database Resource Manager (DBRM) limits are integrated with Auto DOP. So, for statements using Auto DOP you will not see any downgrade numbers as their DOP will already be adjusted according to DBRM; those statements will be counted under "Parallel operations not downgraded" unless they are downgraded for reasons other than DBRM. Here is an example showing this behavior.

With no DBRM plan and parallel_degree_limit set to CPU I get a DOP of 4 because of the parallel degree limit in my system.


With a plan that limits the DOP to 2 for OTHER_GROUPS which my username is mapped to I get a DOP of 2 because of the plan limit.

BEGIN
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_plan('plan1', 'plan1');
  dbms_resource_manager.create_plan_directive(
    plan                     => 'plan1',
    group_or_subplan         => 'OTHER_GROUPS',
    comment                  => 'test',
    parallel_degree_limit_p1 => 2);
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
END;
/

alter system set resource_manager_plan = 'plan1';


When you run this statement with the plan enabled and look at v$sysstat you will see that "Parallel operations not downgraded" is incremented.


Even if you have Auto DOP enabled in your system or session you can still have statements using manual DOP by using hints. Manual DOP is not integrated with DBRM, so the requested DOP will not be the same as the actual DOP if the DOP limit in your plan is less than the DOP in the hint. These statements will show as downgrades in v$sysstat.

Here's the same query with a hint that specifies a specific (manual) DOP.
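
The query itself is shown as a screenshot in the original post; a minimal sketch of such a statement (the table name is only illustrative) is:

select /*+ parallel(8) */ count(*) from sales;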


If you run this query you will see that the appropriate downgrade statistic is incremented. 

Since the requested DOP was 8 and the actual DOP was 2 I got a 75% downgrade as indicated by the statistics.

For all reasons other than DBRM, downgrade statistics will be incremented in case of downgrades for both Auto DOP and manual DOP. 

Wednesday Aug 05, 2015

Configuring and Controlling Auto DOP

This is the second post in a series talking about Auto DOP. In the first post we talked about what Auto DOP is. Now, let's look at how you can configure and control Auto DOP.


How to enable/disable Auto DOP 

There are two ways to enable/disable Auto DOP: you can enable/disable it for the whole system or for a session using the initialization parameter parallel_degree_policy, or you can enable/disable it for specific SQL statements using hints.

PARALLEL_DEGREE_POLICY

This is the parameter used to enable/disable Auto DOP system-wide or at the session level. 

The default value of this parameter is MANUAL which disables Auto DOP. With this setting the DOP of a statement will be determined by table/index decorations and hints. This does not mean that you cannot use Auto DOP as you can still use hints to request Auto DOP for specific statements as we will see below.

When you set this parameter to LIMITED, Auto DOP is applied to statements accessing tables/indexes decorated with the default DOP unless those statements set a specific DOP using hints. For all other statements manual DOP will be used.

When you set this parameter to AUTO, Auto DOP is applied to all statements regardless of the table/index DOP decorations unless those statements set a specific DOP using hints.
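
For example, to enable Auto DOP for your session or for the whole system:

alter session set parallel_degree_policy = auto;
alter system set parallel_degree_policy = auto;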

Hints

Regardless of what you set for parallel_degree_policy you can request Auto DOP for SQL statements using the hint PARALLEL. The statement level hints PARALLEL and PARALLEL (AUTO) instruct the optimizer to use Auto DOP. The difference is that the first one uses at least a DOP of 2 whereas the second one can use serial execution depending on the optimizer's DOP computation.
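
Here is a quick illustration of the two statement-level hints (the table name is only illustrative):

select /*+ parallel */ count(*) from sales;        -- Auto DOP, minimum DOP of 2
select /*+ parallel(auto) */ count(*) from sales;  -- Auto DOP, serial execution possible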

Even if you enable Auto DOP at the session or system level, statements using hints that request a specific DOP, like PARALLEL(integer), will run with that DOP and Auto DOP will not be used for those statements.

Enabling Auto DOP by setting parallel_degree_policy to AUTO brings additional benefits like Parallel Statement Queuing and In-memory Parallel Execution; we will discuss these in future posts.


How to limit the DOP computed by Auto DOP

The DOP computed by the optimizer with Auto DOP can be quite high depending on the resource requirements of the statement. You can limit the DOP using the initialization parameter parallel_degree_limit or Database Resource Manager (DBRM).

PARALLEL_DEGREE_LIMIT

This parameter limits the DOP that can be computed by the optimizer. After computing the DOP the optimizer looks at this parameter and adjusts the DOP accordingly and generates a plan based on the adjusted DOP.

The default value of this parameter is CPU, which means the maximum DOP you can get is the default DOP, which is parallel_threads_per_cpu * SUM(cpu_count). Even if you increase this parameter's value to be higher than the default DOP, the optimizer will always cap the DOP at the default DOP.

When this parameter is set to IO the maximum DOP will be determined based on the IO calibration values. The maximum DOP will be the total system IO bandwidth divided by the per-process IO throughput, which is DBA_RSRC_IO_CALIBRATE.MAX_MBPS / DBA_RSRC_IO_CALIBRATE.MAX_PMBPS. If these values have not been set or gathered, the maximum DOP will be the default DOP as mentioned above.
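
If calibration values are in place, you can compute this IO-based cap directly from the view:

select round(max_mbps / max_pmbps) as io_based_max_dop
from   dba_rsrc_io_calibrate;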

This is a system-wide limit, so it limits the DOP for all users in the system to the same value. You can set this parameter to an integer value if you want a global limit but for more fine-grained control on the DOP for different kinds of users we recommend leaving this parameter as default and using Database Resource Manager (DBRM) to limit the DOP.

This parameter has no effect on statements not using Auto DOP. 

DBRM

With DBRM, you can map different users/applications to different consumer groups depending on business requirements and set separate DOP limits for those consumer groups. The DBRM directive that specifies the DOP limit is parallel_degree_limit_p1. This directive is integrated with the optimizer, which means the optimizer will look at this value, adjust the computed DOP, and generate a plan accordingly.


How to set the threshold for Auto DOP 

As explained in the first post, for every SQL statement the optimizer first generates a serial plan and estimates the execution time. It compares the estimated time with a threshold and decides whether to use parallelism or not. This threshold is specified by the initialization parameter parallel_min_time_threshold.

PARALLEL_MIN_TIME_THRESHOLD

This parameter specifies the minimum estimated execution time a SQL statement must have to be considered for parallel execution. If the estimated time is less than this value the statement runs serially. If it is greater, the optimizer goes on to generate a parallel plan and compute the DOP for the statement. 

The default value for this parameter is AUTO which means 10 seconds. You can decrease this value if you want more statements to run in parallel or increase it if you want fewer statements to run in parallel.
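
For example, to consider statements with an estimated execution time above 5 seconds for parallel execution in your session:

alter session set parallel_min_time_threshold = 5;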

This parameter has no effect on statements not using Auto DOP. 


So, for any SQL statement that the optimizer decides to run in parallel with Auto DOP, it takes into account the initialization parameters, DBRM settings, system and object statistics, and HW characteristics to compute the DOP. 

We have looked at what initialization parameters you can use to control Auto DOP, in the next post we will look at how system and object statistics are used and what we mean by "HW characteristics". 

UPDATE: The next post in this series is here

Monday Aug 03, 2015

What is Auto DOP?

Until 11.2 there were several ways for developers/DBAs to request parallelism for a SQL statement. 11.2 brought another method which is called Automatic Degree of Parallelism (Auto DOP). I will try to cover what Auto DOP is, how it works, and how to control and configure it in a series of blog posts here. Some of this information is already covered in the documentation and the Parallel Execution with Oracle Database 12c Fundamentals white paper, so it would be a good idea to start with those and complement them with this blog series. 

This first part covers the basics of what Auto DOP is.

Manual DOP

Before we talk about Auto DOP let's look at the two most common ways to request parallelism for a SQL statement before Auto DOP. Since these methods require manual intervention at the table/index or SQL level, we call the DOP set by these methods manual DOP.

Table/index decorations

You can set a specific DOP for tables/indexes, and SQL statements accessing those tables/indexes will request the specified DOP.

ALTER TABLE sales PARALLEL 16;

Hints

You can use statement level or object level PARALLEL hints to request a specific DOP for a statement.  

SELECT /*+ parallel(16) */ COUNT(*) FROM customers;

What is wrong with manual DOP?

You can think of several problems with manual DOP, but I think the biggest problem is time; as time passes, the DOP you set today may not give you the same performance tomorrow. The database and the platform it runs on are not static. What happens when your data size doubles? When you replace your HW with a faster system? Tomorrow you may need more, or maybe less, DOP for the same statement. Think about the case when the database gets even smarter and starts to execute operations faster; this means you can get better performance with fewer processes, which means a lower DOP. All of these changes will require you to go back to the tables/indexes and your application to adjust the DOP settings.

Other things to consider are: how does a user decide on a DOP for a statement? What about business users who do not know how to manually set the DOP? Does a user consider the HW configuration when setting a DOP, etc.?

Auto DOP to the rescue

Introduced in 11.2, Auto DOP enables the optimizer to calculate the DOP for a statement based on its resource requirements and the HW characteristics. As opposed to manual DOP, it does not require table/index decorations or hints.

Here is the basic decision flow for a SQL statement with Auto DOP.

The optimizer first generates a serial plan for the SQL statement and estimates the execution time. If the estimated execution time is less than the specified threshold the statement runs serially. If the estimated execution time is greater than the threshold the optimizer generates a parallel plan and calculates a DOP based on resource requirements. I am not going into details like how to set the threshold, how the optimizer calculates the DOP, etc. here; those are for upcoming posts in this series.

How does Auto DOP impact my workload? 

With Auto DOP, since the optimizer decides when to use parallel execution and which DOP to use, the number of parallel statements and their DOPs may change when you enable it, depending on how you configure it. Some serial statements may start running in parallel, and some parallel statements may start running serially. This can change the response time of individual statements and also the resource utilization in your system. Auto DOP relieves you of the burden of deciding on a DOP for each statement and enables you to focus on optimizing the whole workload together with Parallel Statement Queuing and Database Resource Manager. We will talk about how these work together in later posts.

In the next post we will look at how to configure and control Auto DOP, in the meantime please comment if you have any questions here.

UPDATE: Next post of this series is here

Thursday Jul 09, 2015

Parallel_Degree_Limit, Parallel_Max_Degree, Maximum DOP? Confused?

Here are two questions for you. What is the maximum DOP you can get with Auto DOP in a 12c database? Where can you see what that number is without generating execution plans? The answers to these questions may get tricky if you start to play with init.ora parameters. 

What is the maximum DOP you can get with Auto DOP in a 12c database? 

The answer to this question may seem obvious if you look at the documentation: the init.ora parameter PARALLEL_DEGREE_LIMIT determines the maximum DOP. So, you can set it to any integer value you want and that will be your maximum; that is it, or is it? What happens if you set it to a really high value? What is missing from the documentation is that the maximum DOP you can get will always be capped at the default DOP, which is CPU_COUNT * PARALLEL_THREADS_PER_CPU. No matter what number you set for PARALLEL_DEGREE_LIMIT, the internally enforced limit will stay the same; any value higher than this will be ignored. This also applies to Database Resource Manager (DBRM). The parameter and resource manager directives will be obeyed as long as they are lower than the internal limit.

Here is how the maximum DOP is derived for a session. 

Max DOP = MIN (CPU_COUNT * PARALLEL_THREADS_PER_CPU, PARALLEL_DEGREE_LIMIT, DBRM DIRECTIVE PARALLEL_DEGREE_LIMIT_P1)

Here is a sample test case to show the behavior in 12.1.0.2. 

SQL> create table t as select * from all_objects;

Table created.

SQL> exec dbms_stats.set_table_stats(user,'T',numrows=>1000000000000,numblks=>1000000000);

PL/SQL procedure successfully completed.

I created an empty table and set the statistics so that the optimizer thinks it is a very large table; this is to make sure that Auto DOP will calculate a super high DOP. Here are the related parameters:

SQL> select name, value from v$parameter
  2  where name in ('parallel_degree_limit','cpu_count','parallel_threads_per_cpu');

NAME                           VALUE
------------------------------ --------------------------------------------------------------------------------
cpu_count                      2
parallel_threads_per_cpu       2
parallel_degree_limit          128 

Let's look at a statement accessing the large table.
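
The statement and its plan are shown as a screenshot in the original post; a minimal sketch of that kind of check (the exact query is only illustrative) is:

explain plan for select /*+ parallel */ count(*) from t;
select * from table(dbms_xplan.display);

The Note section at the end of the DBMS_XPLAN output shows the computed degree of parallelism.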

The DOP is capped at 4 even though PARALLEL_DEGREE_LIMIT is 128. This is because CPU_COUNT * PARALLEL_THREADS_PER_CPU is 4.

Where can you see the maximum DOP allowed without generating execution plans?

As you can see above, an execution plan shows the computed DOP in the Note section under the plan. But that is the computed DOP, and if it does not hit any limits you will not be able to find out what the maximum DOP can be. Where can we see what our database allows as the maximum DOP?

The view V$SES_OPTIMIZER_ENV can give a clue about this. This view shows the optimizer settings for all sessions in an instance. It also includes settings related to parallel execution. Of particular interest to this discussion are two of them, PARALLEL_DEGREE_LIMIT and PARALLEL_MAX_DEGREE. Let's see what they show in my test environment. Here are the init.ora parameters.

SQL> select name, value from v$parameter
  2  where name in ('parallel_degree_limit','cpu_count','parallel_threads_per_cpu');

NAME                           VALUE
------------------------------ --------------------------------------------------------------------------------
cpu_count                      2
parallel_threads_per_cpu       2
parallel_degree_limit          CPU 

Here is what the view shows with these parameters.

SQL> select name, value from v$ses_optimizer_env
  2  where name in ('parallel_degree_limit','parallel_max_degree')
  3  and sid=(select sid from v$mystat where rownum=1);

NAME                           VALUE
------------------------------ --------------------------------------------------------------------------------
parallel_degree_limit          65535
parallel_max_degree            4 

The first thing to notice is that parallel_degree_limit in this view is not the same thing as the init.ora parameter PARALLEL_DEGREE_LIMIT. Since I did not explicitly set the init.ora parameter to an integer value, the view shows a large number (64K-1); the actual limit is shown as parallel_max_degree. This shows me that in this system I cannot get a DOP higher than 4 with Auto DOP.

If you set the init.ora parameter to an integer value this is what the view shows. 

SQL> alter system set parallel_degree_limit=128;

System altered. 

SQL> select name, value from v$ses_optimizer_env
  2  where name in ('parallel_degree_limit','parallel_max_degree')
  3  and sid=(select sid from v$mystat where rownum=1);

NAME                           VALUE
------------------------------ --------------------------------------------------------------------------------
parallel_degree_limit          128
parallel_max_degree            4 

Now parallel_degree_limit shows the actual init.ora parameter value. The rule still applies: the maximum DOP I can get is parallel_max_degree, which is 4 as shown in the execution plan above. If you set the init.ora parameter to a value lower than CPU_COUNT * PARALLEL_THREADS_PER_CPU you will see that parallel_max_degree reflects that. This is also true for DBRM.

So, parallel_max_degree is calculated as: MIN (CPU_COUNT * PARALLEL_THREADS_PER_CPU, PARALLEL_DEGREE_LIMIT, DBRM DIRECTIVE PARALLEL_DEGREE_LIMIT_P1).

The only caveat with parallel_max_degree is that it can only show values up to 4095 (4K-1). Above this number it starts over from 0. If you bump up one of the variables in the formula, like CPU_COUNT, you will see cases like the one below.

SQL> select name, value from v$parameter
  2  where name in ('parallel_degree_limit','cpu_count','parallel_threads_per_cpu');

NAME                           VALUE
------------------------------ ----------
cpu_count                      2048
parallel_threads_per_cpu       2
parallel_degree_limit          CPU 

SQL> select name, value from v$ses_optimizer_env
  2  where name in ('parallel_degree_limit','parallel_max_degree')
  3  and sid=(select sid from v$mystat where rownum=1);

NAME                           VALUE
------------------------------ --------------------------------------------------------------------------------
parallel_degree_limit          65535
parallel_max_degree            0 

In these cases the maximum DOP you can get will be different from parallel_max_degree.

So, you can rely on this value if your CPU_COUNT is not in the two thousands range! If you have such a huge system use the formula given above to calculate max DOP allowed. And please let me know how you are using parallel execution in such an environment :)

Remember that these are only the maximum values that can be calculated by the optimizer; at runtime you are still bounded by PARALLEL_MAX_SERVERS. Obviously your statement will get downgraded if PARALLEL_MAX_SERVERS is reached.

Tuesday Jul 07, 2015

Update to BDA X5-2 provides more flexibility and capacity with no price changes

As more people pick up big data technologies, we see the workloads run on these big data systems evolve and diversify. The initial workloads were all Map Reduce and fit a specific, (micro) batch workload pattern. Over the past couple of years that has changed, and that change is reflected in the Hadoop tools - specifically with YARN. While there is still quite a bit of batch work being done, typically using Map Reduce (think Hive, Pig etc.), we are seeing our customers move to more mixed workloads where the batch work is augmented with both more online SQL and more streaming workloads.

More Horsepower - More Capacity 

The change towards a more mixed workload leads us to change the shape of the underlying hardware. Systems now shift away from the once sacred "1-core to 1-disk ratio" and also from the small memory footprints for the worker nodes.

With the BDA X5-2 update in December 2014, BDA doubled the base memory configuration and added 2.25x more CPU resources in every node as well as upgrading to Intel's fastest Xeon E5 CPU. BDA X5-2 now has 2 * 18 Xeon cores to enable CPU intense workloads like analytics SQL queries using Oracle Big Data SQL, machine learning, graph applications etc.

With the processing covered for these more mixed workloads, we looked at other emerging trends or workloads and their impact on the BDA X5-2 hardware. The most prominent trend we see in big data are the large data volumes we expect to see from the Internet of Things (IoT) explosion and the potential cost associated with storing that data.

To address this issue (and storage cost in general) we are now adding 2x more disk space onto each and every BDA disk doubling the total available space on the system while keeping the list price constant. That is correct, 2x capacity but no price change! 

More Flexibility

And if that isn't enough, we are also changing the way our customers can grow their systems by introducing BDA Elastic Configurations.

As we see customers build out production in large increments, we also see a need to be more flexible in expanding the non-production environments (test, qa and performance environments). BDA X5-2 Elastic Configurations enables expansion of a system in 1-node increments by adding a BDA X5-2 High Capacity (HC) plus InfiniBand Infrastructure into a 6-node Starter Rack.

The increased flexibility enables our customers to start with a production scale cluster of 6 nodes (X5-2 or older) and then increment within the base rack up to 18 nodes, then expand across racks without any additional switching (no top of rack required, all on the same InfiniBand network) to build large(r) clusters. The expansion is of course all supported from the Oracle Mammoth configuration utility and its CLI, greatly simplifying expansion of clusters.

Major Improvement, No Additional Cost

Over the past generations BDA has quickly adapted to changing usage and workload patterns, enabling the adoption of Hadoop into the data ecosystem with minimal infrastructure disruption but maximum business benefits. The latest update to BDA X5-2 increases flexibility, delivers more storage capacity and runs more workloads than ever before. 

For more information see the BDA X5-2 Data Sheet on OTN

Thursday Jul 02, 2015

Using Oracle Big Data Spatial and Graph

Wondering how to get started with graph analyses?  The latest Oracle Big Data Lite VM includes Oracle's new spatial and graph toolkit for big data.  Check out these two blog posts that describe how to find interesting relationships in data:

 Pretty cool :)

 

Saturday Jun 20, 2015

Oracle Big Data Spatial and Graph - Installing the Image Processing Framework

Oracle Big Data Lite 4.2 was just released - and one of the cool new features is Oracle Spatial and Graph.  In order to use this new feature, there is one more configuration step required.  Normally, we include everything you need in the VM - but this is a component that we couldn't distribute.

For the Big Data Spatial Image Processing Framework, you will need to install and configure Proj.4 - Cartographic Projections Library.  Simply follow these steps: 

  • Start the Big Data Lite VM and log in as user "oracle"
  • Launch firefox and download this tarball (http://download.osgeo.org/proj/proj-4.9.1.tar.gz) to ~/Downloads
  • Run the following commands at the linux prompt:
    • cd ~/Downloads
    • tar -xvf proj-4.9.1.tar.gz
    • cd proj-4.9.1
    • ./configure
    • make
    • sudo make install

This will create the libproj.so file in directory /usr/local/lib/.  Now that the file has been created, create links to it in the appropriate directories.  At the linux prompt:

Update

For Big Data Lite 4.2 - use these steps

  • sudo ln -s /usr/local/lib/libproj.so /u02/oracle-spatial-graph/shareddir/spatial/demo/imageserver/native/libproj.so
  • sudo ln -s /usr/local/lib/libproj.so /usr/lib/hadoop/lib/native/libproj.so

For releases after Big Data Lite 4.2 - use this step:

  • sudo ln -s /usr/local/lib/libproj.so /opt/oracle/oracle-spatial-graph/spatial/raster/gdal/lib/libproj.so

That's all there is to it.  Big Data Lite is now ready for Oracle Big Data Spatial and Graph!

Oracle Big Data Lite 4.2 Now Available!

Oracle Big Data Lite Virtual Machine 4.2 is now available on OTN.  For those of you that are new to the VM - it is a great way to get started with Oracle's big data platform.  It has a ton of products installed and configured - including: 

  • Oracle Enterprise Linux 6.6
  • Oracle Database 12c Release 1 Enterprise Edition (12.1.0.2) - including Oracle Big Data SQL-enabled external tables, Oracle Multitenant, Oracle Advanced Analytics, Oracle OLAP, Oracle Partitioning, Oracle Spatial and Graph, and more.
  • Cloudera Distribution including Apache Hadoop (CDH5.4.0)
  • Cloudera Manager (5.4.0)
  • Oracle Big Data Connectors 4.2
    • Oracle SQL Connector for HDFS 3.3.0
    • Oracle Loader for Hadoop 3.4.0
    • Oracle Data Integrator 12c
    • Oracle R Advanced Analytics for Hadoop 2.5.0
    • Oracle XQuery for Hadoop 4.2.0
  • Oracle NoSQL Database Enterprise Edition 12cR1 (3.3.4)
  • Oracle Big Data Spatial and Graph 1.0
  • Oracle JDeveloper 12c (12.1.3)
  • Oracle SQL Developer and Data Modeler 4.1
  • Oracle Data Integrator 12cR1 (12.1.3)
  • Oracle GoldenGate 12c
  • Oracle R Distribution 3.1.1
  • Oracle Perfect Balance 2.4.0
  • Oracle CopyToBDA 2.0

Check out our new product - Oracle Big Data Spatial and Graph (and don't forget to read the blog post on a small config update you'll need to make to use it).  It's a great way to find relationships in data and query and visualize geographic data.  Speaking of analysis... Oracle R Advanced Analytics for Hadoop now leverages Spark for many of its algorithms for (way) faster processing.

 But, that's just a couple of features... download the VM and check it out for yourself :). 

Tuesday May 26, 2015

Space Management and Oracle Direct Path Load

If you're loading your data warehouse using Oracle direct path load, you might want to check out a post on the Optimizer blog. It covers how the Oracle database manages space during a direct path load and what we've changed in Oracle Database 12c.

Wednesday May 20, 2015

PX In Memory, PX In Memory IMC?

In my previous post I talked about how to use SQL Monitor to monitor parallel statements. The part of that post which talks about In-Memory Parallel Execution triggered some questions and I want to provide more information on that in a separate post here.

In-Memory Database? In-Memory Parallel Execution? In-Memory Everywhere

Back in the 11.2 days, when there was no Oracle Database In-Memory Option (DBIM), there was no confusion about in-memory because there was only one way to use the database memory for parallel execution and that was In-Memory Parallel Execution (IMPX, also known as IMPQ). Starting with 12.1.0.2, DBIM boosted performance, and also the confusion about the term in-memory parallel execution.

I will not go into the details of how these two features work in this post but let us look at the plan note from the previous post and try to understand what it shows about these.

As we can see, there are two notes about in-memory: Px In Memory Imc and Px In Memory. Let us look at what these two mean.

Px In Memory

This plan note is only related to IMPX and not related to DBIM. It shows whether the statement uses IMPX or not. If you see yes here it means some or all of the tables in the query are accessed through the buffer cache rather than doing direct reads from disk. IMPX looks at the table sizes and the buffer cache size and decides whether to read the blocks into the buffer cache or not. Once the blocks are in the buffer cache, subsequent parallel statements can read the data from there rather than going to the disk. In the case of RAC, IMPX makes sure that PX servers are allocated on the nodes where the table blocks reside; as a result, cache fusion is not used to transfer blocks between nodes, which in turn improves performance.

This plan note in SQL Monitor maps to the note parallel scans affinitized for buffer cache in the DBMS_XPLAN output. Here is an example showing IMPX will be used.
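
The plan screenshot is not reproduced here, but the relevant part of the DBMS_XPLAN output is simply this line in the Note section:

Note
-----
   - parallel scans affinitized for buffer cache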

The first access to the CUSTOMERS table will read the data from disk into the buffer cache; subsequent accesses to this table will read data from the buffer cache. The term affinitized here means that specific blocks are assigned to specific nodes and PX servers on those nodes will read the data through the buffer cache.

You will see this note if IMPX is used independent of whether you are on a single-instance database or a RAC database.

So, if you see the note parallel scans affinitized for buffer cache in the DBMS_XPLAN output you will see Px In Memory=yes in SQL Monitor.

Px In Memory Imc

This plan note is only related to DBIM and not related to IMPX. IMC here means In-Memory Columnar, which indicates DBIM. The IMC store provides metadata that shows which part of an in-memory table is in which node's memory in a RAC database. Parallel Execution uses this information to allocate PX servers on those nodes so that they can read the data from the IMC store without going to disk. The plan note Px In Memory Imc shows whether the statement accesses an in-memory table and whether PX servers are affinitized to RAC nodes. This note does not indicate whether DBIM is used or not; that is indicated by the operations in the plan, like TABLE ACCESS INMEMORY FULL.

This note maps to the note parallel scans affinitized for inmemory in the DBMS_XPLAN output. Here is an example showing in-memory affinity is used.

In a single-instance database you will not see this note because any PX server can read the data from the single node's IMC store and there is no concept of affinity of PX servers to nodes.

So, if you see the note parallel scans affinitized for inmemory in the DBMS_XPLAN output you will see Px In Memory Imc=yes in SQL Monitor.

One or The Other?

It is possible to have parallel statements using both IMPX and DBIM. If your statement accesses both in-memory tables and tables determined to be read via the buffer cache you can see both notes in the plan output.

In this example, the CUSTOMERS table is an in-memory table and, since this is a RAC database, PX servers are affinitized to nodes. The CUSTOMERSPART table is not an in-memory table, but Parallel Execution decided to read it through the buffer cache.

There is a lot of information about how DBIM works in the In-Memory blog, I will talk about how IMPX works in a future post. In the meantime please comment here if you have any questions related to it.

Friday May 15, 2015

Big Data Spatial and Graph is now released!

Cross-posting this from the announcement of the new spatial and graph capabilities. You can get more detail on OTN.

The product objective is to provide spatial and graph capabilities that are best suited to the use cases, data sets, and workloads found in big data environments.  Oracle Big Data Spatial and Graph can be deployed on Oracle Big Data Appliance, as well as other supported Hadoop and NoSQL systems on commodity hardware. 

Here are some feature highlights.   

Oracle Big Data Spatial and Graph includes two main components:

  • A distributed property graph database with 35 built-in graph analytics to
    • discover graph patterns in big data, such as communities and influencers within a social graph
    • generate recommendations based on interests, profiles, and past behaviors
  • A wide range of spatial analysis functions and services to
    • evaluate data based on how near or far things are to one another, or whether something falls within a boundary or region
    • process and visualize geospatial map data and imagery

Property Graph Data Management and Analysis

The property graph feature of Oracle Big Data Spatial and Graph facilitates big data discovery and dynamic schema evolution with real-world modeling and proven in-memory parallel analytics. Property graphs are commonly used to model and analyze relationships, such as communities, influencers and recommendations, and other patterns found in social networks, cyber security, utilities and telecommunications, life sciences and clinical data, and knowledge networks.  


Property graphs model the real world as networks of linked data comprising vertices (entities), edges (relationships), and properties (attributes) for both. Property graphs are flexible and easy to evolve; metadata is stored as part of the graph and new relationships are added by simply adding an edge. Graphs support sparse data; properties can be added to a vertex or edge but need not be applied to all similar vertices and edges.  Standard property graph analysis enables discovery with analytics that include ranking, centrality, recommender, community detection, and path finding.

Oracle Big Data Spatial and Graph provides an industry leading property graph capability on Apache HBase and Oracle NoSQL Database with a Groovy-based console; parallel bulk load from common graph file formats; text indexing and search; querying graphs in database and in memory; ease of development with open source Java APIs and popular scripting languages; and an in-memory, parallel, multi-user, graph analytics engine with 35 standard graph analytics.

Spatial Analysis and Services Enrich and Categorize Your Big Data with Location

With the spatial capabilities, users can take data with any location information, enrich it, and use it to harmonize their data.  For example, Big Data Spatial and Graph can look at datasets like Twitter feeds that include a zip code or street address, and add or update city, state, and country information.  It can also filter or group results based on spatial relationships:  for example, filtering customer data from logfiles based on how near one customer is to another, or finding how many customers are in each sales territory.  These results can be visualized on a map with the included HTML5-based web mapping tool.  Location can be used as a universal key across disparate data commonly found in Hadoop-based analytic solutions. 

Also, users can perform large-scale operations for data cleansing, preparation, and processing of imagery, sensor data, and raw input data with the raster services.  Users can load raster data on HDFS using dozens of supported file formats, perform analysis such as mosaic and subset, write and carry out other analysis operations, visualize data, and manage workflows.  Hadoop environments are ideally suited to storing and processing these high data volumes quickly, in parallel across MapReduce nodes.  

Learn more about Oracle Big Data Spatial and Graph at the OTN product website:

Read the Data Sheet

Read the Spatial Feature Overview

About

The Data Warehouse Insider is written by the Oracle product management team and sheds light on all things data warehousing and big data.
