Thursday Jan 07, 2016

Data loading into HDFS - Part 1

Today I’m going to start the first article in a series devoted to a very important topic in the Hadoop world – data loading into HDFS. Before anything else, let me explain the different approaches to loading and processing data in different IT systems.

Schema on Read vs Schema on Write

So, when we talk about data loading, we usually load into a system that belongs to one of two types. The first is schema on write. With this approach we have to define columns, data formats and so on up front. During reads, every user observes the same data set. Because we already performed ETL (transforming the data into the format most convenient for that particular system), reads are fast and overall system performance is good. But keep in mind that we already paid the penalty for this when we loaded the data. As examples of schema on write systems, consider relational databases such as Oracle or MySQL.
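As a minimal sketch of that contract (the table and column names here are just illustrative), the structure is fixed before a single row arrives, and nonconforming data is rejected at load time:

CREATE TABLE sales (
  sale_id   NUMBER PRIMARY KEY,
  sale_date DATE NOT NULL,
  amount    NUMBER(10,2)
);

-- The load itself enforces the schema: a malformed date or a
-- non-numeric amount makes the insert fail instead of being stored as-is.
INSERT INTO sales VALUES (1, DATE '2016-01-07', 99.90);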


Schema on Write

The other approach is schema on read. In this case we load data as-is, without any changes or transformations. We skip the ETL step (we don’t transform the data) and have no headaches with data formats and data structures. We just put the file on the file system, like copying photos from a flash card or external storage to your laptop’s disk. How to interpret the data is decided at read time. The interesting part is that the same data (the same files) can be read in different ways. For instance, if you have binary data, you have to define a serialization/deserialization framework and use it within your select; with it you get structured data, without it you get a set of bytes. As another example, even with the simplest CSV files you can read the same column as a number or as a string, which yields different results for sorting and comparison operations.

Schema on Read

The Hadoop Distributed File System is a classical example of a schema on read system. You can find more details about the Schema on Read and Schema on Write approaches here. Now we are going to talk about loading data into HDFS. I hope that after the explanation above you understand that data loading into Hadoop is not the same as ETL (the data is not transformed).
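To make the schema on read idea concrete, here is a minimal HiveQL sketch (the HDFS path and column names are hypothetical) that exposes the very same CSV files through two different schemas:

CREATE EXTERNAL TABLE clicks_as_numbers (
  user_id BIGINT,
  amount  DECIMAL(10,2)
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/clicks';

-- The very same files, this time read with every column as a string:
CREATE EXTERNAL TABLE clicks_as_strings (
  user_id STRING,
  amount  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/clicks';

-- Numerically 9 < 10, but lexicographically '10' < '9':
SELECT * FROM clicks_as_numbers ORDER BY amount;
SELECT * FROM clicks_as_strings ORDER BY amount;

The data itself never moved or changed; only the interpretation at read time did.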


[Read More]

Thursday Dec 24, 2015

Oracle Big Data Lite 4.3.0 is Now Available on OTN

Big Data Lite 4.3.0 is now available on OTN



This latest release is packed with new features - here's the inventory of what's included:

  • Oracle Enterprise Linux 6.7
  • Oracle Database 12c Release 1 Enterprise Edition (12.1.0.2) - including Oracle Big Data SQL-enabled external tables, Oracle Multitenant, Oracle Advanced Analytics, Oracle OLAP, Oracle Partitioning, Oracle Spatial and Graph, and more.
  • Cloudera Distribution including Apache Hadoop (CDH5.4.7)
  • Cloudera Manager (5.4.7)
  • Oracle Big Data Spatial and Graph 1.1
  • Oracle Big Data Discovery 1.1.1
  • Oracle Big Data Connectors 4.3
    • Oracle SQL Connector for HDFS 3.4.0
    • Oracle Loader for Hadoop 3.5.0
    • Oracle Data Integrator 12c
    • Oracle R Advanced Analytics for Hadoop 2.5.1
    • Oracle XQuery for Hadoop 4.2.1
  • Oracle NoSQL Database Enterprise Edition 12cR1 (3.4.7)
  • Oracle Table Access for Hadoop and Spark 1.0
  • Oracle JDeveloper 12c (12.1.3)
  • Oracle SQL Developer and Data Modeler 4.1.2 with Oracle REST Data Services 3.0
  • Oracle Data Integrator 12cR1 (12.2.1)
  • Oracle GoldenGate 12c
  • Oracle R Distribution 3.2.0
  • Oracle Perfect Balance 2.5.0
Also, this release uses GitHub as the repository for all of our sample code (https://github.com/oracle/BigDataLite).  This gives us a great mechanism for updating the samples/demos between releases.  Users simply double-click the "Refresh Samples" icon on the desktop to download the latest collateral.

Friday Dec 18, 2015

Oracle among vendors that lead the pack in The Forrester Wave™: Enterprise Data Warehouse, Q4 2015, achieving highest scores for current product offering and product strategy

According to Forrester, a leading independent research firm, EDWs are now evolving beyond traditional storage and delivery; it is now scale, performance, and innovation that distinguish the EDW leaders. Oracle was top ranked by Forrester in both the current offering and the strategy categories.
[Read More]

Monday Dec 07, 2015

Multiple Parallelizers

The number of PX servers that a SQL statement needs depends on the execution plan and the degree of parallelism (DOP) of the statement. Most statements use DOP*2 number of PX servers as a result of the producer/consumer model used in parallel execution.

We sometimes get questions from users about statements using many more PX servers than they expect. They say "I have a statement with a DOP of 8 and I see that it uses 64 processes". These types of statements typically involve multiple parallelizers. We looked at how you can identify such statements in the previous post; that post also covers basic terminology, so please read it before going into this one.

In this post we will look at a few example cases that can generate plans with multiple parallelizers and how those plans behave at runtime. We will also talk about the implications of having such statements in your system. The examples used here are based on 12c. Ideally a user should be able to control the number of PX servers for a statement, but this is not the case when you have multiple parallelizers. We are trying to minimize the cases of multiple parallelizers, but I wanted to explain what the current behavior is.

Multiple nonconcurrent parallelizers

The typical case where you can see multiple parallelizers is temp table transformation. With temp table transformation Oracle creates temporary tables at runtime, stores intermediate results in those tables and queries them during the execution of the statement. The optimizer can decide to use temp table transformation with subquery factoring a.k.a. WITH clause, grouping sets, star queries, and in-memory aggregation.
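For instance, a factored subquery that the optimizer decides to materialize (here nudged with the well-known but undocumented MATERIALIZE hint; this is an illustrative sketch, not a query from the original examples) shows up in the plan as a TEMP TABLE TRANSFORMATION with its own parallelizers:

WITH channel_totals AS (
  SELECT /*+ materialize */ channel_id, SUM(amount_sold) total_sold
  FROM sales
  GROUP BY channel_id
)
SELECT /*+ parallel(2) */ c.channel_desc, t.total_sold
FROM channel_totals t, channels c
WHERE t.channel_id = c.channel_id;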

Let's look at a simple query with grouping sets as an example to understand how multiple parallelizers work.

SELECT /*+ parallel(2) */ channel_desc, calendar_month_desc, SUM(amount_sold)
FROM sales, times, channels
WHERE sales.time_id=times.time_id 
AND sales.channel_id= channels.channel_id
GROUP BY GROUPING SETS(channel_desc, calendar_month_desc);

This plan has 4 parallelizers (Line Ids 2, 11, 20, 29). Parallelizers #1 and #4 have one DFO each, which means each of them needs one PX server set (2 PX servers as the DOP is 2). Parallelizers #2 and #3 have 2 DFOs each, which means they need 2 PX server sets (4 PX servers as the DOP is 2). The important question here is: will these parallelizers execute concurrently? If they are concurrent this statement will use 12 PX servers; if they are not concurrent the statement can run with fewer PX servers.

In the case of temp table transformation, parallelizers do not execute concurrently; they run one after the other. Here is a timeline of execution for this plan.

T0: First, parallelizer #1 starts and allocates 2 PX servers; it joins the three tables and loads the result set into a temporary table (SYS_TEMP_0FD9D6719_E74332).

T1: When this parallelizer finishes it releases the PX servers back to the pool. These PX servers are now available for anyone to use.

T2: Now parallelizer #2 can start. It allocates 4 PX servers from the pool; these may be the same ones released by the previous parallelizer or different PX servers from the pool. This parallelizer reads from the temporary table created previously, does the group by operation, and loads the results into a new temporary table (SYS_TEMP_0FD9D671A_E74332).

T3: Now it releases the PX servers back to the pool and these PX servers become available in the system.

T4, T5: Parallelizer #3 does the same thing; it allocates 4 PX servers and releases them when it finishes.

T6, T7: Now the last parallelizer (#4) starts. It allocates 2 PX servers, reads from the temporary table created previously (SYS_TEMP_0FD9D671A_E74332), sends the results to the user, and releases the PX servers when it finishes.

This sequence of execution shows that the maximum number of PX servers used concurrently was 4, not more. In this case we used DOP*2 number of PX servers concurrently even though we had multiple parallelizers. This is because a parallelizer allocates PX servers only when it starts and releases them when it finishes. A parallelizer is started only when it is needed in the runtime execution of the plan.

Here is what v$pq_sesstat shows after running this query.

STATISTIC                      LAST_QUERY SESSION_TOTAL     CON_ID
------------------------------ ---------- ------------- ----------
Queries Parallelized                    1             1          0
DML Parallelized                        0             4          0
DDL Parallelized                        0             0          0
DFO Trees                               1             5          0
Server Threads                          2             0          0
Allocation Height                       2             0          0
Allocation Width                        1             0          0
Local Msgs Sent                        86          3113          0
Distr Msgs Sent                         0             0          0
Local Msgs Recv'd                      90          3123          0
Distr Msgs Recv'd                       0             0          0
DOP                                     2             0          0
Slave Sets                              1             0          0

Even though we used 4 PX servers concurrently, it shows we used 2 (Server Threads); this is because v$pq_sesstat only shows stats from the last active parallelizer, which was Line Id 32 in this case.

To find the actual number of PX servers used concurrently you need to look at v$px_session when the statement is running. Here is what that view shows when the statement is running and the second parallelizer is active (my session's SID which is also the QC SID was 26 in this case).

select qcsid, sid, server_group, server_set, server#
from v$px_session where qcsid=26;

     QCSID        SID SERVER_GROUP SERVER_SET    SERVER#
---------- ---------- ------------ ---------- ----------
        26        791            1          1          1 <--- Server Set #1, PX Server #1
        26         14            1          1          2 <--- Server Set #1, PX Server #2
        26        786            1          2          1 <--- Server Set #2, PX Server #1
        26         33            1          2          2 <--- Server Set #2, PX Server #2
        26         26                                    <--- Query Coordinator

This shows that at the time I looked at v$px_session there were 5 sessions working for my query: one was the QC (sid=26), the other 4 were PX server sessions. There was one active parallelizer (server_group=1), and it had 2 PX server sets (server_set) and 4 PX servers (server#). As the statement proceeds you will see this view reflect the currently active parallelizers and PX servers.
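If you only want the concurrent count rather than the detail, a quick sketch against the same view (using the QC SID of 26 from above) is:

select count(*) px_servers_in_use
from v$px_session
where qcsid = 26
and sid <> qcsid;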

SQL Monitor also nicely shows which parallelizers are active by showing the active operations in the plan.

In this screenshot you can see that only the first parallelizer is active at this time.

Multiple concurrent parallelizers

A typical case where multiple parallelizers run concurrently is noncorrelated subqueries. Here is an example showing this case.

SELECT /*+ parallel(2) */ DISTINCT prod_id
FROM sales
WHERE amount_sold >
  (SELECT AVG(amount_sold) FROM sales)
AND quantity_sold >
  (SELECT AVG(quantity_sold) FROM sales)  ;

We have 3 parallelizers in this plan, Line Ids 1, 10, and 16. Here is a timeline of the execution for this plan.

T0: This plan starts with parallelizer #1; since this parallelizer needs 2 PX server sets, it allocates 4 PX servers as the DOP is 2. This parallelizer requires the outputs of the filter subqueries, so now the other parallelizers will be started.

T1: While still keeping the initial 4 PX servers allocated, we now start the second parallelizer (#2); this parallelizer only needs one PX server set, so it allocates 2 PX servers. We now have a total of 6 PX servers concurrently allocated for this query.

T2: When the first subquery finishes running, this second parallelizer is done, so it releases the 2 PX servers it allocated.

T3, T4: Now the third parallelizer (#3) starts, allocates 2 PX servers and releases them when it finishes.

The first parallelizer now has the results of both subqueries and can continue scanning and filtering the sales table. So, this query starts at most 2 parallelizers concurrently and uses at most 6 PX servers concurrently at any point.

UPDATE, 14 Dec 2015: The actual order of allocating PX servers is parallelizer #1, #3, and #2. Please see the comments for this correction.

If we look at v$pq_sesstat after this query ends, it reports 8 PX servers (Server Threads), not 6. This is because this view does not show the number of PX servers used concurrently; it shows the accumulated number of PX servers over the execution, even when the same PX servers were released and allocated again.

STATISTIC                      LAST_QUERY SESSION_TOTAL     CON_ID
------------------------------ ---------- ------------- ----------
Queries Parallelized                    1             3          0
DML Parallelized                        0             0          0
DDL Parallelized                        0             0          0
DFO Trees                               3             7          0
Server Threads                          8             0          0
Allocation Height                       2             0          0
Allocation Width                        1             0          0
Local Msgs Sent                       120           296          0
Distr Msgs Sent                         0             0          0
Local Msgs Recv'd                     120           296          0
Distr Msgs Recv'd                       0             0          0
DOP                                     2             0          0
Slave Sets                              4             0          0

Again, if you want to find out the number of PX servers allocated concurrently for statements with multiple parallelizers use the view v$px_session instead.

Implications of multiple parallelizers

Downgrades

A statement with a single parallelizer allocates the required number of PX servers at the start and uses them without releasing them until it finishes, so the number of PX servers throughout the execution is constant. Statements with multiple parallelizers are different, as we saw in the above examples; they allocate PX servers when each parallelizer starts. Since parallelizers can start at different times during the execution, each parallelizer may run with a different number of PX servers based on the number of available processes in the system at that moment. Basically, the rules about DOP downgrades I talked about before apply to each parallelizer individually.

Consider the sequence of execution above. When parallelizer #1 starts it will try to allocate 2 PX servers. Assuming there are enough available PX servers in the system it will get those processes. When it finishes and releases them parallelizer #2 will start and try to allocate 4 PX servers. If at this point there are no available PX servers in the system this parallelizer will run serially. The same is true for the subsequent parallelizers.

Parallel Statement Queuing

Parallel Statement Queuing decides to queue or run a parallel statement based on its DOP and the number of available PX servers. It assumes no statement will use more than 2 PX server sets (thus DOP*2 number of PX servers). Consider the case where the DOP is 8 and there are 16 PX servers available below parallel_servers_target. Oracle will allow this query to run, but if this statement uses multiple concurrent parallelizers and starts using 24 PX servers the queuing point will be exceeded. When a statement is picked from the queue and allowed to run it is free to allocate any number of PX servers it needs. Depending on the number of these kinds of statements running at the same time all PX servers in the system can be consumed. This effectively eliminates the benefit of queuing and statements may start getting downgraded because of PX server shortage in the system. So, if you are using Parallel Statement Queuing and if you see that parallel_servers_target is exceeded look for statements with multiple parallelizers as possible suspects. We are working to fix this behavior in future releases. Until then make sure there is enough gap between parallel_servers_target and parallel_max_servers to prevent downgrades.
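A minimal sketch to check that gap (both parameter names are documented):

select name, value
from v$parameter
where name in ('parallel_servers_target', 'parallel_max_servers');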

Database Resource Manager DOP limits

We have had some users setting DOP limits using Database Resource Manager (DBRM) and expecting all statements to be limited to DOP*2 number of PX servers. As of today that expectation does not hold for statements with multiple concurrent parallelizers. For example, even if DBRM limits the DOP to 2, a statement with a DOP of 2 can use 6 PX servers concurrently, as we saw in example #2.

Summary

Here is a short summary of what we have discussed in this post.

1. Multiple parallelizers can run concurrently or nonconcurrently.

2. Each parallelizer allocates PX servers when it starts; the number of PX servers allocated by each parallelizer depends on the number of PX server sets it has and can be at most DOP*2.

3. v$pq_sesstat may show incorrect information depending on the concurrency of multiple parallelizers; watch the behavior at runtime instead.

4. Each parallelizer may or may not get the required number of PX servers depending on the number of available PX servers in the system.

5. Parallel Statement Queuing assumes each statement will use DOP*2 number of PX servers; this can cause statements with multiple parallelizers to be allowed to run and exceed the queuing point (parallel_servers_target). To prevent downgrades in this case, make sure there is enough gap between parallel_servers_target and parallel_max_servers.

I am planning to cover Parallel Statement Queuing in detail in a future post; we will come back to this topic there, so please come back for that one too.

In each release we are trying to decrease the number of cases that can generate plans with multiple parallelizers. If you are on 11g you will see that some plans with multiple parallelizers will switch to using a single parallelizer when you upgrade to 12c. In the next post I will talk about those cases and compare 11g to 12c in this regard.

Thursday Dec 03, 2015

Looking forward to #ukoug_tech15 conference

Next week I will be in Birmingham at the annual UKOUG Tech conference. I will be presenting on some of the new SQL features that we added in Database 12c to support analysis of big data... To give you a taster of what to expect during my session, here is a sort-of relevant Dilbert cartoon, courtesy of Scott Adams.

Dilbert-Approximate-Answers


 

[Read More]

Tuesday Dec 01, 2015

My highlights from DOAG 2015...

Last week I was in Nuremberg at the excellent annual German Oracle user group conference - DOAG15. This was my first visit to DOAG and I was amazed at the size of the event. It is huge, but most importantly it is very well organized and definitely one of the best, possibly the best, user conferences I have attended.

...and then there was the chance to drink some Glühwein with the database PM team on the way to the speaker dinner.

Nuremberg

[Read More]

Tuesday Nov 24, 2015

Little things to know about ... Oracle Partitioning (part one of hopefully many)

Oracle Partitioning is one of the most commonly used options on top of Enterprise Edition - if not the most often used one, which is, as you can guess, always a discussion in our buildings ;-)

Over the years Oracle Partitioning has matured significantly and become more powerful and flexible. But, as in real life, power and flexibility always come with little things that are good to know (no, that's not a synonym for complexity). So I am happy to see Connor McDonald just blogging about such a little detail around Interval Partitioning.

Check it out, it's worth it.

Monday Nov 23, 2015

PX Server Sets, Parallelizers, DFO Trees, Parallel Groups, ... Let's Talk About Terminology

In my last post I mentioned cases where a SQL statement uses more PX servers than expected, I said "There are rare cases when a statement uses more than DOP*2 number of PX servers depending on the plan shape.". I started this post to talk about those cases but then I thought maybe we should clarify the terminology before doing that. So let's go over the basic parallel execution terminology we use in the documentation and in the monitoring tools (V$ views, SQL Monitor, etc...).

DFO (Data Flow Operation), PX Server Set

A DFO is the basic unit of work carried out by PX servers in an execution plan. The set of PX servers working on a DFO is called a PX server set. The number of PX servers in a set is determined by the statement DOP.

Consider the following example.

SQL> explain plan for select /*+ parallel(2) */ * from sales;

Explained.

SQL> select * from table(dbms_xplan.display);
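The plan has roughly the following shape (a sketch consistent with the discussion below; Ids and TQ labels are illustrative, and cost and cardinality columns are omitted):

----------------------------------------------------
| Id | Operation             | Name     |    TQ    |
----------------------------------------------------
|  0 | SELECT STATEMENT      |          |          |
|  1 |  PX COORDINATOR       |          |          |
|  2 |   PX SEND QC (RANDOM) | :TQ10000 |  Q1,00   |
|  3 |    PX BLOCK ITERATOR  |          |  Q1,00   |
|  4 |     TABLE ACCESS FULL | SALES    |  Q1,00   |
----------------------------------------------------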

As a general rule, DFO boundaries are indicated by two columns in an execution plan: lines having PX SEND in the Operation column or TQ (Table Queue) entries in the Name column indicate DFO boundaries. In this plan we see that we have only one DFO, indicated by Line Id 2. This means this statement will use one PX server set having 2 PX servers as the DOP is 2.

Only very basic statements can be executed with a single DFO; most statements are executed by multiple DFOs. The following plan shows 2 DFOs, which means 2 PX server sets.

SQL> explain plan for 
  2  select /*+ parallel(2) */ cust_id,count(*)
  3  from sales
  4  group by cust_id;

Explained.

SQL> select * from table(dbms_xplan.display);
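Again as an abbreviated sketch consistent with the discussion below:

-------------------------------------------------------
| Id | Operation                 | Name     |    TQ    |
-------------------------------------------------------
|  0 | SELECT STATEMENT          |          |          |
|  1 |  PX COORDINATOR           |          |          |
|  2 |   PX SEND QC (RANDOM)     | :TQ10001 |  Q1,01   |
|  3 |    HASH GROUP BY          |          |  Q1,01   |
|  4 |     PX RECEIVE            |          |  Q1,01   |
|  5 |      PX SEND HASH         | :TQ10000 |  Q1,00   |
|  6 |       HASH GROUP BY       |          |  Q1,00   |
|  7 |        PX BLOCK ITERATOR  |          |  Q1,00   |
|  8 |         TABLE ACCESS FULL | SALES    |  Q1,00   |
-------------------------------------------------------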

We can see this again by looking at the PX SEND operations, Line Ids 2 and 5. Since we have 2 DFOs this statement will use 2 PX server sets; each set will have 2 PX servers as the DOP is 2, so this statement needs a total of 4 PX servers. One PX server set will perform Line Ids 5-8, the other set will perform Line Ids 2-4.

DFO Tree, Parallelizer

Any PX COORDINATOR in an execution plan is called a parallelizer. As the above example shows, there may be multiple DFOs under a parallelizer; these DFOs are grouped under a DFO tree, so the terms parallelizer and DFO tree are used interchangeably. In the above examples we see that there is one parallelizer, meaning one DFO tree.

A DFO tree in the plan is carried out by at most 2 PX server sets. If there is only one DFO under the DFO tree, as the first example shows, there will be only one PX server set. If there are two or more DFOs under the DFO tree, 2 PX server sets will be used; we limit the number of PX server sets to 2 for a DFO tree. This is why most statements use 2 PX server sets, meaning they will use DOP*2 number of PX servers. Let's look at a plan that has 3 DFOs.

SQL> explain plan for 
  2  select /*+ parallel(2) */ count(*)
  3  from sales s, customers c
  4  where s.cust_id=c.cust_id;

Explained.

SQL> select * from table(dbms_xplan.display);
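One possible shape for this plan (an illustrative sketch assuming hash distribution on both sides, which matches the three DFOs discussed below; the optimizer may of course choose a different distribution):

-----------------------------------------------------------
| Id | Operation                  | Name      |    TQ    |
-----------------------------------------------------------
|  0 | SELECT STATEMENT           |           |          |
|  1 |  SORT AGGREGATE            |           |          |
|  2 |   PX COORDINATOR           |           |          |
|  3 |    PX SEND QC (RANDOM)     | :TQ10002  |  Q1,02   |
|  4 |     SORT AGGREGATE         |           |  Q1,02   |
|  5 |      HASH JOIN             |           |  Q1,02   |
|  6 |       PX RECEIVE           |           |  Q1,02   |
|  7 |        PX SEND HASH        | :TQ10000  |  Q1,00   |
|  8 |         PX BLOCK ITERATOR  |           |  Q1,00   |
|  9 |          TABLE ACCESS FULL | CUSTOMERS |  Q1,00   |
| 10 |       PX RECEIVE           |           |  Q1,02   |
| 11 |        PX SEND HASH        | :TQ10001  |  Q1,01   |
| 12 |         PX BLOCK ITERATOR  |           |  Q1,01   |
| 13 |          TABLE ACCESS FULL | SALES     |  Q1,01   |
-----------------------------------------------------------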

Even though we have 3 DFOs there is only one DFO tree (one PX COORDINATOR), this means this statement will need 2 PX server sets, 4 PX servers as the DOP is 2.

Finding the Number of Parallelizers and PX Server Sets

You can find the number of PX server sets for a statement using the view v$pq_sesstat. If we run the last example statement above here is what we see.

SQL> select /*+ parallel(2) */ count(*)
  2  from sales s, customers c
  3  where s.cust_id=c.cust_id;

  COUNT(*)
----------
   9584391

SQL> select * from v$pq_sesstat;

STATISTIC                      LAST_QUERY SESSION_TOTAL     CON_ID
------------------------------ ---------- ------------- ----------
Queries Parallelized                    1             1          0
DML Parallelized                        0             0          0
DDL Parallelized                        0             0          0
DFO Trees                               1             1          0
Server Threads                          4             0          0
Allocation Height                       2             0          0
Allocation Width                        1             0          0
Local Msgs Sent                    387913        387913          0
Distr Msgs Sent                         0             0          0
Local Msgs Recv'd                  387913        387913          0
Distr Msgs Recv'd                       0             0          0
DOP                                     2             0          0
Slave Sets                              2             0          0

13 rows selected.

It shows that we had 1 parallelizer (indicated by the statistic DFO Trees), 2 PX server sets (indicated by the statistic Slave Sets), and 4 PX servers (indicated by the statistic Server Threads), and that the DOP was 2.

SQL Monitor shows the DFO trees and PX server sets in the Parallel tab of the SQL Monitor report as can be seen in the below screenshot.

It shows the PX server sets as "Parallel Set" and PX servers as "Parallel Server". It seems like we are doing everything we can to confuse users by using different names everywhere. In this SQL Monitor report we see that we had 2 PX server sets and 4 PX servers.

Multiple Parallelizers

Some statements can have more than one parallelizer (DFO tree). Since each parallelizer can use 2 PX server sets, these statements can use more than DOP*2 number of PX servers. You can identify such statements by looking at the plan as explained above: if the plan has multiple PX COORDINATORs, the statement has multiple parallelizers. The following plan has 2 parallelizers, as indicated by the number of PX COORDINATORs.

SQL> explain plan for
  2  select /*+ parallel(2) */ cust_id, (select max(cust_id) from customers)
  3  from customers
  4  order by cust_id;

Explained.

SQL> select * from table(dbms_xplan.display);

Here is what v$pq_sesstat shows after running this statement.

STATISTIC                      LAST_QUERY SESSION_TOTAL     CON_ID
------------------------------ ---------- ------------- ----------
Queries Parallelized                    1             7          0
DML Parallelized                        0             0          0
DDL Parallelized                        0             0          0
DFO Trees                               2            10          0
Server Threads                          6             0          0
Allocation Height                       2             0          0
Allocation Width                        1             0          0
Local Msgs Sent                     11878       1190228          0
Distr Msgs Sent                         0             0          0
Local Msgs Recv'd                   11873       1190215          0
Distr Msgs Recv'd                       0             0          0
DOP                                     2             0          0
Slave Sets                              3             0          0

We had 2 parallelizers (DFO Trees), 3 PX server sets (Slave Sets) and 6 PX servers (Server Threads). There are cases where v$pq_sesstat shows incorrect information when the plan has multiple parallelizers, so do not rely on this information for such plans; I will talk about those cases in the next post.

SQL Monitor shows DFO trees as Parallel Group in the Parallel tab as can be seen in the below screenshot.

Again we see that we had 2 parallelizers and 3 PX server sets.

After covering the basic terminology we can now talk about multiple parallelizers in more detail and look at how they work. Stay tuned for the next post. UPDATE 9 Dec 2015: That post is now published. 

Friday Nov 20, 2015

Review of Data Warehousing and Big Data at #oow15

DW BigData Review Of OOW15

This year OpenWorld was bigger, more exciting and packed with sessions about the very latest technology and product features. Most importantly, both data warehousing and Big Data were at the heart of this year’s conference across a number of keynotes and a huge number of general sessions. Our hands-on labs were all completely full as people got valuable hands-on time with our most important new features. The key focus areas at this year’s conference were:
  • Database 12c for Data Warehousing
  • Big Data and the Internet of Things 
  • Cloud from on-premise to on the Cloud to running hybrid Cloud systems
  • Analytics and SQL continue to evolve to enable more and more sophisticated analysis. 
All these topics appeared across the main keynote sessions, including live on-stage demonstrations of how many of our new features can be used to increase the performance and analytical capability of your data warehouse and big data management system - check out the on-demand videos for the keynotes and executive interviews....
[Read More]

Friday Oct 23, 2015

Performance Study: Big Data Appliance compared with DIY Hadoop

Over the past couple of months a team of Intel engineers has been working with our engineers on Oracle Big Data Appliance performance, especially on ensuring a BDA outperforms DIY Hadoop out of the box. The good news is that your BDA, as you know it today, is already 1.2x faster. We are now working to include a lot of the findings in BDA 4.3 and subsequent versions, so we are steadily expanding that 1.2x into a 2x out-of-the-box performance advantage. And that is all above and beyond the faster time to value a BDA delivers, as well as on top of the low cost you can get it for.

Read the full paper here.

But we thought we should add some color to all of this, and if you are at OpenWorld this year, come listen to Eric explain all of this in detail on Monday, October 26th at 2:45 in Moscone West room 3000.

If you can't make it, here is a short little dialog we had over the results and both Eric and Lucy's take on the work they did and what they are up to next.

Q: What was the most surprising finding in tuning the system?

A: We were surprised how well the BDA performed right after its installation. Having worked for over 5 years on Hadoop, we understand it is a long iterative process to extract the best possible performance out of your hardware. BDA was a well-tuned machine and we were a little concerned we might not have much value to add... 

Q: Anything that you thought was exciting but turned out to be not such a big thing?

A: We were hoping for 5x gains from our work, but only got 2x... But, in all seriousness, we were hoping for better results from some of our memory and Java garbage collection tuning. Unfortunately, they only resulted in marginal single-digit gains.

Q: What is next?

A: There is a long list of exciting new products coming from Intel in the coming year, such as hardware-accelerated compression, 3D XPoint, almost-zero-latency PCIe SSDs and, not to forget, new processors. We are excited at the idea of tightly integrating them all with Big Data technologies! What is a better test bed than the BDA? A full software/hardware solution!

Looks like we have a lot of fun things to go work on and with, as well as of course looking into performance improvements for BDA in light of Apache Spark.

See you all at OpenWorld, or once again, read the paper here

Wednesday Oct 14, 2015

Parallel Execution Questions at Oracle OpenWorld

Because of some logistical reasons we do not have a specific Parallel Execution booth on the demo grounds at OpenWorld this year. Instead the PX development team will be at the SQL Analytics and Analytic Views booth on Oct 27, Tue, 2pm-5pm and Oct 28, Wed, 10:15am-1pm. If you have any questions about PX or if you just want to chat with PX developers find us at that booth. You can also put your questions beforehand as comments here and come talk about them at the booth.

Other than that my session on how to make people happy in a DW will be at Moscone South 307 on Oct 27, Tue, 12:15pm. Come and join us to talk about how the Oracle Database is nice to all of its users and treats them well.

Tuesday Oct 13, 2015

Big Data SQL 2.0 - Now Available

With the release of Big Data SQL 2.0 it is probably time to do a quick recap and introduce the marquee features in 2.0. The key goals of Big Data SQL are to expose data in its original format, stored within Hadoop and NoSQL databases, through high-performance Oracle SQL that is offloaded to storage-resident cells or agents. The architecture of Big Data SQL closely follows the architecture of Oracle Exadata Storage Server Software and is built on the same proven technology.

Retrieving Data

With data in HDFS stored in an undetermined format (schema on read), SQL queries require some constructs to parse and interpret the data so it can be processed as rows and columns. For this, Big Data SQL leverages the Hadoop constructs, notably the InputFormat and SerDe Java classes, optionally through Hive metadata definitions. Big Data SQL then layers the Oracle Big Data SQL Agent on top of this generic Hadoop infrastructure, as can be seen below.

Accessing HDFS data through Big Data SQL

Because Big Data SQL is based on Exadata Storage Server Software, a number of benefits are instantly available. Big Data SQL not only can retrieve data, but can also score Data Mining models at the individual agent, mapping model scoring to an individual HDFS node. Likewise querying JSON documents stored in HDFS can be done with SQL directly and is executed on the agent itself.

Smart Scan

Within the Big Data SQL Agent, functionality similar to that of Exadata Storage Server Software is available. Smart Scan applies the row filtering and column projection from a given SQL query to the data streaming from the HDFS DataNodes, reducing the data that flows to the database to fulfill that query. The benefits of Smart Scan for Hadoop data are even more pronounced than for Oracle Database, as tables are often very wide and very large. Because of the elimination of data at the individual HDFS node, queries across large tables are now possible within reasonable time limits, enabling data warehouse style queries to be spread across data stored in both HDFS and Oracle Database.

Storage Indexes

Storage Indexes - new in Big Data SQL 2.0 - provide the same IO-elimination benefits to Big Data SQL as they provide to SQL on Exadata. The big difference is that in Big Data SQL the Storage Index works on an HDFS block (on the BDA, 256MB of data) and spans 32 columns instead of the usual 8. The Storage Index is fully transparent to both the Oracle Database and the underlying HDFS environment. As with Exadata, the Storage Index is a memory construct managed by the Big Data SQL software and is invalidated automatically when the underlying files change.

Concepts for Storage Indexes

Storage Indexes work on data exposed via Oracle external tables using both the ORACLE_HIVE and ORACLE_HDFS types. Fields are mapped to these external tables, and the Storage Index is attached to the Oracle (not the Hive) columns, so that when a query references the column(s), the Storage Index kicks in when appropriate. In the current version, Storage Indexes do not support tables defined with Storage Handlers (e.g. HBase or Oracle NoSQL Database).
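As a hedged sketch of such a definition (the table, directory, and Hive table names here are hypothetical; ORACLE_HIVE and the com.oracle.bigdata.tablename access parameter are the documented Big Data SQL constructs):

CREATE TABLE web_logs_ext (
  log_time TIMESTAMP,
  user_id  NUMBER,
  url      VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (com.oracle.bigdata.tablename=default.web_logs)
)
REJECT LIMIT UNLIMITED;

-- Predicates on the Oracle columns, like this one, are what the
-- Storage Index can use to skip whole HDFS blocks:
SELECT COUNT(*) FROM web_logs_ext WHERE user_id = 42;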

Compound Benefits

The Smart Scan and Storage Index features deliver compound benefits. Where Storage Indexes reduce the IO done, Smart Scan then enacts the same row filtering and column projection. This latter step remains important as it reduces the data transferred between systems.

To learn more about Big Data SQL, join us at Open World in San Francisco at the end of the month.

Friday Oct 09, 2015

Android App for Data Warehousing and Big Data at #oow15

Android App for OOW15

A number of people have asked if I could make a smartphone app version of my usual OpenWorld web app (OpenWorld 2015 on your smartphone and tablet) which does not require a data or wifi connection. For Android users there is some good news: here is my new Android app for OpenWorld that installs on your smartphone and runs without needing a wifi connection or a mobile data connection.

[Read More]

Tuesday Oct 06, 2015

Online Calendar for Data Warehousing and Big Data Sessions at #oow15 now available


I have published a shared online calendar that contains all the most important data warehouse and big data sessions at this year’s OpenWorld. The following links will allow you to add this shared calendar to your own desktop calendar application or view it via a browser...

 

[Read More]

Thursday Oct 01, 2015

DOP Downgrades, or Avoid The Ceiling

We have talked before about how to find the reason for a DOP downgrade. Let's look at the most undesirable downgrade reason, "DOP downgrade due to insufficient number of processes", and why it is undesirable.

PX Server Pool

Oracle allocates PX servers to SQL statements from a pool of processes. The size of this pool is determined by the parameter parallel_max_servers. This parameter acts as the limit for the number of PX servers that can be spawned by the database instance, just like the processes parameter limits the total number of processes for the instance. When the number of PX servers in an instance reaches parallel_max_servers no more PX servers will be spawned. This makes sure that the system does not get overloaded with processes.

The single exception to this is the use of PX servers to access GV$ views in RAC. When you query a GV$ view, one PX server is allocated on each instance; these PX servers are not counted against parallel_max_servers, so they do not affect the number of PX servers that can be used by statements accessing other objects.

You can use the view V$PX_PROCESS to see the PX servers in your instance. This view also lists the PX servers used for queries accessing GV$ views in RAC, so you can use the predicate IS_GV='FALSE' to filter out those PX servers.

SQL> select server_name,status from v$px_process where is_gv='FALSE';

SERVER_NAME          STATUS
-------------------- ---------
P007                 IN USE
P001                 IN USE
P005                 IN USE
P004                 IN USE
P003                 IN USE
P000                 IN USE
P002                 IN USE
P006                 IN USE
P008                 AVAILABLE
P00A                 AVAILABLE
P00F                 AVAILABLE
P00D                 AVAILABLE
P009                 AVAILABLE
P00E                 AVAILABLE
P00B                 AVAILABLE
P00C                 AVAILABLE

16 rows selected.

Be aware that Oracle will spawn PX servers up to parallel_max_servers when needed; this view only lists PX servers that have already been spawned. So do not expect the number of PX servers in this view to be equal to parallel_max_servers. The PX servers currently being used have STATUS='IN USE'.

The following query looks at the in-use PX servers and parallel_max_servers and shows the number of in-use PX servers and the number of available PX servers.

SELECT
  (SELECT COUNT(*)
   FROM v$px_process
   WHERE is_gv = 'FALSE' AND status = 'IN USE') inuse,
  (SELECT value FROM v$parameter
   WHERE name = 'parallel_max_servers')
  -
  (SELECT COUNT(*)
   FROM v$px_process
   WHERE is_gv = 'FALSE' AND status = 'IN USE') available
FROM dual;

INUSE AVAILABLE
----- ---------
    8        28

PX Server Allocation to Statements

When a SQL statement requires PX servers these processes are allocated from the PX server pool. The number of PX servers required for a statement depends on the degree of parallelism (DOP) and the execution plan shape. Most statements are executed with DOP*2 number of PX servers because of the producer/consumer model. There are rare cases when a statement uses more than DOP*2 number of PX servers depending on the plan shape. Let's not go into the details of when this can happen as I will cover these cases in another blog post soon.

Here is an example showing two queries running on an instance with parallel_max_servers=36. One query is running with DOP=8 and using 16 PX servers, the other query is running with DOP=4 and using 8 PX servers.

As you see in the picture there are 12 PX servers not used by any statement, these are available for any statement.

Downgrades Due to PX Server Shortage

With the default configuration of the Oracle Database (specifically when parallel_degree_policy=MANUAL), when the required number of PX servers for a statement is higher than the number of available PX servers, that statement gets downgraded. This means the statement will run with fewer PX servers than it requires. When the statement is downgraded because there are not enough available PX servers, you will see the reason "DOP downgrade due to insufficient number of processes".

Continuing with the same example the following picture shows a new query submitted with DOP=8 and requires 16 PX servers.

Since there are only 12 available PX servers this statement gets downgraded from DOP=8 to DOP=6 and gets all 12 available PX servers.

Consider another query submitted at this time with DOP=8. Even though this query requires 16 PX servers it will be given none because there are no available PX servers left. This query gets downgraded to DOP=1 and runs serially.

Downgraded? So What?

DOP downgrades can have a huge negative impact on SQL performance. It is not easy to quantify this impact without testing your query with different DOPs, as it depends on the scalability of the statement and the downgrade percentage. If your statement gets downgraded from DOP=48 to DOP=36, maybe you can live with that, but it gets worse as the downgrade percentage gets higher. For example, if the last statement above would complete in 10 minutes with DOP=8, it will now finish in 80 minutes assuming linear scalability, as it will be using one process instead of eight. A statement cannot change its DOP after it starts, so even if the required number of PX servers becomes available just after the start of execution, this statement will not be able to use them.

DOP downgrades due to PX server shortage also make your SQL performance unpredictable. The performance of a statement depends on the number of available PX servers when it is executed; there is no guarantee that it will get the required number of PX servers. This causes users to get widely varying response times for the same statement.
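One way to spot downgraded statements after the fact (a sketch, assuming SQL Monitoring is available and licensed) is to compare the requested and allocated PX server counts in v$sql_monitor:

select sql_id, px_servers_requested, px_servers_allocated
from v$sql_monitor
where px_servers_allocated < px_servers_requested;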

Parallel Statement Queuing to Prevent Downgrades

This brings us to the next post in which we will talk about the Parallel Statement Queuing feature introduced in 11.2. Parallel Statement Queuing prevents downgrades by limiting the number of concurrent parallel statements in the system. When the required number of PX servers for a statement is higher than the number of available PX servers it queues that statement until that number of PX servers become available. More details to follow in the next post.

About

The Data Warehouse Insider is written by the Oracle product management team and sheds light on all things data warehousing and big data.
