Tuesday Apr 14, 2015

Statement of Direction -- Big Data Management System

Introduction: Oracle Big Data Management System Today 

As today's enterprises embrace big data, their information architectures must evolve. Every enterprise has data warehouses today, but the best-practices information architecture also embraces emerging technologies such as Hadoop and NoSQL. Today’s information architecture recognizes that data is stored not only in increasingly disparate data platforms, but also in increasingly disparate locations: on-premises and potentially across multiple cloud platforms. The ideal of a single monolithic ‘enterprise data warehouse’ has faded as a new, more flexible architecture has emerged. Oracle calls this new architecture the Oracle Big Data Management System, and today it consists of three key components:

  • The data warehouse, running on Oracle Database and Oracle Exadata Database Machine, is the primary analytic database for storing much of a company’s core transactional data: financial records, customer data, point-of-sale data and so forth. Despite now being part of a broader architecture, the demands on the RDBMS for performance, scalability, concurrency and workload management are greater than ever; Oracle Database 12c introduced Oracle Database In-Memory (with columnar tables, SIMD processing, and advanced compression schemes) as the latest in a long succession of warehouse-focused innovations. The market-leading Oracle Database is the ideal starting point for customers to extend their architecture to the Big Data Management System.
  • The ‘data reservoir’, hosted on Oracle Big Data Appliance, will augment the data warehouse as a repository for the new sources of large volumes of data: machine-generated log files, social-media data, and videos and images -- as well as a repository for more granular transactional data or older transactional data which is not stored in the data warehouse. Oracle’s Big Data Management System embraces complementary technologies and platforms, including open-source technologies: Oracle Big Data Appliance includes Cloudera’s Distribution of Hadoop and Oracle NoSQL Database for data management.
  • A ‘franchised query engine,’ Oracle Big Data SQL, enables scalable, integrated, in-situ access to the entire Big Data Management System. SQL is the accepted language for day-to-day data access and analytic queries, and thus SQL is the primary language of the Big Data Management System. Big Data SQL enables users to combine data from Oracle Database, Hadoop and NoSQL sources within a single SQL statement, as sketched in the example following this list. Leveraging the architecture of Exadata Storage Software and the SQL engine of Oracle Database, Big Data SQL delivers high-performance access to all data in the Big Data Management System.
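As a purely illustrative sketch (the table and column names below are hypothetical, and a Hive table of the same name is assumed to exist on the Hadoop cluster), a Big Data SQL query that combines Hadoop-resident data with relational data could look something like this:

-- With Big Data SQL installed, this external table maps onto a Hive table
-- of the same name on the Hadoop cluster (all names here are hypothetical).
CREATE TABLE web_logs (
  session_id   NUMBER,
  customer_id  NUMBER,
  page_url     VARCHAR2(4000))
ORGANIZATION EXTERNAL (TYPE ORACLE_HIVE);

-- One SQL statement combining relational and Hadoop-resident data
SELECT c.cust_name,
       COUNT(*) AS page_views
FROM   customers c, web_logs w
WHERE  c.customer_id = w.customer_id
GROUP  BY c.cust_name;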

Using this architecture, the Oracle Big Data Management System combines the performance of Oracle’s market-leading relational database, the power of Oracle’s SQL engine, and the cost-effective, flexible storage of Hadoop and NoSQL. The result is an integrated architecture for managing Big Data, providing all of the benefits of Oracle Database, Exadata, and Hadoop, without the drawbacks of independently-accessed data repositories.  

Note that the scope of this statement of direction is the data platform for Big Data. An enterprise Big Data solution would also comprise big data tools and big data applications built upon this data platform.

Read the full Statement of Direction -- Big Data Management System here.


Wednesday Apr 08, 2015

Noteworthy event for big data and data warehousing

I know there is a choice of conferences and events, but this one strikes me as very interesting. What stands out (amongst a whole bunch of other interesting sessions) is the extra day with a master class on Information Architecture in the "big data world". Officially this is called the Oracle Information Management and Big Data Reference Architecture, and I think we - as a community - can all use some more foundation on information management. Or maybe it is a nice refresher for those of us who have been deep in the weeds of our daily jobs?

On top of the master class, there are a whole load of interesting sessions. Some are really on the consumer side, some are focused on the back-end, but I think both perspectives are very useful.

One of the sessions that looks very promising to me (in Brighton) is "A journey into big data and analytics - Liberty Global", and in either location anything on Big Data Discovery - really cool stuff, and something I think will change how we derive value from big data.

So, where do you go? Brighton and Atlanta of course...

Read more about the event here.

PS. I'm hearing our own Reiner Zimmermann is doing a keynote in Brighton! 

Tuesday Apr 07, 2015

Oracle Academy: Data Science Bootcamp for 2015

I'm pleased to announce that Oracle Academy has released our Data Science Bootcamp for 2015.  As I've spent a great deal of time over the past few months helping Oracle Academy develop the content, I wanted to briefly explain what the Bootcamp series is and why it's worth a look.

 What is This Thing?

The Data Science Bootcamp is an attempt at providing asynchronous training for data science fundamentals.  There are videos for each of the 16 lessons, example code, tight integration with our Big Data Lite VM, and even an online textbook.  Between these elements, we think students can learn in the way that best fits their schedule and level of interest.  Watch, try, or read about each problem in whatever helps you learn best.

[Read More]

Friday Mar 27, 2015

Open World 2015 call for papers - my simple guidelines

Most of you will already have received an email from the OpenWorld team announcing the call for papers for this year’s conference: https://www.oracle.com/openworld/call-for-proposals.html. Each year, a lot of people ask me how they can increase their chances of getting their paper accepted. Well, I am going to start by stating that product managers have absolutely no influence over which papers are accepted - even mentioning that a product manager will be co-presenting with you will not increase your chances! Yes, sad but true!

So how do you make sure that your presentation title and abstract catches the eye of the selection committee? Well, here is my list of top 10 guidelines for submitting a winning proposal...

[Read More]

Why SQL Part 4 - Intelligent and continuous evolution

In the third part of this series (Why SQL Part 3 - Simple optimization makes life simpler) I explained that by letting the database perform the optimizations at source, where the data is located, it is possible to share tuning improvements across a wide spectrum of front-end systems (BI reports, dashboards, applications etc.). The ability of the database to “own” the optimization and processing characteristics of how a SQL statement is executed is an important element in the success of SQL as a data analysis language. In the whitepaper that forms the basis for this series of blog posts this section is quite brief. This was done for reasons that will become obvious as you work through this blog post.

Now that we have looked at the powerful framework that supports SQL and the simplified optimization layer we can move on to look at how SQL has evolved over time to support ever more sophisticated features and functions and compare this with some of the newer open source frameworks.

[Read More]

Tuesday Mar 24, 2015

Finding the Distribution Method in Adaptive Parallel Joins

Parallel Execution (PX) uses a producer/consumer model which requires distribution of rows between producers and consumers (two PX server sets). Until 12c, the distribution method was decided at parse time based on optimizer statistics. 12c brings adaptive parallel distribution methods, which enable the distribution method decision to be taken at runtime based on the actual number of rows. This functionality is explained in the "Optimizer with Oracle Database 12c" white paper in the "Adaptive Parallel Distribution Methods" section. It was also discussed by Randolf Geist here.

So, how can we find out which distribution method was chosen at runtime? One approach is to use the view V$PQ_TQSTAT as explained by Randolf in his post, or to look at the rows processed by producers and consumers in SQL Monitor as explained in the above-mentioned paper. But the easiest way to find this information is to look at the OTHER column in SQL Monitor.
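For completeness, a minimal V$PQ_TQSTAT check might look like the sketch below; remember that this view has to be queried from the same session, right after the parallel statement completes:

SQL> select dfo_number, tq_id, server_type, process, num_rows
  2  from   v$pq_tqstat
  3  order  by dfo_number, tq_id, server_type, process;

Comparing NUM_ROWS across the producer processes of the relevant table queue shows whether the rows were broadcast, hashed, or distributed round-robin.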

The OTHER column shows a binocular icon in the lines with PX SEND HYBRID HASH row source.

When you click on those icons you can see the distribution method used at runtime.

Like the DOP downgrade reason I talked about in my previous post, this is shown as a cryptic number code as of 12.1. For the adaptive distribution methods there are three possible values. The mappings for these codes are: 6 = BROADCAST, 5 = ROUND-ROBIN, and 16 = HASH distribution.

Any comments and feedback welcome. 

Friday Mar 20, 2015

Finding the Reason for DOP Downgrades

Whether with manual DOP or with Auto DOP, the runtime DOP of a statement is not always the same as the requested DOP. There are several reasons for this, like hitting the maximum number of parallel execution (PX) servers allowed, hitting the DOP limit set by the resource manager, and so on.

To find out why a specific statement's DOP got downgraded you can use the SQL Monitor reports in 12.1. SQL Monitor shows the DOP downgrade in the General section as a people icon with a red down-pointing arrow; when you hover over that icon you can see the actual DOP, the downgrade percentage, the number of PX servers requested, and the number of PX servers allocated, like below.

To find out the reason for the downgrade you can click the binocular icon in the OTHER column in the PX COORDINATOR line.


That shows you the actual DOP after the downgrade and the reason for the downgrade (as a cryptic code obviously) like below.

Here is the list of possible reasons and their number codes as of 12.1.

       350 DOP downgrade due to adaptive DOP
       351 DOP downgrade due to resource manager max DOP
       352 DOP downgrade due to insufficient number of processes
       353 DOP downgrade because slaves failed to join

You can look at an example SQL Monitor report showing this here.
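If you want to pull the report yourself rather than browse it in Enterprise Manager, a sketch using DBMS_SQLTUNE (the sql_id below is just a placeholder) would be:

SQL> select dbms_sqltune.report_sql_monitor(
  2           sql_id => '&sql_id',
  3           type   => 'ACTIVE') as report
  4  from   dual;

Use type => 'TEXT' instead if you just want a plain-text version of the same report.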

We plan to provide this information in an easy-to-reach section of SQL Monitor, in a human-readable format, in a future release.

Thursday Mar 19, 2015

Why SQL Part 3 - Simple optimization makes life simpler

In the second part of this series (Why SQL Part 2 - It has a powerful framework) I explained the underlying framework that SQL is based on. That framework is referred to as “relational algebra” and it was first described in 1970 by E.F. Codd while at IBM. It provides well-founded semantics for both modelling and querying a set of data. Codd's framework established a series of basic principles that govern the construction and execution of a query, and these can be summarized as follows (a short SQL example after the list illustrates each one):

  1. Projection
  2. Filter
  3. Join
  4. Aggregate
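As a simple illustration (the table and column names here are hypothetical), a single SQL statement can express all four principles at once:

SELECT   d.dept_name,                          -- projection: choose columns
         SUM(e.salary) AS total_pay            -- aggregate: summarize rows
FROM     employees e
JOIN     departments d                         -- join: combine the two sets
ON       (e.dept_id = d.dept_id)
WHERE    e.hire_date >= DATE '2014-01-01'      -- filter: restrict the rows
GROUP BY d.dept_name;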

Now that we have examined the basics of the relational model, let’s move on to how the language actually determines the best way to answer a question based on its use of simplified and transparent optimizations. To understand why simplified and transparent optimization is so important it is useful to consider two of the most common programming paradigms: procedural and declarative. Let’s explore these two approaches...

[Read More]

Wednesday Mar 18, 2015

Production workloads blend Cloud and On-Premise Capabilities

Prediction #7 - blending production workloads across cloud and on-premise - in Oracle's Enterprise Big Data Predictions 2015 is a tough nut to crack. Yet we at Oracle think this is really the direction we will all go. Sure, we can debate the timing, and whether or not this happens in 2015, but it is something that will come to all of us who are looking towards that big data future. So let’s discuss what we think is really going to happen over the coming years in the big data and cloud world.

Reality #1 – Data will live both in the cloud and on-premise

We see this today. Organizations run Human Capital Management systems in the cloud and integrate these with data from outside cloud-based systems (think, for example, LinkedIn, staffing agencies etc.), while their general data warehouses and new big data systems are all deployed as on-premise systems. We also see the example in the prediction where various auto dealer systems uplink into the cloud to enable the manufacturer to consolidate all of their disparate systems. This data may be translated into data warehouse entries and may live in two worlds – both in the cloud and on-premise – whether for deep-dive analytics or in aggregated form.

Reality #2 – Hybrid deployments are difficult to query and optimize

We also see this today and it is one of the major issues of living in the hybrid world of cloud and on-premise. A lot of the issues are driven by low-level technical limitations, specifically network bandwidth and upload/download capacity into and out of the cloud environment. The other challenges are really (big) data management challenges, in that they go to the art of running queries across two ecosystems with very different characteristics. We see a trend to use engineered systems on-premise, which delivers optimized performance for the applications, but in the cloud we often see virtualization pushing the trade-off towards ease of deployment and ease of management. These completely different ecosystems make optimization of queries across them very difficult.

Solution – Equality brings optimizations to mixed environments

As larger systems like big data and data warehouse systems move to the cloud, better performance becomes a key success criterion. Oracle is uniquely positioned to drive both standardization and performance optimizations into the cloud by deploying on engineered systems like Oracle Exadata and Oracle Big Data Appliance. Deploying engineered systems enables customers to run large systems in the cloud with the performance they see today in on-premise deployments. This then means that we do not live in a world divided into slow and fast, but in a world of fast and fast.

This equivalence also means that we have the same functionality in both worlds, and here we can sprinkle in some – future – Oracle magic, where we start optimizing queries to take into account where the data lives, how fast we can move it around (the dreaded network bandwidth issue) and where we need to execute code. Now, how are we going to do this? That is a piece of magic, and you will just need to wait a bit... suffice it to say we are hard at work solving this challenging topic.

Wednesday Mar 11, 2015

Oracle Big Data Lite 4.1 VM is available on OTN

Oracle Big Data Lite 4.1 VM is now available for download on OTN. Big Data Lite includes many of the key capabilities of Oracle's big data platform. Each of the components has been configured to work together - and there are many hands-on labs and demonstrations to help you get started using the system. Below is a listing of what's included:

  • Oracle Enterprise Linux 6.5
  • Oracle Database 12c Release 1 Enterprise Edition (12.1.0.2) - including Oracle Big Data SQL-enabled external tables, Oracle Multitenant, Oracle Advanced Analytics, Oracle OLAP, Oracle Partitioning, Oracle Spatial and Graph, and more.
  • Cloudera Distribution including Apache Hadoop (CDH5.3.0)
  • Cloudera Manager (5.3.0)
  • Oracle Big Data Connectors 4.1
    • Oracle SQL Connector for HDFS 3.2.0
    • Oracle Loader for Hadoop 3.3.0
    • Oracle Data Integrator 12c
    • Oracle R Advanced Analytics for Hadoop 2.4.1
    • Oracle XQuery for Hadoop 4.1.0
  • Oracle NoSQL Database Enterprise Edition 12cR1 (3.2.5)
  • Oracle JDeveloper 12c (12.1.3)
  • Oracle SQL Developer and Data Modeler 4.0.3
  • Oracle Data Integrator 12cR1 (12.1.3)
  • Oracle GoldenGate 12c
  • Oracle R Distribution 3.1.1
  • Oracle Perfect Balance 2.3.0
  • Oracle CopyToBDA 1.1 

 

Enjoy!

Tuesday Mar 10, 2015

Parallel DML on Tables with LOB Columns

Until 12c, running parallel DML on tables with LOB columns required that table to be partitioned. Starting with 12c, we allow parallel INSERTs into non-partitioned tables with LOB columns provided that those columns are declared as SecureFiles LOBs. Parallel UPDATE, DELETE, and MERGE are still not supported. The documentation on this has some errors leading some customers to expect all parallel DML to be supported, so I wanted to point out the current behavior.

INSERTs into non-partitioned table with SecureFiles LOB column

Here is an example showing that the INSERT will be executed in parallel.

SQL> create table t (a number,b varchar2(200),c clob) lob(c) store as securefile;
Table created.

SQL> explain plan for insert /*+ parallel enable_parallel_dml */
  2  into t select /*+ parallel */ * from t;

Explained.

SQL> select * from table(dbms_xplan.display(format=>'basic +note'));


Other DML on non-partitioned table with SecureFiles LOB column

A DELETE statement, on the other hand, is executed serially, as indicated in the note section of the plan. The note also shows the reason, which is that the table is non-partitioned. This behavior applies to UPDATE and MERGE statements too.

SQL> explain plan for delete /*+ parallel enable_parallel_dml */  from t;
Explained.

SQL> select * from table(dbms_xplan.display(format=>'basic +note'));


DML on partitioned table with SecureFiles LOB column

For partitioned tables with LOB columns, all parallel DML operations are supported as before. Remember that the effective DOP will be the number of partitions; this behavior is also the same as before and is because we use the partitions as PX granules in this case.

This example shows that parallel DML is allowed on a partitioned table with LOB columns. V$PQ_SESSTAT shows we get a DOP of 4 even though the plan shows a DOP of 8; that is because the table has 4 partitions.

SQL> create table t (a number,b varchar2(200),c clob)
  2  lob(c) store as securefile partition by hash(a) partitions 4;

Table created.

SQL> explain plan for delete /*+ parallel(8) enable_parallel_dml */  from t;

Explained.

SQL> select * from table(dbms_xplan.display(format=>'basic +note'));



SQL> delete /*+ parallel(8) enable_parallel_dml */  from t;

0 rows deleted.

SQL> select * from V$PQ_SESSTAT;

STATISTIC                      LAST_QUERY SESSION_TOTAL     CON_ID
------------------------------ ---------- ------------- ----------
Queries Parallelized                    0             0          0
DML Parallelized                        1             1          0
DDL Parallelized                        0             0          0
DFO Trees                               1             1          0
Server Threads                          4             0          0
Allocation Height                       4             0          0
Allocation Width                        1             0          0
Local Msgs Sent                         8             0          0
Distr Msgs Sent                         0             0          0
Local Msgs Recv'd                       8             0          0
Distr Msgs Recv'd                       0             0          0
DOP                                     4             0          0
Slave Sets                              1             0          0

13 rows selected.

As always, we are fixing the documentation about this.

Monday Mar 09, 2015

Why SQL Part 2 - It has a powerful framework

In the first part (Why SQL is the natural language for data analysis) of this series of blog posts I explained why analytics is important to the business, how the volume of data along with the types of data is continuing to expand and why that makes it vital that you select the right language for data analysis. Many of us work with SQL every day and take for granted many of its unique features, its power, flexibility and the richness of the analytics.

This familiarity with SQL means that sometimes we are a bit slow to question projects that invest in other big data languages such as MapReduce, Impala, Spark, Java and many of the new generation of SQL-like open source projects. While many of these new languages are considered “cool”, it is very easy to end up building new proprietary data silos, investing in a language that is eventually replaced by another open source project, or investing time and effort in code that eventually fails to deliver the required analytics.

One of the aims of this series of blog posts is to refresh your memory about why SQL has been so successful in the area of analytics. SQL has four unique features that make it perfect for data analysis...

[Read More]

Tuesday Mar 03, 2015

Are you leveraging Oracle's database innovations for Cloud and Big data?

If you are interested in big data, Hadoop, SQL and data warehousing then mark your calendars because on March 18th at 10:00AM PST/1:00PM EST, you will be able to hear Tom Kyte (Oracle Database Architect) talk about how you can use Oracle Big Data SQL to seamlessly integrate all your Hadoop big data datasets with your relational schemas stored in Oracle Database 12c. As part of this discussion Tom will outline how you can build the perfect foundation for your enterprise big data management system using Oracle's innovative technology.

If you are working on a data warehousing project and/or a big data project then this is one webcast you will not want to miss, so register today (click here) to hear the latest about Oracle Database innovations and best practices. The full list of speakers is:

Tom Kyte
Oracle Database Architect
Keith Wilcox
VP, Database Administration
Epsilon
Bill Callahan
Director, Principal Engineer,
CCC Information Services, Inc.

Why SQL is the natural language for data analysis

Analytics is a must-have component of every corporate data warehousing and big data project. It is the core driver for the business: the development of new products, better targeting of customers with promotions, hiring of new talent and retention of existing key talent. Yet the analysis of data in “big data environments” in particular - data stored and processed outside of classical relational systems - continues to be a significant challenge for the majority of companies. According to Gartner, 72% of companies are planning to increase their expenditure on big data yet 55% state they don’t have the necessary skills to make use of it.

The objective of this series of articles, which will appear over the coming weeks, is to explain why SQL is the natural language for any kind of data analysis, including big data, and the benefits that this brings for application developers, DBAs and business users.

[Read More]

Wednesday Feb 25, 2015

New Way to Enable Parallel DML

This post was triggered by Jonathan Lewis' tweet here.

The conventional way of enabling parallel DML is to enable it with an ALTER SESSION command as explained in the documentation. This command enables parallel DML for the session and all subsequent DML statements are candidates for parallel execution provided that the rules and restrictions for parallel DML are met.
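For reference, the session-level switch is simply:

SQL> alter session enable parallel dml;

Session altered.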

12c introduces a new way of enabling and disabling parallel DML. Rather than enabling or disabling it for the whole session, you can enable or disable it per statement using new hints. The hint ENABLE_PARALLEL_DML enables parallel DML for the statement, and the hint DISABLE_PARALLEL_DML disables it for the statement. All rules and restrictions for parallel DML still apply; these hints are simply alternatives to the related ALTER SESSION commands.

This example shows the plans for the same statement without and with the ENABLE_PARALLEL_DML hint.
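The two plans in the original example were shown as screenshots; as a minimal sketch of the comparison (assuming a simple target table t), the two variants are of the form:

SQL> explain plan for
  2  insert /*+ parallel */ into t select * from t;

SQL> explain plan for
  2  insert /*+ parallel enable_parallel_dml */ into t select * from t;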

We can see that parallel DML is not enabled for the statement without the hint, as indicated by the LOAD operation being above the PX COORDINATOR in the plan. In 12c the notes section nicely and clearly shows that it is not enabled.

With the hint, we see that the notes section does not mention parallel DML as disabled and the LOAD operation is under the PX COORDINATOR; both of these indicate that parallel DML is enabled.

These two new hints are available starting with 12.1.0.1 and can be used regardless of the value of the OPTIMIZER_FEATURES_ENABLE parameter. They can be used in INSERT, UPDATE, DELETE, and MERGE statements. We are updating the related documentation to include them; I will update this post with links when the documentation is refreshed.

Sep 2, 2015 UPDATE: The documentation has been updated to include these hints, here and here.

About

The Data Warehouse Insider is written by the Oracle product management team and sheds light on all things data warehousing and big data.
