Tuesday Apr 14, 2015

Statement of Direction -- Big Data Management System

Introduction: Oracle Big Data Management System Today 

As today's enterprises embrace big data, their information architectures must evolve. Every enterprise has data warehouses today, but the best-practices information architecture embraces emerging technologies such as Hadoop and NoSQL. Today’s information architecture recognizes that data is stored not only in increasingly disparate data platforms, but also in increasingly disparate locations: on-premises and potentially multiple cloud platforms. The ideal of a single monolithic ‘enterprise data warehouse’ has faded as a new, more flexible architecture has emerged. Oracle calls this new architecture the Oracle Big Data Management System, and today it consists of three key components:

  • The data warehouse, running on Oracle Database and Oracle Exadata Database Machine, is the primary analytic database for storing much of a company’s core transactional data: financial records, customer data, point-of-sale data and so forth. Despite now being part of a broader architecture, the demands on the RDBMS for performance, scalability, concurrency and workload management are greater than ever; Oracle Database 12c introduced Oracle Database In-Memory (with columnar tables, SIMD processing, and advanced compression schemes) as the latest in a long succession of warehouse-focused innovations. The market-leading Oracle Database is the ideal starting point for customers to extend their architecture to the Big Data Management System.
  • The ‘data reservoir’, hosted on Oracle Big Data Appliance, will augment the data warehouse as a repository for the new sources of large volumes of data: machine-generated log files, social-media data, and videos and images -- as well as a repository for more granular transactional data or older transactional data which is not stored in the data warehouse. Oracle’s Big Data Management System embraces complementary technologies and platforms, including open-source technologies: Oracle Big Data Appliance includes Cloudera’s Distribution of Hadoop and Oracle NoSQL Database for data management.
  • A ‘franchised query engine,’ Oracle Big Data SQL, enables scalable, integrated access in situ to the entire Big Data Management System. SQL is the accepted language for day-to-day data access and analytic queries, and thus SQL is the primary language of the Big Data Management System.  Big Data SQL enables users to combine data from Oracle Database, Hadoop and NoSQL sources within a single SQL statement.  Leveraging the architecture of Exadata Storage Software and the SQL engine of the Oracle Database, Big Data SQL delivers high-performance access to all data in the Big Data Management System.

Using this architecture, the Oracle Big Data Management System combines the performance of Oracle’s market-leading relational database, the power of Oracle’s SQL engine, and the cost-effective, flexible storage of Hadoop and NoSQL. The result is an integrated architecture for managing Big Data, providing all of the benefits of Oracle Database, Exadata, and Hadoop, without the drawbacks of independently-accessed data repositories.  
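
As an illustrative sketch of such a single statement (the names are hypothetical: customers is an ordinary Oracle Database table, while web_clicks stands for a Big Data SQL external table over data stored in Hadoop):

SELECT c.cust_segment, COUNT(*) AS clicks
FROM   customers c            -- table stored in Oracle Database
JOIN   web_clicks w           -- external table over data stored in Hadoop
  ON   c.cust_id = w.cust_id
WHERE  w.click_date >= DATE '2015-01-01'
GROUP BY c.cust_segment;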

Note that the scope of this statement of direction is the data platform for Big Data. An enterprise Big Data solution would also comprise big data tools and big data applications built upon this data platform. 

Read the full Statement of Direction -- Big Data Management System here.

Friday Mar 27, 2015

Why SQL Part 4 - Intelligent and continuous evolution

In the third part of this series of blog posts (Why SQL Part 3 - Simple optimization makes life simpler) I explained that by letting the database perform the optimizations at source, where the data is located, it is possible to share tuning improvements across a wide spectrum of front-end systems (BI reports, dashboards, applications etc.). The ability of the database to “own” the optimization and processing characteristics of how a SQL statement is executed is an important element in the success of SQL as a data analysis language. In the whitepaper that forms the basis for this series of blog posts this section is quite brief. This was done for reasons that will become obvious as you work through this blog post.

Now that we have looked at the powerful framework that supports SQL and the simplified optimization layer we can move on to look at how SQL has evolved over time to support ever more sophisticated features and functions and compare this with some of the newer open source frameworks.

[Read More]

Tuesday Mar 24, 2015

Finding the Distribution Method in Adaptive Parallel Joins

Parallel Execution (PX) uses a producer/consumer model which requires distribution of rows between producers and consumers (two PX server sets). Until 12c, the distribution method was decided at parse time based on optimizer statistics. 12c brings adaptive parallel distribution methods, which enable the distribution method decision to be made at runtime based on the actual number of rows. This functionality is explained in the "Optimizer with Oracle Database 12c" white paper in the "Adaptive Parallel Distribution Methods" section. This was also discussed by Randolf Geist here.

So, how can we find out which distribution method was chosen at runtime? One approach is to use the view V$PQ_TQSTAT as explained by Randolf in his post, or look at the rows processed by producers and consumers in SQL Monitor as explained in the above mentioned paper. But, the easiest way to find this information is to look at the OTHER column in SQL Monitor.
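
As a side note, here is a minimal sketch of the V$PQ_TQSTAT approach: run it in the same session right after the parallel statement completes, and the row counts per PX process reveal which distribution method was actually used.

SELECT dfo_number, tq_id, server_type, process, num_rows
FROM   v$pq_tqstat
ORDER  BY dfo_number, tq_id, server_type, process;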

The OTHER column shows a binocular icon in the lines with PX SEND HYBRID HASH row source.

When you click on those icons you can see the distribution method used at runtime.

Like the DOP downgrade reason I talked about in my previous post, this is also shown as a cryptic number code as of 12.1. For the adaptive distribution methods there are three possible values. The mappings for these codes are: 6 = BROADCAST, 5 = ROUND-ROBIN, and 16 = HASH distribution. 

Any comments and feedback welcome. 

Friday Mar 20, 2015

Finding the Reason for DOP Downgrades

Whether with manual DOP or with Auto DOP, the runtime DOP of a statement is not always the same as the requested DOP. There are several reasons for this, such as hitting the maximum number of parallel execution (PX) servers allowed, hitting the DOP limit set by the resource manager, and so on.

To find out why a specific statement's DOP got downgraded you can use the SQL Monitor reports in 12.1. SQL Monitor shows the DOP downgrade in the General section as a people icon with a red down-pointing arrow; when you hover over that icon you can see the actual DOP, the downgrade percentage, the number of PX servers requested, and the number of PX servers allocated, like below.

To find out the reason for the downgrade you can click the binocular icon in the OTHER column in the PX COORDINATOR line.

That shows you the actual DOP after the downgrade and the reason for the downgrade (as a cryptic code obviously) like below.

Here is the list of possible reasons and their number codes as of 12.1.

       350 DOP downgrade due to adaptive DOP
       351 DOP downgrade due to resource manager max DOP
       352 DOP downgrade due to insufficient number of processes
       353 DOP downgrade because slaves failed to join

You can look at an example SQL Monitor report showing this here.
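
If you want to generate the SQL Monitor report yourself (for example from SQL*Plus), here is a minimal sketch using DBMS_SQLTUNE; the sql_id value is a hypothetical placeholder for your statement:

SET LONG 1000000 PAGESIZE 0
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id => 'abcd1234efgh5',   -- hypothetical sql_id of the downgraded statement
         type   => 'ACTIVE')          -- use 'TEXT' for a plain-text report
FROM   dual;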

We plan to provide this information in an easy to reach section of SQL Monitor in human readable format in a future release.

Thursday Mar 19, 2015

Why SQL Part 3 - Simple optimization makes life simpler

In the second part of this series of blog posts (Why SQL Part 2 - It has a powerful framework) I explained the underlying framework that SQL is based on. That framework is referred to as “relational algebra” and it was first described in 1970 by E.F. Codd while at IBM. It provides well-founded semantics for both modelling and querying a set of data. Codd's framework established a series of basic principles that govern the construction and execution of a query, and these can be summarized as follows (a short example after the list shows how each maps to a SQL clause):

  1. Projection
  2. Filter
  3. Join
  4. Aggregate
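
As a quick illustration (a sketch using hypothetical sales and customers tables), each of these principles maps directly onto a clause of a single SQL statement:

SELECT c.region,                              -- projection: pick the columns of interest
       SUM(s.amount) AS total_sales           -- aggregate: summarize the joined rows
FROM   sales s
JOIN   customers c ON s.cust_id = c.cust_id   -- join: combine the two data sets
WHERE  s.sale_date >= DATE '2015-01-01'       -- filter: keep only the relevant rows
GROUP BY c.region;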

Now that we have examined the basics of the relational model, let’s move on to how the language actually determines the best way to answer a question based on its use of simplified and transparent optimizations. To understand why simplified and transparent optimization is so important it is useful to consider two of the most common programming paradigms: procedural and declarative. Let’s explore these two approaches...

[Read More]

Wednesday Mar 18, 2015

Production workloads blend Cloud and On-Premise Capabilities

Prediction #7 - blending production workloads across cloud and on-premise - in Oracle's Enterprise Big Data Predictions 2015 is a tough nut to crack. Yet, we at Oracle think this is really the direction we all will go. Sure, we can debate the timing, and whether or not this happens in 2015, but it is something that will come to all of us who are looking towards that big data future. So let’s discuss what we think is really going to happen over the coming years in the big data and cloud world.

Reality #1 – Data will live both in the cloud and on-premise

We see this today. Organizations run Human Capital Management systems in the cloud, integrate these with data from outside cloud-based systems (think for example LinkedIn, staffing agencies etc.) while their general data warehouses and new big data systems are all deployed as on-premise systems. We also see the example in the prediction where various auto dealer systems uplink into the cloud to enable the manufacturer to consolidate all of their disparate systems. This data may be translated into data warehouse entries and possibly live in two worlds – both in the cloud and on-premise, either in detail for deep-dive analytics or in aggregated form.

Reality #2 – Hybrid deployments are difficult to query and optimize

We also see this today and it is one of the major issues of living in the hybrid world of cloud and on-premise. A lot of the issues are driven by low level technical limitations, specifically in network bandwidth and upload / download capacity into and out of the cloud environment. The other challenges are really (big) data management challenges in that they go into the art of running queries across two ecosystems with very different characteristics. We see a trend to use engineered systems on-premise, which delivers optimized performance for the applications, but in the cloud we often see virtualization pushing the trade-off towards ease of deployment and ease of management. These completely different ecosystems make optimization of queries across them very difficult.

Solution – Equality brings optimizations to mixed environments

As larger systems like big data and data warehouse systems move to the cloud, better performance becomes a key success criterion. Oracle is uniquely positioned to drive both standardization and performance optimizations into the cloud by deploying on engineered systems like Oracle Exadata and Oracle Big Data Appliance. Deploying engineered systems enables customers to run large systems in the cloud delivering performance as they see today in on-premise deployments. This then means that we do not live in a world divided in slow and fast, but in a world of fast and fast.
This equivalence also means that we have the same functionality in both worlds, and here we can sprinkle in some – future – Oracle magic, where we start optimizing queries to take into account where the data lives, how fast we can move it around (the dreaded networking bandwidth issue) and where we need to execute code. Now, how are we going to do this? That is a piece of magic, and you will just need to wait a bit… suffice it to say we are hard at work solving this challenging problem.

Tuesday Mar 10, 2015

Parallel DML on Tables with LOB Columns

Until 12c, running parallel DML on tables with LOB columns required that table to be partitioned. Starting with 12c, we allow parallel INSERTs into non-partitioned tables with LOB columns provided that those columns are declared as SecureFiles LOBs. Parallel UPDATE, DELETE, and MERGE are still not supported. The documentation on this has some errors, leading some customers to expect all parallel DML to be supported, so I wanted to point out the current behavior.
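
Note that parallel DML also needs to be enabled; the examples below use the 12c enable_parallel_dml hint, but the traditional session-level setting works as well. A minimal sketch:

SQL> alter session enable parallel dml;
SQL> insert /*+ parallel */ into t select /*+ parallel */ * from t;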

INSERTs into non-partitioned table with SecureFiles LOB column

Here is an example showing that the INSERT will be executed in parallel.

SQL> create table t (a number,b varchar2(200),c clob) lob(c) store as securefile;
Table created.

SQL> explain plan for insert /*+ parallel enable_parallel_dml */
  2  into t select /*+ parallel */ * from t;


SQL> select * from table(dbms_xplan.display(format=>'basic +note'));

Other DML on non-partitioned table with SecureFiles LOB column

A DELETE statement on the other hand is executed serially as indicated in the note section of the plan. The note also shows the reason which is that the table is non-partitioned. This behavior applies for UPDATE and MERGE statements too.

SQL> explain plan for delete /*+ parallel enable_parallel_dml */  from t;

SQL> select * from table(dbms_xplan.display(format=>'basic +note'));

DML on partitioned table with SecureFiles LOB column

For partitioned tables with LOB columns, all parallel DML operations are supported as before. Remember that the effective DOP will be the number of partitions; this behavior is also the same as before. This is because we use the partitions as PX granules in this case.

This example shows that parallel DML is allowed on a partitioned table with LOB columns. V$PQ_SESSTAT shows we get a DOP of 4 even though the plan shows a DOP of 8; that is because the table has 4 partitions. 

SQL> create table t (a number,b varchar2(200),c clob)
  2  lob(c) store as securefile partition by hash(a) partitions 4;

Table created.

SQL> explain plan for delete /*+ parallel(8) enable_parallel_dml */  from t;


SQL> select * from table(dbms_xplan.display(format=>'basic +note'));

SQL> delete /*+ parallel(8) enable_parallel_dml */  from t;

0 rows deleted.

SQL> select * from V$PQ_SESSTAT;

STATISTIC                      LAST_QUERY SESSION_TOTAL     CON_ID
------------------------------ ---------- ------------- ----------
Queries Parallelized                    0             0          0
DML Parallelized                        1             1          0
DDL Parallelized                        0             0          0
DFO Trees                               1             1          0
Server Threads                          4             0          0
Allocation Height                       4             0          0
Allocation Width                        1             0          0
Local Msgs Sent                         8             0          0
Distr Msgs Sent                         0             0          0
Local Msgs Recv'd                       8             0          0
Distr Msgs Recv'd                       0             0          0
DOP                                     4             0          0
Slave Sets                              1             0          0

13 rows selected.

As always, we are fixing the documentation about this.

Monday Mar 09, 2015

Why SQL Part 2 - It has a powerful framework

In the first part (Why SQL is the natural language for data analysis) of this series of blog posts I explained why analytics is important to the business, how the volume of data along with the types of data is continuing to expand and why that makes it vital that you select the right language for data analysis. Many of us work with SQL every day and take for granted many of its unique features, its power, flexibility and the richness of the analytics.

This familiarity with SQL means that sometimes we are a bit slow to stop some of our projects from investing in other big data languages such as MapReduce, Impala, Spark, Java and many of the new generation of SQL-like open source projects. While many of these new languages are considered “cool”, it is very easy to end up building new proprietary data silos, or investing in a language that is eventually replaced by another open source project, or investing time and effort in code that eventually fails to deliver the required analytics.

One of the aims of this series of blog posts is to refresh your memory about why SQL has been so successful in the area of analytics. SQL has four unique features that make it perfect for data analysis...

    [Read More]

    Tuesday Mar 03, 2015

    Are you leveraging Oracle's database innovations for Cloud and Big data?

    If you are interested in big data, Hadoop, SQL and data warehousing then mark your calendars because on March 18th at 10:00AM PST/1:00PM EST, you will be able to hear Tom Kyte (Oracle Database Architect) talk about how you can use Oracle Big Data SQL to seamlessly integrate all your Hadoop big data datasets with your relational schemas stored in Oracle Database 12c. As part of this discussion Tom will outline how you can build the perfect foundation for your enterprise big data management system using Oracle's innovative technology.

    If you are working on a data warehousing project and/or a big data project then this is one webcast you will not want to miss so register today (click here) to hear the latest about Oracle Database innovations and best practices. The full list of speakers is:

    Tom Kyte
    Oracle Database Architect
    Keith Wilcox
    VP, Database Administration
    Bill Callahan
    Director, Principal Engineer,
    CCC Information Services, Inc.

    Why SQL is the natural language for data analysis

    Analytics is a must-have component of every corporate data warehousing and big data project. It is the core driver for the business: the development of new products, better targeting of customers with promotions, hiring of new talent and retention of existing key talent. Yet the analysis of data, especially in “big data environments” where data is stored and processed outside of classical relational systems, continues to be a significant challenge for the majority of companies. According to Gartner, 72% of companies are planning to increase their expenditure on big data yet 55% state they don’t have the necessary skills to make use of it. 

    The objective of this series of articles, which will appear over the coming weeks, is to explain why SQL is the natural language for any kind of data analysis, including big data, and the benefits that this brings for application developers, DBAs and business users. 

    [Read More]

    Monday Sep 29, 2014

    The End of the Data Scientist Bubble...

    Looking around northern California and inside many technology kitchens makes me believe that we are about to see the Data Scientist bubble burst. And then I read the Fortune Magazine article on Peter Thiel - and the excerpt on Zero to One (his new book) in that article - and it dawned on me that this is one of the interesting ways to look at the Data Scientist bubble.

    Thiel's Classification of Innovation

    Without trying to simplify and/or bastardize Mr. Thiel's theory, the example in the Fortune Mag article will make this visible to most people (I hope). In the article the analogy is: going from one typewriter to 100 typewriters is 1 to N; inventing a word processor is moving us from 0 to 1. In other words, true innovation dramatically changes things by giving previously unknown power to the masses. It is that innovation that moves us from 0 to 1. Expansion of existing ideas - not true innovation - moves us from 1 to N. Of course, don't take my word on this but read the article or the book...

    The Demise of the Human Data Scientist

    The above paradigm explains the Data Scientist bubble quite nicely. Once upon a time companies hired a few PhD students who by chance had a degree in statistics and had learned how to program and figured out how to deal with (large) data sets. These newly minted data scientists proved that there is potential value in mashing data together, running analytics on these newly created data sets, and thus caused a storm of publicity. Companies large and small are now frantically trying to hire these elusive data scientists or, somewhat more down to earth, are creating data scientists (luckily not in the lab) by forming teams in which each member brings a part of the skillset to the table.

    This approach all starts to smell pretty much like a whole busload of typewriters being thrown at a well-known data analysis and data wrangling problem. Neither the problem nor the solution is new, nor innovative. Data Scientists are therefore not moving us from 0 to 1...

    One could argue that while the data scientist quest is not innovative, at least it solves the problem of doing analytics. Fair, and by some measure correct, but there is one bigger issue with the paradigm of "data scientists will solve our analytics problem" and that is scale. Giving the keys to all that big data to only a few data scientists is not going to work because these smart and amazing people are now becoming, often unbeknownst to them, an organizational bottleneck to gaining knowledge from big data.

    The only real solution, our 0 to 1, is to expose a large number of consumers to all that big data, while enabling these consumers to apply a lot of the cool data science to all that data. In other words, we need to provide tools which include data science smarts. Those tools will enable us to apply the 80% common data science rules to the 80% of common business problems. This approach drives real business value at scale. With large chunks of issues resolved, we can then focus our few star data scientists on the 20% of problems or innovations that drive competitive advantage and change markets.

    My Conclusion

    The bubble is bursting because what I am seeing is more and more tools coming to market (soon) that will drive data science into the day-to-day job of all business people. Innovation is not the building of a better tool for data scientists or hiring more of them; instead, the real 0 to 1 innovation is tools that make all of us data scientists and let us solve our own data science problems. The future of Data Science is smarter tools, not smarter humans.

    Friday Sep 26, 2014

    Why SQL is becoming the goto language for Big Data analysis

    Since the term big data first appeared in our lexicon of IT and business technology it has been intrinsically linked to the no-SQL, or anything-but-SQL, movement. However, we are now seeing that SQL is experiencing a renaissance. The term “noSQL” has softened to a much more realistic "not-only-SQL" approach. And now there is an explosion of SQL-based implementations designed to support big data. Leveraging the Hadoop ecosystem there are Hive, Stinger, Impala, Shark, Presto and many more. Other NoSQL vendors such as Cassandra are also adopting flavors of SQL. Why is there a growing level of interest in the reemergence of SQL? Probably, a more pertinent question is: did SQL ever really go away? Proponents of SQL often cite the following explanations for the re-emergence of SQL for analysis:

    1. There are legions of developers who know SQL. Leveraging the SQL language allows those developers to be immediately productive.
    2. There are legions of tools and applications using SQL today.
    3. Any platform that provides SQL will be able to leverage the existing SQL ecosystem.

    However, despite the virtues of these explanations, they alone do not explain the recent proliferation of SQL implementations. Consider this: how often does the open-source community embrace a technology just because it is the corporate orthodoxy? The answer is: probably not ever. If the open-source community believed that there was a better language for basic data analysis, they would be implementing it. Instead, a huge range of emerging projects, as mentioned earlier, have SQL at their heart. The simple conclusion is that SQL has emerged as the de facto language for big data because, frankly, it is technically superior. Let’s examine the four key reasons for this:

    1. SQL is a natural language for data analysis.
    2. SQL is a productive language for writing queries.
    3. SQL queries can be optimised.
    4. SQL is extensible.

    1. SQL is a natural language for data analysis.

    The concept of SQL is underpinned by the relational algebra - a consistent framework for organizing and manipulating sets of data - and the SQL syntax concisely and intuitively expresses this mathematical system.

    Most business users, data analysts and even data scientists think about data within the context of a spreadsheet. If you think about a spreadsheet containing a set of customer orders then what do most people do with that spreadsheet? Typically, they might filter the records to look only at the customer orders for a given region. Alternatively, they might hide some columns: maybe the customer address is not needed for a particular piece of analysis, but the customer name and their orders are important data points. Finally, they might add calculations to compute totals and/or perhaps create a cross tabular report.

    Within the language of SQL these are common steps: 1) projections (SELECT), 2) filters and joins (WHERE), and 3) aggregations (GROUP BY). These are core operators in SQL. The vast majority of people have found the fundamental SQL query constructs to be a straightforward and readable representation of everyday data analysis operations.
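
    As a sketch of the spreadsheet example above (assuming a hypothetical orders table), those steps translate almost word for word into SQL:

    SELECT customer_name, SUM(order_total) AS total_orders  -- keep only the needed columns, add a calculation
    FROM   orders
    WHERE  region = 'WEST'                                   -- look only at orders for a given region
    GROUP BY customer_name;                                  -- compute the totals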

    2. SQL is a productive language for writing queries.

    When a developer writes a SQL query, he or she simply describes the results that they want. The developer does not have to get into any of the nitty-gritty of describing how to get the results.

    This type of approach is often referred to as 'declarative programming', and it makes the developer's job easier. Even the simplest SQL query illustrates the benefits of declarative programming:

    SELECT day, prcp, temp FROM weather
    WHERE city = 'San Francisco' AND prcp > 0.0;

    SQL engines may have multiple ways to execute this query (for example, by using an index). Fortunately the developer doesn't need to understand any of the underlying database processing techniques. The developer simply specifies the desired set of data using projections (SELECT) and filters (WHERE).

    This is perhaps why SQL has emerged as such an attractive alternative to the MapReduce framework for analyzing HDFS data. MapReduce requires the developer to specify, at each step, how the underlying data is to be processed. For the same “query", the code is longer and more complex in MapReduce. For the vast majority of data analysis requirements, SQL is more than sufficient, and the additional expressiveness of MapReduce introduces complexity without providing significant benefits.

    3. SQL queries can be optimized

    The fact that SQL is a declarative language not only shields the developer from the complexities of the underlying query techniques, but also gives the underlying SQL engine a lot of flexibility in how to optimize any given query. 

    In a lot of programming languages, if the code runs slow, then it's the programmer's fault. For the SQL language, however, if a SQL query runs slow, then it's the SQL engine's fault.

    This is where analytic databases really earn their keep – databases can easily innovate ‘under the covers’ to deliver faster performance; parallelization techniques, query transformations, indexing and join algorithms are just a few key areas of database innovation that drive query performance.

    4. SQL is extensible

    SQL provides a robust framework that adapts to new requirements.

    SQL has stayed relevant over the decades because, even though its core is grounded in universal data processing techniques, the language itself can be extended with new processing techniques and new calculations. Simple time-series calculations, statistical functions, and pattern-matching capabilities have all been added to SQL over the years. 
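
    For example, the pattern-matching capability added in Oracle Database 12c lets a single query find a 'V' shape (a decline followed by a recovery) in a price series; this is a minimal sketch over a hypothetical ticker table:

    SELECT *
    FROM   ticker
    MATCH_RECOGNIZE (
      PARTITION BY symbol ORDER BY trade_date
      MEASURES FIRST(trade_date) AS decline_start,
               LAST(trade_date)  AS recovery_end
      ONE ROW PER MATCH
      PATTERN (strt down+ up+)
      DEFINE
        down AS price < PREV(price),
        up   AS price > PREV(price)
    );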

    Consider, as a recent example, what many organizations realized as they started to ask queries such as 'how many distinct visitors came to my website last month?' These organizations realized that it is not vital to have a precise answer to this type of query ... an approximate answer (say, within 1%) would be more than sufficient. This requirement has now been quickly addressed by implementing existing HyperLogLog algorithms within SQL engines for 'approximate count distinct' operations. 
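
    In Oracle Database 12c this surfaces as the APPROX_COUNT_DISTINCT function; a minimal sketch against a hypothetical page_views table:

    SELECT APPROX_COUNT_DISTINCT(visitor_id) AS approx_distinct_visitors
    FROM   page_views
    WHERE  view_date >= DATE '2014-08-01' AND view_date < DATE '2014-09-01';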

    More importantly, SQL is a language that is not explicitly tied to a storage model. While some might think of SQL as synonymous with relational databases, many of the new adopters of SQL are built on non-relational data. SQL is well on its way to being a standard language for accessing data stored in JSON and other serialized data structures.  
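
    A sketch of this with Oracle Database 12c's JSON support, assuming a hypothetical orders table whose doc column holds one JSON document per row:

    SELECT JSON_VALUE(doc, '$.customer.name')           AS customer_name,
           JSON_VALUE(doc, '$.total' RETURNING NUMBER)  AS order_total
    FROM   orders
    WHERE  JSON_EXISTS(doc, '$.items');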


    SQL is an immensely popular language today … and if anything its popularity is growing as the language is adopted for new data types and new use cases. The primacy of SQL for big data is not simply a default choice, but a conscious realization that SQL is the best-suited language for basic analysis.

    PS. Next week, many sessions at this year’s OpenWorld will focus on the power, richness and performance of SQL for sophisticated data analysis including the following:

    Monday September 28

    Using Analytical SQL to Intelligently Explore Big Data @ 4:00PM Moscone North 131

    Joerg Otto - Head of Database Engineering, IDS GmbH
    Marty Gubar - Director, Oracle
    Keith Laker - Senior Principal Product Manager, Data Warehousing and Big Data, Oracle

    YesSQL! A Celebration of SQL and PL/SQL @ 6:00PM Moscone South 103

    Steven Feuerstein - Architect, Oracle
    Thomas Kyte - Architect, Oracle

    Tuesday September 29

    SQL Is the Best Development Language for Big Data @ 10:45AM Moscone South 104

    Thomas Kyte - Architect, Oracle

    Enjoy OpenWorld 2014 and if you have time please come and meet the Analytical SQL team in the Moscone South Exhibition Hall. We will be on the Parallel Execution and Advanced SQL Processing demo booth (id 3720).

    Tuesday Sep 23, 2014

    Big Data IM Reference Architecture

    Just in time for Oracle OpenWorld, the new Big Data Information Management Reference Architecture is posted on our OTN pages. The reference architecture attempts to create order in the wild west of new technologies and the flurry of new ideas, and most importantly tries to go from marketing hype to a real, implementable architecture.

    To get all the details, read the paper here. Thanks to the EMEA architecture team, the folks at Rittman Mead Consulting and all others involved.

    Monday Sep 15, 2014

    Oracle SQL Developer & Data Modeler Support for Oracle Big Data SQL

    Oracle SQL Developer and Data Modeler (version 4.0.3) now support Hive and Oracle Big Data SQL.  The tools allow you to connect to Hive, use the SQL Worksheet to query, create and alter Hive tables, and automatically generate Big Data SQL-enabled Oracle external tables that dynamically access data sources defined in the Hive metastore.  

    Let's take a look at what it takes to get started and then preview this new capability.

    Setting up Connections to Hive

    The first thing you need to do is set up a JDBC connection to Hive.  Follow these steps to set up the connection:

    Download and Unzip JDBC Drivers

    Cloudera provides high performance JDBC drivers that are required for connectivity:

    • Download the Hive Drivers from the Cloudera Downloads page to a local directory
    • Unzip the archive
      • unzip Cloudera_HiveJDBC_2.5.4.1006.zip
    • Two zip files are contained within the archive.  Unzip the JDBC4 archive to a target directory that is accessible to SQL Developer (e.g. /home/oracle/jdbc below): 
      • unzip Cloudera_HiveJDBC4_2.5.4.1006.zip -d /home/oracle/jdbc/

    Now that the JDBC drivers have been extracted, update SQL Developer to use the new drivers.

    Update SQL Developer to use the Cloudera Hive JDBC Drivers

    Update the preferences in SQL Developer to leverage the new drivers:

    • Start SQL Developer
    • Go to Tools -> Preferences
    • Navigate to Database -> Third Party JDBC Drivers
    • Add all of the jar files contained in the zip to the Third-party JDBC Driver Path.  It should look like the picture below:
      [Screenshot: SQL Developer preferences - Third-party JDBC Driver Path]

    • Restart SQL Developer

    Create a Connection

    Now that SQL Developer is configured to access Hive, let's create a connection to Hiveserver2.  Click the New Connection button in the SQL Developer toolbar.  You'll need to have an ID, password and the port where Hiveserver2 is running:

    [Screenshot: new connection dialog connecting to Hiveserver2]

    The example above is creating a connection called hive which connects to Hiveserver2 on localhost running on port 10000.  The Database field is optional; here we are specifying the default database.
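
    As a rough sketch, the fields for a default single-node setup typically look like this (the values are illustrative; use your own host, port and credentials):

    Connection Name : hive
    Username        : oracle          (a user recognized by Hiveserver2)
    Host name       : localhost
    Port            : 10000           (the default Hiveserver2 port)
    Database        : default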

    Using the Hive Connection

    The Hive connection is now treated like any other connection in SQL Developer.  The tables are organized into Hive databases; you can review the tables' data, properties, partitions, indexes, details and DDL:

    [Screenshot: SQL Developer viewing data in a Hive table]

    And, you can use the SQL Worksheet to run custom queries, perform DDL operations - whatever is supported in Hive:


    Here, we've altered the definition of a hive table and then queried that table in the worksheet.
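
    A sketch of the kind of statements run in the worksheet (the table and column names here are hypothetical):

    ALTER TABLE movieapp_log ADD COLUMNS (device_type STRING);

    SELECT device_type, COUNT(*) AS clicks
    FROM   movieapp_log
    GROUP BY device_type;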

    Create Big Data SQL-enabled Tables Using Oracle Data Modeler

    Oracle Data Modeler automates the definition of Big Data SQL-enabled external tables.  Let's create a few tables using the metadata from the Hive Metastore.  Invoke the import wizard by selecting the File->Import->Data Modeler->Data Dictionary menu item.  You will see the same connections found in the SQL Developer connection navigator:

    [Screenshot: Data Dictionary import wizard - pick a connection]

    After selecting the hive connection and a database, select the tables to import:

    [Screenshot: pick tables to import]

    There could be any number of tables here - in our case we will select three tables to import.  After completing the import, the logical table definitions appear in our palette:

    [Screenshot: imported table definitions]

    You can update the logical table definitions - and in our case we will want to do so.  For example, the recommended column in Hive is defined as a string (i.e. there is no precision) - which the Data Modeler casts as a varchar2(4000).  We have domain knowledge and understand that this field is really much smaller - so we'll update it to the appropriate size:

    [Screenshot: updating the column properties]

    Now that we're comfortable with the table definitions, let's generate the DDL and create the tables in Oracle Database 12c.  Use the Data Modeler DDL Preview to generate the DDL for those tables - and then apply the definitions in the Oracle Database SQL Worksheet:

    [Screenshot: DDL Preview]

    Edit the Table Definitions

    The SQL Developer table editor has been updated so that it now understands all of the properties that control Big Data SQL external table processing.  For example, edit table movieapp_log_json:

    [Screenshot: editing Big Data SQL external table properties]

    You can update the source cluster for the data, how invalid records should be processed, how to map hive table columns to the corresponding Oracle table columns (if they don't match), and much more.

    Query All Your Data

    You now have full Oracle SQL access to data across the platform.  In our example, we can combine data from Hadoop with data in our Oracle Database.  The data in Hadoop can be in any format - Avro, JSON, XML, CSV - if there is a SerDe that can parse the data, then Big Data SQL can access it!  Below, we're combining click data from the JSON-based movie application log with data in our Oracle Database tables to determine how the company's customers rate blockbuster movies:
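
    A sketch of that kind of cross-platform query (names are illustrative: movieapp_log_json is the Big Data SQL external table over the JSON click data in Hadoop, while movies and ratings are ordinary Oracle tables):

    SELECT m.title,
           COUNT(*)      AS clicks,
           AVG(r.rating) AS avg_rating
    FROM   movieapp_log_json l                    -- external table over data in Hadoop
    JOIN   movies  m ON m.movie_id = l.movie_id   -- Oracle Database tables
    JOIN   ratings r ON r.movie_id = m.movie_id
    GROUP BY m.title
    ORDER BY clicks DESC;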

    [Screenshot: query results comparing customer ratings for blockbuster movies]

    Looks like they don't think too highly of them! Of course - the ratings data is fictitious ;)

    Friday Sep 12, 2014

    Announcement: Big Data SQL is Generally Available

    Oracle Big Data SQL and Big Data Appliance 4.0 are generally available

    Big Data Appliance 4.0 receives the following upgrades:


    • Big Data SQL: Join data in Hadoop with Oracle Database using Oracle SQL 
      • New external table types for handling data stored in Hadoop 
      • Smart Scan for Hadoop to provide fast query performance 
      • Requires additional license for Big Data SQL 
      • Requires Exadata Database Machine running DB 
    • Automated recovery from server failure 
      • This includes migration of master roles to a different server and re-provisioning of a slave node on a server that has been replaced 
    • NoSQL DB multiple-zone configurations on BDA 
      • When adding nodes to a NoSQL DB BDA cluster, they can be added to an existing zone or to a new zone. 
    •  Update to using the 12c ODI Agent on BDA clusters 

    For more information on Big Data SQL, check out:



    [Read More]

    The Data Warehouse Insider is written by the Oracle product management team and sheds light on all things data warehousing and big data.

