Wednesday Jun 25, 2014
By Jean-Pierre Dijcks-Oracle on Jun 25, 2014
With data growing at unprecedented levels, application providers need a robust, scalable and high-performance infrastructure to manage and analyze data at the speed of business. Helping Independent Software Vendors (ISVs) meet these challenges, Oracle PartnerNetwork (OPN) is today extending its Oracle Exastack program to give partners the opportunity to achieve Oracle Exastack Ready or Optimized status for Oracle Big Data Appliance and Oracle Database Appliance.
The announcement was made today during the sixth annual Oracle PartnerNetwork (OPN) global kickoff event, “Engineered for Success: Oracle and Partners Winning Together.” For a replay of the event, visit: http://bit.ly/1pgDjXL
Here is a word from one of our partners: “TCS was looking for an enterprise-grade and reliable Hadoop solution for our HDMS document management application,” said Jayant Dani, principal consultant, Component Engineering Group, TCS. “We were drawn to Oracle's Big Data Appliance for its ready-to-use platform, superior performance, scalability, and simple scale-out capabilities.”
Thursday Jun 05, 2014
By Jean-Pierre Dijcks-Oracle on Jun 05, 2014
In a fast-evolving market, speed is of the essence. mCentric and Globacom leveraged Oracle Big Data Appliance and Oracle NoSQL Database to save over 35,000 call-processing minutes daily and analyze network traffic 40x faster.
Here are some highlights from the profile:
“Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support,” said Sanjib Roy, CTO, Globacom.
It took Oracle partner mCentric only five days to deploy Oracle Big Data Appliance and perform the software install and configuration, certification, and resiliency testing. The entire process – from site planning to phase-I go-live – was executed in just over ten weeks, well ahead of the four months allocated to complete the project.
Oracle partner mCentric leveraged Oracle Advanced Customer Support Services’ implementation methodology to ensure configurations are tailored for peak performance, all patches are applied, and software and communications are consistently tested using proven methodologies and best practices.
Tuesday May 13, 2014
By Jean-Pierre Dijcks-Oracle on May 13, 2014
Last week we released Big Data Lite VM 3.0, which contains the latest updates to the entire stack:
- Oracle Enterprise Linux 6.4
- Oracle Database 12c Release 1 Enterprise Edition, with Oracle Advanced Analytics, Spatial & Graph, and more
- Cloudera’s Distribution including Apache Hadoop (CDH 5.0)
- Oracle Big Data Connectors 3.0
- Oracle SQL Connector for HDFS 3.0.0
- Oracle Loader for Hadoop 3.0.0
- Oracle Data Integrator 12c
- Oracle R Advanced Analytics for Hadoop 2.4.0
- Oracle XQuery for Hadoop 3.0.0
- Oracle NoSQL Database Enterprise Edition 12cR1 (3.0.5)
- Oracle JDeveloper 11g
- Oracle SQL Developer 4.0
- Oracle Data Integrator 12cR1
- Oracle R Distribution 3.0.1
The download page is on OTN in its usual place.
Tuesday Apr 22, 2014
By Jean-Pierre Dijcks-Oracle on Apr 22, 2014
Today we are releasing Big Data Appliance 3.0 (which includes the just-released Oracle NoSQL Database 3.0) and Big Data Connectors 3.0. These releases deliver a large number of interesting and cool features and enhance the overall Oracle Big Data Management System, which we think is going to be the core of information management going forward.
This post highlights a few of the new enhancements across the BDA, NoSQL DB and BDC stack.
Big Data Appliance 3.0:
- Pre-configured and pre-installed CDH 5.0 with default support for YARN and MR2
- Upgrade from BDA 2.5 (CDH 4.6) to BDA 3.0 (CDH 5.0)
- Full encryption (at rest and over the network) from a single vendor in an appliance
- Kerberos and Apache Sentry pre-configured
- Partition Pruning through Oracle SQL Connector for Hadoop
- Apache Spark (incl. Spark Streaming) support
Oracle NoSQL Database 3.0:
- Table data model support layered on top of distributed key-value model
- Support for Secondary Indexing
- Support for "Data Centers": metro zones for DR and secondary zones for read-only workloads
- Authentication and network encryption
You can read about all of these features in the OTN pages, data sheets, and other relevant information linked above.
While BDA 3.0 immediately delivers an upgrade path from BDA 2.5, Oracle will also support the current version, and we fully expect more BDA 2.x releases based on further CDH 4.x releases. As a customer you now have a choice of how to deploy BDA and which version you want to run, while knowing you can upgrade to the latest and greatest in a safe manner.
Thursday Apr 03, 2014
By Jean-Pierre Dijcks-Oracle on Apr 03, 2014
It was time to update this post a little. Big Data Appliance has grown and gained features, and prices as well as insights have changed across the board. So, here is an update.
The post is still aimed at providing a simple apples-to-apples comparison and a clarification of what is, and what is not included in the pricing and packaging of Oracle Big Data Appliance when compared to "I'm doing this myself - DIY style".
Oracle Big Data Appliance Details
A few of the most overlooked items in pricing out a Hadoop cluster are the cost of software, the cost of actual production-ready hardware and the required networking equipment. A Hadoop cluster needs more than just CPUs and disks... For Oracle Big Data Appliance we assume that you would want to run this system as a production system (with hot-pluggable components and redundant components in your system). We also assume you want the leading Hadoop distribution plus support for that software. You'd want to look at securing the cluster and possibly encrypting data at rest and over the network. Speaking of network, InfiniBand will eliminate network saturation issues - which is important for your Hadoop cluster.
With that in mind, Oracle Big Data Appliance is an engineered system built for production clusters. It is pre-installed and pre-configured with Cloudera CDH with all (I emphasize all!) options included, and we (with the help of Cloudera, of course) have done the tuning of the system for you. On top of that, the price of the hardware (US$ 525,000 for a full-rack system; more configurations and smaller sizes are available) includes the cost of Cloudera CDH, its options, and Cloudera Manager (for the life of the machine - so not a subscription).
So, for US$ 525,000 you get the following:
- Big Data Appliance Hardware (comes with Automatic Service Request upon component failures)
- Cloudera CDH and Cloudera Manager
- All Cloudera options as well as Accumulo and Spark (CDH 5.0)
- Oracle Linux and the Oracle JDK
- Oracle Distribution of R
- Oracle NoSQL Database Community Edition
- Oracle Big Data Appliance Enterprise Manager Plug-In
The support cost for the above is a single line item. The list price for Premier Support for Systems, per the Oracle price list (see source below), is US$ 63,000 per year.
To do a simple 3-year comparison with other systems, the following table shows the details and the totals for Oracle Big Data Appliance. Note that the only additional item is the install and configuration cost, which is performed by Oracle personnel or partners, on-site:
| | Year 1 | Year 2 | Year 3 | 3-Year Total |
|---|---|---|---|---|
| Annual Support Cost | $63,000 | $63,000 | $63,000 | $189,000 |
| On-site Install (approximately) | | | | |
For this you will get a full-rack BDA (18 Sun X4-2L servers; 288 cores, with two Intel Xeon E5-2650 v2 CPUs per node; 864TB disk, with twelve 4TB disks per node), plus software, plus support, plus on-site setup and configuration. Or, in terms of cost per raw TB at purchase and at list pricing: $697.
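As a quick sanity check, a few lines of Python reproduce the capacity and per-TB math from the numbers stated above (note that the $697/TB figure in the text evidently covers more than the hardware alone; only hardware-only and hardware-plus-first-year-support figures are computed here):

```python
# Sketch of the full-rack BDA capacity and cost-per-TB math from the text.
servers = 18
disks_per_server = 12
tb_per_disk = 4

raw_tb = servers * disks_per_server * tb_per_disk   # 864 TB raw capacity
hardware = 525_000                                  # full rack, software included
support_y1 = 63_000                                 # first-year Premier Support

print(raw_tb)                                       # 864
print(round(hardware / raw_tb, 2))                  # ~607.64 $/TB, hardware only
print(round((hardware + support_y1) / raw_tb, 2))   # ~680.56 $/TB with year-1 support
```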
HP DL380 Comparative System (this is changed from the original post to the more common DL380s)
To build a comparative hardware solution to the Big Data Appliance, we picked an HP DL380 configuration and built up the servers using the HP.com website for pricing. The following is the price for a single server.
| Model Number | Description | Quantity | Total Price |
|---|---|---|---|
| 653200-B21 | ProLiant DL380p Gen8 Rackmount Factory Integrated 8 SFF CTO Model (2U) with no processor, 24 DIMM with no memory, open bay (diskless) with 8 SFF drive cage, Smart Array P420i controller with Zero Memory, 3 x PCIe 3.0 slots, 1 FlexibleLOM connector, no power supply, 4 x redundant fans, Integrated HP iLO Management Engine | | |
| | 2.6GHz Xeon E5-2650 v2 processor (1 chip, 8 cores) with 20MB L3 cache - Factory Integrated Only | | |
| | HP 1GbE 4-port 331FLR Adapter - Factory Integrated Only | | |
| | 460W Common Slot Gold Hot Plug Power Supply | | |
| | HP Rack 10000 G2 Series - 10842 (42U) 800mm Wide Cabinet - Pallet Universal Rack | | |
| | 8GB (1 x 8GB) Single Rank x8 PC3L-12800R (DDR3-1600) Registered CAS-11 Low Voltage Memory Kit | | |
| | HP Smart Array P222/512MB FBWC 6Gb 1-port Int/1-port Ext SAS controller | 1 | |
| | 4TB 6Gb SAS 7.2K LFF hot-plug SmartDrive SC Midline disk drive (3.5") with 1-year warranty | | |
| | Grand Total for a single server (list prices) | | |
On top of this we need InfiniBand switches. Oracle Big Data Appliance comes with 3 IB switches, allowing us to expand the cluster without suddenly requiring extra switches. And we do expect these machines to be part of much larger clusters. The IB switches are somewhere in the neighborhood of US$ 6,000 per switch, so add $18,000 per rack, plus a management switch (BDA uses a Cisco switch), which seems to be around $15,000 list. The total switching comes to roughly $33,000.
We will also need a Cloudera Enterprise subscription - and to compare apples to apples, we will do it for all software. Some sources (see this document) peg CDH Core at $3,382 list per node, per year (24*7 support). Since BDA includes more software (all options) and that pricing is not public, I am going to make an educated estimate: double that price and round up to the nearest round number. That gets me to $7,000 per node, per year for 24*7 support.
BDA also comes with on-disk encryption, which is even harder to price out. My somewhat educated guess is around $1,500 list per node, per year. Oh, and let's not forget the Linux subscription, which lists at $1,299 per node, per year. We also run a MySQL database (Enterprise Edition with replication), which lists at a $5,000 subscription per node, per year. We run it replicated over 2 nodes.
This all gets us to roughly $10,000 list price per node, per year for all applicable software subscriptions and support, plus an additional $10,000 per year for the two MySQL nodes.
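The per-node arithmetic above can be sketched in a few lines of Python (all figures are the list-price estimates from the text, several of which are, as noted, educated guesses):

```python
# Per-node, per-year DIY software subscription estimate (list prices from the text).
cdh_all_options = 7_000   # educated estimate: ~2x CDH Core's $3,382 list, rounded
encryption = 1_500        # on-disk encryption, rough guess
linux = 1_299             # Linux subscription list price

per_node = cdh_all_options + encryption + linux
print(per_node)           # 9799 -> roughly $10,000 per node, per year

mysql_nodes = 2
mysql_per_node = 5_000    # MySQL EE with replication, list subscription
print(mysql_nodes * mysql_per_node)   # an additional $10,000 per year
```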
HP + Cloudera Do-it-Yourself System
Let's go build our own system. The specs mirror the BDA, so we will have 18 servers and all the other components included.
| | Year 1 | Year 2 | Year 3 | Total |
|---|---|---|---|---|
| SW Subscriptions and Support | | | | |
| Installation and Configuration | | | | |
Some will argue that the installation and configuration is free (you already pay your data center team), but I would argue that something that takes a short amount of time when done by Oracle is worth the equivalent cost if it takes you a lot longer to get all this installed, optimized, and running. Nevertheless, here is some math on how to get to that cost anyway: approximately 150 hours of labor per rack for the pure install work. That adds up to US$ 15,000 if we assume a cost of $100 per hour.
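Putting the labor math together with the ~$10,000 per-node software estimate stated earlier gives a quick sketch of the DIY subscription subtotal over three years (a back-of-the-envelope calculation of the text's own estimates, not a quote):

```python
# Install labor estimate for the DIY rack.
hours = 150
rate = 100
install = hours * rate
print(install)              # 15000

# 3-year software subscription subtotal for the DIY cluster, using the
# ~$10,000/node/year estimate plus ~$10,000/year for the two MySQL nodes.
nodes = 18
per_node_per_year = 10_000
mysql_per_year = 10_000
years = 3
subscriptions = (nodes * per_node_per_year + mysql_per_year) * years
print(subscriptions)        # 570000 over three years
```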
Note: those $15,000 do NOT include optimization and tuning of Hadoop, the OS, Java, and other interesting things like networking settings across all these areas. You will now need to spend time to figure out the number of slots to allocate per node, the file system block size (do you use the Apache defaults, Cloudera's, or something else?), and many more system-level settings. On top of that, we pre-configure, for example, Kerberos and Apache Sentry, giving you a secure authorization and authentication method, as well as one-click on-disk and network encryption. Of course, you can contract various other companies to do this for you.
You can also argue that "you want the cheapest hardware possible," because Hadoop is built to deal with failures, so it is OK for things to regularly fail. Yes, Hadoop does deal well with hardware failures, but your data center is probably much less keen on this idea, because someone is going to be replacing the disks (all the time). So make sure the disks are hot-swappable. And oh, that someone swapping the disks does cost money... The other consideration is failures in important components like power; redundant power in a rack is a good thing to have. All of this is included (and thought about) in Oracle Big Data Appliance.
In other words, do you really want to spend weeks installing, configuring, and learning, or would you rather start building applications on top of the Hadoop cluster, providing value to your organization?
The main differences between Oracle Big Data Appliance and a DIY approach are:
- A DIY system - at list price with basic installation but no optimization - is a staggering $220 cheaper as an initial purchase
- A DIY system - at list price with basic installation but no optimization - is almost $250,000 more expensive over 3 years.
Note to purchasing: you can spend this on building or buying applications for your cluster (or buy some really intriguing Oracle software)
- Support for the DIY system involves five (5) vendors: your hardware vendor, the OS vendor, your Hadoop vendor, your encryption vendor, and your database vendor. Oracle Big Data Appliance is supported end-to-end by a single vendor: Oracle
- Time to value. While we trust that your IT staff will get the DIY system up and running, the Oracle system allows for a much faster "loading dock to loading data" time. Typically a few days instead of a few weeks (or even months)
- Oracle Big Data Appliance is tuned and configured to take advantage of the software stack, the CPUs and InfiniBand network it runs on
- Any issue we, you or any other BDA customer finds in the system is fixed for all customers. You do not have a unique configuration, with unique issues on top of the generic issues.
In an apples-to-apples comparison of a production Hadoop cluster, Oracle Big Data Appliance starts off at roughly the same acquisition price and comes out ahead in terms of TCO over 3 years. It allows an organization to enter the Hadoop world with a production-grade system in a very short time, reducing both risk and time to market.
As always, when in doubt, simply contact your friendly Oracle representative for questions, support and detailed quotes.
HP and related pricing: http://www.hp.com or http://www.ideasinternational.com/ (the latter is a paid service - sorry!)
Oracle Pricing: http://www.oracle.com/us/corporate/pricing/exadata-pricelist-070598.pdf
MySQL Pricing: http://www.oracle.com/us/corporate/pricing/price-lists/mysql-pricelist-183985.pdf
Wednesday Mar 26, 2014
By Mgubar-Oracle on Mar 26, 2014
Oracle Big Data Appliance Version 2.5 was released last week. There are some great new features in this release, including a continued security focus (on-disk encryption and automated configuration of Sentry for data authorization) and updates to the Cloudera Distribution of Apache Hadoop and Cloudera Manager.
With each BDA release, we have a new release of Oracle Big Data Lite Virtual Machine. Oracle Big Data Lite provides an integrated environment to help you get started with the Oracle Big Data platform. Many Oracle Big Data platform components have been installed and configured - allowing you to begin using the system right away. The following components are included on Oracle Big Data Lite Virtual Machine v 2.5:
- Oracle Enterprise Linux 6.4
- Oracle Database 12c Release 1 Enterprise Edition
- Cloudera’s Distribution including Apache Hadoop (CDH4.6)
- Cloudera Manager 4.8.2
- Cloudera Enterprise Technology, including:
- Cloudera RTQ (Impala 1.2.3)
- Cloudera RTS (Search 1.2)
- Oracle Big Data Connectors 2.5
- Oracle SQL Connector for HDFS 2.3.0
- Oracle Loader for Hadoop 2.3.1
- Oracle Data Integrator 11g
- Oracle R Advanced Analytics for Hadoop 2.3.1
- Oracle XQuery for Hadoop 2.4.0
- Oracle NoSQL Database Enterprise Edition 12cR1 (2.1.54)
- Oracle JDeveloper 11g
- Oracle SQL Developer 4.0
- Oracle Data Integrator 12cR1
- Oracle R Distribution 3.0.1
Go to the Oracle Big Data Lite Virtual Machine landing page on OTN to download the latest release.
Wednesday Mar 19, 2014
By Jean-Pierre Dijcks-Oracle on Mar 19, 2014
With the release of Big Data Appliance software bundle 2.5, BDA completes the encryption story underneath Cloudera CDH. BDA already came with network encryption, ensuring no network sniffing can occur between the nodes; it now adds encryption of data at rest.
A Brief Overview
Encryption of data-at-rest can be done in two modes. One mode leverages the Trusted Platform Module (TPM) on the motherboard to provide a key to encrypt the data on disk. This mode does not require a password or passphrase but relies on the motherboard. The second mode leverages a passphrase, which in turn is used to generate a public-private key pair with OpenSSL. The key pair itself is encrypted as well.
The passphrase-based encryption has a few more interesting aspects. For one, it does require the passphrase to be entered upon rebooting the system; leveraging the TPM option does not require any manual intervention at reboot. On Big Data Appliance it is possible to change the passphrase regularly without impacting the encryption or requiring re-encryption of the data.
Neither encryption method affects user access to user data. In other words, a user who can read data on an unprotected cluster before encryption will be able to read it after encryption. The goal is to ensure data is protected on physical media - against theft or incorrect disposal of a disk, for example. Both forms protect from that, but only passphrase-based encryption protects against disposal or theft of an entire server.
On BDA, it is possible to switch between these two methods. This does have an impact on the running cluster, as data needs to be re-encrypted. For this step the cluster will be down; however, data is not duplicated, so there is no need to reserve double the space to do the re-encryption.
How to Encrypt Data
As with all installations or changes on Big Data Appliance, you leverage Mammoth to do the install with encryption, or to make changes to a system that is already in production. Before you set up either of the two modes of data-at-rest encryption, you should consider your requirements: changing the mode is possible, as described, but requires the cluster to be down for re-encryption.
Full Set of Security Features
Out-of-the-box encryption is yet another feature specific to Oracle Big Data Appliance. On top of pre-configured Kerberos, Apache Sentry, and Oracle Audit Vault, encryption now adds another security dimension. To read more about the full set of features, start here.
Friday Mar 07, 2014
By Jean-Pierre Dijcks-Oracle on Mar 07, 2014
Intel partnered with Oracle to certify compatibility between the Intel® Distribution for Apache Hadoop (IDH) and Oracle Big Data Connectors. Users can now connect IDH to Oracle Database with Oracle Big Data Connectors, taking advantage of the high-performance, feature-rich components of that product suite. Applications on IDH can leverage the connectors for fast load into Oracle Database, in-place query of data in HDFS with Oracle SQL, analytics in Hadoop with R, XQuery processing on Hadoop, and native Hadoop integration within Oracle Data Integrator.
Monday Jan 27, 2014
By Mgubar-Oracle on Jan 27, 2014
You've been hearing a lot about Oracle's big data platform. Today, we're pleased to announce Oracle Big Data Lite Virtual Machine - an environment to help you get started with the platform. And we have a great OTN Virtual Developer Day event scheduled where you can start using our big data products as part of a series of workshops.
Oracle Big Data Lite Virtual Machine is an Oracle VM VirtualBox image that contains many key components of Oracle's big data platform, including Oracle Database 12c Enterprise Edition, Oracle Advanced Analytics, Oracle NoSQL Database, Cloudera Distribution including Apache Hadoop, Oracle Data Integrator 12c, Oracle Big Data Connectors, and more. It has been configured to run on a "developer class" computer; all Big Data Lite needs is a couple of cores and about 5GB of memory (this means your computer should have at least 8GB of total memory). With Big Data Lite, you can develop your big data applications and then deploy them to the Oracle Big Data Appliance. Or, you can use Big Data Lite as a client to the BDA during application development.
How do you get started? Why not start by registering for the Virtual Developer Day scheduled for Tuesday, February 4, 2014 - 9am to 1pm PT / 12pm to 4pm ET / 3pm to 7pm BRT:
There will be 45-minute sessions delivered by product experts (from both Oracle and Oracle ACEs), highlighted by Tom Kyte and Jonathan Lewis's keynote, "Landscape of Oracle Database Technology Evolution". Some of the big data technical sessions include:
- Oracle NoSQL Database Installation and Cluster Topology Deployment
- Application Development & Schema Design with Oracle NoSQL Database
- Processing Twitter Data with Hadoop
- Use Data from a Hadoop Cluster with Oracle Database
- Make the Right Offers to Customers Using Oracle Advanced Analytics
- In-DB Map Reduce with SQL/Hadoop
- Pattern Matching in SQL
Keep an eye on this space - we'll be publishing how-to's that leverage the new Oracle Big Data Lite VM. And, of course, we'd love to hear about the clever applications you build as well!
Tuesday Dec 03, 2013
By Jean-Pierre Dijcks-Oracle on Dec 03, 2013
As a follow-on to the previous post (here) on use cases, follow the link below to a recording that explains how to go about expanding the data warehouse into a big data platform.
The idea behind it all is to cover best practices for adding to the existing data warehouse and expanding the system (shown in the figure below) to deal with:
- Real-time ingest and reporting on large volumes of data
- A cost-effective strategy to analyze all data, across types and at large volumes
- SQL access and analytics for all data
- The best query performance to match the business requirements
To access the webcast:
- Register on this page
- Scroll down and find the session labeled "Best practices for expanding the data warehouse with new big data streams"
- Have fun
Wednesday Nov 27, 2013
By Jean-Pierre Dijcks-Oracle on Nov 27, 2013
Big Data Usage Patterns
The following usage patterns are derived from actual customer projects across a large number of industries and cross boundaries between commercial enterprises and public sector. These patterns are also geographically applicable and technically feasible with today’s technologies.
This paper will address the following four usage patterns:
- Data Factory – a pattern that enables an organization to integrate and transform – in a batch method – large, diverse data sets before moving the data into an upstream system like an RDBMS or a NoSQL system. Data in the data factory is possibly transient, and the focus is on data processing.
- Data Warehouse Expansion with a Data Reservoir – a pattern that expands the data warehouse with a large-scale Hadoop system to capture data at lower grain and higher diversity, which is then fed into upstream systems. Data in the data reservoir is persistent, and the focus is on data processing, data storage, and the reuse of data.
- Information Discovery with a Data Reservoir – a pattern that creates a data reservoir for discovery data marts or discovery systems like Oracle Endeca to tap into a wide range of data elements. The goal is to simplify data acquisition into discovery tools and to initiate discovery on raw data.
- Closed-Loop Recommendation and Analytics System – a pattern that is often considered the holy grail of data systems. This pattern combines analytics on historical data with event processing or real-time actions on current events, and closes the loop between the two to continuously improve real-time actions based on current and historical events.
Pattern 1: Data Factory
The core business reason to build a Data Factory as it is presented here is to implement a cost-savings strategy by placing long-running batch jobs on a cheaper system. The project is often funded by not spending money on the more expensive system – for example by switching Mainframe MIPS off – and instead leveraging that cost savings to fund the Data Factory. The first figure shows a simplified implementation of the Data Factory.
As the image below shows, the data factory must be scalable, flexible and (more) cost effective for processing the data. The typical system used to build a data factory is Apache Hadoop or in the case of Oracle’s Big Data Appliance – Cloudera’s Distribution including Apache Hadoop (CDH).
Hadoop (and therefore Big Data Appliance and CDH) offers an extremely scalable environment to process large data volumes (or a large number of small data sets) and jobs. Most typical is the offload of large batch updates, matching and de-duplication jobs, etc. Hadoop also offers a very flexible model, where data is interpreted on read rather than on write. This idea enables a data factory to quickly accommodate all types of data, which can then be processed in programs written in Hive, Pig, and the like.
As shown above, the data factory is an integration platform, much like an ETL tool. Data sets land in the data factory, batch jobs process the data, and the processed data moves into the upstream systems. These upstream systems include RDBMSs, which are then used for various information needs. In the case of a Data Warehouse, this is very close to pattern 2 described below, with the difference that in the data factory, data is often transient and removed after the processing is done.
This transient nature of the data is not a required feature, but it is often implemented to keep the Hadoop cluster relatively small. The aim is generally just to transform data in a more cost-effective manner.
In the case of a NoSQL upstream system, data is often prepared in a specific key-value format to be served up to end applications like a website. NoSQL databases work really well for that purpose, but the batch processing is better left to the Hadoop cluster.
It is very common for data to flow in the reverse order or for data from RDBMS or NoSQL databases to flow into the data factory. In most cases this is reference data, like customer master data. In order to process new customer data, this master data is required in the Data Factory.
Because of its low risk profile – the logic of these batch processes is well known and understood – and funding from savings in other systems, the Data Factory is typically an IT department's first attempt at a big data project. The downside of a Data Factory project is that business users see very little benefit, in that they do not get new insights out of big data.
Pattern 2: Data Warehouse Expansion
The common way to drive new insights out of big data is pattern two. Expanding the data warehouse with a data reservoir enables an organization to capture raw data in a system that adds agility to the organization. The graphical pattern is shown below.
A Data Reservoir – like the Data Factory from Pattern 1 – is based on Hadoop and Oracle Big Data Appliance, but rather than holding transient data, processing it, and handing it off, a Data Reservoir aims to store data at a lower grain, and for a much longer period, than previously stored.
The Data Reservoir is initially used to capture data, aggregate new metrics and augment (not replace) the data warehouse with new and expansive KPIs or context information. A very typical addition is the sentiment of a customer towards a product or brand which is added to a customer table in the data warehouse.
The addition of new KPIs or new context information is a continuous process. That is, new analytics on raw and correlated data should find their way into the upstream Data Warehouse on a very, very regular basis.
As the Data Reservoir grows and becomes known through the new KPIs or context it delivers, users should start to look at it as an environment to “experiment” and “play” with data. With some rudimentary programming skills, power users can start to combine various data elements in the Data Reservoir, using, for example, Hive. This enables the users to verify a hypothesis without the need to build a new data mart. Hadoop and the Data Reservoir now become an economically viable sandbox for power users, driving innovation, agility, and possibly revenue from hitherto unused data.
Pattern 3: Information Discovery
Agility for power users and expert programmers is one thing, but eventually the goal is to enable business users to discover new and exciting things in the data. Pattern 3 combines the data reservoir with a special information discovery system to provide a Graphical User Interface specifically for data discovery. This GUI emulates in many ways how an end user today searches for information on the internet.
To empower a set of business users to truly discover information, they first and foremost require a Discovery tool. A project should therefore always start with that asset.
Once the Discovery tool (like Oracle Endeca) is in place, it pays to start leveraging the Data Reservoir to feed it. As shown above, the Data Reservoir is continuously fed with new data. The Discovery tool is the business user's means to create ad-hoc data marts within it. Having the Data Reservoir simplifies data acquisition for end users because they only need to look in one place for data.
In essence, the Data Reservoir now drives two different systems: the Data Warehouse and the Information Discovery environment. In practice, users will very quickly gravitate to the appropriate system, but no matter which one they use, they now have the ability to drive value from data into the organization.
Pattern 4: Closed Loop Recommendation and Analytics System
So far, most of what was discussed was analytics- and batch-based. But a lot of organizations want to move to a real-time interaction model with their end customers (or, in the world of the Internet of Things, with other machines and sensors).
Hadoop is very good at providing the Data Factory and the Data Reservoir, at providing a sandbox, and at providing massive storage and processing capabilities, but it is less good at doing things in real time. Therefore, to build a closed-loop recommendation system – which should react in real time – Hadoop is only one of the components.
Typically, the bottom half of the last figure is akin to pattern 2 and is used to catch all data, analyze the correlations between recorded events (detected fraud, for example), and generate a set of predictive models describing something like “if a, b, and c during a transaction, mark it as suspect and hand it off to an agent.” Such a model would, for example, block a credit card transaction.
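As an illustration only (the conditions and thresholds below are invented for this sketch, not taken from any Oracle product), the kind of "if a, b, and c, then mark as suspect" model described above boils down to a small rule function that the real-time layer can evaluate per transaction:

```python
# Hypothetical fraud rule of the "if a, b and c -> mark as suspect" form.
# The predicates and thresholds are made up purely for illustration.

def flag_transaction(txn: dict) -> bool:
    """Return True if the transaction should be handed off to an agent."""
    a = txn["amount"] > 2_000                    # unusually large amount
    b = txn["country"] != txn["home_country"]    # transaction abroad
    c = txn["txns_last_hour"] >= 5               # burst of recent activity
    return a and b and c

suspect = flag_transaction({
    "amount": 2_500, "country": "BR",
    "home_country": "US", "txns_last_hour": 6,
})
print(suspect)   # True -> block the card transaction and alert an agent
```

In the pattern above, the Hadoop side would continuously re-derive the thresholds from historical data, while the real-time side merely evaluates the current rule set in flight.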
To make such a system work it is important to use the right technology at both levels. Real time technologies like Oracle NoSQL Database, Oracle Real Time Decisions and Oracle Event Processing work on the data stream in flight. Oracle Big Data Appliance, Oracle Exadata/Database and Oracle Advanced Analytics provide the infrastructure to create, refine and expose the models.
Today’s big data technologies offer a wide variety of capabilities. Leveraging these capabilities alongside the environment and skills already in place, according to the four patterns described, enables an organization to benefit from big data today. It is a matter of identifying the pattern that applies to your organization and then starting on the implementation.
The technology is ready. Are you?
Thursday Nov 14, 2013
By Jean-Pierre Dijcks-Oracle on Nov 14, 2013
As almost everyone is interested in data science, take this boot camp to get ahead of the curve. Leverage the free Data Science Boot Camp from Oracle Academy to learn the following:
- Introduction: Providing Data-Driven Answers to Business Questions
- Lesson 1: Acquiring and Transforming Big Data
- Lesson 2: Finding Value in Shopping Baskets
- Lesson 3: Unsupervised Learning for Clustering
- Lesson 4: Supervised Learning for Classification and Prediction
- Lesson 5: Classical Statistics in a Big Data World
- Lesson 6: Building and Exploring Graphs
You will also find the code samples that go with the training, so you can get off to a running start.
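As a small taste of the unsupervised learning covered in Lesson 3, the following is a minimal, self-contained clustering sketch (a tiny 1-D k-means). It is illustrative only and is not taken from the boot camp's own code samples.

```python
# Tiny 1-D k-means for illustration: group basket sizes into k clusters.
# Uses only the standard library; data and names are made up.

def kmeans_1d(points, k=2, iters=10):
    # Initialize centroids with the first k distinct values.
    centroids = sorted(set(points))[:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # Assign each point to its nearest centroid.
            i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

basket_sizes = [1, 2, 2, 3, 20, 22, 25]
print(kmeans_1d(basket_sizes))  # two centroids: small vs large baskets
```

The real lessons use richer data and proper libraries, but the core idea – assign points to the nearest centroid, then update the centroids – is the same.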
Tuesday Nov 12, 2013
By Jean-Pierre Dijcks-Oracle on Nov 12, 2013
Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.
The focus for this 3rd generation of Big Data Appliance is:
- Comprehensive and Open - Big Data Appliance now includes all Cloudera Software, including Back-up and Disaster Recovery (BDR), Search, Impala, Navigator as well as the previously included components (like CDH, HBase and Cloudera Manager) and Oracle NoSQL Database (CE or EE).
- Lower TCO than DIY Hadoop Systems
- Simplified Operations while providing an open platform for the organization
- Comprehensive security including the new Audit Vault and Database Firewall software, Apache Sentry and Kerberos configured out-of-the-box
A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis, the following compares the old (X3-2) and new (X4-2) hardware:
||Big Data Appliance X3-2||Big Data Appliance X4-2|
|2 x 8-Core Intel® Xeon® E5-2660 (2.2 GHz)|2 x 8-Core Intel® Xeon® E5-2650 V2 (2.6 GHz)|
|12 x 3TB High Capacity SAS|12 x 4TB High Capacity SAS|
For all the details on the environmental specifications and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity than the previous generation, while adding faster CPUs. Memory is expandable to 512 GB per node and can be upgraded on a per-node basis, for example for NameNodes, HBase region servers, or NoSQL Database nodes.
More details on the software and the current versions (note that BDA follows a three-month update cycle for Cloudera and other software):
||Big Data Appliance 2.2 Software Stack||Big Data Appliance 2.3 Software Stack|
|Oracle Linux 5.8 with UEK 1|Oracle Linux 6.4 with UEK 2|
As noted at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available to all BDA customers.
For more information:
- Big Data Appliance Data Sheet
- Big Data Connectors Data Sheet
- Oracle NoSQL Database Data Sheet (CE | EE)
- Oracle Advanced Analytics Data Sheet
The Data Warehouse Insider is written by the Oracle product management team and sheds light on all things data warehousing and big data.