Monday Jan 27, 2014

FREE OTN virtual workshop - Learn about SQL pattern matching with Oracle Database 12c.

Make sure you are free on Tuesday February 4 because the OTN team is hosting another of its virtual developer day events. Most importantly, it is FREE. Even more importantly, I will be running a 12c pattern matching workshop at 11:45am Pacific Time. Of course there are lots of other sessions that you can attend relating to big data and Oracle Database 12c, and the OTN team has created two streams to help you learn about these two important areas:

  • Oracle Database application development — Learn expert tips and tricks on how to develop applications for Oracle Database 12c and Big Data environments more effectively.
  • Oracle Database platform deployment processes — From integration, to data migration, experts showcase new capabilities in Oracle 12c and Big Data environments that will allow you to deliver greater database performance and integration.

You can sign-up for the event and pick your tracks and sessions via this link: https://oracle.6connex.com/portal/database2014/login?langR=en_US&mcc=aceinvite

My pattern matching session is included in the Oracle 12c DBA section of the application development track and the workshop will cover the following topics:

  • Part 1 - Introduction to SQL Pattern Matching
  • Part 2 - Pattern Match: simple example
  • Part 3 - How to use built-in measures
  • Part 4 - Searching for more complex patterns
  • Part 5 - Deep dive into how SQL Pattern Matching works
  • Part 6 - More Advanced Topics

As my session is only 45 minutes long I am only going to cover the first three topics and leave you to work through the last three topics in your own time. During the 45 minute workshop I will be available to answer any questions via the live Q&A chat feature.
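To give you a flavour of what Parts 1 to 3 cover, here is a rough sketch of the kind of MATCH_RECOGNIZE query we will work through (the ticker table and its columns are assumed purely for illustration and are not the workshop schema): it searches for V-shaped patterns where a price falls and then rises again, and it uses the built-in MATCH_NUMBER() measure from Part 3.

-- Illustrative only: find V-shaped price patterns (fall then rise) per symbol
-- in an assumed ticker(symbol, tstamp, price) table.
SELECT symbol, match_no, start_tstamp, bottom_tstamp, end_tstamp
FROM   ticker
MATCH_RECOGNIZE (
  PARTITION BY symbol
  ORDER BY tstamp
  MEASURES MATCH_NUMBER()    AS match_no,       -- built-in measure (Part 3)
           STRT.tstamp       AS start_tstamp,
           LAST(DOWN.tstamp) AS bottom_tstamp,
           LAST(UP.tstamp)   AS end_tstamp
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO LAST UP
  PATTERN (STRT DOWN+ UP+)
  DEFINE DOWN AS DOWN.price < PREV(DOWN.price),
         UP   AS UP.price   > PREV(UP.price)
);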

There is a link to the full agenda on the invitation page. The OTN team will be providing a Database 12c Virtualbox VM that you will be able to download later this week. For the pattern matching session I will be providing the scripts to install our sample schema, the slides from the webcast and the workshop files which include a whole series of exercises that will help you learn about pattern matching and test your SQL skills. 

The big data team has kindly included my pattern matching content inside their Virtualbox image so if you want to focus on the sessions offered on the big data tracks but still want to work on the pattern matching exercises after the event then you will have everything you need already installed and ready to go!

Don't forget to register as soon as possible and I hope you have a great day…Let me know if you have any questions or comments.

Friday Jan 17, 2014

StubHub's Data Scientists reap benefits of integrated approach….

We have released yet another great customer video, this time with StubHub.

Many customers are still pulling data out of their data warehouse and shipping it to specialised processing engines so they can mine their data, run spatial analytics and/or build multi-dimensional cubes. The problem with this approach, as the team at StubHub points out, is that typically when you move the data to these specialised engines you have to work with a subset of the data that is sitting in your data warehouse. When you work with a subset of data you immediately start to impose compromises on your analytical workflows. If you can't work with all your data then you can't be sure that your analytical model is as good as it could be, and that could mean losing customers or missing out on additional revenue.

The other problem comes from everyone using their own favourite tool to do their analysis: how do you share your discoveries, and how do you develop a high level of corporate-wide analytical skills?

StubHub asked Oracle to help them resolve these two key problems...

Monday Jan 06, 2014

CaixaBank deploys new big data infrastructure on Oracle

CaixaBank is Spain's largest domestic bank by market share, with a customer base of 13.7 million. It is also Spain's leading bank in terms of innovation and technology, and one of the most prominent innovators worldwide. CaixaBank was recently awarded the title of the World's Most Innovative Bank at the 2013 Global Banking Innovation Awards (November 2013).

Like most financial services companies, CaixaBank wants to get closer to its customers by collecting data about their activities across all the different channels (offices, internet, phone banking, ATMs, etc.). In the old days we used to call this CRM, and then it morphed into the "360-degree view" and so on. Many companies have delivered these types of projects, and customers feel much more connected and in control of their relationship with their bank, but the capture of streams of big data has the potential to create another revolution in the way we interact with our bank. What banks like CaixaBank want to do is capture data in one part of the business and make it available to all the other lines of business as quickly as possible.

Big data is allowing businesses like CaixaBank to significantly enhance the business value of their existing customer data by integrating it with all sorts of other internal and external data sets. This is probably the most exciting part of big data because the potential business benefits are really only constrained by the imagination of the team working on these types of projects. However, that in itself does create problems in terms of securing funding and ownership of projects, because the benefits can be difficult to estimate, which is where all the industry use cases, conference papers and blog posts can help by providing insight into what is going on across the market in broad general terms.

To help them implement a strategic Big Data project, CaixaBank has selected Oracle for the deployment of its new Big Data infrastructure. This project, which includes an array of Oracle solutions, positions CaixaBank at the forefront of innovation in the banking industry. The new infrastructure will allow CaixaBank to maximize the business value from any kind of data and embark on new business innovation projects based on valuable information gathered from large data sets. Projects currently under review include:

  • Development of predictive models to improve customer value
  • Identifying cross-selling and up-selling opportunities
  • Development of personalized offers to customers
  • Reinforcement of risk management and brand protection services
  • More powerful fraud analysis 
  • General streamlining of current processes to reduce time-to-market
  • Support for new regulatory requirements

The Oracle solution (including Oracle Engineered Systems, Oracle Software and Oracle Consulting Services) consists of the implementation of a new Information Management Architecture that provides a unified corporate data model and new advanced analytic capabilities. (For more information about how Oracle's Reference Architecture can help you integrate structured, semi-structured and unstructured information into a single logical information resource that can be exploited for commercial gain, click here to view our whitepaper.)

The importance of the project is best explained by Juan Maria Nin, CEO of CaixaBank:

“Business innovation is the key for success in today’s highly competitive banking environment. The implementation of this Big Data solution will help CaixaBank remain at the forefront of innovation in the financial sector, delivering the best and most competitive services to our customers”.

The Oracle press release is here: https://emeapressoffice.oracle.com/Press-Releases/CaixaBank-Selects-Oracle-for-the-Deployment-of-its-New-Big-Data-Infrastructure-4183.aspx

Friday Dec 20, 2013

BAE Systems Chooses Big Data Appliance for Critical Projects

Here's another great story about using data warehousing and big data technologies to solve real-world problems with diverse sets of data on Oracle technology. BAE Systems is taking unstructured, semi-structured, operational and social media data and using it to solve complex problems such as financial crime, cyber security and digital transformation. The volumes of data that BAE deals with are very large, and this creates its own set of challenges in terms of optimising hardware and software to work efficiently and effectively together. Although BAE had their own in-house Hadoop experts, they chose Oracle Big Data Appliance for their Hadoop cluster because it's easier, cheaper, and faster to operate.

BAE is working with many telco customers to explore the new areas that are being opened up by using big data to manage browsing data and call record data. These data sources are being transformed to provide additional insight for network operations teams, to analyse customer quality and to drive marketing campaigns.

 

[Image: BAE Systems video]

 

Click on the image to watch the video, or click here: http://medianetwork.oracle.com/video/player/2940549413001

Thursday Dec 19, 2013

SQL Analytics Part 2 - Key Concepts

This post continues my first post on analytical SQL, "Introduction to SQL for Reporting and Analysis", which looked at the reasons why it makes sense to use analytical SQL in your data warehouse and operational projects. In this post we are going to examine the key processing concepts behind analytical SQL.

One of the main advantages of Oracle's SQL analytics is that the key concepts are shared across all functions - in effect we have created a unified SQL framework for delivering analytics. These concepts build on existing SQL features to provide developers and business users with a framework that is both flexible and powerful in terms of its ability to support sophisticated calculations. There are four key concepts that you need to understand when implementing features and functions relating to SQL analytics:

  1. Processing order
  2. Result-set Partitions
  3. Windows
  4. Current Row

Let's look at each of these topics in more detail.

1) Processing order

The execution workflow for SQL statements containing analytical SQL is relatively simple: first all the HAVING, GROUP BY and JOIN predicates are processed. The output from this step is then passed to the analytical functions so all the calculations can be applied. This typically involves the use of window functions, which are applied based on the partitions that have been defined, with analytic functions applied to each row in each partition. Finally, the ORDER BY clause is processed to provide control over the final output. It is useful to keep this workflow in mind when you are building your analytical SQL because it will help you understand the inputs flowing into your analytical functions and the resulting output.
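As a simple illustration (the table and column names below follow Oracle's SH sample schema and are used purely for illustration), the RANK() analytic function is evaluated after the join, the GROUP BY and the SUM() aggregation have been processed, so it can rank the grouped totals:

-- The analytic RANK() is applied to the aggregated result of the join and GROUP BY.
SELECT channel_desc
,      SUM(amount_sold) AS channel_sales
,      RANK() OVER (ORDER BY SUM(amount_sold) DESC) AS sales_rank
FROM   sales s
JOIN   channels c ON s.channel_id = c.channel_id
GROUP BY channel_desc
ORDER BY sales_rank;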

2) Result-set partitions

Oracle's analytic functions allow the input data set to be divided into groups of rows, which are referred to as "partitions". It is important to note that in this context the term "partition" is completely unrelated to the table partitioning feature.

These analytical partitions are created after the groups defined with GROUP BY clauses and can be used by any analytical aggregate functions such as sums and averages. The partitions can be based on any column that is part of the input data set, and individual partitions can be any size. It is quite possible to create a single partition containing all the rows from the initial query result set, a small number of very large partitions, or a large number of very small partitions where each partition contains just a few rows.

3) Windows

For each row in a partition it is possible to define a window over the data which determines the range of rows used to perform the calculations for the current row (the next section will explain the concept of the "current row"). The size of a window can be based on either a physical number of rows or a logical interval, which is typically time-based. The window has a starting row and an ending row, and depending on how the window is defined it may move at only one end or, in some cases, both ends.

Physical windows

For example, a cumulative sum function would have its starting row fixed at the first row in the partition, and the ending row would then slide from the starting row all the way to the last row of the partition to create a running total over the rows in the partition. In the example below the windows are fixed rather than sliding: the first SUM is computed over all the rows in each quarter's partition and the second over the entire result set.

SELECT Qtrs
,      Months
,      Channels
,      Revenue
,      SUM(Revenue) OVER (PARTITION BY Qtrs) AS Qtr_Sales   -- total across each quarter's partition
,      SUM(Revenue) OVER ()                  AS Total_Sales -- total across the whole result set
FROM sales_table


[Figure: fixed windows - quarterly and grand totals]
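A running-total version of the same query, where the start of each window is fixed at the first row of the quarter and the end slides down to the current row, might look like the following sketch (it assumes the Months column sorts in chronological order):

SELECT Qtrs
,      Months
,      Revenue
,      SUM(Revenue) OVER (PARTITION BY Qtrs ORDER BY Months
                          ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Cum_Sales
FROM sales_table;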

Logical windows

If the data set contains a date column then it is possible to use logical windows by taking advantage of Oracle's built-in time awareness. A good example of a window where the start row changes is the calculation of a moving average: both the starting and ending points slide so that a constant physical or logical range is maintained during the processing. The example below creates a four-period moving average, and the images show the current row, which is identified by the arrow, and the moving window, which is marked as the pink area:

[Figures: the four-period moving window sliding across the partition, steps 1 to 6]
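As a rough sketch (the table and column names here are assumed for illustration), a four-period moving average can be written either with a physical ROWS window of the current row plus the three preceding rows, or with a logical RANGE window based on a time interval:

-- Physical window: the current row plus the three preceding rows (four periods in total).
SELECT time_id
,      revenue
,      AVG(revenue) OVER (ORDER BY time_id
                          ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS moving_avg_4
FROM   monthly_sales;

-- Logical window: the current day plus the preceding three days, using Oracle's
-- built-in time awareness (time_id is assumed to be a DATE column).
SELECT time_id
,      revenue
,      AVG(revenue) OVER (ORDER BY time_id
                          RANGE BETWEEN INTERVAL '3' DAY PRECEDING AND CURRENT ROW) AS moving_avg
FROM   daily_sales;

Note that the current row is part of the window, which is why only three preceding rows are specified for a four-period average.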

The concept of a "window" is very powerful and provides a lot of flexibility in terms of being able to interact with the data. A window can be set as large as all the rows in a partition. At the other extreme it could be just a single row. Users may specify a window containing a constant number of rows, or a window containing all rows where a column value is in a specified numeric range. Windows may also be defined to hold all rows where a date value falls within a certain time period, such as the prior month.

When using window functions, the current row is included during calculations, so you should only specify (n-1) preceding rows when you are dealing with n items - see the next section for more information on the current row.

4) Current Row

Each calculation performed with an analytic function is based on a current row within a partition. The current row serves as the reference point: during processing it begins at the starting row and moves through the following rows until the end of the window is reached. For instance, a centered moving average calculation could be defined with a window that holds the current row, the six preceding rows, and the six following rows. In the example below, the running total for each row is the value of the current row plus the values from all the preceding rows in its partition, and the total is reset at the start of each new partition. The query creates running totals within a result set showing the total sales for each channel within a product category within a year:

SELECT calendar_year
,      prod_category_desc
,      channel_desc
,      country_name
,      sales
,      units
,      SUM(sales) OVER (PARTITION BY calendar_year, prod_category_desc, channel_desc
                        ORDER BY country_name) AS sales_tot_cat_by_channel -- running total, reset for each partition
FROM . . .

[Figure: the current row within a window]

Summary

This post has outlined the four main processing concepts behind analytical SQL. The next series of posts will provide an overview of the key analytical features and functions that use these concepts. In the next blog post we will review the analytical SQL features and techniques linked to enhanced reporting, which include windowing, lag-lead, reporting aggregate functions, pivoting operations and data densification for reporting and time series calculations. Although these topics will be presented in terms of data warehousing, they are actually applicable to any activity needing analysis and reporting.

If you have any questions or comments about analytical SQL then feel free to contact me via this blog.

Friday Dec 13, 2013

New! Oracle DW-Big Data Monthly Roundup Magazine

Are you interested in learning how Oracle customers are taking advantage of Oracle's Data Warehousing and Big Data Platform?  Want to keep up on the latest product releases and how they might impact your organization?  Looking for best practices that describe how to most effectively apply Oracle DW-Big Data technology?

Check out Oracle's new monthly magazine - Oracle DW-Big Data Monthly Roundup.  Powered by Flipboard - you can now view all this content on your favorite device. 

Let us know what you think!


Thursday Dec 12, 2013

Oracle releases Exadata X4 with optimizations for data warehousing

Supporting Quote

”Oracle Exadata Database Machine is the best platform on which to run the Oracle Database and the X4 release extends that value proposition,” said Oracle President Mark Hurd. “As private database clouds grow in popularity, the strengths of Oracle Exadata around performance, availability and quality of service set it apart from all alternatives.”

We have just announced the release of the fifth-generation of our flagship database machine: Oracle Exadata Database Machine X4. This latest release introduces new hardware and software to accelerate performance, increase capacity, and improve efficiency and quality-of-service for enterprise data warehouse deployments.

Performance of all data warehousing workloads is accelerated by new flash caching algorithms that focus on the table and partition scan workloads that are common in data warehouses. Tables that are larger than flash are now automatically partially cached in flash and read concurrently from both flash and disk to increase throughput.

Other key highlights are:

1) Improved workload management
Exadata X4-2 includes new workload management features that will improve the management of data warehouse workloads. Exadata now has the unique ability to transparently prioritize requests as they flow from database servers, through network adapters and network switches, to storage, and back.

We are using a new generation of InfiniBand network protocols to ensure that network-intensive workloads such as reporting, batch and backups do not delay response-time-sensitive interactive workloads, which is great news for IT teams that have to define and manage service level agreements.

2) Bigger flash cache for even faster performance
We have increased the amount of physical flash to 44 TB per full rack, and the capacity of the logical flash cache has increased by 100% to 88 TB per full rack.

3) Hardware driven compression/decompression

A feature that is unique to Exadata is Flash Cache Compression, which transparently compresses database data as it is written into flash, using hardware acceleration to compress and decompress data with zero performance overhead.

4) In-memory processing
For in-memory workloads we have increased the maximum memory capacity by 100% to 4 TB per full rack (using memory expansion kits), which means more workloads will be able to run in-memory with extremely fast response times.

5) Increased support for big data
To support big data projects we have increased the capacity of the high performance disks to over 200 TB per full rack, and for high capacity disks the storage capacity is now 672 TB per full rack. Once you factor in Oracle Exadata's compression technologies, a full rack is capable of storing petabytes of user data.

The full press release is here: http://www.oracle.com/us/corporate/press/2079925

Wednesday Dec 11, 2013

dunnhumby increases customer loyalty with Oracle Big Data

dunnhumby presented at this year's OpenWorld, where they outlined the how and why of data warehousing on Exadata. Our engineered system delivered a performance improvement of more than 24x. dunnhumby pushes its data warehouse platform really hard: with more than 280 billion fact rows and 250 million dimension rows for one large retailer client alone, dunnhumby's massive data requires the best performance the industry has to offer.

In Oracle Exadata, dunnhumby has found that solution. Using Oracle Exadata's advanced Smart Scan technology and robust Oracle Database features, this new environment has empowered its analysts to perform complex ad hoc queries across billions of fact rows and hundreds of millions of dimension rows in minutes or seconds, compared to hours or even days on other platforms.

You can download the presentation by Philip Moore - Exadata Datawarehouse Architect, Dunnhumby USA LLC -  from the OpenWorld site, see here: https://oracleus.activeevents.com/2013/connect/sessionDetail.ww?SESSION_ID=3412.

If you missed Philip's session at OpenWorld then we have just released a new video interview with Chris Wones, Director of Data Solutions at dunnhumby. During the interview Chris outlines some of the challenges his team faced when trying to do joined-up analytics across disparate and disconnected data sets, and how Exadata allowed them to bring everything together so that they could run advanced analytical queries that were just not possible before, which meant being able to bid on completely new types of contracts. The combination of Exadata and Oracle Advanced Analytics is delivering real business benefit to dunnhumby and its customers.

For more information about Oracle's Advanced Analytics option, check out Charlie Berger's advanced analytics blog: http://blogs.oracle.com/datamining and Charlie's Twitter feed: https://twitter.com/CharlieDataMine

To watch the video click on the image: 

[Video image: dunnhumby]

If the video does not start follow this link: http://medianetwork.oracle.com/video/player/2889835899001

Tuesday Dec 03, 2013

Big Data - Real and Practical Use Cases: Expand the Data Warehouse

As a follow-on to the previous post (here) on use cases, follow the link below to a recording that explains how to go about expanding the data warehouse into a big data platform.

The idea behind it all is to cover best practices for adding to the existing data warehouse and expanding the system (shown in the figure below) to deal with:

  • Real-Time ingest and reporting on large volumes of data
  • A cost effective strategy to analyze all data across types and at large volumes
  • Deliver SQL access and analytics to all data 
  • Deliver the best query performance to match the business requirements

[Figure: Roles of the components]

To access the webcast:

  • Register on this page
  • Scroll down and find the session labeled "Best practices for expanding the data warehouse with new big data streams"
  • Have fun
If you want the slides, go to SlideShare.

Wednesday Nov 27, 2013

Big Data - Real and Practical Use Cases

The goal of this post is to explain in a few succinct patterns how organizations can start to work with big data and identify credible and doable big data projects. This goal is achieved by describing a set of general patterns that can be seen in the market today.
Big Data Usage Patterns

The following usage patterns are derived from actual customer projects across a large number of industries, and they cross the boundaries between commercial enterprises and the public sector. These patterns are also geographically applicable and technically feasible with today's technologies.

This post will address the following four usage patterns:
  • Data Factory – a pattern that enables an organization to integrate and transform – in a batch method – large, diverse data sets before moving the data into an upstream system like an RDBMS or a NoSQL system. Data in the data factory is possibly transient and the focus is on data processing.
  • Data Warehouse Expansion with a Data Reservoir – a pattern that expands the data warehouse with a large-scale Hadoop system to capture data at a lower grain and higher diversity, which is then fed into upstream systems. Data in the data reservoir is persistent and the focus is on data processing and data storage, as well as the reuse of data.
  • Information Discovery with a Data Reservoir – a pattern that creates a data reservoir for discovery data marts or discovery systems like Oracle Endeca to tap into a wide range of data elements. The goal is to simplify data acquisition into discovery tools and to initiate discovery on raw data.
  • Closed Loop Recommendation and Analytics System – a pattern that is often considered the holy grail of data systems. This pattern combines analytics on historical data with event processing or real-time actions on current events, and closes the loop between the two to continuously improve real-time actions based on current and historical event correlation.

Pattern 1: Data Factory

The core business reason to build a Data Factory as it is presented here is to implement a cost-savings strategy by placing long-running batch jobs on a cheaper system. The project is often funded by not spending money on the more expensive system – for example by switching Mainframe MIPS off – and instead leveraging that cost saving to fund the Data Factory. The first figure shows a simplified implementation of the Data Factory.
As the image below shows, the data factory must be scalable, flexible and (more) cost effective for processing the data. The typical system used to build a data factory is Apache Hadoop or, in the case of Oracle's Big Data Appliance, Cloudera's Distribution including Apache Hadoop (CDH).

[Figure: Data Factory]

Hadoop (and therefore Big Data Appliance and CDH) offers an extremely scalable environment to process large data volumes (or a large number of small data sets) and jobs. Most typical is the offload of large batch updates, matching and de-duplication jobs and so on. Hadoop also offers a very flexible model, where data is interpreted on read rather than on write. This enables a data factory to quickly accommodate all types of data, which can then be processed in programs written in Hive, Pig or MapReduce - a simple de-duplication job, for example, might look like the sketch below.
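As a rough illustration only (the table and column names are assumptions, not taken from any specific project), the following Hive SQL keeps just the most recent record per customer from a raw landing table:

-- Illustrative sketch of a batch de-duplication job in the data factory:
-- keep only the latest record per customer_id from a raw landing table.
CREATE TABLE customers_deduped AS
SELECT customer_id, name, email, updated_at
FROM (
  SELECT r.*,
         ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY updated_at DESC) AS rn
  FROM   raw_customers r
) ranked
WHERE rn = 1;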
As shown above, the data factory is an integration platform, much like an ETL tool. Data sets land in the data factory, batch jobs process the data, and the processed data moves into the upstream systems. These upstream systems include RDBMSs, which are then used for various information needs. In the case of a data warehouse, this is very close to Pattern 2 described below, with the difference that in the data factory data is often transient and removed after the processing is done.
This transient nature of the data is not a required feature, but it is often implemented to keep the Hadoop cluster relatively small. The aim is generally just to transform data in a more cost-effective manner.
Where the upstream system is a NoSQL system, data is often prepared in a specific key-value format to be served up to end applications like a website. NoSQL databases work really well for that purpose, but the batch processing is better left to the Hadoop cluster.
It is also very common for data to flow in the reverse direction, with data from RDBMS or NoSQL databases flowing into the data factory. In most cases this is reference data, like customer master data. In order to process new customer data, this master data is required in the Data Factory.
Because of its low risk profile – the logic of these batch processes is well known and understood – and its funding from savings in other systems, the Data Factory is typically an IT department's first attempt at a big data project. The downside of a Data Factory project is that business users see very little benefit, in that they do not get new insights out of big data.

Pattern 2: Data Warehouse Expansion

The common way to drive new insights out of big data is pattern two. Expanding the data warehouse with a data reservoir enables an organization to expand the raw data it captures, in a system that adds agility to the organization. The pattern is shown graphically below.


[Figure: Data Warehouse expansion with a Data Reservoir]

A Data Reservoir – like the Data Factory from Pattern 1 – is based on Hadoop and Oracle Big Data Appliance, but rather than holding transient data, processing it and then handing it off, a Data Reservoir aims to store data at a lower grain than was previously stored and for a period much longer than before.
The Data Reservoir is initially used to capture data, aggregate new metrics and augment (not replace) the data warehouse with new and expansive KPIs or context information. A very typical addition is the sentiment of a customer towards a product or brand, which is added to a customer table in the data warehouse.
The addition of new KPIs or new context information is a continuous process. That is, new analytics on raw and correlated data should find their way into the upstream Data Warehouse on a very, very regular basis.
As the Data Reservoir grows and becomes known because of the new KPIs or context it delivers, users should start to look at the Data Reservoir as an environment to "experiment" and "play" with data. With some rudimentary programming skills, power users can start to combine various data elements in the Data Reservoir, using for example Hive (see the sketch below). This enables the users to verify a hypothesis without the need to build a new data mart. Hadoop and the Data Reservoir now become an economically viable sandbox for power users, driving innovation, agility and possibly revenue from hitherto unused data.
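As a purely illustrative sketch (the tables, columns and the hypothesis itself are assumptions, not from any specific customer), a power user might test the hypothesis "do customers who hit the support pages more than ten times churn more often?" directly against the raw data in the reservoir:

-- Illustrative Hive sketch: join raw web logs with reference customer data
-- to test a churn hypothesis without building a new data mart.
SELECT c.churn_flag
,      COUNT(*) AS num_customers
FROM (
  SELECT customer_id, COUNT(*) AS support_hits
  FROM   raw_weblogs
  WHERE  page_category = 'SUPPORT'
  GROUP BY customer_id
) w
JOIN customers c ON c.customer_id = w.customer_id
WHERE w.support_hits > 10
GROUP BY c.churn_flag;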

Pattern 3: Information Discovery

Agility for power users and expert programmers is one thing, but eventually the goal is to enable business users to discover new and exciting things in the data. Pattern 3 combines the data reservoir with a special information discovery system to provide a graphical user interface specifically for data discovery. This GUI emulates in many ways how an end user today searches for information on the internet.
To empower a set of business users to truly discover information, they first and foremost require a Discovery tool. A project should therefore always start with that asset.


Once the Discovery tool (like Oracle Endeca) is in place, it pays to leverage the Data Reservoir to feed the Discovery tool. The Data Reservoir is continuously fed with new data, and the Discovery tool is the business user's tool for creating ad-hoc data marts in the discovery environment. Having the Data Reservoir simplifies data acquisition for end users because they only need to look in one place for data.
In essence, the Data Reservoir now drives two different systems: the Data Warehouse and the Information Discovery environment. In practice users will very quickly gravitate to the appropriate system, but no matter which system they use, they now have the ability to drive value from data into the organization.

Pattern 4: Closed Loop Recommendation and Analytics System

So far, most of what has been discussed is analytics and batch based. But a lot of organizations want to move to a real-time interaction model with their end customers (or, in the world of the Internet of Things, with other machines and sensors).

[Figure: Closed-loop recommendation and analytics system]

Hadoop is very good at providing the Data Factory and the Data Reservoir, at providing a sandbox, and at providing massive storage and processing capabilities, but it is less good at doing things in real time. Therefore, to build a closed-loop recommendation system – which should react in real time – Hadoop is only one of the components.
Typically the bottom half of the last figure is akin to Pattern 2 and is used to capture all data, analyze the correlations between recorded events (detected fraud, for example) and generate a set of predictive models describing something like "if a, b and c occur during a transaction, mark it as suspect and hand it off to an agent". Such a model would, for example, block a credit card transaction.
To make such a system work it is important to use the right technology at both levels. Real-time technologies like Oracle NoSQL Database, Oracle Real-Time Decisions and Oracle Event Processing work on the data stream in flight. Oracle Big Data Appliance, Oracle Exadata/Database and Oracle Advanced Analytics provide the infrastructure to create, refine and expose the models.

Summary

Today's big data technologies offer a wide variety of capabilities. Leveraging these capabilities together with the environment and skills already in place, according to the four patterns described here, enables an organization to benefit from big data today. It is a matter of identifying the applicable pattern for your organization and then starting on the implementation.

The technology is ready. Are you?

Read-All-About-It: new weekly Oracle Data Warehousing newspaper

Thanks to Brendan Tierney for bringing this excellent online automated news service to my attention….

For a long time I have been wondering how to pull together all the articles from my favourite Twitter feeds, Facebook pages and blogs. Well, thanks to Brendan I have discovered a service called Paper.li. This weekend I spent some time setting up feeds from all my favourite sources related to data warehousing, big data, Exadata and other related Oracle technologies. The result is the "#Oracle DW-Big Data Weekly Roundup", which is designed to "keep you up to date on all the weekly SQL analytics, data warehousing and big data news from #Oracle". The newspaper is refreshed every Sunday night so that it is ready to read over breakfast on Monday morning. It is the perfect way to start the working week…



[Image: Oracle DW-Big Data Weekly Roundup newspaper]


If you want to subscribe to this weekly newspaper then go here: http://paper.li/OracleBigData/1384259272 and click on the red SUBSCRIBE link in the top right region of the screen. To give you some guidance on where all this content is coming from, I am pulling articles from the following sources:

  • Oracle Twitter accounts
    • OracleBigData
    • Oracle Database
    • SQLMaria (Optimizer)
    • CharlieDataMine (Advanced Analytics)
    • NoSQL Database
    • SQL Developer
    • Hardware team
    • BI technology
    • Profit Online Magazine
    • Mark Hornick (R Enterprise)
    • Oracle University
  • Oracle Blogs
    • Data Warehousing
    • Data Mining
    • R
  • Oracle Facebook pages
    • Data Warehousing and Big Data page

I am looking for feedback on how useful this is: we have so many ways to communicate with you that it is good to know what works and what does not.
If you want to subscribe to Brendan's data mining/analytics newsletter it is here: http://paper.li/brendantierney/1364568794.

Now I am off to investigate creating the same thing on Flipboard for all you iPad/iPhone and Android users…..hope to have an update for you on this very soon so stay tuned!

Friday Nov 22, 2013

Using Oracle Exadata to improve crop yields

It is not often you read about how the agricultural industry uses data warehousing, so this article in the latest edition of Oracle Magazine, with the related video from OOW 2013, on how Land O'Lakes is using Exadata caught my attention: The Business of Growing, by Marta Bright http://www.oracle.com/technetwork/issue-archive/2013/13-nov/o63lol-2034253.html

A little background on Land O'Lakes: Land O’Lakes is a US company that has grown far beyond its roots as a small cooperative of dairy farmers with forward-thinking ideas about producing and packaging butter. It is a Fortune 500 company and is now the second-largest cooperative in the United States, with annual sales of more than US$14 billion. Over the years, Land O’Lakes has expanded its operations into a variety of subsidiaries, including WinField Solutions (WinField), which provides farmers with a wide variety of crop seeds and crop protection products.

This implementation on our engineered systems highlights one of the key unique features of Oracle: the ability to run a truly mixed operational + data warehouse workload on the same platform, in the same rack. Land O'Lakes uses a lot of Oracle applications which push data into their data warehouse. In many cases we talk about the need for the data warehouse to support small windows of opportunity, and for WinField Solutions this is exactly why they invested in Oracle.

What makes seed sales unique and challenging is that they are directly tied to seasonal purchasing. "There's somewhat of a Black Friday in the seed business," explains Tony Taylor, director of technology services at Land O'Lakes. "WinField is a US$5 billion company that sells all of its seed during about a six-week period of time." With that sort of compressed sales cycle you need to have up-to-date information at your fingertips so you can make all the right decisions at the right time. Speed and efficiency are the key factors. If you watch the video, Chris Malott, manager of the DBA team at Land O'Lakes, explains the real benefits that her team and the business teams have gained from moving their systems (operational and data warehouse) onto our engineered systems platform.

[Video image: Land O'Lakes]

Click on the image above to link to the video. If the link does not work then click here: http://www.youtube.com/watch?v=-MDYI7IakR8

WinField is maximising the use of Oracle's analytical capabilities by incorporating huge volumes of imaging data which it then uses to help farmers make smarter agronomic decisions and ultimately gain higher yields. How many people would expect Oracle's in-database analytics to be driving improved crop yields? Now that is a great use case!

WinField also drills down and across all the information it collects to help identify new opportunities and what in the telco space we would call "churn". For instance, in January it runs reports to identify farmers who typically purchase seed and products in November but have not yet ordered. This gives WinField the information it needs to perform proactive outreach to farmers and find out if they simply haven't had time to place an order.

“We’re able to help farmers and our co-op members, even in cases where we’re not sure whether it’s going to directly benefit Land O’Lakes or WinField. Because this is truly a cooperative system, these are the people we work for, and we’re willing to invest in them.” -  Mike Macrie, vice president and CIO at Land O’Lakes. 

For next steps, Land O'Lakes is looking to move to Database 12c and I am sure that will open up even more opportunities for their business users to help farmers. Hopefully, they will be able to make use of new analytical features such as SQL pattern matching.


Thursday Nov 14, 2013

Data Scientist Boot camp (Skills and Training)

As almost everyone is interested in data science, take the boot camp to get ahead of the curve. Leverage this free Data Science Boot Camp from Oracle Academy to learn about the following:

  • Introduction: Providing Data-Driven Answers to Business Questions
  • Lesson 1: Acquiring and Transforming Big Data
  • Lesson 2: Finding Value in Shopping Baskets
  • Lesson 3: Unsupervised Learning for Clustering
  • Lesson 4: Supervised Learning for Classification and Prediction
  • Lesson 5: Classical Statistics in a Big Data World
  • Lesson 6: Building and Exploring Graphs

You will also find the code samples that go with the training, so you can get off to a running start.


Wednesday Nov 13, 2013

ADNOC talks about 50x increase in performance

In this new video Awad Ahmen Ali El Sidddig, Senior DBA at ADNOC, talks about the impact that Exadata has had on his team and the whole business. ADNOC is using our engineered systems to drive and manage all their workloads: from transaction systems, to payment systems, to data warehouse, to BI environment - a true disk-to-dashboard revolution using engineered systems. This engineered approach is delivering a 50x improvement in performance, with some queries running 100x faster! The IT team has even revolutionised some of their data warehouse related processes with the help of Exadata, and jobs that were taking over 4 hours now run in a few minutes.

Tuesday Nov 12, 2013

Big Data Appliance X4-2 Release Announcement

Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.

Software Focus

The focus for this 3rd generation of Big Data Appliance is:

  • Comprehensive and Open - Big Data Appliance now includes all Cloudera software, including Backup and Disaster Recovery (BDR), Search, Impala and Navigator, as well as the previously included components (like CDH, HBase and Cloudera Manager) and Oracle NoSQL Database (CE or EE).
  • Lower TCO than DIY Hadoop systems
  • Simplified Operations while providing an open platform for the organization
  • Comprehensive security including the new Audit Vault and Database Firewall software, Apache Sentry and Kerberos configured out-of-the-box

Hardware Update

A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis, here is a comparison between the old (X3-2) and new (X4-2) hardware:


                Big Data Appliance X3-2                       Big Data Appliance X4-2
CPU             2 x 8-Core Intel® Xeon® E5-2660 (2.2 GHz)     2 x 8-Core Intel® Xeon® E5-2650 V2 (2.6 GHz)
Memory          64GB                                          64GB
Disk            12 x 3TB High Capacity SAS                    12 x 4TB High Capacity SAS
InfiniBand      40Gb/sec                                      40Gb/sec
Ethernet        10Gb/sec                                      10Gb/sec

For all the details on the environmentals and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity than the previous generation while adding faster CPUs. Memory for BDA is expandable to 512 GB per node and can be expanded on a per-node basis, for example for NameNodes, HBase region servers or NoSQL Database nodes.

Software Details

More details in terms of software and the current versions (note that BDA follows a three-monthly update cycle for Cloudera and other software):


                  Big Data Appliance 2.2 Software Stack     Big Data Appliance 2.3 Software Stack
Linux             Oracle Linux 5.8 with UEK 1               Oracle Linux 6.4 with UEK 2
JDK               JDK 6                                     JDK 7
Cloudera CDH      CDH 4.3                                   CDH 4.4
Cloudera Manager  CM 4.6                                    CM 4.7

And as we said at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available to all BDA customers.

For more information:



About

The Data Warehouse Insider is written by the Oracle product management team and sheds light on all things data warehousing and big data.
