Thursday Dec 03, 2015
By Klaker-Oracle on Dec 03, 2015
Next week I will be in Birmingham at the annual UKOUG Tech conference. I will be presenting on some of the new SQL features that we added in Database 12c to support analysis of big data. To give you a taster of what to expect during my session, here is a sort-of relevant Dilbert cartoon, courtesy of Scott Adams:
Tuesday Dec 01, 2015
By Klaker-Oracle on Dec 01, 2015
Last week I was in Nuremberg at the excellent annual German Oracle user group conference - DOAG15. This was my first visit to DOAG and I was amazed at the size of the event. It is huge, but most importantly it is very well organized and definitely one of the best, possibly the best, user conferences I have attended.
...and then there was the chance to drink some Glühwein with the database PM team on the way to the speaker dinner.
Wednesday Mar 18, 2015
By Jean-Pierre Dijcks-Oracle on Mar 18, 2015
Prediction #7 - blending production workloads across cloud and on-premise - in Oracle's Enterprise Big Data Predictions 2015 is a tough nut to crack. Yet, we at Oracle think this is really the direction we all will go. Sure, we can debate the timing, and whether or not this happens in 2015, but it is something that will come to all of us who are looking towards that big data future. So let’s discuss what we think is really going to happen over the coming years in the big data and cloud world.
Reality #1 – Data will live both in the cloud and on-premise
We see this today. Organizations run Human Capital Management systems in the cloud and integrate these with data from outside cloud-based systems (think, for example, LinkedIn, staffing agencies etc.), while their general data warehouses and new big data systems are all deployed as on-premise systems. We also see the example in the prediction where various auto dealer systems uplink into the cloud to enable the manufacturer to consolidate all of their disparate systems. This data may be translated into data warehouse entries and possibly live in two worlds – both in the cloud and on-premise for deep-dive analytics, or in aggregated form.
Reality #2 – Hybrid deployments are difficult to query and optimize
We also see this today, and it is one of the major issues of living in the hybrid world of cloud and on-premise. A lot of the issues are driven by low-level technical limitations, specifically network bandwidth and upload/download capacity into and out of the cloud environment. The other challenges are really (big) data management challenges, in that they go into the art of running queries across two ecosystems with very different characteristics. We see a trend to use engineered systems on-premise, which delivers optimized performance for the applications, but in the cloud we often see virtualization pushing the trade-off towards ease of deployment and ease of management. These completely different ecosystems make optimization of queries across them very difficult.
Solution – Equality brings optimizations to mixed environments
As larger systems like big data and data warehouse systems move to the cloud, better performance becomes a key success criterion. Oracle is uniquely positioned to drive both standardization and performance optimizations into the cloud by deploying on engineered systems like Oracle Exadata and Oracle Big Data Appliance. Deploying engineered systems enables customers to run large systems in the cloud with the same performance they see today in on-premise deployments. This means that we do not live in a world divided into slow and fast, but in a world of fast and fast.
This equivalence also means that we have the same functionality in both worlds, and here we can sprinkle in some – future – Oracle magic, where we start optimizing queries to take into account where the data lives, how fast we can move it around (the dreaded network bandwidth issue) and where we need to execute code. Now, how are we going to do this? That is a piece of magic, and you will just need to wait a bit… suffice it to say we are hard at work solving this challenging problem.
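To make the idea concrete, here is a minimal sketch of the kind of location-aware cost reasoning described above: given where the data lives, how fast each site can scan it, and the bandwidth of the link between them, a planner picks the cheaper execution site. All function names, scan rates and bandwidth figures are illustrative assumptions, not Oracle's actual optimizer logic.

```python
# Toy cost model for hybrid cloud / on-premise query planning.
# Illustrative only: names and numbers are assumptions, not a real optimizer.

def transfer_seconds(data_gb: float, bandwidth_gbps: float) -> float:
    """Time to move data_gb gigabytes over a link of bandwidth_gbps gigabits/s."""
    return (data_gb * 8) / bandwidth_gbps  # convert GB to gigabits, then divide

def plan_query(data_site: str, data_gb: float,
               scan_rate_gb_s: dict, bandwidth_gbps: float) -> str:
    """Pick the cheaper site: scan where the data lives, or ship it and scan there."""
    local_cost = data_gb / scan_rate_gb_s[data_site]
    other = "cloud" if data_site == "on_prem" else "on_prem"
    remote_cost = (transfer_seconds(data_gb, bandwidth_gbps)
                   + data_gb / scan_rate_gb_s[other])
    return data_site if local_cost <= remote_cost else other

# 500 GB living on-premise, a 10 Gbps link, cloud scanning twice as fast:
# the transfer cost dominates, so the plan stays on-premise.
rates = {"on_prem": 5.0, "cloud": 10.0}  # GB scanned per second, per site
print(plan_query("on_prem", 500, rates, 10))
```

Even this toy model shows why bandwidth is "the dreaded issue": shipping the data only wins when the data set is small or the link is fast relative to the scan-rate difference between the two sites.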
Monday Sep 29, 2014
By Jean-Pierre Dijcks-Oracle on Sep 29, 2014
Looking around northern California and inside many technology kitchens makes me believe that we are about to see the Data Scientist bubble burst. And then I read the Fortune Magazine article on Peter Thiel – and the excerpt from Zero to One (his new book) in that article – and it dawned on me that this is one of the interesting ways to look at the Data Scientist bubble.
Thiel's Classification of Innovation
Without trying to simplify and/or bastardize Mr. Thiel's theory, the example in the Fortune Magazine article will make this visible to most people (I hope). In the article the analogy is: going from one typewriter to 100 typewriters is 1 to N; inventing a word processor moves us from 0 to 1. In other words, true innovation dramatically changes things by giving previously unknown power to the masses. It is that innovation that moves us from 0 to 1. Expansion of existing ideas – not true innovation – moves us from 1 to N. Of course, don't take my word on this but read the article or the book...
The Demise of the Human Data Scientist
The above paradigm explains the Data Scientist bubble quite nicely. Once upon a time companies hired a few PhD students who by chance had a degree in statistics, had learned how to program and had figured out how to deal with (large) data sets. These newly minted data scientists proved that there is potential value in mashing data together and running analytics on these newly created data sets, and thus caused a storm of publicity. Companies large and small are now frantically trying to hire these elusive data scientists or, something a little more down to earth, are creating data scientists (luckily not in the lab) by forming teams whose members each bring a part of the skill set to the table.
This approach starts to smell pretty much like a whole busload of typewriters being thrown at a well-known data analysis and data wrangling problem. Neither the problem nor the solution is new or innovative. Data Scientists are therefore not moving us from 0 to 1...
One could argue that while the data scientist quest is not innovative, at least it solves the problem of doing analytics. Fair, and by some measure correct, but there is one bigger issue with the paradigm of "data scientists will solve our analytics problem", and that is scale. Giving the keys to all that big data to only a few data scientists is not going to work, because these smart and amazing people are now becoming, often unbeknownst to them, an organizational bottleneck to gaining knowledge from big data.
The only real solution, our 0 to 1, is to expose a large number of consumers to all that big data, while enabling these consumers to apply a lot of the cool data science to all that data. In other words, we need to provide tools which include data science smarts. Those tools will enable us to apply the 80% common data science rules to the 80% of common business problems. This approach drives real business value at scale. With large chunks of issues resolved, we can then focus our few star data scientists on the 20% of problems or innovations that drive competitive advantage and change markets.
The bubble is bursting because what I am seeing is more and more tools coming to market (soon) that will drive data science into the day-to-day job of all business people. Innovation is not the building of a better tool for data scientists or hiring more of them; instead, the real 0 to 1 innovation is tools that make all of us data scientists and let us solve our own data science problems. The future of Data Science is smarter tools, not smarter humans.
Tuesday Apr 01, 2014
By Klaker-Oracle on Apr 01, 2014
Oracle has always been at the forefront of efforts to revolutionise your data center. To date, for obvious reasons, the focus has been on optimizing energy and space efficiency. As of today we are moving into an exciting new phase in terms of the look and feel of your data center. Oracle recently added a new fashion design team to its engineered systems group to help us re-imagine the next-generation data center, and the first exciting fruits of this new partnership of technology and fashion are now available for our customers to order…
For a short period only, Oracle is offering its data warehouse customers the chance to buy a limited edition EXADATA X4-2C. This new Exadata configuration is going to brighten up your data center with its exciting range of color coordinated racks! Now you can enjoy running those really sophisticated business queries in glorious technicolor. Most importantly, the great news is that we are not charging you anything extra for this fabulous new technicolor data warehouse experience:
HARDWARE, SOFTWARE AND COLOR, ENGINEERED TO WORK TOGETHER
Each color-coded rack comes with its own color-linked version of Enterprise Manager to add more colour, brightness and joy to all those day-to-day tasks as you can see below on these specially designed monitoring screens:
Your Exadata DBA is really going to thank you!
So what happens if you buy a 1/2 rack then slowly add more Exadata nodes? Great question - well, while stocks last you can actually create your own multi-colored Exadata rack. As always we are ahead of the game because we know what our customers want. SO WHY NOT HAVE A TECHNICOLOR DATA WAREHOUSE in your data center! Go on, you know it makes sense….
BUT YOU GOTTA HURRY - This new Exadata X4-2C range is a limited edition, special order only model. Stocks are limited. To brighten up your data center make sure you contact your Oracle Sales Representative right now because you do not want to miss out on this exciting opportunity to put one of these gorgeous, colour-coded dudes in your data center. And don't forget, only Oracle gives you HARDWARE, SOFTWARE AND COLOR, ENGINEERED TO WORK TOGETHER
The Data Warehouse Insider is written by the Oracle product management team and sheds light on all things data warehousing and big data.