

Big Data – Fast and Square!

Guest Author
By Petr Hosek, Senior Director, Oracle EMEA Consulting 

 

"A Big Data project is actually just another Data Warehouse implementation, business as usual!" I often hear this opinion, and I must say: "Absolutely wrong, dude!"

Firstly, the sheer amount of data shatters all the traditional wisdom about moving, sharing or backing up data. You have to think about the data all the time, from solution design through project planning down to the implementation itself. And you need to avoid any unnecessary movement of data from one processing tool to another; wherever you can, move the tools to the data instead!
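
To make the "move tools to the data" principle concrete, here is a minimal sketch in PySpark; the choice of engine, the paths and the column names are my own assumptions, not anything this article prescribes. The heavy filtering and aggregation execute on the cluster nodes that already hold the data, and only the small result set is ever written out:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("move-compute-to-data").getOrCreate()

# Read the events in place on distributed storage -- no bulk
# export into an external processing tool.
events = spark.read.parquet("hdfs:///data/events")

# Filter and aggregate where the data lives; Spark ships the
# computation to the nodes that hold the data blocks.
daily_purchases = (
    events
    .filter(F.col("event_type") == "purchase")
    .groupBy(F.to_date("event_ts").alias("day"))
    .count()
)

# Only the small aggregated result leaves the original store.
daily_purchases.write.mode("overwrite").parquet("hdfs:///marts/daily_purchases")

spark.stop()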

Secondly, the plethora of tools available on the market, with new tools (or at least new versions) popping up almost every day, poses a serious challenge in choosing the right ones. Just a few months between a successful Proof of Concept and the start of the full project can make the set of tools you used for the PoC completely obsolete. And what will you do if, once you have frozen your list, a new version arrives with the killer function you have been waiting for?

Next, many Big Data platform building blocks (including the open source ones) are missing basic built-in security features. Hence, you have to apply rigorous scrutiny when designing the whole platform and its audit processes!
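
Because the article names no specific mechanism, the following is only a hedged sketch of the kind of application-level guard teams often end up writing when a building block ships without security: every read is authorized against a role table and recorded in an audit log first. All names here (ROLE_GRANTS, audited_read, the datasets) are hypothetical:

import json
import time

# Hypothetical minimal role -> dataset grants; in a real platform
# these would live in a managed policy store.
ROLE_GRANTS = {
    "analyst": {"events", "daily_purchases"},
    "admin": {"events", "daily_purchases", "raw_pii"},
}

def audited_read(user, role, dataset, reader):
    """Authorize the caller and append an audit record before reading."""
    allowed = dataset in ROLE_GRANTS.get(role, set())
    record = {
        "ts": time.time(),
        "user": user,
        "role": role,
        "dataset": dataset,
        "allowed": allowed,
    }
    with open("access_audit.log", "a") as log:  # append-only audit trail
        log.write(json.dumps(record) + "\n")
    if not allowed:
        raise PermissionError(f"{user} ({role}) may not read {dataset}")
    return reader(dataset)

# Example: audited_read("alice", "analyst", "events", load_dataset)
# logs the access and then calls load_dataset("events").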

 


Watch the short teaser here


Watch the full-length webcast here


Also, the Big Data domain may convey the feeling of a freewheeling world with flexible rules and untamed myriads of data. The opposite is true: a disciplined approach to project management is a must to avoid costly errors that may surface only in the later phases of a large implementation program.

To make the life of our customers easier, Oracle Consulting has decided to document and structure the experience of many different Big Data projects deployed over the last three years. The final product has a fancy name, Data Factory Engine (DFE), and in its first incarnation it was offered as one big "elephant": take it or leave it. Based on feedback from numerous customer discussions, we have decided to slice it into logical components. Hence, DFE has evolved into a toolbox of twelve modules offering architecture blueprints, process best practices, middleware code and product parameterization. In a specific customer case, only those modules that are really needed are selected.

Among the key benefits of employing DFE in a Big Data project are faster implementation, getting all your data under control *), and the prevention of costly mistakes across the whole project.

If you want to learn more about the benefits and the features of DFE as a whole and its individual modules, watch this 20-minute video!

*) Read the related article about why Big Data projects actually have to encompass all of your data.

 

 
