Today's guest blog is from Johan Louwers, Oracle ACE Director in the field of Oracle Linux at Capgemini.
When we talk about big data, enterprises are commonly the first thing that comes to mind. But what about the scientific research field? It seems to come in a distant second when thinking about big data. An improbable thought when we consider that one of the biggest data creators in the world is CERN and, more specifically, the Large Hadron Collider project. The Large Hadron Collider provides a staggering amount of data (close to 100 terabytes of data a day) which needs to be analyzed by a multitude of different researchers (watch this video to learn about the success CERN has already achieved with Oracle Big Data Discovery).
As a next step, CERN is working on another project named Future Circular Collider. The Future Circular Collider (FCC) study aims to develop a conceptual design for a particle accelerator infrastructure in a global context, with an energy significantly above that of previous circular colliders (SPS, Tevatron, LHC). It will explore the feasibility of different particle collider scenarios with the aim of significantly expanding the current energy and luminosity frontiers. It also aims to complement existing technical designs for linear electron/positron colliders (ILC and CLIC). In short, this is a big project that carries with it big implications and even bigger sets of data.
For the FCC study, CERN scientists and researchers are looking at the implementation methods used in other collider projects as a basis. Part of the study is to analyze LHC data, both to improve the current LHC project and to inform the FCC design. They aim to study:
In addition, part of the FCC study focuses on:
To ensure this study is done correctly, CERN has implemented Oracle Exalytics and Oracle Big Data Discovery in combination with a Cloudera CDH 5.5.1 cluster (Cloudera has been a close partner of Oracle's for several years and is the main software provider for the Oracle Big Data Appliance). It's important for CERN to be able to analyze all the data that is collected so their researchers have an accurate and holistic view of their systems. The deployment, as it has been implemented at CERN, is shown below in a high-level diagram. This configuration allows CERN's researchers to work with an interactive catalog of all data, assessing attribute statistics, data quality, and outliers, while simultaneously conducting quick data exploration or creating dashboards and applications on the fly. By analyzing the data collected from the LHC project's control and monitoring systems, CERN hopes to reach a point where these systems become intelligent, predictive, and proactive based on massive big-data analysis.
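To make the cataloging step above concrete, here is a minimal sketch (in plain Python, not Oracle Big Data Discovery code) of the kind of per-attribute profiling a data-discovery catalog computes for each column of a dataset: record counts, null rates, basic statistics, and simple outlier flagging. The function name and the sample readings are hypothetical, purely for illustration.

```python
import statistics

def profile_attribute(values):
    """Profile one numeric attribute: counts, null rate, basic stats, outliers.

    Illustrative only -- real discovery tools compute richer profiles
    (cardinality, histograms, type inference) at cluster scale.
    """
    present = [v for v in values if v is not None]
    stats = {
        "count": len(values),
        "nulls": len(values) - len(present),
        "min": min(present),
        "max": max(present),
        "mean": statistics.mean(present),
    }
    # Flag values more than 2 sample standard deviations from the mean.
    # (A crude rule; production profilers use more robust methods.)
    if len(present) > 1:
        stdev = statistics.stdev(present)
        stats["outliers"] = [
            v for v in present
            if stdev and abs(v - stats["mean"]) > 2 * stdev
        ]
    return stats

# Hypothetical sensor readings: one missing value, one extreme spike.
readings = [20.1, 19.8, 20.3, None, 20.0, 19.9, 95.0]
print(profile_attribute(readings))
```

Running a profile like this over every attribute is what turns a raw data lake into the browsable, quality-annotated catalog described above.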
This setup shows how CERN is using the combination of a Cloudera solution and Oracle Exalytics for big data discovery. The same setup can apply for other solutions and industries that deal with large amounts of data. In a situation where you need to develop a solution that can provide the ability to discover information in a large set of data, the above setup would be a great starting point for evaluation.
We're excited to welcome Oracle experts to share their views and perspectives on today's trends on this blog. Johan is an Oracle ACE Director in the field of Oracle Linux and systems. Specialized in Oracle technology, infrastructure solutions, and cloud computing, Johan provides active advice and support to enterprises around the globe at both an architectural and a hands-on technical level. He currently leads the Global Oracle Architect office as the Capgemini Chief Architect. He is also a strong promoter of open-source technology for implementing disruptive and cutting-edge solutions within enterprises. You can follow Johan on Twitter @johanlouwers, LinkedIn, and his blog at johanlouwers.blogspot.com.