Sunday, Sept. 22, 2013

T5-2 for high-performance financial trading

In this post, I will focus on the second testimony, reported by François Napoleoni. Here, the goal was to select the best platform to support the growth and upgrade of a financial trading application. The comparison was made between:
  1. T5-2, running Oracle VM, Solaris, Sybase and the financial application
  2. x86, running VMware, Red Hat, Sybase and the financial application

The decision criteria were:

  • Simplified architecture
  • Systems performance in real life (what has been tested and measured)
  • Platform stability
  • Single point of support
  • Leverage internal skills
  • Strong security enforced between the different virtual environments
  • ROI of the solution

For those of you who understand French, you can listen to the few-minute video below. For English readers, more details follow in this post.

Oracle VM virtualization on T4-4: architecture and service catalog

Last Tuesday, during the SPARC Showcase event, Jean-François Charpentier, François Napoleoni and Sébastien Flourac delivered very interesting use cases of the latest Oracle systems at work: an Oracle VM virtualization project on T4-4, intensive financial trading on T5-2, and a complete information system migration to Oracle SuperCluster.

In this post I will start by covering the main points Jean-François Charpentier focused on to build a consolidated T4-4 platform that is effective for his business users.

Oracle VM virtualization on T4-4 Architecture

As is often the case, Mr Charpentier had to deal with existing environments that needed to be taken into account when building this new virtual platform. First, he had to provide a platform that could consolidate all the existing environments, the main drivers being:

  • total memory requirement of the existing assets
  • multi-tenant capability to share the platform securely between several different networks
  • compliance with strong SLAs

The T4-4 was the best building block to cover these requirements:

  • memory: up to 2 TB
  • multiple network connections: up to 16 PCIe expansion slots
  • Oracle VM built in, with redundant channel capability and live migration
  • Solaris binary compatibility to enable easy consolidation of existing environments

Overall Architecture Design

The deployment choice was to set up two Oracle VM T4-4 clusters per site, as follows:

To cover his SLA requirements, Mr Charpentier built in redundancy not only by providing multiple T4-4 nodes per Oracle VM cluster, but also within Oracle VM itself. For Oracle VM, he chose to make the virtual storage and network access layers redundant, as shown in the following two diagrams.

Oracle VM Virtual Multipathing with alternate I/O Domain

Oracle VM Virtual Network Access through IPMP

This entire virtual network layer is linked to different network backbones, thanks to the 16 PCIe expansion slots of the T4-4, as illustrated in the following diagram.
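To give an idea of what this redundancy looks like at the command line, here is a minimal sketch of the two mechanisms just described, assuming an Oracle VM for SPARC setup with a primary and an alternate I/O domain; all service, device, link and guest names below are hypothetical:

# Virtual disk multipathing: export the same LUN from the primary and the alternate
# I/O domain under one mpgroup, then give the guest a single virtual disk
ldm add-vdsdev mpgroup=data1 /dev/dsk/c0t600A0B8000123456d0s2 data1@primary-vds0
ldm add-vdsdev mpgroup=data1 /dev/dsk/c0t600A0B8000123456d0s2 data1@alternate-vds0
ldm add-vdisk data1 data1@primary-vds0 guest1

# Network redundancy: two vnets coming from two virtual switches are grouped with IPMP inside the guest
ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a 192.168.10.21/24 ipmp0/v4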

Another option would have been to deploy Oracle Virtual Network, enabling disk and network access with only two PCIe slots at the server layer.

Oracle VM on T4-4 Service Catalog

Beyond the architecture choices needed to comply with strong SLAs, the development of a service catalog is also key to moving IT toward a service-provider model. That is exactly what Jean-François Charpentier put in place, as follows:

By putting in place this new virtual platform with its associated service catalog, Mr Charpentier was able to give his business better agility thanks to easier and faster deployments. This platform has become the standard for all Solaris deployments in his business unit, and they expect to reach 90% to 100% Solaris virtualization by 2014.

Thursday, Sept. 05, 2013

Preparing for #OOW: DB12c, M6, In-memory, Clouds, Big Data... and IoT

It's always difficult to fit the upcoming Oracle Open World topics and all its sessions into one title. Even if "Simplifying IT. Enabling Business Transformation." makes it clear what Oracle is focusing on, I wanted to be more specific about the "how". For those of you who attended the Hot Chips conference, some of the acronyms will be familiar; some may not (I will come back to "IoT" later). For those of you attending, or those who will get the session presentations once they are available online, here are a few things you don't want to miss. They will show you not only what Oracle R&D has done for you since last year, but also what customers like you have implemented thanks to the red stack and its partners, whether ISVs or SIs.

First, don't miss the Oracle executive keynotes. Second, have a look at the general sessions delivered by VPs of Engineering to get a more in-depth view of the direction. And last but not least, network with your peers, whether in specific deep-dive sessions, experience-sharing sessions, or the demo grounds, where you will be able to see the technologies in action with Oracle developers and subject-matter experts. You will find a small selection hereafter.

Oracle Strategy and roadmaps

Industry Focus

Project implementation feedback & lessons learned

Deep-dive with the Experts

Learn how to do it yourself (in 1 hour): Hands-on-Labs

Watch the technologies at work: Demo Grounds

This digest is an extract of the many valuable sessions you will be able to attend to accelerate your projects and IT evolution.

Thursday, June 13, 2013

3-minute video of last month's Oracle Solaris Users Group event

A quick report on last month's Oracle Solaris Users Group event in Paris... in French...

Friday, May 17, 2013

Why OS matters: Solaris Users Group testimony

On Wednesday evening, a month after the new SPARC T5 & M5 servers launch in Paris, the French Solaris users group got together to get the latest from Oracle experts on SPARC T5 & M5, Oracle Virtual Network, and the new enhancements inside Solaris 11.1 for Oracle Database. They also came to share their project experiences and lessons learned leveraging Solaris features: René Garcia Vallina from PSA did a deep dive on ZFS internals and best practices for SAP deployments, and Bruno Philippe explained how he managed to consolidate 100 Solaris servers into 6 thanks to Solaris 11-specific features.

It was very interesting to see all the value that an operating system like Solaris can bring. Today, operating systems are often deeply hidden in the bottom layers of the IT stack, and we tend to forget that this is the key layer for leveraging hardware innovations (new CPU cores, SSD storage, large memory subsystems,...) and exposing them to the application layers (databases, Java application servers,...). Solaris goes even further than most operating systems in performance (I will get back to that point), observability (with DTrace), reliability (predictive self-healing,...), and virtualization (Solaris ZFS, Solaris Zones & Solaris network virtualization, also known as project "Crossbow").

All of these unique features bring even more value and benefits to IT management and operations in a time of cost optimization and efficiency. During this event, that came through in all the presentations and exchanges.

Solaris and SPARC T5 & M5

As Eric Duminy explained in the introduction to his session on the new SPARC T5 & M5, we are looking at a new paradigm of CPU and system design. Following Moore's law, we are using transistors in completely new ways. It is no longer a race for frequency: if you want to achieve performance gains, you need more. You need to bring application features directly to the CPU and operating system level. Looking at the SPARC T5, we are talking about a 16-core processor with 8 threads per core, with up to 8 sockets and 4 TB of RAM in a SPARC T5-8 server of only 8 rack units! That also means 128 cores and 1,024 threads, and even more for the M5-32, with up to 192 cores, 1,536 threads and 32 TB of RAM! That's why the operating system is a key piece that needs to handle such systems efficiently: the ability to scale to that level, to place process threads and their associated memory on the right cores to avoid context switches, and to manage memory so it feeds the cores at the right pace. This is exactly what we have done inside Solaris, and even more so with Solaris 11.1, to leverage these new SPARC T5 & M5 servers and get the results we announced a month ago at the launch.

Of course we don't stop there. To get the best out of the infrastructure, we design at the CPU, system and Solaris levels to optimize for the application, starting with the database. This is what Karim Berrah covered in his session.

Solaris 11.1 unique optimizations for Oracle Database

Karim first explained the reasoning behind the completely new virtual memory management in Solaris 11.1, something that directly benefits Oracle Database for PGA and SGA allocation. You will experience it directly at database startup (twice as fast!). The new virtual memory system will also benefit ALL your applications; just look, for example, at the mmap() function, which is now 45x faster (it is used for all shared libraries). Beyond performance, optimizations have been made in security, audit, and management. For example, with the upcoming release of Oracle Database, you will be able to dynamically resize your SGA and also give the DBA greater visibility into data-path performance thanks to a new DTrace table available directly inside the database: a tight integration between Oracle Database and unique Solaris features.

Alain Chereau, one of our performance gurus from the EMEA Oracle Solution Center, provided his insight and expertise. He reminded us in particular that performance is achieved when ALL the layers work well together, and that "your OS choice has an impact on the DB and vice versa. Something to remember for your critical applications." Alain closed the session with final advice on the best use of SSDs for Oracle Database and Solaris ZFS. In short, SSDs are aligned on 4k blocks. For Oracle Database, to start with, redo logs can be written in 4k blocks; this needs to be specified at redo log creation through the block size setting. For Solaris, ZFS knows about SSDs and adapts directly. That's why putting the ZFS secondary cache on SSD ("readzilla") is a very good idea, and a way to avoid the bad behavior introduced by new "blind" storage tiering when combined with ZFS. Just put SSD drives for the ZFS secondary cache directly inside your T5 or M5 servers and you are done. This is an important topic: even though a majority of customers run Oracle Database on ASM in production to benefit from grid and Oracle RAC security and scalability, that may be different for development environments. As a matter of fact, for development systems most customers leverage Solaris ZFS and its compression and virtually unlimited clone and snapshot capabilities.
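To illustrate both tips, here is a minimal sketch; the pool name, disk device, redo log path and sizes are hypothetical, and the 4k redo log block size clause assumes Oracle Database 11g Release 2 or later (depending on the storage, it may also require the device to be reported as 4k-sector):

# Add an SSD as ZFS secondary cache (L2ARC) on an existing pool
zpool add datapool cache c0t5000CCA012345678d0

# Create a redo log group written in 4k blocks
sqlplus / as sysdba <<EOF
ALTER DATABASE ADD LOGFILE GROUP 4 ('/oradata/DEV/redo04.log') SIZE 1G BLOCKSIZE 4096;
EOF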

This brings me to René's session on SAP on ZFS...

Lessons learned from deploying SAP on ZFS

Clearly one of the most technical sessions of this event. Congratulations to René for a very clear explanation of ZFS allocation mechanisms and algorithm policies. I will start with René's conclusion: "Don't follow your ISV (SAP in this case) recommendations blindly." In fact, PSA was experiencing performance degradation and constant I/O activity even with very few transactions on the application side. This was because SAP recommends filling the SAP data filesystem to more than 90%! A very bad idea when you put your data on a copy-on-write (COW) filesystem like ZFS, where I always recommend keeping around 20% free space to allow the COW operations to take place. That is, of course, the new rule for SAP deployments at PSA.

So if you already have ZFS deployed with this rule in place, you don't have to read further; just keep doing it and move directly to the next topic... Otherwise, you may currently be facing some performance problems as well. To identify which of your ZFS pools are in this situation, René provided a nice DTrace command that will tell you:

# dtrace -qn 'fbt::zio_gang_tree_issue:entry { @[pid]=count();  }' -c 'sleep 60'

Then, to solve the problem, you understand that you need to add free space (in one shot) so the COW operations can proceed. The best way is to add a vdev (for more details: Oracle Solaris ZFS: A Closer Look at Vdevs and Performance). You could also use a zpool replace with a bigger vdev, but that's not the best option in the long run. If you go through a whole modification cycle of the pool's content, your zpool will "defragment" by itself. If you want to "defragment" the ZFS pool immediately and you have a database, you can do it through "alter table move" operations (special thanks to Alain Chereau for the tip). For standard files, you need to copy them and rename them back, or better, do a zfs send | zfs receive to another free zpool and you are done.
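Here is a minimal sketch of those two remediation paths; the pool, device and dataset names are hypothetical:

# Add a vdev (here a mirrored pair) to bring free space to the pool in one shot
zpool add sappool mirror c0t5000CCA000000001d0 c0t5000CCA000000002d0

# Or rewrite a dataset onto another, emptier pool to "defragment" its blocks
zfs snapshot sappool/sapdata@migrate
zfs send sappool/sapdata@migrate | zfs receive newpool/sapdata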

From 100 Servers to 6 thanks to Solaris 11

Last but not least, we also had another deep-dive session during this event, with a live demo! Thanks to Bruno Philippe, President of the French Solaris Users Group, who shared with us his project of consolidating 100 servers, running Solaris 8 through Solaris 10, into 6 servers, with minimal to no business impact allowed! Bruno achieved his project thanks to unique new Solaris 11 features: Solaris network virtualization, combined with Solaris Zones P2V and V2V, and the SPARC hardware hypervisor (Oracle VM for SPARC, also known as "LDoms", or Logical Domains).

I invite you to visit Bruno's blog for more details: Link Aggregations and VLAN Configurations for your consolidation (Solaris 11 and Solaris Zones)
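As a pointer to what such a setup involves, here is a minimal sketch of the Solaris 11 network virtualization commands typically used in this kind of consolidation; the link names, VLAN ID and VNIC name are hypothetical:

# Aggregate two physical links for bandwidth and resiliency
dladm create-aggr -l net0 -l net1 aggr0

# Create a VNIC on the aggregation, tagged with VLAN 100, to be handed to a zone or logical domain
dladm create-vnic -l aggr0 -v 100 vnic100

# The VNIC can then be referenced in the zone configuration (ip-type=exclusive, physical=vnic100)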

I am awaiting his next entry explaining the details of the V2V and P2V operations that he demonstrated to us live on his laptop through a Solaris 11 x86 VirtualBox image.

I hope to see you at the upcoming Solaris and SPARC events to share your feedback and experience with us.

The upcoming Paris events will take place on June 4th, for Datacenter Virtualization with a focus on storage and network, and July 4th for a special session on the new SPARC servers and their business impact.

Tuesday, March 26, 2013

New SPARC Servers Launched Today: Extreme Performance at Exceptionally Low Cost

It will be very difficult to summarize in a short post all the details and the already available customer and ISV results from Oracle's investment, design and ability to execute on a complete SPARC server renewal, with not one but two processors launched: SPARC T5 and M5. It is somewhat captured in the title of this entry, in Larry's own words: "extreme performance at exceptionally low cost". To give you a quick idea, we just announced 8,552,523 tpmC with a single T5-8 (a new 8-socket T5 mid-range server). On top of that, "extreme scalability with extreme reliability": with the M5-32 server, we can scale up to 32 sockets and 32 TB of memory in a mission-critical system.

New way of designing systems

As John Fowler said: "this starts with design." Here at Oracle, we have a new way of designing. Historically, systems were designed by putting servers, storage, network and OS together. At Oracle we add database, middleware and applications into the design. We think about what it takes for the coherency protocols and the interfaces, and design around those points... and more.
Today we introduce not one but two processors, with a whole family of servers associated with them. Thanks to a common architecture, they are designed to work together. All of this, of course, runs Solaris. You can run Solaris 10, Solaris 11 and virtualize. No break in binary compatibility.

Direct benefit for your applications... at lowest risk... and lowest cost

This is good for our customers and our ISVs, enabling them to run their applications unchanged on these new platforms with unmatched performance gains, at the lowest cost and lowest risk, thanks to binary compatibility and the new server designs of the Oracle era. There were many customer examples on stage, so I will just pick two: SAS moving from an M9000 to an M5-32 with a 15x gain overall, and Sybase moving from an M5000 to a T5-2 with an 11x gain overall. These are, in my opinion, very important because they reflect real applications and customer experiences, many of them in financial services, and these customers have already jumped onto the new systems (thanks to the beta program).

To get a better idea of what the new SPARC T5 and M5 will bring to your applications, whether Siebel, E-Business Suite, JD Edwards, Java, or SAP, have a look here: https://blogs.oracle.com/BestPerf/ at the 17 world records... on performance and price.

Tuesday, Dec. 04, 2012

Understanding Oracle Strategy, Cloud and Engineered Systems

Sometimes small self-explanatory videos are better than long talks... I wanted to share with you today 3 short videos explaining Oracle Strategy, our Cloud positioning and what Engineered Systems bring to your IT. Enjoy...

Oracle Strategy....

… the Cloud...

and Oracle Engineered Systems...

Tuesday, Nov. 20, 2012

#OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and with customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012.

During this event we will give an update on Oracle and clouds, from private to public and hybrid models. So, in preparing this session, I thought it was a good start to take stock of cloud computing in France, and of CIO requirements in particular. Since the first Cloud Camp in Paris in 2009, the market has evolved, but the basics are still the same: think hybrid.

From Traditional IT to Clouds

One size doesn't fit all, and for big companies that already have an IT estate in place, there will be parts eligible for an external (public) cloud and parts that will be required to stay inside the firewall, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the cloud computing trend on IT, as reported by Forrester, is the pressure it puts on CIOs to evolve toward the same model that end users are now used to in their day-to-day lives, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a global service provider", or for some into "IT 'is' the business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models toward being a private cloud service provider.

In this journey, there is still a big difference between most existing external clouds and a firm's IT: the number of applications a CIO has to manage. Most cloud providers today are highly specialized, but at the end of the day very few business processes rely on only one application. So CIOs have to combine everything, external and internal. And for the internal parts that they will have to evolve into a private cloud, the scope can be very large. This will often require CIOs to move from their traditional approaches to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed.

So let's have a look at the different cloud models: what type of users they address, what value they bring, and most importantly what needs to be done by the cloud provider and what is left to the user.

IaaS, PaaS, SaaS: what's provided and what needs to be done

First of all, the cloud provider will have to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. As we can see in the picture above, providing pure IaaS leaves a lot for the end user to cover, which is why the end users targeted by this cloud service are IT people.

If you want to bring more value to developers, you need to provide them with a ready-to-use development platform, which is what PaaS stands for: providing not only processing power, storage and OS, but also the database and middleware platform.

SaaS is the last mile of the cloud, providing an application ready to use by business users; the remaining part for the end users is configuring and tailoring the application to their specific usage.

In addition, there are common challenges encompassing all types of cloud services:

  • Security: covering all aspects, not only user management but also data flows and data privacy

  • Chargeback: measuring what is used and by whom

  • Application management: providing capabilities not only to deploy but also to upgrade, from the OS for IaaS, through database and middleware for PaaS, to a full business application for SaaS.

  • Scalability: the ability to evolve ALL the components of the cloud provider stack as needed

  • Availability: the ability to cover "always on" requirements

  • Efficiency: providing an infrastructure that leverages shared resources efficiently while still complying with SLAs (performance, availability, scalability, and the ability to evolve)

  • Automation: providing orchestration of ALL the components across the whole service life cycle (deployment, growth & shrink (elasticity), upgrades,...)

  • Management: providing monitoring, configuration and self-service up to the end users

Oracle Strategy and Clouds

For CIOs to succeed in their private cloud implementation, they need to cover all those aspects for the life cycle of each component they select to build their cloud. That's where a multi-vendor, layered approach falls short in terms of efficiency.

That's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient cloud service solutions for IaaS, PaaS and SaaS. We go as far as embedding software functions in hardware (at the storage and processor levels,...) to ensure the best SLAs with the highest efficiency.

The beauty of it, since we rely on standards, is that the Oracle components you are running today in-house are exactly the same ones we use to build clouds, bringing you flexibility, reversibility and a fast path to adoption.

With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for managing ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feeling of what that brings just in terms of implementation project timeline: with Oracle SPARC SuperCluster, for example, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, database storage (Exadata Storage Cells in this case), application storage, and all network configuration.

This strategy enables CIOs to build cloud services very quickly, removing not only the complexity of integrating everything together but also the complexity and cost of automation and evolution.

I invite you to discuss all those aspects with regard to your particular context, face to face, on November 28th.

Thursday, Oct. 04, 2012

#OOW 2012: Big Data and The Social Revolution

As Cognizant CSO Malcolm Frank said about the "Future of Work", and how the business should prepare for the new generation not only of devices and the "Internet of Things" but also of their users ("the Millennials"), moving from "consumers" to "prosumers": we are at a turning point today that is bringing us to the next IT architecture wave. It is no longer just about putting Big Data, social networks and customer experience (CxM) on top of old existing processes; it is about embracing the next curve, by identifying which processes need to be improved, but also, and more importantly, which processes are obsolete and need to be gotten rid of, and which new processes need to be put in place. It is about managing both the hierarchical, structured enterprise and its social connections and influencers inside and outside the enterprise. And this applies everywhere, all the way to utilities and smart grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time, but about understanding, with these new technologies, what needs to be looked at, in real time, down to a handful of relevant reports with the KPIs relevant to the business. It is about how IT can anticipate the next wave, answer business questions, and put those capabilities, in real time, right in the hands of the decision makers... This is the turning point where IT really moves from the past decade's "cost center" to "value for the business", as corporate stakeholders will be able to touch the value directly at their fingertips.

It is all about making data-driven strategic decisions, encompassing and enriched by ALL the data, and connected to customer/prosumer influencers. This gives stakeholders the ability to make informed decisions on questions like: "Which Olympic gold medalist would best represent my automotive brand?"... in a few clicks and in real time, based on social media analysis (Twitter, Facebook, Google+...) and connections linked to my enterprise data.

A real example was demonstrated by Larry Ellison in real time during yesterday's keynote, where "Hardware and Software Engineered to Work Together" is not only about extreme performance but also about solutions the business can touch, thanks to well-integrated customer experience management and social networking: giving IT the capabilities to move to the next IT architecture wave.

This was also illustrated today in two other sessions that I had the opportunity to attend. The first session turned the "Internet of Things" in oil & gas into actionable decisions thanks to complex event processing capturing sensor data, with a ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets and Exalytics to provide the informed-decision interface up to the end user. The second session showed the Real-Time Decisions engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet.

I have to close my post here, as I have to go run our practical hands-on lab, prepared with Olivier Canonge, Christophe Pauliat and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud announced last Sunday by Larry and developed through many examples this morning by John Fowler. John also announced Solaris 11.1 today, with a range of network innovations and virtualization at the OS level, as well as many optimizations for applications, such as for Oracle RAC, with the introduction of the lock manager inside the Solaris kernel. Last but not least, he introduced the Xsigo datacenter fabric for highly simplified network and storage virtualization for your cloud infrastructure.

Hoping you will get ready to jump on the next wave, we are here to help...

Monday, Oct. 01, 2012

#OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

The title of this post is a summary of the 4 announcements made by Larry Ellison today, during the opening session of Oracle Open World 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.

Oracle IaaS goes Public... and Private...

Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standards, but also the underlying PaaS required to build the specifics and the required interconnections between applications, inside and outside the cloud. Still, to cover the end-to-end cloud services spectrum, we had to provide Infrastructure as a Service, leveraging our server, storage, OS, and virtualization technologies, all "engineered together".

This cloud infrastructure was already available for our customers to rapidly build their own private cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today takes that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the "as a service" value of the cloud but don't want their data out in the cloud, we propose to operate the same systems, Exadata, Exalogic & SuperCluster, that power our public cloud infrastructure, behind their firewall, in a private cloud model.

Oracle 12c Multitenant Database

This is another major announcement made today, about what's coming with Oracle Database 12c: the ability to consolidate multiple databases with little additional cost, especially in terms of memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a container database, which "owns" the memory and the database background processes, and pluggable databases inside this container database. This particular feature is a strong, compelling reason to evaluate Oracle Database 12c quickly once it is available, as it is a major step forward toward true database consolidation with multitenancy on a shared (optimized) infrastructure.
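To make the container/pluggable model concrete, here is a minimal sketch of the corresponding 12c SQL, run from the container database; the PDB name, admin user, password and file paths are hypothetical:

sqlplus / as sysdba <<EOF
-- Create a new pluggable database inside the container database
CREATE PLUGGABLE DATABASE sales_pdb
  ADMIN USER pdb_admin IDENTIFIED BY "ChangeMe_42"
  FILE_NAME_CONVERT = ('/oradata/cdb1/pdbseed/', '/oradata/cdb1/sales_pdb/');
-- Open it so applications can connect to it like a regular database
ALTER PLUGGABLE DATABASE sales_pdb OPEN;
EOF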

X3H2M2, enabling the new Exadata X3 in-Memory Database

Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new Exadata X3 machine, but since it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. And that's not the only thing we did with X3; at the same time we upgraded everything:

  • the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge),
  • the memory, now 256 GB per node as well,
  • and the new Flash Fire cards, now bringing up to 22 TB of flash cache.

All of this, 4 TB of RAM + 22 TB of flash, is used cleverly by the X3H2M2 algorithm not only for reads but also for writes... making a very big difference compared to traditional storage flash extensions.

But what do these extra performance gains bring you on an already very efficient system? Double the performance compared to the fastest storage array on the market today (including flash) while dividing your storage price by 10 at the same time... Something to consider closely these days... Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point.

As you have seen, a major opening again this year, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And Andrew Mendelsohn, Senior Vice President of Database Server Technologies, came on stage to explain that the next step after I/O optimization for the database with Exadata was to accelerate the database at execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... the big theme of the day... and of the Oracle user group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical Big Data use cases, one on finance and fraud profiling and the other on a practical deployment of Oracle Exalytics for data analytics.

In conclusion, one picture to try to convey the scale of Oracle Open World

... and you can understand why, with such rich content... and this is only the first day!

Monday, Sept. 03, 2012

Oracle Open World 2012 preview: mark your calendars

Now less than a month away from Oracle's major event, held as every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development) to give you a foretaste.

Oracle Strategy and Roadmaps

Of course, beyond the plenary sessions that will give you a precise view of the strategy, and for those who will be on site, I encourage you not to miss the deep-dive sessions taking place during the week, of which here are a few selected highlights:

Experience feedback and testimonials

While Oracle Open World is an opportunity to interact directly with Oracle's development teams, it is also an opportunity to exchange with customers and experts who have implemented our technologies and to benefit from their experience, for example:

Exchanges with user groups and Oracle development teams

If you plan to arrive early enough, you will also be able to meet with the user groups as early as Sunday, or every evening with the Oracle development teams, on topics such as:

Test and evaluate the solutions

And finally, you can even test the technologies through the Oracle DemoGrounds (1133 Moscone South for the Oracle systems, OS and virtualization part) and the hands-on labs, such as:

In conclusion, a very rich week ahead, which will let you cover all the topics at the heart of your concerns, from strategy to implementation... This week needs preparation, to tailor your agenda, across the more than 2,000 sessions of which I have given you only an extract and which you can find in full online.

Friday, May 25, 2012

Oracle Systems Strategy

For this post, the written word gives way to the spoken word. I am taking advantage of an interview given last month at the Mêlée Numérique event in Toulouse to give you a summary of our systems development strategy at Oracle.

ITW Eric Bezille - Mêlée Numérique 2012 by lamelee

Wednesday, Jan. 11, 2012

Big Data: a business opportunity and a (new) challenge for the IT department?


Having taken part in a few conferences on this theme, here are some thoughts to start 2012 on the topic of the moment...

Big Data: Business Opportunities

As a McKinsey study ("Big Data: The next frontier for innovation, competition, and productivity") points out, mastering data (in all its diversity) and the ability to analyze it have a strong impact on what IT can bring to the business to find new areas of competitiveness. To cite just two examples, McKinsey estimates that exploiting Big Data could save more than €250 billion across the entire European public sector (fraud detection, management and measurement of the effectiveness of subsidy allocations and investment plans,...). As for the commercial sector, simply using geolocation data could generate a global surplus of $600 billion, an opportunity illustrated by Jean-Pierre Dijcks in his blog post: "Understanding a Big Data Implementation and its Components".

Volume, Velocity, Variety...

Le "Big Data" est souvent caractérisé par ces 3x V :

  • Volume: for some, Big Data starts at the threshold where the volume of data becomes difficult to manage in a relational database solution. However, technological advances keep pushing that threshold further and further without calling IT standards into question (cf. Exadata), which is why the volume aspect alone is not enough to characterize a "Big Data" approach.
  • Velocity: Big Data therefore also requires a strong temporal dimension associated with large volumes, that is, being able to capture a moving mass of data in order to either react almost in real time to an event or revisit it later from another angle.
  • Variety: Big Data addresses structured data, but not only that. The essential objective is precisely to find added value in all the data accessible to a company. And in the age of digital, dematerialization, social networks, data feed providers, machine-to-machine and geolocation,... the variety of accessible data is large, constantly evolving (who will be the next Twitter or Facebook, Google+?) and rarely structured.


...Visualization and Value

To these three Vs that generally characterize "Big Data", I would add two more: visualization and value!

Visualization, because faced with this volume of data, its variety and its velocity, it is essential to have the means to navigate within this mass, to extract information and value from it (quickly and simply), in order to find what you are looking for but also... to benefit from an interesting asset arising from the diversity of unstructured data coupled with the company's structured data: serendipity, or finding what you were not looking for (the hallmark of many innovations)!

The opportunities for the business obviously lie in the last two Vs: knowing how to visualize the useful information in order to derive business value from it...

A (new) challenge for the IT department

The challenge for the IT department lies in the overall value chain: knowing how to acquire and store a large volume of varied, moving data, and being able to provide the elements (tools) for the business to extract meaning and value from it. In order to process this (unstructured) data, it is necessary to implement technologies that complement the solutions already in place for managing companies' structured data. These new technologies originally came from the R&D centers of the internet giants, who were the first to be confronted with these masses of unstructured information. The challenge today is to bring these solutions into the enterprise in an industrialized way, with both mastery of the integration of all the components (hardware and software) and their support, across the three fundamental stages that make up a Big Data value chain: Acquire, Organize and Distribute.

  1. Acquire: once the data sources have been identified (with the business), you need to be able to store them at low cost, with strong scalability (given the volumes involved and the speed of growth), for the purpose of extracting information. A scalable storage grid must be deployed, along the lines of the Exadata model. The reference in this area for grid storage of unstructured data for processing purposes is HDFS (Hadoop Distributed File System), this file system being directly tied to the extraction algorithms, which allow the operation to be performed right where the data is stored.

  2. Organize: associate a first level of {key, value} indexing on this unstructured data with NoSQL (for Not Only SQL). The benefit here, compared to a classic SQL model, is being able to handle variety (no model predefined in advance), velocity and volume. Indeed, the particularity of NoSQL is to process data on a CRUD model (Create, Read, Update, Delete) rather than ACID (Atomicity, Consistency, Isolation, Durability), with its advantages in speed (no need to fit the data into a structured model) and its drawbacks (accepting, for reasons of ingestion capacity, that you may end up reading "stale" data, among others). And then also being able to extract information through MapReduce operations performed directly on the grid of unstructured data (to avoid moving the data to processing nodes); a short command-line sketch of these first two steps follows after this list.

    The information extracted from this grid of unstructured data becomes part of the company's assets and has its rightful place among the structured, and therefore reliable, "high-density" information. That is why extracting information from unstructured data also requires a gateway to the company's data warehouse to enrich the repository. This gateway must be able to absorb large volumes of information in very short times.

    These first two stages have been industrialized, on both the hardware side (storage grid/cluster) and the software side (HDFS, Hadoop MapReduce, NoSQL, Oracle Loader for Hadoop), within Oracle's engineered system, the Oracle Big Data Appliance, while the structured data repository can itself be implemented on Exadata.

  3. Distribute: the last stage consists of making the information available to the business and letting them get the essence out of it: analyze and visualize. The challenge is to provide the ability to do dynamic analysis on a large volume of data (BI cubes), with the ability to visualize it simply across several facets.

    A first level of analysis can be done directly on the unstructured data through the R language, directly on the Big Data Appliance.

    The value also lies in the aggregated view within the repository enriched by the extraction, directly through Exadata for example... or via a true dynamic business dashboard that interfaces with the repository and makes it possible to analyze very large volumes directly in memory with multi-faceted visualization mechanisms, not only to find what you are looking for but also to discover what you were not looking for (back to serendipity...). This is done thanks to the (visual) identification of lines of inquiry that users had not necessarily anticipated at the outset.

    This last stage is industrialized through the Exalytics solution, illustrated in the video below in the automotive world, where you will see a demonstration dynamically manipulating worldwide car sales data over a 10-year period, i.e. roughly 1 billion records and 2 TB of data manipulated in memory (thanks to embedded compression technologies).
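Going back to the acquire and organize steps above, here is a minimal command-line sketch of loading raw data into HDFS and running a MapReduce job on it; the paths, file names and example jar location are hypothetical and will vary with your Hadoop distribution:

# Load raw, unstructured files into the HDFS storage grid
hdfs dfs -mkdir -p /data/raw/weblogs
hdfs dfs -put weblogs-2012-01.log /data/raw/weblogs/

# Run a MapReduce job directly on the grid, where the data sits
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /data/raw/weblogs /data/out/weblogs-wordcount

# Inspect the extracted results
hdfs dfs -cat /data/out/weblogs-wordcount/part-r-00000 | head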

HSM (Hierarchical Storage Management) and Big Data

To complete the setup of a "Big Data" ecosystem within the IT department, there remains a fundamental point that is often omitted: securing and archiving the unstructured data. The objective is to be able to archive/back up the unstructured data for possible replay, and to cope with volume growth by storing it on an appropriate medium according to its "freshness". Indeed, a Hadoop-style grid bases its data protection on duplication, but if a piece of data is corrupted, its copies are too. Moreover, this grid is there to allow processing of the data at a given point in time (velocity); once that processing is done, the data on the grid is often replaced by more recent data (see the example "Understanding a Big Data Implementation and its Components", which covers the use case of data tied to a temporal context). In some use cases, it can be interesting to revisit previously captured data later from another analysis angle, or for verification purposes, and in all cases to be able to restore after a corruption incident. This is where coupling with a hierarchical storage management (HSM) solution is essential, for the initial capture of unstructured data and for archiving it at low cost given the volumes involved. This is what we cover with our Storage Archive Manager (SAM) solution, which is in fact used in a French "Big Data" project to archive 1 PB of unstructured data.

To go further:

Monday, Oct. 10, 2011

Oracle Open World 2011: Very Big (again)!


Oracle Open World continues to break records, both in attendance, with more than 45,000 people, and in content, with more than 2,000 sessions, not to mention the major announcements made this year, which I will come back to in this post, day by day.

Day one: Engineered Systems

The event was launched with a 100% hardware keynote around Engineered Systems, with a reminder of the success of Exadata and Exalogic and of the reasons why: massively parallel at every level, plus data compression to be able to move a lot of data much faster than a traditional architecture, all based on an InfiniBand core... A design pushed all the way to the processor with the T4, and Solaris on the operating system side, resulting in a new engineered system, the SuperCluster, to offer the most suitable (integrated) solution for the applications that matter to the enterprise (Java/database). As for the geometric computing part, that will be the next step...

""Cette première "key notes" s'est conclue toujours sur du "Hardware and Software Engineered to work together", pour délivrer des résultats "plus vite que la pensée" avec l'Exalytics, qui s'interface de préférence avec un Exadata, mais pas seulement... pourquoi pas votre SAP, pour en tirer des analyses très rapide sur une volumétrie de données importante, que l'on arrive à faire tenir dans 1 TB de RAM, grâce à des technologies de... [Read more][Read More]

Sunday, Oct. 02, 2011

Oracle Open World - Hands-on Lab: Configuring ASM and ACFS on Solaris - Part 2

Oracle Open World - Hands-on Lab - Participant Guide

Content and Goal

"Oracle Automatic Storage Management gives database administrators a storage management interface that is consistent across all server and storage platforms and is purpose-built for Oracle Database.

Oracle Automatic Storage Management Cluster File System is a general-purpose file system for single-node and cluster configurations. It supports advanced data services such as tagging and encryption.

This hands-on lab shows how to configure Oracle Automatic Storage Management and Oracle Automatic Storage Management Cluster File System for installation of an Oracle Database instance on Oracle Solaris 10 8/11.

You'll learn how to install the software, build Oracle Automatic Storage Management volumes, and configure and mount Oracle Automatic Storage Management Cluster File System file systems."


This tutorial covers the installation of Oracle Grid Infrastructure for a standalone server. In Oracle 11g Release 2, the Grid Infrastructure contains, among other software:

  • Automatic Storage Management (ASM)

  • ASM Dynamic Volume Manager (ADVM)

  • ASM Cluster File System (ACFS)

This lab is divided into 4 exercises.

Exercise 1: We install the ASM binaries and grid infrastructure. As part of the install we create a diskgroup of three disks called DATA. DATA will later be used to store the database data files.

Exercise 2: We use ASM Configuration Assistant (ASMCA) to create a second diskgroup called MYDG. From MYDG we create an ADVM volume called MYVOL, and from that we create an ACFS file system called u02.

Exercise 3: We use the installer to install the Oracle database binaries into our new ACFS filesystem (u02).

Exercise 4: We then use the database configuration assistant to create a database with the tablespaces populating the DATA ASM diskgroup.

In our setup we use "External" redundancy for our disks. This implies ... [Read more]
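For reference, here is a minimal sketch of the core ASM/ADVM/ACFS steps the exercises walk through, once the Grid Infrastructure is installed; the disk paths and the volume device name are hypothetical and will differ on your system:

sqlplus / as sysasm <<EOF
-- Exercise 2 equivalent: create the MYDG diskgroup with external redundancy
CREATE DISKGROUP MYDG EXTERNAL REDUNDANCY DISK '/dev/rdsk/c2t4d0s6', '/dev/rdsk/c2t5d0s6';
EOF

# Create an ADVM volume in MYDG, then an ACFS file system on it, and mount it on /u02
asmcmd volcreate -G MYDG -s 10G MYVOL
asmcmd volinfo -G MYDG MYVOL          # shows the volume device, e.g. /dev/asm/myvol-123
mkfs -F acfs /dev/asm/myvol-123
mkdir -p /u02
mount -F acfs /dev/asm/myvol-123 /u02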
