Tuesday, Oct. 27, 2015

#OOW15 Enabling Efficiency and Security at Scale with Software in Silicon

Besides the Cloud, the other big theme of the day was the launch of the M7 servers and SuperCluster. To get a good view of why we are all so excited about them, I invite you to read Chuck Hollis's article: "The Amazing Oracle M7". As reflected in Mark Hurd's keynote, even if it is about Cloud, it is also about the efficiency, scalability and security that power both the Cloud and the radical innovation that customers like GE have to undertake to differentiate themselves. And as Larry Ellison stated in his Sunday opening keynote, the deeper and lower in the stack you manage to go, the greater the efficiency. That is exactly what we have done with "The Amazing Oracle M7".

We started this journey several years ago, and we are delivering it to you today. Yes, you can order it now and have it shipped to you by Christmas, or even before. The nice thing is that we are not the only ones excited about it; ISVs and customers are too... as I was able to see in several meetings I had today at Oracle OpenWorld (if they read this blog, they will recognize themselves).

Not only did we focus on the usual chip-design work; most importantly, we embedded Software in Silicon.

Undertaking the Security Challenge

Security turned on by default! No compromise for your data at rest, on the wire AND in memory! All running at hardware speed!

With the Oracle M7 Software in Silicon features, not only did we improve the cipher algorithms that encrypt your data (transparently), we are also providing a unique capability to secure program memory access: Silicon Secured Memory. This feature locks a program's memory at allocation time, handing the program a "key" (a colored pointer) that is the only one matching its memory segment. This is the end of Heartbleed- or Venom-like attacks. The nice thing about it is that it also protects against programming errors and improves code quality, which matters more and more as code grows increasingly complex, especially when you are talking Big Data in-memory... the upcoming "new normal".
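To make the colored-pointer idea concrete, here is a minimal toy model in Python. It is a sketch of the concept only, not the M7 hardware: the class names and the per-word "color" granularity are illustrative assumptions (the real feature tags memory in hardware and checks on every load/store at full speed).

```python
# Toy model of "colored pointer" checking: each allocation gets a version
# (color) stored both in the pointer and alongside the memory; any access
# whose pointer color does not match the memory color is trapped.
class Memory:
    def __init__(self, size):
        self.data = [0] * size
        self.color = [None] * size   # per-word color tag (illustrative granularity)
        self.next_color = 0

    def alloc(self, base, length):
        self.next_color += 1
        for i in range(base, base + length):
            self.color[i] = self.next_color
        return (base, self.next_color)   # the "colored pointer"

    def load(self, ptr, offset):
        base, color = ptr
        addr = base + offset
        if self.color[addr] != color:    # the check the silicon does for free
            raise MemoryError("color mismatch: out-of-bounds or stale access")
        return self.data[addr]

mem = Memory(64)
buf = mem.alloc(0, 16)       # 16-word buffer, color 1
other = mem.alloc(16, 16)    # adjacent allocation, color 2
mem.load(buf, 5)             # in-bounds: allowed
try:
    mem.load(buf, 20)        # Heartbleed-style over-read into 'other'
except MemoryError as e:
    print("trapped:", e)
```

The over-read that leaked memory in Heartbleed would simply trap here, because the stolen bytes carry a different color than the requesting pointer.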

Undertaking the Efficiency Challenge

Of course, all the improvements we made in core count, cache size, memory bandwidth... benefit any application out of the box. But beyond that, we are embedding SQL in Silicon, which directly benefits Oracle Database 12c In-Memory analytics by a factor of at least 10x: one customer in the beta program experienced an 83x gain running Oracle Database 12c In-Memory on M7 vs. all-Flash. We are even exploring opening the interface to other applications: Spark on SPARC is running in the demo grounds at Oracle OpenWorld with a 6x improvement, as stated by Ganesh Ramamurthy during his keynote.

If you are not convinced yet about the new efficiency, I invite you to read the benchmarks we published today: not only generic benchmarks, but also real enterprise benchmarks across many areas like In-Memory, Hadoop, NoSQL, Graph, Neural Networks with R... Just pick the one you need and it should be there... Oh, and I forgot to mention: of course we ran those benchmarks with security turned on by default.

#OOW15 Cloud Predictions for 2025

Mark Hurd launched the second day of Oracle OpenWorld by presenting his Vision 2025, joined on stage by Jim Fowler (GE CIO) and Mike Brady (AIG CTO), along with video testimonials from the Solarus aircraft business, Avaya and DocuSign: a panel of customers spanning from very large to very small companies.

We are facing a massive change in the industry today, driven by lower worldwide growth and a consumer shift impacting technologies. The current IT model, based on old applications from a pre-internet, pre-mobile, pre-social world, is not sustainable. Combining all those macro- and micro-economic trends explains why Cloud is becoming so popular. It is built on the promise of faster innovation at a lower cost NOW. But making it effective will take time: at least a decade. That's why, at Oracle, we undertook our journey to the Cloud several years ago to prepare this transformation, starting by rewriting our applications for the Cloud and allowing coexistence -between existing on-premises systems and the Cloud- to sustain this decade-long move to the Cloud.

By 2025, Mark Hurd's vision is that:

  1. 80% of applications will be in the cloud
  2. 2 suite providers will own 80% of the SaaS market
  3. 100% of test/dev will be in the cloud
  4. Virtually all enterprise data will be stored in the cloud
  5. The enterprise cloud will be the most secure IT environment

And we are readying ourselves for that transformation.

Jim Fowler, GE CIO, completely reflected this vision. He has set a goal of having 70% of his applications in the Cloud by 2020. For that, he wants to leverage partners who know best in the areas where GE does not add value to the business, and focus on building where GE can differentiate, like analytics software for gas turbines or locomotives. And even for that 30% of in-house intellectual-property development, GE still looks to partners to buy innovation at the infrastructure layer to provide scalability and security. A good fit for the Oracle Engineered Systems strategy of powering both the Cloud and on-premises transformation.

AIG CTO Mike Brady reflected the same movement, underlining the need to link both worlds, as he can't move to the Cloud without connecting the data streams coming from the Enterprise.

And along those lines, Security was THE word expressed in all testimonials.

All in all, those testimonials reinforced not only Oracle's vision for the Cloud but also our investment in full-stack innovation, going deep down to the chip level to deliver the scalability and security needed both in the Cloud and for on-premises transformation and innovation... securely. And higher in the stack, a fully integrated application portfolio spanning CRM, ERP, Marketing and Social Relationship, extensible to your own requirements thanks to an open platform leveraging 100% compatible Java technology that you can run anywhere, with no lock-in. A strategy that Avaya leveraged to move from a 100% proprietary Cloud to the Oracle suite of Cloud applications and Java PaaS, reducing its customizations by 80% and implementing the remaining 20% on a fully open Java PaaS.

To get additional insight into Mark's keynote, I also invite you to read the following article: "Oracle CEO Mark Hurd Lays Out the Future of the Cloud".

Friday, Oct. 23, 2015

Real-Time Financial Risk Management: big data in-memory at scale

If you work in the financial markets and are going to attend Oracle OpenWorld in a few days, you should take a valuable hour of your time to join Antoine Chambille, head of R&D at Quartet FS, for one of his sessions at JavaOne. He will be joined by our Java and ISV Engineering experts to explain how you can compute real-time correlations with Java, to take the right decision.

As a matter of fact, there comes a point where scale-out -the web-search-engine-like design pattern- doesn't work anymore. When you need to correlate very large amounts of data in real time, this becomes a very interesting challenge. This is what Quartet FS has achieved with Oracle technologies, running Java in-memory at very large scale in real time. To learn more and ask the burning questions you should have about leveraging these capabilities in your own context (even outside of the financial markets, in Retail for example), join Antoine Chambille and Oracle Engineering for the following sessions:

As a first preview, and for those not going to JavaOne, you can have a look at the first 22 minutes of the latest Quartet FS User Group technology keynote, about performance.
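The design pattern behind this kind of real-time risk engine can be sketched in a few lines. This is a hedged illustration of the general idea (incremental in-memory aggregation), not Quartet FS's actual engine; the class and field names are invented for the example:

```python
# Each incoming trade updates running aggregates on arrival, so risk
# queries read precomputed values instantly instead of re-scanning history.
from collections import defaultdict

class RiskCube:
    def __init__(self):
        self.exposure = defaultdict(float)   # (desk, currency) -> net exposure

    def on_trade(self, desk, currency, amount):
        # O(1) incremental update per event, independent of history size
        self.exposure[(desk, currency)] += amount

    def query(self, desk, currency):
        return self.exposure[(desk, currency)]

cube = RiskCube()
cube.on_trade("fx", "EUR", 1_000_000)
cube.on_trade("fx", "EUR", -250_000)
print(cube.query("fx", "EUR"))   # 750000.0
```

Scaling this to billions of events while keeping updates and queries consistent is exactly where the in-memory, large-scale Java engineering discussed in the sessions comes in.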

Monday, Sept. 23, 2013

#OOW2013: All your Database in-memory for All your existing applications... on Big Memory Machines

Many announcements were made today by Larry Ellison during his opening of Oracle OpenWorld. To begin with, the America's Cup is still running, as Oracle won today's races. I must admit that seeing those boats racing at such speed and crossing each other within a few meters was really impressive. On the OpenWorld side, it was also very impressive. More people are attending the event this year: 60,000! And in terms of big numbers, we saw very impressive results from the new features and products Larry announced today: the Database 12c In-Memory option, the M6-32 Big Memory Machine, the M6-32 SuperCluster, and the Oracle Database Backup, Logging, Recovery Appliance (yes, I am not joking, that's its real product name!).

Database 12c in-memory option: both row and column in-memory formats for same data/table

This new option will benefit all your existing applications, unchanged. We leverage memory to store both formats at the same time. This enables us to drop all the indexes that are usually necessary to process queries, with a design target of a 100x performance improvement for real-time analytics. As you will see later, we can achieve even more, especially when running on an M6-32 Big Memory Machine. At the same time, the goal was also to improve transaction performance 2x!

The nice thing about this option is that it benefits all your existing applications running on top of Oracle Database 12c: no change required.

On stage, Juan Loaiza did a small demonstration of this new option on a 3-billion-row database representing Wikipedia search queries. On a regular database without this option, after identifying (or guessing) the queries most likely to be run by users, you put appropriate indexes in place (10 to 20 of them); then you can run your query with acceptable performance, in this case 2,005 million rows scanned per second instead of 5 million rows scanned per second. Not too bad... Now, if we replace the required indexes with the new column format stored in-memory, we achieve in this case 7,151 million rows scanned per second! Something people looking into Big Data and real-time decisions will surely want to look at.
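The reason a columnar in-memory copy can replace analytic indexes can be sketched in a few lines. This is a minimal illustration of the layout difference only, not Oracle's implementation; the table, column names and sizes are invented for the example:

```python
# Row format keeps whole records together (good for OLTP: fetch/update one
# record at once). Column format pivots the same data into per-column
# arrays, so an analytic predicate scans one contiguous column instead of
# dragging every full row through the CPU.
rows = [{"id": i, "lang": "en" if i % 3 else "fr", "hits": i % 100}
        for i in range(10_000)]

def row_scan(rows):
    # touches every field of every row to answer one analytic question
    return sum(r["hits"] for r in rows if r["lang"] == "fr")

# The in-memory column store: the same table pivoted column by column.
cols = {"lang": [r["lang"] for r in rows],
        "hits": [r["hits"] for r in rows]}

def column_scan(cols):
    # touches only the two columns the query needs, laid out contiguously
    return sum(h for lang, h in zip(cols["lang"], cols["hits"])
               if lang == "fr")

assert row_scan(rows) == column_scan(cols)   # same answer, different layout
```

Keeping both formats for the same data, as the 12c option does, means transactions keep the row layout they like while analytics scan the column layout, and the analytic indexes (and their maintenance cost) can simply go away.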

The second announcement was a new processor, and a new system associated with it: the M6 chip and the M6-32 Big Memory Machine... available now!

M6-32 Big Memory Machine: Terabyte Scale Computing

This system is compatible with the previous generation of M5 chips, protecting existing investments, and can also host the new M6 processor with its 12 cores and 96 threads. Everything in this system is about terabytes: up to 32 TB of memory, 3 TB/sec of system bandwidth, 1.4 TB/sec of memory bandwidth, 1 TB/sec of I/O bandwidth!

This new machine is also the compute node of the new M6-32 SuperCluster, also announced today.

M6-32 SuperCluster: In-Memory Database & Application System

It is our fastest Database Machine, with big memory for the column store and integrated Exadata storage! Juan Loaiza ran the same Wikipedia search demonstration on this system... not with 3 billion rows, but with 218 billion! The result speaks for itself: 341,072 million rows scanned per second!

With all those critical systems hosting such amounts of data, it is also very important to provide a powerful database backup and restore solution... and that is what the latest appliance announced today is about.

Oracle Database Backup, Logging, Recovery Appliance

Just by reading its name you get nearly all the capabilities this new appliance provides. First, it is specialized in backing up Oracle Databases across ALL your systems running one (Engineered Systems, like the latest M6-32 SuperCluster or Exadata, as well as your regular servers). Second, it also captures all your database logs. So you have not only a backup, but also the deltas between now and your latest backup. This allows you to go back to the exact point you want when recovering your database.
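In practice, the combination of backups plus a continuous log stream is what makes point-in-time recovery possible. A recovery driven from such an appliance could look like the following RMAN sketch; the timestamp and the exact integration with the appliance are illustrative assumptions, but the RMAN point-in-time recovery commands themselves are standard:

```text
RMAN> RUN {
  # restore the datafiles from the most recent backup before the target time
  SET UNTIL TIME "TO_DATE('2013-09-23 09:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  # replay the captured logs forward, stopping at the chosen point
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;
```

The appliance's continuous log capture is what shrinks the gap between "latest backup" and "now": without it, you could only recover to the last backup; with it, you can pick the point in between.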

It can even be coupled with our new Database Backup service on Oracle Public Cloud, for an extra secure copy.

With this new appliance you can now be confident in securing your Oracle Database data.

Building your future datacenter

Today we saw not only the new Oracle Database 12c, enabling in-memory processing for all your applications, but also the associated M6-32 server and the M6-32 SuperCluster Engineered System to run the stack with Big Memory capacity... all secured by the Oracle Database Backup, Logging, Recovery Appliance. All of those innovations contribute to building your datacenter of the future, where everything is engineered to work together at the factory.

Sunday, Sept. 22, 2013

T5-2 for high-performance financial trading

In this post, I will focus on the second testimony, reported by François Napoleoni. Here, the goal was to select the best platform to handle the growth and upgrade of a financial trading application. The comparison was made between:
  1. T5-2, running Oracle VM, Solaris, Sybase and the financial application
  2. x86, running VMware, Red Hat, Sybase and the financial application

The decision criteria were:

  • Simplified architecture
  • Systems performance in real life (what has been tested and measured)
  • Platform stability
  • Single point of support
  • Leverage internal skills
  • Strong security enforced between the different virtual environments
  • ROI of the solution

For those of you who understand French, I will let you listen to the few-minute video below. English readers will find more details in this post.

Oracle VM virtualization on T4-4: architecture and service catalog

Last Tuesday, during the SPARC Showcase event, Jean-François Charpentier, François Napoleoni and Sébastien Flourac delivered very interesting use cases of the latest Oracle systems at work: an Oracle VM virtualization project on T4-4, intensive financial trading on T5-2, and a complete information-system migration to Oracle SuperCluster.

In this post I will start by covering the main points Jean-François Charpentier focused on to build a consolidated T4-4 platform that is effective for his business users.

Oracle VM virtualization on T4-4 Architecture

As is often the case, Mr Charpentier had to deal with an existing environment that needed to be taken into account when building the new virtual platform. First, he had to provide a platform that could consolidate all the existing environments, the main drivers being:

  • total memory requirements of the existing assets
  • multi-tenant capability to share the platform securely between several different networks
  • compliance with strong SLAs

The T4-4 was the best building block to cover them:

  • memory: up to 2 TB
  • multiple network connections: up to 16x PCIe slots
  • Oracle VM built in, with redundant-channel capability and live migration
  • Solaris binary compatibility, enabling easy consolidation of existing environments

Overall Architecture Design

The deployment choice was to set up 2x Oracle VM T4-4 clusters per site, as follows:

To cover his SLA requirements, Mr Charpentier built redundancy not only by providing multiple T4-4 nodes per Oracle VM cluster, but also within Oracle VM itself. At the Oracle VM level, he chose to make the storage and network virtual access layers redundant, as displayed in the following 2 diagrams.

Oracle VM Virtual Multipathing with alternate I/O Domain

Oracle VM Virtual Network Access through IPMP

All of this virtual network layer is linked to different network backbones, thanks to the 16x PCIe slots of the T4-4, as illustrated in the following diagram.
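On the guest side, the redundant network access shown in the diagrams is typically built with IP multipathing. As a hedged sketch only: assuming Solaris 11's ipadm syntax and invented interface names and addresses, an IPMP group over two virtual links could be configured like this:

```shell
# Hypothetical interface names (net0/net1) and address; two virtual NICs,
# each backed by a different physical path, grouped for failover.
ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp ipmp0                 # the IPMP group the apps will use
ipadm add-ipmp -i net0 -i net1 ipmp0    # both links back the same group
ipadm create-addr -T static -a 192.168.10.10/24 ipmp0/v4
```

If one underlying path (or I/O domain) fails, the address stays reachable through the surviving link, which is what lets the platform meet the strong SLAs mentioned above.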

Another option would have been to deploy Oracle Virtual Network, enabling disk and network access with only 2x PCIe slots at the server layer.

Oracle VM on T4-4 Service Catalog

Beyond the architecture choices, which need to comply with strong SLAs, developing a service catalog is also key to moving IT toward a service-provider model. And that is exactly what Jean-François Charpentier put in place, as follows:

By putting this new virtual platform in place with its associated service catalog, Mr Charpentier was able to give his business better agility thanks to easier and faster deployment. This platform has become the standard for all Solaris deployments in his business unit, and they expect to reach 90% to 100% Solaris virtualization by 2014.

Monday, Sept. 09, 2013

SPARC Showcase in Paris: September 17th

I recently posted about preparing for Oracle OpenWorld. A good start, for those of you in Paris on September 17th, would be to come to the SPARC Showcase, where our customers will explain why and how they are leveraging the latest SPARC technologies -T4, T5 and Oracle SuperCluster- for their IT and business benefit:

  • Mr Jean-Marc Jacquot, from Mysis
  • Mr Jean-François Charpentier, Technical Architect at a leader in HR services solutions
  • Mr Sébastien Flourac, Head of Strategy for GRTgaz IT

You can register here.

Thursday, June 13, 2013

A 3-minute video of last month's Oracle Solaris Users Group event

A quick report from last month's Oracle Solaris Users Group event in Paris... in French...

Friday, May 17, 2013

Why OS matters: Solaris Users Group testimony

On Wednesday evening, a month after the launch of the new SPARC T5 & M5 servers in Paris, the French Solaris Users Group got together to get the latest from Oracle experts on SPARC T5 & M5, Oracle Virtual Network, and the new enhancements in Solaris 11.1 for Oracle Database. They also came to share their project experiences and lessons learned leveraging Solaris features: René Garcia Vallina from PSA did a deep dive on ZFS internals and best practices around SAP deployment, and Bruno Philippe explained how he managed to consolidate 100 Solaris servers into 6 thanks to Solaris 11-specific features.

It was very interesting to see all the value an operating system like Solaris can bring. Today, operating systems are often hidden deep in the bottom layers of the IT stack, and we tend to forget that this is a key layer for leveraging all the hardware innovations (new CPU cores, SSD storage, large memory subsystems,...) and exposing them to the application layers (databases, Java application servers,...). Solaris goes even further than most operating systems in performance (I will get back to that point), observability (with DTrace), reliability (predictive self-healing,...) and virtualization (Solaris ZFS, Solaris Zones and Solaris network virtualization, also known as project "Crossbow").

All of those unique features bring even more value and benefit to IT management and operations in a time of cost optimization and efficiency. During this event, that was something we could take away from all the presentations and exchanges.

Solaris and SPARC T5 & M5

As Eric Duminy explained in the introduction of his session on the new SPARC T5 & M5, we are looking at a new paradigm of CPU and system design. Following Moore's law, we are using transistors in completely new ways. It is no longer a race for frequency: to achieve performance gains, you need more. You need to bring application features directly to the CPU and operating-system level. Looking at the SPARC T5, we are talking about a 16-core, 8-threads-per-core processor, with up to 8 sockets and 4 TB of RAM in a SPARC T5-8 server of only 8 rack units! That also means 128 cores and 1,024 threads, and even more for the M5-32: up to 192 cores, 1,536 threads and 32 TB of RAM! That's why the operating system is a key piece that needs to handle such systems efficiently: the ability to scale to that level, to place process threads and their associated memory on the right cores to avoid context switches, and to manage memory so the cores are fed at the right pace... This is all what we have done in Solaris, and even more in Solaris 11.1, to leverage these new SPARC T5 & M5 servers and get the results we announced a month ago at the launch.

Of course we don't stop there. To get the best out of the infrastructure, we design at the CPU, system and Solaris levels to optimize for the application, starting with the database. This is what Karim Berrah covered in his session.

Solaris 11.1 unique optimizations for Oracle Database

Karim first explained the reasoning behind the completely new virtual memory management in Solaris 11.1, something that directly benefits Oracle Database PGA and SGA allocation. You will experience it directly at database startup (twice as fast!). The new virtual memory system also benefits ALL your applications: just look at the mmap() function, which is now 45x faster (it is used to load all shared libraries). Beyond performance, optimizations have been made in security, audit and management. For example, with the upcoming release of Oracle Database, you will be able to dynamically resize your SGA, and DBAs will get greater visibility into datapath performance thanks to a new DTrace table directly available inside the database: a tight integration between Oracle Database and unique Solaris features.

Alain Chereau, one of our performance gurus from the EMEA Oracle Solution Center, provided his insight and expertise. He reminded us that performance is achieved when ALL the layers work well together, and that "your OS choice has an impact on the DB, and vice versa. Something to remember for your critical applications." Alain closed the session with final advice on the best use of SSDs with Oracle Database and Solaris ZFS. In short, SSDs are aligned on 4k blocks. On the Oracle Database side, the redo log can write in 4k blocks; this needs to be specified at redo log creation through the block size setting. On the Solaris side, ZFS knows about SSDs and adapts directly. That's why putting the ZFS secondary cache on SSDs ("readzilla") is a very good idea, and a way to avoid the bad behavior introduced by "blind" storage tiering when combined with ZFS. Just put SSD drives for the ZFS secondary cache directly inside your T5 or M5 servers and you are done. This is an important topic: even if the majority of customers run Oracle Database on ASM in production to get the benefits of the grid and of Oracle RAC reliability and scalability, it may be different for development environments. As a matter of fact, for development systems most customers leverage Solaris ZFS and its compression and practically unlimited clone and snapshot functions.
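For the redo log block size mentioned above, here is what the setting looks like in SQL. The file paths and group number are hypothetical; the BLOCKSIZE clause itself exists in Oracle Database 11.2 and later (note that on disks reporting 512-byte sectors, a 4k redo block size may additionally require an override parameter, so check the documentation for your release):

```sql
-- Create a new redo log group whose records are written in 4k blocks,
-- matching the SSD's native alignment.
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/DB/redo04a.log', '/u02/oradata/DB/redo04b.log')
  SIZE 1G BLOCKSIZE 4096;
```

With the redo block size matched to the SSD's 4k alignment, each log write maps to whole flash pages instead of straddling them.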

This brings me to René's session on SAP on ZFS...

Lessons learned from deploying SAP on ZFS

Clearly one of the most technical sessions of this event. Congratulations to René for a very clear explanation of ZFS allocation mechanisms and policies. I will start with René's conclusion: "Don't follow your ISV's (SAP in this case) recommendations blindly." In fact, PSA was experiencing performance degradation and constant I/O activity even with very little transaction activity on the application side. This was because SAP recommends filling the SAP data filesystem to more than 90%! A very bad idea when you put your data on a copy-on-write (COW) filesystem like ZFS... where I always recommend keeping around 20% free space to allow the COW operations to take place! That is, of course, the new rule for SAP deployments at PSA.

So if you already have ZFS deployed with this rule in place, you don't have to read further; just keep doing it and move on to the next topic... Otherwise, you may currently be facing some performance problems as well. To identify which of your ZFS pools are in this situation, René provided a nice DTrace command that will tell you:

# dtrace -qn 'fbt::zio_gang_tree_issue:entry { @[pid]=count();  }' -c 'sleep 60'

Then, to solve the problem, you understand that you need to add free space, in one shot, to give the COW operations room to work. The best way is to add a vdev (for more details: Oracle Solaris ZFS: A Closer Look at Vdevs and Performance). You could also use zpool replace with a bigger vdev, but that's not the best option in the long run. If you go through a whole modification cycle of the pool's content, your zpool will "defragment" by itself. If you want to "defragment" the ZFS pool immediately and it holds a database, you can do it through "alter table move" operations (special thanks to Alain Chereau for the tip). For standard files, you need to copy them and rename them back or, better, do a zfs send | zfs receive to another free zpool and you are done.
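The two remedies above translate into standard ZFS commands. Pool, dataset and disk names here are hypothetical placeholders; adapt them to your environment:

```shell
# Remedy 1: add a vdev, giving the COW allocator a large chunk of
# contiguous free space in one shot.
zpool add mypool mirror c0t2d0 c0t3d0

# Remedy 2: relocate the data to a roomier pool via send/receive,
# which rewrites (and thus "defragments") everything on arrival.
zfs snapshot mypool/sapdata@move
zfs send mypool/sapdata@move | zfs receive newpool/sapdata
```

Either way, the point is the same: get the pool durably back under the ~80% occupancy rule so gang-block allocation stops.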

From 100 Servers to 6 thanks to Solaris 11

Last but not least, we had another deep-dive session during this event, with a live demo! Thanks to Bruno Philippe, President of the French Solaris Users Group, who shared with us his project of consolidating 100 servers, running Solaris 8 to Solaris 10, into 6 servers with minimal to no business impact allowed! Bruno achieved this thanks to unique new Solaris 11 features: Solaris network virtualization, combined with Solaris Zones P2V and V2V, and the SPARC hardware hypervisor (Oracle VM Server for SPARC, also known as "LDoms", or Logical Domains).

I invite you to visit Bruno's blog for more details: Link Aggregations and VLAN Configurations for your consolidation (Solaris 11 and Solaris Zone)

I am awaiting his next entry explaining the details of the V2V and P2V operations that he demonstrated to us live on his laptop through a Solaris 11 x86 VirtualBox image.
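While waiting for Bruno's write-up, here is a hedged sketch of what a typical Solaris 10 P2V into a branded zone on Solaris 11 involves; host, zone and archive names are invented, and Bruno's actual procedure may differ:

```shell
# On the Solaris 10 source host: capture a flash archive of the system.
flarcreate -n s10host /net/archives/s10host.flar

# On the Solaris 11 target: create a solaris10 branded zone from the
# standard template, then install it from the archive
# (-p preserves the source host's identity, -u would sys-unconfig it).
zonecfg -z s10zone create -t SYSsolaris10
zoneadm -z s10zone install -p -a /net/archives/s10host.flar
zoneadm -z s10zone boot
```

Combined with the network-virtualization setup from Bruno's first post, this is what lets a physical Solaris 10 server land on the consolidated platform with its identity and applications intact.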

I hope to see you at the upcoming Solaris and SPARC events to share your feedback and experience with us.

The upcoming Paris events will take place on June 4th, for Datacenter Virtualization with a focus on storage and network, and on July 4th, for a special session on the new SPARC servers and their business impact.

Wednesday, Apr. 10, 2013

IT Modernization: SPARC Servers Engineering Vice President in Paris

With the complete renewal of the SPARC server line announced 2 weeks ago, Masood Heydari, Senior Vice President of SPARC Servers Engineering, will be in Paris on April 18th to share what the new SPARC T5 and M5 servers bring to the market. Following Masood's keynote, Didier Vionnet, ACCOR Vice-President of Back-office, Bruno Philippe, President of the French Solaris Users Group, Renato Vista, CTO of CAP Gemini Infrastructure Services, Harry Zarrouk, Director of Oracle Systems for France, and I will hold a round table on the benefits those innovations bring to IT modernization and to new business needs. I invite you to register for this event and come exchange with all the speakers.

Tuesday, March 26, 2013

New SPARC Servers Launched Today: Extreme Performance at Exceptionally Low Cost

It is very difficult to summarize in a short post all the details and the already-available customer and ISV results built on Oracle's investment, design and ability to execute on a complete renewal of the SPARC server line, with not one but two processors launched: the SPARC T5 and M5. It is somehow captured in the title of this entry, in Larry's own words: "extreme performance at exceptionally low cost". To give you a quick idea, we just announced 8,552,523 tpmC with one T5-8 (a new 8-socket T5 mid-range server). On top of that, "extreme scalability with extreme reliability": with the M5-32 server, we can scale up to 32 sockets and 32 TB of memory in a mission-critical system.

New way of designing systems

As John Fowler said: "this starts with design". Here at Oracle, we have a new way of designing. Historically, systems were designed by putting servers, storage, network and OS together. At Oracle, we add Database, Middleware and Applications to the design. We think about what it takes for the coherency protocols and the interfaces, and design around those points... and more.
Today we introduce not one but two processors, with a whole family of servers associated with them. Thanks to a common architecture, they are designed to work together. All of this, of course, runs Solaris. You can run Solaris 10 or Solaris 11, and virtualize. No break in binary compatibility.

Direct benefits for your applications... at lowest risk... and lowest cost

This is good for our customers and our ISVs, enabling them to run their applications unchanged on those new platforms with unmatched performance gains, at the lowest cost and lowest risk, thanks to binary compatibility and the new server designs of the Oracle era. There were many customer examples on stage, so I will just pick two: SAS moving from an M9000 to an M5-32 with a 15x overall gain, and Sybase moving from an M5000 to a T5-2 with an 11x overall gain. In my opinion these are very important, as they reflect real applications and customer experiences, many of them in financial services, which have already jumped on these new systems (thanks to the beta program).

To get a better idea of what the new SPARC T5 and M5 will bring to your applications -Siebel, E-Business Suite, JD Edwards, Java or SAP- have a look here at the 17 world records... on performance and price.

Saturday, Sept. 25, 2010

Oracle OpenWorld: BIG!


Gigantic is indeed the word. I am on the plane back from Oracle OpenWorld with Christophe Talvard, and we wanted to give you a few impressions "hot off the press" and one number: 41,000 attendees! You surely haven't missed the many articles on the subject, notably the major announcement of the Exalogic Elastic Cloud solution, which lines up alongside the Exadata offering to cover the application tier very efficiently: 12 times the performance of a traditional grid of application servers. A level of performance that could support the load of Facebook's worldwide traffic on only two configurations! Which in itself demonstrates Oracle's strategy: "Hardware and Software engineered to work together". A strategy that goes well beyond the extraordinary performance gain, and that also aims to simplify the management of all of Exalogic's software and hardware components, with the ability to update them with a single file, pre-tested and validated by Oracle.

With Exalogic and Exadata, all the elements are in place to deploy a public or private Cloud: performance, software and hardware integration, but also fault tolerance, flexibility and scalability.

But that's not all: SPARC and Solaris also had a place of honor, with the presentation of the 5-year roadmap and the announcement of the T3 processor, its 16 cores and a few world records to go with it, as well as the upcoming arrival of Solaris 11, not only in general availability but also as an option within Exalogic and the new version of Exadata. On that note, many sessions sharing Exadata deployment experience were packed, notably those of Jim Duffy and Christien Bilien on the solution deployed at BNP Paribas (see my previous post). Also worth noting: several testimonials on using Exadata for database consolidation, a model that should accelerate with the new X2-8 machine, its high-capacity nodes of 64 cores and 2 TB of RAM, and its ultra-fast Exadata Storage Servers optimized for your structured data. Not to mention the announcement of the new ZFS Storage Appliance line for all your unstructured data and for storing your virtualized environments at the best cost and with maximum protection (triple parity).

All of these hardware and software infrastructures engineered to work together are the foundation of the applications that support your company's business. And in that domain, the announcement of the arrival of Fusion Applications, one of the largest development projects in Oracle's history, is a major one. Fusion Applications gives your business applications (CRM, ERP, HR, ...) a standard foundation rather than a proprietary engine, as was the case until now. And, as we all know, these proprietary engines tied to custom developments are the root cause of the complexity of today's information systems and of their inability to keep up with ever-faster business change. Fusion Applications radically changes the picture: not only does it provide a standard core, it was also designed to decouple specific business needs from that core, so as not to hold back the company's evolution and agility.

In short, we have open technology solutions which, while integrating incrementally into your information system, will revolutionize its operations, with alignment to business needs and unmatched agility. And we are all ready to work alongside you to put them into practice starting today.


Friday, Feb. 19, 2010

Oracle Extreme Performance Data Warehousing


Last Tuesday, Oracle hosted an event on the performance challenges of Data Warehouse environments. For the occasion, Sun was invited to present the infrastructures and solutions that address the ever-growing demands in this area. And BNP Paribas CIB, represented by Jim Duffy, Head of Electronic Market DW, gave a very interesting account of the evolution phases of their financial-flows Data Warehouse, which I will also come back to in this post, talking about infrastructure of course, the key foundation for achieving "Extreme Performance".

The explosion of digital data = a heavy impact on infrastructures

The numbers speak for themselves. We are witnessing an explosion of digital data. From 2006 to 2009, digital data nearly quintupled to reach almost 500 exabytes, and IDC predicts the same growth rate through 2012, i.e. 2,500 exabytes of digital data worldwide (source: IDC, Digital Universe 2007 and 2009).

As a storage vendor and the #1 in data protection, we see this every day at your side. The trend has an impact at several levels:

  • On the ability to store and back up the data

  • On the ability to extract the relevant information from an ever-larger mass of data

  • On the ability to manage the growth of the required compute and storage units while staying "green", i.e. also controlling the impact on power, cooling capacity and floor space in your datacenters

Data Warehouse infrastructure requirements

All of this raises numerous technical challenges for data warehouses, all the more so because this function has become a central, critical one for steering the business.

The first challenge is the ability to grow the entire infrastructure to keep pace with data and user growth. Jim Duffy illustrated this clearly when presenting the evolution phases of the financial-flows analysis project at BNP. After starting with a few tens of gigabytes loaded per day, the trend grew strongly, reaching nearly 500 gigabytes per day over 2010. Thanks to the various options of the Oracle database (partitioning, compression), detailed during this seminar by Bruno Bottereau, Oracle technology presales, BNP was able to keep the data explosion within its Data Warehouse under control. Moreover, given the trend toward a sharp increase in the volume of data to process, the advanced features available in the Sun Oracle Database Machine (Exadata), such as Hybrid Columnar Compression, were essential to evaluate in order to best control this growth. As Jim Duffy explained, the evolution seemed natural and straightforward: staying on Oracle technologies, they validated in a real-world Proof of Concept the ease of moving from the current Oracle RAC 10g solution to Exadata on Oracle RAC 11gR2 in record time, with a significant performance gain.
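
Partitioning and Hybrid Columnar Compression are the two main levers mentioned above for containing data growth. As a minimal illustrative sketch (the table and column names are hypothetical, not from the BNP project), here is the kind of DDL involved, built as a string for execution over JDBC:

```java
// Illustrative sketch only: a range-partitioned fact table with daily
// interval partitioning and Hybrid Columnar Compression (an Exadata
// storage feature). Table and column names are hypothetical.
public class PartitionedTableDdl {

    // Returns the CREATE TABLE statement, ready to pass to a JDBC Statement.
    public static String ddl() {
        return String.join("\n",
            "CREATE TABLE trades (",
            "  trade_id   NUMBER,",
            "  trade_date DATE,",
            "  amount     NUMBER(18,2)",
            ")",
            // Interval partitioning: Oracle 11g creates each new daily
            // partition automatically as data arrives.
            "PARTITION BY RANGE (trade_date)",
            "INTERVAL (NUMTODSINTERVAL(1, 'DAY'))",
            "(PARTITION p0 VALUES LESS THAN (DATE '2010-01-01'))",
            // Hybrid Columnar Compression, warehouse-oriented level.
            "COMPRESS FOR QUERY HIGH");
    }

    public static void main(String[] args) {
        System.out.println(ddl());
    }
}
```

Old daily partitions can then be compressed or dropped independently, which is what keeps a 500 GB/day feed manageable.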

The next challenge is performance, with the need to make intelligent decisions in ever-shorter time frames and over an ever-larger mass of data. This impacts both the processing units and the bandwidth needed to move the data. Jim illustrated this point clearly in his talk, where he aims to run near-real-time analyses (minutes, even seconds!) over the mass of collected data.

With a globalized economy and the need to readjust strategy almost in real time, data warehouses have seen their availability requirements grow considerably. This is in fact what led BNP, at the start of the project, to deploy an Oracle RAC cluster on Solaris x86 to support their data warehouse.

Since data warehouses hold the company's information, security is an essential part of handling the information stored there: who is allowed to access what? What level of protection is in place (encryption, ...)? These functions are of course covered by the Oracle database, but they are also in the DNA of the Solaris operating system: a double advantage.

The solutions must of course be quick to deploy, so that they are not obsolete by the time the data warehouse project is delivered. And naturally they must address the need for an optimized infrastructure cost, in terms of processing power, storage capacity and the associated energy consumption, while covering all the criteria mentioned so far: scalability, performance, availability, security... Finally, by building on open standards at every level, they must make it possible to integrate new technology developments without starting over. In short: they must be flexible.

The Oracle Sun Systems approach

To meet all these needs, Sun's approach has always been to control the development of all the infrastructure components, as well as their integration, in order to design consistent, scalable systems from server to storage, including the operating system... and even up to the application, through reference architectures tested and validated with software vendors, including Oracle! In short: delivering a complete, consistent system, not just a component.

The Sun Oracle Database Machine (Exadata) is a good illustration of this, as a "ready-to-wear" solution. The same philosophy applies to the entire systems line, while also covering "made-to-measure" needs, such as backup.

As an example of a "made-to-measure" solution, here is an illustration of a data warehouse built for one of our customers with very strong requirements on data volume and availability: more than 300 TB of data across the Data Warehouse and the Data Marts.

This implementation relies on three Sun M9000 servers, each able to hold up to 64 quad-core processors (256 cores), up to 4 TB of memory and 244 GB/s of I/O bandwidth: plenty of headroom to grow with confidence. The core of the warehouse runs on one M9000, with the Data Marts spread across the other two. Availability is provided by the M9000 server itself, fully redundant with no single point of failure.

Moving to the new architecture halved the response time of most queries, on ever-growing data volumes. This infrastructure supports more than 1,000 concurrent DW users, and availability has been further improved by the internal redundancy of the M9000 servers and their hot-service capabilities.

In addition, at the entry and mid-range levels, the Oracle Sun T-Series line, although limited to a maximum of 4 processors, offers unique parallel processing capacity through its 8-core/8-thread processor, coupled with I/O and cryptography units integrated into the processor, and holds the record for the number of concurrent Oracle BI EE users on a single server.

Which solution to choose: "made-to-measure" or "ready-to-wear"?

Four major criteria will help you select the server best suited to your needs:

  1. the volume of data to process,
  2. the type of queries,
  3. the expected service level,
  4. the implementation time frame

Do not hesitate to contact us so that we can guide you toward the solution best suited to your needs.


Tuesday, Feb. 17, 2009

OpenSolaris and the Intel Xeon Processor Nehalem


When we set up a partnership, as in the case of Intel, it is not to be a mere reseller, but to innovate together and bring the best of both companies to our customers.
In that spirit, I have already told you about the optimized engineering of our x86/x64 systems, but our collaboration goes well beyond that... Solaris (and OpenSolaris) is one of the major reasons for the partnership agreement that binds us to Intel. Thanks to its stability and its ability to drive multiprocessor, multi-core systems, Solaris provides advanced functions... functions that Intel leverages in its new multi-core architecture, Nehalem, to:

  • run a large number of threads, thanks to the optimized Solaris "Dispatcher"
  • take advantage of NUMA architectures, with the "Memory Placement Optimization" (MPO) function
  • manage power consumption, through the TESLA project
  • optimize virtual machine performance, through collaboration on the xVM Server virtualization project
  • integrate the new instruction sets into the Solaris tools (Studio, ...) to take advantage of the processor's new hardware functions (XML instruction, loop CPC, counters...)
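
From the application side, the benefit of a dispatcher that can keep many hardware threads busy can be sketched in a few lines of Java; a toy example (not Sun or Intel code) that sizes a task pool to the parallelism the JVM reports and fans a summation out across it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch: fan a toy summation out over all the hardware threads
// the OS scheduler exposes to the JVM.
public class FanOut {

    // Sums 1..n by splitting the range into one chunk per hardware thread.
    public static long parallelSum(long n) {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            long chunk = n / threads;
            List<Future<Long>> parts = new ArrayList<>();
            for (int i = 0; i < threads; i++) {
                final long from = i * chunk + 1;
                // The last chunk absorbs any remainder.
                final long to = (i == threads - 1) ? n : (i + 1) * chunk;
                parts.add(pool.submit(() -> {
                    long s = 0;
                    for (long v = from; v <= to; v++) s += v;
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // n(n+1)/2 = 500000500000 for n = 1,000,000
        System.out.println(parallelSum(1_000_000));
    }
}
```

On a CMT or Nehalem box, `availableProcessors()` reports hardware threads, not sockets, so the pool scales with the machine without any code change.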

(note: that is not me in the video, but David Stewart, Software Engineering Manager at Intel)

All of these integrations are available in the OpenSolaris 2008.11 and Solaris 10 Update 6 distributions.

On top of that comes the optimization of the software stacks for multi-core architectures.
Sun provides open-source software recompiled for these architectures, through the CoolStack distributions.
This software is available on x86/x64 architectures, but also on SPARC. Because, let's not forget, Sun still has a significant lead in multi-core technologies. As early as 2005 we launched a SPARC CMT processor (CMT for Chip Multi-Threading) with 8 cores and 4 threads per core, i.e. 32 hardware execution threads. This raised a number of challenges at the operating system level, challenges that today allow Solaris to excel on this type of architecture. We are now at the 3rd generation of this processor (yes, one per year, not bad for the processor world), which supports 8 cores, 8 threads per core, and systems with up to 4 processors (256 hardware execution threads!).

Now, the question we often get: when to use x86/x64, and when to use the massively multi-core/multi-threaded SPARC CMT processor?

In a nutshell, the x86/x64 architecture is today the more general-purpose one, and is to be preferred when the application is not multi-threaded and when single-thread performance and the associated response time are the key factors; in short, it is key for HPC.

Conversely, SPARC CMT is really specialized for:

  • heavily multi-threaded applications (Java among them, of course, which makes a large number of applications eligible)
  • predictable behavior (even under very heavy transactional load): no surprises!
  • optimized power consumption (lower frequency = less heat dissipation)
  • high MTBF, thanks to the extensive integration of functions at the processor level (memory controller, I/O, network and cryptography!)
One point not to be overlooked either: configuring and tuning software in a multi-core environment is different!
You have to think differently, and sometimes even reverse your usual application-tuning habits: JVM tuning for multi-core means revisiting the GC settings, the Java thread pools, and so on!
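
To make the pool-sizing point concrete, here is a sketch of a common rule of thumb from the Java literature (not a Sun-specific formula): size the pool at roughly cores × (1 + wait/compute), so CPU-bound work gets one thread per hardware thread while I/O-bound work oversubscribes. GC tuning (e.g. the number of parallel GC threads) follows the same logic of matching the hardware parallelism.

```java
// Sketch of a common multi-core pool-sizing rule of thumb:
//   pool size ≈ cores * (1 + waitTime / computeTime)
// On a 256-thread CMT machine this gives very different answers than
// the single-digit pool sizes of the single-core era.
public class PoolSizing {

    static int poolSize(int cores, double waitMs, double computeMs) {
        return (int) Math.round(cores * (1 + waitMs / computeMs));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // CPU-bound work (no waiting): one thread per hardware thread.
        System.out.println(poolSize(cores, 0, 10));
        // I/O-heavy work (50 ms wait per 10 ms compute): oversubscribe 6x.
        System.out.println(poolSize(cores, 50, 10));
    }
}
```
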

So if you want to make the most of the new multi-core architectures:
  1. choose the hardware that matches your needs: x86/x64 or SPARC CMT
  2. use the right OS: Solaris or OpenSolaris
  3. use the right software stack
  4. use the right settings
  5. and get the best price/performance/watt/m² ratio

Note: I have not mentioned the "high-end" SPARC64 systems here, as they belong to a different class of servers than the x86/x64 and SPARC CMT machines. These systems nevertheless have a role to play in environments that require vertical application scaling (SMP, Symmetric Multi-Processing), heavy I/O and a high level of criticality (notably because they offer hot-service capabilities).



Eric Bezille-Oracle

