Tuesday, Oct. 27, 2015

#OOW15 Building High Performing Datacenters and Moving to the Cloud

To open Oracle Open World 2015 this year, Intel CEO Brian Krzanich was on stage to share his vision and what Intel is building with Oracle to enable the transformation to the cloud. This starts by building high-performing datacenters that:

  • Make it easy to adopt cloud technology
  • Make it perform and deliver a real ROI
  • Make it compelling to deliver actionable insight
  • Make it secure

In line with Software Defined Everything (see my previous entry on this topic) and Oracle's Engineered Systems strategy to "Make it Easy, Performant, Compelling and Secure" (see also the "Performance Study: Big Data Appliance compared with DIY Hadoop" and the picture below, extracted from the JavaOne keynote), Intel is also working with Oracle on Project Apollo to provide enterprise-grade availability and scalability.

The goal of the project is to improve the massive deployment of virtual environments, making better use of resources (scalability) and removing the variance in workload behavior to provide a sustainable SLA (availability). Of course, this is done at both the hardware and software levels. As reported by Intel's CEO, at the current state of the project, Intel and Oracle have been able to improve resource usage by 50% and reduce variance by an order of magnitude, from 30% to 3%. He also touched on innovations Intel is working on around SSDs and DIMMs, stating that the "separation between memory and storage will completely transform performance in the years to come".
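As a hedged illustration (the metric and the sample numbers below are my assumption, not from the keynote), workload "variance" here can be read as the coefficient of variation of response times, i.e. how widely latencies spread around their mean:

```java
// Illustrative sketch only: interpreting "variance" as the coefficient of
// variation (stddev / mean) of a workload's response times. The sample
// latencies are invented for the example, not keynote data.
public class WorkloadVariance {
    static double coefficientOfVariation(double[] samples) {
        double mean = 0;
        for (double s : samples) mean += s;
        mean /= samples.length;
        double var = 0;
        for (double s : samples) var += (s - mean) * (s - mean);
        var /= samples.length;
        return Math.sqrt(var) / mean;
    }

    public static void main(String[] args) {
        double[] noisy  = {70, 100, 130, 85, 115};  // widely spread latencies
        double[] steady = {98, 100, 102, 99, 101};  // consolidated, stable SLA
        System.out.printf("noisy:  %.0f%%%n", 100 * coefficientOfVariation(noisy));
        System.out.printf("steady: %.0f%%%n", 100 * coefficientOfVariation(steady));
    }
}
```

The lower the ratio, the more predictable the response time, and the tighter the SLA that can be sustained on shared infrastructure.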

This Intel introduction was fully in line with Larry Ellison's keynote on what Oracle is doing to enable the move to the Cloud: a new era of utility computing. As Larry said, we are designing our Cloud with 6 goals:

  • Cost: lowest acquisition price and lowest total cost of ownership
  • Reliability: fault tolerant with no SPOF (single point of failure) - as "we need to be as reliable as your utility company"
  • Performance: fastest database, middleware, analytics,... from batch to real-time
  • Standards: in the early days of cloud, there were no standards, only pioneers. But people have huge investments on premises that they want to move to the cloud. So we need to implement the same standards in the cloud: SQL, Hadoop, NoSQL, Java, Ruby, Node.js, Linux, Docker...
  • Compatibility: easy move of workloads between on-premise and cloud
  • Security: always on continuous defense against cyber attacks

As Larry said, "security should always be on and should be item number one, as all our data goes online!"

To provide the capabilities that enable your transformation to the Cloud, with the choice to run on-premises or in the cloud through a single pane of glass for management, major announcements were made today.

We enriched our SaaS offering today with Manufacturing and E-Commerce, also extending the mobile consumer interface of our applications and smoothing the learning curve.

We also announced major enhancements to the platform, both for the cloud and for big data analytics. For the cloud, with extended multitenant capabilities at both the Oracle Database level (with up to 4096 Pluggable Databases per Container) and the Java server level (improving the consolidation ratio by 3x). For big data analytics, first with in-memory offloading capabilities to replicated Oracle Database instances, and second by bringing Big Data into the cloud as well, with the ability to deploy Hadoop clusters along with Big Data Preparation, Discovery and Visualization services. And of course, all of this with reliability in mind, adding Oracle RAC in the Cloud, Exadata Service in the Cloud, as well as fault-tolerant Java server deployments across different locations, enabling zero planned and unplanned downtime.

As Larry said, more announcements are to come, especially around security. So stay tuned for the coming days of Oracle Open World 2015.

Monday, Sept. 29, 2014

#OOW14 kick-off: Cloud, Big Data & Innovation

Many announcements were made today by Larry Ellison during his kick-off session of Oracle Open World. As an introduction to his keynote, Renée J. James, Intel President, came to share Intel's vision and the co-development work we are doing together.

Intel's vision relies on 3 pillars: (Big) Data, Cloud and Security.


It came as no surprise that the goal we share with Intel is to provide meaningful insights from the flow of data. To illustrate what we already deliver today in this space, Intel presented two customer case studies, both leveraging the power of Exadata technology and moving IT into the core business, no longer a pure support function. And Balaji Yelamanchili, Senior Vice President, Product Development, Oracle Business Analytics, came to explain the joint work we did with Intel to design a workload-optimized platform, leveraged by Exalytics. Oracle's new x86 servers, X4-4 (used in Exalytics) and X4-8, are the first and only systems based on the Intel Xeon Processor E7-8895 v2. In conjunction with specialized Oracle software, those systems provide unique capabilities that allow customers to dynamically address different workloads in real time. In the case of Exalytics, this means many cores to run highly parallel transactional workloads, and fewer, faster cores to run batch processes.


The other major element for IT to be a business enabler relies on its flexibility, and a cloud architecture is a key enabler of this flexibility. Private clouds are growing faster than public clouds, due to compliance, security and SLA concerns ("where your workloads work best"). The good news is that, by applying proper design patterns, Intel's experience shows that private clouds remain competitive with public clouds over time.


Of course, in this virtual world, security is paramount and even more complex to achieve when virtual machines talk through virtual networks and move around from server to server, making it a real challenge for firewall rules to follow... That's why Intel moved its Next Generation Firewall into Oracle VM, ensuring security intrusion protection at the VM level and keeping firewall rules consistent when VMs move around. Again, another example of collaboration between Intel and Oracle to enable a truly secure virtual architecture, one of the key elements of flexible cloud architectures.

Following Renée J. James, Larry Ellison set the agenda on Software as a Service, Platform as a Service and Infrastructure as a Service, enabled by innovation from engineered systems, servers and storage down to silicon.

SaaS: lots more enterprise SaaS applications

With a clear build-and-buy strategy, we are unique in the market in offering a full range of cloud applications covering all 3 suites: Customer Experience, Human Capital Management, and Enterprise Resource Planning. We keep extending our capabilities to better answer our customers' needs, with the recent acquisition of BlueKai, delivering Data as a Service for a better customer experience, or by developing engineered sales campaigns to help sales forces sell more.

Oracle Cloud Platform: easy to move existing application to the cloud

The foundation of our Platform as a Service is based on Oracle Database Cloud Service and WebLogic Java Cloud Service.
Anything built on top of our cloud platform will be multi-tenant, social, mobile, secured, with in-memory high-speed analytics.

Those foundations are the same ones we used to build our own applications. These are the same standards that we provide to extend our SaaS applications through this platform, and that are currently in use by other cloud providers to build their own SaaS applications. Standards still matter, as do security and reliability.

One major announcement made today by Larry Ellison was to stick to our promise of upward compatibility for the Oracle database: from mainframe to client-server, from client-server to Internet thin client, and now from Internet thin client to the Cloud. Move to the cloud, move back: no change to code!

Oracle Cloud Infrastructure:  secure, reliable, lowest cost

Our goal is to be competitive in this space, with currently more than 30,000 servers and 400 PB of storage already in use by our customers.

Innovation from Engineered Systems, Servers, Storage to silicon

As we all agree that the world will be hybrid, between public and private clouds, we are keeping the innovation at the systems level not only for our own Cloud, but also to enable our customers IT transformation.

With our Engineered Systems, we leverage a uniform configuration approach, which benefits our customers in all 3 phases: Design, Build and Run. This seems obvious for the Design and Build phases, as we take care of them at Oracle. For the Run phase, thanks to the uniform configuration, when a bug is found and fixed, it is fixed for all our customers. One of the latest additions to the Oracle Engineered Systems family is the Oracle Virtual Compute Appliance, which provides a highly reliable compute and storage appliance, including network virtualization, for the lowest possible price. Two announcements were made today in the Engineered Systems family: the Zero Data Loss Recovery Appliance, which secures all Oracle Databases from 10g, 11g and 12c thanks to real-time redo transport, and the arrival of Oracle Database 12c In-Memory inside Exalytics for even faster in-memory analytics.

Larry Ellison also announced the arrival of a full SAN array from Oracle with massive scale-out capability and unmatched performance in this area: the Oracle FS1 Flash Storage System. This storage is engineered to get the best out of a combination of flash and disks, for the lowest price and highest performance at a given SLA.

To close his keynote, Larry answered a question from Intel's President: "why is Oracle doing the M7?". For performance, of course, but also for security, security and security, as security is becoming the most critical topic, especially on flexible, shared cloud infrastructures. On the performance side, the M7 microprocessor's Software in Silicon brings database acceleration inside the silicon, with features like an inline decompression engine running at memory speed. On the security side, we will deliver hardware-based memory protection with the M7. This will stop malicious programs from accessing other applications' memory, resulting in more secure and more highly available applications... and, as a nice side effect, it greatly speeds up software development, thanks to easier code debugging enabling direct root-cause analysis of memory corruption errors.

Stay tuned for what's coming in the next days of Oracle Open World 2014... 

Tuesday, Feb. 17, 2009

OpenSolaris and the Intel Xeon Nehalem Processor


When we build a partnership, as in the case of Intel, it is not to be a mere reseller, but to innovate together and bring the best of both companies to our customers.
In that respect, I have already talked about the optimized engineering of our x86/x64 systems, but our collaboration goes well beyond that... Solaris (and OpenSolaris) is one of the major reasons for the partnership agreement that binds us to Intel. Thanks to its stability and its ability to exploit multiprocessor and multi-core systems, Solaris provides advanced functions... Functions that Intel leverages in its new multi-core architecture, Nehalem, to:

  • exploit a large number of threads, thanks to Solaris's optimized dispatcher
  • take advantage of NUMA architectures, with the "Memory Placement Optimization" (MPO) feature
  • manage power consumption, through the Tesla project
  • optimize the performance of virtual machines, by collaborating on the xVM Server virtualization project
  • integrate the new instruction sets into the Solaris tools (Studio, ...) to take advantage of the processor's new hardware functions (XML instructions, loop CPC, counters...)

(note: that's not me in the video, but David Stewart, Software Engineering Manager at Intel)

All these integrations are available in the OpenSolaris 2008.11 and Solaris 10 Update 6 distributions.

On top of this comes the optimization of the software layers for multi-core architectures.
Sun provides open-source software recompiled for these architectures, through the CoolStack distributions.
This software is available on x86/x64 architectures, but also on SPARC. Because, let's not forget, Sun still has a significant lead in multi-core technologies. As early as 2005, we launched an 8-core SPARC CMT processor (CMT for Chip Multi-Threading) with 4 threads per core, that is, 32 hardware execution threads. This raised a number of challenges at the operating-system level, challenges which today allow Solaris to excel on this type of architecture. We are now at the 3rd generation of this processor (yes, one per year, not bad for the processor world), which supports 8 cores, 8 threads per core and systems with up to 4 processors (256 hardware execution threads!).
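The hardware thread counts quoted above are simply the product of sockets, cores per socket and threads per core; a quick sketch to make the arithmetic explicit:

```java
// Hardware execution threads on a CMT system =
//   sockets × cores per socket × threads per core.
public class CmtThreads {
    static int hardwareThreads(int sockets, int coresPerSocket, int threadsPerCore) {
        return sockets * coresPerSocket * threadsPerCore;
    }

    public static void main(String[] args) {
        // 2005 first generation: 1 socket, 8 cores, 4 threads per core.
        System.out.println(hardwareThreads(1, 8, 4));   // 32
        // 3rd generation: up to 4 sockets, 8 cores, 8 threads per core.
        System.out.println(hardwareThreads(4, 8, 8));   // 256
    }
}
```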

Now, the question we often get: when to use x86/x64, and when to use the massively multi-core/multi-threaded SPARC CMT processor?

In a nutshell, the x86/x64 architecture is today more general-purpose, and should be favored when the application is not multi-threaded, and when single-thread performance and the associated response time are the important factors; in short, key for HPC.

Conversely, SPARC CMT is really specialized for:

  • heavily multi-threaded applications (Java is of course one of them, which makes a large number of applications eligible)
  • predictable behavior (even under very heavy transactional loads): no surprises!
  • optimized power consumption (lower frequency = less heat dissipation)
  • high MTBF, thanks to a high level of integration of functions at the processor level (memory manager, I/O, network and cryptography!)
One point not to be overlooked either: the configuration and tuning of software changes in a multi-core environment!
You have to think differently, and sometimes even revisit your application tuning against old habits: JVM tuning for multi-core means revisiting GC settings, Java thread pools...!
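To illustrate that last point, one common multi-core adjustment is sizing Java thread pools from the number of hardware threads the JVM actually sees, rather than from a constant carried over from single-core habits (this sizing policy is only a sketch, not a universal recommendation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        // On a CMT system the JVM reports each hardware thread as a processor,
        // so this can be 32, 64 or 256 rather than the handful of a classic server.
        int hwThreads = Runtime.getRuntime().availableProcessors();

        // Size the worker pool from the hardware the JVM runs on,
        // not from a hard-coded constant.
        ExecutorService pool = Executors.newFixedThreadPool(hwThreads);

        pool.submit(() -> System.out.println("workers: " + hwThreads));
        pool.shutdown();
    }
}
```

The same reasoning applies to GC tuning: parallel collector thread counts also scale with the processor count the JVM detects.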

So, if you want to get the most out of the new multi-core architectures:
  1. select the hardware according to your needs: x86/x64 or SPARC CMT
  2. use the right OS: Solaris or OpenSolaris
  3. use the right software stack
  4. use the right parameters
  5. and get the best price/performance/watt/m² ratio

Note: I have not covered the "high-end" SPARC64 systems here, as they belong to a different class of servers than the x86/x64 and SPARC CMT ones. However, these systems have a role to play in environments requiring vertical application scaling (SMP, for Symmetric Multi-Processing), heavy I/O and a high level of criticality (notably because they provide hot-service capabilities).



Chief Technologist @Oracle writing on architecture and innovation encompassing IT transformation. You can follow me on twitter @ericbezille

