Wednesday, February 26, 2014

Mobile World Congress in Barcelona: IoT, connected cars… and boats

When I entered the exhibition halls, I was wondering whether I was really at the Mobile World Congress or at a worldwide automotive show. Nearly every booth had a car to demonstrate a mobile phone on wheels… But at Oracle, we have a boat! We demonstrate how the 300 sensors embedded in the America’s Cup sailing boat can drive real-time human decisions. Of course, this is just one of our many use cases. I won’t go through the entire list, from communications and connected cars to smart grids, analytics, and Cloud-enabled mobile applications. I will rather focus on one particularly interesting embedded topic, as the trend around IoT (Internet of Things) is huge.

I already touched on this in my last OOW blog. As we are in a (very) small world (i.e. the device can be very small), the BOM (bill of materials) needs to be at the right price and the right energy consumption, and provide a very long life cycle: you don’t want your car to break after one year, to face complex upgrades every 6 months, or to be (physically) connected to your repair shop too often. That’s where Java comes into play, because it provides the right footprint, management, and life cycle. The new thing we are showing today is Java TEE (Trusted Execution Environment) integrated in hardware. This brings security inside the device by providing a secure store for your keys. Security is a major concern in IoT, especially for large industrial projects like connected cars, Smart Grid, smart energy, or even healthcare: you don’t want your devices to be tampered with, whether for 1) safety reasons or 2) fraud. And Java is a very good fit for all IoT use cases, even those with stronger security requirements, which, for example, Gemalto is implementing with it.

To help you get there, Gemalto's Cinterion concept board enables you to quickly prototype your embedded devices and connect them securely (even your dart board)...


On the other side of those devices, there is… you! And you need to make enlightened decisions… That’s where data (and analytics) comes into play. For this part, I invite you to join us in Paris on March 19th for a special event on Data Challenges for Business. Ganesh Ramamurthy, Oracle VP of Software Development in our Engineered Systems group, will be with us to explain what Oracle systems bring to managing and analyzing all your data. He will be joined by Cofely Services - GDF Suez, Bouygues Telecom, and Centre Hospitalier de Douai, who will share their experiences.

Wednesday, September 25, 2013

#OOW2013: Jump into the Cloud...

Today we went into the Cloud, with 3 major announcements delivered by Thomas Kurian: a full Database as a Service, a full Java as a Service, and a full Infrastructure as a Service in Oracle Cloud, guaranteed, backed up, and operated by Oracle, depending on the level of service.

Database as a Service

You will be able to provision inside Oracle Cloud a full Oracle Database (12c or 11g), either single-node or in a highly available RAC cluster. This Database will be accessible over full SQL*Net, with root access. This service will be offered in 3 different models:

Basic Service: pre-configured, automatically installed Database Software, managed by you through Enterprise Manager Express.

Managed Service: Oracle Databases managed by Oracle, including:

  • Quarterly Patching and Upgrades with SLA
  • Automated Backup and Point-in-Time Recovery
  • Elastic Compute and Storage

Maximum Availability Service: Oracle manages a highly available Database, including:

  • Real Application Cluster (RAC)
  • Data Guard for Maximum Availability
  • More flexible upgrade schedule

Of course, you will be able to move your data, or even your entire Database, between your enterprise datacenter and Oracle Cloud by leveraging regular tools like SQL*Loader or Data Pump, for example.
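As a hedged illustration of such a move (connection strings, schema, and file names are hypothetical), a Data Pump round trip could look like this:

# Export the HR schema from the on-premises database:
expdp system@onprem_db schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_exp.log

# ...transfer hr.dmp to the cloud database host, then import it:
impdp system@cloud_db schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_imp.log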

Java as a Service

Following the same model as Database as a Service, you will be able to deploy dedicated WebLogic cluster(s) on our Compute Service. Full WLST, JMX, and root access will be provided as well (a short WLST sketch follows the three service models below). The 3 different service models will be the following:

Basic Service: pre-configured, automatically installed WebLogic software, with a single-node WebLogic Suite (12c or 11g), managed by you using Enterprise Manager.

Managed Service: Oracle manages one or more WebLogic domains, in the same way as the Database as a Service's Managed Service.

Maximum Availability Service: Oracle manages a highly available environment, with the following characteristics:

  • WebLogic cluster integrated with RAC
  • Automated Disaster Recovery and Failover
  • More flexible upgrade schedules
  • Additional staging environment
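Since full WLST access is part of the offer, here is a minimal sketch of checking a provisioned domain (host, credentials, server names, and the environment-setup path are all hypothetical):

# Load the WebLogic environment, then run a short WLST session:
. $MW_HOME/wlserver/server/bin/setWLSEnv.sh
java weblogic.WLST <<'EOF'
connect('weblogic', 'welcome1', 't3://myservice.cloud.example.com:7001')
state('ManagedServer_1')
disconnect()
exit()
EOF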

So now let's have a quick look at the constituents of our Infrastructure as a Service layer.

Infrastructure as a Service

Compute Service: will provide elastic compute capacity in Oracle Cloud, based on 3 different types of requirements: Standard, Compute Intensive, or Memory Intensive. Management will be based on a REST API, with root VM access provided as well. This Compute Service will provide network isolation and elastic IP addresses. And of course, it will be highly available.

Storage Service: will store and manage digital content. Management will be through Java and REST APIs (OpenStack Swift). It has been designed for performance, scalability, and availability.
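Since the Storage Service speaks the OpenStack Swift REST API, a minimal sketch of storing and retrieving an object could look like this (endpoint, account, and token are placeholders):

# Create a container, upload an object, then download it back:
curl -X PUT -H "X-Auth-Token: $TOKEN" https://storage.example.com/v1/myaccount/backups
curl -X PUT -H "X-Auth-Token: $TOKEN" -T db_backup.dmp https://storage.example.com/v1/myaccount/backups/db_backup.dmp
curl -H "X-Auth-Token: $TOKEN" -o db_backup.dmp https://storage.example.com/v1/myaccount/backups/db_backup.dmp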

All these new or enhanced services complement the Oracle Software as a Service offerings that already exist and have been adopted successfully by many of our customers, as shown in several testimonies during Thomas's keynote. This provides a platform for our partners, who leverage our technologies to build their own services in Oracle Cloud. That's why we also created the Oracle Cloud Marketplace, enabling the delivery of our partners' applications, as well as their combination and integration tailored to your specific needs, directly in Oracle Cloud.

Let's Jump into the Cloud....

Monday, September 23, 2013

#OOW2013: All your Database in-memory for All your existing applications... on Big Memory Machines

Many announcements were made today by Larry Ellison during his opening of Oracle OpenWorld. To begin with, the America's Cup is still running, as Oracle won today's races. I must admit that seeing those boats racing at such speed and crossing each other within a few meters was really impressive. On the OpenWorld side, it was also very impressive: more people are attending the event this year, 60,000! And in terms of big numbers, we saw very impressive results from the new features and products announced today by Larry: the Database 12c in-memory option, the M6-32 Big Memory Machine, the M6-32 SuperCluster, and the Oracle Database Backup, Logging, Recovery Appliance (yes, I am not joking, that's its real product name!).

Database 12c in-memory option: both row and column in-memory formats for same data/table

This new option will benefit all your existing applications, unchanged. We leverage memory to store both formats for the same data at the same time. This enables us to drop all the indexes that are usually necessary to process queries, with a design target of a 100x performance improvement for real-time analytics. As you will see later, we can achieve even more, especially when running on an M6-32 Big Memory Machine. At the same time, the goal was also to speed up transactions by 2x!

The nice thing about this option is that it will benefit all your existing applications running on top of Oracle Database 12c: no change required.

On stage, Juan Loaiza did a small demonstration of this new option on a 3 billion row database representing Wikipedia search queries. On a regular database, without this option, after identifying (or guessing) the queries most likely to be run by users, you put appropriate indexes in place (from 10 to 20 of them); then you can run your query with acceptable performance, in this case 2,005 million rows scanned/sec instead of 5 million rows scanned/sec. Not too bad... Now, if we replace those indexes with the new column format stored in-memory, we achieve in this case 7,151 million rows scanned/sec! Something people looking into Big Data and real-time decisions will surely want to look at.
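For reference, here is a hedged sketch of how the option is switched on, using the syntax as later released in Oracle Database 12.1.0.2 (table and index names are hypothetical, and the sizing is illustrative):

# Reserve column-store memory, then mark a table for in-memory population:
sqlplus / as sysdba <<'EOF'
-- Takes effect after an instance restart:
ALTER SYSTEM SET inmemory_size = 16G SCOPE=SPFILE;
EOF
sqlplus demo/demo <<'EOF'
-- Populate this table into the in-memory column store:
ALTER TABLE wiki_queries INMEMORY;
-- Indexes kept only for analytic queries can then be dropped:
DROP INDEX wiki_queries_idx1;
EOF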

The second announcement was a new processor, and a new system associated with it: the M6 chip and the M6-32 Big Memory Machine... available now!

M6-32 Big Memory Machine: Terabyte Scale Computing

This system is compatible with the previous generation of M5 chips, protecting existing investments, and can also host the new M6 processor with 12 cores and 96 threads. Everything in this system is about terabytes: up to 32 TB of memory, 3 TB/sec of system bandwidth, 1.4 TB/sec of memory bandwidth, and 1 TB/sec of I/O bandwidth!

This new machine is also the compute node of the new M6-32 SuperCluster, also announced today.

M6-32 SuperCluster: In-Memory Database & Application System

That's our fastest Database Machine, with big memory for the column store and integrated Exadata storage! Juan Loaiza also ran the same Wikipedia search demonstration on this system... not with 3 billion rows, but with 218 billion rows! The result speaks for itself: 341,072 million rows scanned/sec!

With all those critical systems hosting such amounts of data, it is also very important to provide a powerful database backup and restore solution... And that's what the latest appliance announced today is all about.

Oracle Database Backup, Logging, Recovery Appliance

Just by reading its name, you get nearly all the capabilities this new appliance will provide. First, it is specialized to back up the Oracle Databases of ALL your systems running one (Engineered Systems, like the latest M6-32 SuperCluster or Exadata, as well as your regular servers). Second, it also captures all your database logs. So you have not only a backup but also the deltas between now and your latest backup, allowing you to recover your database to exactly the point in time you want.
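Conceptually, backups plus a continuous log stream are what enable standard RMAN point-in-time recovery; here is a minimal sketch of what that looks like on the database side (the target time is of course an example):

# Restore and recover the database to a chosen instant with RMAN:
rman target / <<'EOF'
STARTUP FORCE MOUNT;
RUN {
  SET UNTIL TIME "TO_DATE('2013-09-25 14:30','YYYY-MM-DD HH24:MI')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
ALTER DATABASE OPEN RESETLOGS;
EOF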

It can even be coupled with our new Database Backup service on Oracle Public Cloud, for an extra secure copy.

With this new appliance you can now be confident in securing your Oracle Database data.

Building your future datacenter

Today we saw not only the new Oracle Database 12c in-memory option for all your applications, but also the associated M6-32 server and the M6-32 SuperCluster Engineered System to run the stack with Big Memory capacity... all secured by the Oracle Database Backup, Logging, Recovery Appliance. All of these innovations contribute to building your datacenter of the future, where everything is engineered to work together at the factory.

Sunday, September 22, 2013

GRTgaz new Information System on Oracle SuperCluster

This testimony from Mr Sébastien Flourac, Head of Strategy for GRTgaz IT, concluded last week's SPARC Showcase event. Mr Flourac highlighted why he selected an Oracle SuperCluster Engineered System over a more traditional build-it-yourself approach, which he also studied.

Due to EEC regulation, GRTgaz, a subsidiary of GDF-Suez, has to be externalized, including of course all its applications and existing IT, in less than 2 years. The current platforms are shared with other GDF-Suez services, which means that GRTgaz has to build an entirely new platform to migrate its existing applications with the lowest associated risks. As a major part of the technologies supporting GRTgaz applications was running on Oracle Database and Oracle WebLogic, on either IBM AIX or SPARC Solaris, GRTgaz took a closer look at what Oracle proposes to simplify running Oracle software on Oracle hardware, compatible with the existing GRTgaz environment. And it became obvious to Mr Flourac that Oracle SuperCluster was the best fit for his project, and for the future, for several reasons.

Simplicity and lower cost

With Oracle Engineered Systems, all the complexity and cost of traditional build-it-yourself solutions has been taken care of at the Oracle Engineering level. All the configurations and setup have been defined and integrated at all levels (software, virtualization, and hardware) to offer the best SLA (performance and availability). This contributed to simplifying the externalization project, and also brought additional benefits at the storage layer for the future.

It was the best financial scenario in their project context.

Lower risks

Not only does the SuperCluster offer the best SLA by design, it also provides a very important feature for this complex application migration: full compatibility with existing Oracle software versions. This was very important for Mr Flourac, to avoid having to both migrate and upgrade in the same project.
It also provides:

  • an integrated stack of Oracle Software and Hardware
  • an upgrade process tested by Oracle
  • better support of the entire stack

Built for the future

Oracle SuperCluster provides GRTgaz with a consolidated, homogeneous, and extremely scalable platform, which not only enables this externalization project but will also be able to host new business requests.

With this new platform in place, Mr Flourac already knows that in the next phases he will be able to leverage additional integrated and unique features that running Oracle software on Oracle SuperCluster provides:

  • Exadata integration and acceleration for Oracle Database starting with 11gR2
  • Exalogic integration and acceleration for Oracle Weblogic starting with 10.3.4

Of course the SuperCluster is a key enabler, but such a project also requires a team to manage the migration, the transition, and the run phase. This is done with the support of Oracle ACS (transition), Fujitsu (migration), and Euriware (run).


T5-2 for high-performance financial trading

In this post, I will focus on the second testimony, reported by François Napoleoni. Here, the goal was to select the best platform to support the growth and upgrade of a financial trading application. The comparison was made between:
  1. T5-2, running Oracle VM, Solaris, Sybase and the financial application
  2. x86, running VMWare, Redhat, Sybase and the financial application

The decision criteria were:

  • Simplified architecture
  • Systems performance in real life (what has been tested and measured)
  • Platform stability
  • Single point of support
  • Leverage internal skills
  • Strong security enforced between the different virtual environments
  • ROI of the solution

For those of you who understand French, I will let you listen to the few-minute video below. For English readers, you will find more details in this post.

Oracle VM virtualization on T4-4: architecture and service catalog

Last Tuesday, during the SPARC Showcase event, Jean-François Charpentier, François Napoleoni, and Sébastien Flourac delivered very interesting use cases of the latest Oracle systems at work: an Oracle VM virtualization project on T4-4, intensive financial trading on T5-2, and a complete Information System migration to Oracle SuperCluster.

In this post, I will start by covering the main points Jean-François Charpentier focused on to build a consolidated T4-4 platform that is effective for his business users.

Oracle VM virtualization on T4-4 Architecture

As is often the case, Mr Charpentier had to deal with an existing environment that needed to be taken into account when building this new virtual platform. First, he had to provide a platform that could consolidate all the existing environments, the main drivers being:

  • total memory requirements of the existing assets
  • multi-tenant capability to share the platform securely between several different networks
  • compliance with strong SLAs

The T4-4 was the best building block to cover them:

  • memory: up to 2 TB
  • multiple network connections: up to 16 PCIe slots
  • Oracle VM built in, with redundant channel capability and live migration
  • Solaris binary compatibility, enabling easy consolidation of existing environments

Overall Architecture Design

The deployment choice was to set up 2 Oracle VM T4-4 clusters per site, as follows:


To cover his SLA requirements, Mr Charpentier built redundancy not only by providing multiple T4-4 nodes per Oracle VM cluster, but also at the Oracle VM layer itself. For Oracle VM, he chose to make the storage and network virtual access layers redundant, as displayed in the following 2 diagrams.

Oracle VM Virtual Multipathing with alternate I/O Domain


Oracle VM Virtual Network Access through IPMP


All of this virtual network layer is linked to different network backbones, thanks to the 16 PCIe slots of the T4-4, as illustrated in the following diagram.


Another option could have been to deploy Oracle Virtual Network, enabling disk and network access with only 2 PCIe slots at the server layer.
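For illustration, here is a hedged sketch of the two redundancy mechanisms above, with hypothetical device, domain, and guest names: a virtual disk served by both the primary and an alternate I/O domain via an mpgroup, and a guest-side IPMP group across two virtual NICs:

# Disk multipathing: export the same LUN from two virtual disk services
# (primary and alternate I/O domain) under a single mpgroup:
ldm add-vdsdev mpgroup=mpg1 /dev/dsk/c0t5000CCA012345678d0s2 vol1@primary-vds0
ldm add-vdsdev mpgroup=mpg1 /dev/dsk/c0t5000CCA012345678d0s2 vol1@alternate-vds0
ldm add-vdisk disk1 vol1@primary-vds0 myguest

# Inside the guest domain: group two virtual NICs into an IPMP interface:
ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4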

Oracle VM on T4-4 Service Catalog

Besides the architecture choice, which needs to comply with strong SLAs, the development of a service catalog is also key to moving IT toward a service-provider model. And that is exactly what Jean-François Charpentier put in place, as follows:

By putting this new virtual platform in place with its associated service catalog, Mr Charpentier was able to give his business better agility thanks to easier and faster deployment. This platform has become the standard for all Solaris deployments in his business unit, and they expect to reach 90% to 100% Solaris virtualization by 2014.

Thursday, September 5, 2013

Preparing for #OOW: DB12c, M6, In-memory, Clouds, Big Data... and IoT

It’s always difficult to fit the upcoming Oracle OpenWorld topics, and all its sessions, into one title. Even if "Simplifying IT. Enabling Business Transformation." makes it clear what Oracle is focusing on, I wanted to be more specific about the "how". For those of you who attended the Hot Chips conference, some of the acronyms will be familiar; some may not ("IoT" is one I will come back to later). For those of you attending, or those who will get the session presentations once they are available online, here are a few things you don't want to miss, covering not only what Oracle R&D has done for you since last year, but also what customers, like you, have implemented thanks to the red stack and its partners, whether ISVs or SIs.

First, don't miss the Oracle executive keynotes; second, have a look at the general sessions delivered by VPs of Engineering to get more in-depth direction; and last but not least, network with your peers, whether in specific deep-dive sessions, experience-sharing sessions, or the Demo Grounds, where you will be able to see the technologies in action with Oracle developers and subject-matter experts. You will find a small selection hereafter.

Oracle Strategy and roadmaps

Industry Focus

Project implementation feedback & lessons learned

Deep-dive with the Experts

Learn how to do it yourself (in 1 hour): Hands-on-Labs

Watch the technologies at work: Demo Grounds

This digest is an extract of the many valuable sessions you will be able to attend to accelerate your projects and IT evolution.

Thursday, June 13, 2013

3-minute video of last month's Oracle Solaris Users Group event

A quick report on last month's Oracle Solaris Users Group event in Paris... in French...

Friday, May 17, 2013

Why OS matters: Solaris Users Group testimony

On Wednesday evening, a month after the launch of the new SPARC T5 & M5 servers in Paris, the French Solaris users group got together to get the latest from Oracle experts on SPARC T5 & M5, Oracle Virtual Network, and the new enhancements in Solaris 11.1 for Oracle Database. They also came to share their project experiences and lessons learned leveraging Solaris features: René Garcia Vallina from PSA did a deep dive on ZFS internals and best practices around SAP deployment, and Bruno Philippe explained how he managed to consolidate 100 Solaris servers into 6 thanks to Solaris 11 specific features.

It was very interesting to see all the value that an operating system like Solaris can bring. Today, operating systems are often deeply hidden in the bottom layers of the IT stack, and we tend to forget that this is a key layer for leveraging hardware innovations (new CPU cores, SSD storage, large memory subsystems...) and exposing them to the application layers (databases, Java application servers...). Solaris goes even further than most operating systems, around performance (I will get back to that point), observability (with DTrace), reliability (predictive self-healing...), and virtualization (Solaris ZFS, Solaris Zones, and Solaris network virtualization, also known as project "Crossbow").

All of those unique features bring even more value and benefits for IT management and operations in a time of cost optimization and efficiency. And during this event, this came through in all the presentations and exchanges.

Solaris and SPARC T5 & M5

As Eric Duminy explained in the introduction of his session on the new SPARC T5 & M5, we are looking at a new paradigm of CPU and system design. Following Moore's law, we are using transistors in completely new ways. This is no longer a race for frequency: if you want to achieve performance gains, you need more. You need to bring application features directly to the CPU and operating system level. Looking at the SPARC T5, we are talking about a 16-core, 8-threads-per-core processor, with up to 8 sockets and 4 TB RAM in a SPARC T5-8 server of only 8 rack units! That also means 128 cores and 1,024 threads, and even more for the M5-32: up to 192 cores, 1,536 threads, and 32 TB RAM! That's why the operating system is a key piece that needs to handle such systems efficiently: the ability to scale to that level, to place process threads and their associated memory on the right cores to avoid context switches, and to manage memory so as to feed the cores at the right pace... This is all what we have done in Solaris, and even more in Solaris 11.1, to leverage the new SPARC T5 & M5 servers and get the results we announced a month ago at the launch.

Of course we don't stop there. To get the best out of the infrastructure, we design at the CPU, system, and Solaris levels to optimize for the application, starting at the database level. This is what Karim Berrah covered in his session.

Solaris 11.1 unique optimizations for Oracle Database

Karim first explained the reasoning behind the completely new virtual memory management in Solaris 11.1, something that directly benefits Oracle Database for PGA and SGA allocation. You will experience it directly at database startup (twice as fast!). The new virtual memory system will also benefit ALL your applications; look for example at the mmap() function, which is now 45x faster (it is used for all shared libraries). Beyond performance, optimizations have been made in security, audit, and management. For example, with the upcoming new release of Oracle Database, you will be able to dynamically resize your SGA, and DBAs will get greater visibility into datapath performance thanks to a new DTrace table directly available inside the database: a tight integration between Oracle DB and unique Solaris features.

Alain Chereau, one of our performance gurus from the EMEA Oracle Solution Center, provided his insight and expertise. He especially reminded us that performance is achieved when ALL the layers work well together, and that "your OS choice has an impact on the DB and vice versa. Something to remember for your critical applications." Alain closed the session with final advice on the best use of SSDs for Oracle DB and Solaris ZFS. In short, SSDs are aligned on 4k blocks. For Oracle DB, starting with 11.2.0.3, redo logs can be written in 4k blocks; this needs to be specified at redo log creation, in the block size setting (see the sketch just after this paragraph). For Solaris, ZFS knows about SSDs and adapts directly. That's the reason why putting the ZFS secondary cache on SSD ("readzilla") is a very good idea, and a way to avoid the bad behavior introduced by new "blind" storage tiering when combined with ZFS. Just put SSD drives for the ZFS secondary cache directly inside your T5 or M5 servers and you are done. This is an important topic: even if a majority of customers run Oracle Database on ASM in production, to get the benefits of grid and Oracle RAC security and scalability, that may be different for development environments. As a matter of fact, for development systems, most customers leverage Solaris ZFS and its compression and infinite clone and snapshot functions.
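A hedged sketch of the redo log setting Alain described (file path, group number, and size are illustrative; the BLOCKSIZE clause requires Oracle Database 11.2.0.3 or later):

# Add a redo log group with a 4k block size to match SSD alignment:
sqlplus / as sysdba <<'EOF'
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/DB/redo04.log') SIZE 1G BLOCKSIZE 4096;
EOF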

This brings me to René's session on SAP on ZFS...

Lessons learned from deploying SAP on ZFS

Clearly one of the most technical sessions of this event. Congratulations to René for a very clear explanation of ZFS allocation mechanisms and algorithm policies. I will start with René's conclusion: "Don't follow your ISV (SAP in this case) recommendations blindly". In fact, PSA was experiencing performance degradation and constant I/O activity even with very few transactions on the application side. This was because SAP recommends filling the SAP data filesystem to more than 90%! A very bad idea when you put your data on a copy-on-write (COW) filesystem like ZFS, where I always recommend keeping around 20% free space to allow the COW operations to take place! That is, of course, the new rule for SAP deployments at PSA.

So if you already have ZFS deployed with this rule in place, you don't have to read further; just keep doing it and move directly to the next topic... Otherwise, you may currently be facing some performance problems as well. To identify which of your ZFS pools are facing this situation, René provided a nice DTrace one-liner that will tell you:

# Count ZFS gang-block allocations per process over 60 seconds; a
# non-empty result means the pool is too full to satisfy allocations
# with contiguous blocks, a symptom of the >90% fill problem above:
dtrace -qn 'fbt::zio_gang_tree_issue:entry { @[pid]=count();  }' -c 'sleep 60'

Then, to solve the problem, you understand that you need to add free space, in one shot, to enable the COW operations. The best way would be to add a vdev (for more details: Oracle Solaris ZFS: A Closer Look at Vdevs and Performance). You could also use a zpool replace with a bigger vdev, but that's not the best option in the long run. If you go through a whole modification cycle of the content of the pool, your zpool will "defragment" by itself. If you want to defragment the ZFS pool immediately and you have a database, you can do it through "alter table move" operations (special thanks to Alain Chereau for the tip). For standard files, you need to copy them and rename them back or, best, do a zfs send | zfs receive to another free zpool and you are done.
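A hedged sketch of both remedies (pool, device, and dataset names are hypothetical):

# Add free space in one shot by growing the pool with an additional vdev:
zpool add sappool mirror c0t10d0 c0t11d0

# Or stream the data to a fresh pool, which rewrites (and thus
# "defragments") every block on the way:
zfs snapshot sappool/sapdata@move
zfs send sappool/sapdata@move | zfs receive newpool/sapdata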

From 100 Servers to 6 thanks to Solaris 11

Last but not least, we also had another deep-dive session during this event, with a live demo! Thanks to Bruno Philippe, President of the French Solaris Users Group, who shared with us his project of consolidating 100 servers, running Solaris 8 to Solaris 10, into 6 servers, with minimal to no business impact! Bruno achieved this thanks to unique Solaris 11 features: Solaris network virtualization, combined with Solaris Zones P2V and V2V, and the SPARC hardware hypervisor (Oracle VM Server for SPARC, also known as "LDoms", or Logical Domains).
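To give an idea of what the P2V flow looks like for the Solaris 10 hosts, here is a hedged sketch using a solaris10 branded zone on Solaris 11 (archive path and zone name are hypothetical; Bruno's own procedure may differ):

# On the source Solaris 10 system: create a flash archive of the host:
flarcreate -n s10-host -S /net/archives/s10-host.flar

# On the Solaris 11 target: define a solaris10 branded zone and install
# it from that archive (-p preserves the source host's identity):
zonecfg -z s10zone "create -t SYSsolaris10; set zonepath=/zones/s10zone"
zoneadm -z s10zone install -p -a /net/archives/s10-host.flar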

I invite you to visit Bruno's blog for more details: Link Aggregations and VLAN Configurations for your consolidation (Solaris 11 and Solaris Zones)

I am awaiting his next entry explaining the details of the V2V and P2V operations that he demonstrated to us live on his laptop through a Solaris 11 x86 VirtualBox image.

I hope to see you at the upcoming Solaris and SPARC events to share your feedback and experience with us.

The upcoming Paris events will take place on June 4th, on datacenter virtualization with a focus on storage and network, and on July 4th, with a special session on the new SPARC servers and their business impact.

Tuesday, March 26, 2013

New SPARC servers launched today: extreme performance at exceptionally low cost

It will be very difficult to summarize in a short post all the details and the already available customer and ISV results leveraging Oracle's investment, design, and ability to execute on a complete SPARC server renewal, with not one but 2 processors launched: SPARC T5 and M5. It is somehow captured in the title of this entry, in Larry's own words: "extreme performance at exceptionally low cost". To give you a quick idea, we just announced 8,552,523 tpmC with one T5-8 (the new 8-socket T5 mid-range server). On top of that, "extreme scalability with extreme reliability": with the M5-32 server, we can scale up to 32 sockets and 32 TB of memory in a mission-critical system.

New way of designing systems

As John Fowler said: "this starts with design". Here at Oracle, we have a new way of designing. Historically, systems were designed by putting servers, storage, network, and OS together. At Oracle, we add Database, Middleware, and Applications to the design. We think through what it takes for the coherency protocols and the interfaces, and design around those points... and more.
Today we introduce not one but 2 processors, with a whole family of servers associated with them. Thanks to a common architecture, they are designed to work together. All of this, of course, runs Solaris: you can run Solaris 10, Solaris 11, and virtualize. No break in binary compatibility.

Direct benefit for your applications... at lowest risk... and lowest cost

This is good for our customers and our ISVs, enabling them to run their applications unchanged on those new platforms with unequaled performance gains, at the lowest cost and lowest risk, thanks to binary compatibility and the new server designs of the Oracle era. There were many customer examples on stage, so I will just pick 2: SAS moving from an M9000 to an M5-32 with a 15x overall gain, and Sybase moving from an M5000 to a T5-2 with an 11x overall gain. These are in my opinion very important, as they reflect real applications and customer experiences, many of them in financial services, and they have already jumped onto those new systems (thanks to the beta program).

To get a better idea of what the new SPARC T5 and M5 will bring to your applications, whether Siebel, E-Business Suite, JD Edwards, Java, or SAP... have a look here: https://blogs.oracle.com/BestPerf/ at the 17 world records... on performance and price.

Tuesday, December 4, 2012

Understanding Oracle Strategy, Cloud and Engineered Systems

Sometimes short, self-explanatory videos are better than long talks... I wanted to share with you today 3 short videos explaining Oracle's strategy, our Cloud positioning, and what Engineered Systems bring to your IT. Enjoy...

Oracle Strategy....


… the Cloud...



and Oracle Engineered Systems...

Tuesday, November 20, 2012

#OOW 2012 @PARIS... talking Oracle and Clouds, and Optimized Datacenter

For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and with customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012.

During this event, we will give an update on Oracle and Clouds, from private to public and hybrid models. In preparing this session, I thought a good start would be to review the state of Cloud Computing in France, and CIO requirements in particular. Since the first Cloud Camp in Paris in 2009, the market has evolved, but the basics are still the same: think hybrid.

From Traditional IT to Clouds

One size doesn't fit all. For big companies that already have an IT estate in place, there will be parts eligible for external (public) cloud and parts required to stay inside the firewall, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the Cloud Computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve toward the same model that end-users are now used to in their day-to-day lives, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a Global Service Provider", or for some into "IT 'is' the Business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models toward a Private Cloud Service Provider.

In this journey, there is still a big difference between most existing external Clouds and a firm's IT: the number of applications a CIO has to manage. Most cloud providers today are highly specialized, but at the end of the day, very few business processes rely on only one application. So CIOs have to combine everything, external and internal. And for the internal parts that they will have to evolve into a Private Cloud, the scope can be very large. This will often require CIOs to move from their traditional approaches to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed.

So let's have a look at the different Cloud models: what types of users they address, what value they bring, and most importantly what needs to be done by the Cloud Provider and what is left to the user.

IaaS, PaaS, SaaS: what's provided and what needs to be done


First of all, the Cloud Provider has to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT has to deliver and integrate: from disks to applications. As we can see in the picture above, providing pure IaaS leaves a lot for the end-user to cover; that's why the end-users targeted by this Cloud Service are IT people.

If you want to bring more value to developers, you need to provide them a ready-to-use development platform, which is what PaaS stands for: providing not only processing power, storage, and OS, but also the Database and Middleware platform.

SaaS is the last mile of the Cloud, providing an application ready to be used by business users; the remaining part for the end-users is configuring and tailoring the application to their specific usage.

In addition, there are common challenges encompassing all types of Cloud Services:

  • Security: covering all aspects, not only user management but also data flows and data privacy

  • Chargeback: measuring what is used, and by whom

  • Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, through the Database and Middleware for PaaS, to a full business application for SaaS

  • Scalability: the ability to evolve ALL the components of the Cloud Provider stack as needed

  • Availability: the ability to cover “always on” requirements

  • Efficiency: providing an infrastructure that leverages shared resources efficiently while still complying with SLAs (performance, availability, scalability, and ability to evolve)

  • Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades...)

  • Management: providing monitoring, configuration, and self-service up to the end-users

Oracle Strategy and Clouds

For CIOs, succeeding in their Private Cloud implementation means covering all those aspects across the life-cycle of each component they select to build their Cloud. That's where a multi-vendor layered approach falls short in terms of efficiency.

That’s the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS, and SaaS. We go as far as embedding software functions in hardware (at the storage and processor levels...) to ensure the best SLA with the highest efficiency.

The beauty of it, as we rely on standards, is that the Oracle components you are running today in-house are exactly the same ones we use to build Clouds, bringing you flexibility, reversibility, and a fast path to adoption.

With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically when talking about Cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability, and ability to evolve by design. To give you a feeling of what that brings just in terms of implementation project timeline: with Oracle SPARC SuperCluster, for example, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, database storage (Exadata Storage Cells in this case), application storage, and all network configuration.

This strategy enables CIOs to build Cloud Services very quickly, removing not only the complexity of integrating everything together, but also the complexity and cost of automation and evolution.

I invite you to discuss all those aspects, with regard to your particular context, face-to-face on November 28th.

Thursday, October 4, 2012

#OOW 2012: Big Data and The Social Revolution

As Cognizant CSO Malcolm Frank said about the "Future of Work", and how business should prepare in the face of the new generation, not only of devices and the "Internet of Things" but also of their users ("the Millennials"), moving from "consumers" to "prosumers": we are at a turning point today, which is bringing us to the next IT architecture wave. This is no longer just about putting Big Data, social networks, and Customer Experience (CxM) on top of old existing processes; it is about embracing the next curve by identifying which processes need to be improved, but also, and more importantly, which processes are obsolete and need to be gotten rid of, and which new processes need to be put in place. It is about managing both the hierarchical, structured enterprise and its social connections and influencers inside and outside the enterprise. And this applies everywhere, even to utilities and smart grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time with those new technologies, but about understanding what needs to be looked at, in real time, down to a handful of relevant reports with the KPIs relevant to the business. It is about how IT can anticipate the next wave, answer business questions, and put those capabilities, in real time, right into the hands of the decision makers... This is the turning point where IT really moves from the past decade's "cost center" to "value for the business", as corporate stakeholders become able to touch the value directly at their fingertips.

It is all about making data-driven strategic decisions, encompassing and enriched by ALL the data, and connected to customer/prosumer influencers. This brings stakeholders the ability to make informed decisions on questions like: “Who would be the best Olympic gold medalist to represent my automotive brand?”... in a few clicks and in real time, based on social media analysis (Twitter, Facebook, Google+...) and connections linked to my enterprise data.

A true example was demonstrated by Larry Ellison in real time during yesterday’s keynote, where “Hardware and Software Engineered to Work Together” is not only about extreme performance but also about solutions that the business can touch, thanks to well-integrated Customer eXperience Management and social networking: bringing IT the capabilities to move to the next IT architecture wave.

This was also illustrated today in 2 other sessions that I had the opportunity to attend. The first session brought the “Internet of Things” in Oil & Gas into actionable decisions, thanks to Complex Event Processing capturing sensor data, with a ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets, and Exalytics to provide the informed-decision interface up to the end-user. The second session showed the Real-Time Decisions engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet.

I have to close my post here, as I must go run our practical hands-on lab, cooked up with Olivier Canonge, Christophe Pauliat, and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud announced last Sunday by Larry and developed through many examples this morning by John Fowler. John also announced Solaris 11.1 today, with a range of network innovations and virtualization at the OS level, as well as many optimizations for applications, like Oracle RAC, with the introduction of the lock manager inside the Solaris kernel. Last but not least, he introduced the Xsigo Datacenter Fabric for highly simplified network and storage virtualization for your Cloud infrastructure.

Hoping you will get ready to jump on the next wave, we are here to help...

Monday, October 1, 2012

#OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

The title of this post is a summary of the 4 announcements made by Larry Ellison today, during the opening session of Oracle Open World 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.

Oracle IaaS goes Public... and Private...

Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specific parts and the necessary interconnections between applications, inside and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our server, storage, OS, and virtualization technologies, all "Engineered Together".

This Cloud infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today takes that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose to operate the same systems that provide our Public Cloud infrastructure, Exadata, Exalogic & SuperCluster, behind their firewall, in a Private Cloud model.

Oracle 12c Multitenant Database

This is also a major announcement made today about what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no extra cost, especially in terms of memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and the database background processes, and "Pluggable" Databases inside this Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it becomes available, as it is a major step forward into true database consolidation, with multitenancy on a shared (optimized) infrastructure.
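As a sketch of the model (names are hypothetical; syntax per the 12c multitenant documentation as later published), creating and opening a pluggable database inside a container looks like this:

# Create and open a pluggable database; the container owns the SGA and
# the background processes shared by all its pluggable databases:
sqlplus / as sysdba <<'EOF'
CREATE PLUGGABLE DATABASE sales_pdb ADMIN USER pdbadmin IDENTIFIED BY "Secret#1"
  FILE_NAME_CONVERT = ('/u01/oradata/pdbseed/', '/u01/oradata/sales_pdb/');
ALTER PLUGGABLE DATABASE sales_pdb OPEN;
EOF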

X3H2M2, enabling the new Exadata X3 in-Memory Database

Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of your data in the memory cache hierarchy. This is the major software enhancement of the new X3 Exadata machine, but as it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. And that's not the only thing we did with X3; at the same time, we upgraded everything:

  • the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge),
  • the memory, now 256 GB per node as well,
  • and the new Flash Fire cards, now bringing up to 22 TB of flash cache.

All of this, 4 TB of RAM + 22 TB of flash, is used cleverly by the X3H2M2 algorithm, not only for reads but also for writes... making a very big difference compared to traditional storage flash extensions.

But what does this extra performance bring you on an already very efficient system? Double the performance compared to the fastest storage arrays on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days... Especially as we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point.

As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And Andrew Mendelsohn, Senior Vice President, Database Server Technologies, came on stage to explain that the next step after I/O optimization for the Database with Exadata is to accelerate the Database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day... and of the Oracle User Group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data, one in finance and fraud profiling, and the other on practical deployment of Oracle Exalytics for data analytics.

In conclusion, one picture to give you a sense of the scale of Oracle OpenWorld


... and you can understand why, with such rich content... and this is only the first day!

Monday, September 3, 2012

Oracle Open World 2012 preview: mark your calendars

Now less than a month from Oracle's major event, held as every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be delivered by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development), and John Fowler (head of systems development), to give you a foretaste.

Oracle strategy and roadmaps

Of course, beyond the plenary sessions, which will give you a precise view of the strategy, and for those of you on site, I encourage you not to miss the deep-dive sessions taking place during the week; here are a few selected picks:

Experience feedback and testimonies

While Oracle Open World is an opportunity to talk directly with Oracle's development teams, it is also an opportunity to exchange with customers and experts who have implemented our technologies, and to benefit from their experience, for example:

Exchanges with user groups and Oracle development teams

If you plan to arrive early enough, you will also be able to talk with the user groups starting on Sunday, or every evening with the Oracle development teams, on topics such as:

Test and evaluate the solutions

And finally, you can even test the technologies at the Oracle DemoGrounds (1133 Moscone South for the Oracle systems, OS, and virtualization part) and in the Hands-on Labs, such as:

In conclusion, a very rich week ahead, which will let you cover all the topics at the heart of your concerns, from strategy to implementation... This week needs to be prepared, to tailor your agenda to your needs, across the more than 2,000 sessions of which I have given you only an extract, and all of which you can find online.
