Wednesday, February 26, 2014

Mobile World Congress in Barcelona: IoT, connected cars... and boats

When I entered the exhibition halls, I wondered whether I was really at Mobile World Congress or at a worldwide automotive show: nearly every booth had a car on display to demonstrate a mobile phone on wheels. But at Oracle, we have a boat! We demonstrate how the 300 sensors embedded in the America's Cup sailing boat can drive real-time human decisions. This is, of course, only one of our many use cases, and I won't go through the entire list, from communications and connected cars to smart grids, analytics and cloud-enabled mobile applications. Instead, I will focus on an interesting embedded topic, as the trend around IoT (Internet of Things) is huge.

I already touched on this in my last OOW blog post. Since we are in a (very) small world (i.e. the devices can be very small), the BOM (bill of materials) needs to come at the right price, with the right energy consumption, and provide a very long life cycle: you don't want your car to break after one year, to face complex upgrades every 6 months, or to be physically connected to your repair shop too often. That's where Java comes into play, because it provides the right footprint, management and life cycle. The new thing we are showing today is Java with a TEE (Trusted Execution Environment) integrated in hardware. This brings security inside the device by providing a secure store for your keys. Security is a major concern in IoT, especially for large industrial projects like connected cars, smart grids, smart energy or even healthcare: you don't want your devices to be tampered with, for both 1) safety reasons and 2) fraud prevention. And Java serves all IoT use cases well, even those with stronger security requirements, such as the ones Gemalto is implementing with it.

To help you get there, Gemalto's Cinterion concept board enables you to quickly prototype your embedded devices and connect them securely (even your dart board)...


On the other side of those devices, there is you, who needs to make enriched decisions. That's where data (and analytics) comes into play. For this part, I invite you to join us in Paris on March 19th for a special event on Data Challenges for Business. Ganesh Ramamurthy, Oracle VP of Software Development in our Engineering Systems group, will be with us to explain what Oracle systems bring to managing and analyzing all your data. He will be joined by Cofely Services - GDF Suez, Bouygues Telecom and Centre Hospitalier de Douai, who will share their experiences.

Sunday, September 29, 2013

#OOW2013: Internet of Things... and Big Data

As promised in my first entry a few weeks ago when preparing for Oracle OpenWorld, I am coming back to IoT: the Internet of Things... and Big Data. This was the closing topic, developed by Edward Screven, Chris Baker and Deutsche Telekom's Dr. Thomas Kiessling. Of course, Big Data and the Internet of Things (or M2M, Machine-to-Machine) were topics covered not only on the last day, but all along the conference, including at JavaOne, with 2 interesting sessions from Gemalto. Gemalto even developed a kit to test your own M2M use cases. The Internet of Things opens new opportunities, but also challenges to overcome to get it right, which at Oracle we classify in 3 categories: Acquire & Transmit, Integrate & Secure, and Analyze & Act.

Acquire & Transmit

Just think of potentially billions of devices that you need to remotely deploy, maintain and update, while ensuring proper transmission of data (the right data at the right time, as your power budget is constrained) and even extending decision making closer to the source. With the standards-based Java platform optimized for devices, we already cover all those requirements today, and we are already involved in major Internet of Things projects, such as smart grids and connected cars.

Integrate & Secure

Of course, integrating all the pieces together, securely, is key: you want the solution 1) to work reliably with a potentially very large number of devices and 2) not to be compromised by any means. Here again, at the device level, Java provides the intrinsic security functions that you need, from secure code loading, verification and execution, to confidentiality of data handling, storage and communication, up to authentication of the entities involved in secure operations. And we carry this secure integration all the way to the datacenter, thanks to our comprehensive Identity and Access Management system, data masking, fraud detection, and built-in network security and encryption.

Analyze & Act

Last but not least is to analyze and correlate those data and take appropriate actions. This is where M2M and the Internet of Things link to Big Data. Several "Vs" characterize Big Data: Volume, Velocity (time & speed), Variety (data formats), Value (what in those data is really interesting for my business), Visualization (how do I find something of value in them?), and Veracity (ensuring that what I add into my trusted data warehouse from those new sources is validated). In M2M we don't always have Volume, but we still have the other "Vs" to take care of. To handle all this IoT-generated information inside the datacenter, and correlate it with existing data relevant to your business (ERP, supply chain, supplier quality tracking, purchasing process improvement, etc.), you may need tools. That's why Oracle developed the Oracle Big Data Appliance, to build an "HPC for Data" grid including Hadoop and NoSQL to capture IoT data, and Oracle Exalytics/Oracle Endeca Information Discovery to enable the visualization/discovery phase. Once you pass the discovery phase, we can act automatically, in real time, on the specific triggers that you have identified, thanks to the Oracle Event Processing solution.
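To make the "act on identified triggers" idea concrete, here is a minimal stand-in sketch using only standard shell tools (not Oracle Event Processing itself; the sensor names and threshold are hypothetical): it filters a stream of readings and acts only when a condition fires.

```shell
# Hypothetical feed of "sensor_id value" readings; in a real deployment
# these would arrive continuously from the devices via an IoT gateway.
printf '%s\n' "temp1 72" "temp2 95" "temp3 66" "temp4 88" |
  awk '$2 > 80 { print "ALERT", $1, $2 }'   # fire only above the threshold
```

In a real deployment the filter would be a continuous query over the event stream, but the principle is the same: don't store and scan everything, act on the flow.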

Deliver

As you can see, the Oracle Internet of Things platform enables you to quickly develop and deliver, securely, an end-to-end solution.

The end result is a quick time-to-market for an M2M project like the one presented on stage and used live during the conference. This project was developed in 4 weeks by 6 people! The goal was to control room capacity, with live control of the in/out doors depending on the flow of participants in the room. And as you can see in the architecture diagram, we effectively cover everything from Java on the device up to Exalytics in the datacenter.

Wednesday, September 25, 2013

#OOW2013: Jump into the Cloud...

Today we went into the Cloud, with 3 major announcements delivered by Thomas Kurian: a full Database as a Service, a full Java as a Service and a full Infrastructure as a Service in Oracle Cloud, guaranteed, backed up and operated by Oracle, with different levels of service.

Database as a Service

You will be able to provision inside Oracle Cloud a full Oracle Database (12c or 11g), either as a single node or as a highly available RAC cluster. This database will be fully accessible through SQL*Net, with root access. The service will be offered in 3 different models:

Basic Service: pre-configured, automatically installed Database Software, managed by you through Enterprise Manager Express.

Managed Service: Oracle Databases managed by Oracle, including:

  • Quarterly Patching and Upgrades with SLA
  • Automated Backup and Point-in-Time Recovery
  • Elastic Compute and Storage

Maximum Availability Service: Oracle manages a highly available Database, including:

  • Real Application Cluster (RAC)
  • Data Guard for Maximum Availability
  • More flexible upgrade schedule

Of course, you will be able to move your data, or even your entire database, between your enterprise datacenter and Oracle Cloud by leveraging regular tools such as SQL*Loader or Data Pump.
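As a sketch of what such a move could look like with Data Pump (the schema name, connect strings and directory object below are illustrative placeholders, not part of the announcement):

```shell
# Export a schema from the on-premises database (names are illustrative).
expdp system@onprem_db schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr.dmp logfile=hr_exp.log

# After transferring hr.dmp to the cloud database's directory...
impdp system@cloud_db schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr.dmp logfile=hr_imp.log
```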

Java as a Service

Following the same model as Database as a Service, you will be able to deploy dedicated WebLogic cluster(s) on our Compute Service. Full WLST, JMX and root access will be provided as well. The 3 service models will be the following:

Basic Service: pre-configured, automatically installed WebLogic software, with a single-node WebLogic Suite (12c or 11g), managed by you using Enterprise Manager.

Managed Service: Oracle manages one or more WebLogic domains, in the same way as the Database as a Service Managed Service.

Maximum Availability Service: Oracle manages a highly available environment, with the following characteristics:

  • WebLogic cluster integrated with RAC
  • Automated Disaster Recovery and Failover
  • More flexible upgrade schedules
  • Additional staging environment

So now let's have a quick look at the constituents of our Infrastructure as a Service layer.

Infrastructure as a Service

Compute Service: will provide elastic compute capacity in Oracle Cloud, covering 3 different types of requirements: Standard, Compute Intensive or Memory Intensive. Management will be based on a REST API, with root VM access provided as well. The Compute Service will offer network isolation and elastic IP addresses. And of course, it will be highly available.

Storage Service: will store and manage digital content. Management will be through Java and REST APIs (OpenStack Swift). It has been designed for performance, scalability and availability.
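As a sketch of what an OpenStack Swift-style REST interaction looks like (the endpoint, account and token variables are generic Swift placeholders, not actual Oracle Cloud values):

```shell
# Upload a file as an object into a container (Swift-style REST API).
# $ENDPOINT, $ACCOUNT and $TOKEN are placeholders obtained from the service.
curl -X PUT -T backup.tar.gz \
     -H "X-Auth-Token: $TOKEN" \
     "$ENDPOINT/v1/$ACCOUNT/mycontainer/backup.tar.gz"

# List the objects in the container to verify the upload.
curl -H "X-Auth-Token: $TOKEN" "$ENDPOINT/v1/$ACCOUNT/mycontainer"
```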

All those new or enhanced services complement the Oracle Software as a Service offerings that already exist and have been adopted successfully by many of our customers, as shown in the many testimonies during Thomas's keynotes. This provides a platform for our partners, who leverage our technologies to build their own services in Oracle Cloud. That's why we also created the Oracle Cloud Marketplace, enabling the delivery of our partners' applications, as well as their combination/integration tailored to your specific needs, directly in Oracle Cloud.

Let's Jump into the Cloud....

Monday, September 23, 2013

#OOW2013: All your Database in-memory for All your existing applications... on Big Memory Machines

Many announcements were made today by Larry Ellison during his opening of Oracle OpenWorld. To begin with, the America's Cup is still running, as Oracle won today's races. I must admit that seeing those boats racing at such speed and crossing each other within a few meters was really impressive. On the OpenWorld side, it was also very impressive: more people are attending the event this year, 60,000! And in terms of big numbers, we saw very impressive results from the new features and products announced today by Larry: the Database 12c In-Memory option, the M6-32 Big Memory Machine, the M6-32 SuperCluster, and the Oracle Database Backup, Logging, Recovery Appliance (no, I am not joking, that's its real product name!).

Database 12c in-memory option: both row and column in-memory formats for same data/table

This new option will benefit all your existing applications, unchanged. We leverage memory to store both formats at the same time. This enables us to drop all the indexes that are usually necessary to process queries, with a design target of a 100x performance improvement for real-time analytics. As you will see later, we can achieve even more, especially when running on an M6-32 Big Memory Machine. At the same time, the goal was also to improve transaction throughput by 2x!

The nice thing about this option is that it benefits all your existing applications running on top of Oracle Database 12c: no change required.

On stage, Juan Loaiza did a small demonstration of this new option on a 3-billion-row database representing Wikipedia search queries. On a regular database without this option, after identifying (or guessing) the queries most likely to be run by users, you put appropriate indexes in place (10 to 20 of them); you can then run your query with acceptable performance, in this case 2,005 million rows scanned per second instead of 5 million rows scanned per second. Not too bad... Now, replacing the required indexes with the new column format stored in-memory, we achieved 7,151 million rows scanned per second! People looking into Big Data and real-time decisions will surely want to take a look at this.
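As an illustration of the "no change required" point, enabling the option is a per-table declaration rather than an application change. A hedged sketch, with illustrative table and index names (the exact syntax belongs to the later 12c release that ships the option):

```shell
# Mark a table for population into the in-memory column store, then
# drop an analytic index it used to need (names are illustrative).
sqlplus -s / as sysdba <<'SQL'
ALTER TABLE wiki_queries INMEMORY;
DROP INDEX wiki_queries_topic_ix;
SQL
```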

The second announcement was a new processor, and a new system associated with it: the M6 chip and the M6-32 Big Memory Machine... available now!

M6-32 Big Memory Machine: Terabyte Scale Computing

This system is compatible with the previous generation of M5 chips, protecting existing investments, and can also host the new M6 processor with 12 cores and 96 threads. Everything in this system is about terabytes: up to 32 TB of memory, 3 TB/sec of system bandwidth, 1.4 TB/sec of memory bandwidth, and 1 TB/sec of I/O bandwidth!

This new machine is also the compute node of the new M6-32 SuperCluster, also announced today.

M6-32 SuperCluster: In-Memory Database & Application System

That's our fastest Database Machine, with big memory for the column store and integrated Exadata storage! Juan Loaiza also ran the same Wikipedia search demonstration on this system... not with 3 billion rows, but with 218 billion rows! The result speaks for itself: 341,072 million rows scanned per second!

With all those critical systems hosting such amounts of data, it is also very important to provide a powerful database backup and restore solution... and that's what the latest appliance announced today is about.

Oracle Database Backup, Logging, Recovery Appliance

Just by reading its name you get nearly all the capabilities this new appliance provides. First, it is specialized in backing up the Oracle Databases of ALL your systems running one (Engineered Systems, like the latest M6-32 SuperCluster or Exadata, as well as your regular servers). Second, it also captures all your database logs, so you have not only a backup but also the deltas between now and your latest backup. This allows you to go back to the exact point you want when recovering your database.
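The building blocks the appliance centralizes, a full backup plus the archived redo in between, are the same ones a classic RMAN session produces. A sketch of that baseline (connection details are illustrative; the appliance adds continuous redo capture on top of this):

```shell
# Classic RMAN backup of the database plus its archived redo logs --
# backup + deltas, which together enable point-in-time recovery.
rman target / <<'RMAN'
BACKUP DATABASE PLUS ARCHIVELOG;
LIST BACKUP SUMMARY;
RMAN
```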

It can even be coupled with our new Database Backup service on Oracle Public Cloud, for an extra secure copy.

With this new appliance you can now be confident in securing your Oracle Database data.

Building your future datacenter

Today we not only saw the new Oracle Database 12c enabling all your applications to work in memory, we also saw the associated M6-32 server and the M6-32 SuperCluster Engineered System to run the stack with Big Memory capacity... all secured by the Oracle Database Backup, Logging, Recovery Appliance. All of those innovations contribute to building your datacenter of the future, where everything is engineered to work together at the factory.

Sunday, September 22, 2013

GRTgaz new Information System on Oracle SuperCluster

This testimony from Mr. Sébastien Flourac, Head of Strategy for GRTgaz IT, concluded last week's SPARC Showcase event. Mr. Flourac highlighted why he selected an Oracle SuperCluster Engineered System over the more traditional build-it-yourself approach that he also studied.

Due to EEC regulation, GRTgaz, a subsidiary of GDF-Suez, has to be externalized, including of course all its applications and existing IT, in less than 2 years. The current platforms are shared with other GDF-Suez services, which means that GRTgaz has to build an entirely new platform to migrate its existing applications with the lowest associated risk. As a major part of the technologies supporting GRTgaz applications was running on Oracle Database and Oracle WebLogic, on either IBM AIX or SPARC Solaris, GRTgaz took a closer look at what Oracle proposes to simplify running Oracle software on Oracle hardware, compatible with the existing GRTgaz environment. It became obvious to Mr. Flourac that Oracle SuperCluster was the best fit for his project, and for the future, for several reasons.

Simplicity and lower cost

With Oracle Engineered Systems, all the complexity and cost of traditional build-it-yourself solutions is taken care of at the Oracle engineering level. The configurations and setup have been defined and integrated at all levels (software, virtualization and hardware) to offer the best SLA (performance and availability). This helped simplify the externalization project, and also brought additional benefits on the storage layer for the future.

It was the best financial scenario in their project context.

Lower risks

Not only does the SuperCluster offer the best SLA by design, it also provides a very important feature for this complex application migration: full compatibility for running existing Oracle software versions. This was very important for Mr. Flourac, to avoid having to migrate and upgrade at the same time.
It also provides:

  • an integrated stack of Oracle Software and Hardware
  • an upgrade process tested by Oracle
  • better support for the entire stack

Built for the future

Oracle SuperCluster provides GRTgaz with a consolidated, homogeneous and extremely scalable platform, which not only enables this externalization project but will also be able to host new business requests.

With this new platform in place, Mr. Flourac already knows that in the next phases he will be able to leverage additional integrated features that are unique to running Oracle software on Oracle SuperCluster:

  • Exadata integration and acceleration for Oracle Database starting with 11gR2
  • Exalogic integration and acceleration for Oracle Weblogic starting with 10.3.4

Of course the SuperCluster is a key enabler, but such a project also requires a team to manage the migration, the transition and the run. This is done with the support of Oracle ACS (transition), Fujitsu (migration) and Euriware (run).


T5-2 for high-performance financial trading

In this post, I will focus on the second testimony, reported by François Napoleoni. Here, the goal was to select the best platform to support the growth and upgrade of a financial trading application. The comparison was made between:
  1. T5-2, running Oracle VM, Solaris, Sybase and the financial application
  2. x86, running VMware, Red Hat, Sybase and the financial application

The decision criteria were:

  • Simplified architecture
  • Systems performance in real life (what has been tested and measured)
  • Platform stability
  • Single point of support
  • Leverage internal skills
  • Strong security enforced between the different virtual environments
  • ROI of the solution

For those of you who understand French, I will let you listen to the few-minute video below. For English readers, you will find more details in this post.

Oracle VM virtualization on T4-4: architecture and service catalog

Last Tuesday, during the SPARC Showcase event, Jean-François Charpentier, François Napoleoni and Sébastien Flourac delivered very interesting use cases of the latest Oracle systems at work: an Oracle VM virtualization project on T4-4, intensive financial trading on T5-2, and a complete information system migration to Oracle SuperCluster.

In this post I will start by covering the main points Jean-François Charpentier focused on to build a consolidated T4-4 platform that is effective for his business users.

Oracle VM virtualization on T4-4 Architecture

As is often the case, Mr. Charpentier had to handle an existing environment that needed to be taken into account when building this new virtual platform. First, he had to provide a platform that could consolidate all the existing environments, the main drivers being:

  • the total memory requirements of the existing assets
  • multi-tenant capability, to share the platform securely between several different networks
  • compliance with strong SLAs

The T4-4 was the best building block to cover them:

  • memory: up to 2 TB
  • multiple network connections: up to 16x PCIe extensions
  • Oracle VM built in, with redundant channels capability and live migration
  • Solaris binary compatibility, enabling easy consolidation of existing environments

Overall Architecture Design

The deployment choice was to set up 2x Oracle VM T4-4 clusters per site, as follows:


To cover his SLA requirements, Mr. Charpentier built redundancy not only by providing multiple T4-4 nodes per Oracle VM cluster, but also at the Oracle VM level itself. For Oracle VM, he chose to make the storage and network virtual access layers redundant, as displayed in the following 2 diagrams.

Oracle VM Virtual Multipathing with alternate I/O Domain


Oracle VM Virtual Network Access through IPMP


All of this virtual network layer is linked to different network backbones, thanks to the 16x PCIe extensions of the T4-4, as illustrated in the following diagram.
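As a sketch of what the IPMP part of such a redundant network access layer looks like on Solaris 11 (the interface names and address below are illustrative, not taken from this deployment):

```shell
# Group two physical datalinks into an IPMP group for resilient access.
ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0

# Put the service address on the group; it fails over between net0 and net1.
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4
```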


Another option could have been to deploy Oracle Virtual Network, to enable disk and network access with only 2x PCIe slots at the server layer.

Oracle VM on T4-4 Service Catalog

Beyond the architecture choices, which need to comply with strong SLAs, the development of a service catalog is also key to moving IT toward being a service provider. And that is exactly what Jean-François Charpentier put in place, as follows:

By putting this new virtual platform in place with its associated service catalog, Mr. Charpentier was able to give his business better agility thanks to easier and faster deployment. This platform has become the standard for all Solaris deployments in his business unit, and they expect to reach 90% to 100% Solaris virtualization by 2014.

Monday, September 9, 2013

SPARC Showcase in Paris: September 17th

I recently posted about preparing for Oracle OpenWorld. For those of you in Paris, a good start would be to come to the SPARC Showcase on September 17th, where our customers will explain why and where they are leveraging the latest SPARC technologies, T4, T5 and Oracle SuperCluster, to the benefit of their IT and business:

  • Mr Jean-Marc Jacquot, from Mysis
  • Mr Jean-François Charpentier, Technical Architect from a leader in HR services solutions
  • Mr Sébastien Flourac, Head of Strategy for GRTgaz IT

You can register here.

Thursday, September 5, 2013

Preparing for #OOW: DB12c, M6, In-memory, Clouds, Big Data... and IoT

It's always difficult to fit the upcoming Oracle OpenWorld topics, and all its sessions, into one title. Even if "Simplifying IT. Enabling Business Transformation." makes it clear what Oracle is focusing on, I wanted to be more specific about the "how". For those of you who attended the Hot Chips conference, some of the acronyms will be familiar, some may not (I will come back to "IoT" later). For those of you attending, or those who will get the session presentations once they are available online, here are a few things you don't want to miss. They will show you not only what Oracle R&D has done for you since last year, but also what customers like you have implemented thanks to the red stack and its partners, be they ISVs or SIs.

First, don't miss the Oracle executive keynotes; second, have a look at the general sessions delivered by VPs of Engineering to get more in-depth direction; and last but not least, network with your peers, whether in specific deep-dive sessions, experience-sharing sessions, or on the demo grounds, where you will be able to see the technologies in action with Oracle's developers and subject matter experts. You will find a small selection hereafter.

Oracle Strategy and roadmaps

Industry Focus

Project implementation feedback & lessons learned

Deep-dive with the Experts

Learn how to do it yourself (in 1 hour): Hands-on Labs

Watch the technologies at work: Demo Grounds

This digest is an extract of the many valuable sessions you will be able to attend to accelerate your projects and IT evolution.

Thursday, June 13, 2013

3-minute video of last month's Oracle Solaris Users Group event

A quick report on last month's Oracle Solaris Users Group event in Paris... in French...

Friday, May 17, 2013

Why OS matters: Solaris Users Group testimony

On Wednesday evening, a month after the launch of the new SPARC T5 & M5 servers in Paris, the French Solaris Users Group got together to get the latest from Oracle experts on SPARC T5 & M5, Oracle Virtual Network, and the new enhancements in Solaris 11.1 for Oracle Database. They also came to share their project experiences and lessons learned leveraging Solaris features: René Garcia Vallina from PSA did a deep dive on ZFS internals and best practices around SAP deployment, and Bruno Philippe explained how he managed to consolidate 100 Solaris servers into 6 thanks to Solaris 11-specific features.

It was very interesting to see all the value that an operating system like Solaris can bring. Today, operating systems are often hidden deep in the bottom layers of the IT stack, and we tend to forget that this is a key layer for leveraging hardware innovations (new CPU cores, SSD storage, large memory subsystems, ...) and exposing them to the application layers (databases, Java application servers, ...). Solaris goes even further than most operating systems in performance (I will get back to that point), observability (with DTrace), reliability (predictive self-healing, ...), and virtualization (Solaris ZFS, Solaris Zones and Solaris network virtualization, also known as project "Crossbow").

All of those unique features bring even more value and benefits to IT management and operations in a time of cost optimization and efficiency. During this event, this was something we could take away from all the presentations and exchanges.

Solaris and SPARC T5 & M5

As Eric Duminy explained in the introduction of his session on the new SPARC T5 & M5, we are looking at a new paradigm of CPU design and associated systems. Following Moore's law, we are using transistors in completely new ways. This is no longer a race for frequency: if you want to achieve performance gains, you need more. You need to bring application features directly to the CPU and operating system level. Looking at the SPARC T5, we are talking about a 16-core, 8-threads-per-core processor, with up to 8 sockets and 4 TB of RAM in a SPARC T5-8 server of only 8 rack units! That also means 128 cores and 1,024 threads, and even more for the M5-32, with up to 192 cores, 1,536 threads and 32 TB of RAM! That's why the operating system is a key piece that needs to handle such systems efficiently: the ability to scale to that level, to place process threads and their associated memory on the right cores to avoid context switches, and to manage memory so as to feed the cores at the right pace. This is all what we have done in Solaris, and even more in Solaris 11.1, to leverage the new SPARC T5 & M5 servers and get the results we announced a month ago at the launch.

Of course, we don't stop there. To get the best out of the infrastructure, we design at the CPU, system and Solaris levels to optimize for the application, starting at the database level. This is what Karim Berrah covered in his session.

Solaris 11.1 unique optimizations for Oracle Database

Karim first explained the reasoning behind the completely new virtual memory management in Solaris 11.1, something that directly benefits Oracle Database for PGA and SGA allocation. You will experience it directly at database startup (twice as fast!). The new virtual memory system will also benefit ALL your applications; just look, for example, at the mmap() function, which is now 45x faster (it is used for all shared libraries). Beyond performance, optimizations have been made in security, audit, and management. For example, with the upcoming new release of Oracle Database, you will be able to dynamically resize your SGA, and DBAs will get greater visibility into datapath performance thanks to a new DTrace table available directly inside the database: a tight integration between Oracle Database and unique Solaris features.

Alain Chereau, one of our performance gurus from the EMEA Oracle Solution Center, provided his insight and expertise. He reminded us in particular that performance is achieved when ALL the layers work well together, and that "your OS choice has an impact on the DB, and the reverse. Something to remember for your critical applications." Alain closed the session with final advice on the best use of SSDs with Oracle Database and Solaris ZFS. In short, SSDs are aligned on 4k blocks. For Oracle Database, starting with 11.2.0.3, redo logs can be written in 4k blocks; this needs to be specified at redo log creation, through the block size setting. For Solaris, ZFS knows about SSDs and adapts directly. That's why putting the ZFS secondary cache on SSD ("readzilla") is a very good idea, and a way to avoid the bad behavior introduced by new "blind" storage tiering when combined with ZFS. Just put SSD drives for the ZFS secondary cache directly inside your T5 or M5 servers and you are done. This is an important topic: even if a majority of customers run Oracle Database on ASM in production, to get the benefits of the grid and of Oracle RAC security and scalability, that may be different for development environments. As a matter of fact, for development systems most customers leverage Solaris ZFS and its compression and virtually unlimited clone and snapshot functions.
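For the 4k redo log point, the block size is specified when the log group is created. A sketch (group number, path and size are illustrative; on storage that does not report 4k sectors, Oracle may additionally require an override init parameter):

```shell
sqlplus -s / as sysdba <<'SQL'
-- Create a redo log group whose writes align with the SSD's 4k blocks.
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/redo04a.log') SIZE 1G BLOCKSIZE 4096;
SQL
```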

This brings me to René's session on SAP on ZFS...

Lessons learn from deploying SAP on ZFS

This was clearly one of the most technical sessions of the event. Congratulations to René for a very clear explanation of ZFS allocation mechanisms and algorithm policies. I will start with René's conclusion: "Don't follow your ISV's (SAP in this case) recommendations blindly." In fact, PSA was experiencing performance degradation and constant I/O activity even with very few transactions on the application side. This was because SAP recommends filling the SAP data filesystem to more than 90%! A very bad idea when you put your data on a copy-on-write (COW) filesystem like ZFS, where I always recommend keeping around 20% free space to allow the COW operations to take place! That is, of course, the new rule for SAP deployments at PSA.

So if you already have ZFS deployed with this rule in place, you don't have to read further; just keep doing it and move directly to the next topic... Otherwise, you may currently be facing some performance problems as well. To identify which of your ZFS pools are in this situation, René provided a nice DTrace command that will tell you:

# dtrace -qn 'fbt::zio_gang_tree_issue:entry { @[pid]=count();  }' -c 'sleep 60'

Then, to solve the problem, you understand that you need to add free space (in one shot) to enable the COW operations. The best way is to add a vdev (for more details: Oracle Solaris ZFS: A Closer Look at Vdevs and Performance). You could also use a zpool replace with a bigger vdev, but that's not the best option in the long run. If you go through a whole modification cycle of the content of the pool, your zpool will "defragment" by itself. If you want to "defragment" the ZFS pool immediately and you host a database, you can do it through "alter table move" operations (special thanks to Alain Chereau for the tip). For standard files, you need to copy them and rename them back, or better, do a zfs send | zfs receive to another free zpool and you are done.
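To check your pools against the ~20% free-space rule, and to relocate a dataset to a roomier pool, a sketch with illustrative pool and dataset names:

```shell
# Show each pool's fill level; anything above ~80% is a COW warning sign.
zpool list -o name,size,capacity

# Snapshot a dataset and stream it to a pool with free space; the receive
# rewrites the data contiguously on the destination.
zfs snapshot datapool/sapdata@move
zfs send datapool/sapdata@move | zfs receive newpool/sapdata
```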

From 100 Servers to 6 thanks to Solaris 11

Last but not least, we also had another deep-dive session during this event, with a live demo! Thanks to Bruno Philippe, President of the French Solaris Users Group, who shared with us his project of consolidating 100 servers, running Solaris 8 to Solaris 10, into 6 servers, with minimal to no business impact allowed! Bruno achieved this thanks to unique new Solaris 11 features: Solaris network virtualization, combined with Solaris Zones P2V and V2V, and the SPARC hardware hypervisor (Oracle VM for SPARC, also known as "LDOM", or Logical Domains).
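As a sketch of how the network virtualization and P2V pieces combine on a consolidation host (the link, zone and archive names below are illustrative, not Bruno's actual configuration):

```shell
# Carve a virtual NIC out of a physical link for the consolidated zone.
dladm create-vnic -l net0 vnic_app1

# P2V: create a solaris10 branded zone and install it from an archive
# taken on the original physical server.
zonecfg -z app1 "create -t SYSsolaris10; set zonepath=/zones/app1"
zoneadm -z app1 install -u -a /archives/app1.flar
zoneadm -z app1 boot
```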

I invite you to visit Bruno's blog for more details: Link Aggregations and VLAN Configurations for your consolidation (Solaris 11 and Solaris Zones)

I am awaiting his next entry explaining the details of the V2V and P2V operations, which he demonstrated to us live on his laptop through a Solaris 11 x86 VirtualBox image.

I hope to see you at the upcoming Solaris and SPARC events to share your feedback and experience with us.

The upcoming Paris events will take place on June 4th, for Datacenter Virtualization with a focus on storage and network, and on July 4th, for a special session on the new SPARC servers and their business impact.

mercredi avr. 10, 2013

IT Modernization: SPARC Servers Engineering Vice President in Paris

Avec le renouvèlement complet des serveurs SPARC annoncé il y a 2 semaines, Masood Heydari, vice-président de l'ingénierie SPARC sera à Paris le 18 Avril, afin de partager avec vous, les apports de ces nouveaux serveurs T5 et M5 sur le marché. Après l'intervention de Masood, Didier Vionnet, ACCOR vice-président du back-office, Bruno Philippe, président du groupe français des utilisateurs de Solaris, Renato Vista, CTO CAP Gemini Infrastructure Services, Harry Zarrouk, Directeur des Systèmes d'Oracle pour la France et moi-même, participeront à une table ronde sur les apports de ces innovations pour la modernisation des systèmes d'informations et les nouveaux besoins métiers des entreprises. Je vous invite à vous inscrire à cet évènement afin de venir échanger avec l'ensemble des intervenants.


With the complete renewal of the SPARC server line announced 2 weeks ago, Masood Heydari, Senior Vice President of SPARC Servers Engineering, will be in Paris on April 18th to share what the new SPARC T5 and M5 servers bring to the market. Following Masood's keynote, Didier Vionnet, ACCOR Vice-President of Back-office; Bruno Philippe, President of the French Solaris Users Group; Renato Vista, CTO, Capgemini Infrastructure Services; Harry Zarrouk, Director of Oracle Systems for France; and myself will hold a roundtable on the benefits those innovations bring to IT modernization and business needs. I invite you to register for this event and come exchange with all the speakers.



mardi mars 26, 2013

New SPARC Servers Launched today : Extreme Performance at exceptionally low cost

It would be very difficult to summarize in a short post all the details, and the customer and ISV results already available, from Oracle's investment, design, and ability to execute on a complete renewal of the SPARC server line, with not one but two processors launched: SPARC T5 and M5. It is somehow captured in the title of this entry, in Larry's own words: "extreme performance at exceptionally low cost". To give you a quick idea, we just announced 8,552,523 tpmC on a single T5-8 (the new 8-socket T5 mid-range server). On top of that comes "extreme scalability with extreme reliability": with the M5-32 server, we can scale up to 32 sockets and 32 TB of memory in a mission-critical system.

New way of designing systems

As John Fowler said: "this starts with design". Here at Oracle, we have a new way of designing. Historically, systems were designed by putting servers, storage, network, and OS together. At Oracle, we add the Database, Middleware, and Applications into the design. We think about what it takes for the coherency protocols and the interfaces, and design around those points... and more.
Today we introduce not one but two processors, with a whole family of servers built around them. Thanks to a common architecture, they are designed to work together. All of this, of course, runs Solaris. You can run Solaris 10, Solaris 11, and virtualize, with no break in binary compatibility.

Direct benefits for your applications... at the lowest risk... and the lowest cost

This is good for our customers and our ISVs, enabling them to run their applications unchanged on those new platforms with unmatched performance gains, at the lowest cost and the lowest risk, thanks to binary compatibility and the new server designs of the Oracle era. There were many customer examples on stage, so I will just pick two: SAS moving from an M9000 to an M5-32 with a 15x overall gain, and Sybase moving from an M5000 to a T5-2 with an 11x overall gain. These are, in my opinion, very important, as they reflect real applications and customer experiences, many of them in financial services, that have already jumped on these new systems (thanks to the beta program).

To get a better idea of what the new SPARC T5 and M5 will bring to your applications, be it Siebel, E-Business Suite, JD Edwards, Java, or SAP, have a look here: https://blogs.oracle.com/BestPerf/ for the 17 world records in performance and price.

lundi mars 11, 2013

Exa Showcase : customer testimonies

In my last blog entry, I shared with you some quick videos illustrating our strategy to simplify IT. To move from videos to reality, if you are in Paris on March 21st, I invite you to register (here) for an event where Oracle Engineered Systems / Exa* customers will share their results. You will have the opportunity to listen to, and ask questions of:

  • Elizabeth Rabet,VP IT Finance, Capgemini
  • Eric Minet, CTO, Lyreco
  • Stéphane Hamy, Responsable MCO SI, Cofely France

Whether you are running your ERP on SAP like Lyreco, have a big data warehouse to optimize like Capgemini, or need to modernize your FORMS & REPORTS applications like Cofely, I am sure you will get very interesting feedback.

mardi déc. 04, 2012

Understanding Oracle Strategy, Cloud and Engineered Systems

Sometimes short, self-explanatory videos are better than long talks... I wanted to share with you today three short videos explaining Oracle's strategy, our Cloud positioning, and what Engineered Systems bring to your IT. Enjoy...

Oracle Strategy....


… the Cloud...



and Oracle Engineered Systems...

About

Eric Bezille
