
Architecture and Innovation for Developers and Operations, to accelerate Value for Business


Architecture and Innovation

Why it is critical for Governments and Firms to invest in technologies to support economic development

Today's Oracle OpenWorld keynote by Mark Hurd was introduced by Ian Bremmer, CEO of Eurasia Group. He provided a geopolitical update on the world economy, explaining that China is currently investing the equivalent of seven times the Marshall Plan into technologies to become self-sufficient, something the US and Europe are not doing at anywhere near the same scale. Among those technologies, the ability to exploit and manage data at scale is a critical factor, which explains why China is investing all the way from semiconductors (the horsepower) up to Artificial Intelligence (the brain). And with data at the core of everything, cybersecurity is at the same level of priority, because the game has changed and cyber attacks are more powerful than ever. All of this clearly set the stage for why investment for the future should be in (virtual) technologies rather than (physical) "brick & mortar".

Following this introduction, Sherry Aaholm, CIO of Cummins, came on stage to illustrate the importance of technology even for a 100-year-old company. Cummins, even as a large manufacturer, cares about what is happening around the environment and regulations. It is not just about selling electric vehicles, but about addressing the whole ecosystem shift. This change needs to be powered by data captured directly from vehicles and from customer behavior. IT is thus becoming a critical asset even for industrial companies that want to leverage their data, and ERP systems need to be always-on and at the latest release. Sherry Aaholm sees the move to a cloud ERP powered by Artificial Intelligence features (see also here) as critical for Cummins' efficiency, freeing their people to focus on more valuable analysis of the data they capture.

This move to the cloud was also illustrated by Navindra Yadav, Founder of Tetration Analytics, who showed that it is not always as simple as it seems from a technical and economic standpoint, depending on your cloud provider's back-end infrastructure capabilities.
His first experience was not conclusive: due to Tetration's large I/O requirements, it led to a prohibitive infrastructure cost at scale, and also limited Tetration's go-to-market capabilities. Since moving to Oracle Cloud Infrastructure, it has been a game changer: unleashing the CPU power they need, up to 75% CPU usage versus previously only 10%, with the rest stuck waiting on I/O, leading to a x60 performance improvement. From a market standpoint, that means fewer resources to get the job done, a better price point to attack a larger market, and the ability to guarantee SLAs to their customers, as they also eliminated the "noisy neighbor" effect.

To close Mark Hurd's keynote, Thaddeus Arroyo, CEO of AT&T Business, explained how, with the digital transformation, we are moving from a world of building things to one of consuming things and adding value on top. AT&T Business moved 80% of their applications into the cloud. Now they are tackling their critical databases at very large scale, pushing the envelope together with Oracle to enhance autonomous capabilities for speed and agility. More and more, the product is software. The network is actually virtual today, and they are on a journey to move the entire network to the cloud. Networking is now all about latency: compute delivered over the network with near-zero latency! The friction of building private-to-public cloud connections will be gone, accelerating the transformation with cloud services accessible at your fingertips with near-zero latency.

What an introduction for the day! Now, to get that level of capacity you need the right technologies at play, with strong foundation layers for your cloud to deliver performance, and Artificial Intelligence features for the highest efficiency. This is where what we develop with Autonomous features powered by Exadata makes the difference...
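To put the Tetration numbers in perspective, here is a back-of-envelope sketch (my own illustration, not from the keynote): with a fixed CPU cost per unit of work, per-node throughput scales roughly with the CPU utilization you actually achieve, so moving from 10% busy to 75% busy already accounts for a factor of about x7.5; the reported x60 would include further gains from the back-end infrastructure itself.

```java
public class ThroughputSketch {
    // Back-of-envelope model: with a fixed CPU cost per unit of work,
    // per-node throughput scales with the CPU utilization actually achieved.
    static double utilizationFactor(double oldUtilization, double newUtilization) {
        return newUtilization / oldUtilization;
    }

    public static void main(String[] args) {
        // 10% CPU busy (the rest stuck waiting on I/O) vs. 75% CPU busy
        double factor = utilizationFactor(0.10, 0.75);
        System.out.printf("CPU-utilization factor alone: about x%.1f%n", factor);
    }
}
```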


Architecture and Innovation

Oracle OpenWorld 2018, Cloud Generation 2: more secure, more performant, and autonomous

Oracle OpenWorld and Oracle Code One started today. Again, fully packed. And again, a very busy day, with a lot of customer meetings around Exadata (on-premises or Cloud at Customer), Recovery Appliance, and Big Data architecture evolution. Before diving into what Oracle is doing with its new Generation 2 Cloud, I wanted to recognize our customer Galeries Lafayette (photo), who was nominated for the Innovation Award for Data Management. For those who want to learn more about what they did (and why) in a very short time, moving to Cloud at Customer (an extension of Oracle Cloud inside their datacenter), you can still go to the customer panel "Unleash the Power of your Data with Exadata Cloud at Customer", held on Tuesday at 5:45pm in Moscone South, Room 214. Unfortunately, this runs at the same time as the session I will deliver with Michael Nowak on Exadata Maximum Availability Architecture, where I will illustrate how we turn what Oracle development is doing into real projects for the highest availability (and performance).

After this quick introduction, let's focus on the key announcement of Larry Ellison's keynote: Oracle Gen 2 Cloud. Think autonomous! To start with, Oracle Cloud needs to be autonomous from a security standpoint. As Larry said, we need to move from "our people against their (hacker) robots" to "our robots against their robots", leveraging Artificial Intelligence and Machine Learning to identify and kill threats. But we don't stop there. We are adding a barrier of defense with a dedicated network of cloud control computers, not co-mingled with customer systems and workloads. At the same time, we continue to extend our cloud datacenter footprint, all the way into your own datacenter thanks to our Cloud at Customer solution. And even this extension of Oracle Public Cloud inside your datacenter will benefit from the Oracle Cloud Gen 2 security enhancements.
Cloud at Customer will also benefit from the Oracle Autonomous Database capabilities, which keep being enriched as well. To close the day, I saw Oracle Cloud Infrastructure at play, delivering real value to customers like Cisco or Gap, with real performance and cost advantages compared to other cloud solutions. So give it a try, either in the Public Cloud or at Customer, depending on your constraints.


Architecture and Innovation

What happened at Oracle Code Paris

After the Oracle Code Paris event held last week, I wanted to take some time to reflect on what I saw, learned, and discussed with developers and excellent speakers. Many interesting topics were covered with live use cases, demos, and hands-on labs: from serverless orchestration, blockchain and smart contracts, chatbots powered by Machine Learning, the Java revolution for the cloud era, DevOps Infrastructure-as-Code (including Terraform), to full container pipeline orchestration all the way down to the database… and more.

The opening keynote was given by Lonneke Dikmans from eProseed, around process orchestration, comparing the impact of monolithic, microservices, and serverless designs on implementation. First, the more you break things into smaller pieces, the more logic needs to be implemented on the client side. Second, from an orchestration standpoint, it can become very complex if you rely only on a choreography approach. With choreography, process flows are "embedded" in the code of multiple applications, making them hard to adapt to change, and somewhat going against the microservices goal of being independent and under one team's control. She illustrated her point with the example of Netflix, which had to write its own orchestrator (Conductor), which looks very similar to BPEL, to overcome this issue when managing many microservices. And moving to even finer granularity with functions (serverless) can make things even worse. That's where the Fn Flow implementation on top of the open-source Fn project led by Oracle makes things easier. If you want to try it, just download the stack from fnproject.io, like some developers working on very large-scale device orchestration did directly during the conference (they will recognize themselves).

The second keynote was delivered by James Allerton-Austin from Oracle Cloud Platform.
He demonstrated how emerging technologies, with chatbots, blockchain, and functions 😊, can already be applied to real-life use cases to build business applications (very) quickly, in his case to sell cars. The chatbot is used to interact with sellers and buyers. It leverages machine learning to recognize cars, propose market prices to the seller and availability inside the dealer network to the buyer, and validate ownership of the car, all through function invocations. It also leverages blockchain to secure the transaction between seller, buyer, and dealer network.

Java was also well represented, with Bernard Traversat, VP of Software Development for the Java Platform. Bernard explained the transformation under way in Java to embrace the cloud revolution, especially a more granular and more agile release mechanism, including many projects covering cloud areas of focus: security, productivity and compatibility, density (shrinking the resources required), startup time (Ahead-of-Time compiler…), real-time predictability, and serviceability (managing deployment at scale).

In the same line of cloud deployment, I followed Gregory Guillou's talk on deploying the Nomadvantage application on Oracle Cloud Infrastructure (OCI) thanks to Terraform Infrastructure-as-Code providers. Nomadvantage's goal is to leverage Oracle Cloud to deliver its application in a SaaS model. It was very instructive to see how quickly he could set up the proper infrastructure on Oracle Cloud thanks to Terraform. You can have a look at Gregory's GitHub for the details, and follow what he is also doing there around the Terraform API and a Service Management chatbot. Most importantly, the list of Terraform providers for Oracle Cloud keeps growing, and you can find a lot here: https://github.com/oracle/terraform-provider-oci - spanning from a simple load balancer to full Database, WebLogic, or application infrastructure deployments, whether IaaS or PaaS.
And the nice thing is, as we offer the option to deploy Cloud at Customer on your own premises, those providers will work there as well.

Finally, I could not finish without the two talks from Christophe Pruvost, who did a deep dive on DevOps managed services capabilities for native container applications. This ranges from managed Kubernetes to a complete CI/CD pipeline that can deploy anywhere (in the cloud or on-premises). First, Christophe explained Oracle's involvement and contribution in open source, with two principles: no fork and no lock-in. Second, he highlighted some of the key features of managed Kubernetes on OCI: leveraging OCI Availability Domains to deploy a highly available Kubernetes cluster and persistent storage, all delivered for you out of the box. He finished with a demonstration of a full CI/CD deployment on top.

But… what about your data? This is often the sensitive topic when I talk with developers about DevOps CI/CD deployment. While you have tools and techniques to deal with the application layer (canary, blue/green, …), when it comes to the database, you often stop there and just ask for the connection. What if you were able to push DevOps all the way to your database schema updates? This is what Christophe also presented. If your code can be versioned with Git, then thanks to the Flyway open-source tool you can also version your database schema. Add Oracle Edition-Based Redefinition on top, and you are now able to update your database schema with NO INTERRUPTION. If you want to see all of that in action and more, I invite you to check Christophe Pruvost's tutorials on YouTube.

Thanks again to all the speakers of Oracle Code Paris, with a very special thanks to Sora, who not only organized it but also set up a special round table on "Women in Tech" with Salwa Toko, Francis Nappez, and Dr. Aurélie Jean. Congratulations!
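The core idea behind versioning your database schema can be sketched in a few lines. This is a minimal in-memory illustration of the Flyway-style approach (my own sketch, not Flyway's actual API; class and method names are hypothetical): each migration carries a version, pending migrations are applied exactly once in version order, and applied versions are recorded, just as Flyway records them in its schema history table.

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal sketch of versioned schema migrations, Flyway-style (hypothetical names).
// A Map<String, String> stands in for the real database schema.
public class MigrationRunner {
    private final SortedMap<Integer, Consumer<Map<String, String>>> migrations = new TreeMap<>();
    private final Set<Integer> applied = new HashSet<>();

    public void register(int version, Consumer<Map<String, String>> migration) {
        migrations.put(version, migration);
    }

    // Apply every not-yet-applied migration in version order; return what ran.
    public List<Integer> migrate(Map<String, String> schema) {
        List<Integer> ran = new ArrayList<>();
        for (Map.Entry<Integer, Consumer<Map<String, String>>> e : migrations.entrySet()) {
            if (applied.add(e.getKey())) {   // skip versions already applied
                e.getValue().accept(schema);
                ran.add(e.getKey());
            }
        }
        return ran;
    }

    public static void main(String[] args) {
        MigrationRunner runner = new MigrationRunner();
        Map<String, String> schema = new HashMap<>();
        runner.register(1, s -> s.put("customer", "id, name"));
        runner.register(2, s -> s.put("customer", "id, name, email"));
        System.out.println(runner.migrate(schema));  // both versions run
        System.out.println(runner.migrate(schema));  // nothing pending on a second run
    }
}
```

In a real deployment, each migration would be a SQL script, and combining this with Edition-Based Redefinition is what lets the old and new schema versions coexist while the application switches over.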
If you want to see even more innovation, feedback from the field, and great speakers, already plan in your agenda to be in San Francisco from October 22nd to 25th for Oracle OpenWorld and Oracle Code One. If you got this far, I hope you enjoyed this post and learned a few things by reading it.


Architecture and Innovation

#OOW17 - First Journey to the Cloud, Data Lake, Smart Meters and Industrie 4.0

"First Journey to the Cloud" was all about private cloud and IT simplification: achieving project performance goals and SLAs, and getting ready for the (public) cloud. I had the opportunity to see two cloud providers -Revera, based in New Zealand, and Secure-24, based in the US- who explained why they decided to choose Oracle Private Cloud Appliance as the building block of their cloud. Both recognized the time-to-market (very fast to deploy), the performance, and the strong asset management capabilities (HW and SW) (check here). The other important piece of good news in the journey to the cloud is the ability we provide to easily move VMs from Oracle Private Cloud Appliance to Oracle Public Cloud, as we are the only cloud provider to offer the same technologies both on-premises and in the cloud, which is the key point that makes this capability possible (check here).

I also met today with Gaurav Singh, from Energy Australia, to exchange on data lake and data analytics architectures and their benefits for utilities. Again, Oracle Engineered Systems -Exadata and Big Data Appliance- along with tightly coupled software integration and unique optimizations (Big Data SQL, Oracle Data Integration, Active Data Guard) have been instrumental in achieving his project's goals with a very strong ROI, at the same time making Energy Australia ready for the cloud, thanks to the simplification achieved and the fully compatible stack he got out of this transformation.

Staying with utilities, I introduced the GRDF Smart Meters session, where François Vetter, Deputy Head of Infrastructures and Operations, and Mikaël Petit, Lead Architect for the Gazpar smart meters project, presented the back-end system supporting the deployment of 11 million smart meters, based on SuperCluster and Oracle Recovery Appliance (check here, here and here).
It was interesting to see again the value of a fully integrated stack to achieve the best performance required by such a project (check here), and also the governance that GRDF put in place to move from a traditional siloed approach to integrated-stack operations of Oracle Engineered Systems, Exadata and SuperCluster (check here and here).

To close the day, I went to the Oracle OpenWorld Industrie 4.0 booth, where Eric Prevost managed to put a live smart factory on the ground floor, with full asset management and predictive analytics (check here). I also met with Gemalto, who are securing the whole chain from IoT devices up to the cloud (check here), as security is a key piece for success that can't be an afterthought. And last but not least, I got a unique OpenWorld immersive experience inside a warehouse thanks to Perfect Industrie (check here). Again, a very instructive day, bringing you real-life experiences and innovative ideas in just a few hours: incredibly efficient!


Architecture and Innovation

#OOW17 - From Development to Analytic (Secured) Augmented Business into the Cloud

It is always hard to find the right title to summarize the tagline of a day at Oracle OpenWorld, with such a large portfolio of innovations supported by $5B of R&D investment. So let's unpack what I summarized as "From development to analytic (secured) augmented Business into the Cloud".

Today started with Dave Donatelli's keynote describing the six journeys that Oracle provides to lead you to the cloud. As Mark Hurd said yesterday, it is "not a question of if, but when". And not everybody is starting from the same place. So we are building a portfolio of solutions to support your journey to the cloud from wherever you start: whether a well-established company with large and complex existing assets (first journey) or a brand-new startup (sixth journey). All backed by three deployment models: on your premises; cloud services that we bring to you, inside your own datacenter; or full public cloud. This is unique on the market and gives you the most flexibility you can find, with NO LOCK-IN.

And to best support you and your business in our cloud, major announcements were made today by Thomas Kurian on all layers of the stack, from IaaS to PaaS to SaaS. First, we are delivering our cloud datacenter as code. Accessible through APIs, you can now get the latest and most powerful Intel Skylake CPUs, NVIDIA GPUs, blazing-fast Oracle storage, 25Gb Ethernet networking, and the most scalable DNS to link you to the cloud. On the PaaS side, built on top of this very powerful IaaS, Oracle's goal is to provide fully autonomous services, in the line of the Oracle Autonomous Database announced by Larry Ellison on Sunday. And this is a very strong and important point: as soon as those services are autonomous, they can easily be leveraged and instantiated automatically, without any human intervention, making new services (very) simple to compose. The demos really speak for themselves (the replay of Thomas Kurian's keynote is available here).
To enable this service composition and support full DevOps, microservices, and now even serverless (functions) development, Oracle Cloud supports five development models, adding today fully automated Docker with a managed Kubernetes cluster, which can benefit from a fully managed Apache Kafka stream infrastructure through Oracle Event Hub to enable data ingestion at scale. That brings analytics into the game. Now that we provide a PaaS that allows easy and fast development, plus data ingestion, Oracle announced today a new Analytics-as-a-Service PaaS on any type of data -including video- based on Machine Learning (at this point, the technically savvy reader will understand the interest of the latest NVIDIA GPU additions to Oracle Cloud IaaS). The end goal is to provide better services and next-best actions from a business or customer-experience standpoint. Which is why -based on those PaaS services- Thomas Kurian also announced today embedded analytics directly available inside Oracle SaaS applications: what I call "augmented" applications.

Last but not least, Larry Ellison in his keynote explained how we are applying Machine Learning and analytics to provide highly automated cyber defense, through Oracle Management and Security Cloud Services, encompassing both on-premises and cloud environments. Because without data you can do nothing, and that's why security is a key topic up to the CEO level now, as Mark Hurd said yesterday: you need to protect your digital assets. With that, I hope you will be able to leverage all of these innovations to create what you need for your business and customers.


Architecture and Innovation

#OOW17 - Cloud is driving IT transformation and Oracle innovation

Today was again a fully packed day at Oracle OpenWorld, starting with Mark Hurd providing Oracle's CEO vision of the global market and how enterprises are forced to re-invent themselves or face disintermediation, due to newcomers leveraging technology to their benefit: doing the same thing, but differently. Enterprises have to simplify and stop spending 80% of their IT budget just keeping the lights on. That's where the cloud model kicks in. At Oracle, we could not keep up if we ran Oracle Cloud the way traditional IT does, with multiple heterogeneous layers and vendors to integrate, patch, and upgrade. The fact that we own the whole stack -from disks to servers to operating systems, and even higher up with the database and application stacks- makes a huge difference: we have one thing to manage, like Exadata to run the Oracle Database. That's why the cloud is driving IT transformation and our innovation (full video replay here).

This was clearly illustrated during the Oracle Systems general session, which re-emphasized the benefit of tight integration of hardware and software, leading to unique optimizations, including one key aspect already touched on in Larry Ellison's opening keynote: security (check here and here)... a topic that is now also on the CEO agenda. This tight integration leads us to unique innovation capabilities, helping us to scale -and you as well- as our technology is available in three deployment models. It was also demonstrated today with the launch of Exadata X7, applying in-memory technology in storage to accelerate not analytics but OLTP workloads: something pretty counter-intuitive for pure hardware storage vendors, but possible as soon as you combine hardware and software all the way up to the database layer. That makes a huge difference. A huge difference which is leveraged by our customers, like GRDF.
GRDF is leveraging Oracle Engineered Systems (SuperCluster) to deploy the Gazpar smart metering project of 11 million meters in France. Helen Lagoutte, with Mikaël Petit from GRDF, explained the full project scope today, where I learned that smart meters are not that smart! In fact, you have to put less intelligence into the meter for the meter to last 20 years. So now you understand where the brain of the 11 million meters is running, and why you need a reliable infrastructure. I also had the opportunity today to follow two sessions of the JavaOne / Oracle Code conference running inside OpenWorld for developers, where I got very interesting hints on Machine Learning and choosing the right tool for the job, and on blockchain: a special thanks to Lonneke Dikmans from eProseed.


Architecture and Innovation

#OOW17 - Welcome to a more secure world thanks to the fully automated, self-driving database

For those who follow me on Twitter @ericbezille, you should already have seen the kind of seasoned experts that Oracle OpenWorld brings to you. I would have loved to clone myself to follow some sessions in parallel. That's why it is often a good idea to come to OpenWorld not only prepared, but also not alone, to get the most out of it. So before going into the details of today's Larry Ellison keynote, let me share with you some of the interesting topics I saw today.

This year again, the cloud is everywhere, be it private, public, or both. For the City of Oakland, it was re-implementing their E-Business Suite on a private cloud, leveraging Oracle Engineered Systems, Exalogic and Exadata. Having the right scalable and reliable architecture not only helped them achieve impressive performance numbers -moving their payroll process from 11 hours to 30 minutes- and 99.99% uptime, it allowed them to focus on a key part of such a big project: change management, instead of the all-too-usual tweaking to try to make it run! (more details here and here). End result: the project is on time and on budget! The dream of many CIOs I know.

Today, if you have not heard about Docker, you must not be working in IT... But what does Docker mean for DBAs and the Oracle Database in general? That is what Adeesh Fulay wanted to bring to the OpenWorld DBA audience (room full, of course). In very short summary: there are still some miles to go to fully address Oracle Database production requirements with Docker -LXC does it better, in his experience- but for standalone test/dev environments, Docker is fine, and you will even find many Docker images of Oracle products (beyond the Database) available. One additional key point to keep in mind with these container technologies is the management sprawl you will have to take care of. This is where good orchestration tools, like Kubernetes, are more than necessary: they are mandatory.
Of course, at the end of the day, all those technologies are not an end in themselves; the ultimate purpose is to be able to leverage data for business transformation, or even for a greater goal: life transformation. An interesting topic addressed by Rene Kuipers in his Data Chain session, or "how to use the right tool for the job". Rene brought us the case of DNA pattern matching to identify diseases, where a single sample accounts for 2TB of data... A use case that the current trend would deem eligible for Big Data Hadoop systems, but that Rene put on an Oracle Database on Exadata, because the Oracle Database was the right tool in terms of data model for DNA base pairs, and because Oracle Exadata is a unique technology to run the Oracle Database at its best. End result: a x700 performance improvement and 85% less data, thanks to unique Exadata compression capabilities.

To close this full picture going from cloud to technologies to data, my agenda of the day would not have been complete without addressing data security. Which I did by joining Wells Fargo Bank's session on their implementation of Oracle Database Vault... on Exadata, the platform on which they are now consolidating all their Oracle databases. It at least addressed one of the questions I get about security and encryption: performance! Maybe also one of the reasons in Wells Fargo Bank's KPIs for choosing Exadata.

Data, and the security, scalability, flexibility, and expandability it requires, were at the heart of Doug Fisher's Intel keynote and the focus of Intel's current chip, SSD, and memory development, which we (Oracle) leverage in Oracle Engineered Systems and in our cloud to achieve massive scale, securely. "Data is the unseen force of business transformation", said Doug Fisher.
He illustrated this with autonomous driving, virtual reality, and financial services examples, all relying today on huge amounts of data to which Machine Learning is applied to enable these new capabilities, even creating new economies, like the passenger business (of autonomous driving), estimated at $7B.

Data and Machine Learning were also at the heart of Larry Ellison's OpenWorld opening keynote, sort of wrapping up all the themes you have read in this entry so far into Oracle Autonomous Database 18c and highly automated cyber security, running on Exadata (of course)...

The goal is to provide the safest place to store your data. For that to be possible, to prevent data theft, you need to avoid (remove) human intervention, so automation becomes essential. You need to automatically detect threats when they first occur and direct the database to immediately remediate. It might mean that we have to patch the database itself... with no scheduled downtime... while running. Human processes don't work: we have to automate our cyber defenses without having to take your computers off-line. Most data thefts (99.9+%) exploited breaches for which a patch had been available for at least three months, if not a year! So when Larry Ellison said "human process doesn't work", the facts speak for themselves.

But to create an autonomous database, you need a full set of technologies to keep the database running under any circumstances, including human errors. Technologies like clustering (Oracle RAC) and transaction replay (Flashback Query)... all integrated together (Exadata)... and automated to act when something happens. This is where Machine Learning is at the heart. This is revolutionary. This is what makes autonomous self-driving cars and computer vision recognition possible. And now, a fully automated Oracle Database 18c and partially automated cyber security. Oracle Database 18c constantly adapts and tunes the system without human intervention.
It tunes itself, patches itself, upgrades itself. No more human error, producing 99.995% availability, or less than 30 minutes of downtime a year. Self-tuning also provides very efficient use of computer resources: truly elastic... in the cloud or in your datacenter. The roll-out of Oracle Autonomous Database will start by the end of 2017 and continue during 2018. The goal is to provide all of these capabilities with no equivalent on the market in terms of capabilities and price: the most secure, the most scalable, and the most affordable!
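The 99.995% figure maps directly onto those "less than 30 minutes a year". Here is a quick arithmetic check (my own sketch, not from the keynote):

```java
public class AvailabilitySketch {
    // Downtime per year implied by an availability target,
    // assuming a 365-day year measured in minutes.
    static double downtimeMinutesPerYear(double availability) {
        return (1.0 - availability) * 365 * 24 * 60;
    }

    public static void main(String[] args) {
        System.out.printf("99.995%% availability -> about %.1f minutes of downtime per year%n",
                downtimeMinutesPerYear(0.99995));
    }
}
```

The result, roughly 26 minutes, is indeed under the 30-minutes-a-year figure quoted in the keynote.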


Architectures and Technologies

#OOW16 : Oracle Foundation Layers for the Cloud

As you already got from my previous posts on Oracle OpenWorld 2016, Oracle is very serious about infrastructure, and infrastructure is at the heart of this year's edition. As Don Johnson said in his session: "Infrastructure is the Foundation for Enterprise Cloud Services". Without a strong foundation, you can't provide and operate your services at scale, with the right performance, availability, and cost. And the last OpenWorld 2016 keynote, by Dave Donatelli, John Fowler, and Juan Loaiza, was all about what we are doing to build this infrastructure.

Horizontal Integration & Compromise Are No Longer the Only Way

Legacy implementation of disparate components has always been the norm in IT. What was often called "best-of-breed" is now the cause of a damaged IT brand. This legacy implementation approach created silos requiring specific integration between them, leading to an absolutely non-agile infrastructure and very long IT delivery cycles. Something the business can't accept today: if you can't deliver the right infrastructure on time, with the right SLA (performance, cost, security, and availability), you cannot deliver your services. And yes, SLAs are as important as "on time". With the cloud "self-service" approach, and now the DevOps continuous integration and continuous delivery that is surfacing in the industry, we tend to focus on the "on time" and forget a very important part: the SLA. SLAs which are not only about performance and availability, but need to encompass cost and security (at minimum). This is what makes the difference between a cloud and an Enterprise Cloud.

Modern Strategy: Building Transformational Technologies

The modern strategy is to radically co-engineer processor, storage, networking, virtualization, and operating system into a complete infrastructure system, with the fastest and most cost-effective hardware and unique software optimizations across the entire system.
The goal is to be able to make transformational breakthroughs, keep the innovation pace, and successfully deliver on the required SLAs, while managing the entire stack from a single interface to provide the agility to deliver "on time" with the best operational control plane, as illustrated by Christian Bilien, from Société Générale, during his session. This is what John Fowler and Juan Loaiza call: A Revolution in Computing and Data Management.

Of course, for those of you familiar with the Oracle Engineered Systems strategy, this is not new. What is important is that we keep improving to sustain ever-increasing requirements: performance, cost optimization, availability (the only open systems certified up to 99.999%), and security. As an example, last year we introduced the SPARC M7 with its unique Software in Silicon features, enabling faster analytics and extreme security, not only with hardware encryption but also with memory violation protection. And today a new member of the Exadata family was announced, introducing SPARC M7 database nodes running Linux: the Exadata SL (SPARC Linux). I won't go through all the testimonials and announcements made today; instead, I invite you to take the time to watch the replay to get the best out of the transformational technologies that we are engineering for the cloud.


Architectures and Technologies

#OOW16 : Adaptive Cloud ready for any workload and regulation

Thomas Kurian closed his keynote today with a clear vision of what Oracle wants to achieve with the Cloud: providing a seamless operational model to deliver, at the click of a button, everything from your Data Center infrastructure up to advanced Visual Analytics encompassing IoT. In short, answering any type of requirement, allowing you to run in full production in the Cloud easily. Of course, to do so, you not only need to provide the advanced Oracle services we already deliver through SaaS and PaaS, you also need to complement them with a powerful IaaS able to cover any type of workload found in any Data Center today. That's clearly what Larry Ellison wanted to demonstrate in today's keynote, with a clear focus on what is behind the Gen2 Data Center he announced on Sunday. In a few words: providing a production-ready IaaS able to run any workload at scale, at the right price. And this resonated with many customers and partners I met, as that was exactly the roadblock they faced when some of them tried to move production workloads into existing IaaS providers, or to run their Big Data implementations at scale at the right price. Sometimes this can lead to big problems: whatever you do at a Cloud provider, if for any reason (scalability, regulatory, ...) you have to come back to the ground, you may have to implement the solution completely differently, leading not only to additional cost but also to delays and risks. Because "what runs in the cloud, stays in the cloud" for all of them... but Oracle. At Oracle, we build an Adaptive Architecture: the same architecture that we provide in the Cloud, we can provide on-premise. This may sound irrelevant, but it is not... As a business, we can't anticipate how society and politics will evolve, nor the regulation associated with them.
But we can anticipate an Adaptive Architecture flexible enough to cope. Especially in today's world, where every single thing we do relies on Data, and where Data privacy and regulation are evolving very quickly, you need to be able to adapt. What if you suddenly want to enter a new market/country that requires the Data you are using to be stored locally in that country... and the Cloud provider you are relying on doesn't have a Datacenter there? With the choice of deployment models that we offer - the same architecture in our Cloud or on your premises - you are in a safe position to do whatever you need. There are also 2 very important points that we put in our Adaptive Architecture:

- Design for scale*, so you don't have to get rid of your solution at the moment you need it the most: when your business is starting to scale
- Open standards, so it is easy to use and integrate

*Update: Oracle Beats Amazon Web Services in Head-to-Head Cloud Database Comparison - Comparison of Database Cloud Services - Benchmark Testing Overview


Architectures and Technologies

#OOW16 : CEO Goals and Challenges meet the Cloud

Today Mark Hurd invited several CEOs and CIOs across countries and industries of all sizes, demonstrating that the move to the Cloud is happening everywhere, even for key Enterprise Applications like ERP. But first, he started with the facts of today's global market, a key driver of transformation for CEOs. CEOs can't accept the status quo if they want to stay in their position beyond 18 quarters. If their industry is not heavily involved in doing business with China, which accounts for nearly 40% of the world GDP's 2.5% annual growth, they can only grow by taking market share. And for that they need better efficiency. This is why they need to improve their customer knowledge, marketing, processes... But at the same time as they need to grow, they are cutting costs... and relying on old IT assets, where they spend 80% of their budget just to keep them running. That's where the Cloud meets the CEO agenda: providing the agility CEOs are looking for, saving the IT budget spent on keeping the lights on, and at the same time offering ready-to-use, pre-integrated services able to complement their strategy. That was one of the key messages from Orange's testimonial on moving its ERP to the Cloud while at the same time getting the benefit of access to better social, mobile and analytics functions, sustaining its transformation. In closing, Mark Hurd came back to his OOW15 predictions for 2025 to add 3 more, reflecting how the Cloud is changing IT spending:

- By 2025, 80% of IT budgets will be spent on Cloud, not on traditional IT systems
- By 2025, the number of corporate-owned data centers will drop by 80%
- If all this happens by 2025, CIOs will spend 80% of their IT budgets on innovation, not maintenance (that will be done by the Cloud providers)

Of course, Oracle is clearly investing in the Cloud, with a $5.2B R&D budget, more than 19 data centers across the globe, and an 82% growth rate in Cloud...


Architectures and Technologies

#OOW16 GO LIVE : More Cloud, More Secure, More Choice, Same Goals, Same Strategy

Big day today, again! It's Oracle Open World 2016 Go Live.

Data, Data, Data...

As every year, there is lots of content, not only in Larry's keynote but also through all the User Group sessions taking place on the Sunday. I had the opportunity to share the experience of Gwen Shapira, who works on Kafka, and to listen to Michaël Rainey from Rittman Mead, who validated the integration of Oracle GoldenGate with Kafka in the All Data Reference Architecture, interconnecting Big Data and RDBMS for better business operations. Then I followed General Electric's EPM shared-services implementation with Gary Crisci, going through their whole transformation, including the business outcomes. Very interesting to get evidence from customer experiences of the choice of running Oracle on Oracle Engineered Systems (in this case T5-8 Exalytics) to get the best out of IT spend, operational efficiency and business benefits (in no priority order). To finish, Paypal's John Karnagaraj gave his view of NoSQL, Big Data, Small Data, the Internet of Things, Machine Learning, and the "Next Big Thing", where extending the RDBMS with NoSQL is the key, but still challenging as it is based on a continuously evolving ecosystem. As you can see, Data efficiency to get the most for the Business and Operations is everywhere... and the Cloud is right around the corner. Which leads me to Intel's Diane Bryant and Larry Ellison's keynote, where the Cloud was at the heart of providing the most efficient possible implementation model.

Cloud, Cloud, Cloud... Yes, but Hybrid, managing more and more Data, and becoming more and more clever and secure...

Diane Bryant started Larry's keynote with a fact that Intel is seeing everywhere: the Cloud is a big disruptor, and it converges toward a Hybrid model. For many different reasons, both worlds need to co-exist, and both worlds require efficiency in the operation and delivery model.
Behind "efficiency", you should also read hyper-scale capability and automation: being able to compute more and more Data and extract meaning from it, more and more quickly, thanks to compute capability and machine learning. That's why Oracle and Intel are working closely together to get the most out of hardware and software integration, clearly illustrated by Exadata... and 100% in line with the delivery model, which provides the choice of on-premise or in the cloud.

More Cloud

In a few years, Oracle Cloud has achieved a tremendous extension of its services coverage. Oracle is the only Cloud provider encompassing all 3 layers (IaaS, PaaS, SaaS), and it keeps adding to them. Starting with the Gen2 of our Cloud's Datacenter infrastructure, which Larry Ellison announced today, with even more capacity and resiliency, both locally and across regions. Followed by new services in the Cloud, leveraging Machine Learning more and more: the more Data you have to manage, the more you need to automate... and thanks to Moore's law and Machine Learning, we have the power to do it. Machine Learning is everywhere: inside Oracle Cloud Management Service, inside our Applications for personalized web and mobile experiences, anti-money laundering, or banking customer analytics... Of course, as we are living in a Hybrid world, we need to provide the services which facilitate this co-existence. This is done through Oracle Integration Cloud Service, which provides a visual orchestration designer and a marketplace of pre-built integrations to better interconnect both worlds; and Data Integration Cloud Service, which enables Data movement to the Cloud from Applications, Databases and Big Data (SAP, Salesforce, Redshift, Hadoop, Spark, ...).

More Secure

Security is at the heart of Cloud success or failure. You don't want your Data compromised by any means, wherever it is... That's why we put security at every layer of the stack, and keep adding to it.
Today we are announcing a Security Monitoring Analytics Cloud Service, monitoring security events and user behavior and able to react automatically, along with the acquisition of Palerra, a leading Cloud Access Security Broker.

More Choice

We are also extending the scope and coverage of the Hybrid deployment model. We already announced Oracle Cloud Machine, which provides Oracle Cloud IaaS and PaaS capabilities behind your firewall. We are now extending this to Exadata and Big Data with, respectively, Oracle Exadata Cloud Machine and Oracle Big Data Cloud Machine, and adding an Exadata Express Cloud Service starting at $175 per month. Offering even more choice in your deployment and consumption model of services, while still keeping the same goals and strategy for our Cloud since the beginning, just adding to it.

Same Six Design Goals

- Cost: lowest acquisition price
- Reliability: things are allowed to break, but everything is redundant with no single point of failure
- Performance: we want the fastest Database because we want the fastest Applications. And performance is always related to cost.
- Standards: SQL, Hadoop, NoSQL, ... Java, Ruby, Node.js, ... Linux, Docker
- Compatibility: we are in the process of moving things to the cloud... but it will take at least 10 years. So it is important that the two worlds co-exist gracefully and that you can move between them, applications included. We need to be compatible.
- Security: always-on continuous defense against cyber attacks!

... and Same Strategy

- A big focus on delivering a complete suite of integrated applications, so customers don't have to integrate lots of separate products
- Preserve existing Application and Database investments, so customers can easily lift and shift what they have to the Cloud
- In infrastructure, compute is a commodity... so cost is important: we have to be lower cost and higher performance than the competition. Still, there is differentiation in infrastructure on security.
For more details, I invite you to review my tweets about this first day of OOW16.


Architectures and Technologies

DataLab in a Box: accelerate your Data Value to stay ahead

Since the 2011 McKinsey analysis that launched the "Big Data" phenomenon and my first blog entry on this trend (here), many things have evolved. It is no longer "hype", and depending on the angle you look at it from, the "Big Data" evolution brings new jobs like Chief Data Officer, new enrichment, and additional complexity for operations with the continuous evolution of the tools revolving around it. Either way, there is no way to stop it if you don't want to fall behind, or even become irrelevant. Digitalization of the world and Data are removing the previously known barriers to entry. If you are not convinced yet, have a look at the valuation of AirBnB compared to AccorHotels. This is now called "Uberization", and the word even made the French dictionary this year. All thanks to mastering digitalization technologies and Data to offer the right services at the right time at the right price: "Data is eating the world", and you had better be a master at it. Coming back to my previous example of AirBnB and AccorHotels: AccorHotels is not standing still, and is investing heavily to master its Data.

Getting the right tools for the job

Following McKinsey's 2011 prediction, mastering Data starts with Data Scientists... but even here, the Data Scientist job is facing evolution (or revolution). In Jennifer Lewis Priestley's article "Data Science: The Evolution or the Extinction of Statistics?", not only will you get an interesting view on this evolution, but I also invite you to look at the comment from Andrew Ekstrom, which I would summarize in one sentence: "get the right tools for the job, tools that can scale to crunch more and more data". And the tools have also evolved pretty fast, bringing better enrichment and capabilities, but also more complexity to keep up with. At the end of the day, what you would like is a DataLab in a box, ready to use, with the right capabilities.
Spending time building it, maintaining it, moving from Hadoop MapReduce to YARN to Spark (to name a few), and combining it all with NoSQL and SQL sources is complex (check Mark Rittman's Enkitec E4 Barcelona presentation "SQL and Data Integration Futures on Hadoop" for more details). Wouldn't it be nice if you could get all of this ready to use, so you could focus on the analysis to get the value out of ALL your Data? To take another analogy, just think if you had to build your car before using it: not really convenient. Unfortunately, that is often what most IT departments tend to do.

DataLab in a Box

Applying Converged Infrastructure to Big Data, in what we call a Big Data Appliance, results in the ability to provide you with a car ready to drive: encompassing not only a scalable Hadoop platform combined with NoSQL, but also SQL connectors, with a nice, ready-to-use visual exploration tool (Big Data Discovery) to put the value of your Data in the hands of Data Scientists. All in all, a DataLab in a Box, available both in the Cloud and on your premises. Many of our customers are leveraging it with success (I invite you to have a look here for references and use cases). That's why Oracle was named a leader in The Forrester Wave™ for Big Data Hadoop Optimized Systems, Q2 2016. Now that you know how to get an operational DataLab (in a Box), where do you go from here?

The first steps to get to the Value

The 2 tips to keep in mind from the many customers who have already found value in Big Data:

- Start with the Data that you already have, by bringing it into your DataLab
- Ask the right "SMART" question, the top burning issue for your business line to be solved

With that, you should be in the right place to accelerate your Data value and stay ahead.

To go further:

- Data Science for Business - What You Need to Know about Data Mining and Data-Analytic Thinking - by Foster Provost, Tom Fawcett
- How to Measure Anything: Finding the Value of Intangibles in Business - by Douglas W. Hubbard
- Jennifer Lewis Priestley's article "Data Science: The Evolution or the Extinction of Statistics?" and comments
- Mark Rittman - Enkitec E4 Barcelona: SQL and Data Integration Futures on Hadoop
- The Forrester Wave™ for Big Data Hadoop Optimized Systems, Q2 2016
- Big Data Appliance
- Big Data Discovery
- Big Data SQL
- References and use cases
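The cross-source analysis a DataLab enables - joining relational (SQL) data with a key-value ("NoSQL-style") store - can be illustrated with a toy example. To be clear, this is only a conceptual sketch, not Oracle Big Data SQL or the Big Data Appliance itself: the `orders` table, the `profiles` store, and all figures are invented for illustration.

```python
import sqlite3

# Conceptual sketch: aggregate SQL-side facts by an attribute that only
# exists in a key-value store, mimicking a DataLab query spanning both worlds.

def load_orders(conn):
    """Populate the SQL side: an orders table in an in-memory database."""
    conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, 120.0), (2, 75.5), (1, 30.0)])

def spend_by_segment(conn, profiles):
    """Total order amounts per customer segment, where the segment lives
    only in the key-value 'profiles' store, not in the SQL schema."""
    totals = {}
    for customer_id, amount in conn.execute(
            "SELECT customer_id, SUM(amount) FROM orders "
            "GROUP BY customer_id ORDER BY customer_id"):
        segment = profiles.get(customer_id, {}).get("segment", "unknown")
        totals[segment] = totals.get(segment, 0.0) + amount
    return totals

conn = sqlite3.connect(":memory:")
load_orders(conn)
profiles = {1: {"segment": "retail"}, 2: {"segment": "pro"}}  # "NoSQL" side
print(spend_by_segment(conn, profiles))  # {'retail': 150.0, 'pro': 75.5}
```

In a real DataLab the join would be pushed down by the platform (for example via SQL connectors over Hadoop and NoSQL) instead of being stitched together by hand; the point is only that the value comes from querying both worlds in one step.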


Architectures and Technologies

#OOW15 Oracle: your Infrastructure Provider?

As explained by Dave Donatelli during today's keynote: Cloud is changing everything, especially disrupting the business model of infrastructure providers. To succeed today, you have to have a Cloud strategy and a Cloud to offer your customers, as it has several benefits:

- Offering customers the right solution for the job: either in the Cloud or on-premise
- Learning and improving your development at every level, thanks to Cloud operations that need to work at a very large scale
- Maximizing your R&D investment, as you are not only building for your customers but also for your Cloud

And this is exactly Oracle's strategy. Here, we are going beyond pure performance gains; we are looking at efficiency. Any improvement made not only benefits customers, it also benefits us by optimizing our Cloud; that's why we want to be the best. As stated by John Fowler: the better we make the platform, the less we have to build multimillion-dollar Datacenters linked to their associated hydro-electric plants! That's why we are so serious about it. So, why Oracle as your infrastructure provider? Because we need everything to build our Cloud efficiently, and we are engineering everything toward the new Cloud architecture:

- Archive: our biggest customers are the big Clouds
- Backup: from tapes to disks with ZFS, up to our unique Zero Data Loss Recovery Appliance for even greater efficiency
- Storage: the market is moving to all-flash, but it needs a new architecture to match the change, and we have this architecture
- Networking: to get the best efficiency, the network is a key part; here again we have the technology
- Compute: we cover everything from the just-announced M7 servers and x86 up to Engineered Systems
- OS: we are developing our own Oracle Linux, Solaris and Oracle VM
- Integration: tying all those things together for maximum efficiency, because we engineered the Database, the Middleware, and the whole Suite of Applications... we can co-engineer: we are unique
- Security: we have it across the stack, especially with the new M7 servers announcement and its unique Security in Silicon for a fully encrypted Datacenter
- Management: one way to navigate between public and private
- Support: one phone call for support

Having ownership of all the components allows us to put the right function/feature at the right place, allowing for maximum efficiency. This efficiency is adopted even at very large scale, as demonstrated by several customers during many Oracle Open World sessions, like the Customer Panel on Big Data and Data Warehouses. For example, to effectively manage and run the LHC with reduced IT involvement, CERN replaced a do-it-yourself approach with a full Oracle Engineered Systems stack for its Big Data implementation. As was also illustrated during Neil Mendelson's Big Data and Analytics Strategy session: we are using the same technologies that we deliver to you on-premise and in our Cloud, and we keep improving them for better efficiency at every level of the stack, with complete flexibility on where you want to operate, back and forth, with a 100% hybrid Cloud model.


Architectures and Technologies

#OOW15 Complete Integrated Secured Cloud Suite

Feeling the experience of the Suite was clearly the main takeaway of Thomas Kurian's morning keynote. Thomas took us on the journey of a CTO facing the increasingly common IT challenges of answering the Digital Business Transformation: "Everything Now!". Leveraging the full capacity of the Oracle Public Cloud, now spanning from IaaS to SaaS, we started by deploying into the Cloud an "elastic" application (with built-in up- and down-scalability configuration, to pay for what you use when you need it), leveraging the IaaS platform to deploy an open-source-based application front-end on Node.js, and Exadata as a Service to sustain the Oracle Database back-end. Then providing a 360° view of the customer by connecting the sales force, directly on their mobiles, to every piece of the Suite from CRM to ERP to Supply Chain, even optimizing the sales force's visits: all integrated from a single Cloud provider. And last, giving the end-user the keys to the best decisions and optimizations through the Big Data and Data Visualization services (check http://cloud.oracle.com). All of this experience is extremely secure, as explained in great detail by Larry Ellison in his afternoon keynote. With everything becoming digital, we are at war against cyber attacks. Every day, everywhere, data are stolen, meaning that our current levels of defense are not enough today. Until now, even though we could provide you the most comprehensive security features spanning the whole Oracle stack, security was often either an afterthought or a compromise to make, due to the performance impact you had to pay by turning it on. But with the new M7 servers launched yesterday, this is coming to an end. We can now provide "always-on security" by default thanks to the advanced Security in Silicon features providing:

- real-time memory intrusion detection
- high-speed encryption with near-zero overhead

And with just a few of those M7 servers, we can and do protect an entire Cloud.
The M7 can detect a potential attack in real time, and when an attack is detected we can now patch the entire Cloud, even live, with features like Ksplice. And of course we are also deploying everything in our Cloud secured by default, with Transparent Data Encryption in the Oracle Database, including a Key Management System ensuring that you keep your encryption keys with you (on your premises), thereby blocking anyone but you from seeing your data in the clear in our Cloud. We also provide Data Masking and subsetting capabilities, so you can benefit from moving, for example, your test/dev to the Cloud with anonymized Data. Adding even compliance reporting with Audit Vault, up to Database Firewall. As you can imagine by now, we are turning security on by default at every layer of the stack. In closing his keynote, Larry announced a new Engineered System supporting the same Oracle Public Cloud IaaS & PaaS on-premise, 100% compatible with Oracle Cloud. So for those who want to replicate our secured Oracle Public Cloud IaaS & PaaS, you will be able to run exactly the same on your premises...
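The masking step behind moving test/dev to the cloud on anonymized data can be sketched in a few lines. This is emphatically not Oracle's Data Masking pack, just a minimal illustration of the idea: the column names and salt below are hypothetical, and real masking tools add format preservation, shuffling and subsetting on top.

```python
import hashlib

# Illustrative sketch only: a deterministic masking pass over tabular rows
# before copying production data to a test/dev environment.
SALT = b"per-project-secret"  # hypothetical; kept out of the cloud copy

def mask_value(value: str) -> str:
    """Replace a sensitive value with a stable pseudonym.

    Hashing with a private salt preserves referential integrity (the same
    input always maps to the same token) without exposing the original."""
    digest = hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()
    return "anon_" + digest[:12]

def mask_rows(rows, sensitive_columns):
    """Yield copies of each row dict with the sensitive columns pseudonymized."""
    for row in rows:
        masked = dict(row)
        for col in sensitive_columns:
            if col in masked:
                masked[col] = mask_value(str(masked[col]))
        yield masked
```

Because the mapping is deterministic, joins across masked tables still work in test/dev, which is the property that makes anonymized copies usable at all.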


Architectures and Technologies

#OOW15 Enabling Efficiency and Security at Scale with Software in Silicon

Besides the Cloud, the other big theme of the day was the launch of the M7 servers and SuperCluster. To get a good view of why we are all so excited about it, I invite you to read the following article from Chuck Hollis: "The Amazing Oracle M7". As reflected in Mark Hurd's keynote, even if it is about Cloud, it is also about the efficiency, scalability and security that power both the Cloud and the radical innovation that customers like GE have to undertake to differentiate themselves. And as Larry Ellison stated in his Sunday opening keynote, the deeper and lower you manage to go, the greater the efficiency. That is exactly what we have done with "The Amazing Oracle M7". We started this journey several years ago, and we are delivering it to you today. Yes, you can order it now and have it shipped to you by Christmas, or even before. The nice thing is that we are not the only ones excited about it; ISVs and customers are too... as I was able to see in several meetings I had today at Oracle Open World (if they read this blog, they will recognize themselves). Not only did we focus on the usual chip-design work; most importantly, we embedded Software in Silicon.

Undertaking the Security challenges

Turn security on by default! No compromise, for your Data at rest, on the wire AND in memory, all at hardware speed! With the Oracle M7 Software in Silicon features, not only did we improve the cipher algorithms that (transparently) encrypt your data, we are also providing a completely unique capability to secure program memory access: Silicon Secured Memory. This feature locks the memory of a program at allocation time, providing a "key" (colored pointer) to the program; only that key will match its memory segment. So this is the end of Heartbleed- or VENOM-like attacks.
The nice thing about it is that it also protects against bad programming and improves code quality, which matters more and more as code gets complex, especially when you are talking Big Data in-memory... the upcoming "new normal".

Undertaking the Efficiency Challenge

Of course, all the improvements we made on core count, cache size, memory bandwidth... benefit any application out of the box. But here again we are embedding SQL in Silicon, which directly benefits Oracle Database 12c In-Memory analytics by a factor of at least 10x: one customer in the Beta Program experienced an 83x gain running Oracle Database 12c In-Memory on M7 versus all-flash. We are even exploring opening the interface to other applications: Spark on SPARC is running in the DEMOground at Oracle Open World, providing a 6x improvement, as stated by Ganesh Ramamurthy during his keynote. If you are not convinced yet of the new efficiency, I invite you to read the benchmarks we published today, not only generic benchmarks but also real Enterprise benchmarks across many areas like In-Memory, Hadoop, NoSQL, Graph, and Neural Networks with R... Just pick the one you need and it should be there... Oh, and I forgot to mention: of course we ran those benchmarks with security turned on by default.
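The "colored pointer" idea behind Silicon Secured Memory can be made concrete with a small software simulation. This is purely a conceptual toy: the real SPARC M7 does this in hardware at cache-line granularity with no software overhead, and the class and method names below are invented, assuming only the behavior described above (memory is tagged at allocation, and a tag/pointer mismatch traps at access time).

```python
# Toy simulation of memory "coloring" (hypothetical names; the real
# feature works in silicon, not in a Python allocator).
class ColoredMemory:
    """Allocator that tags each allocation with a color (a small version tag).

    A pointer is only valid while its color matches the block's current
    color; freeing the block changes the color, so stale or out-of-bounds
    pointers are caught at access time instead of silently leaking data."""

    def __init__(self):
        self._blocks = {}       # block id -> (color, bytearray)
        self._next_id = 0
        self._next_color = 0

    def alloc(self, size):
        """Return a 'colored pointer' (block id, color) to fresh memory."""
        block_id = self._next_id
        self._next_id += 1
        self._next_color = (self._next_color + 1) % 16  # small tag space
        self._blocks[block_id] = (self._next_color, bytearray(size))
        return (block_id, self._next_color)

    def read(self, ptr, offset):
        """Read one byte; trap if the pointer's color no longer matches."""
        block_id, color = ptr
        current_color, data = self._blocks[block_id]
        if color != current_color:
            raise MemoryError("color mismatch: stale or invalid pointer")
        return data[offset]

    def free(self, ptr):
        """Re-color the block so any surviving pointer becomes invalid."""
        block_id, color = ptr
        current_color, data = self._blocks[block_id]
        if color != current_color:
            raise MemoryError("double free or invalid pointer")
        self._blocks[block_id] = ((current_color + 1) % 16, data)
```

A Heartbleed-style overread works by walking past the end of one buffer into a neighbor's memory; with coloring, the neighboring memory carries a different tag than the attacker's pointer, so the very first out-of-bounds access traps instead of returning someone else's data.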


Architectures and Technologies

#OOW15 Cloud Predictions for 2025

Mark Hurd launched the second day of Oracle Open World by providing his Vision 2025, joined on stage by Jim Fowler (GE CIO) and Mike Brady (AIG CTO), along with video testimonials from the Solarus Aircraft business, Avaya and DocuSign: a panel of customers spanning from very large to very small companies. We are facing today a massive change in the industry, driven by lower worldwide growth and a shift of consumers impacting technologies. The current IT model, based on old applications from a pre-internet, pre-mobile, pre-social world, is not sustainable. Combining all those macro- and micro-economic trends explains why the Cloud is becoming so popular. It is built on the promise of faster innovation at a lower cost NOW. But making it effective will take time: at least a decade. That's why Oracle undertook its journey to the Cloud several years ago to prepare this transformation, starting by rewriting our applications for the Cloud and allowing coexistence (between existing on-premise and Cloud) to sustain this decade-long move to the Cloud. By 2025, Mark Hurd's vision is that:

- 80% of applications will be in the cloud
- 2 suite providers will own 80% of the SaaS market
- 100% of test/dev will be in the cloud
- Virtually all enterprise Data will be stored in the cloud
- The enterprise cloud will be the most secure IT environment

And we are readying ourselves for that transformation. Jim Fowler, GE CIO, completely reflected this vision. He has set the goal of having 70% of GE's applications in the Cloud by 2020. For that, he wants to leverage partners who know best where GE is not adding value to the business, and focus on building where GE can differentiate, like analytics software for gas turbines or locomotives. And even for that 30% of in-house intellectual-property development, GE is still looking at partners to buy innovation at the infrastructure layer to provide scalability and security.
A good thing for the Oracle Engineered Systems strategy, powering both the Cloud and the on-premise transformation. AIG CTO Mike Brady reflected the same movement, underlining the need to link both worlds, as he can't move to the Cloud without connecting the Data streams coming from the Enterprise. And along those lines, Security was THE word expressed in all testimonials. All in all, those testimonials not only reinforced Oracle's vision for the Cloud but also our investments in full-stack innovation, deep down to the chip level, to deliver the scalability and security needed both in the Cloud and for on-premise transformation and innovation... securely. And higher up the stack, a fully integrated application portfolio spanning CRM, ERP, Marketing and Social Relationship, extensible to your own requirements thanks to an open platform leveraging 100% compatible Java technology to run anywhere, with no lock-in. A strategy that Avaya leveraged to move from a 100% proprietary Cloud provider to the Oracle Suite of Cloud Applications and Java PaaS, reducing its specific code by 80%, and for the remaining 20% using a fully open Java PaaS. To get additional insight into Mark's keynote, I also invite you to read the following article: "Oracle CEO Mark Hurd Lays Out the Future of the Cloud".


Architectures and Technologies

#OOW15 Building High Performing Datacenters and Moving to the Cloud

This year, to kick off Oracle Open World 2015, Intel CEO Brian Krzanich was on stage to share his vision and what Intel is building with Oracle to enable the transformation to the Cloud. And this starts by building high performing datacenters that:

- Make it easy to adopt cloud technology
- Make it perform and deliver a real ROI
- Make it compelling, delivering actionable insight
- Make it secure

Along the lines of Software Defined everything (see my previous entry on this similar topic) and the Oracle Engineered Systems strategy to "Make it Easy, Perform, Compelling and Secure" (check also this "Performance Study: Big Data Appliance compared with DIY Hadoop" and the picture below, extracted from the JavaOne keynote), Intel is also working with Oracle on Project Apollo to provide enterprise-grade availability and scalability. The goal of the project is to improve the massive deployment of virtual environments, making better use of resources (scalability) and removing the variance in workload behavior to provide a sustainable SLA (availability). Of course, this is done at both the hardware and software level. As reported by Intel's CEO, at the current state of the project, Intel and Oracle were able to improve resource usage by 50% and reduce the variance by an order of magnitude, from 30% to 3%. He also touched on innovations Intel is working on around SSDs and DIMMs, stating that "separation between memory and storage will completely transform performance in the year to come".

This Intel introduction was fully in line with Larry Ellison's keynote on what Oracle is doing to enable the move to the Cloud: a new era of utility computing. As Larry said, we are designing our Cloud with 6 goals:

- Cost: lowest acquisition price and lowest total cost of ownership
- Reliability: fault tolerant with no SPOF (single point of failure), as "we need to be as reliable as your utility company"
- Performance: fastest database, middleware, analytics... from batch to real-time
- Standards: in the early days of cloud there were no standards, only pioneers. But people have a huge investment on premises that they want to move to the cloud, so we need to implement the same standards in the cloud: SQL, Hadoop, NoSQL, Java, Ruby, node.js, Linux, Docker...
- Compatibility: easy movement of workloads between on-premises and cloud
- Security: always-on, continuous defense against cyber attacks. As Larry said, "security should always be on and should be item number one: as all our data goes online!"

To provide the capabilities to enable your transformation to the Cloud, with the choice to run on-premises or in the cloud through a single management pane of glass, major announcements were made today. We enriched our SaaS offering with Manufacturing and E-Commerce, also extending the mobile consumer interface of our applications and flattening the learning curve. We announced major enhancements to the platform as well, both for the cloud and for big data analytics. For the Cloud: extended multitenant capabilities both at the Oracle Database level (with up to 4096 Pluggable Databases per Container) and at the Java server level (improving the consolidation ratio by x3). For Big Data Analytics: first, with in-memory offloading capabilities to replicated Oracle Database instances; second, by bringing Big Data into the cloud, with the ability to deploy Hadoop clusters along with Big Data Preparation, Discovery and Visualization services (check http://cloud.oracle.com). And of course, all of this with reliability in mind: adding Oracle RAC in the Cloud, Exadata Service in the Cloud, as well as fault-tolerant Java server deployment across different locations, enabling zero planned and unplanned downtime. As Larry said, more announcements are to come, especially around security. So stay tuned for the coming days of Oracle Open World 2015.
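To see why reducing workload variance matters for a sustainable SLA, here is a back-of-the-envelope illustration. This is my own sketch (not Project Apollo data), using a simple Gaussian approximation of load: the capacity you must reserve to honor a given SLA shrinks as variance drops, which is exactly why a 30% to 3% variance reduction improves consolidation.

```python
# Back-of-the-envelope illustration (author's sketch, not Project Apollo data):
# capacity needed to cover almost all samples of a fluctuating workload,
# modeled as a mean utilization with a relative standard deviation.

def capacity_needed(mean_load, rel_stddev, z=3.0):
    """Capacity to provision so that ~99.9% of load samples fit (z ~= 3 sigma),
    assuming the load is roughly Gaussian around its mean."""
    return mean_load * (1.0 + z * rel_stddev)

# Same average load of 100 units, before and after variance reduction:
before = capacity_needed(100, 0.30)  # 30% variance: large safety margin
after = capacity_needed(100, 0.03)   # 3% variance: capacity tracks the mean

print(round(before, 1), round(after, 1))  # 190.0 109.0
```

With the variance tamed, roughly 40% less hardware covers the same SLA in this toy model, which is the intuition behind the consolidation gains quoted above.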


Architectures and Technologies

Real-Time Financial Risk Management: big data in-memory at scale

If you work in the financial markets and are going to attend Oracle Open World in a few days, you should take a valuable hour of your time to join the head of R&D of Quartet FS, Antoine Chambille, for one of his sessions at JavaOne. He will be joined by our Java and ISV Engineering experts to explain how you can make real-time correlations with Java, to take the right decision. As a matter of fact, there comes a point where scale-out (the web-search-engine-like design pattern) doesn't work anymore. And when you need to correlate a very large amount of data in real time, this becomes a very interesting challenge. This is what has been achieved by Quartet FS with Oracle technologies, running Java in-memory at very large scale in real time. To learn more, and to ask the burning questions you should have on how you can leverage these capabilities in your own context (even outside of the financial markets, such as retail), join Antoine Chambille and Oracle Engineering for the following sessions:

- Operating a 16-Terabyte JVM...and Living to Tell the Tale [CON1855]
- Multidimensional Java Databases, Large Heap Sizes, and the JVM...Oh My! [BOF10260]

As a first preview, and for those not going to be at JavaOne, you can have a look at the first 22 minutes of the last Quartet FS User Group's Technology Keynote, about performance.
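The core idea behind such real-time correlation engines is incremental in-memory aggregation: instead of re-scanning all the data on every query, running aggregates are kept per dimension cell and updated as events stream in. Here is a deliberately tiny toy model of that idea (my own illustration, not Quartet FS code, and obviously nothing like a 16 TB JVM):

```python
from collections import defaultdict

# Toy illustration of incremental in-memory aggregation (author's sketch,
# not Quartet FS code): keep running exposures per (desk, currency) cell
# and update them as each trade arrives, so risk queries stay O(1)
# regardless of how many trades have been processed.

class RiskCube:
    def __init__(self):
        self.exposure = defaultdict(float)  # (desk, currency) -> net exposure

    def on_trade(self, desk, currency, notional):
        # Incremental update: each new trade touches exactly one cell.
        self.exposure[(desk, currency)] += notional

    def query(self, desk, currency):
        return self.exposure[(desk, currency)]

cube = RiskCube()
cube.on_trade("rates", "EUR", 1_000_000)
cube.on_trade("rates", "EUR", -250_000)
cube.on_trade("fx", "USD", 500_000)
print(cube.query("rates", "EUR"))  # 750000.0
```

The hard part at scale, and the subject of the sessions above, is doing this across billions of cells inside a single very large heap without garbage-collection pauses breaking the real-time guarantee.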


Architectures and Technologies

Oracle: your Software Defined Network provider?

Our Engineered Systems and Cloud are all over the place, and I am engaged on many customer projects around those technologies, both on-premises and in a hybrid public/private mode. And one of the key underlying pieces is the network. Oracle is not well known as a network switch provider... but it should be, as this is a critical asset that sits in your data path, ensuring flexibility, performance and security. For Oracle it is a key piece in building our Cloud and our Engineered Systems, and one that we are integrating down to the chip level (see the latest announcement at the Hot Chips 27 conference).

Attributes of a "Cloud-Enabled" Datacenter:

- Software Defined "Everything": converged, modular, agile infrastructure with dynamic provisioning
- Data Center Security: enhanced security in converged infrastructure and fabric-based solutions
- Network Services in Fabric: network virtualization, Software Defined Network (SDN)/Network Function Virtualization (NFV), flatter L3 networks, virtual network services, multi-tenancy, strong provisioning and management tools

Networking is the "converged" in Converged Infrastructure. Oracle SDN enables the creation of segregated, high-speed L2 networks, and is complemented by Oracle SDN Virtual Network Services to deploy virtual firewalls, routers and load-balancers on demand.

Software Defined "Everything": Oracle Virtual Network Fabric and Services

With Oracle Virtual Networking, we are drastically simplifying datacenter network topology by converging your infrastructure and consolidating all your I/O (both Ethernet and Fibre Channel), with fully dynamic provisioning of vNICs and vHBAs up to your running Virtual Machines (VMs). This simplification focuses on removing unnecessary North-South traffic and optimizing East-West traffic between your VMs, with strong tenant segregation (PCI DSS compliant), leading to better performance and secure infrastructure consolidation.

With Oracle Virtual Network Services, we provide modular, highly scalable network services that can be instantiated per tenant, in a highly available and secure model. Those services, all delivered as software VMs, encompass firewall, load-balancer, routing, VPN and NAT. They can all be daisy-chained in a single virtual instance, and they are highly available, with redundancy built in using VRRP. Unified management is delivered by Oracle Fabric Manager, integrated into Oracle Enterprise Manager or OpenStack.

What's next: Enterprise IT and Telco Networks Convergence

We are now at a turning point where core network services, thanks to software-based virtual network functions, will be able to be supported by Enterprise IT... as long as the right platform, including software and hardware, can be delivered to sustain the 99.999% availability well known in the telco industry. What could we do without any network connection today? Not much! Building on our long experience as a telco technology provider, encompassing hardware and software, we are now bridging the worlds of Enterprise IT and telco network IT backbone infrastructure with our Netra Modular System platform, to support the deployment of any kind of virtual environment in a very secure and scalable architecture. As a design pattern, we took the best of rackmount servers and blade chassis without the drawbacks: the time-to-deploy of rackmount and the proprietary chassis of blades.

Combining everything together, this is how it could look to operate your Enterprise IT datacenter, through OpenStack, leveraging Oracle Software Defined Network and Virtual Services on top of our Netra Modular System. If you are going to Oracle Open World 2015, starting October 25th, I invite you to look for the following sessions to exchange with our experts:

- Network and Security Function with Oracle SDN Virtual Network Services - HOL10372
- Oracle SDN with Virtual Network Services for the Ultimate in Network Connectivity - CON5788

And of course, I will be happy to see you there, or on Twitter as well.
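As an aside, the "daisy-chained" service model described above is easy to picture as function composition: a packet traverses each virtual service in turn, and any service may drop it or rewrite it before handing it on. Here is a toy sketch of that idea (my own illustration, not Oracle SDN internals):

```python
# Toy sketch of virtual network service chaining (author's illustration,
# not Oracle SDN internals): each service is a function that either drops
# a packet (returns None) or passes a possibly rewritten packet onward.

def firewall(packet):
    # Drop anything not destined for an allowed port.
    return packet if packet["dport"] in {80, 443} else None

def nat(packet):
    # Rewrite the public destination address to a backend address
    # (addresses here are illustrative placeholders).
    return dict(packet, dst="10.0.0.12")

def chain(packet, services):
    for service in services:
        packet = service(packet)
        if packet is None:  # dropped somewhere in the chain
            return None
    return packet

result = chain({"dst": "203.0.113.5", "dport": 443}, [firewall, nat])
print(result)  # {'dst': '10.0.0.12', 'dport': 443}
```

In the real fabric, each stage is a redundant VM pair rather than a function, but the ordering and drop/forward semantics of the chain are the same.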


Architectures and Technologies

Embrace Storage Innovation and Avoid Data Growth Disruption

Last week, we made a major announcement on Oracle Storage, with the upcoming next generation of our SAN storage. In my last blog entry I talked about the characteristics needed today to enable the Real-Time Enterprise: Real-Time, Always-On, At Scale, Secure. All those characteristics have been built into the FS1. So let me go back over some of the unique features that we put into the FS1 to make that happen.

Business prioritization up to the storage

To deliver the most efficient I/O subsystem, not only did we design the FS1 with a multi-tier flash and disk model, we also paid special attention to the auto-tiering capabilities, so that they stay in line with business requirements. As such, the algorithm that we use to move hot data to the most appropriate storage ensures that we are moving the data relevant to the application or database, not stale data of no use to the business, with very fine-grained control (640k). This is complementary to what we already provided with the unique I/O prioritization based on business value, where we reorder the I/O to return the most important first, rather than in the usual first-in/first-out model. All of those unique functions not only contribute to the performance you expect in real time; they also contribute to scaling at an optimal cost per performance. We also continue to offload the replication engine, so we can scale properly as your data grows while maintaining your disaster recovery capability.

High reliability for your data

The system is designed with no single point of failure and extremely fast controller failover. Combined with the Oracle MaxRep replication engine, we can secure your data across multiple sites (one-to-many or many-to-one), synchronously or asynchronously, up to application-level protection, by providing a consistent recovery point rollback capability.

Enabling storage at scale

The FS1 has been designed to scale from the ground up, at an optimal cost per performance. By leveraging the unique software features, co-engineered up to the database and application level, we are uniquely positioned to manage your data growth. As such, we deliver flash performance at a fraction of the cost. We also offer the unique compression mechanism co-engineered with the database, which enables us to reduce your data by a ratio going from x5 to x10 to x50, depending on your data sets. We also offer unique configuration templates that let you configure the storage per type of application workload, reducing your operational cost even as your data and application landscape evolves.

Securing your data efficiently

As I said previously, security is paramount and mandatory, especially in cloud multi-tenant environments. That is why we paid particular attention to equipping the FS1 with the capability to be split into dedicated domains, also adding support for T10 Protection Information, which prevents silent data corruption.

If you are in Paris on November 5th, I invite you to join us for a special event around the FS1 announcement, where you will be able to meet customers who participated in our beta program.
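As a quick worked example of what such compression ratios mean for capacity planning (my own back-of-the-envelope arithmetic, not Oracle sizing guidance), the physical capacity needed for a mixed data set is just the sum of each portion divided by its ratio:

```python
# Back-of-the-envelope capacity estimate (author's arithmetic, not Oracle
# sizing guidance): physical capacity needed to hold a mixed data set when
# different portions compress at different ratios.

def physical_capacity_tb(portions):
    """portions: list of (logical_tb, compression_ratio) pairs."""
    return sum(logical / ratio for logical, ratio in portions)

# Hypothetical 100 TB logical data set: 50 TB compressing x5,
# 30 TB compressing x10, and 20 TB compressing x50.
needed = physical_capacity_tb([(50, 5), (30, 10), (20, 50)])
print(round(needed, 2))  # 13.4 (TB physical, instead of 100 TB logical)
```

Even with the conservative x5 ratio dominating half the data set, the physical footprint shrinks by a factor of about seven in this example.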


Architectures and Technologies

#OOW14 Real-Time, Always-On, At Scale, Secure

John Fowler, in the last #OOW14 keynote, tackled the challenges we face in enabling the business to continuously evolve, and as such create the Real-Time Enterprise. Complexity is everywhere, and it will continue to grow: more things connected, in real time, more risk exposure, and you could go on and on. That's a fact. What we need to avoid is complication, and look instead to simplification. But to get there, the devil is in the details. And Oracle has a unique advantage in addressing those details: owning all the components of the stack makes the difference. Why? Because we can understand the behavior of a specific business workload and the associated impact at every layer of the stack. Doing so, we can do things differently to attack the problem, and that is what we do, starting at the system level... as this is the baseline on which you lay down all your business applications and build your clouds (private or public).

Real-Time

To get real-time, in line with the continuous data explosion, you need to remove all bottlenecks, starting at the I/O level. That's why, without breaking any software compatibility, we rewrote our Oracle Database and engineered specialized storage able to execute part of the queries directly inside it (Exadata), and did the same for your applications running Java (Exalogic). We are the only provider able to give you the best value per transaction to run your Oracle Database. But we don't stop there. As we all know, the fastest storage is neither disk nor flash but DRAM. That's why we developed the Oracle Database 12c In-Memory option and the associated highly scalable systems (SuperCluster, M6-32), to enable you to drastically simplify not only your datacenter asset footprint but also your data lifecycle management. No more need to create multiple copies of your data, one for OLTP and one for the data warehouse: you can now do it all in one. We are pushing the limits even further with the introduction of the M7 at Hot Chips this year, where we are putting specific database acceleration directly into the chip.

Always-On

Business runs 24/7, so it can't afford any disruption to its applications. It is becoming critical to architect for no downtime, not only for unplanned outages but also for maintenance and patching. That's why we invest in details like the OS, with capabilities to do live patching without any OS restart required. And that's why controlling the whole stack, and being able to provide a complete validated bundled patch for all components, is a key asset for being "always-on".

At-Scale

To handle large clouds, shared infrastructure, and more and more data, you need to plan for scale, avoiding complication and looking to simplification. That's why we are building large-capacity systems with a linear cost model, to let you invest in a simplified, accessible model. At Oracle, the cost per unit of work is the same for a single T5-2 server or an M6-32. Of course, an M6-32 is equivalent to many, many T5-2s... but with potentially only one OS to manage (and patch), and a massive amount of memory to transform your enterprise into a real-time enterprise. And we are doing this not only for servers but also for storage, where we put massive amounts of DRAM and clever algorithms into the ZS3 (NAS) and FS1 (SAN) to leverage the hardware for the highest performance at the lowest possible cost. In the end, Oracle Engineered Systems are the bargain of IT systems today. You could run your application x2 faster with a fraction of what you have today, thanks to unique software improvements bundled with the hardware, saving not only on infrastructure and datacenter footprint but also on operations. This is one of the aspects reflected in the testimony of Bill Callahan, Director, Products and Technology, CCC Information Services, during his session on Maximum Availability Architecture with Oracle Engineered Systems. And we continue to invest to push further, with a single management tool through EM 12c. We are also adopting OpenStack at all levels, with integration up to the Database and Java, to enable you to deploy DBaaS and Java as a Service through a lightweight method.

Secure

With everything connected and running on shared (cloud) infrastructure, security at every level is paramount and mandatory. That's why we provide secure multi-tenant consolidation mechanisms embedded in our OS, virtualization layers, servers and storage, with continuous enhancement up to a built-in new Software in Silicon feature that will be available in the M7: Application Data Integrity, which prevents malicious access to memory. An M7 is, for example, automatically protected against attacks like Heartbleed. If you want to test it and run your code on the M7, it is already available today in the cloud: go to SWiSdev.oracle.com.

What's Next

At Oracle, it is Oracle engineering (systems & software) that operates Oracle Cloud, and we learn a lot from this experience. That's why, for example, we are developing Dynamically Optimized Database Storage and Detailed Cloud Scale Analytics. Dynamically Optimized Database Storage will integrate analytics for the highest observability, instantly knowing the coherence between storage and database in a multi-tenant environment. Detailed Cloud Scale Analytics will be based on a distributed real-time store that captures performance information from your landscape, from your hardware up to the database, middleware and applications, and will provide a unified view to find multi-tier issues. In summary, beyond the Software Defined Datacenter, we are working on enabling the future datacenter: engineered to run software, and as such your business, at scale.
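The Application Data Integrity idea mentioned above can be modeled in a few lines. This is purely my own toy illustration of version tagging; the real M7 feature works in hardware, tagging memory lines and spare pointer bits, with no software bookkeeping:

```python
# Toy software model of memory version tagging (author's illustration;
# Application Data Integrity does this in hardware on the SPARC M7).
# Each allocation gets a version; a "pointer" carries the version it was
# issued with, and any access with a stale version is rejected.

class TaggedMemory:
    def __init__(self, size):
        self.data = [0] * size
        self.version = [0] * size   # current version of each cell

    def alloc(self, addr):
        self.version[addr] += 1     # retag the cell on (re)allocation
        return (addr, self.version[addr])  # pointer = address + version

    def load(self, pointer):
        addr, ver = pointer
        if self.version[addr] != ver:
            raise MemoryError("version mismatch: stale or foreign pointer")
        return self.data[addr]

mem = TaggedMemory(16)
p = mem.alloc(3)
mem.data[3] = 42
print(mem.load(p))  # 42: versions match
mem.alloc(3)        # cell reallocated -> version bumped
# mem.load(p) would now raise MemoryError: the use-after-free is caught
```

This is also why such a feature helps debugging: a stray access trips immediately at the faulting instruction, instead of silently corrupting memory and failing much later.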


Architectures and Technologies

#OOW14 Digital & SMAC: Social, Mobile, Analytics, Cloud

Tuesday at #OOW14 was again a day of major announcements, made by Thomas Kurian and demonstrated on stage by Larry Ellison, around SMAC which, as Intel's CIO put it, will make everything smart and connected and will be a massive shift. And this will drive the digital transformation.

Digital Transformation

Dr. Didier Bonnet from Capgemini started the day with a very interesting introduction on digital transformation and what needs to be done to really turn this investment into a true advantage: an advantage that can be worth up to +26% in profitability for digitally savvy leaders, according to a Capgemini study. The 4 points to keep in mind are:

- Have a vision: identify strategic assets, create a transformative vision, define a clear intent and outcome, and keep evolving
- Engage: wire with your customers/employees, drive adoption, and scale
- Provide governance: avoid duplication, put coherence into the program, prioritize, enable
- Have technology leadership: to enable the transforming experience, leading to transformed operations and business models

Supporting this digital transformation guideline, you can understand "how" SMAC (Social, Mobile, Analytics, Cloud) resonates and drives the alignment of the technology leadership enablers.

SMAC and Digital Transformation

Analytics plays a key role in providing insight at every stage. To get there, everybody is looking into creating a Data Lake / Data Reservoir, but the question is: "how do you scan your data sources (structured or unstructured), whether in Hadoop, NoSQL, or relational databases, and efficiently make them available to business analysts as information for knowledgeable insights?" That's why we developed and announced Big Data SQL, Big Data Discovery and Big Data Analytics, running on Oracle Engineered Systems (Big Data Appliance, Exadata, and Exalytics) and available on-premises or in Oracle Cloud as a service.

In a few words, Big Data SQL is a way to query all your data in one single SQL statement, from multiple sources, and very efficiently: we did for Hadoop what we did for Exadata, decomposing the query to process it at the Hadoop node level and returning only the result to the database for aggregation. This makes it possible to very rapidly scan your Oracle Database, your NoSQL store and your Hadoop cluster, and creates the illusion that all the data is in one place. But this is only one piece of the puzzle; the other is to provide simple tools for non data scientists to do discovery, prediction and correlation in Hadoop: the visual face of Hadoop with Big Data Discovery, and, at the business intelligence level, Big Data Analytics. Big Data Analytics is also a set of tools that helps you explore data in your databases and in Hadoop, very fast thanks to in-memory, and easily thanks to a simple user interface.

The other aspect of digital is, of course, mobility. Here too major announcements were made: we will provide a mobile application development framework that helps you create applications that can run anywhere, on iOS, Android, mobiles and tablets, as well as your usual desktop. Write once, run anywhere. All supported by a mobile cloud service providing APIs, shaping, persistence and analytics around your devices... adding mobile security, identity management and secure (corporate) application containers. All in all, a true enabler not only for your mobile & BYOD strategy but also for IoT (Internet of Things). Last but not least, we also announced yesterday document sharing & social network services in the cloud... As you can see, all the technology leadership to shape your digital transformation.
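The query decomposition described above (push the filtering down to the data nodes, return only small partial results for central aggregation) can be sketched in a few lines. This is a toy model of the predicate-pushdown idea, not Big Data SQL's actual implementation:

```python
# Toy model of predicate pushdown (author's sketch, not how Big Data SQL
# is implemented): each "node" filters and pre-aggregates its own
# partition locally, and only the tiny partial results travel back to
# the coordinator for the final aggregation.

def node_scan(partition, predicate):
    # Runs on each data node: filter locally, return only partial sums.
    matching = [row["amount"] for row in partition if predicate(row)]
    return sum(matching), len(matching)

def federated_sum(partitions, predicate):
    # Runs on the coordinator: combine the small per-node results.
    partials = [node_scan(p, predicate) for p in partitions]
    return sum(s for s, _ in partials), sum(n for _, n in partials)

# Hypothetical partitions: one living in Hadoop, one in a relational DB.
hadoop_part = [{"region": "EU", "amount": 10}, {"region": "US", "amount": 7}]
rdbms_part = [{"region": "EU", "amount": 5}]
total, count = federated_sum([hadoop_part, rdbms_part],
                             lambda row: row["region"] == "EU")
print(total, count)  # 15 2
```

The point of the design is what crosses the wire: full rows never leave their node, only `(sum, count)` pairs do, which is why a single SQL statement can span sources of very different sizes efficiently.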


Architectures and Technologies

#OOW14 IT Transformation: CIOs value of the Cloud

Yesterday, #OOW14 Mark Hurd's CIO panel focused on real customer transformations through the Cloud (public or private). Here are a few quotes I captured on CIOs' motivations for this transformation:

Walgreens: "We don't want to be in the business of integrating things. So the more we move into the cloud, the better."
P&G: "[It is] more about a solution toward a business case, engineered together, rather than technology pieces."
Dunnhumby: "Scientists are very good, but you need a technology platform to amplify and be able to deliver at scale."
Intel: "Business thinks we [IT] move too slow. We need Oracle to keep innovating, whether it is Engineered Systems or Cloud."

All of those statements rang even more true to me when the head of production at one of my customers told me that Oracle Engineered Systems were transforming the way he used to think about his enterprise architecture: this integrated approach being the enabler for an enterprise architecture building block [in a box], matching a business requirement with much more efficiency than assembling discrete pieces. He also needed enough flexibility to satisfy his business in either private or public mode, making the compatibility model a critical success factor for hybrid cloud. That's why today's announcement and demonstration by both Thomas Kurian and Larry Ellison of a truly easy migration path from existing Oracle Database and Oracle applications to Oracle Cloud, and back, was very compelling in addressing IT transformation at scale.


Architectures and Technologies

#OOW14 kick-off: Cloud, Big Data & Innovation

Many announcements were made today by Larry Ellison during his kick-off session of Oracle Open World. As an introduction to his keynote, Renée J. James, Intel President, came to share Intel's vision and the co-development we are doing together. Intel's vision relies on 3 pillars: (Big) Data, Cloud and Security.

Data

It came as no surprise that the goal we share with Intel is to provide meaningful insights from the flow of data. To illustrate what we already deliver in this space, Intel presented two customer case studies, both leveraging the power of Exadata technology, moving IT into the core business and no longer a pure support function. And Balaji Yelamanchili, Senior Vice President, Product Development, Oracle Business Analytics, came to explain the joint work we did with Intel to design a workload-optimized platform, leveraged by Exalytics. Oracle's new x86 servers, the X4-4 (used in Exalytics) and X4-8, are the first and only systems based on the Intel Xeon Processor E7-8895 v2. In conjunction with specialized Oracle software, those systems provide unique capabilities that allow customers to dynamically address different workloads in real time. In the case of Exalytics, this means many cores to run highly parallel transactional workloads, and fewer, faster cores to run batch processes.

Cloud

The other major element for IT to be a business enabler relies on its flexibility, and a cloud architecture is a key enabler of that flexibility. Private clouds are growing faster than public clouds, due to compliance, security and SLAs ("where your workloads work best"). The good news is that, by applying a proper design pattern, Intel's experience shows that private cloud remains competitive with public cloud over time.

Security

Of course, in this virtual world, security is paramount, and even more complex to achieve when virtual machines are talking through virtual networks and moving around from server to server, making it a real challenge for firewall rules to follow... That's why Intel moved its Next Generation Firewall into Oracle VM, ensuring security intrusion protection at the VM level and keeping firewall rules in place when VMs move around. Again, another example of collaboration between Intel and Oracle to enable a truly secure virtual architecture, one of the key elements of flexible cloud architectures.

Following Renée J. James, Larry Ellison set the agenda on Software as a Service, Platform as a Service and Infrastructure as a Service, enabled by innovation from Engineered Systems, servers and storage down to silicon.

SaaS: lots more enterprise SaaS applications

With a clear build-and-buy strategy, we are unique on the market in offering a full range of cloud applications covering all 3 suites: Customer Experience, Human Capital Management, and Enterprise Resource Planning. We keep extending our capabilities to better answer our customers' needs, with the recent acquisition of BlueKai, delivering Data as a Service for a better customer experience, and by developing engineered sales campaigns to help sales forces sell more.

Oracle Cloud Platform: easy to move existing applications to the cloud

The foundation of our Platform as a Service is based on Oracle Database Cloud Service and WebLogic Java Cloud Service. Anything built on top of our cloud platform will be multi-tenant, social, mobile, secured, with in-memory high-speed analytics. Those foundations are the same ones we used to build our own applications; the same standard that we provide to extend our SaaS applications through this platform, and the one currently in use by other cloud providers to build their own SaaS applications. Standards are still important, as are security and reliability. One major announcement made today by Larry Ellison was that we are sticking to our promise of upward compatibility of the Oracle Database: from mainframe to client-server, from client-server to internet thin client, and now from internet thin client to the Cloud. Move to the cloud, move back: no change to code!

Oracle Cloud Infrastructure: secure, reliable, lowest cost

Our goal is to be competitive in this space, with currently more than 30,000 servers and 400 PB of storage already in use by our customers.

Innovation from Engineered Systems, servers and storage to silicon

As we all agree that the world will be hybrid, between public and private clouds, we are keeping the innovation at the systems level, not only for our own Cloud but also to enable our customers' IT transformation. With our Engineered Systems, we leverage a uniform configuration approach, which benefits our customers in all 3 phases: Design, Build and Run. This seems obvious for the Design and Build phases, as we take care of them at Oracle. For Run, thanks to the uniform configuration, when a bug is found and fixed, it is fixed for all our customers. One of the latest members of the Oracle Engineered Systems family is the Oracle Virtual Compute Appliance, which provides a highly reliable compute and storage appliance, including network virtualization, for the lowest possible price. Two announcements were made today in the Engineered Systems family: the Zero Data Loss Recovery Appliance, which secures all Oracle Databases from 10g, 11g and 12c thanks to real-time redo transport, and the arrival of Oracle Database 12c In-Memory inside Exalytics, for even faster in-memory analytics. Larry Ellison also announced the arrival of a full SAN array from Oracle with massive scale-out capability and unmatched performance in this area: the Oracle FS1 Flash Storage System. This storage is engineered to get the best out of a combination of flash and disks, for the lowest price and highest performance for a given SLA.

To close his keynote, Larry answered a question from Intel's President: "why is Oracle doing the M7?" Of course for performance, but also for security, security and security, as security is becoming the most critical topic, especially on flexible, shared cloud infrastructures. On the performance side, the M7 microprocessor's Software in Silicon brings database acceleration inside the silicon, with features like an inline decompression engine running at memory speed. On the security side, we will deliver hardware-based memory protection with the M7. This will stop malicious programs from accessing other applications' memory, which will result in more secure and more highly available applications... and, as a nice side effect, greatly speed software development, thanks to easier code debugging enabling direct root cause analysis of memory corruption errors. Stay tuned for what's coming in the next days of Oracle Open World 2014...


Architectures and Technologies

Preparing your agenda to get the most of #OOW 2014

Oracle Open World will start in 2 weeks. If you have the opportunity to be there, this is a unique time in the year for the Oracle community (customers, partners, experts, ISVs, OEMs, user groups...) to get together and share their experiences across all industries, as well as to get the latest news on Oracle's strategic direction. To get the most out of Open World, it is important to invest some time in preparing your agenda. As each year, you will have to navigate a portfolio of more than 2000 sessions, from industries, cloud, mobile, social and big data up to systems and technologies. Depending on your current projects and interests, have a look at the "focus on" documents as a starting point, and don't miss the executive keynotes to get a view on where Oracle, and the industry in general, is going. You will find hereafter a small selection covering Oracle strategy and roadmaps as well as customer experience reports.

Oracle Strategy and Roadmaps

Sunday 28/09
- 5:00 PM - 7:00 PM Opening Keynote - Moscone North - Hall D - Larry Ellison - Chief Executive Officer, Oracle

Monday 29/09
- 8:30 AM - 9:45 AM Oracle Keynote: The Business Value of the Cloud - Moscone North - Hall D - Mark Hurd - Executive Vice President, Oracle
- 11:30 AM - 1:00 PM General Session: Infrastructure Transformation Made Easy with Oracle Systems - Oracle Plaza @ Howard Street Theater - Keith Lippiatt - Managing Director, Accenture; Kerry Osborne; Ganesh Ramamurthy - VP, Oracle; John Fowler - Executive Vice President, Oracle; Juan Loaiza - Senior Vice President, Oracle
- 2:45 PM - 3:30 PM General Session: SPARC Server Strategy and Roadmap - Intercontinental - Grand Ballroom B - Masood Heydari - SVP SPARC Systems, Oracle
- 2:45 PM - 3:30 PM Oracle Exalogic Roadmap: Hardware, Software, and Platform News - Moscone South - 270 - Brad Cameron - VP Development, Oracle
- 4:00 PM - 4:45 PM Oracle Software in Silicon Technical Deep Dive - Intercontinental - Grand Ballroom B - Rick Hetherington - VP, Hardware Development
- 4:00 PM - 4:45 PM Oracle Exadata: What's New and What's Coming - Moscone South - 103 - Juan Loaiza - Senior Vice President, Oracle
- 5:15 PM - 6:00 PM Oracle Big Data Appliance: Deep Dive and Roadmap for Customers and Partners - Moscone South - 104 - Jean-Pierre Dijck - Senior Principal Product Manager, Big Data

Tuesday 30/09
- 8:30 AM - 9:45 AM Oracle Keynote: Cloud Services for the Modern Enterprise - Moscone North - Hall D - Thomas Kurian - Executive VP Product Development, Oracle
- 10:45 AM - 11:30 AM General Session: Oracle Solaris Strategy, Engineering Insights, and Roadmap - Intercontinental - Grand Ballroom B - Robert Milkowski - Unix Engineer, VP, Morgan Stanley; Markus Flierl - Vice President, Oracle
- 1:30 PM - 3:15 PM Oracle OpenWorld Tuesday Afternoon Keynote - Moscone North - Hall D - Larry Ellison
- 3:45 PM - 4:30 PM Oracle Big Data SQL: Deep Dive (SQL over Relational, NoSQL, and Hadoop) - Moscone South - 103
- 3:45 PM - 4:30 PM Zero Data Loss Recovery Appliance: Deployment Best Practices - Moscone South - 305
- 5:00 PM - 5:45 PM Oracle SuperCluster Technical Deep Dive - Intercontinental - Grand Ballroom B - Allan Packer - Senior Principal Software Engineer

Wednesday 1/10
- 8:30 AM - 9:45 AM The Real-Time Enterprise - Moscone North - Hall D - John Fowler - Executive Vice President, Oracle
- 10:15 AM - 11:00 AM What's New with Oracle VM Server for x86 and SPARC: A Technical Deep Dive - Intercontinental - Union Square - John Falkenthal - Sr. Director Oracle VM, Oracle; Honglin Su - Director of Product Management, Oracle
- 10:15 AM - 11:00 AM Virtual Compute Appliance: Product Roadmap and Cloud Implementations - Intercontinental - Grand Ballroom B - Premal Savla - Director Product Management, Oracle
- 11:30 AM - 12:15 PM Run OpenStack Cloud on SPARC "Enterprise Cloud Infrastructure" - Moscone South - 305
- 12:45 PM - 1:30 PM OpenStack and Oracle Solaris: Engineered for the Cloud - Intercontinental - Grand Ballroom A
- 3:30 PM - 4:15 PM Oracle Exalytics In-Memory Machine: The Fast Path to In-Memory Analytics - Moscone West - 3014 - Gabby Rubin - Sr. Director, Product Management, Oracle
- 4:45 PM - 5:30 PM Building a Scalable Private Cloud with Oracle Exalogic, Nimbula, and OpenStack - Moscone South - 304

Customer Case Studies

Sunday 28/09
- 2:30 PM - 3:45 PM Think Exa! - Enkitec - Moscone South - 310
- 3:30 PM - 4:15 PM Oracle Exadata/Oracle Exalytics Integration for Fast Analytics and an Optimized Data Warehouse - Apps Associates Pvt Ltd - Moscone South - 303

Monday 29/09
- 1:30 PM - 2:15 PM Case Study: Delivering DBaaS for End User Customers on Oracle SuperCluster - Dimension Data - Moscone South - 305
- 5:15 PM - 6:00 PM Why Database as a Service Will Be a Breakaway Technology at Société Générale - Moscone South - 301 - Christian Bilien - Global Head of the Database Teams, Société Générale

Tuesday 30/09
- 12:00 PM - 12:45 PM Top Five Database Features DBAs and Storage Admins Should Know About Storage - Loyalty New Zealand - Intercontinental - Intercontinental C
- 3:45 PM - 4:30 PM Oracle Exadata Migrations: Lessons Learned from Retail - Moscone South - 310 - Svetoslav Gyurov - Principal Consultant, e-DBA; Jason Arneil - Exadata Consultant, e-DBA Limited

Wednesday 1/10
- 10:15 AM - 11:00 AM Deployment of Oracle Exadata and Oracle Exalogic Increases Business Efficiency - Gemological Institute of America, Inc., Tata Consultancy Services Ltd - Moscone South - 310
- 12:45 PM - 1:30 PM Best Practices for Deploying Oracle Software on
Virtual Compute Appliance - Secure 24 - Intercontinental - Grand Ballroom B 2:00pm - 2:45pm Simplify Your Data Center with Huge Database Consolidation on Oracle SuperCluster - Energy transfer, Intercontinental - Grand Ballroom B 2:00PM - 2:45PM Oracle Database via Direct NFS Client: Learn Why It‚Äôs in Your Future - Loyalty Newzeland - Intercontinental - Intercontinental C 3:30pm - 4:15pm Best Practices for Real-Time Transactions for Mobile on Oracle SuperCluster - KDDI - Intercontinental - Telegraph Hill 4:45pm - 5:30pm - Enterprise Virtualization with Virtual Compute Appliance X4-2 - BIAS Corporation - Intercontinental - Grand Ballroom B4:45pm - 5:30pm - The Dynamic Duo: How Oracle Big Data Appliance and Oracle Exadata Deliver Analytics - Regions Bank - Moscone South - 303 Thursday 2/10 9:30 AM - 10:15 AM Real-World Oracle Maximum Availability Architecture with Oracle Engineered Systems - Exadata, Exalogic, ZFS Backup Appliance - Intercontinental - Grand Ballroom B - CCC Information Services; 10:45 AM - 11:30 AM Deliver Oracle Database Cloud: Oracle Database 12c Multitenant on Oracle SuperCluster - HDFC Bank Moscone South - 308 12:00pm - 12:45pm Customer Case Study: SPARC Server Consolidation on T5 Servers Vodafone Hungary - Intercontinental - Grand Ballroom C 2:30PM - 3:15PM Odyssey of DBaaS: A UBS Story - Oracle Enterprise Manager 12c & Oracle Database Appliance - UBS - Moscone South - 301 And to go further on Oracle Systems tracks, here are the specific focus-on to look at: Engineered Systems Central at OpenWorld Focus On Oracle SuperCluster Focus On Oracle Optimized Solutions Focus On Oracle Servers Focus On Oracle Storage Focus On Oracle Solaris Focus on Virtualization ‚Äď Oracle VM Welcome to Open World 2014 !


Architectures and Technologies

Deploy and operate your Business Applications quickly and securely

Last week, we announced the upcoming release of Solaris 11.2. Even if this is a minor release, it contains major features that bring strong benefits to IT operations, addressing the business requirements I hear every day: time-to-market, pay for what you use, security, and simplified operations. Yes, operating systems are not a commodity: they are a cornerstone bringing either a lot of value or a lot of pain, depending on which one you rely on. At Oracle, we are putting a large part of our R&D effort into hiding complexity and bringing strong value through Solaris to your infrastructure and application management: Solaris acts as the glue between your hardware execution layer and your applications.

Time-to-market: Centralized Cloud Management, Fast and Agile Application Provisioning

Cloud brings a lot of promise, and as such all businesses are looking hard at cloud optimization, targeting time-to-market (agility) and pay for what you use. (Source: TwinStrata) On the other side, whether for cloud service providers or for your internal IT operations, there is a lot of work to do to make it happen in a standard, repeatable, secure and scalable model. To achieve this, one emerging standard embraced by the major IT players, including Oracle, is OpenStack. Still, from discussions with many system integrators who are starting to leverage it, OpenStack requires a lot of work to get right and to maintain, as it is on a constant evolution path. What we did in Solaris 11.2 was to build the foundation of the OpenStack framework on strong, proven features already in place, engineered so it can be delivered quickly and reliably. This is a key milestone: it provides cloud service providers and internal IT with the foundation layer required to achieve the expected time-to-market for your cloud infrastructure. The nice thing about it is that it becomes the management point not only of your Solaris environments, but of all your OpenStack-compliant systems.
Pay for what you use: Independent and Isolated Environments... compliant with licensing boundaries

The next step, once the service is deployed, is to provide the boundaries required to pay for what you use, not only from a pure hardware perspective, but also in terms of the software licensing model. We have had this capability since Solaris 10, with the Solaris Zones virtualization technology. In Solaris 11.2, it has been enhanced to simplify the allocation of resources at the CPU, core and socket level, matching your software license requirements.

Security: Reduce Risk with Comprehensive Compliance Checking and Reporting

In a cloud environment, where resources are shared, you need to ensure very strong security mechanisms between tenants. Thanks to the security built into Solaris, we were already able to provide those mechanisms. Solaris 11.2 takes this a step further by providing compliance checking and reporting on the configuration, such as the PCI DSS compliance required in the card banking industry. This gives cloud providers and cloud consumers a reliable, secure platform to build upon, simplifying complex configurations and the associated audit processes.

Simplified operations: Enhancing OpenStack with Live Reconfiguration

Once you get your application on time, paying for what you use in a secure environment, you may think you are done. Are you? Of course not! What about application upgrades, platform patching, and all the lifecycle management of both your application and the cloud platform you rely on? This is a well-known challenge of consolidation projects: the more tenants you put on a platform, the harder it is to find a proper maintenance window.
At the Solaris 11.2 platform level, you can now:
- dynamically reconfigure Oracle Solaris Zones without requiring a reboot, helping to eliminate system downtime
- manage Solaris Zones patching independently, helping you apply your system and/or application patches at your own pace

To go further in understanding the many kinds of value the Solaris 11.2 platform can bring to your business, I invite you to join us in Paris, on May 22nd, for a Solaris Tech Day. You will have the opportunity to exchange with Chris Arms, VP of Engineering, and with ISVs and customers using it today. Associated resources: Press Release; Oracle Solaris Partner Quotes; Oracle Solaris Videos; What's New in Oracle Solaris 11.2 Beta; Oracle Solaris 11.2 Beta FAQs.



Mobile World Congress in Barcelona: IoT, connected cars… and boats

I was wondering, when I entered the exhibitor halls, whether I was really at the Mobile World Congress or at a worldwide automotive show. Nearly every booth has a car, to demonstrate a mobile phone on wheels… But for Oracle: we have a boat! We demonstrate how the 300 sensors embedded in the America's Cup sailing boat can drive real-time human decisions. Of course, this is just one of our many use cases. I won't go through the entire list, from communications, connected cars and smart grids to analytics and cloud-enabled mobile applications. I will rather focus on an interesting embedded topic, as the trend around IoT (Internet of Things) is huge. I already touched on this in my last OOW blog. As we are in a (very) small world (i.e. the device can be very small), the BOM (bill of materials) needs to hit the right price and the right energy consumption, and provide a very long lifecycle: you don't want your car to break after one year, to face complex upgrades every 6 months, or to be connected (physically) to your repair shop too often. That's where Java comes into play, because it provides the right footprint, management and lifecycle. The new thing we are showing today is Java with a TEE (Trusted Execution Environment) integrated in hardware. This brings security inside the device by providing a secure store for your keys. Security is a major concern in IoT, especially for large industrial projects like connected cars, smart grid, smart energy or even healthcare: you don't want your devices to be tampered with, for 1) safety reasons or 2) fraud. And Java is a good fit for all IoT use cases, even those with stronger security requirements, which, for example, Gemalto is implementing with it. To help you get there, Gemalto's Cinterion concept board enables you to quickly prototype your embedded devices and connect them securely (even your dart board)... On the other side of those devices, there is you! And you need to make enriched decisions… That's where data (and analytics) comes into play. For this part, I invite you to join us in Paris, on March 19th, for a special event around Data Challenges for Business. Ganesh Ramamurthy, Oracle VP of Software Development in our Engineered Systems group, will be with us to explain what Oracle Systems brings to manage and analyze all your data. He will be joined by Cofely Services - GDF Suez, Bouygues Telecom and Centre Hospitalier de Douai, who will share their experiences.
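To illustrate the idea of a secure on-device key store, here is a minimal Java sketch using the standard KeyStore API. This is illustrative only: a real TEE keeps keys in tamper-resistant hardware, whereas this sketch uses a password-protected software store, and the alias and PIN are made up for the example.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;
import java.util.Arrays;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class DeviceKeyStore {

    /** Generate a device key, protect it in a keystore, reload it, and
     *  verify the recovered key matches. Returns true on success. */
    static boolean roundTrip() throws Exception {
        char[] pin = "device-pin".toCharArray();

        // Symmetric key the device would use to protect its telemetry
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // A software PKCS12 store stands in for the hardware-backed TEE store
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null); // start from an empty store
        ks.setEntry("telemetry-key", new KeyStore.SecretKeyEntry(key),
                new KeyStore.PasswordProtection(pin));

        // Serialize, then reload as a fresh keystore (simulating a restart)
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, pin);
        KeyStore reloaded = KeyStore.getInstance("PKCS12");
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), pin);

        // Only a caller holding the PIN can recover the key material
        KeyStore.SecretKeyEntry entry = (KeyStore.SecretKeyEntry) reloaded
                .getEntry("telemetry-key", new KeyStore.PasswordProtection(pin));
        return Arrays.equals(entry.getSecretKey().getEncoded(), key.getEncoded());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip() ? "key recovered intact" : "mismatch");
    }
}
```

The point of the hardware TEE is precisely that the step this sketch does in software (guarding the key with a PIN) is enforced by the silicon instead, so the key material never leaves the protected environment.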



#OOW2013: Internet of Things... and Big Data

As promised in my first entry a few weeks ago, when preparing for Oracle OpenWorld, I am coming back to IoT: the Internet of Things... and Big Data, as this was the closing topic developed by Edward Screven, Chris Baker and Deutsche Telekom's Dr. Thomas Kiessling. Of course, Big Data and the Internet of Things (or M2M, machine-to-machine) were topics covered not only on the last day, but all along the conference, including at JavaOne, with two interesting sessions from Gemalto. Gemalto even developed a kit to test your own M2M use cases. The Internet of Things opens new opportunities, but also challenges to overcome to get it right, which at Oracle we classify in 3 categories: Acquire & Transmit, Integrate & Secure, and Analyze & Act.

Acquire & Transmit

Just think of the potentially billions of devices that you need to remotely deploy, maintain and update, while ensuring proper transmission of data (the right data at the right time, as your power budget is constrained) and even extending decision making closer to the source. With the standards-based Java platform optimized for devices, we already cover all those requirements today, and we are already involved in major Internet of Things projects, like smart grids or connected cars.

Integrate & Secure

Of course, integrating all the pieces together, securely, is key, as you want the result 1) to work reliably with a potentially very large number of devices and 2) not to be compromised by any means. Here again, at the device level, Java provides the intrinsic security functions that you need: from secure code loading, verification and execution, to confidentiality of data handling, storage and communication, up to authentication of the entities involved in secure operations. And we drive this secured integration all the way up to the datacenter, thanks to our comprehensive Identity and Access Management system, up to data masking, fraud detection, and built-in network security and encryption.
Analyze & Act

Last but not least is to analyze and correlate that data and take appropriate actions. This is where M2M and the Internet of Things link to Big Data. Several "Vs" characterize Big Data: Volume, Velocity (time & speed), Variety (data formats), Value (what in this data is really interesting for my business), Visualization (how do I find something of value in it?) and Veracity (ensuring that what I add into my trusted data, e.g. my data warehouse, from those new sources has been validated). In M2M, we don't always have Volume, but we still have the other Vs to take care of. To handle all this IoT-generated information inside the datacenter, and to correlate it with existing data relevant to your customers' business (ERP, supply chain, quality tracking of suppliers, improving purchasing processes, etc.), you may need tools. That's why Oracle developed the Oracle Big Data Appliance, to build an "HPC for Data" grid including Hadoop and NoSQL to capture that IoT data, and Oracle Exalytics / Oracle Endeca Information Discovery to enable the visualization/discovery phase. Once you pass the discovery phase, we can act automatically, in real time, on the specific triggers that you have identified, thanks to the Oracle Event Processing solution.

Deliver

As you see, the Oracle Internet of Things platform enables you to quickly develop and deliver, securely, an end-to-end solution. The end result is a quick time-to-market for an M2M project like the one presented on stage and used live during the conference. This project was developed in 4 weeks, by 6 individuals! The goal was to control the room capacity, with live control of the in/out doors depending on the flow of participants in the room. And as you can see in the architecture diagram, we effectively cover everything from Java on the device up to Exalytics in the datacenter.
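The "trigger, then act" idea behind that room-capacity demo can be sketched in a few lines of Java. This is a toy model of the concept, not the Oracle Event Processing API: the window size, threshold and class names are all invented for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Toy event-processing trigger: fire an action when the moving average
 *  of the last N sensor readings crosses a threshold. */
public class RoomCapacityTrigger {
    private final Deque<Integer> window = new ArrayDeque<>();
    private final int size;
    private final double threshold;

    public RoomCapacityTrigger(int size, double threshold) {
        this.size = size;
        this.threshold = threshold;
    }

    /** Feed one door-sensor reading; returns true when the trigger fires. */
    public boolean onReading(int peopleCount) {
        window.addLast(peopleCount);
        if (window.size() > size) window.removeFirst(); // sliding window
        double avg = window.stream().mapToInt(Integer::intValue).average().orElse(0);
        return window.size() == size && avg >= threshold;
    }

    public static void main(String[] args) {
        RoomCapacityTrigger trigger = new RoomCapacityTrigger(3, 90.0);
        int[] doorCounts = {50, 70, 85, 95, 100}; // simulated participant flow
        for (int c : doorCounts) {
            if (trigger.onReading(c)) {
                // the "act" part: in the demo, this would drive the doors
                System.out.println("Room near capacity: restrict in-doors");
            }
        }
    }
}
```

A real event-processing engine adds what this sketch leaves out: declarative queries over streams, time-based windows, and fan-out to many correlated sources at once.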



#OOW2013: Jump into the Cloud...

Today we went into the cloud, with 3 major announcements delivered by Thomas Kurian: a full Database as a Service, a full Java as a Service and a full Infrastructure as a Service in Oracle Cloud, guaranteed, backed up and operated by Oracle, with different levels of service.

Database as a Service

You will be able to provision, inside Oracle Cloud, a full Oracle Database (12c or 11g), either as a single node or as a highly available RAC cluster. This database will be accessible with full SQL*Net and root access. The service will be offered in 3 different models:
- Basic Service: pre-configured, automatically installed database software, managed by you through Enterprise Manager Express.
- Managed Service: Oracle databases managed by Oracle, including quarterly patching and upgrades with SLA, automated backup and point-in-time recovery, and elastic compute and storage.
- Maximum Availability Service: Oracle manages a highly available database, including Real Application Clusters (RAC), Data Guard for maximum availability, and a more flexible upgrade schedule.

Of course, you will be able to move your data, or even your entire database, between your enterprise datacenter and Oracle Cloud by leveraging regular tools such as SQL*Loader or Data Pump.

Java as a Service

On the same model as the Database as a Service, you will be able to deploy dedicated WebLogic cluster(s) on our Compute Service. Full WLST, JMX and root access will be provided as well. The 3 service models are the following:
- Basic Service: pre-configured, automatically installed WebLogic software, with a single-node WebLogic Suite (12c or 11g), managed by you using Enterprise Manager.
- Managed Service: Oracle manages one or more WebLogic domains, in the same way as the Database as a Service's Managed Service.
- Maximum Availability Service: Oracle manages a highly available environment, with a WebLogic cluster integrated with RAC, automated disaster recovery and failover, more flexible upgrade schedules, and an additional staging environment.

So now let's have a quick look at the constituents of our Infrastructure as a Service layer.

Infrastructure as a Service

Compute Service: provides elastic compute capacity in Oracle Cloud, based on 3 different types of requirements: standard, compute-intensive or memory-intensive. Management is based on a REST API, and root VM access is provided as well. The Compute Service offers network isolation and elastic IP addresses. And of course, it is highly available.

Storage Service: stores and manages digital content. Management is through Java and REST APIs (OpenStack Swift). It has been designed for performance, scalability and availability.

All these new or enhanced services complement the Oracle Software as a Service offerings that already exist and have been adopted with success by many of our customers, as shown in many testimonies during Thomas's keynote. This provides a platform for our partners, who leverage our technologies to build their own services in Oracle Cloud. That's why we also created the Oracle Cloud Marketplace, enabling the delivery of our partners' applications, as well as their combination and integration tailored to your specific needs, directly in Oracle Cloud. Let's jump into the cloud...
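To give a feel for what REST-based management of such a compute service looks like, here is a hedged Java sketch using the standard `java.net.http` client. The endpoint URL, resource path and JSON payload are entirely hypothetical, invented for illustration; they are not the actual Oracle Cloud API.

```java
import java.net.URI;
import java.net.http.HttpRequest;

/** Sketch of driving a cloud compute service over REST.
 *  Endpoint and payload shape are hypothetical, for illustration only. */
public class ComputeRestSketch {

    /** Build (but do not send) a request to launch a compute instance. */
    static HttpRequest launchInstanceRequest(String shape, String name) {
        String json = String.format(
                "{\"name\": \"%s\", \"shape\": \"%s\"}", name, shape);
        return HttpRequest.newBuilder()
                .uri(URI.create("https://cloud.example.com/compute/v1/instances"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        // One request per shape the post mentions: standard, compute- or memory-intensive
        HttpRequest req = launchInstanceRequest("memory-intensive", "app-node-1");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending the request would just be `HttpClient.newHttpClient().send(req, ...)`; the sketch stops at building it so it stays runnable without network access or credentials.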



#OOW2013: All your Database in-memory for All your existing applications... on Big Memory Machines

Many announcements were made today by Larry Ellison during his opening of Oracle OpenWorld. To begin with, the America's Cup is still running, as Oracle won today's races. I must admit that seeing those boats racing at such speed and crossing each other within a few meters was really impressive. On the OpenWorld side, it was also very impressive: more people are attending the event this year, 60,000! And in terms of big numbers, we saw very impressive results from the new features and products announced today by Larry: the Database 12c In-Memory option, the M6-32 Big Memory Machine, the M6-32 SuperCluster and the Oracle Database Backup, Logging, Recovery Appliance (yes, I am not joking, that's its real product name!).

Database 12c In-Memory option: both row and column in-memory formats for the same data/table

This new option will benefit all your existing applications, unchanged. We leverage memory to store both formats at the same time. This enables us to drop all the indexes that are usually necessary to process queries, with a design target of a 100x performance improvement for real-time analytics. As you will see later, we can achieve even more, especially when running on an M6-32 Big Memory Machine. At the same time, the goal was also to improve transaction performance by 2x! The nice thing about this option is that it benefits all your existing applications running on top of Oracle Database 12c: no change required. On stage, Juan Loaiza gave a small demonstration of this new option on a 3-billion-row database representing Wikipedia search queries. On a regular database, without this option, after identifying (or guessing) the queries most likely to be run by users, you put appropriate indexes in place (from 10 to 20 of them); you can then run your query with acceptable performance, in this case 2,005 million rows scanned per second instead of 5 million rows scanned per second. Not too bad... Now, replacing the required indexes with the new column format stored in-memory, we achieved in this case: 7,151 million rows scanned per second! Something that people looking into Big Data and real-time decisions will surely want to look at.

The second announcement was a new processor, and a new system associated with it: the M6 chip and the M6-32 Big Memory Machine... available now!

M6-32 Big Memory Machine: Terabyte-Scale Computing

This system is compatible with the previous generation of M5 chips, protecting existing investment, and can also host the new M6 processor, with 12 cores and 96 threads. Everything in this system is about terabytes: up to 32 TB of memory, 3 TB/sec of system bandwidth, 1.4 TB/sec of memory bandwidth, and 1 TB/sec of I/O bandwidth! This new machine is also the compute node of the new M6-32 SuperCluster, also announced today.

M6-32 SuperCluster: In-Memory Database & Application System

That's our fastest database machine, with big memory for the column store and integrated Exadata storage! Juan Loaiza ran the same Wikipedia search demonstration on this system... not with 3 billion rows, but with 218 billion rows! The result speaks for itself: 341,072 million rows scanned per second! With such critical systems hosting this amount of data, it is also very important to provide a powerful database backup and restore solution... and that's what the latest appliance announced today is about.

Oracle Database Backup, Logging, Recovery Appliance

Just by reading its name you get nearly all the capabilities this new appliance provides. First, it is specialized to back up the Oracle Databases of ALL your systems (Engineered Systems, like the latest M6-32 SuperCluster or Exadata, as well as your regular servers). Second, it also captures all your database logs, so you have not only a backup but also the deltas between now and your latest backup. This allows you to go back to whatever point you want when recovering your database. It can even be coupled with our new Database Backup service on the Oracle Public Cloud, for an extra secure copy. With this new appliance, you can be confident your Oracle Database data is secured.

Building your future datacenter

Today, not only did we see the new Oracle Database 12c enabling in-memory processing for all your applications, we also saw the associated M6-32 server and the M6-32 SuperCluster engineered system to run the stack with big memory capacity... all secured by the Oracle Database Backup, Logging, Recovery Appliance. All of these innovations contribute to building your datacenter of the future, where everything is engineered to work together at the factory.
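To picture why an in-memory column format can replace indexes for analytic scans, here is a small standalone Java sketch. It models the row-versus-column layout idea only, not Oracle's actual implementation; the record shape and data are invented for the example.

```java
import java.util.stream.IntStream;

public class RowVsColumnScan {
    // Row format: each record is one object, attributes interleaved in memory
    record Row(long id, int views, String title) {}

    /** Compute the same aggregate over a row store and a column store. */
    static long[] sums(int n) {
        Row[] rows = new Row[n];
        int[] viewsColumn = new int[n]; // column format: one contiguous array
        for (int i = 0; i < n; i++) {
            rows[i] = new Row(i, i % 100, "page-" + i);
            viewsColumn[i] = i % 100;
        }

        // Row scan: walks every object just to read a single attribute
        long rowSum = 0;
        for (Row r : rows) rowSum += r.views();

        // Column scan: a straight pass over packed ints, cache-friendly, no index
        long colSum = IntStream.of(viewsColumn).asLongStream().sum();
        return new long[] { rowSum, colSum };
    }

    public static void main(String[] args) {
        long[] s = sums(1_000_000);
        System.out.println(s[0] == s[1]); // true: same answer, different layout
    }
}
```

The answer is identical either way; what changes is how much irrelevant data the scan has to drag through the cache, which is the effect the on-stage demonstration made visible at billion-row scale.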



GRTgaz new Information System on Oracle SuperCluster

This testimony from Mr Sébastien Flourac, Head of Strategy for GRTgaz IT, concluded last week's SPARC Showcase event. Mr Flourac highlighted why he selected Oracle SuperCluster, an Engineered System, over the more traditional build-it-yourself approach that he also studied. Due to EEC regulation, GRTgaz, a subsidiary of GDF-Suez, has to be externalized, including of course all its applications and existing IT, in less than 2 years. But the current platforms are shared with other GDF-Suez services, which means that GRTgaz has to build an entirely new platform to migrate its existing applications with the lowest associated risks. As a major part of the technologies supporting GRTgaz applications was running on Oracle Database and Oracle WebLogic, on either IBM AIX or SPARC Solaris, GRTgaz had a closer look at what Oracle proposes to simplify running Oracle software on Oracle hardware, compatible with the existing GRTgaz environment. And it became obvious to Mr Flourac that Oracle SuperCluster was the best fit for his project, and for the future, for several reasons.

Simplicity and lower cost

With Oracle Engineered Systems, all the complexity and cost of traditional build-it-yourself solutions has been taken care of at the Oracle engineering level. All the configurations and setup have been defined and integrated at all levels (software, virtualization and hardware) to offer the best SLA (performance and availability). This helped simplify the externalization project, brought additional benefits at the storage layer for the future, and was the best financial scenario in their project context.

Lower risks

Not only does the SuperCluster offer the best SLA by design, it also provides a very important feature for this complex application migration: full compatibility for running existing Oracle software versions. This was very important for Mr Flourac, to avoid having to both migrate and upgrade in the same project. It also provides: an integrated stack of Oracle software and hardware; an upgrade process tested by Oracle; better support of the entire stack.

Built for the future

Oracle SuperCluster provides GRTgaz with a consolidated, homogeneous and extremely scalable platform, which not only enables this externalization project but will also be able to host new business requests. With this new platform in place, Mr Flourac already knows that in the next phases he will be able to leverage additional integrated and unique features that running Oracle software on Oracle SuperCluster provides: Exadata integration and acceleration for Oracle Database starting with 11gR2, and Exalogic integration and acceleration for Oracle WebLogic starting with 10.3.4. Of course the SuperCluster is a key enabler, but such a project also requires a team to manage the migration, the transition and the run. This is done with the support of Oracle ACS (transition), Fujitsu (migration) and Euriware (run).



T5-2 for high-performance financial trading

In this post, I will focus on the second testimony reported by François Napoleoni. Here, the goal was to select the best platform to face the growth and upgrade of a financial trading application. The comparison was made between:
- T5-2, running Oracle VM, Solaris, Sybase and the financial application
- x86, running VMware, Red Hat, Sybase and the financial application

The decision criteria were: simplified architecture; system performance in real life (what has been tested and measured); platform stability; single point of support; leveraging internal skills; strong security enforced between the different virtual environments; ROI of the solution. For those of you who understand French, you can listen to the short video "MISYS, Kondor et les Sparc T5 sécurisent les transactions bancaires" (2013). And for the English readers, go through more details in this post.



Oracle VM virtualization on T4-4: architecture and service catalog

Last Tuesday, during the SPARC Showcase event, Jean-Fran√ßois Charpentier, Fran√ßois Napoleoniand S√©bastien Flourac delivered very interesting uses cases of deployment of latest Oracle Systems at work : OracleVM virtualization project on T4-4, intensive financial trading on T5-2 and a completeInformation System migration to Oracle SuperCluster. I will start to cover in this post the main points that Jean-Fran√ßois Charpentier focus on to build a consolidated T4-4 platform, effective for his Business users. Oracle VM virtualization on T4-4 Architecture As often, Mr Charpentier had to handle existing environment, that needed to be taken into account when building this new virtual platform. First he had to be able to provide a platform that could consolidate all the existing environments, the main driver here being : total memory requirement of existing asset multi-tenant capability to share the platformsecurely between several different networks comply to strong SLA T4-4 was the best building block to cover them : memory : up to 2 TB multiple networks connection : up to 16x PCIe extension Oracle VM built-in with redundant channels capability and live migration Solaris binary compatibility to enable easy consolidation of existing environments Overall Architecture Design The deployment choice have been to setup 2x Oracle VM T4-4 clusters per sites as follow: To cover his SLA requirements, Mr Charpentier built redundancy not only by providing multiple T4-4 nodes per Oracle VM clusters, but also at the Oracle VM itselfs. For Oracle VM, he chose to make redundant storage and network virtual access layer as display in the following 2 diagrams. Oracle VM Virtual Mutlipathing with alternate I/O Domain Oracle VM Virtual Network Access through IPMP  All of this virtual network layer being link to different network back-bones, thanks to the 16x PCIe extension of the T4-4, as illustrated in the following diagram. 
Another option could have been to deploy Oracle Virtual Networking, to enable disk and network access with only 2x PCIe slots at the server layer.

Oracle VM on T4-4 Service Catalog

Beside the architecture choices, which need to comply with strong SLAs, the development of a service catalog is also key to moving IT toward a service-provider model. And that is exactly what Jean-François Charpentier put in place, as follows. By putting this new virtual platform in place with its associated service catalog, Mr Charpentier was able to provide his business with better agility, thanks to easier and faster deployment. This platform has become the standard for all Solaris deployments in his business unit, and they expect to reach 90% to 100% Solaris virtualization by 2014.


Architectures and Technologies

Preparing for #OOW: DB12c, M6, In-memory, Clouds, Big Data... and IoT

It's always difficult to fit the upcoming Oracle OpenWorld topics, and all its sessions, into one title. Even though "Simplifying IT. Enabling Business Transformation." makes it clear what Oracle is focusing on, I wanted to be more specific on the "how". For those of you who attended the Hot Chips conference, some of the acronyms will be familiar; some may not (I will come back to "IoT" later). For those of you attending, or those who will get the session presentations once they are available online, here are a few things you don't want to miss. They will give you not only what Oracle R&D has done for you since last year, but also what customers like you have implemented thanks to the red stack and its partners, be they ISVs or SIs. First, don't miss the Oracle executive keynotes; second, have a look at the general sessions delivered by VPs of Engineering to get more in-depth direction; and last but not least, network with your peers, whether in specific deep-dive sessions, experience sharing, or on the demo grounds, where you will see the technologies in action with Oracle developers and subject-matter experts. You will find a small selection hereafter.
Oracle Strategy and Roadmaps
- Oracle Database 12c - Engineered for Clouds and Big Data [GEN8229], with Andrew Mendelsohn
- Engineered Systems Principles and Architecture [GEN9730], with John Fowler, Juan Loaiza, Balaji Yelamanchili, and Ganesh Ramamurthy
- SPARC Systems Update and Roadmap [GEN8954], with Masood Heydari
- The World's Fastest Microprocessor: Today and Tomorrow [CON9191], with Ricky Hetherington
- Pushing the Envelope - New Storage Solutions from Oracle [GEN9159], with Michael Workman, James Kate, Scott Tracy
- Oracle Virtualization Strategy and Roadmap [GEN9535], with Wim Coekaerts
- Oracle Solaris Strategy, Engineering Insights, and Roadmap [GEN9021], with Markus Flierl
- Big Data Deep Dive - Oracle's Strategy and Roadmap [GEN8645], with Jean-Pierre Dijcks
- Oracle's Strategy for Its Public Cloud Platform and Infrastructure Services [GEN8617], with Christopher Pinkham

Industry Focus
- Perspective on Today's Utilities and the Industry Vision for Tomorrow [GEN10638], with Perry Stoneman - CAP Gemini
- Communications Industry [GEN10054], with Montgomery Hong - Accenture
- Transformational Steps Toward Modernizing Life Sciences R&D [GEN11354], with Neil De Crescenzo
- Empowering Modern Finance [GEN8986], with Drew Scaggs - Deloitte Consulting LLP
- Big Data in Financial Services [CON5929], with Jim Acker and Ambreesh Khanna
- Big Data for Connected Vehicles [CON10157], with Rakhi Makad, Infosys Ltd
- Engineered Systems for Retail: Lower Total Cost of Ownership [CON9894], with dunnhumbyUSA, Hot Topic, Inc., Whole Foods Market, Inc.
Project Implementation Feedback & Lessons Learned
- Moving Oracle's Global Single Instance to a SPARC SuperCluster [CON8865] - Oracle
- Oracle Runs on Engineered Systems: How to Upgrade a Global Enterprise in a Year [CON8566] - Oracle
- Deploying Fabric-Based Data Centers with Oracle Virtual Networking [CON4432] - Omgeo LLC
- Building a SPARC Cloud with Oracle Enterprise Manager Ops Center 12c [CON9589] - Verizon Wireless
- SPARC T5 Servers at UZ Leuven: Performance Testing Results [CON5953] - UZ Leuven
- Customer Panel: Oracle Solaris in Action [CON6997] - Verizon, State of Michigan, Allied Irish Banks
- Ten Key Solaris Zones Differentiators Enhancing Cloud Platforms [CON1933] - AAPT
- Best Practices for Mission-Critical Applications on Oracle SuperCluster [CON9263] - HDFC Securities Ltd
- Changing the Game with Big Data Insights [CON5488] - BAE Systems Detica
- Customer Panel: Data Warehouse and Big Data [CON8620] - Thomson Reuters, Turkcell, P&G, Zagrebačka banka
- Customer Success: EPM on Oracle Exalytics [CON9516] - Deutsche Telekom AG
- Oracle Exalogic: Delivering Extreme Performance for SOA, Healthcare, and Fast Data [CON9294] - Emdeon, Canon
- Oracle Exadata and Oracle Enterprise Manager 12c: Extreme Consolidation in the Cloud [CON2734] - HDFC Bank Ltd
- Accelerate Your Oracle Exadata Deployment with DBA Skills You Already Have [UGF9791] - The Pythian Group Inc.
- Database Migration Best Practices and Oracle Migration Factory [CON6054] - UBS
- Best Practices for Creating and Managing Database Copies [CON3888] - Turkcell
- PeopleSoft In-Memory Project Discovery: Gain a Competitive Advantage [CON9166] - Ciber
- Running SAP Applications on Oracle Exadata [CON8587] - Lion Corporation

Deep Dives with the Experts
- What's New in Oracle Database Application Development [GEN8579] - Tom Kyte
- The Latest Oracle Innovations in Data Protection [CON8734] - Juan Loaiza
- Breakthrough Efficiency for Enterprises with In-Memory Computing for Database, Middleware, Apps [CON9020] - Uday Shetty
- The New Oracle SPARC M6: In-Memory Infrastructure for the Entire Enterprise [CON9067] - Gary Combs
- Speed Oracle Database 12c with Direct Database-to-Storage Communications [CON6358] - Mark Maybee
- Searching Huge Transactional and Digital Data Stores Across Multiple Databases [CON9068] - Michael Brewer
- Oracle Virtual Compute Appliance: From Power On to Production in About an Hour [CON11650] - Premal Savla
- Consolidate Databases and Provide Dynamic Quality of Service [CON6651] - Scott Michael
- Creating a Database Cloud for DBaaS on Oracle SuperCluster [CON6227] - Thomas Daly, Lawrence Mcintosh, Roger Bitar
- Delivering Oracle Fusion Middleware "as a Service" on Oracle Exalogic [CON9227] - Ayalla Goldschmidt
- Optimizing Oracle E-Business Suite with Oracle Engineered Systems [CON8259] - John Snyder
- Accelerate SAP with Oracle SuperCluster [CON8635] - Pierre Reynes

Learn How to Do It Yourself (in One Hour): Hands-on Labs
- Deploy and Manage a Private Cloud with Oracle VM and Oracle Enterprise Manager 12c [HOL10003]
- Oracle Real Application Clusters 12c: Deploying Four Nodes in Minutes with Oracle VM Templates [HOL9982]
- Oracle Solaris for Red Hat Linux Users [HOL10163]
- How to Set Up a Hadoop Cluster with Oracle Solaris [HOL10182]
- Best Practices for Migrating to Oracle VM and Oracle Linux from VMware and Red Hat [HOL9981]
- Dynamic Provisioning of Virtual Network Resources with Oracle Virtual Networking [HOL10105]
- Oracle Solaris Integrated Load Balancer in 60 Minutes [HOL10181]
- Zero to Sun ZFS Storage Appliance Backup and Recovery in 60 Minutes [HOL8348]
- Self-Service Discovery with Oracle Endeca Information Discovery: Bring Your Own Data [HOL10119]

Watch the Technologies at Work: Demo Grounds
- Analyzing Enterprise Big Data Using Oracle SuperCluster T5
- Extreme Analytics with Oracle Exalytics In-Memory Machine T5-8
- In-Memory Features on Oracle SPARC M5/M6 Servers
- Oracle Engineered Systems Showcase
- Using Oracle SuperCluster to Improve Your SAP Operations and Achieve Extreme Performance
- Accelerating Banking Automation: How to Optimize Oracle FLEXCUBE Deployment Platforms
- Accelerate SAS Applications with Oracle's In-Memory Infrastructure
- Oracle Fusion Middleware Tools for Test-to-Production, Oracle Exalogic Factory Assemblies
- Oracle Virtual Compute Appliance: From Loading Dock to Production in One Hour
- Virtual Networking for Hybrid Data Centers Using Oracle's Software-Defined Networking Controller
- Automated Oracle VM Templates for SPARC Creation, Deployment, and Configuration
- Accelerate Development by Cloning Complete Database Environments in Minutes

This digest is an extract of the many valuable sessions you will be able to attend to accelerate your projects and your IT evolution.


Architectures and Technologies

Why OS matters: Solaris Users Group testimony

Wednesday evening, a month after the launch of the new SPARC T5 & M5 servers in Paris, the French Solaris users group got together to get the latest from Oracle experts on SPARC T5 & M5, Oracle Virtual Networking, and the new enhancements inside Solaris 11.1 for Oracle Database. They also came to share their project experiences and lessons learned leveraging Solaris features: René Garcia Vallina from PSA did a deep dive on ZFS internals and best practices around SAP deployment, and Bruno Philippe explained how he managed to consolidate 100 Solaris servers into 6 thanks to Solaris 11 specific features. It was very interesting to see all the value that an operating system like Solaris can bring. Nowadays, operating systems are often deeply hidden in the bottom layers of the IT stack, and we tend to forget that this is a key layer for leveraging all the hardware innovations (new CPU cores, SSD storage, large memory subsystems, ...) and exposing them to the application layers (databases, Java application servers, ...). Solaris goes even further than most operating systems, around performance (I will get back to that point), observability (with DTrace), reliability (predictive self-healing, ...), and virtualization (Solaris ZFS, Solaris Zones & Solaris network virtualization, also known as project "Crossbow"). All of those unique features bring even more value and benefits for IT management and operations in a time of cost optimization and efficiency. And during this event, this was something we could get from all the presentations and exchanges.

Solaris and SPARC T5 & M5

As Eric Duminy explained in the introduction of his session on the new SPARC T5 & M5, we are looking at a new paradigm of CPU design and associated systems. Following Moore's law, we are using transistors in completely new ways. This is no longer a race for frequency: if you want to achieve performance gains, you need more.
You need to bring application features directly to the CPU and operating-system level. Looking at the SPARC T5, we are talking about a 16-core, 8-threads-per-core processor, with up to 8x sockets and 4 TB of RAM in the SPARC T5-8 server, in only 8 rack units! That also means 128 cores and 1,024 threads, and even more for the M5-32, with up to 192 cores, 1,536 threads, and 32 TB of RAM! That's why the operating system is a key piece that needs to handle such systems efficiently: the ability to scale to that level, to place process threads and their associated memory on the right cores to avoid context switches, and to manage memory so the cores are fed at the right pace. This is all what we have done inside Solaris, and even more with Solaris 11.1, to leverage these new SPARC T5 & M5 servers and get the results we announced a month ago at the launch. Of course we don't stop there. To get the best out of the infrastructure, we design at the CPU, system, and Solaris levels to optimize for the application, starting at the database level. This is what Karim Berrah covered in his session.

Solaris 11.1 unique optimizations for Oracle Database

Karim first explained the reasoning behind the completely new virtual memory management of Solaris 11.1, something that directly benefits Oracle Database for PGA and SGA allocation. You will experience it directly at database startup (twice as fast!). The new virtual memory system will also benefit ALL your applications; just look, for example, at the mmap() function, which is now 45x faster (this is what is used for all the shared libraries). Beyond performance, optimizations have been made in security, audit, and management.
For example, with the upcoming new release of Oracle Database, you will be able to dynamically resize your SGA and also get greater visibility for the DBA into datapath performance, thanks to a new DTrace table directly available inside the database: a tight integration between Oracle Database and Solaris' unique features. Alain Chereau, one of our performance gurus from the EMEA Oracle Solution Center, provided his foresight and expertise. He especially reminded us that performance is achieved when ALL the layers work well together, and that "your OS choice has an impact on the DB, and vice versa. Something to remember for your critical applications." Alain closed the session with final advice on the best use of SSDs for Oracle Database and Solaris ZFS. In short, SSDs are aligned on 4 KB blocks. For Oracle Database, starting with 11.2.0.3, redo logs can be written in 4 KB blocks; this needs to be specified at redo-log creation in the record-size setting. For Solaris, ZFS knows about SSDs and adapts directly. That's the reason why putting the ZFS secondary cache on SSDs ("readzilla") is a very good idea, and a way to avoid the bad behavior introduced by new "blind" storage tiering when combined with ZFS. Just put SSD drives for the ZFS secondary cache directly inside your T5 or M5 servers and you are done. This is an important topic: even if a majority of customers run Oracle Database on ASM in production, to get the benefit of the grid and of Oracle RAC security and scalability, that may be different for development environments. As a matter of fact, for development systems, most customers leverage Solaris ZFS and its compression and practically unlimited clone and snapshot functions. This brings me to René's session on SAP on ZFS...

Lessons learned from deploying SAP on ZFS

Clearly one of the most technical sessions of this event. Congratulations to René for a very clear explanation of ZFS allocation mechanisms and algorithm policies. I will start with René's conclusion: "Don't follow your ISV (SAP in this case) recommendations blindly".
In fact, PSA was experiencing performance degradation and constant I/O activity even with very few transactions on the application side. This was due to the fact that SAP recommends filling the SAP data filesystem to more than 90%! A very bad idea when you put your data on a copy-on-write (COW) filesystem like ZFS, where I always recommend keeping around 20% of free space to allow the COW operations to take place! That is, of course, the new rule for SAP deployments at PSA. So if you already have ZFS deployed with this rule in place, you don't have to read further; just keep doing it and move directly to the next topic... Otherwise, you may currently be facing some performance problems as well. To identify which of your ZFS pools are in this situation, René provided a nice DTrace command that will tell you:

# dtrace -qn 'fbt::zio_gang_tree_issue:entry { @[pid]=count(); }' -c 'sleep 60'

Then, to solve the problem, you understand that you need to add free space to enable the COW operations (in one shot). The best way would be to add a vdev (for more details: Oracle Solaris ZFS: A Closer Look at Vdevs and Performance). You could also use a zpool replace with a bigger vdev, but that's not the best option in the long run. If you go through a whole modification cycle of the content of the pool, your zpool will "defragment" by itself. If you want to "defragment" the ZFS pool immediately and you have a database, you can do it through "alter table move" operations (special thanks to Alain Chereau for the tip). For standard files, you need to copy them and rename them back or, better, do a zfs send | zfs receive to another free zpool and you are done.

From 100 servers to 6 thanks to Solaris 11

Last but not least, we also had another deep-dive session during this event, with a live demo!
Thanks to Bruno Philippe, President of the French Solaris Users Group, who shared with us his project of consolidating 100 servers, running Solaris 8 to Solaris 10, into 6 servers with minimal to no business impact! Bruno achieved his project thanks to unique new Solaris 11 features: Solaris network virtualization, combined with Solaris Zones P2V and V2V, and the SPARC hardware hypervisor (Oracle VM Server for SPARC, also known as "LDoms", or Logical Domains). I invite you to visit Bruno's blog for more details: Link Aggregations and VLAN Configurations for your consolidation (Solaris 11 and Solaris Zones). I am awaiting his next entry, explaining the details of the V2V and P2V operations that he demonstrated to us live on his laptop through a Solaris 11 x86 VirtualBox image. I hope to see you at the upcoming Solaris and SPARC events to share your feedback and experience with us. The upcoming Paris events will take place on June 4th, for datacenter virtualization with a focus on storage and network, and July 4th, for a special session on the new SPARC servers and their business impact.
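To give a concrete flavor of the Solaris 11 network-virtualization features that make this kind of consolidation possible, here is a minimal sketch; the link names, VLAN IDs, and zone names are illustrative assumptions, not Bruno's actual setup:

```shell
# Aggregate two physical links for bandwidth and resiliency
dladm create-aggr -l net0 -l net1 aggr0

# Carve per-environment virtual NICs on top, isolated by VLAN tag
dladm create-vnic -l aggr0 -v 100 prod_vnic0   # production network
dladm create-vnic -l aggr0 -v 200 test_vnic0   # test network

# Hand each consolidated zone its own VNIC
zonecfg -z prodzone 'add net; set physical=prod_vnic0; end'
zonecfg -z testzone 'add net; set physical=test_vnic0; end'
```

With this pattern, each migrated zone keeps its own network identity and VLAN isolation while sharing a small number of physical links on the consolidated server.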


Architectures and Technologies

IT Modernization: SPARC Servers Engineering Vice President in Paris

With the complete renewal of the SPARC servers announced two weeks ago, Masood Heydari, Senior Vice President of SPARC Server Engineering, will be in Paris on April 18th to share what the new SPARC T5 and M5 servers bring to the market. Following Masood's keynote, Didier Vionnet, Vice-President of Back-office at ACCOR; Bruno Philippe, President of the French Solaris Users Group; Renato Vista, CTO of CAP Gemini Infrastructure Services; Harry Zarrouk, Director of Oracle Systems for France; and myself will take part in a round table on the benefits these innovations bring to IT modernization and to the new business needs of enterprises. I invite you to register for this event and come exchange with all the speakers.


Innovation

New SPARC Servers Launched Today: Extreme Performance at Exceptionally Low Cost

It will be very difficult to summarize in a short post all the details and the already-available customer and ISV results leveraging Oracle's investment, design, and ability to execute on a complete SPARC server renewal, with not one but two processors launched: SPARC T5 and M5. It is somehow captured in the title of this entry, in Larry's own words: "extreme performance at exceptionally low cost". To give you a quick idea, we just announced 8,552,523 tpmC with 1x T5-8 (a new 8-socket T5 mid-range server). Add on top "extreme scalability with extreme reliability": with the M5-32 server, we can scale up to 32x sockets and 32 TB of memory in a mission-critical system.

A new way of designing systems

As John Fowler said: "this starts with design". Here at Oracle, we have a new way of designing. Historically, systems were designed by putting servers, storage, network, and OS together. At Oracle, we add the Database, Middleware, and Applications to the design. We think about what it takes for the coherency protocols and the interfaces, and design around those points... and more. Today we introduce not one but 2x processors, with the whole family of servers associated with them. Thanks to a common architecture, they are designed to work together. All of this, of course, runs Solaris. You can run Solaris 10, Solaris 11, and virtualize, with no break in binary compatibility.

Direct benefit for your applications... at lowest risk... and lowest cost

This is good for our customers and our ISVs, enabling them to run their applications unchanged on these new platforms with unmatched performance gains, lowest cost, and lowest risk, thanks to the binary compatibility and the new server designs of the Oracle era. There were many customer examples on stage, so I will just pick two: SAS moving from an M9000 to an M5-32 with a 15x gain overall, and Sybase moving from an M5000 to a T5-2 with an 11x gain overall.
These are, in my opinion, very important, as they reflect real applications and customer experiences, many of them in financial services, that have already jumped onto these new systems (thanks to the beta program). To get a better idea of what the new SPARC T5 and M5 will bring to your applications, be they Siebel, E-Business Suite, JD Edwards, Java, or SAP, have a look here: https://blogs.oracle.com/BestPerf/ at the 17 world records... on performance and price.


Architectures and Technologies

#OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and with customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update on Oracle and clouds, from private to public and hybrid models. So, in preparing this session, I thought it was a good start to take stock of cloud computing in France, and of CIO requirements in particular. Starting in 2009 with the first CloudCamp in Paris, the market has evolved, but the basics are still the same: think hybrid.

From Traditional IT to Clouds

One size doesn't fit all, and for big companies that already have an IT estate in place, there will be parts eligible for an external (public) cloud and parts required to stay inside the firewall, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the cloud computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end users are now used to in their day-to-day lives, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a global service provider", or for some into "IT 'is' the business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models into a private cloud service provider. In this journey, there is still a big difference between most existing external clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are highly specialized, but at the end of the day, there are really few business processes that rely on only one application.
So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve into a private cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different cloud models, what types of users they address, what value they bring, and, most importantly, what needs to be done by the cloud provider and what is left to the user.

IaaS, PaaS, SaaS: what's provided and what needs to be done

First of all, the cloud provider will have to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. As we can see in the above picture, providing pure IaaS leaves a lot for the end user to cover; that's why the end users targeted by this cloud service are IT people. If you want to bring more value to developers, you need to provide them a development platform ready to use, which is what PaaS stands for, providing not only processing power, storage, and OS, but also the database and middleware platform. SaaS is the last mile of the cloud, providing an application ready to use by business users; the remaining part for the end users is configuring and specifying the application for their specific usage. In addition, there are common challenges encompassing all types of cloud services:
- Security: covering all aspects, not only user management but also data flows and data privacy
- Charge back: measuring what is used, and by whom
- Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, through the database and middleware for PaaS, to a full business application for SaaS
- Scalability: the ability to evolve ALL the components of the cloud provider's stack as needed
- Availability: the ability to cover "always on" requirements
- Efficiency: providing an infrastructure that leverages shared resources in an efficient way while still complying with SLAs (performance, availability, scalability, and the ability to evolve)
- Automation: providing orchestration of ALL the components across the whole service lifecycle (deployment, growth & shrink (elasticity), upgrades, ...)
- Management: providing monitoring, configuration, and self-service up to the end users

Oracle Strategy and Clouds

For CIOs to succeed in their private cloud implementation means that they cover all those aspects for the lifecycle of each component they selected to build their cloud. That's where a multi-vendor, layered approach comes up short in terms of efficiency. That's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient cloud-service solutions for IaaS, PaaS, and SaaS. We go as far as embedding software functions in hardware (at the storage and processor levels, ...) to ensure the best SLAs with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components you are running in-house today are exactly the same ones we use to build clouds, bringing you flexibility, reversibility, and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability, and the ability to evolve by design.
To give you a feeling of what that brings just in terms of implementation project timelines: for example, with Oracle SPARC SuperCluster, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, database storage (Exadata Storage Cells in this case), application storage, and all the network configuration. This strategy enables CIOs to build cloud services very quickly, taking out not only the complexity of integrating everything together, but also the complexity and cost of automation and evolution. I invite you to discuss all those aspects, with regard to your particular context, face to face on November 28th.


Innovation

#OOW 2012: Big Data and The Social Revolution

As Cognizant CSO Malcolm Frank said about the "Future of Work", and how business should prepare in the face of the new generation, not only of devices and the "Internet of Things" but also of their users ("the Millennials"), moving from "consumers" to "prosumers": we are at a turning point today which is bringing us to the next IT architecture wave. So this is no longer just about putting Big Data, social networks, and customer experience (CxM) on top of old existing processes; it is about embracing the next curve, by identifying which processes need to be improved, but also, and more importantly, which processes are obsolete and need to be gotten rid of, and which new processes need to be put in place. It is about managing both the hierarchical, structured enterprise and its social connections and influencers, inside and outside of the enterprise. And this applies everywhere, up to utilities and smart grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time with those new technologies, but about understanding what needs to be looked at, in real time, down to a handful of relevant reports with the KPIs relevant to the business. It is about how IT can anticipate the next wave and is able to answer business questions, and put those capabilities, in real time, right in the hands of the decision makers... This is the turning curve, where IT is really moving from the past decade's "cost center" to "value for the business", as corporate stakeholders will be able to touch the value directly at the tips of their fingers. It is all about making data-driven strategic decisions, encompassed and enriched by ALL the data, and connected to customer/prosumer influencers. This brings stakeholders the ability to make informed decisions on questions like: "Who would be the best Olympic gold winner to represent my automotive brand?"... in a few clicks and in real time, based on social media analysis (Twitter, Facebook, Google+...)
and connections linked to my enterprise data. A true example, demonstrated by Larry Ellison in real time during yesterday's keynote, where "Hardware and Software Engineered to Work Together" is not only about extreme performance, but also about solutions the business can touch, thanks to well-integrated customer experience management and social networking: bringing IT the capabilities to move to the next IT architecture wave. An example illustrated today in two other sessions that I had the opportunity to attend. The first session brought the "Internet of Things" in Oil & Gas into actionable decisions thanks to complex event processing, capturing sensor data with a ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets, and Exalytics to provide the informed-decision interface up to the end user. The second session showed the Real-Time Decisions engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet. I have to close my post here, as I have to go run our practical hands-on lab, cooked up with Olivier Canonge, Christophe Pauliat, and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud announced last Sunday by Larry and developed through many examples this morning by John Fowler. John also announced Solaris 11.1 today, with a range of network innovations and virtualization at the OS level, as well as many optimizations for applications, like Oracle RAC, with the introduction of the lock manager inside the Solaris kernel. Last but not least, he introduced the Xsigo Datacenter Fabric for highly simplified network and storage virtualization for your cloud infrastructure. Hoping you will get ready to jump on the next wave, we are here to help...


Innovation

#OOW 2012: IaaS, Private Cloud, Multitenant Database, and X3H2M2

The title of this post is a summary of the four announcements made by Larry Ellison today during the opening session of Oracle OpenWorld 2012... To learn what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS (Infrastructure as a Service) announcement. Oracle IaaS goes Public... and Private... Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our server, storage, OS and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today takes that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the "as a Service" value of the Cloud but don't want their data out in the Cloud, we propose to operate the same systems that power our Public Cloud Infrastructure (Exadata, Exalogic & SuperCluster) behind their firewall, in a Private Cloud model. Oracle 12c Multitenant Database This is also a major announcement made today about what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no additional cost, especially in terms of memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and the database background processes, and "Pluggable" Databases inside this Database Container.
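As a rough analogy for the consolidation saving described above, here is a conceptual Python sketch (the class names and the 8 GB overhead figure are invented for illustration; this is not Oracle's actual architecture): the container pays the fixed memory and background-process cost once, and each pluggable database adds only its own data.

```python
# Toy model of multitenant consolidation: the container pays the fixed
# memory/process cost once; each plugged database adds only its data.

FIXED_OVERHEAD_GB = 8  # illustrative figure for shared memory + background processes

class ContainerDB:
    def __init__(self):
        self.pluggables = []

    def plug(self, name, data_gb):
        self.pluggables.append((name, data_gb))

    def memory_gb(self):
        # One shared overhead, regardless of how many databases are plugged in.
        return FIXED_OVERHEAD_GB + sum(d for _, d in self.pluggables)

cdb = ContainerDB()
for i in range(10):
    cdb.plug(f"pdb{i}", 2)   # ten databases, 2 GB of data each

consolidated = cdb.memory_gb()             # 8 + 10*2 = 28 GB
standalone = 10 * (FIXED_OVERHEAD_GB + 2)  # ten separate instances: 100 GB
print(consolidated, standalone)  # 28 100
```

With ten small databases, the shared container needs 28 GB where ten standalone instances would need 100 GB, which is why per-instance memory overhead is "THE" consolidation limiting factor the post mentions.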
This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it becomes available, as it is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure. X3H2M2, enabling the new Exadata X3 in-memory database Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most, if not all, of the data in the memory cache hierarchy. This is, of course, the major software enhancement of the new X3 Exadata machine, but since it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of the Intel E5 / Sandy Bridge), the memory, now 256 GB per node as well, and the new Flash Fire card, now bringing up to 22 TB of flash cache. All of this (4 TB of RAM plus 22 TB of flash) is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm, making a very big difference compared to a traditional storage flash extension. And what do those extra performances bring you on an already very efficient system? Double the performance compared to the fastest storage array on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days... Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year, again with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us.
And as such, Andrew Mendelsohn, Senior Vice President, Database Server Technologies, came on stage to explain that the next step after I/O optimization for the database with Exadata is to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... the big theme of the day, and of the Oracle User Group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases: one on Big Data in finance and fraud profiling, and another on a practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to convey the size of Oracle OpenWorld... and you can understand why, with such rich content... and this is only the first day!


Innovation

Oracle Open World 2012 preview: mark your calendars

With now less than a month to go before Oracle's major event, held as every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I invite you to look at the topics of the keynotes to be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development), to get a foretaste.

Oracle strategy and roadmaps

Of course, beyond the plenary sessions, which will give you a precise view of the strategy, and for those who will be on site, I encourage you not to miss the deep-dive sessions taking place during the week. A few selected highlights:

- "Accelerate your Business with the Oracle Hardware Advantage", with John Fowler, Monday October 1st, 3:15pm-4:15pm
- "Why Oracle Software Runs Best on Oracle Hardware", with Bradley Carlile, head of benchmarks, Monday October 1st, 12:15pm-1:15pm
- "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday October 1st, 1:45pm-2:45pm
- "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, manager in the hardware and software integration teams, Monday October 1st, 4:45pm-5:45pm
- "Oracle's SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday October 2nd, 10:15am-11:15am
- "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flierl, head of Solaris development, Wednesday October 3rd, 10:15am-11:15am
- "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday October 1st, 12:15pm-1:15pm
- "Future of Oracle Exadata: Developments for OLTP, Warehousing, and Consolidation", with Juan Loaiza, head of Exadata development, Monday October 1st, 12:15pm-1:15pm
- "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday October 1st, 3:15pm-4:15pm
- "Oracle Storage—Best of Breed and Best for Oracle", with James Cates, Michael Workman and Philip Bullinger, heads of storage, archiving and library development, Wednesday October 3rd, 11:45am-12:45pm
- "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday October 1st, 10:45am-11:45am

Customer experiences and testimonials

If Oracle Open World is an opportunity to talk directly with Oracle's development teams, it is also an opportunity to exchange with customers and experts who have implemented our technologies and to benefit from their experience, for example:

- "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the business as well as the project and IT benefits, Wednesday October 3rd, 1:15pm-2:15pm
- "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday October 2nd, 11:45am-12:45pm
- "Creating a Maximum Availability Architecture with SPARC SuperCluster", with a testimonial from Carte Wright, Database Engineer at CKI, Wednesday October 3rd, 11:45am-12:45pm
- "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with a testimonial from Stephen Schleiger, Manager Systems Engineering at Navis, Monday October 1st, 1:45pm-2:45pm
- "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation for a major banking customer, Monday October 1st, 4:45pm-5:45pm
- "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday October 2nd, 1:15pm-2:15pm
- "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with a testimonial from Alan Powers, CTO of CSC, Thursday October 4th, 12:45pm-1:45pm
- "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, US Cellular, Tuesday October 2nd, 10:15am-11:15am
- "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global, among others, Tuesday October 2nd, 11:45am-12:45pm
- "Data Warehouse and Big Data Customers' View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance and Allianz Managed Operations and Services SE, Monday October 1st, 4:45pm-5:45pm
- "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", a testimonial from Oracle's internal IT on the transformation and migration of our entire storage infrastructure, Tuesday October 2nd, 1:15pm-2:15pm

Meet the user groups and the Oracle development teams

If you plan to arrive early enough, you will also be able to meet the user groups as early as Sunday, or the Oracle development teams every evening, on topics such as:

- "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday September 30th, 2:30pm-3:30pm
- "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman, of Rittman Mead Consulting Ltd, Sunday September 30th, 10:30am-11:30am
- "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday September 30th, 1:15pm-2:00pm
- "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director at Viscosity, Sunday September 30th, 9:00am-10:00am
- "Oracle's Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday October 1st, 6:15pm-7:00pm
- "The Storage Forum at Oracle OpenWorld: Meet Oracle's Storage Experts", with the storage, library and archiving development teams available to answer your questions over a two-hour slot you can drop into at your convenience, Wednesday October 3rd, between 3:00pm and 5:00pm

Test and evaluate the solutions

Finally, you can even try the technologies yourself, through the Oracle DemoGrounds (1133 Moscone South for Oracle Systems, OS and Virtualization) and the hands-on labs, such as:

- "Deploying an IaaS Environment with Oracle VM", Tuesday October 2nd, 10:15am-11:15am
- "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday October 2nd, 11:45am-12:45pm (it is strongly advised to take the previous hands-on lab before this one)
- "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday October 3rd, 5:00pm-6:00pm
- "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday October 3rd, 1:15pm-2:15pm
- "Oracle's Pillar Axiom 600 Storage System: Power and Ease", Monday October 1st, 12:15pm-1:15pm
- "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday October 1st, 1:45pm-2:45pm
- "Managing Storage in the Cloud", Tuesday October 2nd, 5:00pm-6:00pm
- "Learn How to Write MapReduce on Oracle's Big Data Platform", Monday October 1st, 12:15pm-1:15pm
- "Oracle Big Data Analytics and R", Tuesday October 2nd, 1:15pm-2:15pm
- "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday October 1st, 10:45am-11:45am
- "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday October 1st, 4:45pm-5:45pm
- "Virtualizing Your Oracle Solaris 11 Environment", Tuesday October 2nd, 1:15pm-2:15pm
- "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday October 3rd, 3:30pm-4:30pm

In conclusion, a very rich week in perspective, which will let you cover the full range of topics at the heart of your concerns, from strategy to implementation... A week to be prepared for, tailoring your agenda from the more than 2,000 sessions of which I have given you only an extract, and all of which you can find online.


Architectures and Technologies

Cloud Computing: publication of part 3 of the Syntec Numérique white paper

A customer/provider vision united around a draft contractual framework

At the Cloud Computing World Expo, held at the CNIT last week, I attended the presentation of the new Syntec Numérique white paper on Cloud Computing and the "new models" it induces: economic models, contracts, customer-provider relationships, and the organization of the IT department. What makes this white paper original compared to the existing ones in the field is that it brings together all the players, customers (through CRIP) and providers, around a framework of contractual formalization based on the e-SCM model.

An accelerated shift of IT towards service provider, and the end of siloed IT?

While Cloud Computing accelerates IT's transition into a service provider (following on from ITIL v3), it also highlights, for IT departments, the challenge of a disruptive model requiring cross-functional skills to guarantee the qualities expected of a Cloud Computing service: on-demand, "self-service" deployment; standardized access over the network; management of pools of shared resources; an "elastic" service that can be grown or shrunk rapidly according to measurable demand. Clearly, Cloud Computing goes well beyond simple server virtualization. As Constantin Gonzalez rightly describes in his blog post "Three Enterprise Principles for Building Clouds", what matters is compliance with the standard interface for accessing the service. How the service is then implemented (in the cloud) is the provider's burden and responsibility: it is up to the provider to optimize it to be competitive, while guaranteeing the expected service levels.

For the service provider, of course, this implementation, which rests essentially on the integration and automation of the necessary layers and components, must be mastered over time, including the handling of changes in each of those elements. For the customer, the reversibility of the solution must always be ensured through compliance with standards... A point also addressed in the Syntec white paper, which recalls the points to watch and reviews the progress of standards around Cloud Computing. Happy reading...


Architectures and Technologies

Big Data: a business opportunity and a (new) challenge for the IT department?

Having taken part in a few conferences on this theme, here are a few thoughts to start 2012 on the topic of the moment...

Big Data: business opportunities

As a McKinsey study points out ("Big Data: The next frontier for innovation, competition, and productivity"), mastering data (in all its diversity) and the capacity to analyze it have a strong impact on what IT can bring to the business to find new angles of competitiveness. To cite just two examples, McKinsey estimates that exploiting Big Data could save more than €250 billion across the whole European public sector (fraud identification, management and measurement of the effectiveness of subsidy allocations and investment plans, ...). As for the commercial sector, the mere use of geolocation data could generate a global surplus of $600 billion, an opportunity illustrated by Jean-Pierre Dijcks in his blog post "Understanding a Big Data Implementation and its Components".

Volume, Velocity, Variety...

"Big Data" is often characterized by these 3 Vs:

Volume: for some, Big Data starts at the threshold beyond which the volume of data becomes difficult to manage in a relational database solution. However, advances in technology keep pushing that threshold further and further back without calling IT departments' standards into question (cf. Exadata), which is why volume by itself is not sufficient to characterize a "Big Data" approach.

Velocity: Big Data therefore also requires a strong temporal dimension associated with large volumes: that is, being able to capture a moving mass of data, in order either to react almost in real time to an event, or to revisit it later from another angle.
Variety: Big Data addresses structured data, but not only. The essential objective is precisely to find added value in all the data accessible to a company. And in the age of digital media, dematerialization, social networks, data feed providers, machine-to-machine and geolocation, the variety of accessible data is large, constantly evolving (who will be the next Twitter, Facebook, or Google+?) and rarely structured.

...Visualization and Value

To the 3 Vs that characterize "Big Data" in general, I would add two more: Visualization and Value! Visualization, because faced with this volume, variety and velocity of data, it is essential to have the means to navigate within this mass, in order to draw information and Value from it (quickly and simply): to find what you are looking for, but also to benefit from an interesting asset that comes from coupling the diversity of unstructured data with the company's structured data: serendipity, or finding what you were not looking for (the origin of many innovations)! The opportunities for the business obviously lie in these last two Vs: knowing how to visualize the useful information in order to draw business value from it...

A (new) challenge for the IT department

The challenge for the IT department lies in the overall value chain: knowing how to acquire and store a large volume of varied, moving data, and being able to provide the business with the elements (tools) to draw meaning and value from it. In order to process this (unstructured) data, technologies complementary to the solutions already in place for managing the company's structured data need to be deployed.
These new technologies originally came out of the R&D centers of the internet giants, who were the first to face these masses of unstructured information. Today's challenge is to bring these solutions into the enterprise in an industrialized way, mastering both the integration of all the components (hardware and software) and their support, across the three fundamental steps that make up a Big Data value chain: Acquire, Organize and Distribute.

Acquire: once the data sources have been identified (with the business), you must be able to store them at low cost and with strong scalability (given the volumes involved and the speed of growth), for the purpose of extracting information. A scalable storage grid must be deployed, along the lines of the Exadata model. The reference in this field for grid storage of unstructured data for processing purposes is HDFS (Hadoop Distributed File System), this file system being directly tied to the extraction algorithms, which perform the operation right where the data is stored.

Organize: associate a first level of {key, value} indexing with this unstructured data, using NoSQL (for Not Only SQL). The advantage here over a classic SQL model is the ability to handle variety (no model predefined in advance), velocity and volume. NoSQL's particularity is to process data on a CRUD model (Create, Read, Update, Delete) rather than ACID (Atomicity, Consistency, Isolation, Durability), with its advantages in speed (no need to fit the data into a structured model) and its drawbacks (accepting, for the sake of ingestion capacity, that you may end up reading "stale" data, among other things).
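As a minimal illustration of the {key, value} model and its four CRUD operations, here is a toy in-memory store in Python (a hedged sketch with invented class and record names, not tied to any particular NoSQL product):

```python
# A toy in-memory {key, value} store exposing the four CRUD operations.
# Unlike an ACID relational table, it imposes no predefined schema:
# each value can have a different shape (the "variety" of Big Data).

class ToyKeyValueStore:
    def __init__(self):
        self._data = {}

    def create(self, key, value):
        self._data[key] = value

    def read(self, key):
        # A real distributed store may return stale data here; this one is exact.
        return self._data.get(key)

    def update(self, key, value):
        self._data[key] = value

    def delete(self, key):
        self._data.pop(key, None)

store = ToyKeyValueStore()
store.create("tweet:1", {"user": "alice", "text": "Big Data!"})
store.create("sensor:42", {"temp_c": 21.5})   # different shape: no fixed schema
store.update("sensor:42", {"temp_c": 22.0, "ok": True})
print(store.read("sensor:42"))  # {'temp_c': 22.0, 'ok': True}
store.delete("tweet:1")
print(store.read("tweet:1"))    # None
```

The two records deliberately have different shapes: that schema freedom is exactly the trade the CRUD model makes against the guarantees of an ACID relational table.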
And then you can also extract information through the MapReduce operation, performed directly on the grid of unstructured data (to avoid moving the data to processing nodes). The information thus extracted from this unstructured data grid becomes part of the company's assets, and fully belongs with the structured, and therefore reliable, "high-density" data. This is why extracting information from unstructured data also requires a gateway to the company's data warehouse, to enrich the reference repository; this gateway must be able to absorb large volumes of information in very short times. These first two steps have been industrialized, on the hardware side (storage grid/cluster) as well as the software side (HDFS, Hadoop MapReduce, NoSQL, Oracle Loader for Hadoop), in Oracle's Engineered System: the Oracle Big Data Appliance; the structured data repository can, for its part, be implemented on Exadata.

Distribute: the last step consists of making the information available to the business and allowing them to extract its essence: Analyze and Visualize. The challenge is to provide the capacity to run dynamic analysis on a large volume of data (BI cubes), with the ability to visualize it simply across several facets. A first level of analysis can be done directly on the unstructured data through the R language, directly on the Big Data Appliance. The value also lies in the aggregated view within the repository enriched by the extraction, directly through Exadata for example...
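To make the MapReduce operation concrete, here is a minimal single-process sketch of the map/shuffle/reduce pattern in plain Python (not Hadoop; the function names and sample records are illustrative only):

```python
from collections import defaultdict

# Word count, the canonical MapReduce example: map emits (word, 1) pairs,
# the shuffle groups the pairs by key, and reduce sums each group.

def map_phase(records):
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

logs = ["big data moves fast", "big data big value"]
counts = reduce_phase(shuffle(map_phase(logs)))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

In a real Hadoop deployment the map tasks run on the nodes that hold the HDFS blocks, which is exactly the "process the data where it is stored" point made above; only the much smaller intermediate pairs travel over the network for the shuffle.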
or through a true dynamic business dashboard that interfaces with the repository and makes it possible to analyze very large volumes directly in memory, with multi-faceted visualization mechanisms, not only to find what you are looking for but also to discover what you were not looking for (back to serendipity...). This is achieved through the (visual) identification of lines of inquiry that users had not necessarily anticipated at the outset. This last step is industrialized through the Exalytics solution, illustrated in the video below in the automotive world, where you will see a demonstration dynamically manipulating worldwide car sales data over a 10-year period: around 1 billion records and 2 TB of data manipulated in memory (thanks to embedded compression technologies).

HSM (Hierarchical Storage Management) and Big Data

To complete the setup of the "Big Data" ecosystem within the IT department, one fundamental point remains, and it is often overlooked: securing and archiving the unstructured data. The objective is to be able to archive/back up the unstructured data for possible replay, and to cope with volume growth by storing it on an appropriate medium according to its "freshness". Indeed, a Hadoop-type grid bases its safety on the duplication of data, but if a piece of data is corrupted, so are its copies. Moreover, this grid exists to allow processing at a given time t (velocity) on the data; once that processing is done, the data on the grid is often replaced by more recent data (see the example "Understanding a Big Data Implementation and its Components", which covers well the use case of data tied to a temporal context).
In some use cases, it can be valuable to revisit captured data later from another analytical angle, or for verification purposes, and in all cases to be able to restore it after a corruption incident. That is where coupling with a hierarchical storage management (HSM) solution is indispensable, for the initial capture of the unstructured data and for its low-cost archiving given the volumes to be handled. This is what we cover with our Storage Archive Manager (SAM) solution, which is in fact used in a French "Big Data" project to archive 1 PB of unstructured data.

To go further:
http://www.oracle.com/us/technologies/big-data/index.html
http://blogs.oracle.com/datawarehousing/category/Oracle/Big+Data
Understanding a Big Data Implementation and its Components
Oracle Big Data Appliance
Oracle Exadata
Oracle Exalytics In-Memory Machine
Storage Archive Manager

I will take advantage of this post to remind you to update your feed reader: http://blogs.oracle.com/EricBezille/feed/entries/rss — with all my best wishes for 2012.


Architectures and Technologies

Solaris 11: what's new, as seen by the development teams

For those who are not on the distribution list of the French-speaking Solaris user community, here is a small compilation of links to the Solaris 11 developers' blogs, covering the new features in detail across multiple areas.

Desktop news:
- What's new on the Solaris 11 Desktop?
- S11 X11: ye olde window system in today's new operating system
- Accessible Oracle Solaris 11 - released!

Development tools:
- Nagging As a Strategy for Better Linking: -z guidance
- Much Ado About Nothing: Stub Objects
- Using Stub Objects
- The Stub Proto: Not Just For Stub Objects Anymore
- elffile: ELF Specific File Identification Utility

The new packaging system, Image Packaging System (IPS):
- Replacing the Application Packaging Developer's guide
- IPS Self-assembly - Part 1: overlays
- Self Assembly - Part 2: Multiple Packages Delivering configuration

Hardened security in Solaris:
- Completely disabling root logins in Solaris 11
- Password (PAM) caching for Solaris su - "a la sudo"
- User home directory encryption with ZFS
- My 11 favorite Solaris 11 features (on security) - by Darren Moffat
- Exciting crypto advances with the T4 processor and Oracle Solaris 11
- SPARC T4 OpenSSL Engine
- Solaris AESNI OpenSSL Engine for Intel Westmere

Incident management and self-healing: Service Management Facility (SMF) & Fault Management Architecture (FMA):
- Introducing SMF Layers
- Oracle Solaris 11 - New Fault Management Features

Virtualization, Oracle Solaris Zones:
- These are 11 of my favorite things! (on zones) - by Mike Gerdts
- Immutable Zones on Encrypted ZFS
- The IPS System Repository (with zones) - by Tim Foster

A few bonuses from the Solaris community:
- Solaris 11 DTrace syscall Provider Changes
- Solaris 11 - hostmodel (Control send/receive behavior for IP packets on a multi-homed system)
- A Quick Tour of Oracle Solaris 11

Finally, I also encourage you to consult this very useful reference document: Transition from Oracle Solaris 10 to Oracle Solaris 11. Happy reading!


Innovation

Oracle Open World 2011: Very Big (again)!

Oracle Open World keeps breaking records, in audience, with more than 45,000 attendees, as well as in content, with more than 2,000 sessions, not to mention the major announcements made this year, which I will come back to in this post, day by day.

Day 1: Engineered Systems

The event was launched with a 100% hardware keynote around the Engineered Systems, with a reminder of the success of Exadata and Exalogic, and of the reason behind it: massively parallel at every level, plus data compression to be able to move a lot of data much faster than on a traditional architecture, all built on an InfiniBand core... A design pushed all the way down to the processor with the T4, and Solaris on the operating system side, leading to a new Engineered System, the SuperCluster, to offer the most suitable (integrated) solution on the ground of applications useful to the enterprise (Java/Database). For geometric computing, that will be the next step... This first keynote closed, as always, on "Hardware and Software Engineered to Work Together", to deliver results "faster than thought" with Exalytics, which preferably interfaces with an Exadata, but not only... why not your SAP, to run very fast analyses on a large volume of data that can be made to fit in 1 TB of RAM, thanks to compression technologies (up to 5x) developed and integrated into Exalytics around an OBI EE engine, an in-memory database and Essbase.
Day two: Simplify, Accelerate & Big Data

On the second day, hardware was present in every software session (and vice versa), because "Hardware and Software" integration must be simplified to accelerate performance and gains: a very clear strategy, with a new announcement in line with this theme of simplification and acceleration: the Big Data Appliance. It is a solution for extracting information from a mass of unstructured data. Typical applications: banking fraud detection, and customer insight and monetization, through analyses of (social) network data. You will find keywords such as NoSQL and Hadoop, integrated and industrialized after passing through the hands of Oracle Engineering, and of course connectable to an Exadata to analyze it all efficiently... The Big Data Appliance notably comprises: on the hardware side, 18 x4270 M2 servers, each with 48 GB of memory, 12 Intel cores and 24 TB of storage; on the software side, Oracle NoSQL, Hadoop and the Oracle Loader for Hadoop. For more information on this new solution, which lets you derive value from the unstructured data around you, I also invite you to consult this link: http://blogs.oracle.com/databaseinsider/entry/oracle_unveils_the_oracle_big

Day three: Architectures, Engineered Systems & Cloud Management

Another day with a strong tendency to see Appliances or "Engineered Systems" popping up everywhere, even among our competitors. However, on close inspection, there is always a detail or two that makes the difference. As Juan Loaiza, head of Exadata development, put it in John Fowler's session on systems strategy: "Architecture Matters" - and it is this mastery of development all the way up to the applications that delivers the efficiency and the difference, compared to a pure assembly approach.
All of this was demonstrated live on two Superclusters in the DEMOground: one running E-Business Suite and the other PeopleSoft, providing the benchmark that allows us to claim a 2x advantage over the pure assembly offered by other vendors. The other important topic was, of course, the release of Enterprise Manager 12c, with features essential to a successful transformation of IT towards the Cloud: self-service, end-to-end provisioning, charge-back, capacity planning... This comes with the free provision of one of its basic building blocks, Enterprise Manager Ops Center, for all Oracle systems under Premier Support. Yet another proposition that creates value vertically across the stack for a better TCO, while guaranteeing better out-of-the-box integration... That was, incidentally, one of the conclusions of the Bank of America session explaining its migration to Exadata.

Day four: Oracle Fusion Applications & Cloud

After an opening focused on the hard stuff (hardware), a closing that takes off towards the application layers... in Oracle's Public Cloud, announced on Wednesday by Larry Ellison... More precisely, three announcements: the availability (after six years of development) of the Fusion Applications; the Oracle Public Cloud, providing those same Fusion Applications "as a service", plus a 100% Java EE development platform with the essential infrastructure components - Database Services, Middleware Services, Security Services... plus a novelty: Data Services; and the Oracle Social Network (an enterprise social network), enabling continuous networking and interaction integrated at the heart of the Fusion Applications. For more information, you can connect to: http://cloud.oracle.com

A distinctive characteristic of Oracle's Cloud is its reliance on standards, vs.
other solutions on the market that rely on proprietary languages and induce de facto lock-in... As a corollary, the right questions to ask yourself revolve around reversibility, and therefore the ability to move your application and your data from one provider to another, or back in-house... This is a fundamental point to which Oracle responds without compromise - that is, for Java, for example, by providing a complete Java EE platform... which is very far from the current market offerings in this area. The direct benefit for you: compatibility between the standards of your enterprise applications and those of the Oracle Public Cloud. So you can build your private Cloud (with the optimized Enterprise Cloud Infrastructure solution), with the guarantee of finding other compatible players, such as Oracle or Amazon, because the solution is built on market standards.


Architectures and Technologies

Oracle Open World - Hands-on Lab: Configuring ASM and ACFS on Solaris - Part 2

Oracle Open World - Hands-on Lab - Participant Guide

Content and Goal

"Oracle Automatic Storage Management gives database administrators a storage management interface that is consistent across all server and storage platforms and is purpose-built for Oracle Database. Oracle Automatic Storage Management Cluster File System is a general-purpose file system for single-node and cluster configurations. It supports advanced data services such as tagging and encryption. This hands-on lab shows how to configure Oracle Automatic Storage Management and Oracle Automatic Storage Management Cluster File System for installation of an Oracle Database instance on Oracle Solaris 10 8/11. You'll learn how to install the software, build Oracle Automatic Storage Management volumes, and configure and mount Oracle Automatic Storage Management Cluster File System file systems."

Overview

This tutorial covers the installation of Oracle Grid Infrastructure for a standalone server. In Oracle 11g Release 2, the Grid Infrastructure contains, amongst other software:
- Automatic Storage Management (ASM)
- ASM Dynamic Volume Manager (ADVM)
- ASM Cluster File System (ACFS)

This lab is divided into 4 exercises.

Exercise 1: We install the ASM binaries and grid infrastructure. As part of the install we create a diskgroup of three disks called DATA. DATA will later be used to store the database data files.
Exercise 2: We use the ASM Configuration Assistant (ASMCA) to create a second diskgroup called MYDG. From MYDG we create an ADVM volume called MYVOL, and from that we create an ACFS file system called u02.
Exercise 3: We use the installer to install the Oracle database binaries into our new ACFS file system (u02).
Exercise 4: We then use the Database Configuration Assistant to create a database with the tablespaces populating the DATA ASM diskgroup.

In our setup we use "External" redundancy for our disks. This implies that RAID or mirroring is performed in an external RAID array, which of course is not true for our laptops.
However, for practical reasons, we demonstrate using this choice.

Note: In the following text, text displayed in the terminal window is in courier font. Text you are expected to input at the terminal window is in courier bold. The password for the oracle account is oracle. The password for the root account is provided during the lab.

Exercise 1 - Prepare for ASM & Grid installation

- If not logged in, log in as oracle. Some of the operations we need to perform require administrator access (root).
- Start a terminal for root, using the existing Terminal window menu "File->Terminal->Default". In the new window, type su - root and the root password. You should now see a root prompt "#".

A key point when installing ASM is that the ASM software recognises disk devices as available to it through the fact that they are owned by oracle (or another ASM install user account). Do not expect to start the GUI and find usable disk devices if the permissions on their device files have not been altered as a prerequisite step. This tutorial changes the disk device ownership appropriately, below.

All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics. Do not specify multiple partitions on a single physical disk as a disk group device: Oracle ASM expects each disk group device to be on a separate physical disk.

- In the root terminal, browse the available disks using format(1):

root@asmdemo:~# format

For this exercise there are 8 disks. Disk 0 (c0t0d0) is the boot disk. Disk 1 contains a filesystem mounted under /u01. The six disks c0t3d0...c0t8d0 are for use by ASM/ACFS. To prevent Oracle ASM from overwriting the partition table, we cannot use slices that start at cylinder 0 (for example, slice 2), so we are using slice 0, which has been created for this exercise to begin at cylinder 1.
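The cylinder-1 constraint above can also be checked from a script rather than by eye. The sketch below is an illustration only (not part of the official lab guide): it parses prtvtoc-style output and succeeds only if slice 0 does not start at sector 0; the sample data is hypothetical.

```shell
#!/bin/sh
# Given prtvtoc-style output on stdin, succeed only if slice 0
# exists and does not start at sector 0 (so ASM cannot clobber
# the VTOC stored in cylinder 0).
slice0_protected() {
  # prtvtoc data lines: slice tag flags first_sector sector_count last_sector
  awk '$1 == "0" { ok = ($4 > 0); found = 1; exit }
       END { exit (found && ok) ? 0 : 1 }'
}

# Hypothetical sample in the shape of "prtvtoc /dev/rdsk/c0t3d0s2"
# for a disk whose slice 0 begins at cylinder 1 (sector 2048 here):
sample='       0      0    00       2048   2088960   2091007
       2      5    01          0   2091008   2091007'

if echo "$sample" | slice0_protected; then
  echo "slice 0 OK: it does not start at sector 0"
else
  echo "WARNING: slice 0 starts at sector 0 - the VTOC is at risk"
fi
```

On the lab VM you would feed it real output, e.g. `prtvtoc /dev/rdsk/c0t3d0s2 | slice0_protected`.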
Examine the disk table as follows (still under format):

Specify disk (enter its number): 7
selecting c0t7d0
[disk formatted]

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> p
Current partition table (original):
Total disk cylinders available: 1021 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders     Size          Blocks
  0        usr    wm      1 - 1020      1020.00MB    (1020/0/0) 2088960
  1 unassigned    wu      0                0         (0/0/0)          0
  2     backup    wu      0 - 1020      1021.00MB    (1021/0/0) 2091008
  3 unassigned    wu      0                0         (0/0/0)          0
  4 unassigned    wu      0                0         (0/0/0)          0
  5 unassigned    wu      0                0         (0/0/0)          0
  6 unassigned    wu      0                0         (0/0/0)          0
  7 unassigned    wu      0                0         (0/0/0)          0
  8       boot    wu      0 - 0            1.00MB    (1/0/0)       2048
  9 unassigned    wu      0                0         (0/0/0)          0

partition> [type "q" twice or control-d to exit the format utility]

Take the disk devices into oracle ownership (owner: oracle, group: dba) as follows:

# cd /export/home/oracle/scripts
# cat ./change-perms.sh
#!/bin/ksh
for i in c0t3d0s0 c0t4d0s0 c0t5d0s0 c0t6d0s0 c0t7d0s0 c0t8d0s0
do
  chown oracle:dba /dev/rdsk/${i}
done
# chmod +x ./change-perms.sh
#
./change-perms.sh

- Confirm that the device owners have changed:

# find /dev -user oracle -print
# find /devices -user oracle -print

Exercise 2 - Install ASM (and Grid)

Switch from the root window to the oracle user terminal window.

1: Examine and set the environment variables for the grid install (note the dot in the final command below):

$ cat scripts/vars-for-grid.sh
# For Grid
ORACLE_OWNER=oracle
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/grid
PATH=/usr/bin:/u01/app/oracle/product/11.2.0/grid/bin
ORACLE_SID=+ASM
export ORACLE_OWNER ORACLE_BASE ORACLE_HOME PATH ORACLE_SID
$ chmod +x scripts/*
$ . scripts/vars-for-grid.sh

2: Run the installer.

All of the Oracle binary distribution (11.2.0.2) is in /u01. There is more there than we need, so we will remove some elements of it to create needed space. For these exercises we only require the database and grid install binaries. When we run the installer, we do so with the -ignorePrereq option, because we have not configured as much swap as the installer would like to see for this exercise.

$ cd /u01/staging
$ rm -rf deinstall client examples
$ cd grid
$ ./runInstaller -ignorePrereq

Screen 1 (Download Software Updates): Click on the "Skip software updates" button. Click Next.
Screen 2 (Select Install Option): Click on the "Configure Oracle Grid Infrastructure for a Standalone Server" button. Click Next.
Screen 3 (Select Product Languages): Accept the default of "English". Click Next.
Screen 4 (Create ASM Disk Group): Click on the "All disks" button to show which disks are available. Check the "External Redundancy" option. Leave the Disk Group Name field as DATA. Choose the first three of the disks the software has identified (c0t3-c0t5) by clicking on their check boxes. Leave c0t6, c0t7 and c0t8 unchecked. Click Next.
Screen 5 (Specify ASM Password): Check the "Use same passwords for these accounts" button and enter the same password as given for the oracle user account.
Click Next and dismiss the warning notice by clicking "Yes".
Screen 6 (Privileged Operating System Groups): Accept the defaults. Click Next. Click Yes to dismiss the warnings dialog box.
Screen 7 (Specify Installation Location): Accept the Oracle Base of /u01/app/oracle and the Software Location of /u01/app/oracle/11.2.0/product/grid. Click Next.
Screen 7a (Create Inventory): Accept the default of /u01/app/oraInventory. Click Next.
Screen 8 (Summary): Click Install - and read the box below entitled "While You Are Waiting".

While You Are Waiting

There are times during this lab when you just have to wait for things to install. During those times you are encouraged to move to the root Terminal window and try some commands to explore the Solaris environment. Read the manual pages for commands to find out more about them - for command foo:

# man foo

Here is a selection to get you going:

Where am I:              # uname -a
Devices/RAM:             # prtconf | more
Devices:                 # prtdiag -v | more
Memory usage:            # echo "::memstat" | mdb -k    (mdb is the kernel debugger)
Network interfaces:      # ifconfig -a
Networks (many options): # netstat
Disks:                   # echo | format
Disk tables (quickly):   # prtvtoc /dev/rdsk/c8d0s2
Filesystems:             # df -h

The following are ZFS equivalents, if ZFS is configured on your system:

# zfs list
# zpool status -v
# zpool list
# zpool iostat 5

Screen 8a (Execute Configuration Scripts): When this appears, execute the following scripts in the root terminal window:

# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/oracle/product/11.2.0/grid/root.sh    (accept defaults)

and then click OK on Screen 8a to dismiss it.
Screen 9 (Finish): Click Close to dismiss the GUI.

The ASM instance (by default, on a single node, having the SID +ASM) is running, as can be seen from the ps(1) output:

$ ps -fuoracle
   UID   PID  PPID  C    STIME TTY       TIME CMD
oracle  2197  2133  0 18:57:42 pts/1     0:00 ksh -o vi
oracle  4321     1  0 11:42:43 ?         0:01 /u01/app/oracle/112/bin/ocssd.bin
oracle  3883     1  0 11:39:18 ?         0:11 /u01/app/oracle/112/bin/oraagent.bin
oracle  4497  2447  0 12:06:14 pts/2     0:00 -ksh
oracle  4673  4497  0 12:30:59 pts/2     0:00 ps -fuoracle
oracle  3894     1  0 11:39:18 ?         0:01 /u01/app/oracle/112/bin/evmd.bin
oracle  3724     1  0 11:39:05 ?         0:10 /u01/app/oracle/112/bin/ohasd.bin reboot
oracle  4308     1  0 11:42:43 ?         0:03 /u01/app/oracle/112/bin/cssdagent
oracle  4186     1  0 11:42:25 ?         0:01 /u01/app/oracle/112/bin/tnslsnr LISTENER -inherit
oracle  3984  3894  0 11:39:22 ?         0:00 /u01/app/oracle/112/bin/evmlogger.bin -o /u01/app/oracle/112/evm/log/evmlogger.
oracle  4332     1  0 11:42:44 ?         0:01 /u01/app/oracle/112/bin/diskmon.bin -d -f
oracle  4310     1  0 11:42:43 ?         0:03 /u01/app/oracle/112/bin/orarootagent.bin
oracle  4410     1  0 11:43:29 ?         0:00 asm_diag_+ASM
oracle  4408     1  0 11:43:29 ?         0:00 asm_gen0_+ASM
oracle  4402     1  0 11:43:29 ?         0:01 asm_psp0_+ASM
oracle  4414     1  0 11:43:30 ?         0:00 asm_mman_+ASM
oracle  4412     1  0 11:43:29 ?         0:01 asm_dia0_+ASM
oracle  4416     1  0 11:43:30 ?         0:00 asm_dbw0_+ASM
oracle  4418     1  0 11:43:30 ?         0:00 asm_lgwr_+ASM
oracle  4420     1  0 11:43:30 ?         0:00 asm_ckpt_+ASM
oracle  4422     1  0 11:43:30 ?         0:00 asm_smon_+ASM
oracle  4424     1  0 11:43:30 ?         0:01 asm_rbal_+ASM
oracle  4426     1  0 11:43:30 ?         0:01 asm_gmon_+ASM
oracle  4404     1  1 11:43:29 ?         0:06 asm_vktm_+ASM
oracle  4428     1  0 11:43:31 ?         0:01 asm_mmon_+ASM
oracle  4430     1  0 11:43:31 ?         0:01 asm_mmnl_+ASM
oracle  4400     1  0 11:43:29 ?         0:01 asm_pmon_+ASM
oracle  4440     1  0 11:43:38 ?         0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

You can use ipcs(1) to view the shared memory and semaphore structures for the ASM SGA:

$ ipcs -s
IPC status from <running system> as of Mon Sep 5 17:45:06 PDT 2011
T   ID   KEY         MODE         OWNER    GROUP
Semaphores:
s   7    0x496bed6c  --ra-ra----  oracle   oinstall

$ ipcs -m
IPC status from <running system> as of Mon Sep 5 17:45:10 PDT 2011
T   ID   KEY         MODE         OWNER    GROUP
Shared Memory:
m   29   0xfa55c7d8  --rw-rw----  oracle   oinstall

There does not need to be an oracle database instance for an ASM instance.
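The asm_*_+ASM entries in the ps output follow the Oracle background-process naming convention <prefix>_<process>_<SID>. As a small illustrative sketch (not part of the lab guide; the sample ps excerpt is hypothetical), you can extract the distinct instance SIDs present on a host from such output:

```shell
#!/bin/sh
# Oracle background processes are named <prefix>_<process>_<SID>,
# e.g. asm_pmon_+ASM. Print the distinct SIDs found in ps-style
# output read from stdin (only the last column matters).
sids_from_ps() {
  awk '{ n = split($NF, a, "_"); if (n == 3) print a[3] }' | sort -u
}

# Hypothetical excerpt of "ps -fuoracle":
sample='oracle 4400 1 0 11:43:29 ? 0:01 asm_pmon_+ASM
oracle 4422 1 0 11:43:30 ? 0:00 asm_smon_+ASM
oracle 2197 2133 0 18:57:42 pts/1 0:00 ksh'

echo "$sample" | sids_from_ps    # prints: +ASM
```

On the lab VM, `ps -fuoracle | sids_from_ps` would list every Oracle instance whose background processes are running.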
Notice two files in the /u01/app/oracle/product/11.2.0/grid/dbs directory provided for ASM:

ab_+ASM.dat - generated when the ASM instance starts; used by the RDBMS instance to determine environment information.
hc_+ASM.dat - used by Enterprise Manager for health-check monitoring.

We connect as follows:

$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.2.0 Production on Wed Aug 31 12:34:46 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Automatic Storage Management option

Try running some obvious queries:

SHOW SGA
SELECT INSTANCE_NAME FROM V$INSTANCE;
SELECT NAME, VALUE FROM V$DIAG_INFO;
SELECT NAME, PATH, GROUP_NUMBER FROM V$ASM_DISK;
SELECT PATH, HEADER_STATUS, MODE_STATUS FROM V$ASM_DISK;
QUIT

Exercise 3 - Using ASMCA to make an ASM Diskgroup, ADVM Volume and ACFS Filesystem

Run the ASM Configuration Assistant:

$ asmca

Screen 1: It has three tabs - Disk Groups, Volumes and ASM Cluster Filesystems. In the Disk Groups tab you already have a disk group called DATA, which you created when you installed the grid software. Click the Create button to bring up a disk group creation dialog.
Screen 2 (Disk Group Creation Dialog), to create the disk group we will use for our ACFS filesystem. Complete the fields of the form as follows:
Disk Group Name: MYDG
Redundancy: External
Check disks c0t6-c0t8.
Click OK, and a dialog box should (eventually) indicate successful completion.
Screen 3: Select the Volumes tab. Click the Create button to bring up a Create Volume dialog box. Fill in the fields as follows:
Volume name: MYVOL
Disk Group name: use the pulldown to select MYDG
Size: enter 8.3 and leave GBytes selected.
Click OK, and a dialog box should (eventually) indicate successful completion.
Screen 4: Select the ASM Cluster Filesystems tab.
Click on Create to bring up a Create ASM Cluster Filesystem dialog:
Volume: MYVOL (this will already be selected)
Check the Database Home Filesystem button if not already checked.
Database Home Mount Point: /u02/acfsmounts
Click on the Show Command button, and a dialog will appear telling you that the following commands should be run as a privileged user:

/sbin/mount -F acfs /dev/asm/myvol-333 /u02/acfsmounts    (number may vary)
chown oracle:oinstall /u02/acfsmounts
chmod 775 /u02/acfsmounts

Do not run these commands. Dismiss the dialog. Click on the OK button to create the filesystem.
Screen 5: A dialog will appear telling you to run the following script as root: /u01/app/oracle/cfgtoollogs/asmca/scripts/acfs_script.sh. Run the script in the root Terminal and dismiss the dialog box.

# /u01/app/oracle/cfgtoollogs/asmca/scripts/acfs_script.sh

Click on Exit to leave ASMCA and reply Yes.

Examine the script you ran: /u01/app/oracle/cfgtoollogs/asmca/scripts/acfs_script.sh. It contains the same commands as the Show Commands dialog (from the root terminal):

#!/bin/sh
/sbin/mount -F acfs /dev/asm/myvol-168 /u02/acfsmounts
if [ $? = "0" ]; then
  chown oracle:oinstall /u02/acfsmounts
  chmod 775 /u02/acfsmounts
  exit 0
fi

Run the following commands:

$ df -h
$ df -n

and observe the output.

So, to recap, we now have:
- an ASM disk group called DATA with 17 GB of space: this will hold our database
- a volume called MYVOL composed of the disk group MYDG
- a file system at /u02/acfsmounts composed of MYVOL: this will hold our Oracle binaries

Exercise 4 - Creating the Database

In the oracle user terminal window:

1: Set the parameters for the Database that you will install:

$ cd
$ cat scripts/vars-for-database.sh
# For DB
ORACLE_OWNER=oracle
ORACLE_HOME=/u02/acfsmounts/product/11.2.0/dbhome_1
ORACLE_SID=orcl
PATH=/usr/bin:$ORACLE_HOME/bin
export ORACLE_OWNER ORACLE_HOME ORACLE_SID PATH
$ . scripts/vars-for-database.sh

2: Install the Oracle Database distribution:

$ cd /u01/staging/database
$ ./runInstaller -ignorePrereq

Screen 1 (Configure Security Updates): Uncheck the "I wish to receive..." box and click Next. Click "Yes" on the warning dialog that appears.
Screen 2 (Download Software Updates): Check the "Skip Software Updates" button and click Next.
Screen 3 (Select Installation Option): Check the "Install Database Software Only" button and click Next. [DO NOT CHECK "Create and configure a Database": depending on your VM setup, you may run out of memory during the installation process.]
Screen 4 (Grid Installation Options): Check "Single Instance Database Installation". Click Next.
Screen 5 (Select Product Languages): Accept the default of English and click Next.
Screen 6 (Select Database Edition): Check Enterprise Edition and click Next.
Screen 7 (Specify Install Location): Accept the Oracle Base of /u01/app/oracle. Accept the Software Location of /u02/acfsmounts/product/11.2.0/dbhome_1.
Screen 8 (Privileged Operating System Groups): Accept the default and click Next.
Screen 9 (Summary): Click Install.
Screen 10: Installation in progress - this will take a few minutes.
Screen 11 (Execute Configuration Scripts): Run the script as root:

# /u02/acfsmounts/product/11.2.0/dbhome_1/root.sh

(hit Enter to accept the default path) and then click Close in the Finish screen to exit the installer.

3: Create the Oracle Database. Run dbca as the oracle user:

$ dbca

Screen 1 (Welcome): Click Next.
Screen 2 (Operations): Choose "Create a Database".
Screen 3 (Database Templates): Choose "General Purpose or Transaction Processing". [For the lab we pick this one, as it is essentially an RMAN restore and the fastest to fit our purpose. In real life you might choose "Custom Database", and be ready for a 45-minute to 1-hour installation process.]
Screen 4 (Specify Database Identification): Global Database Name = orcl, Oracle Service Identifier (SID) = orcl.
Click Next.
Screen 5 (Management Options): As this is not the purpose of this lab, and we want to fit in 1 hour, uncheck Configure Enterprise Manager. Click Next.
Screen 6 (Database Credentials): Check "Use Same Administrative Passwords for all accounts". Enter the oracle password as the password for SYS and SYSTEM. Click Next. Click "Yes" to dismiss the warning dialog.
Screen 7 (Database File Locations): Select Storage Type: Automatic Storage Management (ASM) and Use Oracle-Managed Files; browse for a Database Area of DATA. Click Next.
Screen 8 (Choose the recovery options): As this is not the purpose of this lab, uncheck all. [For a real production database it is highly recommended to specify both. The Fast Recovery Area would have required another ASM disk group to be provided to store the files used for "Fast Recovery". The archive logs are *mandatory* to recover the database after a system crash.]
Screen 9 (Sample Schemas): Uncheck.
Screen 10 (Specify Configuration Options): Check the "Custom" box under the Memory tab; specify "Automatic Shared Memory Management", SGA size = 350 MB, PGA size = 250 MB.
Screen 11 (Storage): Have a look at the storage datafiles layout. It should differ from what you are used to in a regular filesystem. This simplifies the Oracle installation process, thanks to the tight integration with ASM. Click Next.
Screen 12 (Select the Database Creation Options): Check all and click Finish.
Screen 12a (Installation in Progress): This will take several minutes.
Screen 12b (Database Creation Complete): We are done (nearly :) ), with a running database on ASM. You can click Exit.
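The ASM-managed layout seen in Screen 11 is visible in the datafile names themselves: Oracle-Managed Files created in ASM live under a path beginning with +<diskgroup>, while regular datafiles are ordinary filesystem paths. A small illustrative sketch (not from the lab guide; the sample names are hypothetical, in the style of SELECT NAME FROM V$DATAFILE):

```shell
#!/bin/sh
# Classify a datafile name: names starting with "+" live in an
# ASM disk group; anything else is a plain filesystem path.
storage_of() {
  case "$1" in
    +*) echo ASM ;;
    *)  echo filesystem ;;
  esac
}

storage_of "+DATA/orcl/datafile/system.256.761187393"   # prints ASM
storage_of "/u02/oradata/orcl/system01.dbf"             # prints filesystem
```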
As the oracle user, connect to your newly created instance:

$ sqlplus / AS SYSDBA
> SELECT * FROM V$INSTANCE;

(Note: run this again after dot-running vars-for-grid.sh, if you wish.)

And here you are. Wrap-up: on our Solaris 10 8/11 system, we now have a running Oracle Database instance stored on an ASM disk group, with the Oracle distribution installed on an ACFS filesystem. Sincerely yours, Eric Bezille & Dominic Kay


Architectures and Technologies

Oracle Open World - Hands-on Lab: Configuring ASM and ACFS on Solaris - Part 1

A quick introduction

I have been invited by Dominic Kay, Product Manager for the Solaris storage sub-systems, to run a hands-on lab at OOW. For those of you who will attend this session, next Monday at 11:00am in Marriott Marquis - Salon 5/6, here are the gory details to get you through this lab. For the others, who won't have the opportunity to be there, we hope it will be useful for setting it up in your own environment.

The reasoning behind this lab

I have already posted on this blog many times about ZFS and all its benefits, including for the deployment of Oracle Database. And Dominic found it very valuable to develop knowledge of ASM (and ACFS) deployment on Solaris, as you can look at ASM, in a way, as the "ZFS" from a DBA perspective, with another interesting benefit: the ability to deploy an Oracle Database in a shared multi-node environment with Oracle RAC, which is what runs on Solaris on Exadata and on the new Engineered System announced this week, the SPARC Supercluster. [You can look at this lab as a follow-up to the one done by Alain Chereau last year, which covered ZFS.]

The content and goal

"Oracle Automatic Storage Management gives database administrators a storage management interface that is consistent across all server and storage platforms and is purpose-built for Oracle Database. Oracle Automatic Storage Management Cluster File System is a general-purpose file system for single-node and cluster configurations. It supports advanced data services such as tagging and encryption. This hands-on lab shows how to configure Oracle Automatic Storage Management and Oracle Automatic Storage Management Cluster File System for installation of an Oracle Database instance on Oracle Solaris 10 8/11. You'll learn how to install the software, build Oracle Automatic Storage Management volumes, and configure and mount Oracle Automatic Storage Management Cluster File System file systems."

Preparing for the lab: downloads!
1 : Download your favorite VirtualBox and install it. [WARNING: you will need to enable Intel VT or AMD-V for your VM to be able to run in 64-bit mode, which is required for this lab, as we need to run Solaris x86 in 64-bit.]
2 : Get Solaris 10 8/11.
3 : Get the latest Oracle 11gR2 release for Solaris x86 64-bit (at the time of writing this post, it is in patchset P10098816, which you can get from Oracle Support).

Set up your Solaris 10 8/11 VirtualBox VM

1 : Create your VM:
- name = asmdemo
- type = Solaris 10 10/09 and later, 64 bit
- memory size = 4096 MB
- create a disk of type VDI, fixed size of 16 GB (boot/root)

2 : Add additional disks to your VM. [These disks will be used for your Oracle binaries and your ASM and ACFS storage.] Under "Settings", select "Storage", then "Add Hard Disk":
- create 1 fixed-size disk, type VDI, size 10 GB, called asmdemo1 (this will be /u01)
- create 6 fixed-size VDI disks of 3 GB (asmdemo2-asmdemo7)
- add the downloaded Solaris .iso in the storage settings as a virtual CD/DVD

3 : Set up the network of your VM as NAT under "Settings" > "Network".

Tips: On a Solaris laptop you need to run VirtualBox as root to enable all the network plumbing; if you forget this step, just export your VM and import it back into the root user's VirtualBox. You also need to disable "nwam", as you don't want it to get in the way.

Disable nwam:
root@solaris-laptop:~# svcadm disable nwam
Plumb your interface manually:
root@solaris-laptop:~# ifconfig e1000g0 dhcp start

4 : Start the VM and install Solaris:
- use option 1 (Solaris interactive default) to create a UFS root, or choose option 3 to create a ZFS root
- keyboard is US-English [or French (AZERTY) if required; WARNING: this will become the default for your VirtualBox VM setup and may cause problems if you plan to export your VM to someone using a different keyboard setup]
- language is English
- choose DHCP networking
- say no to IPv6
- say no to Kerberos and name services
- use a system-derived NFSv4 domain
- choose the appropriate timezone (for example, US Pacific)
- set the root password
- enable remote services
- choose not to register
- no proxies
- accept the license terms :-)
- choose a Custom (not Default) install
- no software localizations
- no Web Start Ready products
- Solaris Software Group = "Entire Group"
- choose just the first disk (c0t0d0)
- one partition for the whole disk (accept defaults)
- in the "Lay Out File Systems" step, configure at least 8 GB of swap, taking space from /export/home [The Oracle Database installer often requires at least 2x the configured memory as swap space. This can be useful when using Dynamic Intimate Shared Memory, but is otherwise not required. In this lab we will run the installation so that it passes even if we don't set up 2x swap space, and we will also pay attention during the installation to the SGA and PGA sizes.]
- after the install, you will have to stop the system from rebooting into the install again, and remove the DVD from the attached devices
- boot into the system

Setting up Solaris to configure ASM and ACFS and install an Oracle DB on it

1: Configure your disks.

Check for your additional disks; partition and label each one. [Note: if you don't see your additional disks, run # devfsadm; disks.] Run format, then fdisk and label, for disks 1-7 to put a TOC on them, accepting the defaults. Put all the space except the first cylinder in s0 on each disk, to prevent Oracle ASM from overwriting the partition table.

# format
> disks
> (select a disk)
> fdisk => yes
> partition (leave the first cylinder)
> label

If you chose the UFS boot setup, create /u01 as a UFS filesystem using all of c0t2d0 (10 GB): newfs c0t2d0s0, and mount it. Then add an entry to /etc/vfstab for it.
If you chose the ZFS boot setup:

# zpool create u01_pool /dev/dsk/c0t2d0s0
# zfs create u01_pool/u01
# zfs set mountpoint=/u01 u01_pool/u01

2: Create groups for the Oracle account:

# groupadd oinstall
# groupadd dba

3: Create the oracle account, preparing directories and mount points for ACFS. This requires setting /etc/default/login variables as follows beforehand (this is for lab purposes only, to allow easy password setting):

NAMECHECK=NO
MINNONALPHA=0

# useradd -g oinstall -G dba -d /export/home/oracle -s /usr/bin/ksh oracle
# passwd oracle    (oracle)
# mkdir -p /export/home/oracle
# chown oracle:dba /export/home/oracle
# chown -R oracle:dba /u01
# mkdir -p /u01/app/oraInventory ; chown oracle:oinstall /u01/app/oraInventory
# mkdir -p /u02/acfsmounts ; chown oracle:dba /u02/acfsmounts

4: Establish the necessary kernel parameters:

# projadd -U oracle user.oracle
# projmod -s -K "project.max-sem-ids=(priv,100,deny)" user.oracle
# projmod -s -K "project.max-sem-nsems=(priv,256,deny)" user.oracle
# projmod -s -K "project.max-shm-memory=(priv,429967296,deny)" user.oracle
# projmod -s -K "project.max-shm-ids=(priv,100,deny)" user.oracle
# projmod -s -K "project.max-file-descriptor=(priv,65536,deny)" user.oracle

5: Get the Oracle distribution into your VM. You need to copy the downloaded Oracle patchset into your VM (in /u01/staging).
We will do that through ftp. First enable ftp on your laptop, and permit external access (for lab purposes only):

root@solaris-laptop:~# svcadm enable ftp
root@solaris-laptop:~# cd /etc/ftpd
root@solaris-laptop:/etc/ftpd# cat > ftpusers

From within your VM:

# su - oracle
$ cd /u01/staging
$ sftp @IP-your-laptop
> prompt
> bi
> mget *.zip
> bye

You will have the following zip files under /u01/staging:

p10098816_112020_Solaris86-64_1of6.zip
p10098816_112020_Solaris86-64_2of6.zip
p10098816_112020_Solaris86-64_3of6.zip
p10098816_112020_Solaris86-64_4of6.zip
p10098816_112020_Solaris86-64_5of6.zip
p10098816_112020_Solaris86-64_6of6.zip

1 and 2 are the database, 3 is grid, 4 is the client, 5 is examples, 6 is deinstall. Unzip them to create the subdirectories client, database, deinstall, examples and grid. That leaves us with 5.6 GB to play with. Currently, in format output, disk 0 is the boot disk and disk 1 is now /u01, leaving disks 2-7 for ASM. This gives us c0t3d0s0 c0t4d0s0 c0t5d0s0 c0t6d0s0 c0t7d0s0 c0t8d0s0 for the ASM exercise.

Turn the screensaver off for the oracle user.

!! At this stage we have all we need in the way of packages, accessible disks and Oracle software, so we shut down the VM and do an export before going further. This will be our starting point for the lab exercise (as we only have one hour) !!

This ends Part 1 of the lab... and this post. Stay tuned for Part 2 to play with... (coming soon). Sincerely yours, Eric Bezille & Dominic Kay
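Since the exported VM is the starting point for the one-hour lab, it is worth confirming that all six patchset archives actually made it into the staging directory before shutting down. A hedged sketch (not part of the official lab steps) of such a check:

```shell
#!/bin/sh
# Check that all six p10098816 patchset archives are present in a
# staging directory before exporting the VM snapshot.
staging_complete() {
  dir=$1
  for n in 1 2 3 4 5 6; do
    f="$dir/p10098816_112020_Solaris86-64_${n}of6.zip"
    if [ ! -f "$f" ]; then
      echo "missing: $f"
      return 1
    fi
  done
  echo "all 6 archives present in $dir"
}

staging_complete /u01/staging || echo "copy the remaining archives before shutting down"
```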


Innovation

CIOs, Technologies Clés 2015 & Oracle Strategy

Three things prompted me to write these few lines: (1) CIOs' concerns about Cloud Computing, (2) the release of the "Technologies Clés 2015" (Key Technologies 2015) report, published at the beginning of this year by the French Ministry of Economy, Finance and Industry, and (3) last week's visit to Paris of the head of Optimized Solutions development at Oracle.

CIOs' concerns about Cloud Computing. Information systems have become complex over time and, built on multi-layer, multi-vendor approaches, often struggle to evolve. Every CIO looks at the public Cloud Computing players with envy, but remains realistic about the fact that they have an existing estate to keep alive and a far larger number of business services to address. The task of the CIO and his teams is all the more complex in that the business need must be translated not only into a suitable application solution, but also into the associated hardware architectures, while delivering the implementation project on time and ensuring it can be operated once the solution is deployed.

Three points from the "Technologies Clés 2015" report to put in perspective. As the ICT section of the study clearly identified: "The digital revolution, with the increasing digitization of content and services and the development of the internet, has made it possible [..] to extend the diffusion of ICT well beyond large companies, to the general public and to SMEs." "This growing dependence of whole sectors of the economy on ICT demands increased reliability and availability, both for infrastructures [..] and for applications.
" "The market is commoditizing strongly, with demand which, under the effect of cloud computing, is moving towards automated, shared data centers based on standardized hardware." Which implies "simultaneously mastering numerous technologies (hardware, software, content and network) enabling the development of applications, services and content [..], while taking into account a multitude of interfaces."

Oracle strategy: Optimized Solutions, a vertical approach and open standards. It is from these same observations, the efficiency of public Cloud Computing and the reality of enterprise information systems, that Oracle decided to optimize and integrate upstream all the components addressing the business stakes of the enterprise: from the B2B/B2C portal to customer management, through payroll, and up to the ERP. Indeed, because we can address the whole spectrum of needs, from business applications down to hardware infrastructures, we are in the unique position of being able to handle the entire sizing and implementation problem upstream, while building in maximum service levels from the design stage: performance, high availability, security and scalability. All of this can be integrated into an existing estate, since all Oracle components rely on open standards. This is how we have already launched 7 Optimized Solutions around our business applications (WebCenter Portal, Siebel CRM, PeopleSoft HR, E-Business Suite ERP, ...) and 2 factory-integrated solutions, one covering the database (Exadata) and the other the middleware (Exalogic).
This capability allows us to revisit traditional approaches to achieve far greater efficiency, and to offer a true foundation for implementing an Enterprise Cloud built on standard, open components optimized to work well together: from the applications down to the disk and tape infrastructures (but I will come back to that in a future post). To go further, I invite you to consult the set of technical papers already published around the Optimized Solutions, and to revisit them regularly, as we manage their life cycle as the hardware and software components of each solution evolve.


Innovation

The details that make the difference: the importance of good memory management

Last week, William Roche wrapped up his talk on memory management for the Solaris Users Group. The subject was so rich that it took William two sessions to explain the advances made in Solaris to get the most out of memory. Anyone handling large volumes of data, or looking to consolidate applications or virtual machines, knows how important memory is. Indeed, as processors evolve towards multi-core designs with several execution units per core, memory becomes paramount in keeping them properly fed. For this to be efficient, you need servers designed without hardware bottlenecks, with a homogeneous architecture allowing enough memory to be placed in front of the processors. But you also need an operating system capable of taking advantage of the available infrastructure and delivering its full power to the applications. Memory hides everywhere: not only RAM, but also disks (swap, SSD), and for Solaris much more than that, including the CPU registers and caches (the fastest, but the most expensive).

A few access-time figures:

CPU registers: < 1 ns
L1, L2 caches: 1 to 20 ns
RAM: 80 ns to 500 ns
SSD: 25 us to 250 us
Disks: 5 ms - 20 ms
DVD: 100 ms

...put in perspective:

Cache type      | Size                      | Access time      | Perspective
CPU registers   | 1 KB                      | 1 cycle          | 1 second
L1 CPU          | 100 KB                    | 2 cycles         | 2 seconds
L2 CPU          | 8-16 MB                   | 19 cycles        | 10 seconds
Main memory     | 128 MB to 512 GB (and up) | 50 - 300 cycles  | 50 seconds to 5 minutes
Disks           | 40 GB to several TB       | 11 M cycles      | 4.24 months!
Network         | No limit                  | 80 M cycles      | 2.57 years!!

Keeping active data close to the processor is critical!
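To see where the striking numbers in the last two rows come from, here is the arithmetic behind them, scaled so that one CPU cycle lasts one second. A small POSIX shell sketch; the cycle counts are the ones quoted in the table:

```shell
# If 1 CPU cycle were 1 second, how long would a disk I/O or a
# network round trip feel? Pure integer arithmetic, POSIX shell.
cycles_disk=11000000            # ~11 M cycles per disk access
cycles_network=80000000         # ~80 M cycles per network access

disk_days=$((cycles_disk / 86400))              # 86400 seconds per day
net_years=$((cycles_network / (86400 * 365)))   # seconds per year

echo "Disk access:    ${disk_days} days (about 4 months)"
echo "Network access: over ${net_years} years"
```

Running this gives roughly 127 days for a disk access and a bit over 2 years for a network access, matching the table's 4.24 months and 2.57 years.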
And this is where server design and Solaris come into play. At boot time, Solaris learns about the underlying system: processors, memory banks, access times between each processor and the various memory banks, and so on. It uses this not only to optimize the placement of processes relative to memory access, but also to secure the system, for instance by blacklisting failing cores or memory banks without impacting the applications. In addition, thanks to Large Page Support, Solaris dynamically adapts the size of memory pages to the application's behavior, avoiding useless access cycles which, as the preceding table shows, are expensive! These points are just a few of the many optimizations that let us take advantage of technology evolution and meet the changing needs of applications, without you, the user, having to think about it. If you want to dig deeper, I refer you to William's excellent presentation, as well as to the following article: "Understanding Memory Allocation and File System Caching in OpenSolaris". You will even be able to talk directly with William, along with several Oracle Solaris experts such as Clive King, Thierry Manfé and Claude Teissedre, at the Solaris Tech Day to be held on April 5th.


Innovation

Oracle OpenWorld : BIG !

Gigantic is indeed the word. I am on the plane back from Oracle OpenWorld with Christophe Talvard and we wanted to share some impressions while they are still fresh, plus one figure: 41,000 attendees! You have surely not missed the many articles on the subject, including of course the major announcement of the Exalogic Elastic Cloud solution, which sits alongside the Exadata offering to cover the application tier very efficiently: 12 times the performance of a traditional application-server grid architecture. A level of performance capable of supporting Facebook's worldwide traffic on just two configurations! Which in itself demonstrates Oracle's strategy: "Hardware and Software engineered to work together." A strategy that goes well beyond the extraordinary performance gain and also aims to simplify the management of all of Exalogic's software and hardware components, with the ability to update them with a single file, pre-tested and validated by Oracle. With Exalogic and Exadata, all the elements are in place to deploy a public or private Cloud: performance, hardware and software integration, but also fault tolerance, flexibility and scalability. And that is not all: SPARC and Solaris also had pride of place, with the presentation of the 5-year roadmap and the announcement of the T3 processor, its 16 cores and a few world records to go with it, as well as the upcoming arrival of Solaris 11, not only in general availability but also as an option within Exalogic and the new version of Exadata. On that note, many sessions sharing deployment experience with Exadata were packed, notably the one by Jim Duffy and Christien Bilien on the solution deployed at BNP Paribas (see a previous post).
Also worth noting were several testimonies on the use of Exadata for database consolidation. A model that should accelerate with the new X2-8 machine, its high-capacity nodes with 64 cores and 2 TB of RAM, and its ultra-fast Exadata Storage Servers optimized for your structured data. Not to mention the announcement of the new ZFS Storage Appliance line for all your unstructured data and the storage of your virtualized environments, at the best cost and with maximum protection (triple parity). All these hardware and software infrastructures, engineered to work together, are the foundations of the applications supporting your company's business. And in this area, the announcement of the arrival of Fusion Applications, one of the largest development projects in Oracle's history, is major. Indeed, Fusion Applications gives your business applications (CRM, ERP, HR, ...) a standard foundation, no longer a proprietary engine as was the case until now. And, as we all know, these proprietary engines tied to custom developments are the cause of the complexity of today's information systems and of their lack of agility in responding to ever faster business change. Fusion Applications radically changes the outlook, because it not only provides a standard base but was also designed to decouple specific needs from that base, and therefore not to hold back the evolution and agility of the enterprise. In short, we have open technology solutions which, while integrating incrementally into your information system, will revolutionize its operations, with alignment to business needs and unmatched agility. And we are all ready to work alongside you to put them into practice starting today.


Innovation

Gain observability and speed of analysis with DTrace

Following William Roche's presentation on DTrace, the Solaris users group is meeting tonight at Supinfo for some hands-on practice. To help you understand the strengths of this technology, I have collected in this post the main points William covered in his talk. With these elements, you will be able to: get started with DTrace; shine at analyzing and solving your production problems; join us tonight to begin the experience.

A bit of history. DTrace has existed since the beginning of Solaris 10. It lets you dynamically observe not only Solaris but also applications, including Java, Python, PHP, MySQL and many others. DTrace comes with a language (D) for writing programs that collect information, or even act on the system.

DTrace was designed to be used in production. For example, DTrace is designed not to cause panics, crashes, data corruption or performance degradation. How? By building it on the same idea as Java: a DTrace script is interpreted inside the kernel, applying execution filters and data filters as close as possible to where the data is generated. Notably, this reduces the time spent capturing, selecting and processing data to a minimum, limiting the impact of such an analysis on a production system.

DTrace revolutionizes the analysis method. Before, (1) you started from a hypothesis, (2) you instrumented the binary (you needed the sources, to add your instrumentation: printf("coucou\n"); which was not always simple, then recompile, then run), (3) you collected your data, (4) you analyzed it, (5) you revised your hypotheses... and you restarted the analysis loop.
With DTrace, the instrumentation part is drastically reduced: no need to recompile a program, the ability to attach to a program that is already running, the ability to set observation points... all of this as close as possible to where the data is generated! A program passing through the spot you want to watch generates an event, and DTrace captures the information at the source. The analysis loop is shortened, and you can concentrate on the problem to solve rather than on the instrumentation itself.

Why "Dynamic"? There are plenty of mechanisms for analyzing errors in a program, such as an application core or a system crash dump, with post-mortem analysis tools like mdb or dbx. For transient problems there are also truss, and even mdb... but with limitations, for instance around covering the system calls. Understanding, on a live system, why there are so many threads, what wakes them up, what puts them to sleep: that is the kind of information, and much more, that DTrace gives us in real time; precious elements of analysis.

DTrace is a framework accessible through the DTrace(7D) interface. To talk to the kernel side, we have the libdtrace(3LIB) library. dtrace(1M), lockstat(1M) and plockstat(1M) use this library to collect data.
Below the framework lies the heart of the system: the providers, which collect the data and supply the observation points: sdt (system statistics), sysinfo (mpstat), vminfo (vmstat), fasttrap (memory trap, etc.), lockstat (system locks), sched (scheduling information: who wakes up, who goes to sleep, ...), proc (instrumentation of a userland process), syscall (system calls), mib (MIB II), io, nfsv4, ip, fbt (kernel functions: the entry and return points of almost every kernel function)... to name just a few! Activated providers instrument the code, and the execution of code in the kernel takes a small detour into the provider to collect the information. MySQL, Java and PHP, when they start, authenticate with the DTrace framework and register their providers. By default these are not active; if you need a specific piece of information, you simply enable the desired probe. The richness of our DTrace implementation comes from the providers! A software vendor can put its own observation points into its software, USDT (User-Level Statically Defined Tracing), and thus supply its own provider. Several observers can watch the same probe in parallel. When a user stops a DTrace program, the reference count of every probe it used is decremented by 1, potentially down to the probe being deactivated.

The syntax of a probe:

Probe: <provider>:<module>:<function>:<name>
Example: FBT:XXX:FFF:ENTRY

The DTrace framework runs in the kernel, so there is no context switch when a trace point is hit. DTrace can keep per-CPU memory buffers (better performance behavior), and we capture exactly the consolidated information: for example, the number of times a function is entered. A big difference from the printf("coucou\n"); method.
To get the list of all providers (registered at time t):

# dtrace -l

Then, for more targeted information:

# dtrace -P <provider>
# dtrace -m <module>
# dtrace -f <function>
# dtrace -n <name>

In passing, it is important to note the usage rights for dtrace, generally reserved for root. You can delegate all or part of these rights using the following privilege levels in /etc/user_attr: dtrace_user, dtrace_proc, dtrace_kernel.

The D language. The D language has almost the syntax of C, with the program structure of awk(1). A script defines a number of predicates, and each time a predicate is matched, an action is triggered. In the action you can set variables, which can themselves be subject to conditions. A DTrace program built this way has global variables, variables local to the executing thread, variables local to the provider, and built-in variables such as execname and timestamp. You then define the action: trace the data, record a stack trace, record the value. When an event fires and the predicate is true, the action is executed.

Example: "Print all the system calls executed by bash"

#!/usr/sbin/dtrace -s
syscall:::entry      /* probe description */
/execname=="bash"/   /* predicate */
{
printf("%s called\n", probefunc);   /* action statements */
}

We have just written a truss-like command capable of tracing ALL the "bash" processes on the system.

Predicate: the action is executed only if the predicate's condition is true. The condition is an expression in the D language.

Action: some actions can change the state of the system in a well-defined way (-w). Examples of actions to record data: trace(); printf(), where the consistency check on the number and type of parameters is done when the script is compiled: better than C!
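Coming back to the delegation of DTrace rights mentioned above: as a hedged sketch (the user name oracle is just an example; run as root on Solaris 10 or later), you can grant the userland DTrace privileges without handing out full root:

```shell
# Hypothetical example: allow the 'oracle' user to instrument its own
# userland processes (dtrace_proc) and use userland providers (dtrace_user),
# while withholding the kernel-level dtrace_kernel privilege.
usermod -K defaultpriv=basic,dtrace_proc,dtrace_user oracle

# Verify the resulting entry in /etc/user_attr:
grep oracle /etc/user_attr
```

The user then needs to log in again for the new default privilege set to take effect.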
stack() records the kernel stack; ustack() records the userland stack of the executing thread; printa() prints an aggregation; exit() exits.

"Destructive" actions (-w): stop() a process; raise() sends a signal to a process; breakpoint() the kernel; panic() the system; chill() introduces latency.

Aggregation. Sometimes it is not the event but the trend that interests us: "how many times is this function called with an argument between 4k and 8k?" This kind of information is provided through the following functions: sum(), count(), min(), max(), avg(), quantize() in powers of 2 (exponential), lquantize() in linear mode. An aggregation can be collected in an array of undetermined size, and can have a name: @name[keys] = aggfunc(args);

For example, if you want to look at the distribution of malloc() sizes in the process you are following:

DTrace script: aggr2.d

#!/usr/sbin/dtrace -s
pid$target:libc:malloc:entry
{
  @["Malloc Distribution"] = quantize(arg0);
}

$ ./aggr2.d -c who

(the pid of the "who" process launched at that moment replaces pid$target in the aggr2.d script)

dtrace: script './aggr2.d' matched 1 probe
dtrace: pid 6906 has exited

Malloc Distribution
value  ------------- Distribution -------------  count
    1  |                                         0
    2  |@@@@@@@@@@@@@@@@@                        3
    4  |                                         0
    8  |@@@@@@                                   1
   16  |@@@@@@                                   1
   32  |                                         0
   64  |                                         0
  128  |                                         0
  256  |                                         0
  512  |                                         0
 1024  |                                         0
 2048  |                                         0
 4096  |                                         0
 8192  |@@@@@@@@@@@                              2
16384  |                                         0

Measuring the time spent in a function. Here we need variables specific to the executing thread: variables prefixed with "self->", which eliminate race conditions on a shared variable. When you reset the variable to zero, DTrace deallocates it (that is its "garbage collector").

#!/usr/sbin/dtrace -s
syscall::open*:entry,
syscall::close*:entry
{
  self->ts = timestamp;
}
syscall::open*:return,
syscall::close*:return
{
  printf("ThreadID %d spent %d nsecs in %s", tid, timestamp - self->ts, probefunc);
  self->ts = 0; /* allow DTrace to reclaim the storage */
}

DTrace also provides access to kernel and external variables. To access external variables, simply prefix them with ` (backquote):

#!/usr/sbin/dtrace -qs
dtrace:::BEGIN
{
  printf("physmem is %d\n", `physmem);
  printf("maxusers is %d\n", `maxusers);
  printf("ufs:freebehind is %d\n", ufs`freebehind);
  exit(0);
}

Speculative tracing. It is important to sort the information collected, because DTrace's in-memory buffers are limited (for good reasons: performance impact, usability in production...). In that case you create a "speculation": you allocate a memory buffer that collects your traces, and if the thread exits with an error you keep the information (for analysis); otherwise you free the buffer's contents. This is very useful when you are chasing a random problem that only occasionally fails.
self->spec = speculation(): sets up a speculative buffer per thread.

An example: looking for mount() calls that exit with an error.

#!/usr/sbin/dtrace -s
syscall::mount:entry
{
  self->spec = speculation();
}
fbt:::return
/self->spec/
{
  speculate(self->spec);
  printf("returning %d\n", arg1);
}
syscall::mount:return
/self->spec && errno != 0/
{
  commit(self->spec);
}
syscall::mount:return
/self->spec && errno == 0/
{
  discard(self->spec);
}
syscall::mount:return
/self->spec/
{
  self->spec = 0;
}

copyin & copyinstr. These functions retrieve information from userland processes so it can be seen in the kernel, since DTrace runs in the kernel: copyin(addr, len); copyinstr(addr) for a character string.

Anonymous tracing. Imagine the system crashes or panics from time to time at boot. To study this with a script is a problem: when there is a crash, the system disappears. To address this, DTrace offers a mode not attached to a given consumer, which is registered in the kernel. The DTrace script is in this case placed in a configuration file started at boot. The trace buffers associated with this script live in the kernel and can therefore be recovered and analyzed from the crash dump. To create this script: # dtrace -A. To collect the information: # dtrace -a.

DTrace: a technology within developers' reach. Beyond the availability of dtrace on a production system, you can also benefit from it during the development phases! D-Light plugin in Sun Studio 12.1: in Sun Studio, through D-Light, you have a whole set of DTrace scripts that retrieve real-time information about the process you are running inside Sun Studio. This collection is done graphically, with fine-grained drill-down capability. You can follow a process built in the IDE, but also a process running on a system.
The crucial point: documentation! A great deal of information (list of providers, etc.), indispensable for analysis, is available on the DTrace wiki: http://wikis.sun.com/display/DTrace/Documentation

There is also a kit with a set of ready-made scripts: the DTrace Toolkit. It is a collection of scripts mixing DTrace and Perl (for presentation) that answer a large number of questions about the various usage parameters of your applications and of the system:

http://hub.opensolaris.org/bin/view/Community+Group+dtrace/dtracetoolkit
http://www.solarisinternals.com/wiki/index.php/DTraceToolkit

You can even use DTrace inside Oracle Solaris Studio (formerly Sun Studio Tools), see above. And, as standard on all your Solaris machines: /usr/demo/dtrace. "Let's do it"!

To go further: DTrace party tonight at Supinfo! Training: EXL-2210 Developing and Optimizing Applications with DTrace > 4 days - Experienced Java and C/C++ Developers; EXL-2211 Optimizing Solaris Administration with DTrace > 4 days - Experienced System Administrators. Bryan Cantrill live: http://www.youtube.com/watch?v=6chLw2aodYQ


Architectures and Technologies

Virtualization: report from the CRIP working group

The day before yesterday, the CRIP working group on virtualization presented its findings. PSA, Orange, Generali and Casino shared their experience and the directions they were taking for their future evolutions. The focus was on the implementation of hypervisor technologies and the different possible implementation choices: single or multi-hypervisor, each with its advantages and drawbacks. Between a unified approach, like Orange's, with a dedicated team on one hypervisor to support heterogeneous environments; and an approach like PSA's, integrated per environment, without a dedicated team, where the expertise sits with the system team (Solaris, Linux, Windows) and is paired with the corresponding vendor's offering to ensure end-to-end support.

Clear gains that push towards global virtualization. In every case, everyone agrees on the benefits sought from server virtualization (see also a previous post on the subject): server consolidation, with the corollary of energy savings; management of server and application obsolescence, as at Sun, for example, with support for Solaris 8 and Solaris 9 Containers on recent Solaris 10 servers; more agility and responsiveness, since virtual environments can be provisioned and moved far more easily than physical servers. However, most companies seem to be at a virtualization rate of 25%, where today's technology should take them closer to 70%. This is due to the adoption phase and to the maturing of hypervisors: not everything was necessarily eligible when the first virtualization projects started.
Today, everything can be virtualized and is virtualized, with more or less caution depending on the environment. Thus, for Solaris, as some of the testimonies pointed out, virtualization can be done "with your eyes closed". Indeed, the Solaris Containers model does away with the hypervisor layer and offers (practically) unlimited use of the machine's resources (no limit on the number of CPUs, memory size or I/O), across platforms (SPARC and x64). On the other hand, you cannot run a Windows or Linux OS in it (with rare exceptions). That is why it is a strategy complementary to a hypervisor model, and one we apply in practice at our customers, both on SPARC architectures (alongside physical domains and Oracle VM for SPARC, a hardware hypervisor) and on x86/x64, notably with the arrival of Solaris support as a guest in Oracle VM for x86, a hypervisor based on the open-source Xen code base that already supports Linux and Windows guests (1).

Communication backed by an economic model: a key success factor. Clearly, substantial communication is needed to roll out a virtualization project across the enterprise, and an economic model attractive to the business lines is indispensable.

Take all costs into account and stay in control. You must also take all the operational costs into account. Indeed, the introduction of the virtualization layer has to be operated and supported. And do not forget that while the gains are real, the proliferation of virtual environments must be kept under control so as not to have a negative effect on ROI (and SLAs), since those environments still need to be administered (patches, updates, ...).
This is also why companies like Casino also value a higher-level approach to virtualization, such as consolidating several databases or application servers on a single OS instance. A strategy that is perfectly compatible with the isolation offered by Solaris Containers running on a single OS image. (1) Note: support for Solaris, OpenSolaris, Oracle Enterprise Linux and Oracle VM is included in the hardware support cost of Sun servers.
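As an illustration of how lightweight the Containers model discussed above is, here is a hedged sketch (zone name and path are made up; run as root on a Solaris 10 system) of creating one:

```shell
# Hypothetical example: define, install and boot a Solaris Container (zone).
zonecfg -z appzone "create; set zonepath=/zones/appzone; commit"
zoneadm -z appzone install
zoneadm -z appzone boot
zlogin -C appzone    # attach to the zone console for initial configuration
```

Each zone is isolated but shares the single underlying Solaris kernel, which is exactly why there is no hypervisor overhead, and also why only Solaris environments can run inside.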


Architectures and Technologies

ZFS for Oracle Databases: Best Practices

Last month, Alain Chéreau, Solaris and Oracle expert at the Sun Solution Center, shared his experience in optimizing Oracle databases on ZFS. So, with a little delay, on the eve of the next GUSES conference on DTrace, here are the key points of his talk.

The basics of ZFS tuning naturally still apply:

- size the memory used by ZFS
- disable the disk-cache flush mechanism on arrays with protected caches
- set the record size carefully: it matters a great deal in an Oracle context, to avoid reading more data than necessary (because of the Copy on Write mechanism), since Oracle performance is mostly sensitive to write response times and read throughput:
  - set recordsize = db_block_size = 8k for indexes and tables (very important, as it eliminates an enormous number of reads)
  - keep 128k (the default) for redo, undo, temp and archive logs
  - for DW databases that are more read-bound than update-bound, set recordsize = db_block_size = 16k or 32k
- pay attention to prefetch (depending on the workload type).

To this, Alain added a number of very relevant optimization points.

Handling synchronous writes

Oracle opens all its files with O_DSYNC and therefore asks ZFS to write synchronously. ZFS thus writes to its ZIL (ZFS Intent Log), which is used only for synchronous writes. Once the write to the ZIL is done, the transaction can be acknowledged on the Oracle side, so speed matters (favor latency). If an external disk array is available, put the ZIL on the storage array (otherwise on SSD). Fifteen seconds' worth of write traffic is enough capacity for the ZFS log device (ZIL).
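Applied with the ZFS command line, the recordsize recommendations above look like this (pool and filesystem names are examples; recordsize only affects files written after the change, so set it before creating the datafiles):

```shell
# Align ZFS recordsize with the Oracle block size per usage.
zfs set recordsize=8k dbpool/oradata     # tables and indexes (db_block_size = 8k)
zfs set recordsize=128k dbpool/redo      # redo: keep the 128k default
zfs set recordsize=128k dbpool/arch      # undo, temp and archive logs likewise
zfs set recordsize=32k dwpool/oradata    # read-mostly DW datafiles: 16k or 32k

zfs get recordsize dbpool/oradata        # verify the setting
```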
Moreover, if the array sees the same blocks being rewritten over and over, it will keep them in cache rather than writing them to disk. A small ZIL is therefore preferable to a large one (use a 64 MB slice inside a LUN if the array's LUNs are too big).

Disk cache, ZFS cache and Oracle cache

Caches at every level: that is the general rule for OLTP! Once the Oracle cache hit ratio is good, there is no need to grow the Oracle cache further; it is better to leave the remaining memory to the ZFS cache, because the two caching policies differ and complement each other.

Sequential writes and reads

ZFS writes sequentially (Copy on Write): all logically random writes become sequential. Its behavior is therefore optimized for disk arrays (not only local disks), and also for indexes (which will generally be placed on SSD as well). On the other hand, watch the behavior of full scans and range scans (sequential reads), whose data will have been scattered by ZFS's Copy On Write (COW). ZFS will still group its I/Os as best it can for this type of read. This also affects backup behavior, which is why snapshots can be interesting here, along with the zfs send/receive functions.

Throughput vs. latency

For large sequential writes it can be useful to avoid a double write (ZIL, then disks); for that, ZFS can be switched to "throughput" mode (but beware: it then applies to the whole machine).
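Adding the small dedicated ZIL device described above to an existing pool is a one-liner (the device name is purely illustrative; it stands for a 64 MB slice on an array LUN, or an SSD):

```shell
# Attach a dedicated log device (slog) so synchronous Oracle writes
# land on the fastest, cache-backed storage available.
zpool add dbpool log c2t5d0s0

# Check that the "logs" vdev now appears in the pool layout.
zpool status dbpool
```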
Thus, if you set zfs_immediate_write_sz to 8000 in /etc/system on SPARC (on Intel it must be below 4096, the pagesize and db_block_size), all writes larger than 8000 bytes will go directly to disk (useful for the db writer processes).

For the redo logs, the Oracle database wants writes to be acknowledged very quickly, so latency must be favored. To do that, declare a separate log device (ZIL) on a ZPOOL dedicated to the redo logs, so as to bypass the "throughput" setting and keep good latency on the redo logs (commit speed).

"SSD aware" ZFS cache: suited to heavy random-read workloads (SGA too small, ZFS primary (RAM) cache too small, and many random reads on disk), or databases with 'sequential read' very prominent (> 40%) in the top 5 Oracle wait events (Statspack or AWR). Watch out for the warm-up time of the secondary cache on SSD... And it is of course pointless for the redo logs (writes). Putting the indexes on SSD also frees up I/O bandwidth on the arrays, which then becomes dedicated to the data streams.

Optimizing by database profile

1. Database with a heavy modification stream: use a separate zpool for the redo logs, with a separate log device (even on the same LUN, using slices):
- one slice for the ZIL
- one slice for the redo logs
- one slice for the data (all the datafiles)
- archive logs: any disk, even internal.
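The two latency-related settings above can be sketched as follows (the /etc/system tunable takes effect at the next reboot; device names are examples, with the ZIL and redo slices carved from the same LUN as suggested):

```shell
# /etc/system entry -- writes above this size bypass the ZIL
# ("throughput" behavior); 8000 on SPARC, below 4096 on x86:
#   set zfs:zfs_immediate_write_sz = 8000

# Dedicated redo pool with its own log device, so redo writes keep
# latency-oriented behavior whatever the global setting says.
zpool create redopool c3t0d0s1 log c3t0d0s0
zfs create redopool/redo
```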
2. Very active and very large database (application volume and activity): separating I/O onto different physical disks is still a valid optimization: define disk structures (zpools/ZFS filesystems) to separate redo, tables, indexes, undo and temp. The more OLTP-like the workload, the more effective this split. If the profile is more decision-support oriented, you can choose a "stripe everything" approach. Of course, all this must be put in perspective with the lifecycle of the array, and is tied to the question of consolidating databases on a shared array and the associated growth policies.

3. Multi-database server (consolidation): use one zpool per usage (redo, datafiles, ...), then create ZFS filesystems per database inside the corresponding zpools.

ZPOOL management

Keep 20% free space in every zpool that receives writes, and use datafile autoextend: the pre-allocation that is generally useful on UFS to keep data contiguous brings nothing on ZFS (because of COW).

ZFS compression

On archive logs: go for it! On datafiles: it looks like a good idea... Plan for 10% to 15% CPU overhead, but the space savings are significant. If you hesitate to enable it directly in production, do enable it in your development and integration environments. Also take advantage of ZFS's dynamic compression management: at database creation time, set compression to "on" by default; you will save both space and creation time (the empty blocks compress well). Then you can set compression back to "off": the blocks holding real data will then be written uncompressed (thanks to the COW mechanism).
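For the consolidation profile and the compression trick, a sketch (pool and dataset names illustrative):

```shell
# One zpool per usage, then one ZFS filesystem per database inside it.
zfs create datapool/db1
zfs create redopool/db1
zfs create archpool/db1

# Archive logs: always compress.
zfs set compression=on archpool/db1

# Datafiles: compress while the database is being created (empty blocks),
# then switch off so later real data blocks are written uncompressed.
zfs set compression=on datapool/db1
#   ... run the database creation here ...
zfs set compression=off datapool/db1
```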
ZFS clones

Take advantage of the Copy on Write mechanism to get one or more clones of your databases instantly: a simple way to provide multiple writable copies on the same set of disks for development, acceptance testing, integration...

Oracle checksums + ZFS checksums: keep both! More safety for a small CPU cost.

Oracle RAC and ZFS: they are incompatible, because ZFS manages its metadata locally.

Other points to consider when optimizing Oracle performance:
- bad SQL, application contention
- lack of CPU power
- memory configuration (SGA, caches)
- network contention
- disk throughput and I/O operation capacity

As Alain put it in conclusion of this session on optimizing Oracle databases on ZFS: "to go further, in the era of multi-core, multi-threaded processors, think parallelism!!!". Running statistics, backups and index builds in parallel are already good standard practices to adopt.

I hope these notes will prove useful when deploying your Oracle databases on ZFS. Thanks again to Alain Chéreau for his in-depth feedback on the subject. And don't forget: tomorrow at Supinfo, the Solaris users group conference on DTrace, to help you master its usage and become more efficient at identifying and fixing your production incidents.
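The instant database copies mentioned above come down to one snapshot plus as many clones as needed (dataset names illustrative; put the database in a consistent state, e.g. hot backup mode, before snapshotting):

```shell
# One snapshot, several writable COW clones of a database filesystem.
zfs snapshot datapool/db1@refresh
zfs clone datapool/db1@refresh datapool/db1_dev   # development copy
zfs clone datapool/db1@refresh datapool/db1_uat   # acceptance-test copy
# Each clone is immediately usable and consumes space only as it diverges.
```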


Architectures and Technologies

Oracle Extreme Performance Data Warehousing

Last Tuesday, Oracle hosted an event on the performance challenges of Data Warehouse environments. For the occasion, Sun was invited to present the infrastructures and solutions that address the ever-growing requirements in this field. And BNP Paribas CIB, represented by Jim Duffy, Head of Electronic Market DW, gave a very interesting testimonial on the evolution phases of their financial-flows Data Warehouse, which I will come back to in this post, talking infrastructure of course: a major foundation for reaching "Extreme Performance".

The explosion of digital data = a strong impact on infrastructures

The numbers speak for themselves: we are witnessing an explosion of digital data. From 2006 to 2009, digital data almost quintupled to reach nearly 500 exabytes, and IDC predicts the same growth by 2012, that is, 2,500 exabytes of digital data worldwide (source: IDC, Digital Universe 2007 and 2009). As a storage vendor and #1 in data protection, we experience it every day at your side. This trend has an impact at several levels:

- on the capacity to store and back up the data
- on the capacity to extract the relevant information from an ever-larger mass of data
- on the capacity to manage the growth of the required compute and storage units while staying "green", that is, while also controlling the impact on power, cooling and floor space in your datacenters.

Infrastructure requirements for Data Warehouses

All of this raises many technical challenges for data warehouses, all the more so since this function has become critical for steering the business.

The first challenge is the ability to scale the entire infrastructure to keep up with the growth in data and users. Jim Duffy illustrated this clearly when presenting the evolution phases of the financial-flow analysis project at BNP. After starting with a few tens of gigabytes loaded per day, the trend accelerated sharply to reach nearly 500 gigabytes over 2010. Thanks to the various Oracle database options (partitioning, compression), presented at the seminar by Bruno Bottereau, Oracle technology presales, BNP was able to keep the data explosion within its Data Warehouse under control. Furthermore, given the expected steep growth in data volumes, the advanced features of the Sun Oracle Database Machine (Exadata), such as Hybrid Columnar Compression, were essential to evaluate in order to contain that growth. As Jim Duffy explained, the evolution looked natural and simple: staying on Oracle technologies, they validated during a real-world Proof of Concept how easily the current Oracle RAC 10g solution could be moved to Exadata on Oracle RAC 11gR2, in record time and with a significant performance gain.

The next challenge is performance, with the need to make intelligent decisions in ever-shorter timeframes over an ever-larger mass of data, which impacts both the processing units and the bandwidth available to handle the data. This point was clearly illustrated by Jim in his talk, where he aims to run analyses in "near" real time (minutes, even seconds!) on the mass of collected data.

With a globalized economy and the need to readjust strategy almost in real time, data warehouses have also seen their availability requirements grow considerably. This is in fact what originally led BNP to deploy an Oracle RAC cluster on Solaris x86 to support their data warehouse.

Since data warehouses hold the company's information, security is an unavoidable part of processing the information stored there: who is allowed to access what? What level of protection is in place (cryptography, ...)? These functions are of course covered by the Oracle database, but they are also in the DNA of the Solaris operating system: a double advantage.

Solutions must obviously be quick to put in place, so that they are not obsolete by the time the data warehouse project is delivered. And they must of course offer an optimized infrastructure cost, in terms of processing power, storage capacity and the associated energy consumption, while meeting all the criteria mentioned so far: scalability, performance, availability, security... Finally, by relying on open standards at every level, they must allow new technology evolutions to be adopted without starting over. In short: be flexible.

The Oracle Sun systems approach

To meet all these needs, Sun's approach has always been to control the development of all the infrastructure components as well as their integration, in order to design consistent, scalable systems from server to storage, including the operating system... and even up to the application... through reference architectures tested and validated with the software vendors, including Oracle!

In short: deliver a complete, consistent system, not just a component. The Sun Oracle Database Machine (Exadata) is a good illustration of this, as a "ready-to-wear" solution. This philosophy applies to the entire systems line-up, while also covering "made-to-measure" needs, such as backup. As an example of a "made-to-measure" solution, here is an illustration of a data warehouse built for one of our customers with very strong requirements on data volumes and availability: more than 300 TB for the Data Warehouse and the Data Marts. The implementation relies on 3x Sun M9000 servers, each able to hold up to 64 quad-core processors (256 cores), up to 4 TB of memory and 244 GB/s of I/O bandwidth: plenty of headroom to grow with peace of mind. The core of the warehouse runs on 1x M9000, with the Data Marts spread across the 2 other M9000s. Availability is provided by the M9000 server itself, fully redundant with no single point of failure. Moving to the new architecture halved the response time of most queries, on ever-growing data. The infrastructure supports more than 1,000 concurrent DW users, and availability was further improved by the internal redundancy of the M9000 servers and their hot-serviceable components. In addition, at the entry and mid-range level, the Oracle Sun T-Series line, although limited to 4 processors, offers unique parallel processing capacity thanks to its 8-core/8-thread processors, coupled with I/O and cryptographic units integrated into the processor, and holds the record for the number of concurrent Oracle BI EE users on a single server.

Which solution to choose: "made-to-measure" or "ready-to-wear"?

Four major criteria will help you select the server best matching your needs: the volume of data to process, the type of queries, the expected service level, and the implementation timeframe. Do not hesitate to contact us so that we can guide you towards the solution best suited to your needs.


Innovation

Solaris virtualization: what are the gains?

Last night we gathered with some twenty Solaris users for the GUSES evening I told you about last week. Bruno Philippe, Solaris expert at a large French bank, brilliantly presented his experience of adopting the virtualization technology included in Solaris, and the associated benefits. Benefits such that the project went well beyond its initial scope of refreshing an aging server estate, and now applies to new projects, not only on SPARC but also on x64. Once defined, the method applies to both environments, since Solaris is the only multi-platform enterprise UNIX.

Maturity!

As Bruno recalled in his introduction, the virtualization included in Solaris (at zero additional cost) is very mature: it has been there since the very beginning of Solaris 10 (January 2005) and is deployed in very varied environments such as Oracle, Sybase and SAP, with the associated administration tools ($U, CFT, MQM, NetBackup... to name just a few). Moreover, it is a solution that keeps being enriched, offering virtualization (almost) without limits, and substantial gains!

Solaris virtualization: what are the gains?

With savings of more than €100K per year, the value of the project was obvious (!) for this large financial institution, and it does not stop there. Add to that streamlined operations, with a ratio of 30 man-days versus 125 previously, as well as an improved service level: not cluster-grade, but very decent, thanks to an architecture that lets Solaris containers be moved very quickly from one server to another. For the more demanding environments, this large bank has in fact deployed 2x Solaris Cluster 3.2 setups handling automatic failover of the Solaris containers in case of incident: one on SPARC and the other on x86/x64.

Project details

For those who want to take the plunge, here are the project details as I noted them last night. I count on Bruno to send any needed corrections.

The goals were the following:

- Cost reduction. The more machines, the more administration man-days (patches, etc.).
- Flexibility. Target higher-capacity servers (particularly in memory) and share resources more efficiently through virtualization.
- Lower operational costs for deployment and updates. A single version is installed and then deployed into several containers, hence a single update for the DBAs.

Other important needs of the UNIX administrators and the DBAs were also addressed through this project. I noted 2 main ones: simplified recovery within a DRP, and the ability to refresh data easily (considered via ZFS snapshot/clone).

A real ROI

A preliminary study was carried out to define the right target architecture and establish the project's ROI, demonstrating its viability. The scope of the study covered 30 physical (SPARC) servers, 70 Oracle instances and 30 Sybase instances. It involved monitoring the existing estate in order to size the servers, define the target server type for the consolidation, and quantify the ROI for procurement.

The values taken into account for the ROI were:

- the (theoretical) power consumption
- the total number of racks used (floor space)
- the number of SAN and network connections (a connection costs €1K per year, according to a vendor study)
- the cost in man-days of various operations, system-level only... with 1 system man-day costing about €500 (fully loaded, for a company)

On the basis of these metrics, the ROI was demonstrated at 2 years (implementation included).
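As a quick sanity check of the figures above, the operations part of the savings can be worked out directly (a back-of-the-envelope sketch; only the man-day and per-connection costs come from the talk, the 10-connection reduction is a made-up example):

```shell
# Rough yearly savings from the ROI inputs quoted above.
mandays_before=125          # system man-days per year, before
mandays_after=30            # system man-days per year, after
cost_per_manday=500         # EUR, fully loaded
connections_removed=10      # hypothetical SAN/network connections freed
cost_per_connection=1000    # EUR per connection per year

ops_savings=$(( (mandays_before - mandays_after) * cost_per_manday ))
conn_savings=$(( connections_removed * cost_per_connection ))
echo "Operations: EUR ${ops_savings}/year, connections: EUR ${conn_savings}/year"
```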
The DBAs then discovered, after the fact, additional gains and savings, making the final ROI even faster. On top of these figures, the maintenance savings (old versus new) were also factored in directly at procurement level.

The target architecture of the technology refresh

After sizing the existing servers (on a spec.org basis with Oracle-style OLTP queries), the consolidation was targeted at Sun M5000 servers: 8 processors with 4 cores each and 2 threads per core, that is, 64 physical execution threads (or virtual processors as seen by the Solaris instance), with 128 GB of RAM (to start with) and doubled network (IPMP) and SAN (MPxIO) interfaces, providing both capacity and resilience. A good resource base for virtualization, given that Solaris virtualization requires administering only a single OS instance, regardless of the number of virtual machines (containers) created from it. Moreover, unlike other virtualization solutions, the size of a container is not capped: it can take the whole machine (here, 64 virtual CPUs) if needed. For more information on the technology itself, I encourage you to read this short guide: "How to Consolidate Servers and Applications using Solaris Containers".

2x M5000 configured this way serve as the foundation for consolidating the Oracle databases. For Sybase, which needs less per-processor power but a lot of memory (notably to host tempdb on tmpfs), 2x T5240 servers with 256 GB of RAM were acquired. In the end, since the containers can be started or moved indifferently on an M5000 or a T5240, the workload distribution is managed transparently according to the needs actually observed.
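Hosting the Sybase tempdb on tmpfs, as mentioned for the T5240s, comes down to a RAM-backed mount (size and mount point are illustrative):

```shell
# RAM-backed filesystem for the Sybase tempdb device.
mkdir -p /sybase/tempdb
mount -F tmpfs -o size=32g swap /sybase/tempdb

# Or persistently, via an /etc/vfstab entry:
#   swap  -  /sybase/tempdb  tmpfs  -  yes  size=32g
```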
The initial scope, refreshing the development and pre-production estate, led to the deployment of 90 containers hosting 110 Oracle databases and about 30 Sybase databases. Moreover, given the success of the operation, the estate virtualized this way has grown, and a foundation on x86/x64, made of several Sun X4600 servers (8 processors with 6 cores each, 48 cores in total per server), is being deployed. For a project started in March 2009, more than 250 containers are deployed to date on some fifteen M5000s, with on average 20 containers per Solaris instance in development and pre-production, and 10 in production (memory, well before CPU, being the limiting factor on the number of possible containers).

I will not go further into implementation details such as:

- the naming conventions for the global zones, the containers and the ZFS pools
- the choice of sparse containers (vs. full) with the zonepath on the SAN (mounted not from the root of the LUN but from a subdirectory, so that Live Upgrade works)
- the choice of one ZFS pool per container
- the ZFS configuration options (you can already get an idea here)
- the choice of a single pool and a single ZFS filesystem per database instance
- the choice of a single VDEV per pool (redundancy being provided by the disk array)

Combinations that certainly had an impact on the performance of zfs send | zfs receive over the network... hence the use of RMAN for some data-refresh needs (oops, it seems I am getting into the details after all...). In any case (at least for GUSES members), Bruno's presentation should be online soon; it covers all the implementation choices and the various options, including for Oracle and Sybase, which were developed in detail during the evening.

So do not miss the next GUSES evenings to benefit from these exchanges first-hand. A small note by way of conclusion: as Bruno's very rich testimonial showed, while virtualization brings many gains, the separation between the physical and the virtual makes it correspondingly harder to keep track of where things actually run. That is why, following this project, this financial institution is taking a very close look at the latest version of our management tool: Ops Center 2.5. And they are not the only ones, judging by the success of the last event organized on this theme at our Paris center, for which a catch-up session is planned for January 7, 2010. Do not hesitate to register!

For those who could not attend, I hope this post conveys what a fascinating testimonial it was; it led to a very rich discussion that ran into overtime with the participants... Following numerous requests, here is the link to join GUSES: http://guses.org/home/divers/inscription. Thanks again to Bruno Philippe, to GUSES and of course to SUPINFO, which hosted us for this evening.


Innovation

The state of the Cloud in France

Last Tuesday, Paris hosted the 3rd edition of CloudStorm (after Brussels and London), an event dedicated to presenting the solutions available today in Cloud Computing, and an opportunity not to be missed to take stock, 6 months after the Paris Cloud Camp.

In this rethinking of IT models, it is clear that SaaS solutions are now well established and mature, particularly the online collaboration offerings. Integration remains, however, a fundamental question between SaaS (Software as a Service) applications and the company's internal applications, and a fortiori for the infrastructure supporting the SaaS services: the scalability criteria that apply to SaaS must also apply to the infrastructure underneath.

As a result, IaaS (Infrastructure as a Service) offerings are arriving on the market. They solve, among other things, the integration issue mentioned above by providing building blocks that include servers, storage and networking, together with the Cloud management tooling: a solution that is integrated, and that can be integrated into an existing datacenter. This is what the company Cloudsphere presented, describing the transformation of their hosting business towards a co-located hybrid infrastructure.

Public, private, global, local clouds?

Even if Cloud Computing players such as Amazon and Google offer a globalized service, at Sun we also see a trend towards the localization of public and private clouds, for very pragmatic and even legal reasons. As Patrick Crasson, Strategic Business Developer at Sun and Business Angel, pointed out in his talk: although "in the clouds", the data is nevertheless located in a country that has its own legislation, not always in line with yours. That can quickly become constraining, even prohibitive, if you are a government body and the service is meant to store citizens' data. It is for the same reasons that financial institutions are studying the feasibility of private clouds, in order to keep total control over their data.

Cloudsphere's proposition makes it possible both to benefit from the advantages of Cloud Computing, by leveraging a shared architecture to absorb load peaks, and to keep a connection to a private, dedicated, co-located system (a co-located hybrid model). It is an interesting answer to the problems of security, data hosting, bandwidth and network access points. The co-located hybrid model thus addresses the 2 sets of constraints mentioned above: simplify integration, and let companies benefit from the advantages of Cloud Computing without suffering its drawbacks. And as you can probably guess, all of this is based on Sun technology, from the infrastructure up to the virtualization layer: VirtualBox.

This model is obviously directly applicable within your company, to create a private Infrastructure-as-a-Service cloud providing an "elastic" solution that lets IT offer greater flexibility and responsiveness, alongside your existing estate. That said, do not think that adding a virtualization layer to your infrastructure is enough to turn it into a cloud and make it "elastic". Just look at the big cloud players such as Google, and you will see that everything was designed end to end in an integrated, purpose-built way (see GoogleFS, down to the optimization of the network interconnect layers [3.2 Data Flow]).

Sun's contribution comes both from the technologies and from the experience acquired in delivering computing power on demand. We are now at the 5th generation of scalable infrastructure building blocks, or "PODs" (Point of Delivery), and we have learned to optimize them in an industrial, repeatable and integrated model (including with the software solutions, such as VirtualBox, OpenStorage or VDI). If I may offer an analogy: the POD is to the Cloud what Exadata is to the database. That is why major players like AT&T have trusted us to build their own public cloud service, or to implement Desktop-as-a-Service solutions.


Architectures and Technologies

Témoignage utilisateur : Virtualisation Solaris en environnement Oracle et Sybase

We mentioned it at the start of the season and, as promised, here is an update on the next evening of the Solaris Users Group (GUSES), devoted to the virtualization experience report from a large French bank.

Event date: December 15, 2009
Venue: SUPINFO, 52 rue de Bassano, 75008 Paris

Virtualization is a major avenue for resource optimization, and a capability provided as standard in Solaris. As part of the GUSES exchange evenings, we invite you to attend this experience report, presented by Bruno Philippe, Solaris expert, in an Oracle and Sybase context (with ZFS) at a large French bank. If you still have questions about how to implement it, what the benefits are and which pitfalls to avoid, do not hesitate to come and get the information at the source. Thanks again to Supinfo, which is hosting GUSES for this event.

Agenda: welcome from 6:30 pm; talk and discussion from 7:00 pm to 8:30 pm. Register now.


Innovation

Run Best on Sun

Here, in a few words, is a summary of the announcements made during Oracle OpenWorld, held in October in San Francisco. The launch in early September of the joint Sun/Oracle solution, Exadata V2, was only a foretaste of the first results of a reinforced collaboration between Sun and Oracle, our teams having worked in partnership to get the best out of Oracle products on Sun platforms. Just look at a few reference benchmarks, covering everything from OLTP to payroll, including ERP with SAP (on an Oracle database). I will let you check the detailed results in the links below, especially if you already have one of these applications deployed or are considering it:
OLTP: TPC-C World Record Sun - Oracle. A benchmark heavily centered on I/O, on which Joerg Moellenkamp offers an interesting analysis in terms of positioning.
Business Intelligence: Oracle BI EE on Sun SPARC Enterprise T5440, World Record.
Analytics: Oracle Hyperion on Sun M5000 and Sun Storage 7410.
ERP: SAP 2-tier SD benchmark on Sun SPARC Enterprise M9000/32 SPARC64 VII.
Payroll: Oracle PeopleSoft Payroll (NA) on Sun SPARC Enterprise M4000 and Sun Storage F5100, World Record performance.
I waited a little before publishing this post, because many of these benchmarks rely on Oracle 11gR2... And that release has been available on Solaris since last week; you can download it here. This is true for all Solaris platforms: SPARC and x86/x64! Beyond the benchmarks, the point is to see what can be directly applicable to you, notably through the hardware and software technologies used and integrated to get the most out of the whole stack.
Let me draw your attention in particular to a new Oracle 11gR2 option that uses SSDs "ZFS style": DB Flash Cache, with performance gains already demonstrated of up to 4x. You can benefit from it right away thanks to the various Flash technologies we offer, including the Sun Storage F5100, which provides 1.9 TB in 1U, draws 300 watts and delivers more than 1 million IOPS. Try putting your database indexes on it and let me know... You can also fit SSDs into our servers, or Flash cache on the PCIe bus with the F20 card (which equips the storage cells of the Exadata V2 solution).
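For readers who want to experiment, the 11gR2 DB Flash Cache is driven by two initialization parameters. The sketch below is only an illustration (the mount point, file name, 64G size and use of an SPFILE are assumptions; check the Oracle documentation for your platform):

```shell
# Hypothetical setup: a flash device (e.g. an F5100 LUN) mounted at /flash.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET db_flash_cache_file='/flash/oracle/flash_cache.dat' SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_size=64G SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
EOF
```

The cache acts as a second level below the buffer cache, so no application change is needed; blocks aged out of the SGA are kept on flash instead of being dropped.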


Architectures and Technologies

7th ITSMF Conference: go on the offensive!

Yesterday I attended the annual ITSMF conference, the big rendezvous of French CIOs and production managers, now that ITIL is truly the reference in production and a standard through ISO 20000-1. The crisis was very present; nevertheless, Jean-Pierre Corniou* invited CIOs to go on the offensive, building on strong messages we have known well at Sun for years: the world is becoming connected, "anywhere, anytime, any device". We are moving from a user/machine model to a machine-to-machine, real-time and mobile model, already illustrated by 4.6 billion mobile subscribers and more than 1.7 billion people (26% of the world population) connected to the Internet. IT is becoming ever more decisive in managing a world dominated by information, where our ability to exploit and analyze it becomes critical to staying competitive. That is why Jean-Pierre concluded on the importance for executives of investing in "the digital continuity of eco-mobility"*, by knowing how to link company strategy with the business contribution of IT through the mastery of "real-time" information it can bring. Next, Jean-Paul Amoros, vice-president of CRIP, presented the success of aligning IT with company strategy, based on his three years of experience at Generali, and showed how developing intimacy with the business lines, and putting in place indicators meaningful to them, differentiates an internal production organization from a "mere" outsourcer managing (IT) SLAs. To conclude the plenary sessions, Pierre Thory gave a status report on the ISO 20000-1 standard (derived from ITIL v2) and the associated international work, where we must notably remain vigilant about other countries pushing their own standards around SOA and, soon, Cloud Computing.
On the user testimonial side, Ludovic Lacote, head of production at ACCOR, presented how they implemented a financial management system for IT services, making it possible to stop the rumor that "IT is expensive" and to put a value on the needs and the contribution of IT to the business! While this ITIL process is often seen as optional, it is clearly mandatory at ACCOR. The Gendarmerie Nationale, with Régis Martin and Philippe Bouchet, presented how they industrialized their production by moving to ITIL, and quickly, with the help of Sun and Cap. Another interesting experience of ITIL adoption by the staff was presented by Robert Demory, of La Poste Courrier, with an approach centered on ownership and in-house implementation. Finally, I also attended the presentation by Xavier Rambaud, CIO of Rhodia, who implemented a solution reporting service-quality indicators as seen by the end user. Very interesting for talking with users on the basis of facts, but also for validating what works, the trends, and the impact of a change. In any case, as you can see, a very rewarding day, full of messages and know-how that are in Sun's DNA! *"The digital continuity of eco-mobility": Jean-Pierre Corniou, president of the coordination body of the TIC PME 2010 program, chose this phrase to illustrate the convergence of the digital revolution with mobility, not only of information but also of objects and people, and with eco-responsibility, a major challenge for everyone's future.


Innovation

GUSES Evening - the Solaris (and OpenSolaris) User Group

Last night was the season opener for GUSES, an opportunity to talk about Solaris and OpenSolaris with very concrete business use cases... An evening where our banking friends were rather well represented and shared some interesting experience reports.
1) ZFS and performance
ZFS is highly valued for its simplicity and the set of features it brings: snapshots, checksums, and so on. The important remaining point is performance. On this topic, let me point to the ZFS Evil Tuning Guide, and in particular a few important parameters to set:
Bound the use of system memory between ZFS and your applications (such as iozone, for people looking to run comparisons, for example). In a recent evaluation against Ext3, on a system with 16 GB of RAM, the right setting was to cap the ARC at 10 GB (leaving 6 GB for the application, IOzone). To limit the memory used by ZFS, set in /etc/system: set zfs:zfs_arc_max = 0x280000000 (to cap it at 10 GB). In addition, swap must be sized appropriately so that memory-page reclamation between ZFS and the applications happens optimally.
Disable disk cache flushing when the ZFS file system sits on a storage array with a protected RAM cache. If this parameter is not set, roughly every 5 seconds ZFS will force the array to flush its entire cache to disk! For more explanation and some results: http://milek.blogspot.com/2008/02/2530-array-and-zfs.html. To disable the flush permanently, add to /etc/system: set zfs:zfs_nocacheflush = 1.
Use the right record size on each ZFS file system: # zfs set recordsize=8k mypool/myDatafs; zfs set recordsize=128k mypool/myRedologfs (see http://blogs.sun.com/roch/entry/tuning_zfs_recordsize).
Beware of ZFS prefetch, which can interact badly with the prefetch done by disk arrays and lead to "pollution" of the array's cache. So if your I/O is not sequential, it can make sense to disable ZFS prefetch by adding to /etc/system: set zfs:zfs_prefetch_disable = 1.
2) Real-time Solaris vs. real-time Linux
An important property sought in a real-time system is determinism. This is what Solaris users observed against Linux (with a real-time kernel) in this context: at low load, Linux processed requests faster than Solaris, but it became highly unstable at heavy load (processing requests more slowly than Solaris), whereas Solaris kept a constant, and therefore deterministic, behavior. To that end, processor sets (psrset(1M)) and interrupt binding (psrset(1M) options -f and -n) are interesting Solaris mechanisms to use.
3) Solaris optimizations for Intel Nehalem (Xeon 5500 series)
I have already written on this subject, but since then a fairly thorough document on these optimizations has become available: The Solaris Operating System - Optimized for the Intel Xeon Processor 5500 Series. It notably includes a chapter for developers on the optimization tools and compiler options that take advantage of the Xeon 5500's new capabilities.
4) The advanced features of OpenSolaris and Solaris for building Cloud architectures
Mikael Lofstrand, Sun Chief Technologist-Networking, has just published a document describing the corresponding architectures, backed by the implementation of a Cloud platform built on these principles, notably the Crossbow network virtualization layer and Solaris containers: The VeriScale Architecture: Towards a Scalable and Elastic Datacenter.
To conclude, note that GUSES members are preparing a conference for late October or early November on customer feedback about Solaris consolidation in an Oracle environment using containers and ZFS.
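To recap point 1 in one place, here is what those ZFS tunings look like in practice (the 10 GB ARC cap matches the 16 GB example above; pool and dataset names are hypothetical):

```shell
# Append the tunables discussed above to /etc/system (reboot required).
# zfs_arc_max: cap the ARC at 10 GB on the 16 GB host of the example.
# zfs_nocacheflush: only if the array write cache is protected.
# zfs_prefetch_disable: only if the I/O pattern is not sequential.
cat >> /etc/system <<'EOF'
set zfs:zfs_arc_max = 0x280000000
set zfs:zfs_nocacheflush = 1
set zfs:zfs_prefetch_disable = 1
EOF

# Per-dataset record size, matched to the application I/O size:
zfs set recordsize=8k mypool/myDatafs        # e.g. 8k database blocks
zfs set recordsize=128k mypool/myRedologfs   # sequential redo-log writes
```

The /etc/system settings are global to the host, while recordsize is set per dataset, which is why datafiles and redo logs are best placed on separate file systems.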


Innovation

Round table at ITIForum: a look back at SSDs and x86 storage

With a slight two-week delay, here is a look back at the round table on SSDs and x86 storage organized by Jean-Pierre Dumoulin at the latest CRIP ITIForum. I am also writing this post in French again, which is preferable given the result of the last machine translation from English to French! Judge for yourself by clicking here. During this round table, the CRIP members put two questions to my fellow storage vendors as well as to a few users: François Dessables of PSA, Christian Jaubert of Bouygues Télécom and Jacques-Alain Barret of Manpower. Let me try to give you a brief summary of both.
1) Using SSD (Flash) disks: what is at stake for the industry, and why is adoption slower in France?
Overall, a consensus emerged on this question. SSDs are a deep trend that is gradually finding its way into every product line, bearing in mind that enterprise SSDs must be distinguished from consumer ones. However, two deployment strategies are emerging. One is classic: integrate SSDs into existing arrays. The other, notably Sun's, relies on SSDs to accelerate I/O transparently to applications and administrators. With SSDs inside storage arrays, you have to define which data will benefit from them, quite a piece of work in perspective. Because you are not going to fill an array entirely with SSDs (to avoid having to answer that question), for two reasons: the cost, and the capacity of current array controllers, which could not sustain the potential IOPS anyway.
Naturally, since Sun offers storage arrays of this type, we can address this need for suitable use cases. This need to think about data placement and its relevance is perhaps one reason for the slow adoption in France, besides the fact that not all storage players offer SSDs yet... The other line of development in which Sun is investing, with solutions already shipping today, is OpenStorage, commercially known as the x45xx and 7000 lines. I have already written a post about this "revolution". In two words, we bring SSDs as close as possible to the processors, inside the servers, and we also use them as a (very, very large) second-level cache in our OpenStorage "arrays". That way, every application benefits de facto. I invite you to check this blog for some NAS performance figures. Now, to ease adoption, the challenge for vendors, in both strategies, is to provide deployment guides per use case. Which we obviously do... If you want to deploy, say, your Oracle databases on Sun Storage 7000, I particularly encourage you to read this Blueprint. And if in doubt, do not hesitate to consult us; we can guide you! For more on SSD technology and the integration approach, see Adam Leventhal's article "Can flash memory become the foundation for a new tier in the storage hierarchy?", as well as his blog. Now on to the second question of the round table...
2) The emergence of x86-based storage offerings (along the lines of what the big web players, Google, Amazon, etc. use): what uses for the industry, and is it an opportunity to cut costs in the current crisis?
To answer this last question I will be much more concise, inviting you to (re)read "OpenStorage: the revolution in data management", already cited above. I will just add that beyond HPC (see: Solving the HPC I/O bottleneck - Sun Lustre Storage System) and grid storage, the industrial uses I see today in projects I am involved in lie mainly in storage for virtualized environments such as VMware, and in database environments for development and integration. x86 storage solutions offer a real opportunity to cut costs. Other use cases exist, a few of which are summarized in the picture opposite. As with SSDs, x86 storage is a deep trend (see: Data Trends... Driving Storage (radical) Evolution), given the performance of current processors and the complete software solutions available, such as the OpenStorage stack in OpenSolaris. We are at the beginning of the standardization of the storage offering: abandonment of proprietary solutions, adoption of standard x86-based solutions and the associated open-source software, relying on innovative technologies (SSD, ZFS...). Hence a strong potential for gains in both performance and cost. It is on this observation that, almost a year ago, Sun merged its server and storage engineering organizations. Our x86 servers (including SSDs) became the basic building block of our x45xx and Sun Storage 7000 storage solutions, adding the intelligence of OpenSolaris and all its OpenStorage features (snapshots, replication, ZFS-style "de-duplication"...).


Innovation

Behind the Clouds... follow up of Cloud Camp Paris

First, my apologies to native French speakers: this time they will have to use machine translation to read this in French. But since the whole Cloud Camp Paris was delivered in English, and brought people from all over Europe and even from the US with the venue of Dave Nielsen, I thought I should write this directly in English. During this event I had the opportunity to present, in 5 minutes (which was a challenge!), what Sun is doing "Behind the Clouds". Given the timing, I concentrated on a few examples across the different Cloud Computing categories (SaaS, PaaS and IaaS): what Sun provides, or will provide, from storage/compute resources and developer platform services up to Software as a Service. Let's start with the Infrastructure as a Service layer. We provide all the optimized building blocks for storage and compute services: optimized hardware, an optimized OS and file system with OpenSolaris and ZFS, up to the management of heterogeneous (Linux, Windows, Solaris, OpenSolaris...) virtual machines through VirtualBox. Also, through its very scalable design pattern, MySQL is a major component for Cloud Computing providers; you will find lots of MySQL nodes inside Google or even Yahoo!. There was an interesting discussion during the Clouds Architecture workshop about the models, and François wrote an interesting summary on his blog. I would just like to take the opportunity to balance the statement he reported about MySQL: in any type of architecture, you need to understand how the underlying building blocks scale to get it right, and unfortunately not everything is on http://www.highscalability.com. So if you want to scale with MySQL, you can; and if you don't know how, ask us... So not only are we providing the building blocks for Cloud Computing providers, like A-Server, but we are also optimizing them with advanced solutions like Hadoop, hosted on cloud providers' clouds.
In this regard, everyone knows the NYT example of using clouds, but a lesser-known fact is that they used the OpenSolaris Hadoop images on Amazon Web Services to do it. Last but not least, those are the components on which we are building our own Public Cloud, which will be in early access this summer and is already available to Sun employees... And as it is in Sun's DNA, we are open: to avoid cloud vendor lock-in, we already provide an open API, AND we open-sourced it, to manage clouds. I invite you to have a look at the Project Kanaï web site. So, the fact that Sun is heavily involved in Infrastructure as a Service was probably no surprise to you. But what about Sun in the other layers of Cloud Computing? What are we doing and providing TODAY as Platform as a Service? I took two examples. The first one is Project DarkStar, a massively multiplayer online gaming platform, on which online games are already being developed and on which you, as a developer, can develop your own game, getting lots of scalability directly out of the platform, such as the sharding principle. The second example is Zembly (a name that stands for "assembly"). This platform offers an online IDE for developers to share components and build out applications that are accessible through social network platforms like Google and Facebook, instant messaging applications like MSN, and even iPhones. And it is open! If you have an API to access your own network, you can upload it into Zembly, and suddenly you have access to developers and to social networks at the same time, connecting your own network into them. Just imagine if a telco operator's API were available inside a social network application (indeed, you may want to check the Ericsson Labs modules on Zembly): suddenly you could send SMS to your friends or geo-locate them from your desktop, bringing mobility into the social network. This is a Cloud Social Application Platform.
Zembly is also a way for you to access the Sun Open Cloud today, as it runs on it. Last Friday I was at The Aquarium session in Paris, where Emmanuel De La Gardette provided a deep dive into Zembly. Here are the links to my presentation and to Emmanuel's on Zembly. There is also a book on Zembly (Assemble the Social Web with zembly), and you can find some examples on YouTube (Creating a Facebook Application Using zembly (Part 1), Bring Your Own API). Last but not least, some examples of Sun SaaS services in development or already available: a "deploy to the Cloud" function inside VirtualBox; "save to / open from the Cloud" inside OpenOffice, already available for Sun employees; identity management for and from the Cloud with Sun IDM. And of course, in the Sun philosophy: no vendor lock-in! That is why we opened our API and open-sourced it, but also why we are actively involved, through the Open Grid Forum, in the Open Cloud Computing Interface definition, which Sam Johnston and Solomon Hykes covered during one of the breakout sessions. So this was a quick overview of some real things we are doing today "Behind the Clouds". I also want to take the opportunity to thank all the participants and organizers of this first Cloud Camp in France, as well as the Sun experts who took part. We shared and learned a lot, also about the organization process... We will improve for the next one, which we plan to hold around the September/October timeframe. If you want to participate and bring your ideas and passion, please register on the cloudcampparis Google group. Finally, I wanted to thank François for having pushed me to release this post. Update: Constantin Gonzalez also blogged about Cloud Camp Paris with lots of details and notes (especially on the Clouds Architectures session) and even audio recordings of the lightning talks! I invite you to visit his blog; it makes very interesting reading and listening.


Innovation

Cloud Camp Paris, June 11 - 6:30pm at Telecom Paris

Cloud Computing marks the emergence of a new way of doing and consuming IT. With the many changes that have recently occurred in the industry, seize the opportunity to share your experience through the many discussions organized at this first event of its kind in France. End users, IT professionals and vendors are all encouraged to participate. This event is made for and by the players of Cloud Computing and is meant to be interactive. It consists of informal sessions and exchanges on themes such as: What is Cloud Computing, why and why not? Open Cloud, on interoperability; Clouds Architecture, on the design patterns for building and using clouds; Clouds Security: what is new, what is different? The conference will be introduced by Sam Johnston, who is actively involved in enterprise cloud projects as well as in working groups, including the Open Grid Forum and the Open Cloud Computing Interface initiative. If you would like to contribute to this event yourself, to present your Cloud Computing solutions or take part in one of the sessions, do not hesitate to contact me. For more detail on the agenda, and to join us on June 11 at Telecom Paris, from 6:30pm:


Innovation

In the footsteps of the first Cloud Camp in Paris...

First meeting yesterday with a few Cloud Computing players to launch the organization of a Cloud Camp in Paris at the end of May, and an opportunity to take stock of Cloud Computing in France. Indeed, while the term is very fashionable, companies still need help identifying the use cases and the associated economic model.
Cloud Computing: why?
First, Cloud Computing represents an evolution of the acquisition model: IT as a service. In part this is not new, and we already have strong experience on this point through the ASP (or SaaS) models, such as Salesforce.com, or other well-known solutions such as PayPal. Where we see an evolution today is in the extension of the service acquisition model, which no longer stops at the application but now also covers the possibility of either developing directly on an external platform (PaaS), or acquiring computing power as a service according to one's needs, immediately (IaaS). One of the watchwords of Cloud Computing is flexibility through pay-per-use: "I only pay for what I need, when I need it". So why only now? A conjunction of several factors:
technological: network bandwidth; the maturity of the model and of open-source software covering all the needs of an information system; new technologies that leverage the processing capacity of servers and the associated storage. All these factors make it possible to put in place hardware and software infrastructures, with the appropriate connectivity, at the best cost. As Sam Johnston said yesterday: the technologies for the "Cloud Operating Environment".
social: society has evolved, the Internet has become a medium of exchange understood by all, and everyone is used to consuming services on the web.
Cloud Computing: for whom?
The answer is simple: for everyone... However, the adoption and usage factors will differ between a CAC40 company and a startup. Startups are the first massive users of Cloud Computing, because for developers it means zero hardware acquisition cost, and even zero administration cost, depending on the chosen model (PaaS or IaaS). Starting from a blank infrastructure, startups can adopt the model all the more easily. At Sun, we have already made IaaS-style environments available on the web to startups and ISVs so they can evaluate their solutions on our environments. The next step will come this summer with the opening of the beta of Sun's Open Cloud. For more mature companies with an IT history and more complex applications, the adoption of the Cloud Computing model happens more function by function, often starting in SaaS mode: extending the use of a Salesforce.com to other applications, such as e-mail, for example. But very quickly this model has to evolve to bring the flexibility expected by developers, and therefore by the business, while addressing some frequently raised constraints such as data security. And that will happen in two ways:
the ability for developers to leverage "public" Cloud Computing (that is, through an external provider) while being able to deploy internally afterwards ("hybrid" usage);
the ability of internal IT to adopt a "private" Cloud Computing model for the best-suited use cases, to be more agile while making the best use of the infrastructure.
What differentiates this model from a hosting or outsourcing model? Two key points:
the acquisition mode, which for IaaS in particular consists of self-provisioning and on-demand billing;
near-real-time adaptability: the capacity to add or remove resources according to need, almost instantaneously.
Flexibility, yes, but... Beware: to take advantage of it, you have to know how to adapt to Cloud Computing, like Smugmug, and often design "at design time" with the Cloud in mind so that it runs flexibly "at run time" (that is, through a horizontal growth model, or "scale out"; thanks to Emmanuel De La Gardette for the formulation). The watchwords here are mastering asynchrony and... the capabilities of the Cloud providers.
The importance of interoperability and standards
In the same philosophy in which, for governance reasons, you have several suppliers and rely on standards to move easily from one to another, the same goes for Cloud Computing, especially if you base a critical function of your company on this model. It is the same principle of reversibility as in outsourcing, except that with "public" Cloud Computing you do not get the hardware back, nor perhaps even the application. This is where you must pay close attention to the data storage format and the associated reversibility mode... This is one of the reasons why the Sun Cloud comes with an open management API, as Tim Bray explains, in order to simplify interoperability. As for your data, and governance in general in Sun's Cloud, Michelle Dennedy is our Chief Governance Officer in that capacity. If you want to know more, or you are a player and want to share on this fascinating subject, come and join us at the next Cloud Camp, about which I will keep you informed in a future post.
Otherwise, I invite you to consult the white paper by Jim Baty and Jim Remmell to place Cloud Computing in reality, not only in terms of use cases but also in terms of underlying technologies. Update, May 9, 2009: the Cloud Camp will finally take place on June 11; register at http://www.cloudcamp.com/paris.


Innovation

OpenSolaris and the Intel Xeon "Nehalem" Processor

Translate in English Lorsque nous mettons en place un partenariat, comme dans le casd'Intel, cen'est pas pour √™tre un simple revendeur, mais bien pour innoverensemble, et apporter le meilleur des 2 soci√©t√©s √†  nos clients. A ce titre, je vous ai d√©j√† parl√© de l'ing√©nierie optimis√©e de nossyst√®mes x86/x64, mais notre collaboration va bien au-del√†...Solaris (et OpenSolaris) est une des raisons majeures de l'accord departenariat qui nous lie avec Intel. De part sa stabilit√© et sacapacit√© √† exploiter des syst√®mes multiprocesseurs et multi-coeurs,Solaris dispose de fonctions avanc√©es... Fonctions qu'Intel int√®gre √†sa nouvelle architecture multi-coeur,Nehalem  pour : exploiter un grand nombre de Threads, gr√Ęce au "Dispatcher"optimis√© de Solaris tirer partie des architectures NUMA, avec la fonction "MemoryPlacement Optimization" (MPO) g√©rer la consommation d'√©nergie, au travers du projet TESLA optimiser les performances des machines virtuelles en collaborantdans le projet de Virtualization xVM Server int√©grer les nouveaux jeux d'instructions dans les outils Solaris(Studio, ...) pour tirer partie des nouvelles fonctions mat√©rielles duprocesseur (XML instruction, loop CPC, counters...) (note: ce n'est pas moi sur la vid√©o, mais David Stewart, Software Engineering Manager d'Intel)  Toutes ces int√©grations sont disponibles dans les distributionsOpenSolaris2008.11 et  Solaris 10 Update 6.A cela s'ajoute √©galement l'optimisation des couches logicielles pourles architectures mutli-coeurs.Sun fournit des logiciels opensource recompil√©s pour ces architectures,au travers des distributions CoolStack.Ces logiciels sont disponibles sur architectures x86/x64, mais√©galement SPARC.Car il ne faut pas l'oublier, Sun a toujours une avance importante surles technologies mutli-coeurs. Nous avons lanc√© d√®s 2005 un processeurSPARC CMT (CMT pour Chip Multi-Threading) 8 coeurs, avec 4 threads parcoeur, soit 32 threads mat√©riels d'ex√©cution. 
This posed a number of challenges at the operating-system level -- challenges which today allow Solaris to excel on this type of architecture. We are now at the 3rd generation of this processor (yes, one per year, not bad for the processor world), which supports 8 cores, 8 threads per core, and systems with up to 4 processors (256 hardware execution threads!).

Now, the question we are often asked: when to use x86/x64, and when to use the massively multi-core/multi-threaded SPARC CMT processor? In short, the x86/x64 architecture is today more general-purpose, and should be preferred when the application is not multi-threaded and when single-thread performance and the associated response time are the important factors -- in short, key for HPC. Conversely, SPARC CMT is really specialized for:

- heavily multi-threaded applications (Java being one of them, of course, which makes a large number of applications eligible)
- predictable behavior (even under very heavy transactional loads): no surprises!
- optimized power consumption (lower frequency = less heat dissipation)
- high MTBF, thanks to a high level of integration of functions into the processor itself (memory controller, I/O, networking and cryptography!)

One point not to be neglected either: configuring and tuning software in a multi-core environment changes! You have to think differently, and sometimes even revisit your application tuning against old habits: JVM tuning for multi-core (GC, Java thread pools...).
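The remark above about rethinking tuning on many-thread machines can be sketched with a thread-pool sizing heuristic. This is a generic illustration, not Sun's guidance: `poolSize` and its blocking-factor parameter are hypothetical names, and the rule of thumb (threads = hardware threads / (1 - fraction of time blocked)) is a common sizing heuristic, not an official recommendation.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolSizing {
    // Size a worker pool from the hardware thread count instead of a fixed
    // constant: a 4-socket SPARC CMT box exposes 256 hardware threads, so
    // hard-coded pools sized for a 4-core x86 box leave it mostly idle.
    static int poolSize(int hardwareThreads, double blockingFactor) {
        // Heuristic: if tasks block blockingFactor of the time, oversubscribe.
        return (int) Math.ceil(hardwareThreads / (1.0 - blockingFactor));
    }

    public static void main(String[] args) throws Exception {
        int n = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(poolSize(n, 0.0));
        Future<Integer> f = pool.submit(() -> 6 * 7);
        System.out.println(f.get()); // prints 42
        pool.shutdown();
    }
}
```

On CPU-bound work the blocking factor is near 0 and the pool matches the hardware thread count; on I/O-heavy work a larger factor justifies oversubscription.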
So, if you want to get the best out of the new multi-core architectures:

- select the hardware according to your needs: x86/x64 or SPARC CMT
- use the right OS: Solaris or OpenSolaris
- use the right software stack
- use the right parameters

...and get the best price/performance/watt/m² ratio.

Note: I have not mentioned the "high-end" SPARC64 systems here, as they belong to a different class of servers than the x86/x64 and SPARC CMT types. These systems nevertheless have a role to play in environments requiring vertical application scaling (SMP, for Symmetric Multi-Processing), heavy I/O and a high level of criticality (notably because they offer hot-maintenance capabilities).


Innovation

OpenStorage: the Revolution in Data Management

Before diving into the birth of the OpenStorage revolution, I would like to start by highlighting 2 important elements of data management today:

- The explosion of data to manage, store and analyze. Enterprises are already at petabytes (I personally have 2TB on my desk...)... soon exabytes...
- A proprietary, captive market to store it in. If I may make an analogy, a market that strongly resembles the printer market: fierce competition on the price per gigabyte of raw storage (the cheapest printer), then very high prices on the additional proprietary functions indispensable for managing such volumes -- replication, snapshot, compression software... -- (the proprietary, non-standard cartridges of the printer world).

And where does Sun fit in all this? We are indeed very present in the storage market, and since we are very fond of innovation, we have taken a radical turn in the economics of storage and data management: OpenStorage. The first solution that served as our proof of concept is called the X4500: a hybrid machine, half storage, half server, able to store 48TB in 4U -- but above all a high-performance, open and integrated solution, which already represents 11PB at one of our French customers in the research world. A solution that provides all the necessary functions from day one: the price per gigabyte includes replication, snapshots, compression... and no more need for fsck(), because the file system (ZFS) guarantees integrity -- one of the major reasons our 11PB customer chose this solution (imagine having to check the integrity of a 48TB file system: that takes time!).
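The fsck-free integrity claim rests on end-to-end checksumming: ZFS stores a checksum with every block and verifies it on each read, so silent corruption is detected immediately instead of by a full offline scan. As a rough conceptual sketch only (plain CRC32 in Java -- not ZFS's actual Fletcher/SHA-256 checksums or on-disk layout):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChecksumDemo {
    // Conceptual sketch of end-to-end checksumming: keep a checksum for each
    // block and verify it on every read, so corruption surfaces at read time.
    static long checksum(byte[] block) {
        CRC32 crc = new CRC32();
        crc.update(block);
        return crc.getValue();
    }

    static boolean verify(byte[] block, long stored) {
        return checksum(block) == stored;
    }

    public static void main(String[] args) {
        byte[] block = "data".getBytes(StandardCharsets.UTF_8);
        long stored = checksum(block);      // computed when the block is written
        block[0] ^= 0x01;                   // simulate silent on-disk corruption
        System.out.println(verify(block, stored)); // prints false
    }
}
```

The point of the design is that verification cost is paid incrementally on each read, rather than as one multi-hour pass over 48TB.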
Since this solution is based 100% on our own hardware and software technologies, we can take a best-cost approach, especially as we leverage open source to enrich it. So much for the second point raised above: the proprietary, captive market is over -- long live Open Storage!

Now it is also important to address the first point: the explosion of data to process. This point is critical and calls for a completely new approach compared to today's classic SAN or NAS storage systems. Indeed, how do you process ever more data? A first element of an answer comes from Google which, finding no solution on the market to index all the data of the internet, developed its own. The principle is simple: since it is impossible to move petabytes, or even exabytes, of data to processing servers, they brought the application as close as possible to the data -- data distributed across bricks that are both server and storage: GoogleFS and the Map/Reduce algorithm... now available in open source through the Hadoop (Map/Reduce) and HDFS (GoogleFS) projects... I have just downloaded the OpenSolaris ISO image (live Hadoop) including everything needed to play with it a bit (thanks to VirtualBox). Obviously, the Sun X4540 brick (an extension of the X4500) is a perfect fit for this type of deployment. It is, in fact, what Greenplum did for its Business Intelligence solution. Of course, not everyone runs Hadoop at home yet -- although people looking to analyze unstructured (and therefore massive) data are watching it very closely. On the other hand, everyone owns file servers, and they too are seeing their storage needs grow dramatically...
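The Map/Reduce principle mentioned above can be shown with a toy word count: map each word to a (word, 1) pair, then reduce by key. This sketch shows the programming model only, in plain Java streams -- it is not the Hadoop API and does none of the distribution that makes the real thing interesting.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MiniMapReduce {
    // Word count in the Map/Reduce style: the flatMap stage plays "map"
    // (emit one record per word), groupingBy/counting plays "reduce by key".
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(l -> Arrays.stream(l.toLowerCase().split("\\s+"))) // map
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting())); // reduce
    }

    public static void main(String[] args) {
        Map<String, Long> counts = wordCount(List.of("open storage", "open source"));
        System.out.println(counts.get("open")); // prints 2
    }
}
```

In Hadoop proper, the map and reduce stages run on the nodes that already hold the data blocks, which is exactly the "move the code to the data" argument made above.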
That is where we decided to act, with the latest Open Storage solutions (S7110, S7210 and S7410) -- with, as a bonus, storage analytics functions (adaptable to your needs) and performance unmatched to date, including for storing "Cloud" data with MySQL. Our ability to combine hardware AND software innovation within the Open Storage systems yields extreme performance, through the combination of SSDs and a file system, ZFS, capable of exploiting them (having SSDs is a necessary but not sufficient condition -- for those who have only that in their catalog -- you also need an "SSD aware" file system: thank you, ZFS). Up to 5x and 40x gains on IOPS, with response times around 1ms (once the SSD cache is "warm" -- which can take a little while)... For the skeptics, the results are there, along with quite a few implementation rules by I/O profile type, provided by Brendan Gregg.

As I said recently, the advantage of adopting Sun's open technologies is that you can download not only the software but the hardware too -- to try it out at home, for free! And very simple to install, according to those who have already tested it:

http://opensrs.com/blog/2008/12/testing-suns-open-storage-platform/
http://www.pcpro.co.uk/reviews/245627/sun-storage-7110-unified-storage-system.html
http://www.backupanytime.com/blog/2008/11/18/my-review-of-sun-storage-7210-unified-storage-system/

If you want to know more, one of the developers of this technology and a performance expert, Roch Bourbonnais, will be in Paris on March 18. I will let you know the logistics shortly for those interested, but you can already book your evening...


Innovation

Sun Microsystems' Technological Innovation... to Lower Your Costs

Simon Phipps, Chief Open Source Officer at Sun, recently took part in the Club Innov-IT -- an initiative supported by Jean-Louis Missika -- at Paris City Hall. Following his visit, I wanted to take stock of Sun's technological innovations, which cover open-source software, but not only that. Sun Microsystems has always invested heavily in research and development ($1.8 billion today), because our strategy, summed up by "The Network is the Computer", rests on leveraging technologies to simplify the information system and the solutions built on it. To that end, our offering covers the whole chain:

- The data center, through "Green IT" innovations such as the mobile, extensible data center, the Sun Modular Data Center 20 (40% more efficient than a classic data center), also available as a "POD" within existing data centers, which allowed Sun to reach a PUE (Power Usage Effectiveness) of 1.28! Since optimizing a data center naturally requires effective measurement, we also developed a solution providing, in real time, the power consumption, load and cooling required per system: Intelligent Power Monitoring thereby delivers a complete energy map of a data center (http://www.sun.com/service/power/) -- an offering already foreshadowing "Cloud Computing", since it is delivered as SaaS. Beyond these technologies, we have built consulting expertise to help you optimize your existing data center or build a new one.
- Data, with a complete offering from disks to tape libraries, and notably a disruptive model: OpenStorage. With the OpenStorage model, we deliver an open solution that includes all the storage software (replication, snapshot, compression...) with the array from day one, avoiding any later surprise. This offering currently covers volumes from 2TB to 266TB and will grow rapidly in capacity over 2009. It is complemented by integrated solutions to optimize data lifecycle management, for example around mail archiving or video data management.
- Systems, covering the volume market through our partnerships with Intel and AMD, with an innovative integration that optimizes the power, space and cooling required by x86/x64 servers. This know-how, coupled with our expertise in open-source software, positions us as the major player in HPC solutions, scientific as well as commercial, and in the Web 2.0 technologies of today and tomorrow. In addition, our range of SPARC-based servers raises competitiveness with a unique offering and delivers substantial performance and energy gains to our customers -- for example, up to 14,000 Siebel users on a single T5440 server (http://www.sun.com/solutions/enterprise/siebel/) drawing 1,900 watts. With Sun's high-end servers, we also provide a guarantee unique on the market in terms of investment protection: we support adding new-generation SPARC64 processors to existing systems, running alongside previous-generation processors.
- Virtualization of IT resources -- another major lever for optimizing the use of systems and storage, to control both their number and their consumption. We provide virtualization solutions covering development, test and support environments (xVM VirtualBox), end users (xVM VDI), servers (Solaris, Physical Partitions, Logical Partitions, and more to come), storage (thin provisioning, SE9990v virtualization) and backup (VTL).
- Software, with an open-source model guaranteeing durable, lowest-cost access to solutions developed for your needs. Sun's software solutions cover the following areas:
  - the operating system, with Solaris, supported on a large number of Sun and non-Sun platforms, and one of the keys to the Sun/Intel agreement thanks to its ability to handle multi-core processors (a strong trend in the processor market);
  - the database, with MySQL, widely used in embedded systems and in high-availability Web and telecom technologies -- here again with an optimal TCO (http://www.mysql.com/tco/);
  - the Java application server, GlassFish, offering all the functionality expected of this type of environment and easy integration for developers;
  - the application integration bus, OpenESB;
  - security, with our market-leading identity management offering, available in open source through OpenSSO, OpenDS and OpenID;
  - the NetBeans development platform, along with the compilers and analysis tools of Sun Studio 12. Note that adopting Sun Studio 12 brings substantial gains in the development cycle, thanks to unique performance analysis and optimization tools provided as standard;
  - software for high-performance computing, with Lustre and Sun Grid Engine;
  - Java. As the creator of this technology, we contribute to enriching and optimizing it, most recently with performance-optimized virtual machines (HotSpot) and richer graphical development solutions (JavaFX). We also offer software development optimization training around Java and C/C++ to help teams take advantage of multi-core processors, through a 5-day workshop based on our customers' own code: immediate benefits, and knowledge retained within the company;
  - StarOffice, the dominant office suite of the free-software world through its OpenOffice version, already widely deployed in the French administration, the national education system and among home users.
- All the way to the extended enterprise, through our secure virtual desktop solution. It offers a significant ROI on desktop and energy management with our SunRay thin clients, which consume 4W (+4W on the server side) with minimal operating costs (at Sun, 2 people for all of Europe). Through its second component, Sun Secure Global Desktop, it also provides secure remote access to any enterprise application (Microsoft, Linux, Unix, mainframe,...) from anywhere in the world (unique on the market for worldwide coverage). The two components (SunRay / Sun Secure Global Desktop) can be deployed separately or together, depending on needs and schedules.

As you can see, Sun innovates to optimize the information system and the solutions associated with it -- all while respecting standards and keeping its solutions open to as many as possible.

NB: The list of solutions and innovations presented in this article is far from exhaustive; note in particular that Sun contributes to more than 750 open-source projects.
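The PUE figure quoted for the Modular Data Center is a simple ratio: total facility power divided by the power actually delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal sketch -- the kW numbers below are illustrative, not measurements from the article:

```java
public class Pue {
    // PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
    // Everything above 1.0 is overhead: cooling, power distribution losses, etc.
    static double pue(double facilityKw, double itKw) {
        return facilityKw / itKw;
    }

    public static void main(String[] args) {
        // Hypothetical site: 1280 kW drawn overall to feed 1000 kW of IT load,
        // matching the 1.28 ratio the article quotes for Sun's design.
        System.out.printf("PUE = %.2f%n", pue(1280, 1000)); // prints PUE = 1.28
    }
}
```

At a PUE of 1.28, only about 22% of the facility's draw goes to overhead, versus roughly half for the "classic" data centers the article compares against.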

