Tuesday Aug 27, 2013

Delivering Cloud with DBaaS - new webcast and e-book

Recently I participated in a live webcast in which Tim Mooney from Oracle and Carl Olofson from IDC discussed customer experiences with building public and private database clouds.  The recorded webcast is now available for on-demand viewing:  Delivering Cloud through Database as a Service

The webcast focuses on how Database as a Service delivers these key cloud benefits:

  • Greater IT efficiency
  • Higher capital utilization
  • Faster time to market

 You may also be interested in the free e-book, Building a Database Cloud for Dummies.

And at this point I'll digress for a moment, as the title of the e-book reminds me of a question that arose during the webcast, and continues to cloud many of our discussions about Database as a Service: are you a consumer, or a provider? 

To see the importance of understanding the consumer/provider point of view, consider the possible answers to this question:  "How much will a typical DBaaS cost?"

If a consumer is asking the question, the answer will be "whatever the provider charges" -- and from there we can look at examples of what public cloud providers charge for DBaaS.

If a provider is asking the question, we have a much more detailed discussion which must cover the entire solution that will host the DBaaS environment, including software, hardware, people and processes.

So when asking questions about DBaaS, make sure to identify your role up front -- this helps discussions get to the point more quickly.

You might wonder, how did the e-book title lead to this digression?  It's simple: the title does not indicate whether the dummies in question are those building the cloud, or are the future consumers of the cloud, or both ... in any case, it's a nicely written book despite the ambiguous title. 



Friday Aug 23, 2013

Database Consolidation onto Private Clouds white paper - updated for Oracle Database 12c

One of our most popular white papers on private database clouds has been expanded and updated to discuss Oracle Database 12c.  Available on OTN, the new version of Database Consolidation onto Private Clouds covers best practices for consolidation with the pluggable databases that the new multitenant architecture provides, along with expanded information on the database and schema consolidation options.  These are the database consolidation models the paper evaluates:

  • Server
  • Database
  • Schema
  • Pluggable databases


Key considerations for consolidating workloads that the paper explores:

  • Choosing a consolidation model
  • How PDBs solve the IT complexity problem
  • Isolation in consolidated environments
  • Cloud pool design
  • Complementary workloads
  • Enterprise Manager 12c for consolidation planning and operations

 The paper's OTN page is the landing pad for information on private database clouds with the Oracle Database.  Drop in to learn more about isolation and security, consolidation on Exadata Database Machine, and more.


Tuesday Aug 20, 2013

Oracle Database 12c for Private Database Clouds

The release of Oracle Database 12c is accompanied by extensive supporting collateral that details the new features and options of this major release.  But with so much to read and investigate, where to start?  If you don't have the time to pore through everything, then you may wish to organize your reading in terms of the use case you're most interested in.  If your interest is Database as a Service in private database clouds, then may I suggest that you start here:

Accelerate the Journey to Enterprise Cloud with Oracle Database 12c

This paper describes the phases of the journey to enterprise cloud, and enumerates the new features and options in Oracle Database 12c that support each phase.  Oracle Multitenant figures prominently, but it's not the only cloud-enabling topic: Oracle Database Quality of Service Management, Application Continuity, Automatic Data Optimization, Global Data Services and Active Data Guard Far Sync all deliver key benefits for database as a service.

Further reading and research is suggested by the references included in the paper.

Happy clouding!

Monday Feb 18, 2013

Blueprints for your Enterprise Private Cloud

Last week, Oracle introduced Cloud blueprints for enterprise private clouds and posted information about them at Oracle Technology Network. Cloud blueprints are used to describe a desired set of inter-related cloud resources.

Like architectural blueprints (for example, the century-old Waldhaus Gasterntal plan), they describe what you want, including how the components are configured to interact with each other, but not how to build them. For instance, a blueprint doesn't describe the order in which to create the components; rather, the blueprint orchestration logic figures that out based on inter-resource dependencies.

For example, in order to create a set of private cloud resources that interact with each other, such as a WebLogic server instance, an application and a database, you must first create the database and the WebLogic server instance, then deploy the application, and then create a JEE data source to be used by the WebLogic server to connect to the database.

You could always perform all these operations manually, through the Oracle Enterprise Manager Cloud Self Service Portal. You would request creation of the WLS server and the database and wait for each to complete. Periodically, you would check the status of the creation requests. Once the WLS server is created, you could deploy the application. When both the WLS server and the database are created, you could create the JEE data source.

Alternatively, you can use a blueprint that describes the four cloud resources to automate the process. You request instantiation of the blueprint and provide any input parameter values required by the blueprint. The blueprint initiates the creation of the resources and monitors the creation process to ensure that the dependent resources are automatically created as soon as the required resources are created.
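To make the dependency-driven ordering concrete, here is a minimal, illustrative Python sketch; the resource names are hypothetical and this is not the actual EM 12c blueprint syntax. The point is simply that the processor derives the creation order from declared dependencies rather than from a hard-coded sequence.

# Illustrative only: a toy dependency resolver showing how orchestration logic
# can derive a creation order from declared inter-resource dependencies.
# Resource names are hypothetical; this is not the EM 12c blueprint format.
resources = {
    "database":       [],                            # no prerequisites
    "wls_server":     [],                            # no prerequisites
    "application":    ["wls_server"],                # deploy once the server exists
    "jee_datasource": ["wls_server", "database"],    # needs both before it can be wired
}

def creation_order(deps):
    """Simple topological sort over the dependency map."""
    ordered, done = [], set()
    while len(done) < len(deps):
        ready = sorted(r for r, reqs in deps.items()
                       if r not in done and all(q in done for q in reqs))
        if not ready:
            raise ValueError("circular dependency between resources")
        ordered.extend(ready)
        done.update(ready)
    return ordered

print(creation_order(resources))
# ['database', 'wls_server', 'application', 'jee_datasource']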

A blueprint can be used to automate the creation of cloud service instances. An Oracle Enterprise Manager self-service portal user can use blueprints for various reasons:

• To create an application composed of several service instances and related cloud resources.

• To create such sets several times (possibly with small variations).

• To facilitate instance creation for other self-service portal users.

• To eliminate the manual interactive steps that would otherwise be needed to create the set of instances.

• To have a textual representation of the set, e.g. to place it under version control or to give it to someone else in a form they can review and modify.

For example, the Quality Assurance team in an enterprise needs to allocate and release resources required to test a Web application. Instead of manually creating the service instances using the Enterprise Manager Cloud Self Service application, a blueprint can be used to perform this task. One person authors a blueprint so that all QA engineers can simply invoke the blueprint and enter a few input parameter values, after which the resources are created. Each user can watch as the blueprint processor displays the status for creation of each resource.

Another example illustrates a blueprint’s use to address simplicity and consistency concerns. An IT shop has a service template that accepts 8 input parameters. For a specific group of users, the same set of values should be used for 6 of those 8 parameters. A simple blueprint accepts 2 parameters and uses the template to instantiate the instances with the other 6 parameters consistently defined.
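A hedged sketch of that idea follows; the function and parameter names are hypothetical, but it shows the shape of a thin blueprint-style wrapper that pins the six standard values and exposes only the two that vary per request.

# Hypothetical sketch: fix six standard template parameters and expose only
# the two that vary per request. create_from_template stands in for the call
# to the underlying service template.
STANDARD_SETTINGS = {
    "zone": "finance_zone",
    "db_version": "11.2.0.3",
    "storage_gb": 100,
    "ha_tier": "silver",
    "charge_plan": "dept_standard",
    "backup_policy": "nightly",
}

def request_instance(create_from_template, instance_name, owner):
    # Callers supply just the two varying parameters; the rest are standardized.
    return create_from_template(name=instance_name, owner=owner, **STANDARD_SETTINGS)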

The zipped package on cloud blueprints posted on Oracle Technology Network contains the documentation and source code you need to use existing cloud blueprints and to create your own. The overview document introduces the blueprint concepts, including how to deploy an existing blueprint as well as how to write your own. The reference manual goes into much greater detail on blueprints.

Thursday Jan 03, 2013

Key Principles of Cloud Chargeback System

Contributed by Eric Tran-Le, VP, Product Management, Oracle Enterprise Manager

With public clouds, users can now compare compute pricing pretty much like they would compare car models. One could argue that compute power and services are different from a car, but the fact is that users can compare costs.

Nothing has changed and everything has changed

Cost-wise, nothing has changed in the sense that the decision to implement a private cloud versus hosting on a public cloud is driven by the same factors as outsourcing versus on-premise. It goes well beyond simple cost comparisons to touch upon key principles such as:

- Control of costs
- Visibility of costs
- Fairness of costs

Everything has changed in the sense that there is now a cost “predictability” expectation from users, derived from a notion of “unit of compute” (the equivalent of a standard energy unit of work) from which you can predict your resource consumption and infer total costs (a 45 mpg rating means that you can drive 450 miles with 10 gallons, or $40 at $4 per gallon). This is a more fundamental change in the sense that IT needs to work with the finance department to rationalize the cost-of-compute (infrastructure + software platform) and link it to the cost-to-serve (incident, problem, change & configuration management). The end goal is to produce a standard unit of compute that can be applied to various products and services configurations.

In part 1 of this blog, we will highlight how a cloud chargeback system can help address the key principles mentioned above, and in part 2 we will talk about how to create a profitable cost recovery model.

Key Principles of Chargeback System

Control of Costs

“What is not defined cannot be controlled”, “what is not controlled cannot be measured”, “what is not measured cannot be charged”.
A cloud chargeback system provides a framework by which you can define a standard unit of cost for products and services. You can set unit costs for resources based on hardware tiers or database high-availability tiers, and you can add fixed costs for ongoing support or for facilities-driven costs such as real estate, power and HVAC. More importantly, you can organize them by business units and cost centers so that you have a cost structure hierarchy to report on in aggregate or by business unit. To define the right set of configurations, you need to baseline current workload utilization and characterize the usage patterns of user groups and workload types. And since you will use compute metrics (CPU, memory, I/O, ...) to measure utilization, it is important to define which layers of computing (servers, middleware, databases, ...) will fairly reflect the actual usage pattern of an application.
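As a purely illustrative sketch of that cost structure (every rate, tier and name below is invented, not a recommended price), here is how per-tier unit costs and fixed costs might roll up by business unit and cost center:

# Illustrative only: per-tier unit costs plus fixed costs, rolled up by
# business unit and cost center. All rates and names are made up.
RATES = {"gold_db_tier": 1.20, "silver_db_tier": 0.60}   # $ per CPU-hour
FIXED = {"facilities": 500.0, "support": 300.0}          # $ per month, per cost center

usage = [  # (business unit, cost center, tier, metered CPU-hours)
    ("Lending", "cc-101", "gold_db_tier",   4000),
    ("Lending", "cc-102", "silver_db_tier", 9000),
    ("Retail",  "cc-201", "silver_db_tier", 2500),
]

def monthly_charges(records):
    totals = {}
    for bu, cc, tier, cpu_hours in records:
        charge = RATES[tier] * cpu_hours + sum(FIXED.values())
        totals.setdefault(bu, {})[cc] = round(charge, 2)
    return totals

print(monthly_charges(usage))
# {'Lending': {'cc-101': 5600.0, 'cc-102': 6200.0}, 'Retail': {'cc-201': 2300.0}}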

Visibility of Costs

Among all the features, showing business units their VM consumption and allocations is the one that will start to change the relationship between IT and users most fundamentally. Actual utilization rates, historical trending, heat maps displaying under- and over-utilized servers, as well as virtual environment configuration attributes such as high availability and regulatory compliance reports, will all contribute to better cost-to-serve transparency. Another important capability to look for is identifying servers that cannot be part of a shared pool of resources, either because of legacy applications or simply because a very high workload requires a dedicated machine; this will help when you start sharing the calculation of the charge plan with the user.

Fairness of Costs

This is the ONE difference from traditional chargeback models: the metrics collected for charge calculation are actual “IT compute metrics”, on top of which you add “services fee-based” metrics. In the traditional chargeback direct or indirect models, the finance department tends to use non-IT metrics, such as the number of users in a business unit as a percentage of aggregate IT costs, or the data center space taken up by racks of servers; this has contributed a great deal to the lack of transparency and the sense of “unfair” chargeback. As part of the charge plan calculation, the characterization of fixed versus variable costs is essential to calculating the unit of charge.
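To make the fixed-versus-variable point concrete, here is a hedged back-of-the-envelope sketch (every figure is invented): the fixed portion is spread over the pool's expected utilization, and only the variable portion tracks metered usage.

# Illustrative arithmetic only; every figure below is invented.
fixed_costs_per_month = 12000.0   # facilities, support and other costs allocated to the pool
expected_cpu_hours    = 20000.0   # forecast utilization of the shared pool for the month
variable_cost_per_hr  = 0.25      # usage-driven costs (power, I/O, backup storage, ...)

# Unit of charge: variable cost plus fixed costs amortized over expected utilization.
unit_of_charge = variable_cost_per_hr + fixed_costs_per_month / expected_cpu_hours
print(round(unit_of_charge, 2))          # 0.85 per CPU-hour

# A business unit metered at 3,000 CPU-hours would then be charged:
print(round(3000 * unit_of_charge, 2))   # 2550.0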

Conclusion

Remember that in the cloud, users do not want to pay for resources they do not use, and there is an expectation that they can dynamically request more resources, and allocate and de-allocate resources. A cloud chargeback system will help you transition to usage-based pricing of your IT resources with the key principles of control, visibility and fairness of costs. Last but not least, you do not need to go for the big bang and turn on chargeback right away; you can implement showback and specific chargeback initiatives so that the business units can work with you on an efficient chargeback model.

Monday Dec 10, 2012

CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

In today's era, Cloud Computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver to achieve growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company, offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing complete lifecycle services for E&C projects.

CISDI group delegation led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info and Mr. Xin Xiao, Head of CISDI Info's R&D joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled “E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products”.

CISDI group plans to expand through three phases in the construction of its cloud computing platform: first, it will relocate its existing technologies to Oracle systems, along with establishing a private cloud for CISDI; second, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally, it plans to launch an industrial cloud with a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. “CISDI Cloud” will become the growth engine for the organization to expand its global reach through online services and achieve the strategic objective of being the preferred choice of E&C companies worldwide.


The new cloud computing platform is designed to provide access to the shared computing resources pool in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency. Further, the platform can greatly cut down deployment and maintenance costs.

CISDI delegation highlighted these points as the key reasons why the group decided to have a strategic collaboration with Oracle for building this world class industrial cloud -
- Oracle’s strategy: Open, Complete and Integrated
- Oracle as the only company that can provide engineered systems, with a complete product chain of hardware and software
- Exadata, Exalogic and EM 12c to provide a solid foundation for "CISDI Cloud"

The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud.



CISDI group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

Wednesday Dec 05, 2012

Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

Contributed by Sunil Kunisetty and Daniel Chan

Introduction and Architecture

As more and more enterprises deploy some of their non-critical workloads on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources.

Oracle recently announced the Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS), which allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources.  By deploying the plug-in within your Cloud Control environment, you gain the following management features:

  • Monitor EBS, EC2 and RDS instances on Amazon Web Services
  • Gather performance metrics and configuration details for AWS instances
  • Raise alerts and violations based on thresholds set on monitoring data
  • Generate reports based on the gathered data

Users of this Plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd party ticketing applications, etc. AWS monitoring via this Plug-in is enabled via the Amazon CloudWatch API, and users of this Plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API.
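For context, the kind of CloudWatch query the plug-in issues on your behalf looks roughly like the following Python (boto) sketch; the region, namespace, metric name and instance id are placeholders, and the plug-in handles all of this for you once credentials are supplied.

# Rough, illustrative sketch of a CloudWatch metric query similar to what the
# plug-in performs; region, metric, namespace and instance id are placeholders.
from datetime import datetime, timedelta
import boto.ec2.cloudwatch

cw = boto.ec2.cloudwatch.connect_to_region(
    "us-east-1",
    aws_access_key_id="<access key id>",
    aws_secret_access_key="<secret key>")

end = datetime.utcnow()
datapoints = cw.get_metric_statistics(
    period=300,                                   # 5-minute data points
    start_time=end - timedelta(hours=1),
    end_time=end,
    metric_name="CPUUtilization",
    namespace="AWS/EC2",
    statistics=["Average"],
    dimensions={"InstanceId": "<EC2 instance Id>"})
print(datapoints)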

This Plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2. Here is a pictorial view of the overall architecture:


Here are a few key features:

  • Rich and exhaustive list of metrics. Metrics can be gathered from an Agent running outside AWS.
  • Critical configuration information.
  • Custom Home Pages with charts and AWS configuration information.
  • Generate incidents based on thresholds set on monitoring data.

Discovery and Monitoring

AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource type:

  • EBS Resource: target type Amazon EBS Service; resource-specific properties: CloudWatch base URI, EC2 Base URI, Period, Volume Id, Proxy Server and Port
  • EC2 Resource: target type Amazon EC2 Service; resource-specific properties: CloudWatch base URI, EC2 Base URI, Period, Instance Id, Proxy Server and Port
  • RDS Resource: target type Amazon RDS Service; resource-specific properties: CloudWatch base URI, RDS Base URI, Period, Instance Id, Proxy Server and Port

Proxy server and port are optional and are only needed if the agent is within the firewall.

Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions on deploying the plug-in and adding the AWS instances.

# Add the EC2 instance as an Enterprise Manager target
./emcli add_target \
      -name="<target name>" \
      -type="AmazonEC2Service" \
      -host="<host>" \
      -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
      -subseparator=properties="="

# Register the AWS credentials used to call the CloudWatch API for this target
./emcli set_monitoring_credential \
      -set_name="AWSKeyCredentialSet" \
      -target_name="<target name>" \
      -target_type="AmazonEC2Service" \
      -cred_type="AWSKeyCredential" \
      -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up in the ‘All Targets’ list under ‘Amazon EC2 Service’.

Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances.  By mapping the configuration parameters as instance properties, we can slice-and-dice and group the various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:

  • EBS Resource
      Configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone
      Metrics:
        Response: Status
        Utilization: QueueLength, IdleTime
        Volume Statistics: ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput
        Operation Statistics: ReadSize, WriteSize, ReadLatency, WriteLatency

  • EC2 Resource
      Configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone
      Metrics:
        Response: Status
        CPU Utilization: CPU Utilization
        Disk I/O: DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput
        Network I/O: NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput

  • RDS Resource
      Configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone
      Metrics:
        Response: Status
        Disk I/O: ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput
        DB Utilization: BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage

Custom Home Pages

As mentioned above, we have custom home pages for these target types that include basic configuration information, last 24 hours' availability, top metrics and the incidents generated. Here are a few snapshots.

EBS Instance Home Page:


EC2 Instance Home Page:


RDS Instance Home Page:


Further Reading:

1) AWS Plugin download
2) Installation and Read Me
3) Screenwatch on SlideShare
4) Extensibility Programmer's Guide
5) Amazon Web Services

Wednesday Nov 14, 2012

Slap the App on the VM for every private cloud solution! Really?

One of the key attractions of the general session "Managing Enterprise Private Cloud" at Oracle OpenWorld 2012 was an interactive role play depicting how to address some of the key challenges of planning, deploying and managing an enterprise private cloud. It was a face-off between Don DeVM, IT manager at a fictitious enterprise 'Vulcan', and Ed Muntz, the Enterprise Manager hero.

 


Don DeVM is really excited about the efficiency and cost savings from virtualization. The success he enjoyed from infrastructure virtualization made him believe that for all cloud service delivery models (database, testing or applications as-a-service), he has a single solution: slap the app on the VM and there you go.

However, Ed Muntz believes in delivering cloud services that allow the business units and enterprise users to manage the complete lifecycle of the cloud services they are providing: for example, setting up the cloud, provisioning it to users through a self-service portal, managing and tuning the performance, and monitoring and applying patches for databases or applications.

Watch the video of the face-off, see how Don and Ed address some of the key challenges of planning, deploying and managing an enterprise private cloud, and be the judge!


Thursday Nov 08, 2012

HDFC Bank's Journey to Oracle Private Database Cloud

One of the key takeaways from a recent post by Sushil Kumar is the importance of the business initiative that drives the transformational journey from legacy IT to enterprise private cloud: a journey that leads to an agile, self-service and efficient infrastructure with reduced complexity, and enables IT to deliver services more closely aligned with business requirements.

Nilanjay Bhattacharjee, AVP, IT of HDFC Bank presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank's Lending Business Segment, which comprises roughly 50% of the Bank's top line. The Bank's Lending Business is always under pressure to launch “New Schemes” to compete and stay ahead in this segment, and IT has to keep up with this challenging business requirement. Lending-related applications are highly dynamic and go through constant changes, and every single, even minor, change in each related application has to be thoroughly tested in UAT before it is certified for production rollout. This leads to constant pressure on IT for rapid provisioning of UAT databases on an ongoing basis to enable faster time to market.

Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent Database cloud deployment.

“Agility” in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why the Database-as-a-Service (DBaaS) model was the need of the hour in this case -

  • An average of 3 days to provision a UAT database for the Loan Management Application
  • Siloed UAT environment with an average 30% utilization
  • Compliance requirements consume UAT testing resources
  • DBA activities lead to $$ paid to the SI for provisioning databases manually
  • Overhead in managing configuration drift between production and test environments
  • Rollout impact/delays on new business initiatives

The private database cloud implementation progressed through 4 fundamental phases: Standardization, Consolidation, Automation and Optimization of the UAT infrastructure. Project scoping was carried out, and end users and stakeholders were engaged early on, right from the planning phase and through all phases of implementation.

  • The Standardization and Consolidation phases involved multiple iterations of planning to first standardize on infrastructure, database versions, patch levels, configuration, IT processes, etc., along with a database-level consolidation project onto the Exadata platform. It was also decided to cover the existing AIX UAT database landscape, and the EM 12c DBaaS solution, being platform agnostic, supported this model well.
  • The Automation and Optimization phases provided the necessary agility, self-service and efficiency, and this was made possible via EM 12c DBaaS. The EM 12c DBaaS Self-Service (SSA) Portal was set up with the required zones, quotas, service templates and charge plans defined. Two zones were implemented: an Exadata zone, primarily for UAT and benchmark testing of databases running on the Exadata platform, and a second zone for the AIX setup to cover the other databases running on AIX. Metering and chargeback/showback capabilities provided business and IT the framework for cloud optimization as well as visibility into cloud usage.

More details on UAT cloud implementation, related building blocks and EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here.

Some of the key benefits achieved from the UAT cloud initiative are:

  • New business initiatives can be easily launched due to rapid provisioning of UAT databases (~3 hours)
  • Drastically cut down $$ paid to the SI for DBA activities, thanks to self-service
  • Effective usage of infrastructure leading to better ROI
  • Empowering consumers to provision databases using self-service
  • Control on project schedule, with the database end date aligned to the project plan submitted during provisioning
  • Databases provisioned through self-service are monitored in EM and auto-configured for alerts and KPIs
  • Regulatory database requirements do not impact existing projects in the queue

The table below shows a typical list of activities and tasks involved when an end user requests a UAT database. The EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours, and this timing also includes provisioning time for databases with production-scale data (ranging from 250 GB to 2 TB of data):


And it's not just about time to provision: this initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources, and IT's role has shifted to enabler of strategic services instead of administrator of all user requests. The strong collaboration between IT and the business community, right from planning to implementation to go-live, has played the key role in achieving this common goal of enterprise private cloud. Finally, real cloud is here, and this cloud is accompanied by rain (business benefits) as well!

For more information, please go to Oracle Enterprise Manager  web page or  follow us at : 

Twitter | Facebook | YouTube | Linkedin | Newsletter

Tuesday Nov 06, 2012

FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation, they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully.

For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder about:

1. Does it make life easier for your end users?

Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers- each having their own skillset. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues.

The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

2. Does it make life easier for your administrators?

Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of version and patch levels and thousands of individual databases?

Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare.

That leads us to our next question.

3. Does it satisfy your Security Officers and Compliance Auditors?

Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with them as they keep getting reconfigured and moved around. This leads to the proverbial needle in the haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise.

The bottom line is, flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all’ approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics.

If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies, versus databases on physical RAC, versus Schema-as-a-Service in a single database, should be governed by the needs of the applications and not by putting compliance considerations on the back burner.

4. Does it satisfy your CIO?

Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next?

As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, " the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance.

Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments to implement DBaaS. At Oracle, we always emphasize freedom of platform choice; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any operating system (that the agent is certified on), on Oracle’s hardware as well as 3rd party hardware.

As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

Monday Nov 05, 2012

Clouds Everywhere But not a Drop of Rain – Part 3

I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan.  I have also stressed key enterprise requirements such as “broad and deep solutions”, “running your mission critical applications” and “an automated and integrated set of capabilities”. Let me walk you through some key cloud attributes such as “elasticity” and “self-service” and what they mean for an enterprise class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs.

Cloud Elasticity and Enterprise Applications Requirements

Easy and quick scalability for a short-period of time is the signature of cloud based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment.

Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of “just-in-time” serving of compute resources is well known for mid-tier “stateless” servers, such as web application servers and web servers, that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as “stateless”, and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where Cloud meets Grid Computing.

At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity, both vertically and horizontally, without requiring any application changes.

A number of technology vendors take the rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "Cloud Scale-Out" solution. While this may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Real Application Clusters and Automatic Storage Management, on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, it just talks to a single database, and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
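As a concrete illustration (the connection details below are hypothetical), the application addresses a single logical database service; whether that service is backed by one instance or eight is invisible to the code, which is what lets the database tier scale out without application changes.

# Hypothetical connection details; the point is that the application sees one
# logical database service regardless of how many RAC instances back it.
import cx_Oracle

# EZConnect string pointing at a cluster SCAN address and a service name.
conn = cx_Oracle.connect("app_user", "app_password",
                         "prod-cluster-scan.example.com:1521/sales_svc")

cur = conn.cursor()
cur.execute("SELECT instance_name FROM v$instance")  # which node served us is incidental
print(cur.fetchone())
conn.close()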

For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.

Assemble, Deploy and Manage Enterprise Applications for Cloud

Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, performance and availability monitoring, etc., are also automated using Oracle Enterprise Manager.

An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility of the usage cost with a “showback” or a “chargeback”. With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels “applications-to-disk” in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and setup, service delivery, operations management, metering and chargeback, etc.  Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud!

Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private, hybrid, infrastructure, platform and applications clouds. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry.  And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

Summary 

But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather it is to help you ask the right questions before you embark on your cloud journey.  So to summarize, here are the key takeaways.      

  • It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail.
  • Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.
  • Don’t make the mistake of equating cloud to virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher level services before you begin. And ask your vendors how they are going to be your partners in this journey.
  • Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with same kind of control and transparency that they are used to while using a public cloud service. 
  • Evaluate the benefits of integration, as opposed to blindly following the best-of-breed strategy. Integration is a huge challenge and more so in a cloud environment. There are enormous costs associated with stitching a solution out of disparate components and even more in maintaining it.

Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

Monday Oct 29, 2012

Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part, I shared with you how a broad-based transformation makes cloud more than a technology initiative. In this section, I will describe how it requires people (organizational) and process changes as well, and how these changes are as critical as the choice of the right tools and technology.

People: Most IT organizations have a fairly complex organizational structure. There are different groups, managing different pieces of the puzzle, and yet, they don't always work together. Provisioning a new application therefore may require a request to float endlessly through system administrators, DBAs and middleware admin worlds – resulting in long delays and constant finger pointing.  Cloud users expect end-to-end automation - which requires these silos to be greatly simplified, if not completely eliminated.  Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages back will not only impede cloud adoption,  it also risks making the IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.  

Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes.  While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.

Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation, thereby providing users the required agility while minimizing operational costs.  Obviously, automation is the key to cloud. Unfortunately, it hasn't received as much attention within enterprises as it should have.  Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical.

For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications:

Broad and Deep Solution

It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about.  Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.

Run and Manage Efficiently Your Mission Critical Applications

It should not only be able to run your mission critical applications, it should do so better than before.  For enterprises, applications and data are the critical business assets.  As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud".  Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around.

Automated, Integrated Set of Cloud Management Capabilities

At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic  processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution.  An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you. 

At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises.  And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year.  Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private, hybrid, infrastructure, platform and applications clouds. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry.  And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they mapped into our vision of Enterprise Cloud. Stay Tuned.

Tuesday Oct 23, 2012

Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly Cloud. Over the five action-packed days, I got to meet a large number of customers, and most of them had serious interest in all things cloud.  Public Cloud - particularly the Oracle Cloud - clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud are pretty straightforward. However, when it comes to private cloud, there were some interesting revelations.  Well, I shouldn't really call them revelations since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions.

While the interest in enterprise private cloud remains very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call a true private cloud - with capabilities such as self-service and chargeback/showback. For a large majority, today's reality is simply consolidation and virtualization - and they are quite far off from creating an agile, self-service and transparent IT infrastructure, which is what the enterprise cloud is all about.  Even the handful of those who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure-centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service isn't something that they see an immediate need for.

This is quite in contrast to the picture being painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud.  On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality?

Well, the reality is that there is undoubtedly a huge amount of interest among enterprises about transforming their legacy IT environment - which is often seen as too rigid, too fragmented, and ultimately too expensive - to something more agile, transparent and business-focused. At the same time however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter on the blogosphere, it will get even a sane mind spinning.  Consequently, most  enterprises are still struggling to fully understand the concept and value of enterprise private cloud.  Even among those who have chosen to move forward relatively early, quite a few have made their decisions more based on vendor influence/preferences rather than what their businesses actually need.  Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends. 

So what is the way forward?  I certainly do not claim to have all the answers. But here is a perspective that many cloud practitioners have found useful and thus worth sharing.

To take a step back, the fundamental premise of the enterprise private cloud is IT transformation. It is the quest to create a more agile, transparent and efficient IT infrastructure that is driven more by business needs rather than constrained by operational and procedural inefficiencies. It is the new way of delivering and consuming IT services - where the IT organizations operate more like enablers of  strategic services rather than just being the gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower the end users via self-service access/control and provide the business stakeholders a transparent view of how the resources are being used, what’s the cost of delivering a given service, how well are the customers being served, etc.  But the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn’t be. Just remember how the business users have been at the forefront of public cloud adoption within enterprises and private cloud is no exception.  

Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as is the choice of right tools and technology. In my next blog,  I will share how essential it is for enterprise cloud technology to go hand-in hand with process re-engineering and organization changes to unlock true value of  enterprise cloud.

I am sharing a short video from my session "Managing your private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page.

Many other experts on Oracle's enterprise private cloud solution will join me on this blog, "Zero to Cloud", and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback and suggestions, and to having an engaging conversation with you on this blog.

About

"Zero to Cloud" Blog is dedicated to Enterprise Private Cloud Solution.
Zero To Cloud Resource Page
