Tuesday Aug 27, 2013

Delivering Cloud with DBaaS - new webcast and e-book

Recently I participated in a live webcast in which Tim Mooney from Oracle and Carl Olofson from IDC discussed customer experiences with building public and private database clouds.  The recorded webcast is now available for on-demand viewing:  Delivering Cloud through Database as a Service

The webcast focuses on how Database as a Service delivers these key cloud benefits:

  • Greater IT efficiency
  • Higher capital utilization
  • Faster time to market

 You may also be interested in the free e-book, Building a Database Cloud for Dummies.

And at this point I'll digress for a moment, as the title of the e-book reminds me of a question that arose during the webcast, and continues to cloud many of our discussions about Database as a Service: are you a consumer, or a provider? 

To see the importance of understanding the consumer/provider point of view, consider the possible answers to this question:  "How much will a typical DBaaS cost?"

If a consumer is asking the question, the answer will be "whatever the provider charges" -- and from there we can look at examples of what public cloud providers charge for DBaaS.

If a provider is asking the question, we have a much more detailed discussion which must cover the entire solution that will host the DBaaS environment, including software, hardware, people and processes.

So when asking questions about DBaaS, make sure to identify your role up front -- this helps discussions get to the point more quickly.

You might wonder, how did the e-book title lead to this digression?  It's simple: the title does not indicate whether the dummies in question are those building the cloud, or are the future consumers of the cloud, or both ... in any case, it's a nicely written book despite the ambiguous title. 



Friday Aug 23, 2013

Database Consolidation onto Private Clouds white paper - updated for Oracle Database 12c

One of our most popular white papers on private database clouds has been expanded and updated to discuss Oracle Database 12c.  Available on OTN, the new version of Database Consolidation onto Private Clouds covers best practices for consolidation with the pluggable databases that the new multitenant architecture provides, along with expanded information on the database and schema consolidation options.  These are the database consolidation models the paper evaluates:  

  • Server
  • Database
  • Schema
  • Pluggable databases


Key considerations for consolidating workloads which the paper explores:

  • Choosing a consolidation model
  • How PDBs solve the IT complexity problem
  • Isolation in consolidated environments
  • Cloud pool design
  • Complementary workloads
  • Enterprise Manager 12c for consolidation planning and operations

 The paper's OTN page is the landing pad for information on private database clouds with the Oracle Database.  Drop in to learn more about isolation and security, consolidation on Exadata Database Machine, and more.


Tuesday Aug 20, 2013

Oracle Database 12c for Private Database Clouds

The release of Oracle Database 12c is accompanied by extensive supporting collateral that details the new features and options of this major release.  But with so much to read and investigate, where to start?  If you don't have the time to pore through everything, then you may wish to organize your reading in terms of the use case you're most interested in.  If your interest is Database as a Service in private database clouds, then may I suggest that you start here:

Accelerate the Journey to Enterprise Cloud with Oracle Database 12c

This paper describes the phases of the journey to enterprise cloud, and enumerates the new features and options in Oracle Database 12c that support each phase.  Oracle Multitenant figures prominently, but it's not the only cloud-enabling topic: Oracle Database Quality of Service Management, Application Continuity, Automatic Data Optimization, Global Data Services and Active Data Guard Far Sync all deliver key benefits for delivering database as a service. 

Further reading and research is suggested by the references included in the paper.

Happy clouding!

Wednesday May 22, 2013

Video : Meet Growing IT Demand for Databases with Private DBaaS

 


This video discusses the difference between traditional database deployment and database as a service. It also provides an overview of Oracle Enterprise Manager's capabilities for rapid deployment of database as a service.

Thursday Jan 03, 2013

Key Principles of Cloud Chargeback System

Contributed by Eric Tran-Le, VP, Product Management, Oracle Enterprise Manager

With public clouds, users can now compare compute pricing pretty much like they would compare car models. One could argue that compute power and services are different from a car, but the fact is that users can compare costs.

Nothing has changed and everything has changed

Cost-wise, nothing has changed, in the sense that the decision to implement a private cloud versus hosting on a public cloud is driven by the same factors as an outsourcing or on-premise decision. It goes well beyond simple cost comparisons to touch upon key principles such as:

- Control of costs
- Visibility of costs
- Fairness of costs

Everything has changed, in the sense that users now have a cost "predictability" expectation, derived from the notion of a "unit of compute" (the equivalent of a standard unit of energy) from which you can predict your resource consumption and infer total costs (a 45 mpg rating means that you can drive 450 miles on 10 gallons, or $40 at $4 per gallon). This is a more fundamental change, in the sense that IT needs to work with the finance department to rationalize the cost-of-compute (infrastructure + software platform) and link it to the cost-to-serve (incident, problem, change and configuration management); the end goal is to produce a standard unit of compute that can be applied to various product and service configurations.
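To make the "unit of compute" idea concrete, here is a minimal Python sketch of cost prediction from a published unit rate, alongside the fuel analogy from the paragraph above. All rates and figures are hypothetical, for illustration only.

```python
# Minimal sketch of cost predictability from a "unit of compute".
# All rates and figures are hypothetical, for illustration only.

def fuel_cost(miles, mpg, price_per_gallon):
    """The fuel analogy: 450 miles at 45 mpg and $4/gallon costs $40."""
    return miles / mpg * price_per_gallon

def predicted_cost(units_consumed, rate_per_unit, fixed_overhead=0.0):
    """Predicted spend: metered units times a published unit rate, plus fixed fees."""
    return units_consumed * rate_per_unit + fixed_overhead

print(fuel_cost(450, 45, 4.00))                        # the $40 example from the text
print(predicted_cost(720, 0.12, fixed_overhead=50.0))  # e.g. 720 unit-hours at $0.12 plus $50 fixed
```

The point is not the arithmetic but the contract: once a standard unit rate is published, consumers can do this prediction themselves, just as a driver does with an mpg rating.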

In part 1 of this blog, we will highlight how a cloud chargeback system can help address the key principles mentioned above, and in part 2 we will talk about how to create a profitable cost recovery model.

Key Principles of Chargeback System

Control of Costs

"What is not defined cannot be controlled", "What is not controlled cannot be measured", "What is not measured cannot be charged".
A cloud chargeback system provides a framework by which you can define a standard unit of cost for products and services. You can set unit costs for resources based on hardware tiers or database high-availability tiers, and you can add fixed costs for ongoing support or for facilities-driven costs such as real estate, power and HVAC. More importantly, you can organize them by business units and cost centers, so that you have a cost structure hierarchy to report on in aggregate or by business unit. To define the right set of configurations, you need to baseline current workload utilization and characterize the usage patterns of groups of users and types of workloads. And since you will use compute-based metrics (CPU, memory, I/O, ...) to measure utilization, it is important to define which layers of computing (servers, middleware, databases, ...) will fairly reflect the actual usage pattern of an application.
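A minimal sketch of such a charge plan, in Python: per-tier unit rates, a fixed cost component, and a roll-up by business unit. The tier names and dollar figures are invented for illustration, not taken from any real charge plan.

```python
# Hypothetical charge plan: tiered unit rates plus a fixed monthly cost,
# rolled up by business unit. All names and rates are illustrative.

TIER_RATE = {"gold": 1.50, "silver": 1.00, "bronze": 0.60}  # $ per CPU-hour, by HA/hardware tier
FIXED_MONTHLY = 25.00  # e.g. support, facilities (real estate, power, HVAC)

def monthly_charge(cpu_hours, tier):
    """Variable usage charge plus the fixed component."""
    return cpu_hours * TIER_RATE[tier] + FIXED_MONTHLY

def rollup_by_unit(usage_records):
    """usage_records: iterable of (business_unit, cpu_hours, tier) tuples.
    Returns total charge per business unit, for aggregate reporting."""
    totals = {}
    for unit, hours, tier in usage_records:
        totals[unit] = totals.get(unit, 0.0) + monthly_charge(hours, tier)
    return totals

usage = [("finance", 100, "gold"), ("finance", 40, "bronze"), ("hr", 80, "silver")]
print(rollup_by_unit(usage))
```

A real cost structure hierarchy would nest cost centers under business units, but the shape of the calculation, variable rate by tier plus fixed costs, stays the same.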

Visibility of Costs

Among all the features, showing business units their VM consumption and allocations is the one that will most fundamentally change the relationship between IT and users. Actual utilization rates, historical trending, heat maps displaying under- and over-utilized servers, and virtual environment configuration attributes such as high availability and regulatory compliance reports will all contribute to better cost-to-serve transparency. Another important capability to look for is the ability to identify servers that cannot be part of a shared pool of resources, due either to legacy applications or simply to very high workloads requiring a dedicated machine; this will help when you start sharing the calculation of the charge plan with the user.
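The under/over-utilization flagging behind such a heat map can be sketched in a few lines of Python. The thresholds and server figures below are purely illustrative.

```python
# Hypothetical sketch: classify servers by average utilization, the kind of
# data behind a heat-map report. Thresholds and figures are illustrative.

def classify(avg_util, low=0.20, high=0.80):
    """Flag a server by its average utilization (0.0 to 1.0)."""
    if avg_util < low:
        return "under-utilized"
    if avg_util > high:
        return "over-utilized"
    return "balanced"

servers = {"srv01": 0.12, "srv02": 0.55, "srv03": 0.91}
report = {name: classify(u) for name, u in servers.items()}
print(report)
```

Under-utilized servers are consolidation candidates; over-utilized or dedicated servers are the ones to exclude from the shared pool when calculating charge plans.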

Fairness of Costs

This is the ONE difference from traditional chargeback models: the metrics collected for charge calculation are actual "IT compute metrics", on top of which you add "service fee-based" metrics. In traditional chargeback models, direct or indirect, the finance department tends to use non-IT metrics, such as the number of users in a business unit as a percentage of aggregate IT costs, or the data center space taken up by racks of servers. This has contributed a great deal to the lack of transparency and the sense of "unfair" chargeback. As part of the charge plan calculation, the characterization of fixed versus variable costs is essential to calculating the unit of charge.

Conclusion

Remember that in the cloud, users do not want to pay for resources they do not use, and they expect to be able to dynamically request, allocate and de-allocate resources. A cloud chargeback system will help you transition to usage-based pricing of your IT resources with the key principles of control, visibility and fairness of costs. Last but not least, you do not need to go for the big bang and turn on chargeback right away; you can implement showback and specific chargeback initiatives so that the business units can work with you on an efficient chargeback model.

Wednesday Dec 05, 2012

Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

Contributed by Sunil Kunisetty and Daniel Chan

Introduction and Architecture

As more and more enterprises deploy some of their non-critical workloads on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources.

Oracle recently announced the Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS), which allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources.  By deploying the plug-in within your Cloud Control environment, you gain the following management features:

  • Monitor EBS, EC2 and RDS instances on Amazon Web Services
  • Gather performance metrics and configuration details for AWS instances
  • Raise alerts and violations based on thresholds set on monitoring
  • Generate reports based on the gathered data

Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and users of the plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API.

This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2. Here is a pictorial view of the overall architecture:


Here are a few key features:

  • Rich and exhaustive list of metrics. Metrics can be gathered from an Agent running outside AWS.
  • Critical configuration information.
  • Custom Home Pages with charts and AWS configuration information.
  • Generate incidents based on thresholds set on monitoring data.

Discovery and Monitoring

AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI), by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:

AWS Resource Type    Target Type           Resource-specific properties
EBS Resource         Amazon EBS Service    CloudWatch base URI, EC2 Base URI, Period, Volume Id, Proxy Server and Port
EC2 Resource         Amazon EC2 Service    CloudWatch base URI, EC2 Base URI, Period, Instance Id, Proxy Server and Port
RDS Resource         Amazon RDS Service    CloudWatch base URI, RDS Base URI, Period, Instance Id, Proxy Server and Port

Proxy server and port are optional, and are only needed if the agent is behind a firewall.

Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.

./emcli add_target \
      -name="<target name>" \
      -type="AmazonEC2Service" \
      -host="<host>" \
      -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
      -subseparator=properties="="

./emcli set_monitoring_credential \
      -set_name="AWSKeyCredentialSet" \
      -target_name="<target name>" \
      -target_type="AmazonEC2Service" \
      -cred_type="AWSKeyCredential" \
      -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

The emcli utility is found under the ORACLE_HOME of the EM12c installation. Once the instance is discovered, the target will show up in the 'All Targets' list under 'Amazon EC2 Service'.

Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages include not only critical metrics, but also vital configuration parameters and the incidents raised for these instances.  By mapping the configuration parameters as instance properties, we can slice and dice and group various AWS instances by leveraging the EM12c Config search feature. The following configuration properties and metrics are collected for these resource types:
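As a rough illustration of that slice-and-dice idea (a sketch of the concept, not the actual Config search implementation), grouping targets by a configuration property can be pictured like this; the instance records are made up.

```python
# Hypothetical sketch: group discovered AWS targets by a configuration
# property such as availability zone. Records are invented for illustration.

from collections import defaultdict

instances = [
    {"id": "i-001",  "type": "EC2", "zone": "us-east-1a"},
    {"id": "i-002",  "type": "EC2", "zone": "us-east-1b"},
    {"id": "vol-01", "type": "EBS", "zone": "us-east-1a"},
]

def group_by(records, prop):
    """Bucket record ids by the value of one configuration property."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[prop]].append(rec["id"])
    return dict(groups)

print(group_by(instances, "zone"))
print(group_by(instances, "type"))
```

Because the plug-in stores configuration parameters as target properties, any such property (zone, instance type, engine version) becomes a grouping axis.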

EBS Resource
  Configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone
  Metrics:
    Response: Status
    Utilization: QueueLength, IdleTime
    Volume Statistics: ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput
    Operation Statistics: ReadSize, WriteSize, ReadLatency, WriteLatency

EC2 Resource
  Configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone
  Metrics:
    Response: Status
    CPU Utilization: CPU Utilization
    Disk I/O: DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput
    Network I/O: NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput

RDS Resource
  Configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone
  Metrics:
    Response: Status
    Disk I/O: ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput
    DB Utilization: BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage

Custom Home Pages

As mentioned above, we have custom home pages for these target types that include basic configuration information, availability over the last 24 hours, top metrics and the incidents generated. Here are a few snapshots.

EBS Instance Home Page:


EC2 Instance Home Page:


RDS Instance Home Page:


Further Reading:

1) AWS Plug-in download
2) Installation and Read Me
3) Screenwatch on SlideShare
4) Extensibility Programmer's Guide
5) Amazon Web Services

Wednesday Nov 14, 2012

Slap the App on the VM for every private cloud solution! Really?

One of the key attractions of the general session "Managing Enterprise Private Cloud" at Oracle OpenWorld 2012 was an interactive role play depicting how to address some of the key challenges of planning, deploying and managing an enterprise private cloud. It was a face-off between Don DeVM, IT manager at a fictitious enterprise 'Vulcan', and Ed Muntz, the Enterprise Manager hero.

 


Don DeVM is really excited about the efficiency and cost savings from virtualization. The success he enjoyed from infrastructure virtualization made him believe that for all cloud service delivery models (database, testing or applications as-a-service), he has a single solution: slap the app on the VM and there you go.

However, Ed Muntz believes in delivering cloud services that allow the business units and enterprise users to manage the complete lifecycle of the cloud services they are providing: for example, setting up the cloud, provisioning it to users through a self-service portal, managing and tuning performance, and monitoring and applying patches for databases or applications.

Watch the video of the face-off, see how Don and Ed address some of the key challenges of planning, deploying and managing an enterprise private cloud, and be the judge!


Thursday Nov 08, 2012

HDFC Bank's Journey to Oracle Private Database Cloud

One of the key takeaways from a recent post by Sushil Kumar is the importance of the business initiative that drives the transformational journey from legacy IT to enterprise private cloud: a journey that leads to an agile, self-service and efficient infrastructure with reduced complexity, and enables IT to deliver services more closely aligned with business requirements.

Nilanjay Bhattacharjee, AVP, IT of HDFC Bank, presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank's Lending Business Segment, which comprises roughly 50% of the Bank's top line. The Bank's Lending Business is always under pressure to launch "New Schemes" to compete and stay ahead in this segment, and IT has to keep up with this challenging business requirement. Lending-related applications are highly dynamic and go through constant changes, and every single change in each related application must be thoroughly tested in UAT before it is certified for production rollout. This leads to constant pressure on IT for rapid provisioning of UAT databases on an ongoing basis, to enable faster time to market.

Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent Database cloud deployment.

“Agility” in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why the Database-as-a-Service (DBaaS) model was the need of the hour in this case:

  • An average of 3 days to provision a UAT database for the Loan Management Application
  • Silo’ed UAT environments with an average of 30% utilization
  • Compliance requirements consume UAT testing resources
  • DBA activities lead to $$ paid to the SI for provisioning databases manually
  • Overhead in managing configuration drift between production and test environments
  • Rollout impact/delays on new business initiatives

The private database cloud implementation progressed through 4 fundamental phases: Standardization, Consolidation, Automation and Optimization of the UAT infrastructure. Project scoping was carried out, and end users and stakeholders were engaged early on, right from the planning phase and through all phases of implementation.

  • The Standardization and Consolidation phases involved multiple iterations of planning to first standardize on infrastructure, database versions, patch levels, configuration, IT processes, etc., along with a database-level consolidation project onto the Exadata platform. It was also decided to cover the existing AIX UAT database landscape, and the EM 12c DBaaS solution, being platform agnostic, supported this model well.
  • The Automation and Optimization phases provided the necessary agility, self-service and efficiency, made possible via EM 12c DBaaS. The EM 12c DBaaS Self-Service/SSA Portal was set up with the required zones, quotas, service templates and charge plan defined. Two zones were implemented: an Exadata zone, primarily for UAT and benchmark testing of databases running on the Exadata platform, and an AIX zone to cover the other databases running on AIX. Metering and Chargeback/Showback capabilities provided business and IT with the framework for cloud optimization, as well as visibility into cloud usage.
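A simplified, hypothetical model of that zone/quota setup, expressed as plain data plus a quota check, can help picture what the SSA Portal enforces. The names and limits below are illustrative, not HDFC Bank's actual configuration.

```python
# Hypothetical sketch of SSA zones and quotas. All names and limits are
# invented for illustration; real setup is done in the EM 12c SSA Portal.

zones = {
    "exadata-uat": {"platform": "Exadata", "purpose": "UAT and benchmark testing"},
    "aix-uat":     {"platform": "AIX",     "purpose": "other UAT databases"},
}

quotas = {"lending-bu": {"databases": 10, "storage_gb": 2048}}

def can_provision(bu, current_dbs, requested_gb, used_gb):
    """Self-service request is allowed only within the business unit's quota."""
    q = quotas[bu]
    return current_dbs < q["databases"] and used_gb + requested_gb <= q["storage_gb"]

print(can_provision("lending-bu", current_dbs=4, requested_gb=250, used_gb=1500))
```

Quotas like these are what keep a self-service portal from turning into uncontrolled sprawl: the consumer gets agility, but only inside limits IT and the business agreed on.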

More details on UAT cloud implementation, related building blocks and EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here.

Some of the key benefits achieved from the UAT cloud initiative are:

  • New business initiatives can be launched easily due to rapid provisioning of UAT databases [~3 hours]
  • Drastically reduced $$ paid to the SI for DBA activities, thanks to Self-Service
  • Effective usage of infrastructure, leading to better ROI
  • Consumers empowered to provision databases using Self-Service
  • Control over project schedules, with the database end date aligned to the project plan submitted during provisioning
  • Databases provisioned through Self-Service are monitored in EM and auto-configured for alerts and KPIs
  • Regulatory requirements for a database do not impact existing projects in the queue

The table below shows the typical list of activities and tasks involved when an end user requests a UAT database. The EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours, and this timing also includes the provisioning time for a database with production-scale data (ranging from 250 GB to 2 TB):


And it's not just about time to provision. This initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources, and IT's role has shifted to enabler of strategic services instead of administrator of all user requests. The strong collaboration between IT and the business community, right from planning through implementation to go-live, has played the key role in achieving this common goal of enterprise private cloud. Finally, real cloud is here, and this cloud is accompanied by rain (business benefits) as well!

For more information, please go to Oracle Enterprise Manager  web page or  follow us at : 

Twitter | Facebook | YouTube | Linkedin | Newsletter

Tuesday Nov 06, 2012

FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation, they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully.

For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

1. Does it make life easier for your end users?

Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers, each with their own skill set. The objective of DBaaS is to make their lives simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schemas, objects and PL/SQL code, and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues.

The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

2. Does it make life easier for your administrators?

Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of version and patch levels and thousands of individual databases?
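The standardizing effect of a Service Catalog can be sketched as a fixed menu of configurations: requests outside the menu are simply rejected. The catalog entries below are hypothetical examples, not an actual Oracle offering list.

```python
# Hypothetical sketch: a Service Catalog as a fixed menu of database
# configurations. Rejecting off-menu requests is what enforces
# standardization. Entries are invented for illustration.

CATALOG = {
    "small-11g":  {"version": "11.2.0.3", "cpus": 2, "memory_gb": 8},
    "medium-12c": {"version": "12.1.0.1", "cpus": 4, "memory_gb": 16},
}

def provision(offering):
    """Provision only what the catalog offers; anything else is an error."""
    if offering not in CATALOG:
        raise ValueError(f"'{offering}' is not in the Service Catalog")
    return dict(CATALOG[offering])  # a copy of the standardized configuration

print(provision("medium-12c")["version"])
```

With a menu like this, every provisioned database lands on one of a handful of known version/patch combinations, instead of the hundreds that free-form self-service would produce.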

Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare.

That leads us to our next question.

3. Does it satisfy your Security Officers and Compliance Auditors?

Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with them as they keep getting reconfigured and moved around. This leads to the proverbial needle in the haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise.

The bottom line is that flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a 'one size fits all' approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics.

If you follow Enterprise Manager's DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, not by putting compliance considerations on the back burner.

4. Does it satisfy your CIO?

Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next?

As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, " the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance.

Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments to implement DBaaS. At Oracle, we always emphasize freedom of platform choice; hence Enterprise Manager's DBaaS solution is platform neutral. It can work on any operating system (that the agent is certified on), on Oracle hardware as well as third-party hardware.

As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

About

"Zero to Cloud" Blog is dedicated to Enterprise Private Cloud Solution.
Zero To Cloud Resource Page
