Wednesday May 22, 2013

Video: Meet Growing IT Demand for Databases with Private DBaaS

 


This video discusses the difference between traditional database deployment and database as a service. It also provides an overview of Oracle Enterprise Manager's capabilities for rapid deployment of database as a service.

Thursday Jan 03, 2013

Key Principles of Cloud Chargeback System

Contributed by Eric Tran-Le, VP, Product Management, Oracle Enterprise Manager

With public clouds, users can now compare compute pricing pretty much like they would compare car models. One could argue that compute power and services are different from a car, but the fact is that users can compare costs.

Nothing has changed and everything has changed

Cost-wise, nothing has changed in the sense that the decision to implement a private cloud versus hosting on a public cloud is driven by the same factors as outsourcing versus on-premise. It goes well beyond simple cost comparisons to touch upon key principles such as:

- Control of costs
- Visibility of costs
- Fairness of costs

Everything has changed in the sense that users now expect cost “predictability”, derived from a notion of a “unit of compute” (the equivalent of a standard energy unit of work) from which you can predict your resource consumption and infer total costs (a 45 mpg rating means that you can drive 450 miles on 10 gallons, or $40 at $4 per gallon). This is a more fundamental change in that IT needs to work with the finance department to rationalize the cost-of-compute (infrastructure + software platform) and link it to the cost-to-serve (incident, problem, change and configuration management). The end goal is to produce a standard unit of compute that can be applied to various product and service configurations.

In part 1 of this blog, we will highlight how a cloud chargeback system can help address the key principles mentioned above, and in part 2 we will talk about how to create a profitable cost recovery model.

Key Principles of a Chargeback System

Control of Costs

“What is not defined cannot be controlled”, “What is not controlled cannot be measured”, “What is not measured cannot be charged”.
A cloud chargeback system provides a framework by which you can define a standard unit of cost for products and services. You can set unit costs for resources based on hardware tiers or database high-availability tiers, and you can add fixed costs for ongoing support or for facilities-driven costs such as real estate, power, HVAC and so on. More importantly, you can organize them by business units and cost centers so that you have a cost structure hierarchy to report on in aggregate or by business unit. To define the right set of configurations, you need to baseline the current workload utilization and characterize the usage patterns by group of users and type of workload. Since you will use compute metrics (CPU, memory, I/O, …) to measure utilization, it is important to define which layers of computing (servers, middleware, databases, …) will fairly reflect the actual usage pattern of an application.
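As an illustration only (not Enterprise Manager's actual charge plan model), here is a minimal Python sketch of such a cost structure; the tier rates, fixed costs and cost center names are assumed values, and the point is simply how unit costs, fixed costs and a business-unit roll-up fit together.

# Minimal sketch of a charge plan: hypothetical rates, not EM12c's actual model.

# Unit costs per hour by hardware / HA tier (assumed values for illustration)
TIER_RATES = {
    "gold":   {"cpu_hour": 0.12, "gb_memory_hour": 0.02},
    "silver": {"cpu_hour": 0.08, "gb_memory_hour": 0.015},
}

# Fixed monthly costs (support, real estate, power, HVAC, ...) per cost center
FIXED_COSTS = {"finance-dbs": 1200.0, "hr-dbs": 800.0}

def monthly_charge(cost_center, tier, cpu_hours, gb_memory_hours):
    """Variable usage charge plus the cost center's fixed allocation."""
    rates = TIER_RATES[tier]
    variable = cpu_hours * rates["cpu_hour"] + gb_memory_hours * rates["gb_memory_hour"]
    return variable + FIXED_COSTS.get(cost_center, 0.0)

# Roll up by business unit so costs can be reported in aggregate or per unit
business_unit = {
    "Finance": monthly_charge("finance-dbs", "gold", cpu_hours=5000, gb_memory_hours=20000),
    "HR":      monthly_charge("hr-dbs", "silver", cpu_hours=1500, gb_memory_hours=6000),
}
print(business_unit, "total:", sum(business_unit.values()))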

Visibility of Costs

Among all the features, showing business units their VM consumption and allocations is the one that will start to change the relationship between IT and users most fundamentally. Actual utilization rates, historical trending, heat maps displaying under- and over-utilized servers, and virtual environment configuration attributes such as high-availability and regulatory compliance reports will all contribute to better cost-to-serve transparency. Another important capability to look for is the ability to identify servers that cannot be part of a shared pool of resources, either because of legacy applications or simply because of very high workloads requiring a dedicated machine; this will help when you start sharing the calculation of the charge plan with the user.

Fairness of Costs

This is the ONE difference from traditional chargeback models: the metrics collected for charge calculation are actual “IT compute metrics”, on top of which you add “service fee-based” metrics. In traditional chargeback models, direct or indirect, the finance department tends to use non-IT metrics such as the number of users in a business unit as a percentage of aggregate IT costs, or the data center space taken by racks of servers; this has contributed a lot to the lack of transparency and the sense of “unfair” chargeback. As part of the charge plan calculation, the characterization of fixed versus variable costs is essential to calculate the unit of charge.
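A minimal way to picture that split, using purely assumed numbers: amortize the fixed portion over the expected consumption and add the variable rate, which yields the unit of charge.

# Illustrative only: derive a unit of charge from a fixed/variable cost split.
fixed_costs_per_month = 50_000.0      # facilities, support, licenses (assumed)
expected_units_per_month = 200_000    # e.g. expected CPU-hours across the pool (assumed)
variable_cost_per_unit = 0.05         # power, storage I/O, ... per CPU-hour (assumed)

unit_of_charge = fixed_costs_per_month / expected_units_per_month + variable_cost_per_unit
print(f"unit of charge: ${unit_of_charge:.2f} per CPU-hour")  # $0.30 per CPU-hour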

Conclusion

Remember that in the cloud, users do not want to pay for resources they do not use, and there is an expectation that they can dynamically request, allocate and de-allocate resources. A cloud chargeback system will help you transition to usage-based pricing of your IT resources with the key principles of control, visibility and fairness of costs. Last but not least, you do not need to go for the big bang and turn on chargeback right away; you can implement showback and specific chargeback initiatives so that the business units can work with you on an efficient chargeback model.

Monday Dec 10, 2012

CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

In today's era, cloud computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver for growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing complete lifecycle services for E&C projects.

CISDI group delegation led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info and Mr. Xin Xiao, Head of CISDI Info's R&D joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled “E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products”.

CISDI Group plans to build its cloud computing platform in three phases: first, it will relocate its existing technologies to Oracle systems and establish a private cloud for CISDI; second, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally, it plans to launch an industrial cloud with a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. “CISDI Cloud” will become the growth engine for the organization to expand its global reach through online services and achieve the strategic objective of being the preferred choice of E&C companies worldwide.


The new cloud computing platform is designed to provide access to a shared computing resource pool in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency. Further, the platform can greatly cut down deployment and maintenance costs.

The CISDI delegation highlighted these points as the key reasons why the group decided on a strategic collaboration with Oracle for building this world-class industrial cloud:
- Oracle’s strategy: Open, Complete and Integrated
- Oracle is the only company that can provide engineered systems, with a complete product chain of hardware and software
- Exadata, Exalogic and EM 12c provide a solid foundation for "CISDI Cloud"

The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud.



CISDI Group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

Wednesday Dec 05, 2012

Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

Contributed by Sunil Kunisetty and Daniel Chan

Introduction and Architecture

As more and more enterprises deploy some of their non-critical workloads on Amazon Web Services (AWS), it’s becoming critical to monitor those public AWS resources alongside their on-premise resources.

Oracle recently announced the Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS), which allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:

  • Monitor EBS, EC2 and RDS instances on Amazon Web Services
  • Gather performance metrics and configuration details for AWS instances
  • Raise alerts and violations based on thresholds set on monitoring
  • Generate reports based on the gathered data

Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and users of the plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API.
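For context, the kind of call the plug-in relies on against the CloudWatch API looks roughly like the sketch below. This is not the plug-in's own code; it is an illustrative example using the boto3 Python SDK directly, and the region, instance Id and credentials are placeholders you would substitute.

# Illustrative sketch: querying CloudWatch directly with boto3 (not plug-in code).
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client(
    "cloudwatch",
    region_name="us-east-1",                  # placeholder region
    aws_access_key_id="<access key id>",      # same credentials the plug-in needs
    aws_secret_access_key="<secret key>",
)

# Average CPU utilization for one EC2 instance over the last hour, in 5-minute periods
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "<EC2 instance Id>"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])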

This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2. Here is a pictorial view of the overall architecture:


Here are a few key features:

  • Rich and exhaustive list of metrics. Metrics can be gathered from an Agent running outside AWS.
  • Critical configuration information.
  • Custom Home Pages with charts and AWS configuration information.
  • Generate incidents based on thresholds set on monitoring data.

Discovery and Monitoring

AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:

AWS Resource Type | Target Type | Resource-specific properties
EBS Resource | Amazon EBS Service | CloudWatch base URI, EC2 Base URI, Period, Volume Id, Proxy Server and Port
EC2 Resource | Amazon EC2 Service | CloudWatch base URI, EC2 Base URI, Period, Instance Id, Proxy Server and Port
RDS Resource | Amazon RDS Service | CloudWatch base URI, RDS Base URI, Period, Instance Id, Proxy Server and Port

Proxy server and port are optional and are only needed if the agent is behind a firewall.

Here is an EMCLI example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions for deploying the plug-in and adding the AWS instances.

./emcli add_target \
      -name="<target name>" \
      -type="AmazonEC2Service" \
      -host="<host>" \
      -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
      -subseparator=properties="="

./emcli set_monitoring_credential \
      -set_name="AWSKeyCredentialSet" \
      -target_name="<target name>" \
      -target_type="AmazonEC2Service" \
      -cred_type="AWSKeyCredential" \
      -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

The EMCLI utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up in the ‘All Targets’ list under ‘Amazon EC2 Service’.

Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:

Resource Type: EBS Resource
Configuration Properties: Volume Id, Volume Type, Device Name, Size, Availability Zone
Metrics:
  • Response: Status
  • Utilization: QueueLength, IdleTime
  • Volume Statistics: ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput
  • Operation Statistics: ReadSize, WriteSize, ReadLatency, WriteLatency

Resource Type: EC2 Resource
Configuration Properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone
Metrics:
  • Response: Status
  • CPU Utilization: CPUUtilization
  • Disk I/O: DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput
  • Network I/O: NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput

Resource Type: RDS Resource
Configuration Properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone
Metrics:
  • Response: Status
  • Disk I/O: ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput
  • DB Utilization: BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage

Custom Home Pages

As mentioned above, we have custom home pages for these target types that include basic configuration information, availability over the last 24 hours, top metrics and the incidents generated. Here are a few snapshots.

EBS Instance Home Page:


EC2 Instance Home Page:


RDS Instance Home Page:


Further Reading:

1) AWS Plug-in download
2) Installation and Read Me
3) Screenwatch on SlideShare
4) Extensibility Programmer's Guide
5) Amazon Web Services

Wednesday Nov 14, 2012

Slap the App on the VM for every private cloud solution! Really?

One of the key attractions of the general session "Managing Enterprise Private Cloud" at Oracle OpenWorld 2012 was an interactive role play depicting how to address some of the key challenges of planning, deploying and managing an enterprise private cloud. It was a face-off between Don DeVM, IT manager at a fictitious enterprise 'Vulcan', and Ed Muntz, the Enterprise Manager hero.

 


Don DeVM is really excited about the efficiency and cost savings from virtualization. The success he enjoyed from infrastructure virtualization made him believe that for all cloud service delivery models (database, testing or applications as-a-service), he has a single solution: slap the app on the VM and here you go.

However, Ed Muntz believes in delivering cloud services that allow business units and enterprise users to manage the complete lifecycle of the cloud services they are providing, for example, setting up the cloud, provisioning it to users through a self-service portal, managing and tuning performance, and monitoring and applying patches for databases or applications.

Watch the video of the face-off, see how Don and Ed address some of the key challenges of planning, deploying and managing an enterprise private cloud, and be the judge!


Tuesday Nov 06, 2012

FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation, they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully.

For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder about:

1. Does it make life easier for your end users?

Database-as-a-Service can have several types of end users: Junior DBAs, QA Engineers, Developers, each having their own skill set. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schemas, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues.

The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

2. Does it make life easier for your administrators?

Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of versions and patch levels and thousands of individual databases?
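Purely as a hypothetical sketch (not Enterprise Manager's actual service catalog format), the catalog-driven discipline described above can be pictured as a small, fixed set of requestable configurations; every offering name and value below is invented for illustration.

# Hypothetical sketch of a DBaaS service catalog: only these configurations
# can be requested through self-service, which is what enforces standardization.
SERVICE_CATALOG = {
    "small-dev-db": {
        "database_version": "11.2.0.3",
        "patch_level": "PSU Oct 2012",
        "cpu_cores": 2,
        "memory_gb": 8,
        "storage_gb": 100,
        "ha_tier": "bronze",
    },
    "prod-ha-db": {
        "database_version": "11.2.0.3",
        "patch_level": "PSU Oct 2012",
        "cpu_cores": 8,
        "memory_gb": 64,
        "storage_gb": 1000,
        "ha_tier": "gold",
    },
}

def request_database(offering):
    """Self-service requests must match a catalog entry; anything else is rejected."""
    if offering not in SERVICE_CATALOG:
        raise ValueError(f"'{offering}' is not in the service catalog")
    return SERVICE_CATALOG[offering]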

Coming back to that what-if, the right question to ask is whether an unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking, backing up and administering all of it? Studies have shown that these administrative overheads increase exponentially with new targets, and they could result in a management nightmare.

That leads us to our next question.

3. Does it satisfy your Security Officers and Compliance Auditors?

Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with them as they keep getting reconfigured and moved around. This leads to the proverbial needle in the haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise.

The bottom line is, flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all’ approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics.

If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies, versus databases on physical RAC, versus Schema-as-a-Service in a single database, should be governed by the needs of the applications and not by putting compliance considerations on the backburner.

4. Does it satisfy your CIO?

Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next?

As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, " the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance.

Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments to implement DBaaS. At Oracle, we always emphasize freedom in choosing a platform; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any operating system (that the agent is certified on), on Oracle’s hardware as well as 3rd-party hardware.

As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

About

"Zero to Cloud" Blog is dedicated to Enterprise Private Cloud Solution.
Zero To Cloud Resource Page
