An Oracle blog about Database and Grid Infrastructure Maintenance

Recent Posts

Oracle OpenWorld 2017 speaking session published - Database Management with Gold Images: Easily Provision, Patch, and Upgrade

The speaking schedule for this year's Oracle OpenWorld gathering has been published. If you are interested in managing Oracle Databases with a powerful set of workflows that use Gold Images and out-of-place deployment for all scenarios, then you won't want to miss my session on Monday, October 2 at 11:00 a.m. in Moscone West, room 3004 (CON 6706). I will cover current functionality and preview what's coming in the next generation. I'll be presenting with two DBAs from a large financial institution who are using this functionality in their estate, so this is a great chance to hear about their experiences and lessons learned. It's also a good chance to hear from me before my voice gives out, as it usually does after a few days in the demo area. An interesting related session covers online planned maintenance and complementary technologies that enable zero-impact patching and upgrades for Oracle Databases. That session is on Tuesday at 3:45, so you won't face a difficult choice between the two. To make things even easier, it's in the same room as my session. See you there - arrive early to make sure you get a seat!

Some more Q and A from the Rapid Home Provisioning and Maintenance world

Our previous blog entry looked at some common "introductory" questions about RHP. This blog looks into some more detailed questions.

Q: Does RHP have any special capabilities for updating a RAC environment?
A: Indeed it does! In fact, RHP is developed by the same team that develops RAC and the Grid Infrastructure, so RHP's capabilities are tightly integrated with both of those products, enabling simple and flexible operations that maximize service availability during maintenance. For example, RHP can be directed to drain workload from each node of a cluster before updating the DB and/or Grid software, so that no work is disrupted when the stack is restarted. To see a demo of a RAC patching operation, visit the Oracle Learning Library.

Q: Does RHP understand Data Guard environments?
A: Yes. Specifically, at the end of a Database update operation, RHP will NOT run Datapatch if the Database is a Data Guard standby. (Otherwise, RHP does run Datapatch.)

Q: What happens if I am applying a non-RAC-rolling Database patch, but I ask RHP to apply it rolling?
A: RHP checks the metadata of the patches being applied, and will stop and report an error if you attempt to roll a non-rolling patch.

Watch for more ...
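
To make the rolling update concrete, here is a minimal sketch of a patching move; the working copy and database names are hypothetical, and exact flags can vary by release (check rhpctl move database -help):

# Move mydb from its unpatched home to a patched working copy;
# for RAC, instances are restarted in rolling fashion by default.
rhpctl move database -sourcewc wc_db_12102 -patchedwc wc_db_12102_psu -dbname mydb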

Rapid Home Provisioning and Maintenance - some recent Q & A

In the recent RAC SIG webcast, and during conversations with various customers, several specific questions often come up. This blog will address some of them.

Q: Does the RHP Server need to be a RAC Cluster?
A: No. The RHP Server code is part of Grid Infrastructure, so all that is needed is a GI installation of one or more nodes. Of course a multi-node cluster provides HA for the RHP Server, but targets and RHP Clients operate independently of the server, and function normally whether the server is online or not. (If the RHPS is offline, RHP Clients cannot execute rhpctl commands, but are otherwise unaffected.)

Q: Does the RHP Server need to be running the same OS as the targets it manages?
A: No. The RHP Server can be Linux, Solaris or AIX, and the targets can be any mix of the same. Note that Windows is not supported - neither as an RHP Server nor as a target.

Q: In order for RHP to manage my existing Database estate, do I need to install any software on the existing deployments, or configure the servers in any way?
A: No - no configuration updates, no agents, no daemons - all you need is connectivity from the RHPS to the machines you wish to manage. From that point you can provision new Database, Grid Infrastructure and other homes, and perform patching and upgrade operations on existing homes.

Q: The RHP collateral details capabilities for provisioning and maintaining DB and Grid homes. Can RHP manage other types of software?
A: Yes, RHP can provision and manage any type of software home. An RHP admin can create an arbitrary Gold Image type, and assign images to the new type. In addition, custom workflows can be associated with the new type, to handle operations for images of that category. Any number of new types can be defined, each with its own custom workflows.

Coming soon: Q & A for specific scenarios such as patching in Oracle RAC, OJVM and Oracle Data Guard environments. And of course answers to any questions you post!
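
Defining a custom image type takes only a couple of commands. A minimal sketch, assuming hypothetical type and image names (flag spellings can vary by release, so confirm with -help):

# Define a custom image type for non-DB software.
rhpctl add imagetype -imagetype APPSW -basetype SOFTWARE
# Import an installed home as a gold image of that type.
rhpctl import image -image appsw_img1 -imagetype APPSW -path /stage/appsw_home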

RAC SIG Webcast: Rapid Home Provisioning and Maintenance - Wednesday May 17 9:00 am PT

Please join this week's RAC SIG webcast for an overview of Rapid Home Provisioning and Maintenance with Oracle Database 12.2. Details of the event and how to register are posted here. "See" you there!

From the RAC SIG website:

Description
Automating and streamlining the tasks associated with software distribution and maintenance allows IT to accelerate and scale operations. A standardized approach to provision, patch and upgrade across all layers of software infrastructure improves predictability and control. Rapid Home Provisioning (RHP) is a new feature in Oracle Database 12c that allows for centralized software deployment. Software is installed once, stored on an RHP server, and then provisioned to any node or cluster in the private cloud. For maintenance, RHP minimizes impact by distributing new homes out-of-place, and applying rolling updates whenever possible. A single command performs the entire operation and a simple switchback command is available in case problems arise. Upgrades are handled in a similar fashion. By relieving the burdens of software maintenance, the IT organization can meet the demands of the scale and pace that cloud is bringing, and focus on innovative activities that bring the most business value.

Key benefits include:
» Enables and enforces standardization
» Simplifies provisioning, patching, scaling and upgrading
» Minimizes the impact and risk of maintenance
» Increases automation and reduces touch points
» Supports large scale deployments

Key features include:
» Centralized repository of gold images – Grid Infrastructure, Database, application, middleware and generic homes
» Manage existing 11.2 and 12.1 deployments with no changes needed
» Provision new pools and databases onto base machines
» Non-disruptive distribution of homes to minimize maintenance windows
» Provision, scale, patch or upgrade with a single command
» Built-in fallback capabilities
» Notification model
» Custom workflow support
» Auditing
» Supports all deployment models – base machines, VMs, Zones; SIDB, RAC One, RAC, Multitenant

What's new in 12.2 for Rapid Home Provisioning and Maintenance?

Now available for your on-premises deployments, Oracle Grid Infrastructure 12.2 includes many enhancements to the Rapid Home Provisioning and Maintenance (RHP) functionality.

Rapid Home Provisioning and Maintenance (RHP) has evolved significantly since its initial release in Grid Infrastructure 12.1, which focused on provisioning and patching Oracle Database homes. With RHP 12.2 we now deliver a full range of provisioning and maintenance features:
- Efficient Gold Image inventory and out-of-place distribution
- Manage existing 11.2 and 12.x deployments as-is -- no agent / daemon / target config required
- Provision new clusters with a single command
- Database and Grid Infrastructure: provision, patch, scale, upgrade with a single command
- Deploy and manage any software home
- Custom workflow framework
- Notification model
- Auditing capabilities

Ready to learn more? This Database Learning Stream video is a good place to start - a full overview of 12.2 functionality. Prefer the printed word? The data sheet and white paper have been updated for 12.2, and the white paper now includes a step-by-step cookbook that takes you through the simple commands to create Gold Images, organize them into series, and use them to create, patch and upgrade target deployments. Skeptical? Then we invite you to peruse the RHP demo playlist on the Oracle Learning Library YouTube channel. Questions, comments, suggestions for new enhancements ... please leave comments!
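
As a taste of the cookbook's flow, here is a minimal sketch of creating a Gold Image and organizing it into a series; the image and series names are hypothetical, and flags can vary by release (confirm with -help):

# Import an installed home as a gold image on the RHP Server.
rhpctl import image -image db12201_base -path /u01/app/oracle/product/12.2.0.1/dbhome_1
# Group related images into a named series.
rhpctl add series -series db12_series
rhpctl insertimage -series db12_series -image db12201_base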

gDBClone reaches version 3.0, now available on OTN

gDBClone 3.0 - A powerful Database Clone/Snapshot Management Tool

Database Provisioning Lifecycle and Challenges
Managing database test and development (test & dev) environments can be challenging and costly in time and resources. Production databases often require 8-10 or more copies for various types of test and development purposes. Each copy of a database consumes significant storage space. Database copies are typically recycled (created, deleted or refreshed) often. Conventional ways of manually managing test & dev environments can be complex, costly and time consuming.

Managing Test & Dev Environments Does Not Have to be Complex
gDBClone is a tool that was developed to provide a simple and efficient method for cloning a database for test and dev environments. gDBClone leverages Oracle Cloud File System (ACFS) snapshot functionality to create space-efficient copies of databases and manage a test and dev database life cycle.

Purpose of Database Duplication
A duplicate database is useful for a variety of purposes, most of which involve testing and upgrade. You can perform the following tasks in a duplicate database:
• Test backup and recovery procedures
• Test an upgrade to a new release of Oracle Database
• Test the effect of applications on database performance
• Create a standby database (Data Guard)
• Leverage Transient Logical Standby (TLS) to perform an upgrade
• Generate reports

gDBClone performs seven key functions:
1. Clone: Creates a clone database (as Primary or as Standby) from a production database, copying the DB to the target test and dev cluster
2. Snap: Creates sparse snapshots of a running DB to be used for test and development
3. Convert: Converts a given database to RAC (Real Application Clusters) One Node or RAC, or from non-CDB (non-container database) to a PDB (pluggable database) of a given CDB
4. ListDBs: Lists the cloned databases and their snapshots
5. DelDB: Deletes cloned databases and/or their snapshots
6. ListHomes: Lists the available Oracle Homes
7. SYSPwF: Creates an encrypted password file

gDBClone - Why is it unique?
• Integrated database cloning/snapshot solution in "one command" w/o deep skills or knowledge
• Automatic tool to duplicate SI/RACOne/RAC to SI/RACOne/RAC from ASM/ACFS/filesystem
• Test/Dev database environment creation in one click
• "One command" for Data Guard/Standby setup w/o downtime and w/o storage duplication when the "snap" option is in use
• "One command" to get a snapshot database from a standby database without downtime
• "One command" to convert a database to RAC/RAC One w/o specific knowledge/skills
• "One command" to convert a non-CDB database to PDB w/o specific knowledge/skills

For more information and case studies, see the gDBClone Reference Guide White Paper (PDF). You can get gDBClone 3.0 from OTN: gDBClone-3-0.1.noarch.rpm

Ruggero Citton
RAC Pack Team, Oracle Product Development
Copyright © 2017, Oracle. All rights reserved.

Rapid Home Provisioning Server - Minimum Requirements

Introduction
Rapid Home Provisioning (RHP) represents a standard way of provisioning, patching and upgrading at the organizational level, in a unified manner, across all architectural layers of software infrastructure – Oracle databases and custom software. Rapid Home Provisioning is a method of deploying software homes from a single cluster where you create, store and manage templates of Oracle homes as images - called gold images - of Oracle software. The DBA can make a working copy of any gold image and then provision that working copy to any RHP Client in a data center.

RHP is installed as part of Grid Infrastructure. Oracle Clusterware manages the components that form the Rapid Home Provisioning Server. These components include the RHP server itself; Grid Naming Service (GNS), which is used to advertise the location of the RHP server; a VIP to support HA-NFS (required if there are any clients that you want to provision to, whether you use NFS storage for the working copies or not); and Oracle ASM Cluster File System (ACFS), which is used to store snapshots of the working copies.

The gold images represent an installed home, whether that is an Oracle Database software home or some custom software home. The gold image is stored in an Oracle Automatic Storage Management Cluster File System (Oracle ACFS). Metadata describing an installed home is stored as an image series in the Management Repository. The Management Repository (or Management Database, MGMTDB) is created when installing Oracle Grid Infrastructure. Rapid Home Provisioning is a very powerful feature that demands surprisingly modest resources of its own.

RHP Server Software Requirements
You need to install Oracle Grid Infrastructure 12.1.0.2 (or above) for a New Cluster (single node if High Availability is not required).
Note: Oracle Grid Infrastructure standalone (Oracle Restart) is not supported as an RHP Server.

RHP Server Memory Minimum Requirements
Ensure that your system meets the following memory requirements for installing Oracle Grid Infrastructure for a New Cluster (single node if High Availability is not required):
- At least 4 GB of RAM
- Swap space requirement:
    --> Equal to the size of the RAM if RAM is between 4 GB and 16 GB
    --> More than 16 GB if RAM is more than 16 GB

RHP Server Storage Minimum Requirements
Ensure that your system meets the following minimum disk space requirements for installing Oracle Grid Infrastructure (single node if High Availability is not required):
- At least 6.9 GB of disk space
- At least 1 GB of space in the /tmp directory
- At least 100 GB of space in the ASM diskgroup used by the RHP Server

RHP Server Network Minimum Requirements
- 1 Ethernet interface card for the Oracle Grid Infrastructure public network
- 1 Ethernet interface card for the Oracle Grid Infrastructure private network

RHP Server Network IP Minimum Requirements
- 1 Host IP
- 1 GNS VIP (without Zone Delegation) (*)
- 1 HA-VIP for RHP HANFS usage (*)
- 1 host VIP for Oracle Grid Infrastructure
- SCAN IPs:
    1 single name that resolves to 3 IP addresses on the same subnet as your default public network (if DNS is in use)
    1 single name that resolves to 1 IP address in "/etc/hosts" (if DNS is not in use)

(*) The requirement for NFS applies if there are any clients that you want to provision to, whether you use NFS storage for the working copies or not. (Even if you use a local file system on the client, the RHP Server uses a temporary NFS mount point to do the transfer, so the HA-VIP is required.) The same is true for GNS: if you have zero clients, you don't need GNS or the HA-VIP. If you have one or more clients, you need GNS and the VIP.

Software License Needs
Rapid Home Provisioning (RHP) is a feature of Grid Infrastructure 12.1 and later. The architecture consists of a Server and one or more Clients. The Server may provision and patch homes locally without any extra license needed. If Clients are configured, they require Database Lifecycle Management Pack licensing.

Cross-Site Load Balancing - Prerequisites and Steps to Be Performed

Summary of the Prerequisites and Steps to Be Performed

Let's consider the following configuration:
Site A: Primary is replicated by Active Data Guard to Site B: Secondary
Site B: Primary is replicated by Active Data Guard to Site A: Secondary

Let's assume that we have a PDB (MY_PDB) which runs on Site A: Primary. If, for example, the workload is too high on Site A: Primary and a specific SLA cannot be fulfilled, we can perform the following steps in order to move MY_PDB to Site B: Primary, where the SLA can be met:
1. Stop MY_PDB on Site A: Primary and Site B: Secondary (log files will continue to apply in MOUNT DB mode)
2. Unplug MY_PDB from Site A: Primary
3. Copy the XML manifest to the other site – Site B: Primary
4. Set up aliases on Site B: Primary and Site A: Secondary
5. Ensure media recovery (MRP) is running on Site A: Secondary
6. Plug in MY_PDB on Site B: Primary
7. Check MY_PDB was replicated on Site A: Secondary
8. Open MY_PDB on Site B: Primary
9. Open MY_PDB (Read Only) on Site A: Secondary (if Active DG is running)
10. Drop MY_PDB on Site A: Primary (keep datafiles)

These steps show that load balancing across sites is possible using a combination of Oracle Multitenant and Oracle Data Guard. When coupled with the flexibility offered by Oracle Real Application Clusters for local site management, it is an excellent example of the level of agility and flexibility that enables a full-featured Database as a Service deployment.
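
For illustration, here is a minimal sketch of the unplug/plug core of the steps above, run with SQL*Plus on the respective primaries; the manifest path is hypothetical, and the Data Guard checks (steps 5, 7 and 9) are omitted:

# On Site A: Primary - close and unplug MY_PDB (steps 1-2).
sqlplus / as sysdba <<'EOF'
ALTER PLUGGABLE DATABASE my_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE my_pdb UNPLUG INTO '/tmp/my_pdb.xml';
EOF

# After copying the manifest to Site B (step 3), on Site B: Primary -
# plug in and open MY_PDB (steps 6 and 8). File-handling clauses
# (COPY/NOCOPY, FILE_NAME_CONVERT) depend on the storage layout.
sqlplus / as sysdba <<'EOF'
CREATE PLUGGABLE DATABASE my_pdb USING '/tmp/my_pdb.xml';
ALTER PLUGGABLE DATABASE my_pdb OPEN;
EOF

# On Site A: Primary - drop MY_PDB but keep its datafiles (step 10).
sqlplus / as sysdba <<'EOF'
DROP PLUGGABLE DATABASE my_pdb KEEP DATAFILES;
EOF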

RHP Use Cases Series: Create a Gold Image from the Patched Working Copy

Use Case 4 – Create a Gold Image from the Patched Working Copy

I. Preparation

Action 1: Check whether this is the right working copy, i.e., the patched Oracle Home (see the "Interim patches installed" line). This working copy (my_wk1) will be used in further commands.
Action Description: Check done through the query workingcopy command
Executed on (Server / Client): Either
Purpose: Check characteristics of my_wk1 to validate that it is the patched Oracle Home

Action 2: Find out how to use the import image command.
Action Description: Check the import image command's parameters (use the -h option)
Executed on (Server / Client): Either
Purpose: Find out how to use the import image command

II. Execution

Action: Create a gold image from the patched working copy.
Action Description: Run the import image command
Executed on (Server / Client): Either
Purpose: Creates a gold image:
- named my_patch_img
- from the patched working copy home …/wmy_wk1/swhome
The command:
- Creates a new ACFS file system for my_patch_img
- Exports the file system for my_patch_img
- Copies the files and home contents
- Changes the home ownership
- Transfers data to 1 node

III. Results Validation

Action: Check characteristics of the newly created gold image my_patch_img in order to validate that those characteristics (e.g. owner, access, path) are as expected (and that the right patch name is applied).
Action Description: Validation done through the query image command
Executed on (Server / Client): Either
Purpose: Check characteristics of the newly created gold image my_patch_img in order to validate that those characteristics (e.g. owner, access, path) are as expected. In our case, also to validate that the right patch name is applied.

In addition to the four use cases we have presented, there are some others which can be considered, such as:
- Create a Working Copy from the Patched Image
- Switch an existing DB to the new Working Copy
- Mass Patch Apply
- Switch an unmanaged Home to a Working Copy

Stay tuned for more examples in 2015!
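
A minimal sketch of this use case's commands; the names come from the post, the home path is illustrative, and flags can vary by release (confirm with -h):

# I. Preparation: inspect the patched working copy and the command syntax.
rhpctl query workingcopy -workingcopy my_wk1
rhpctl import image -h
# II. Execution: create a gold image from the patched working copy's home.
rhpctl import image -image my_patch_img -path /u01/app/oracle/product/wmy_wk1/swhome
# III. Results validation: confirm owner, access, path and patch list.
rhpctl query image -image my_patch_img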

RHP Use Cases Series: Switch an active database to a Patched Working Copy

Use Case 3 – Switch an Active Database to a Patched Working Copy

Note - Prerequisite: The working copy has already been patched using OPatch.

I. Preparation

Action 1: Preview the already patched Oracle Home (see the "Interim patches installed" line). This working copy (my_wk1) will be used in further commands.
Action Description: Check done through the query workingcopy command
Executed on (Server / Client): Either
Purpose: Check characteristics of my_wk1 to validate that it is the patched Oracle Home

Action 2: Confirm that our database (mydb) is on a different working copy than the patched version above, namely, wrkghc2.
Action Description: Check with the query workingcopy command
Executed on (Server / Client): Either
Purpose: Confirm that mydb is referencing the non-patched working copy wrkghc2 (note that mydb is one of three databases using the unpatched working copy; later we will switch mydb to the patched home but leave the other two unchanged)

Action 3: [Optional] Check if the DB is up and running.
Action Description: Use the srvctl status command to check DB status. The RHP move database command is the same whether the DB is running or not (behind the scenes, if it is running, RHP will stop the instances in rolling fashion as the default).
Executed on (Server / Client): Client
Purpose: Check on the client whether the DB is running

Action 4: Find out how to use the move command.
Action Description: Check the move command's parameters (use the -h option)
Executed on (Server / Client): Either
Purpose: Find out how to use the move command

II. Execution

Action: Switch the DB from its current home to the patched one.
Action Description: Execute the move command
Executed on (Server / Client): Either
Purpose: Switch the mydb DB from the initial unpatched working copy (wrkghc2) to the patched working copy (my_wk1)
Several steps are performed by the RHP command:
- Switch the database from the unpatched working copy (wrkghc2) to the patched one (my_wk1)
- Identify a patch that is available to be applied to mydb
- Apply the patch
Note the mention about patches applied. This is produced by the Datapatch command provided by the Patch Team.

III. Results Validation

Action 1: Check configuration information of mydb to validate that the DB is located in the patched home.
Action Description: Validation done through the srvctl config command
Executed on (Server / Client): Client
Purpose: Check configuration information for the mydb DB in order to validate that the DB is located in the patched home

Action 2: Check status information of mydb to validate that the DB is up and running.
Action Description: Validation done through the srvctl status command
Executed on (Server / Client): Client
Purpose: Check status information for the mydb DB in order to validate that the DB is up and running after the patch was applied. We want to avoid the risk that the DB does not work properly after the patch. If this happens we will take corrective actions.

Steps Comparison Table
If there are m Cluster environments, each Cluster with n databases, then in a pre-RHP approach we have to apply the patch, Cluster by Cluster, for each DB in the cluster.
Pre-RHP: For each of the clusters: Patch DB1, Patch DB2, … Patch DBn
RHP Provisioning: One move command per Cluster for all Databases in that Cluster
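
A minimal sketch of this use case's commands; the working copy and database names come from the post, and flags can vary by release (confirm with -h):

# I. Preparation: inspect both working copies and the DB status.
rhpctl query workingcopy -workingcopy my_wk1
rhpctl query workingcopy -workingcopy wrkghc2
srvctl status database -db mydb
# II. Execution: move mydb to the patched working copy
# (running instances are restarted in rolling fashion by default).
rhpctl move database -sourcewc wrkghc2 -patchedwc my_wk1 -dbname mydb
# III. Results validation: confirm the new home and that mydb is running.
srvctl config database -db mydb
srvctl status database -db mydb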

RHP Use Cases Series: Provision a New Oracle Home plus DB Creation

Use Case 2 – Provision a New Oracle Home plus DB Creation

I. Preparation

Action: Find out how to use the add workingcopy command.
Action Description: Check the add workingcopy command's parameters (use the -h option)
Executed on (Server / Client): Either
Purpose: Find out how to use the add workingcopy command

II. Execution

Action: Create a working copy, provision the Oracle Home, create and start the DB instance.
Action Description: Run the add workingcopy command
Executed on (Server / Client): Either
Purpose: Creates a working copy (Oracle Home):
- named my_wk1
- from the gold image 12c
- on the client rwsad0910
- for the user racusr
plus:
- database name
- ASM Diskgroup
- list of nodes on which the database will be created
Then:
- Clones the gold image into the working copy
- Sets up the Oracle Base
- Provisions the Oracle Home
So far, this is similar to Use Case 1. But in addition, for this use case RHP also performs Database creation: creating a two-node cluster database, creating and starting the Oracle instances, and creating the cluster database views.

III. Results Validation

Action: Check characteristics of the newly created my_wk1 to validate that all required parameters (e.g. client, gold image, user, owner, Oracle Home path, configured databases) are as expected.
Action Description: Validation done through the query workingcopy command
Executed on (Server / Client): Either. Command results are the same no matter where it is launched.
Purpose: Check characteristics of the newly created my_wk1 to validate that all required parameters (e.g. client, gold image, user, owner, Oracle Home path, configured databases) are as expected

Steps Comparison Table
Pre-RHP:
- Prepare installation media for each cluster
- Log into every cluster to invoke OUI
- Log into every node to run root.sh
- Log into every node to invoke dbca (SW only)
- Create a two-node cluster Oracle Database: creating and starting Oracle instance 1, creating and starting Oracle instance 2, installing RAC
RHP Provisioning:
- One command: add workingcopy per cluster

That's all for the second use case. Stay tuned for further use cases and connect with us through blog comments if you are interested in specific use cases!
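
A minimal sketch of the provisioning command for this use case; the working copy, image, client and user names come from the post, while the database name, diskgroup and node list are hypothetical, and flags can vary by release (confirm with -h):

# Provision the home and create/start a RAC database in one command.
rhpctl add workingcopy -workingcopy my_wk1 -image 12c \
  -client rwsad0910 -user racusr \
  -dbname mydb -datafileDestination DATA -node "node1,node2"
# Validate the result, including the configured databases.
rhpctl query workingcopy -workingcopy my_wk1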

Compare and Contrast (and mix-n-match) Oracle's SPARC V12N choices

A new white paper from Oracle's Elite Engineering Exchange team provides an excellent guide to the different virtualization technologies available on SPARC platforms, including guidance on when to employ each - either alone, or in combination with another technology. PDoms, LDoms, and different flavors of Solaris Zones are evaluated in terms of:
- Security Isolation
- Resource Isolation
- Efficiency
- Availability
- Serviceability
- Flexibility
- Agility
Although the paper is not specific to database deployments, the key points apply to all workload tiers. And while the discussion is on SPARC technologies, many points apply to all virtualization technologies. For example, from our DBaaS perspective, the following quote from the paper couldn't say it better: "When a traditional monolithic virtualization approach is taken where machines are mapped one-to-one to virtual machines, there is no overall reduction in the operational complexity of the system, because there are still the same number of entities to be managed ... the aim should be to consolidate workloads, not simply to consolidate machines, because it is workload consolidation that will drive the operational efficiencies of the data center." The summary table in the paper shows the full scope of the discussion - the paper looks at each of these rows in detail, and finishes with an evaluation of the combinations that make sense (and when they are indicated). Great reading for anyone looking to consolidate workloads onto SPARC platforms.

RHP Use Cases Series: Provision a New Oracle Home

New in Oracle Database 12.1.0.2, Rapid Home Provisioning (RHP) provides a standard solution for provisioning, patching and upgrading at the organizational level, in a unified manner, across all architectural layers of software infrastructure. RHP increases performance and improves efficiency in provisioning and managing templates of Oracle software, such as Oracle databases, on all nodes in a private cloud. Rapid Home Provisioning allows the administrative tasks related to database software distribution to be performed in an automated and standardized manner, thus allowing key people in the organization to focus on innovative activities that bring the most value.

DBAs can use Rapid Home Provisioning in different use cases. In the next several blog entries we'll explore some of these use cases. Note that the list is not limited to those we will present.

Approach in Presenting the Use Cases
For each of the Use Cases, the approach is structured as follows:
- Preparation
- Execution
- Results Validation
Also, for each of the use cases we provide a comparison between the pre-RHP approach and the new approach using Rapid Home Provisioning. For the commands in each of the above phases, the following structure is used:
Action Description: Describes the action performed
Executed on (Server / Client): Describes where the command can be executed: Server, Client or Both
Purpose: Describes the purpose of the action/command

Assumption: The use cases are all based on Oracle DB software (Oracle DB Homes). In this case all working copies are synonymous with Oracle DB Homes.

Use Case 1 – Provision a New Oracle Home

I. Preparation

Action: Find out how to use the add workingcopy command.
Action Description: Check the add workingcopy command's parameters (use the -h option)
Executed on (Server / Client): Either
Purpose: Find out how to use the add workingcopy command

II. Execution

Action: Create a working copy and provision a new Oracle Home.
Action Description: Run the add workingcopy command
Executed on (Server / Client): Either. This example is executed on the server, which connects to the client for some of the operations.
Purpose: Creates a working copy (Oracle Home):
- named my_wk1
- from the gold image 12c
- provisioned in the Oracle Base path
- on the client rwsad0910
- for the user racusr
Sets up the Oracle Base and provisions the Oracle Home.

III. Results Validation

Action: Check characteristics of the newly created my_wk1 to validate that all required parameters (e.g. client, gold image, user, owner, Oracle Home path) are as expected.
Action Description: Validation done through the query workingcopy command
Executed on (Server / Client): Either
Purpose: Check characteristics of the newly created my_wk1 to validate that all required parameters (e.g. client, gold image, user, owner, Oracle Home path) are as expected

Steps Comparison Table
Pre-RHP:
- Prepare installation media for each cluster
- Log into every cluster to invoke OUI
- Log into every node to run root.sh
- Log into every node to invoke dbca (SW only)
RHP Provisioning:
- One command: add workingcopy per cluster

Stay tuned for further use cases and connect with us through blog comments if you are interested in specific use cases!
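
A minimal sketch of this use case's commands; the names come from the post, the Oracle Base path is hypothetical, and flags can vary by release (confirm with -h):

# I. Preparation: review the command syntax.
rhpctl add workingcopy -h
# II. Execution: provision the new home on the client.
rhpctl add workingcopy -workingcopy my_wk1 -image 12c \
  -client rwsad0910 -user racusr -oraclebase /u01/app/oracle
# III. Results validation: confirm client, image, user, owner and path.
rhpctl query workingcopy -workingcopy my_wk1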

A Thought Experiment Showing the Value of Separating the Business and Technical Catalogs

One common error when creating a business catalog is to expose the underlying technologies that should be mentioned only in the technical catalog. For IT professionals whose day-to-day lingo is based on products and features, this is an understandable mistake. And from their perspective, perhaps not at all noteworthy.

Imagine a large enterprise's business catalog which covers all the different types of workloads and storage models the enterprise deploys - OLTP, Data Warehouse, batch and real-time analytics, structured and semi-structured data -- so far so good. Now suppose that the business catalog exposes the database and analytic engines delivering each type of data source. This might look OK if the expected vendors and products appear in the appropriate slots. But what happens when a vendor takes a big leap forward and is now the new best choice to deliver a given service? Once IT is ready to make that shift in their delivery model, they will need to update the business catalog accordingly. This is typically not the kind of change consumers are eager to see. Instead, they should only see the lower costs and higher SLAs that IT can now provide - without being led to worry about the changes behind the scenes.

This is exactly the scenario that a product-centric business catalog might be facing soon, thanks to some upcoming releases at Oracle, namely, Oracle Database In-Memory and the Big Data Breakthrough. And the innovation won't end there. So make sure your business catalog takes the future into account with the right structure (and of course the right products to support the business services!)

New IOUG IT survey highlights the importance of data center standardization

A new IOUG research report, "Efficiency Isn't Enough: Data Centers Lead the Drive to Innovation," presents the results of a survey of 285 data managers and professionals. The survey aimed to learn where IT is spending its data center resources, and identify key strategies for improvement.

High on the list of current activities: maintenance, such as patching; maintaining availability; creating and managing copies of data; performance tuning ... routine chores to keep the lights on. And due to rising complexities, the costs are growing. If these efforts could be reduced, IT would have more time for innovative initiatives. So how can that be achieved?

Not surprisingly, survey respondents cited standardization as the top strategy to reduce efforts devoted to routine administration. Standardizing reduces complexity - fewer elements to learn and understand, fewer permutations to orchestrate, fewer vendors to work with. And higher standardization enables better automation - which happens to be the second-most important strategy.

One point the survey does not address is how to approach standardization. One of the keys is to develop a service catalog which describes the services offered, and how they are delivered. See this recent blog entry for more on service catalogs - and watch for more to come soon.

You can find a summary of the survey on the IOUG website. Become an IOUG member and read the full report.

Some light reading while the clouds lift

In case you took a break over New Year's and aren't quite ready for "real" work, but still feel obligated to visit sites such as this one, here's a reward for your efforts.  (If you're ready for serious work, the following may disappoint you.) I've been working in this database cloud / DBaaS area for a few years now.  One of the perks is the term itself:  adding cloud images to presentations is pleasing, exchanging meteorological puns and banter is entertaining,  etc. etc.  I do have one issue with the term though - it's too close to my last name.  And thanks to my poor typing skills, plus autocorrect, plus inopportune inattention, it's not unusual for me to swap "clouse" for "cloud" - such as when registering for an event last year.  When I approached the check-in table to get my expo badge, it took three registration workers to eventually figure out that I had registered as "Burt Cloud".  It was nice to see the harried staff have a good laugh, but when they insisted I present an ad lib keynote I had to flee to the booth. But back to the positives.  Being associated with 'cloud' encourages friends and colleagues to share anything about moisture in the air, such as this amazing video.  Enjoy and Happy New Year!

Service Catalogs for Database as a Service

At the end of last month, I had the opportunity to present a speaking session at Oracle OpenWorld: Database as a Service: Creating a Database Cloud Service Catalog. The session was well-attended, which would have surprised me several months ago when I started researching this topic. At that time, I thought of service catalogs as something trivial which could be explained in a few simple slides. But while looking at all the different options and approaches available, I came to learn that designing a succinct and effective catalog is not a trivial task, and mistakes can lead to confusion and unintended side effects. And when the room filled up, my new point of view was confirmed.

In case you missed the session, or were able to attend but would like more details, I've posted a white paper that covers the topics from the session, and more. We start with an overview of the components of a service catalog, and then look at several customer case studies of service catalogs for DBaaS. Synthesizing those examples, we summarize the main options for defining the service categories and their levels. We end with a template for defining Bronze | Silver | Gold service tiers for Oracle Database Services.

The paper is now available here - watch for updates as we work to expand some sections and incorporate readers' feedback (hint - that includes your feedback). Visit our OTN page for additional Database Cloud collateral.

The High Price of Over-Virtualizing

It seems that most of the collateral we read about cloud will blithely assert that the first step in creating a cloud environment is to virtualize. Often we're not told specifics until we read the details, when we discover that the advice is to shovel everything into virtual machines. Other times, the author will simply lead with virtual machines as the entry point to cloud. In both cases, the proposition that a cloud must be based on virtual machines is simply taken for granted. And many people seem to have no qualms about this, and they start their evolution to the cloud by shuffling their physical server silos into VM silos. Is that always the right thing to do?

Let's consider the idea that "more is better." A friend of mine is looking for a home to buy and debating different down payment vs. loan options. I'm reminded of when I was on the market and someone gave me this advice: since you can deduct home mortgage interest from your federal taxes, you should make the smallest possible down payment. This will maximize your interest payment, and therefore your tax deduction. So my question was - if a bigger deduction is better, why not look for a loan with a high interest rate? Then I can pay more interest and get a bigger deduction!

The same fallacy is plaguing many discussions about virtualization in the move to cloud. Virtualization has many benefits, and comes in many forms. But assuming that you should virtualize as much as possible - i.e., deploy everything in VMs - leads you down a path that will simply replace your physical silos with virtual silos. If you want to simplify your environment and make better use of pooled resources, consider the virtualization available in the applications you are deploying. With a product such as the Oracle Database, you'll discover that features and options such as Database Resource Manager, Instance Caging, and Oracle Multitenant will handle the vast majority of use cases you thought you needed VMs for - without the added elements to deploy and manage.
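
To make that last point concrete, here is a minimal sketch of Instance Caging, one of the in-database alternatives mentioned above; the CPU count is illustrative, and any enabled Resource Manager plan (here the built-in DEFAULT_PLAN) will do:

# Cap this instance at 4 CPUs and enable a Resource Manager plan;
# together these activate Instance Caging.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET cpu_count = 4;
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';
EOF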

New on-demand DBaaS webcast, and complimentary e-book

Earlier this week I participated in a live webcast in which Tim Mooney from Oracle and Carl Olofson from IDC discussed customer experiences with building public and private database clouds. The webcast is now available for on-demand viewing: Delivering Cloud through Database as a Service. The webcast focuses on how Database as a Service delivers these key cloud benefits:
- Greater IT efficiency
- Higher capital utilization
- Faster time to market
You may also be interested in the free e-book, Building a Database Cloud for Dummies.

And at this point I'll digress for a moment, as the title of the e-book reminds me of a question that arose during the webcast, and continues to cloud many of our discussions about Database as a Service: are you a consumer, or a provider? To see the importance of understanding the consumer/provider point of view, consider the possible answers to this question: "How much will a typical DBaaS cost?"

If a consumer is asking the question, the answer will be "whatever the provider you use charges" -- and from there we can look at examples of what public cloud providers charge for DBaaS. If a provider is asking the question, we have a much more detailed discussion which must cover the entire solution that will host the DBaaS environment, including software, hardware, people and processes. So when asking questions about DBaaS, make sure to identify your role up front -- this helps discussions get to the point more quickly.

You might wonder, how did the e-book title lead to this digression? It's simple: the title does not indicate whether the dummies in question are those building the cloud, or are the future consumers of the cloud ... in any case, it's a nicely written book despite the ambiguous title. Enjoy!

Northern California Oracle Users Group (NoCOUG) - Spring 2013 Conference

NoCOUG holds its spring 2013 conference next Wednesday, 22 May 2013, from 8:00 AM to 5:00 PM at The California Center, Pleasanton (formerly CarrAmerica Conference Center). More details and registration are available on their website.

Fellow dbclouders, please take note of Mark Scardina's session from 2:30 to 3:30 in the auditorium: Why and how you should be using Policy-Managed Databases. From the abstract:

Did you know that policy-managed databases are the default database type for Oracle Database 12c RAC implementations? If you are running Administrator-managed databases and would like to learn more about Policy-managed database deployments, introduced in Oracle Database 11gR2, you should attend this session. This presentation will detail how Oracle RAC policy-managed database deployments solve longstanding customer requirements such as database service start order and last service standing, database zero-configuration scaling, HA event shutdown of less critical services and managing to performance objectives while maximizing utilization. Oracle RAC database deployment, upgrade and conversion to policy management along with the use of server pools will be explained and relevant use cases presented.

There are several other topics and speakers of interest - we hope you can make it for the entire conference!

The Lone Star State: On the Journey to Cloud

When we talk about the journey to cloud (see 19 January 2013 entry below), we highlight the fact that we developed this methodology with guidance from customers who are making successful transitions to cloud.  Many of these customers are in sectors such as finance and online travel -- enterprises with large scale and extreme cost-awareness.  So you might wonder whether those customers were limited, special cases -- or, is the journey being applied at more sites, across more sectors.  In fact, the journey is underway at data centers of all shapes and sizes.  A new example from the public sector is the State of Texas.  Working with the Oracle Enterprise Architecture team, they are making the journey to cloud.  To quote from the recently published Oracle Enterprise Architecture White Paper: "The State of Texas is setting a progressive example for other state governments by relying on cloud service providers to provision IT resources to dozens of state agencies. Led by the Texas Department of Information Resources (DIR), the state is creating the Texas Cloud Marketplace, a private cloud that utilizes engineered systems such as Oracle Exadata and Oracle Exalogic to deliver new technology while fulfilling legislative mandates. Oracle is helping to transform the state’s widespread infrastructure, which spans hundreds of databases and tens of thousands of applications. The billion-dollar consolidation project was designed to help 300,000 government employees serve 25 million citizens in a more flexible and cost-effective way." Too busy to read the white paper?  Then check out this article in Profit Magazine for a quick overview.

Oracle RAC in Solaris 11 Zones

In database cloud deployments, companies host multiple databases for use by various internal groups (private clouds) or external clients (public or community clouds). Whenever multiple databases are deployed together on shared infrastructure, the solution must take into account the degree of isolation each database will require with respect to faults, operations, security, and shared resources.

In many database cloud deployments, Oracle Database features and options will provide the required isolation. This allows consolidating multiple Oracle databases natively onto a shared infrastructure, without the need for further isolation. In native consolidations, all databases share a single Oracle Grid Infrastructure. This approach is described in detail in the Oracle white paper "Best Practices for Database Consolidation in Private Clouds" which is posted on our OTN page.

Database clouds hosting databases with security or compliance considerations have higher requirements for isolation. These could include sensitive data with privacy requirements, or data from multiple companies who cannot be aware of each other (i.e., a public cloud). Such deployments may need to apply additional technologies or controls beyond those available in a native consolidation.

Implementing higher degrees of isolation can be accomplished by encapsulating each database environment. Encapsulation can be accomplished with physical or logical isolation techniques. Oracle recently certified 11gR2 RAC in Solaris 11 Zones, which is an important capability for database clouds, because it enables strong isolation between databases consolidated together on a shared hardware and O/S infrastructure. We've just published a new white paper that describes the options and makes a detailed analysis of how Oracle Solaris 11 Zones efficiently provide encapsulation to Oracle database clouds. To see how this technology set can be leveraged on SPARC SuperCluster, read about the Oracle Optimized Solution for Enterprise Database Cloud.

Journey to Database Cloud

Understanding the benefits of a database cloud usually leads to the question "How do I get there?" As the question itself implies, making the transition from a complex, legacy environment to a database cloud is a journey. Like any journey, clear goals and a plan to achieve them are the keys to success. Oracle's "Journey to Database Cloud" is a maturity model that guides this process. Each step of this journey delivers specific benefits. Knowing what you want to accomplish will help you identify the phase you need to implement. The key benefits and characteristics of each phase are:

Standardization: Simplify to reduce operational costs and business risk
Standardized deployments limit the number of environments and the processes that manage them to the smallest possible set of options. Hardware and software infrastructure are deployed in modular "building blocks." Database versions are limited. Because each environment is simplified, each is easier to manage and maintain, which lowers operational costs. A service catalog defines the deployment options for building blocks, and for the database configurations which end users may choose from. Because a small set of standard deployment patterns are followed, new deployments are easier to implement and can be activated with less risk. Standardized components can be consolidated effectively since they can share a common infrastructure. And the higher the degree of standardization that is applied, the higher the degree of consolidation that can be achieved. Keep that in mind when evaluating the proposal from some vendors that transforming a datacenter into a private cloud is a simple matter of shuffling software stacks into virtual machines. This "quick fix" approach sounds attractive, but like most "quick fixes" it does not address the underlying problem of datacenter complexity. On the contrary, the added complexity of this approach results in lower standardization.

Consolidation: Efficiency reduces datacenter footprint - both hardware and software
In a traditional environment, servers are generally underutilized. Consolidating workloads onto shared infrastructure allows higher utilization and therefore a reduction in server footprint. This lowers both capital and operational costs: lower power consumption and lower IT management expense, since there are fewer physical environments to operate and manage. Software environments are also decreased, meaning there are fewer software elements to purchase, monitor and maintain. Several features and options of the Oracle database enable the consolidation of multiple databases onto shared hardware and software infrastructures. Features such as Database Resource Manager and Instance Caging facilitate sharing compute resources. Products such as Oracle Audit Vault and Database Firewall enable enterprise-grade security in consolidated environments. That brings up another problem with the "build your cloud by putting everything in virtual machines" idea: virtual machines consume footprint and introduce performance overhead. They also require special skills and tools. You can avoid those issues by consolidating directly onto the operating platform and achieve higher densities and better performance.

Service Delivery: Automate to increase business agility
Once organizations have successfully standardized their deployments and implemented their consolidation strategy, the next opportunity is to enable service delivery. By replacing manual processes with automated and dynamic capabilities, the environment responds quickly to changing workload conditions and requirements, which translates to faster operations and better agility. In the context of private database clouds, this means delivering Database as a Service (DBaaS). In the standardization phase we defined a service catalog to describe the deployments to choose from. One focus of service delivery is to provide those choices via self-service, with as little manual attention from IT staff as possible. Self-service for end users allows them to choose from a menu of service options to create their own database environments online. This frees up IT for higher value initiatives. Automated, dynamic management of resources is another key characteristic of a service delivery environment. In a consolidated environment managed with manual processes, adjusting a database's footprint or resource allocation requires human intervention. Even noticing the need to make an adjustment requires human attention. By contrast, a service delivery environment uses tools to monitor and dynamically adjust resource allocations, without human intervention and without impact to running workloads. Features such as Oracle Quality of Service Management apply policies to monitor and manage workloads automatically and dynamically.

Enterprise Cloud: Unify services for location independence and unlimited capacity
One limitation of the service delivery environment is that workloads are bound to specific servers in a cloud pool. And while resource allocation within the pool is dynamic, pools are fixed in size (each one is typically one of the "building blocks" defined in the service catalog). This means that if a workload outgrows its pool, manual intervention will be needed to add compute resources to the pool, or to move the workload to a larger pool. Or, if the pool goes offline, there will be a service interruption while the service is reinstated on a different pool. In an Enterprise Cloud, pools are dynamic and may grow or shrink as workloads dictate. Workloads are not bound to any specific pool, so if changes in workload patterns indicate that moving a given workload to a different pool is the best choice, the workload will be moved there without service interruption. Some workloads may even be distributed across geographically separated pools. The ability for pools to grow and shrink dynamically and for workloads to migrate among pools of a unified cloud allows workloads access to virtually unlimited capacity. Since pools can be geographically separated, planned and unplanned outages of entire sites will not impact service availability. The unified cloud is analogous to a utility service such as the electricity grid. Each user sees unlimited capacity and uninterrupted availability, and pays for resources as they are consumed.

Choice and Flexibility are Essential
You may choose to make a large change to reach your end goal, or you may choose to make incremental changes. For example, some customers have chosen to work with a standardized environment for some time before starting their consolidation efforts. Other customers made the move immediately to consolidation because they were keen on saving costs and floor space as aggressively as possible. You might treat different workloads and environments differently. For example, one group within an organization may be technically and culturally ready to move their databases into a service delivery environment. Another group may be ready to standardize and consolidate, but not ready to implement service delivery. In this example, if the "consolidate only" group sees the tangible benefits the service delivery group enjoys, they'll probably decide to make that step too.

You're probably on the journey already…
While you were reading about the phases of the journey, you probably noticed several guidelines that you implement to varying degrees today. Standardization, for example, is a well-established approach that did not spring into existence when the industry started talking about consolidation and service delivery. For standardization, what's new in the context of the journey is recognizing the importance of this step and how the choices made here will have downstream impacts. Wherever you are starting from and wherever you want to go in the journey to database cloud, Oracle has the products and services to get you there. And we have years of collaboration with customers in all areas of enterprise, government and education who have made this journey successfully, and continue to evolve their solutions with our cloud-enabling portfolio. We look forward to being a partner and mentor in your journey.

Cloud Deployment Models

As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective. This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes.

Most cloud deployments today are either private or public. It is also possible to connect a private cloud and a public cloud to form a hybrid cloud.

A private cloud is for the exclusive use of an organization. Enterprises, universities and government agencies throughout the world are using private clouds. Some have designed, built and now manage their private clouds. Others use a private cloud that was built by and is now managed by a provider, hosted either onsite or at the provider's datacenter. Because private clouds are for exclusive use, they are usually the option chosen by organizations with concerns about data security and guaranteed performance.

Public clouds are open to anyone with an Internet connection. Because they require no capital investment from their users, they are particularly attractive to companies with limited resources, in less regulated environments, and for temporary workloads such as development and test environments. Public clouds offer a range of products, from end-user software packages to more basic services such as databases or operating environments.

Public clouds may also offer cloud services such as disaster recovery for a private cloud, or the ability to "cloudburst" a temporary workload spike from a private cloud to a public cloud. These are examples of a hybrid cloud, and they are most feasible when the private and public clouds are built with similar technologies.

Usually people think of a public cloud in terms of a user role, e.g., "Which public cloud should I consider using?" But someone needs to own and manage that public cloud. The company that owns and operates a public cloud is known as a public cloud provider. Oracle Database Cloud Service, Amazon RDS, database.com and Savvis Symphony Database are examples of public cloud database services.

When evaluating deployment models, be aware that you can use any or all of the available options. Some workloads may be best suited for a private cloud, some for a public or hybrid cloud. And you might deploy multiple private clouds in your organization. If you are going to combine multiple clouds, then you want to make sure that each cloud is based on a consistent technology portfolio and architecture. This simplifies management and gives you the greatest flexibility in moving resources and workloads among your different clouds.

Oracle's portfolio of cloud products and services enables both private and public deployment models, and Oracle can manage either model. Universities, government agencies and companies in all types of business everywhere in the world are using clouds built with the Oracle portfolio. By employing a consistent portfolio, these customers are able to run all of their workloads -- from test and development to the most mission-critical -- in a consistent manner: One Enterprise Cloud, powered by Oracle.

Consolidation in a Database Cloud

Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. Three models provide increasing degrees of sharing:

Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely, since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.

Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.

Schema: multiple schemas are deployed within a single database instance. While this is the most efficient model, it places constraints on the environment, so it will usually be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

Customer value: lower costs, better system utilization

In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

Customer value: higher productivity

The OpEx benefits from Standardization are compounded: not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

Capabilities / Characteristics

In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc.; a simple sketch of such a policy follows at the end of this post. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

Activities / Tasks

Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.
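As mentioned above, deciding that a cluster is "full" reduces to a simple placement check. The sketch below is purely hypothetical: the Cluster structure, the database-count cap, and the peak-CPU threshold are invented for illustration, and a real policy would use whatever metrics your monitoring history provides.

```python
# Hypothetical "full" policy for fixed-size consolidation clusters.
# A cluster is considered full when it hits a database-count cap or
# when observed peak CPU leaves too little headroom. All values and
# the Cluster structure are illustrative assumptions.

from dataclasses import dataclass

MAX_DATABASES = 20     # assumed cap on databases per cluster
MAX_PEAK_CPU = 0.70    # assumed ceiling on observed peak CPU utilization


@dataclass
class Cluster:
    name: str
    database_count: int
    observed_peak_cpu: float  # 0.0 - 1.0, from monitoring history


def is_full(cluster: Cluster) -> bool:
    return (cluster.database_count >= MAX_DATABASES
            or cluster.observed_peak_cpu >= MAX_PEAK_CPU)


def place_database(clusters: list[Cluster]) -> Cluster | None:
    """Pick the emptiest cluster with capacity; None means build a new one."""
    candidates = [c for c in clusters if not is_full(c)]
    if not candidates:
        return None
    return min(candidates, key=lambda c: c.database_count)


pools = [Cluster("prod-a", 20, 0.65), Cluster("prod-b", 12, 0.40)]
target = place_database(pools)
print(target.name if target else "all clusters full - build a new one")
```

In this phase the check is run by IT staff as part of provisioning; the Services phase described in the next posts automates the same decision behind a self-service interface.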

You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one

Before we examine Consolidation, the next step in the journey to cloud, let's take a short detour to address a critical choice you will face at the outset of your journey: whether to deploy your databases in virtual machines or not.

A common misconception we've encountered is the belief that moving to cloud computing can be accomplished by simply hosting one's current operating environment as-is within virtual machines, and then stacking those VMs together in a consolidated environment. This solution is often described as "Infrastructure as a Service" (IaaS) because the building block for deployments is a VM, which behaves like a full complement of infrastructure. This approach is easy to understand and may feel like a good first step, but it won't take your databases very far in the journey to cloud computing. In fact, if you follow the IaaS fork in the road, your journey will end quickly, without realizing the full benefits of cloud computing.

The better option is to rationalize the deployment stack so that VMs are needed only for exceptional cases. By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now the building block will be database instances, or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as "Platform as a Service" (PaaS).

PaaS opens the door to higher degrees of consolidation than IaaS, because with PaaS you will not need to accommodate the footprint (operating system, hypervisor, processes, ...) that each VM brings with it. You will also reduce your maintenance overhead if you move forward without the VMs and their O/Ses to patch and monitor. So while IaaS simply shuffles complex and varied environments into VMs, PaaS actually reduces complexity by rationalizing to the smallest possible set of components; the back-of-the-envelope sketch below illustrates the density difference. Now we're ready to look at the consolidation options that PaaS provides -- in our next blog posting.
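To make the density argument concrete, here is the back-of-the-envelope calculation referred to above. Every figure (server memory, per-VM OS overhead, per-database footprint) is invented purely for illustration; substitute your own measurements.

```python
# Back-of-the-envelope consolidation density: IaaS vs. PaaS.
# Every figure below is an illustrative assumption, not a benchmark.

SERVER_RAM_GB = 256   # assumed memory on the consolidation server
HOST_OS_GB = 8        # assumed footprint of one shared host OS (PaaS)
VM_OS_GB = 6          # assumed guest OS + hypervisor overhead per VM (IaaS)
DB_GB = 16            # assumed memory footprint of one database instance

# IaaS: every database carries its own guest OS overhead.
iaas_density = SERVER_RAM_GB // (VM_OS_GB + DB_GB)

# PaaS: one shared OS; the remaining memory goes to database instances.
paas_density = (SERVER_RAM_GB - HOST_OS_GB) // DB_GB

print(f"IaaS: {iaas_density} databases per server")   # 11
print(f"PaaS: {paas_density} databases per server")   # 15
```

Even with these modest assumed numbers, the per-VM overhead costs several database slots per server, and that is before counting the operational cost of patching and monitoring each guest OS.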

Rationalization - How much complexity do you want to handle?

Once you are in control of the information on the applications and resources within your data center, you are in a position to begin the process of rationalization.

So how many different infrastructure stacks (server, operating system, database, and middleware) do you think you can manage? Bear in mind that it is not just the initial construction of these stacks that you need to think about. These stacks demand on-going monitoring, management and maintenance, and furthermore, you will need the in-house skills to cope with these demands. Even with just these four layers, the potential combinations expand very quickly: two options for each layer leads to sixteen different stacks (2⁴), and three options results in eighty-one (3⁴) possibilities. This assumes that all options are available at each layer, and it does not even take into consideration patch levels of the server firmware, the operating system, the database, or the middleware! (The short calculation at the end of this post shows how quickly this grows.) So it's fairly clear that you must restrict yourself to one, or possibly two, options at each layer to give yourself a chance of gaining, and maintaining, a grip on the complexity in your data center.

So what options do you pick? It's hard to be prescriptive here, but in general the latest versions of software have the most features, and fix the majority of the high priority bugs known on the previous release. Consequently, there is value in building on standard components that are as close to the leading edge as is reasonable. For example, you might pick Oracle Solaris 10 8/11 with a view to adopting Oracle Solaris 11 11/11 if and when the next release of Oracle Solaris 11 becomes available. In addition, you might choose to employ Oracle Grid Infrastructure 11g Release 2 as your primary availability platform for either 11.2.0.3 or 10.2.0.x Oracle databases.

Having said all that, there is still room for exception handling. Some applications just cannot be coerced onto one of your standard platforms, so they must be treated as exceptions. But again, you must keep the number of these down to keep complexity under control as you marshal your resources for the consolidation phase.
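The combinatorics above are easy to verify. A quick sketch, with made-up option counts per layer:

```python
# Stack combinations grow as the product of the options at each layer.
# The option counts below are illustrative.

from math import prod

layers = ["server", "operating system", "database", "middleware"]

for options_per_layer in (2, 3, 4):
    stacks = options_per_layer ** len(layers)
    print(f"{options_per_layer} options per layer -> {stacks} possible stacks")
# 2 -> 16, 3 -> 81, 4 -> 256

# With uneven choices, e.g. 2 servers, 2 OSes, 3 DB versions, 2 middleware:
print(prod([2, 2, 3, 2]), "stacks")  # 24
```

And remember that each distinct patch level effectively multiplies the option count at its layer, which is why the totals balloon so quickly in practice.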

Journey into the Cloud - Introduction

No doubt you'll have read the articles, seen the webcasts, and attended the conferences that all extolled the virtues of Cloud Computing. So, to recast the question so often asked by children on long journeys: "Are you there yet?" No? If it's that great, what's putting you off implementing it? Maybe it's because you're not sure how to go about it, or maybe you're not convinced of the return on investment (ROI) you'll get? Whatever the inhibitor is, we hope the following series of blog entries will help you on your way to achieving the benefits that Cloud Computing can bring.

Before we dive into detail, here is a quick overview of the route-map to Cloud Computing:

Collect - before you change anything, you need to understand what you've got. This includes servers, storage, networks and software, as well as your operational procedures and any other requirements or constraints you have. In order to pick the appropriate application components to consolidate you'll need their associated resource consumption (performance) data. Finally, you'll need to know what your biggest costs and operational inhibitors are so that you can determine what will give you the biggest returns.

Rationalize - determine what your standard components will be. Then minimise the number of combinations of server, operating system release and database version you need to manage. This ultimately simplifies your management processes and costs.

Consolidate - use schema or database consolidation, as appropriate, to minimise database management overheads and increase server utilization.

Virtualize - use virtualization to isolate and encapsulate database deployments. The virtualized containers can then be collocated on the underlying physical servers to maximise utilization where there is spare capacity.

Package - create service delivery templates for server, database and middleware to enable Infrastructure as a Service (IaaS), Database as a Service (DBaaS), or Platform as a Service (PaaS).

Deliver - make your service templates available through a self service portal to enable users to provision what they need, when they need it, within the confines of their entitlements.

Meter and Monitor - use monitoring and chargeback to gain insight into further consolidation and optimization opportunities.

Don't be fooled into thinking this is a one-off trip, or something that must be rushed. Far from it. Think of it as a daily commute with familiar landmarks, traffic lights, and intersections along the way. As new roads are built or widened, you can take advantage of them to hasten your journey. As your knowledge of the journey increases, you can optimize your data center and enjoy greater agility and lower costs through standardization and increased asset utilization.
