
An Oracle blog about Exadata

Recent Posts

Gen 2 Exadata Cloud@Customer New Features: Scaling OCPUs Without Cloud Connectivity

The Exadata Cloud Development team is pleased to announce "Scaling OCPUs Without Cloud Connectivity" for Gen 2 Exadata Cloud@Customer. This functionality allows a customer to change the number of OCPUs used by the guest VMs even if connectivity between the Gen 2 ExaC@C and the Oracle public cloud control plane is lost. The disconnected scaling operation lets customers scale guest VM OCPUs up or down using two dbaascli commands designed exclusively to work while there is no connectivity between the Gen 2 ExaC@C and the Oracle public cloud control plane.

Each command can be run from any node inside a VM cluster to change the CPU core count for that cluster. Customers with more than one VM cluster must issue a separate command from inside each VM cluster they wish to scale up or down.

The following command scales the OCPUs up or down in a VM cluster:

dbaascli cpuscale update --coreCount <core count> --message <message>

Example: dbaascli cpuscale update --coreCount 6 --message Scaling_Up_from_4_OCPU_per_VM_to_6_OCPU_per_VM

The coreCount value is the desired core count per guest VM in the cluster; in the example above, the core count in each node is changed to 6 OCPUs per VM. The --message option lets you associate a message with the operation, visible in the log file located in the /var/opt/oracle/log/cpuscale_status folder with the name format cpuscale_status_<timestamp>.log.

The second command designed to be used exclusively in disconnected mode is:

dbaascli cpuscale get_status

This command is typically run right after dbaascli cpuscale update to get the real-time execution state of the update. It displays the command's execution state as it progresses from scheduled to running, and finally to success or failure.
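The scheduled/running/success-or-failure progression reported by get_status can be modeled with a small sketch. This is purely illustrative, not Oracle tooling; the function name and structure are assumptions.

```python
# Illustrative sketch only (not Oracle tooling): a tiny model of the execution
# states reported by `dbaascli cpuscale get_status`, which progresses from
# "scheduled" through "running" to "success" or "failure".

TERMINAL_STATES = {"success", "failure"}

def poll_until_terminal(statuses):
    """Return the first terminal state seen, or None if still in progress.

    In practice, each status would come from re-running
    `dbaascli cpuscale get_status` on a node in the VM cluster.
    """
    for status in statuses:
        if status in TERMINAL_STATES:
            return status
    return None
```

For example, a run that reports "scheduled", then "running", then "success" terminates in the success state; a stream that has only reached "running" is still in progress.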
The standard Linux OS command "top" can be issued before or after the dbaascli cpuscale update command to find the number of OCPUs currently active in the node (# OCPU = # vCPU / 2). The two commands above are designed not to work if issued during normal connected mode.

Operations and Billing: Customers can be notified when the infrastructure changes from "Active" to "Disconnected" mode by subscribing to the Exadata Infrastructure - Connectivity Status event. When customer infrastructure changes status to "Disconnected", the Oracle Ops team is immediately notified and a Sev 2 ticket is automatically opened. The Oracle Ops team will first work to identify and rectify the issue if it is under Oracle control; otherwise, they will work with the customer to reestablish the lost connectivity.

During the period of lost connectivity, Oracle will continue to bill the customer for the last value of active OCPUs known to the cloud control plane before the loss of connectivity. Billing transitions to the new value (if changed) of configured OCPUs when connectivity is reestablished.

Oracle considers scale-up and scale-down operations among the most important aspects of a true cloud offering, and as a result is offering customers the ability to change their OCPU usage regardless of their connectivity status to the Oracle public cloud. With this capability, customers can continue to scale their critical workloads to meet the needs of their end users, without worrying about the side effects of a temporary loss of connectivity.

Documentation  Events

Are you looking for more Gen 2 ExaC@C Blog Posts? Announcing Gen 2 Exadata Cloud at Customer (Sep, 2019) Gen 2 Exadata Cloud@Customer New Features: Shared ORACLE_HOME (June, 2020) Gen 2 Exadata Cloud@Customer New Features: DB Sync Part-1 (June, 2020) Gen 2 Exadata Cloud@Customer New Features: Multiple VMs per Exadata System (June, 2020)
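As a footnote to the OCPU check above: the vCPU count visible in `top` maps to OCPUs via # OCPU = # vCPU / 2. A minimal sketch of that arithmetic (the helper name is hypothetical):

```python
# Illustrative sketch: inside the guest VM, # OCPU = # vCPU / 2, so the vCPU
# count visible in `top` (or /proc/cpuinfo) can be converted to the number of
# active OCPUs. This helper is hypothetical, not part of any Oracle tooling.

def ocpus_from_vcpus(vcpus: int) -> int:
    """Convert the guest VM's visible vCPU count to OCPUs (vCPU / 2)."""
    if vcpus <= 0 or vcpus % 2:
        raise ValueError("expected a positive, even vCPU count")
    return vcpus // 2
```

For example, a guest VM that shows 12 vCPUs in `top` is running 6 OCPUs.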


Gen 2 Exadata Cloud@Customer New Features: Multiple VMs per Exadata System

The Exadata Cloud Development team is pleased to announce "Multiple VMs per Exadata System (Multi-VM)" for Gen 2 Exadata Cloud@Customer. The Multi-VM capability allows customers to create multiple VMs on a compute node of an Exadata Cloud@Customer system to enable better isolation and consolidation.

All Gen 2 ExaC@C systems deployed until now have used a single VM cluster per ExaC@C. This single VM cluster consumed all of the memory, local disk space, and Exadata storage available in the system, and customers decided the number of OCPUs they wanted to allocate to it. For example, on a quarter rack system, the VM cluster would be a 2-node RAC cluster with one guest VM on each node occupying all of the local disk space as well as all of the memory of that node. The total OCPUs enabled during the VM cluster creation process were divided equally between the two nodes, and this VM cluster consumed all of the available Exadata storage distributed among the DATA, RECO, and SPARSE (if selected) disk groups.

Below are the details on how existing and new customers can use this new feature.

Scaling down a single VM cluster (prerequisite for existing customers): Since the first VM cluster consumed all of the local disk, memory, and Exadata storage, the first task for customers deployed with a single VM cluster is to shrink that first and only VM cluster so that resources are freed up to create additional VM clusters. The new "Scale VM Cluster" UI introduced with Multi-VM enables customers to scale down the OCPUs, memory, local disk, and Exadata storage for the VM cluster, allowing them to shrink their existing VM cluster as shown in the screenshot below. Once resources are freed up after the first VM cluster is shrunk, customers can create additional VM clusters as well as shrink or grow existing VM clusters.
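The shrink-then-create flow above amounts to simple per-resource arithmetic: whatever the scaled-down cluster gives back becomes available for new clusters. A hedged sketch (the field names are illustrative, not an Oracle API):

```python
# Hypothetical helper (field names are illustrative, not an Oracle API):
# given the current single-cluster allocation and the scaled-down target,
# compute the resources freed up for additional VM clusters.

def freed_resources(current, target):
    """Per-resource difference between the current and target shapes."""
    freed = {name: current[name] - target[name] for name in current}
    if any(v < 0 for v in freed.values()):
        raise ValueError("target shape is larger than the current allocation")
    return freed
```

For example, shrinking a cluster from 40 OCPUs and 720 GB of memory down to 24 OCPUs and 360 GB frees 16 OCPUs and 360 GB for new VM clusters.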
Notice that the interface above allows the customer to provide a new value for all four resources - OCPUs, memory, local file system size, and Exadata storage - at once. Clicking the "Save Changes" button creates a master work request with four child work requests, one for each resource, that can be monitored from the work request page. The OCPU and Exadata storage changes are online, while the memory and local file system changes are done in a rolling manner, one node at a time.

Creating a new VM cluster (existing and new customers): New customers, or existing customers who have scaled down their single VM cluster and whose systems have received the Multi-VM update for their region (see the rollout plan section at the bottom), can now create multiple VM clusters as shown in the screen below. These customers should plan carefully, weighing their consolidation and Exadata deployment strategy to decide the size and number of VM clusters they want to deploy per ExaC@C rack. Even though Multi-VM gives them the flexibility to adjust the VM cluster size after deployment, or even to add or remove clusters, changing memory and local disk space requires rolling changes and could affect their workloads.

Multi-VM introduces the following service minimums and maximums:
- Minimum of 2 OCPUs allocated per VM
- Minimum of 1 TB of usable storage per VM
- Minimum of 30 GB of memory allocated per VM
- Minimum of 60 GB of local space per VM
- Maximum* number of VMs: 5 for ExaCC X8 and 6 for ExaCC X7

One additional change accompanying the Multi-VM rollout is the default Release Update (RU) used for creating a new VM cluster or database. The default RU, currently April 2019, is changing to Jan 2020. This means that when new customers whose systems were deployed after the Multi-VM feature was rolled out to their region create a new VM cluster or a database, it will be created with the Jan 2020 RU.
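The service minimums above can be expressed as a simple validation sketch. This is purely illustrative; the real limits are enforced by the cloud tooling, and the field names here are assumptions.

```python
# Sketch of checking a proposed per-VM shape against the Multi-VM service
# minimums listed above. Purely illustrative; the real checks are enforced
# by the cloud tooling, and the field names are assumptions.

MINIMUMS = {
    "ocpus": 2,        # minimum OCPUs per VM
    "storage_tb": 1,   # minimum usable Exadata storage per VM (TB)
    "memory_gb": 30,   # minimum memory per VM (GB)
    "local_gb": 60,    # minimum local file system space per VM (GB)
}

def violations(shape):
    """Return the names of resources that fall below the service minimums."""
    return [name for name, floor in MINIMUMS.items()
            if shape.get(name, 0) < floor]
```

A shape meeting every minimum yields an empty list; an undersized shape returns the offending resource names.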
Existing customers will need to use the "dbaascli cswlib download" command to download, and the "dbaascli dbimage activateBP" command to activate, any image other than the default image.

Multi-VM delivers highly sought-after functionality to Gen 2 ExaC@C customers. This capability allows enterprise customers to enable database consolidation along with workload isolation, ensuring that critical application workloads are not impacted by unexpected demand surges from other applications deployed on the same system.

* Limited by local disk space

Documentation  Console API/SDK  Events Permissions

Are you looking for more Gen 2 ExaC@C Blog Posts? Announcing Gen 2 Exadata Cloud at Customer (Sep, 2019) Gen 2 Exadata Cloud@Customer New Features: Shared ORACLE_HOME (June, 2020) Gen 2 Exadata Cloud@Customer New Features: DB Sync Part-1 (June, 2020) Gen 2 Exadata Cloud@Customer New Features: Scaling OCPUs Without Cloud Connectivity (June, 2020)


Gen 2 Exadata Cloud@Customer New Features: DB Sync Part-1

DB Sync with Gen 2 ExaC@C: Synchronizing operating system command-line database management actions with Exadata Cloud@Customer automation.

Have you ever worried about breaking the Gen 2 ExaC@C automation because you wanted to directly call the underlying tools in the customer VM, like dbaascli - because you needed the deployment flexibility afforded by those tools to create a database or modify database characteristics - but were not sure whether you could execute these actions outside of the Gen 2 Cloud@Customer UI? Well, I have good news for you.

Introducing DB Sync: Oracle recently introduced a feature called DB Sync with Gen 2 ExaC@C that synchronizes certain command-line actions performed with the dbaascli and/or dbaasapi commands with the Cloud UI*, so you can use the method that works best for your database provisioning needs. This feature works with both database homes and databases, such that both actions - creation and modification from dbaascli/dbaasapi - are synchronized with the UI. More good news: there is nothing you have to do for the synchronization to work. It is fully automatic, managed by a daemon that wakes up every 10 minutes and synchronizes the metadata between dbaasapi/dbaascli and the Cloud UI.

Benefits of DB Sync: Database and system administrators use the command line simply because they like the control of saved scripts that can be reused easily after making simple changes. Another reason could be that they want to perform operations from the command line that are not yet available from the Cloud UI. For example, as of the time of writing of this blog, Oracle has not introduced DB/GI patching of databases from the UI for Gen 2 ExaCC, but it is available from the command-line tools. The DB Sync feature synchronizes the effects of certain command-line database operations with the Cloud Control Plane metadata so that the same changes are reflected in the UI.
For example, I may apply the Oct 2019 (19.5) update to my DB Home using the command line, and the DB Sync capability will ensure that the UI eventually reflects this update.

DB Sync basics: For the Cloud UI to accurately reflect changes made via the command line, the metadata updated by dbaascli/dbaasapi needs to be reflected in the Cloud UI metadata. To do this, the OCI control plane issues a request every 10 minutes to the dbcs-agent (database cloud service agent) running on the customer VM, asking the agent to run dbaasapi commands to get any changes made via command-line operations. Upon receiving any such changes back from the dbcs-agent, the metadata maintained by the OCI control plane is updated and reflected on the Cloud console. In the example above, if a DB Home was updated with the Apr 2020 Release Update (RU), that information will be captured by the dbcs-agent and sent to the control plane. The OCI control plane in turn will update the RU displayed on the Cloud console from Oct 2019 to Apr 2020. To see the list of all attributes synced between the host tooling (command-line metadata) and the control plane (cloud-console metadata) by DB Sync, refer to the documentation.

Figure 1: Command output showing the running dbcs-agent process
Figure 2: DB Sync process flow

What actions does DB Sync synchronize? The following actions are synchronized with the UI:
- DB Homes created outside of the UI (with dbaasapi) are synced with the UI
- Databases created outside of the UI (with dbaasapi) are synced with the UI
- DB Homes and databases modified outside of the UI (with dbaasapi) are synced with the UI

After the properties are synced, UI-based lifecycle operations can be performed on the synced objects. In part 2 of this blog post, I will talk about each of the above use cases with an example. For each use case, I will validate that DB Sync works by first performing the command-line action and then checking whether the changes are reflected in the UI.
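The sync loop described in "DB Sync basics" above can be modeled as a one-cycle metadata merge. This is a conceptual sketch only; the names and data shapes are assumptions, not the actual agent protocol.

```python
# Conceptual model of one DB Sync cycle (names are illustrative): the control
# plane asks the dbcs-agent for host-side metadata and folds any differences
# into its own copy, which is what the Cloud console then displays.

def sync_cycle(control_plane, host_metadata):
    """Return the control-plane metadata after absorbing host-side changes."""
    changes = {key: value for key, value in host_metadata.items()
               if control_plane.get(key) != value}
    return {**control_plane, **changes}
```

For example, a DB Home patched on the host from the Oct 2019 RU to Apr 2020 would surface in the control-plane copy on the next cycle; if nothing changed on the host, the cycle is a no-op.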
Finally, I will validate that UI-based lifecycle operations can be performed on the synced objects.

Conclusion: DB Sync is a foundational feature of Gen 2 ExaC@C cloud functionality that allows you to use dbaasapi to create or modify databases and database homes and automatically sync the changes to the OCI Control Plane.

Resources: DB Sync Attribute Mapping Public Documentation REST APIs Documentation SDKs and CLI Terraform Database Events Database Service Release Notes

"*" Cloud UI terms: Cloud UI, Cloud Console, and the OCI Control Plane have been used interchangeably in this blog; they represent all of the following interfaces: Console, APIs, OCI CLI, SDK, and Terraform.

Are you looking for more Gen 2 ExaC@C Blog Posts? Announcing Gen 2 Exadata Cloud at Customer (Sep, 2019) Gen 2 Exadata Cloud@Customer New Features: Shared ORACLE_HOME (June, 2020) Gen 2 Exadata Cloud@Customer New Features: Scaling OCPUs Without Cloud Connectivity (June, 2020) Gen 2 Exadata Cloud@Customer New Features: Multiple VMs per Exadata System (June, 2020)


Gen 2 Exadata Cloud at Customer New Features: Shared ORACLE_HOME

Shared ORACLE_HOME with Gen 2 Exadata Cloud at Customer: Consolidate More, Administer Less.

Back in Sept 2019, we introduced Gen 2 Exadata Cloud at Customer (Gen 2 ExaCC for short) and explained the initial release features in this blog. With the initial release, every database created on ExaCC was automatically created in its own home; there was no way to use the same home to add more databases. We are now introducing a new feature called Shared ORACLE_HOME, which allows you to create multiple databases in a single ORACLE_HOME. In this post, I will explain how this new functionality works and how to get the most out of it.

First, we are introducing a new resource called ORACLE_HOME. When you create a new ORACLE_HOME, you specify which Oracle Database version you want to use to create it. Every database subsequently created within this home will be of the same version (see Screens 1, 2, and 3). Alternatively, when you create a new database, you can specify a new ORACLE_HOME you want the database to be created in; the database will be created with the version you specified and will reside in the ORACLE_HOME you specified (Screen 4).

Sharing an ORACLE_HOME among multiple databases allows you to consolidate more databases on your ExaCC. It is Oracle MAA best practice to use as few ORACLE_HOMEs as possible to keep maintenance and administration to a minimum. For example, when you patch an ORACLE_HOME that is shared between multiple databases, all databases in the home get patched. Another big reason to use Shared ORACLE_HOME is to optimize the finite local space available on each compute node. Let's say you have five databases each of versions 12.2.0.1, 18c (18.0.0.0.0), and 19c (19.0.0.0.0). In an ideal world, you would create three ORACLE_HOMEs, one for each of the three versions, and place all databases of the same version in their respective home.
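That version-to-home grouping can be sketched in a few lines. The helper and data shapes are illustrative only, showing why fifteen databases across three versions need only three homes.

```python
# Sketch of the MAA guidance above: group databases by Oracle Database
# version so that each version shares a single ORACLE_HOME. Names and
# structure are illustrative only.

from collections import defaultdict

def homes_for(databases):
    """Map each database version to the list of databases sharing one home."""
    homes = defaultdict(list)
    for name, version in databases.items():
        homes[version].append(name)
    return dict(homes)
```

For example, three databases spanning two versions collapse into two shared homes.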
The Shared ORACLE_HOME feature on Exadata Cloud at Customer Gen 2 allows you to do exactly that, and it is supported from both the GUI and the command line. All of the associated functionality you'd expect, such as viewing all of the homes in a VM cluster, viewing the databases in any home, and deleting an empty ORACLE_HOME, is also available in the new UI. The screens below demonstrate the key changes in the UI associated with this new feature.

Conclusion: The Shared ORACLE_HOME feature fulfills commonly needed functionality for Gen 2 ExaCC. It also lays a foundation for upcoming functionality like applying DB/GI updates from the UI and moving databases from one home to another so you can upgrade or downgrade your database from one RU to another.

Resources: Public Documentation REST APIs Documentation SDKs and CLI Terraform Database Events Database Service Release Notes

Screen 1: Gen 2 ExaCC Cloud UI showing the new Database Home resource
Screen 2: You can now create databases in an existing ORACLE_HOME
Screen 3: Create Database screen
Screen 4: Creating a new home at the same time a new database is being created

Are you looking for more Gen 2 ExaC@C Blog Posts? Announcing Gen 2 Exadata Cloud at Customer (Sep, 2019) Gen 2 Exadata Cloud@Customer New Features: DB Sync Part-1 (June, 2020) Gen 2 Exadata Cloud@Customer New Features: Scaling OCPUs Without Cloud Connectivity (June, 2020) Gen 2 Exadata Cloud@Customer New Features: Multiple VMs per Exadata System (June, 2020)


Exadata Database Machine

Exadata System Software Updates - May 2020

To give you a single place to keep up with Exadata technical news, we'll post Exadata System Software releases here on blogs.oracle.com/exadata as soon as they are available. This post details the May 2020 releases.

Exadata System Software Update 19.3.8.0.0
Exadata System Software 19.3.8.0.0 Update is now generally available. 19.3.8.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.3.X releases. It contains the following: Exadata Software updates for important bugs discovered after 19.3.7.0.0 was released in early May 2020; Java Security Update (CPUApr2020). For further information, see: Exadata 19.3.8.0.0 release and patch (31322052) (Doc ID 2661500.1); Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1).

Exadata System Software Update 19.2.14.0.0
Exadata System Software 19.2.14.0.0 Update is now generally available. 19.2.14.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.2.X releases. It contains the following: Exadata Software updates for important bugs discovered after 19.2.13.0.0 was released in early May 2020; Java Security Update (CPUApr2020). 19.2.14.0.0 is the best upgrade target for customers who want to move to Oracle Linux 7. Customers upgrading to Oracle Database 19c will have to upgrade Exadata to 19.2.X.0.0. For further information, see: Exadata 19.2.14.0.0 release and patch (31321902) (Doc ID 2661499.1); Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1).

Exadata System Software Update 18.1.28.0.0
Exadata System Software 18.1.28.0.0 Update is now generally available. 18.1.28.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 18.1.X releases.
It contains the following: Exadata Software updates for important bugs discovered after 18.1.27.0.0 was released in early May 2020; Java Security Update (CPUApr2020). 18.1.28.0.0 is the best upgrade target for customers who want to continue to use Oracle Linux 6. 12.2.1.1.8, released in July 2018, was the last patch in the 12.2.x line, so customers should plan to update to 18.1.x.0.0 if continuing with Oracle Linux 6; otherwise, plan to update to 19.2.x.

Important considerations before upgrading in 18.x
Please refer to MOS Document 888828.1 for important considerations before upgrading to the latest Exadata System Software release. As stated there, before updating database servers (non-OVM and OVM domU) to Exadata 18.1.5 or higher, ensure that ACFS drivers supporting the most efficient CVE-2017-5715 (Spectre Variant 2) mitigation are installed. These updated ACFS drivers, required for Exadata versions >= 18.1.5.0.0 and >= 12.2.1.1.7, are included in the July 2018 quarterly database releases; quarterly database releases from April 2018 and earlier still require a separate ACFS patch. See MOS Document 2356385.1 for details. MOS Document 2357480.1 discusses the performance impact of mitigation measures against CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 on Oracle Database, Oracle Exadata, and Oracle Zero Data Loss Recovery Appliance. Customers whose database servers have been customized with third-party kernel drivers must contact the third-party vendor to obtain updated drivers. See MOS Document 2356385.1 for details.

For further information, see: Exadata 18.1.28.0.0 release and patch (31321897) (Doc ID 2661498.1); Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1).

Huge thanks to our Exadata Sustaining Team, who provide these releases to protect and enhance the value of your Exadata investments. We are always interested in your feedback; you are welcome to engage with us via comments here.
Related posts Exadata System Software Updates - April 2020 (early May 2020) Exadata System Software Updates - March 2020 (March 2020) What’s New with Oracle Database 19c and Exadata 19.1 (February 2019) Exadata System Software 19.1 (October 2018) Introducing Exadata Database Machine X8M (September 2019) Introducing Exadata Database Machine X8 (June 2019)  


Database Consolidation - Why and How

Database consolidation means running databases on a common set of infrastructure, most often to reduce costs and increase operational efficiency. The business drivers and approaches to consolidation are the same regardless of whether consolidation is done in an on-premises data center or in the Cloud.

In this article, we will look at why organizations choose to consolidate databases and the options for how to achieve consolidation, addressing the following questions and issues:
- Why Consolidate Databases?
- Why Consolidate Databases on Exadata?
- Providing Isolation with Consolidation
- Why Isolate Databases?
- How to Isolate Databases on Exadata
- Don't Over-Isolate: Limit Virtual Sprawl
- How to Stop Noisy Neighbors: Use Resource Manager
- How to Simplify: Use Resource Shapes
- Reducing Blast Area Still Means a Blast!
- How does Oracle Deliver Database Availability?
- How does Oracle Converged Database Help?

This article is intended to give the reader a better understanding of the business drivers behind consolidation, as well as the tools available to strike the best balance between consolidation and isolation to meet business goals. The overly simplistic approaches advocated by some often result in higher costs and increased work for operations teams.

Why Consolidate Databases? Business considerations are the primary driver behind database consolidation. Organizations are constantly driving toward greater efficiency, accomplishing more work with less effort while still meeting business goals. The main business drivers of consolidation are cost reduction, simplicity, and security. Costs are lower with consolidation because the same databases run on less hardware. This also applies in Cloud deployments, because companies are still paying for use of the underlying hardware; it might be harder to track and visualize in some Cloud environments, but more hardware means higher cost.
Regardless of whether your databases are deployed on hardware in your on-premises data center or in a Cloud data center, the utilization of those systems will drive costs.

Simplicity is essentially another cost reduction factor because simplicity results in less labor. We prefer to focus on the simplification aspects because there is not always a straight line from labor reduction to cost reduction; simplifying one area typically means that personnel shift to other areas and focus on higher-value work. Consolidation means fewer configuration items an organization needs to manage, which takes less effort and allows people to focus on other, higher-value work. Simplifying Information Technology also makes an organization more agile and better able to respond to business needs, rather than focusing on managing the complexity inherent in divergent systems. Application developers also benefit from simplification through standardization on the Oracle converged database. Oracle Database meets the needs of ALL application developers because it is a single converged database rather than multiple, incompatible technologies: one technology to learn and develop with rather than many. Developers don't have to choose a specific database that meets a single purpose, making development decisions easier and allowing developers to focus on new business requirements.

Security is improved by consolidation and standardization, which minimize the number of vulnerabilities an organization has to protect against. Consolidation forces companies to standardize on fewer technologies, which allows the organization to focus on employing the best practices needed to secure those technologies. Consolidation also results in fewer instances of each technology, giving companies fewer points of vulnerability as well as commonality across systems. Finally, consolidation allows organizations to deploy security fixes more quickly, simply because there are fewer systems to manage.
Why Consolidate Databases on Exadata? Exadata is the ideal platform for consolidating databases because it builds upon and strengthens the fundamental business drivers outlined above. Exadata includes a wide range of innovations specifically designed to make it the most effective platform for consolidation. With more than 10 years on the market and thousands of systems deployed, Exadata is the best choice for database consolidation due to considerations of cost, simplicity, security, availability, and performance.

Cost is a fundamental business driver, and costs are reduced by fully utilizing the capabilities of Exadata. The high performance of Exadata allows it to service more databases, resulting in lower costs. Many organizations deployed their first Exadata systems only for mission-critical systems, where over-investment was often justified. For example, one customer runs over $1B worth of business on a single large Exadata system, so even if that system is only 20% utilized it is easily cost-justified by the business-critical nature of the application. Making better use of Exadata systems results in much lower costs and makes Exadata cost-competitive with other platforms. Any asset that is only 10% or 20% utilized will not be cost-effective, so we lower the cost of Exadata simply by using it more through consolidation.

Simplicity of operations is improved by consolidation alone, but consolidation onto Exadata results in further simplification. Exadata is a single platform that can run any Oracle Database, at any scale, with any workload. Exadata is the only fully integrated enterprise-scale system that includes servers, storage, operating system, drivers, logical volume manager, Clusterware, and Oracle Database in a single package. Other converged (or hyper-converged) platforms also lack critical components, such as RDMA over Converged Ethernet and Persistent Memory, that Exadata contains by default.
With more than 11 years of development, Exadata includes standardized operational practices that contribute to simplicity of operation. The standardized care and feeding of Exadata has been built into the Oracle Cloud (and Exadata Cloud at Customer) to drive even greater simplicity.

Security is improved simply through standardization and consolidation, but Oracle Database and Exadata provide even greater security than other solutions. This is because Exadata is a "full stack" system, comprising servers, operating system, networking, storage, virtual machines, logical volume manager, and the Oracle Database software. Only Oracle provides a single full-stack solution that includes the Oracle Database. Oracle then ensures security by executing the most commonly used security scanners against the full stack, and works to address any security concerns through each successive product release. No other product on the market provides such a secure foundation, and this same foundation is then used in the Oracle Cloud to provide the most comprehensive database security solution possible.

Availability is always a concern in consolidated environments because more databases and more business applications rely on fewer systems; the scope of failure impact is potentially increased when databases are consolidated. Exadata addresses these availability concerns through tight integration with Oracle Maximum Availability Architecture (MAA) principles and is able to deliver the highest levels of availability in the industry. Exadata also provides multiple levels of consolidation vs. isolation, so consolidating onto Exadata is not simply an either/or proposition. All of the tools needed to address availability requirements are included in Exadata, as we will discuss in later sections.

Performance does not always mean delivering the highest possible levels of performance, which Exadata certainly can achieve.
The unequaled performance of Exadata is leveraged to achieve greater density in database consolidation environments. Exadata also allows administrators to deliver the appropriate level of performance for the needs of each business application; the level of performance can be easily controlled simply by controlling the amount of resources allocated to each database.

Providing Isolation with Consolidation: Database consolidation and isolation are not mutually exclusive; it is still possible to meet isolation requirements in a consolidated environment. Exadata (on-prem or in the Cloud) includes all of the capabilities necessary to consolidate databases while still providing isolation where necessary between those databases. The goal is to reduce costs, simplify management, and improve security, while still delivering the availability and performance that businesses require. We first need to consider WHY databases need to be isolated; then we will look at HOW we use the tools available on Exadata to provide the needed isolation.

Why Isolate Databases? There are six factors that govern why databases need to be isolated vs. consolidated within the operating environment:
- Physical Location
- Administrative Separation
- Security Separation
- Maintenance (patching and upgrades)
- Blast Radius (scope of failure impact)
- Resource Management

Physical Location includes placement of databases and systems within certain geographic regions or data centers, as well as within specific subdivisions of the region or data center, commonly referred to as Availability Domains. Businesses typically require production and disaster recovery systems to be placed in different physical locations. Business needs may also dictate that "local standby" systems reside within a different Availability Domain adjacent to the location of either the production or disaster recovery system. Business requirements may also dictate the physical location of development and test systems.
The physical location of a system may also be dictated by the location of users or the location of application-tier systems.

Administrative Separation refers to organizations that have multiple DBA teams or other needs to separate databases from an administrative standpoint, such as having different administrators for development, test, and production systems. This is also common with SaaS (Software as a Service) providers, where customers of the service provider have administrative access, or the service provider has separate teams that administer databases for their clients. In either case, multiple administrative teams mean those databases need to be isolated from each other administratively.

Security Separation means applying different security controls to more sensitive data, often including the use of dedicated networking to service specific databases (quarantine LAN, etc.). These more sensitive databases are typically subject to higher security standards, such as for regulatory reasons (HIPAA, PCI, PII, etc.). Higher security standards typically raise the cost of these systems, so these databases are often isolated from other databases for reasons of cost containment. Databases must be isolated from each other in cases where their security requirements differ widely.

Maintenance considerations include patching and upgrades of servers, databases, and any supporting infrastructure. The biggest concern with maintenance is major version upgrades at the infrastructure layer (O/S, database, etc.), where there is greater potential for functional changes that might impact how services operate, as well as longer potential downtime to perform those upgrades. Maintenance also includes application of individual patches or product updates, which can also impact database availability. Databases or groups of databases therefore need to be isolated from each other for the purposes of maintenance.
Blast radius (or fault isolation) considerations refer to the scope of impact any failure has. Grouping databases together means the scope of impact can be wider and affect more parts of the business. Blast radius is a consideration when consolidating databases onto physical servers or virtual machines, or when consolidating databases as Pluggable Databases into Container Databases.

Resource Management refers to ensuring each database receives the resources it requires, as well as guarding against the "noisy neighbor" problem. One approach to resource management is to isolate databases onto dedicated physical or virtual machines, but we recommend simply using Oracle Resource Manager instead. Exadata automatically manages prioritization of work through the entire stack of system resources, from CPU to I/O, including prioritizing critical I/O requests over non-critical I/O, and prioritizing OLTP over analytics workloads. Exadata also gives administrators control over prioritization through resource plans.

How to Isolate Databases on Exadata

Now that we have established why databases are isolated, there is also the question of how databases are deployed in an isolated manner. With Exadata (either on-prem or in the Cloud), we have 5 options for isolating databases:

- Isolate by Physical Servers
- Isolate by Virtual Machine
- Multiple Databases per Physical/Virtual Machine
- Multiple Pluggable Databases per Container Database
- Oracle Resource Manager (DBRM & IORM)

We recommend a judicious use of ALL these approaches for deploying databases across your Information Technology infrastructure, whether deployed on-prem or in the Cloud. Managers of the infrastructure must determine how and when to isolate vs. consolidate databases. Excessive isolation, such as the "one database per virtual machine" approach, results in a high cost of operation, so we recommend a more balanced approach.
Over-use of virtualization leads to what is known as "virtual sprawl", as outlined in the next section.

Don't Over-Isolate: Limit Virtual Sprawl

Data centers in the past often suffered from sprawl (physical sprawl), where each application and its database(s) ran on their own physical servers. Virtual Machine technology has allowed IT organizations to stem the tide of physical sprawl, but this has often resulted in what is known as virtual sprawl. Increasing compute and storage density has also allowed more workload to be laid on top of the same physical footprint, using virtualization to maintain the same (or even greater) isolation between workloads. You will quickly build an administrative nightmare and higher costs if each database is deployed on a dedicated Virtual Machine. You can easily meet the needs of your business application users by taking a more judicious approach to deploying databases, using the full range of options at your disposal. The fundamental guidance is to use the right tool for the task, such as using Oracle Resource Manager to manage resources.

How to Stop Noisy Neighbors: Use Resource Manager

There is no reason to isolate Oracle Databases simply for the purposes of resource management. Isolate databases for the right reasons, such as administrative separation, security separation, maintenance purposes, and blast radius, not for resource management. We can easily ensure each database receives the appropriate amount of resources using Oracle Database Resource Manager (DBRM), and deploy systems that are much simpler to manage compared to over-use of virtualization. There are 4 primary resources that need to be managed in any system:

- CPU
- Memory
- Processes
- I/O

CPU is managed using Oracle Database Resource Manager (DBRM): with shares and limits (or Dynamic CPU Scaling in 19c) inside of a Container Database, with Instance Caging (CPU_COUNT) across containers and non-pluggable databases, and within databases using Consumer Groups.
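To make the CPU controls above concrete, here is a hedged sketch of Instance Caging via CPU_COUNT plus shares and limits between PDBs using the DBMS_RESOURCE_MANAGER package. The plan and PDB names are hypothetical and the values are illustrative, not recommendations from this post:

```sql
-- Instance Caging: cap this instance's CPU usage (requires an active resource plan)
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH;
ALTER SYSTEM SET cpu_count = 4 SCOPE=BOTH;

-- Shares and limits between PDBs in a CDB (run from CDB$ROOT; names are hypothetical)
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'CONSOL_PLAN',
    comment => 'Shares and limits for consolidated PDBs');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'CONSOL_PLAN',
    pluggable_database    => 'SALES_PDB',  -- hypothetical PDB name
    shares                => 3,            -- 3 shares of CPU relative to other PDBs
    utilization_limit     => 75,           -- never more than 75% of available CPU
    parallel_server_limit => 50);          -- at most 50% of parallel servers
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
ALTER SYSTEM SET resource_manager_plan = 'CONSOL_PLAN';
```

With shares but no utilization limit, a PDB can consume idle CPU beyond its share; the limit is what gives users a consistent experience regardless of other activity.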
DBRM gives us control of ALL types of databases and workloads, and it's integrated with IORM (see below for detail). Memory usage for Oracle databases is managed by controlling SGA (System Global Area) and PGA (Program Global Area) settings, which is covered extensively in the MAA Database Consolidation white paper here. Processes within Oracle databases are managed by controls for sessions and parallel query servers, which is also covered in the MAA Database Consolidation white paper mentioned above. I/O resources are much simpler to manage on Exadata because we have IORM (I/O Resource Manager). Exadata is integrated hardware and software, spanning both compute and storage. Using IORM Objective "auto" allows IORM to inherit the same resource ratios set for CPU in DBRM, so you have one set of controls that handles everything. See our demo series on Resource Management (starting here), which shows how you can manage resources between pluggable databases, between container databases and non-pluggable databases, and even within databases using Consumer Groups.

How to Simplify: Use Resource Shapes

Of course, we recommend establishing a standardized set of "resource shapes" to choose from, rather than making each database a unique and special snowflake. It's much easier to manage a large population of databases if they all follow some standards, including the resources assigned to them. Using Resource Shapes means you establish shapes with the same ratios for the following resources:

- CPU
- Memory (SGA & PGA)
- Processes (Sessions & PQ Servers)
- I/O

I covered this in detail in the MAA Best Practices white paper on Database Consolidation available here. The white paper includes different allocations for DW vs. OLTP databases, so you'll see 2 different tables of Resource Shapes. DW systems need larger PGA memory and devote more processes to parallel processing than OLTP systems, and you will see those differences in the resource shape tables.
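A resource shape is ultimately just a consistent set of parameter values applied per database. As an illustration only, a hypothetical "small OLTP" shape might look like the following; the parameter names are standard Oracle initialization parameters, but these specific values are invented for the example and should instead come from the shape tables in the white paper:

```sql
-- Hypothetical "small OLTP" resource shape (values are illustrative only)
ALTER SYSTEM SET cpu_count            = 4   SCOPE=BOTH;    -- CPU share (Instance Caging)
ALTER SYSTEM SET sga_target           = 16G SCOPE=SPFILE;  -- SGA memory
ALTER SYSTEM SET pga_aggregate_target = 4G  SCOPE=BOTH;    -- PGA memory
ALTER SYSTEM SET sessions             = 400 SCOPE=SPFILE;  -- session cap
ALTER SYSTEM SET parallel_max_servers = 8   SCOPE=BOTH;    -- PQ server cap
```

A DW shape built on the same CPU count would typically carry a larger pga_aggregate_target and more parallel servers, which is exactly the difference reflected in the two shape tables.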
Reducing Blast Area Still Means a BLAST!

Reducing the "blast area" by isolating databases onto different physical servers or virtual machines doesn't ELIMINATE the blast (meaning "failure"). What a smaller blast area gives you is pieces that are smaller and easier to fix, simply because they are smaller. Deploying smaller systems as virtual machines reduces the blast area, but is only one piece of the puzzle. In short, reducing the blast area doesn't necessarily increase availability, but might help in reducing the duration of an outage. Larger numbers of smaller (virtual) systems also increase the labor associated with managing those systems, so we still need to consider the full range of tools that ensure systems remain available. We therefore need to focus on the full set of tools available to deliver high availability for Oracle Databases.

How does Oracle Database Deliver Availability?

Availability is even more important when databases are consolidated. There can still be a "blast" that takes down a database or set of databases, which will impact at least some of your users, if not all of them, depending on how your application uses that data. Core HA features of Oracle Database include:

- Oracle Real Application Clusters
- Oracle Active Data Guard
- Oracle Application Continuity

Core HA features of Oracle Database have become something we almost don't even consider these days, but they are still huge differentiators compared to other databases on the market. Oracle experts often take for granted things like online index rebuilds, online table move, and other features that have been developed over the years. I am often surprised to find other databases haven't caught up, and some of the newer database engines are still years behind. Oracle Real Application Clusters (RAC) is still unique in the market after all of these years. We often think of RAC for scalability, but it's a huge part of the Oracle Database availability solution as well.
Oracle Active Data Guard is the Oracle solution for providing Read Replicas (or "reader farms"), but it's also a critical part of the Oracle Database availability solution. Active Data Guard provides for availability when a "blast" (failure) occurs, but it's also used to provide availability during proactive maintenance. Oracle Application Continuity hasn't gotten enough press over the past 5+ years, but it's yet another completely unique capability that Oracle offers. People seem to remember TAF (Transparent Application Failover), but Application Continuity is a completely different animal. TAF required application code changes, while Application Continuity has essentially pushed much of that complexity into the SQL driver for Oracle. For more information on Application Continuity see here. Oracle Database has long exceeded the capabilities of applications, and Application Continuity closes the gap. Applications need to tolerate rolling outages of a RAC cluster, and Application Continuity is what makes this happen. Application Continuity is implemented through CONFIGURATION changes, not within application code.

How does the Oracle Converged Database Help?

The Oracle converged database brings a number of critical advantages for data-driven applications compared to point-solution databases. See Juan Loaiza's talk on Data Driven Apps at OOW London 2019 for a great talk on this topic. Although point-solution databases are sometimes referred to as "best of breed", many of them don't live up to that claim. You would suppose a database with a narrow focus might do that job better, but that's not necessarily the case. The Oracle converged database advantages include:

- Data Modeling Flexibility
- Simplified Data Movement
- Portable Developer Skills
- Increased Developer Productivity
- Any Workload, Any Data
- Simplified Consolidation & Isolation
- Any Size, Any Scale

Data Modeling Flexibility comes from having a single database that provides the full range of data modeling needs.
While developers might make the perfect decision and choose the exact point-solution database needed, requirements often change. A development team might choose a relational database, only to find that some data really needs a different modeling approach, such as JSON Documents. While Oracle Databases can be deployed using a SINGLE modeling technique (Relational, Document, Star Schema, Property Graph, etc.), you can also use MULTIPLE modeling approaches within the same Oracle Database. I have worked with data that didn't fit easily into the relational model, so it's great to be able to use JSON Documents or another technique that fits better for certain portions of your data model.

Simplified Data Movement comes from using the same Oracle Database on both the sending and receiving side. Data movement is made vastly easier if the data resides in a single shared database, but it is also easier between Oracle Databases because both databases use the same tools, same drivers, same datatypes, etc. Data movement can be completely eliminated in cases where applications share the same data, such as combined OLTP and Reporting in a single database (yes, this is possible and quite common with Oracle). Data movement can also be quite costly in Cloud environments, which is another benefit of using the Oracle converged database rather than multiple point-solution databases.

Portable Developer Skills are a key benefit of the Oracle converged database, allowing developers to work on ANY application or microservice that uses the same database without extensive re-training. Developers who write analytic or Data Mining code can easily move into a development team working on transactional applications, or vice-versa.

Increased Developer Productivity comes from having a common set of interfaces for all databases, regardless of what features are used within each database. Any feature provided by the database represents code that DOES NOT have to be written by a developer.
Any Workload, Any Data includes traditional OLTP and Analytic applications, but also includes Machine Learning, blockchain, property graphs, Time Series analysis, Spatial, Internet of Things, and Event Processing. Oracle Database includes robust capabilities in these areas that have been proven over many deployments at customer sites worldwide.

Simplified Consolidation & Isolation comes from having databases that all run the same converged Oracle Database. Administrators have the flexibility to isolate databases where needed, or consolidate them to ease the administrative burden as well as to improve resource efficiency. Oracle Databases can even be consolidated into a Container Database, for much greater density and lower cost of operation. Yes, this also applies to Cloud environments, where costs can easily escalate. The bottom line is you simply can't do this with multiple divergent database engines, such as one for Relational OLTP, one for Relational Star Schema DW, one for JSON Documents, etc.

Any Size, Any Scale database or workload can be handled by Oracle Database. It's amazing to me that other databases haven't caught up to Oracle after all these years. We still encounter customers who are faced with a database migration (to Oracle) because their chosen database engine simply can't keep up. Oracle Database is able to easily support databases at least 10X larger than our competitors can handle, and the Exadata platform simply takes that number higher.

Summary

There are common business drivers behind database consolidation, including the need to reduce costs, simplify information technology, and improve the security posture of the organization. Database consolidation typically raises concerns over the impact on availability and risk to the business that can accompany any consolidation effort.
Exadata addresses these concerns by providing benefits in terms of cost, simplicity, and improved security, while still delivering the level of availability that organizations require. The high performance of Exadata can be used to deliver world-class application performance, and can also be used to drive higher consolidation density in order to lower costs. We recommend using the full range of capabilities available to achieve the needed levels of database isolation when consolidating on Exadata, including use of physical isolation, Virtual Machines, Multiple Container Databases, and Oracle Resource Manager to control noisy neighbors. Finally, it is the use of the converged Oracle Database that allows organizations to standardize as well as to consolidate.


Exadata System Software Updates - April 2020

To give you a single place to keep up with Exadata technical news, we'll post Exadata System Software releases here on blogs.oracle.com/exadata as soon as they are available. This post details the April 2020 releases.

Exadata System Software Update 19.3.7.0.0

Exadata System Software 19.3.7.0.0 Update is now generally available. 19.3.7.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.3.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 19.3.6.0.0 was released in March 2020
- CPUApril2020 WebLogic Server security updates (12.2.1.3)
- InfiniBand (IB) Switch Firmware update (version 2.2.15-1)
- X8-8 ILOM Update (SW 1.3.0 RC)
- Critical Linux and Xen security updates

For further information, see:

- Exadata 19.3.7.0.0 release and patch (31205818) (Doc ID 2648874.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Exadata System Software Update 19.2.13.0.0

Exadata System Software 19.2.13.0.0 Update is now generally available. 19.2.13.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.2.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 19.2.12.0.0 was released in March 2020
- CPUApril2020 WebLogic Server security updates (12.2.1.3)
- InfiniBand (IB) Switch Firmware update (version 2.2.15-1)
- X8-8 ILOM Update (SW 1.3.0 RC)
- Critical Linux and Xen security updates

19.2.13.0.0 is the best upgrade target for customers who want to move to Oracle Linux 7. Customers upgrading to Oracle Database 19c will have to upgrade Exadata to 19.2.X.0.0. For further information, see:

- Exadata 19.2.13.0.0 release and patch (31205859) (Doc ID 2648873.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Exadata System Software Update 18.1.27.0.0

Exadata System Software 18.1.27.0.0 Update is now generally available.
18.1.27.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 18.1.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 18.1.26.0.0 was released in March 2020
- CPUApril2020 WebLogic Server security updates (10.3.6)
- InfiniBand (IB) Switch Firmware update (version 2.2.15-1)
- Critical Linux and Xen security updates

18.1.27.0.0 is the best upgrade target for customers who want to continue to use Oracle Linux 6. 12.2.1.1.8, which was released in July 2018, is the last patch in the 12.2.x line, so customers should plan to update to 18.1.x.0.0 if continuing with Oracle Linux 6; otherwise, plan to update to 19.2.x.

Important considerations before upgrading in 18.x

Please refer to MOS Document 888828.1 for important considerations before upgrading to the latest Exadata System Software release. As stated there, before updating database servers (non-OVM and OVM domU) to Exadata 18.1.5 or higher, ensure ACFS drivers that support the most efficient CVE-2017-5715 (Spectre Variant 2) mitigation are installed. These updated ACFS drivers, required for Exadata versions >= 18.1.5.0.0 and >= 12.2.1.1.7, are included in the July 2018 quarterly database releases. Quarterly database releases from April 2018 and earlier still require a separate ACFS patch. See MOS Document 2356385.1 for details. MOS Document 2357480.1 discusses the performance impact of mitigation measures against CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 on Oracle Database, Oracle Exadata, and Oracle Zero Data Loss Recovery Appliance. Customers who have customized database servers by installing third-party kernel drivers must contact the third-party vendor to obtain updated drivers. See MOS Document 2356385.1 for details.
For further information, see:

- Exadata 18.1.27.0.0 release and patch (31205960) (Doc ID 2648851.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Huge thanks to our Exadata Sustaining Team, who provide these releases to protect and enhance the value of your Exadata investments. We are always interested in your feedback. You are welcome to engage with us via comments here.

Related posts

- Exadata System Software Updates - March 2020 (March 2020)
- Exadata System Software Updates - February 2020 (February 2020)
- What's New with Oracle Database 19c and Exadata 19.1 (February 2019)
- Exadata System Software 19.1 (October 2018)
- Introducing Exadata Database Machine X8M (September 2019)
- Introducing Exadata Database Machine X8 (June 2019)



Resource Manager Using Consumer Groups

High performance database deployment relies on effective management of system resources. Only Oracle Exadata provides fully integrated management of system resources, including Oracle databases, server compute resources, and I/O. This blog and accompanying video show how Consumer Groups can be used in Oracle Resource Manager for fine-grained control of resource usage by users, groups of users, applications, and other criteria.

Demo Video (Consumer Groups)

This demo video shows how to manage resources WITHIN a database using Consumer Groups. Smaller, less critical databases will typically use resource allocations for the entire database, without fine-grained control of resources inside the database. However, larger, more critical databases such as Data Warehouses or critical OLTP systems might also require management of resources to ensure critical business processes are completed in a timely manner, or critical users receive the proper amount of resources to complete their work in the database. The full demo can be viewed here: https://youtu.be/UhfHLZ0sU8E

Consumer Groups - Manage Resources Within a Database

Our two previous demos showed how Resource Manager is used for managing resources between databases to guard against the "Noisy Neighbor" problem and ensure each database receives the system resources it requires to meet business needs. This demo shows how to use Consumer Groups, which consist of 3 components that allow administrators to manage resources within a database:

- Consumer Groups
- Mapping Rules
- Resource Plans

Consumer Groups serve as an identifier that provides a linkage between users (or other consumers) and system resources. Consumer Group Mappings are rules the database executes when connections are established to the database, placing each connection under a Consumer Group.
Of course, database connections can be for users, but also for batch jobs or other processes that act upon the database, and mapping rules cover all connections regardless of their source. Finally, Resource Plans control the resources assigned to each Consumer Group.

Shares & Limits

Resource Plans for Consumer Groups are based on the concept of shares and limits, which provides a minimum guarantee (or share) of resources for each Consumer Group, while also allowing Consumer Groups to use any idle resources (up to their limit). For example, Extract, Transform, and Load (ETL) jobs might have a high share of resources, but might also need to be limited in the amount of resources they consume. In this sample screenshot from Oracle Enterprise Manager (OEM), the Resource Plan does not include utilization limits. This means that each consumer group is able to use ALL of the system resources if other groups are not using those resources at a given point in time. Utilization Limits should be configured in cases where users need a more consistent experience regardless of other activity in the database.

IORM Can Inherit DBRM Ratios!

Exadata I/O Resource Manager (IORM) can be configured to simply inherit the same resource ratios used for Database Resource Manager (DBRM) by using IORM Objective "auto". For example, if a Consumer Group is supposed to have 25% of compute resources (CPU) on a system, that Consumer Group should (in most cases) also have 25% of I/O resources. A noisy neighbor can consume excessive CPU, but might also consume excessive I/O on a system. It's important to control both, and using a single setting works best. The following screenshot from OEM shows the running sessions for each consumer group. For systems configured with Exadata IORM Objective "auto", the same resource ratios from the database are applied at the storage level as well. This means a single set of controls is used to govern resources on the system, making the system simpler to manage.
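The three components described above (a Consumer Group, a mapping rule, and a resource plan with shares and limits) can be sketched with the DBMS_RESOURCE_MANAGER package as follows. This is an illustrative sketch, not the demo's actual script: the group, user, and plan names are hypothetical and the percentages are invented for the example:

```sql
-- Hypothetical consumer group, mapping rule, and plan (names/values illustrative)
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'REPORTING',
    comment        => 'Ad-hoc reporting users');

  -- Mapping rule: sessions connecting as RPT_USER land in REPORTING
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'RPT_USER',
    consumer_group => 'REPORTING');

  -- Resource plan: 25% minimum share for reporting, capped at 50% utilization
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DAYTIME_PLAN',
    comment => 'Daytime priorities');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan              => 'DAYTIME_PLAN',
    group_or_subplan  => 'REPORTING',
    comment           => 'Reporting share and cap',
    mgmt_p1           => 25,
    utilization_limit => 50);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'OTHER_GROUPS',  -- a directive for OTHER_GROUPS is required
    comment          => 'Everything else',
    mgmt_p1          => 75);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- Allow the user to be placed in the group
BEGIN
  DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP(
    grantee_name   => 'RPT_USER',
    consumer_group => 'REPORTING',
    grant_option   => FALSE);
END;
/
```

The plan takes effect once RESOURCE_MANAGER_PLAN is set to DAYTIME_PLAN; without a utilization_limit, REPORTING could consume all idle CPU beyond its 25% share.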
Parallel Statement Queueing

Parallel statement queueing is used to increase overall throughput by allocating more resources to queries in turn, rather than spreading those resources across multiple sessions simultaneously. In other words, this reduces concurrency (through queueing) but increases overall throughput. Parallel Statement Queueing is a feature of Oracle Database Resource Manager, and database administrators can easily see how it is performing through Oracle Enterprise Manager (OEM). Each consumer group is allowed a certain share of Parallel Query resources. Parallel Statement Queueing (if configured) will queue statements if the Consumer Group exceeds the provisioned amount of Parallel Servers.

Runaway Query Management

Runaway queries are managed by setting resource limits and taking action when those limits are exceeded. The actions to take when resource limits are exceeded include the following:

- Switch Group
- Cancel SQL
- Kill Session
- Log Only

Sessions can be switched into a lower consumer group if the session runs a query exceeding the resource limits. Cancelling SQL statements or killing sessions is a much more drastic measure, but might be appropriate in an ad-hoc query environment. The "log only" action is useful when investigating the design of a runaway query management strategy. Automatic Consumer Group Switching can be used to downgrade the priority of queries that exceed resource consumption rules. The following graphic shows an example of this approach:

Database Resource Shapes

Consumer Groups are assigned a share or percentage of resources within a database, while resource shapes are applied to each database within a server, virtual machine, or cluster. Resource Shapes define the overall resources allocated to a particular database. It is important to note that while Resource Shapes are typically used in database consolidation environments, they can also be used to prevent a single database from overrunning dedicated resources.
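A directive that combines Parallel Statement Queueing limits with a runaway-query switch action might look like the following hedged sketch. It uses DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE against an assumed pre-existing plan and group; 'DAYTIME_PLAN', 'REPORTING', and 'LOW_PRIORITY' are hypothetical names, and the thresholds are illustrative:

```sql
-- Hedged sketch: add PQ limits and runaway handling to an existing directive.
-- Assumes plan DAYTIME_PLAN, group REPORTING, and group LOW_PRIORITY already exist.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
    plan                         => 'DAYTIME_PLAN',
    group_or_subplan             => 'REPORTING',
    new_parallel_degree_limit_p1 => 8,     -- cap the DOP of any one statement
    new_parallel_server_limit    => 25,    -- queue statements beyond 25% of PQ servers
    new_switch_group             => 'LOW_PRIORITY',  -- or 'CANCEL_SQL', 'KILL_SESSION', 'LOG_ONLY'
    new_switch_time              => 300,   -- act after 300 seconds of execution time
    new_switch_estimate          => TRUE,  -- also act on the optimizer's runtime estimate
    new_switch_for_call          => TRUE); -- return to the original group after the call
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

Starting with a 'LOG_ONLY' switch group is a low-risk way to observe which statements would be affected before enforcing switches or cancellations.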
Database Resource Shapes are outlined in the Oracle Maximum Availability Architecture (MAA) Best Practices for Database Consolidation white paper available here: https://www.oracle.com/technetwork/database/availability/maa-consolidation-5648225.pdf

Summary

Oracle Resource Manager provides the most flexible method for managing system resources across databases, as well as within databases using Consumer Groups. Resources can be easily allocated and managed at any level required to meet business needs. Exadata I/O Resource Manager (IORM) is fully integrated with Oracle Database Resource Manager (DBRM) to provide a single set of integrated controls that govern use of system resources. Resource Management is designed to prevent the "Noisy Neighbor" problem in multi-user and multi-database (consolidation) environments. Resource Manager also guards against over-use of resources even in dedicated environments, providing improved system stability and availability.

See Also...

- Resource Manager Demo #1
- Resource Manager Demo #2


Exadata System Software Updates - March 2020

To give you a single place to keep up with Exadata technical news, we'll post Exadata System Software releases here on blogs.oracle.com/exadata as soon as they are available. This post details the March 2020 releases.

Exadata System Software Update 19.3.6.0.0

Exadata System Software 19.3.6.0.0 Update is now generally available. 19.3.6.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.3.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 19.3.5.0.0 was released in February 2020
- Critical Linux security updates

Customers reviewing Exadata Storage Software Release 19.3.0.0.0 should adopt 19.3.1.0.0 or higher. For further information, see:

- Exadata 19.3.6.0.0 release and patch (31027642) (Doc ID 2638622.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Exadata System Software Update 19.2.12.0.0

Exadata System Software 19.2.12.0.0 Update is now generally available. 19.2.12.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.2.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 19.2.11.0.0 shipped in February 2020
- Critical Linux security updates

19.2.12.0.0 is the best upgrade target for customers who want to move to Oracle Linux 7. Customers upgrading to Oracle Database 19c will have to upgrade Exadata to 19.2.X.0.0. For further information, see:

- Exadata 19.2.12.0.0 release and patch (31028685) (Doc ID 2638621.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Exadata System Software Update 18.1.26.0.0

Exadata System Software 18.1.26.0.0 Update is now generally available. 18.1.26.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 18.1.X releases.
It contains the following:

- Exadata Software updates for important bugs discovered after 18.1.25.0.0 shipped in February 2020
- Critical Linux security updates

18.1.26.0.0 is the best upgrade target for customers who want to continue to use Oracle Linux 6. 12.2.1.1.8, which was released in July 2018, is the last patch in the 12.2.x line, so customers should plan to update to 18.1.x.0.0.

Important considerations before upgrading in 18.x

Please refer to MOS Document 888828.1 for important considerations before upgrading to the latest Exadata System Software release. As stated there, before updating database servers (non-OVM and OVM domU) to Exadata 18.1.5 or higher, ensure ACFS drivers that support the most efficient CVE-2017-5715 (Spectre Variant 2) mitigation are installed. These updated ACFS drivers, required for Exadata versions >= 18.1.5.0.0 and >= 12.2.1.1.7, are included in the July 2018 quarterly database releases. Quarterly database releases from April 2018 and earlier still require a separate ACFS patch. See MOS Document 2356385.1 for details. MOS Document 2357480.1 discusses the performance impact of mitigation measures against CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 on Oracle Database, Oracle Exadata, and Oracle Zero Data Loss Recovery Appliance. Customers who have customized database servers by installing third-party kernel drivers must contact the third-party vendor to obtain updated drivers. See MOS Document 2356385.1 for details.

For further information, see:

- Exadata 18.1.26.0.0 release and patch (31027153) (Doc ID 2638620.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Huge thanks to our Exadata Sustaining Team, who provide these releases to protect and enhance the value of your Exadata investments. We are always interested in your feedback. You are welcome to engage with us via comments here.
Related posts Exadata System Software Updates - February 2020 (February 2020) Exadata System Software Updates - January 2020 (January 2020) What’s New with Oracle Database 19c and Exadata 19.1 (February 2019) Exadata System Software 19.1 (October 2018) Introducing Exadata Database Machine X8M (September 2019) Introducing Exadata Database Machine X8 (June 2019)  


Ten Tips for Database Performance Tuning (on Exadata, and in general) from Performance Expert Cecilia Grant

We've noticed several performance tuning sessions being presented and blogged about recently, so we thought we'd add to the collective by going to the source for the best database performance tuning techniques: the Exadata Engineering team (they do this for a living!). And there's no-one more qualified to give performance tuning tips than our own Cecilia Grant, presenter of the popular (yet blink-and-you've-missed-it) "Exadata Performance Diagnostics" theater talk from Oracle OpenWorld San Francisco last year. Cecilia is one of our resident performance tuning experts here at Oracle, and we've asked her for ten tips for database performance tuning. Here we go:

1. Identify a key application metric for measuring performance

When measuring performance, identify a key application metric that reflects how the application is performing, or that is a good proxy for the end-user experience. This metric should be external to the database: for example, orders processed per second, or the elapsed time of a batch job. Having an application-level metric allows you to take an objective measurement of whether or not performance is improving where it matters.

Based on the key metric, define the success criteria. This allows you to measure progress, and also tells you when you can stop. It is sometimes tempting to keep making changes, but if you've already achieved your goal and further changes will not make a material impact, stop. There should be only one key metric: if you have too many metrics, some may improve while others regress, and you won't be able to cleanly evaluate performance changes. If you must have multiple measurements, measure them in such a way that they are completely independent of each other and can be evaluated separately.

2. Define the performance problem, and understand its scope

It is critical to have a clear definition of the performance problem: specifically, what is slow, and how slow is it?
As part of defining the performance problem, also understand its scope. Is it limited to a set of queries, or a set of users? Or is it more widespread, affecting all users in the database instance, or perhaps even multiple databases? By understanding the scope of the problem, you can use diagnostic data that matches it: statistics for the entire system may not be relevant if the problem is limited to a few users.

Similarly, any solution should also match the scope of the problem. If the problem is limited to a few queries or a few users, then the solution should be focused on only those users or queries; for example, do not change an init.ora parameter that can potentially impact all users of the system.

3. Change one thing at a time

First of all, this assumes that the performance problem is reproducible to begin with, or at least reproducible within a known tolerance range. If the problem is not reproducible, you won't be able to measure the effects of any changes you make, so go back and structure a workload or test case in such a way that the results are reproducible.

Once you have a reproducible performance issue, change only one thing at a time, to help you identify whether or not each change helped. There may be times when you need to change multiple things in the interest of time (especially when you want to reduce required downtime or outages). In that case, change things that do not impact the same area and that can be evaluated separately. For example, you can implement a SQL profile that affects a single SQL statement alongside a mid-tier change that controls connection pooling behavior.

4. Performance tuning is an iterative process

Remember, performance tuning is an iterative process. People tend to get impatient and want to see immediate improvements, but realize that some changes you make may help, while others may not.
In some cases, there may be a bottleneck right behind the one you're fixing. This doesn't mean the fix doesn't help; it simply means further improvements are needed. Analyze the performance statistics provided by the database, the OS, and Exadata to determine where the bottlenecks are and which areas to improve. The key application metric is an overall measure to help you gauge performance, while the individual statistics are there to focus the tuning effort. This is also why defining success criteria up front, as stated in the first tip, is important: continuing to make changes after the goal has been met risks causing a regression.

5. Be familiar with performance reports, and when to use them

Take scope into account. The Oracle database provides a large number of performance statistics and performance reports. The AWR report is the most commonly used performance diagnostic tool, and there's actually a whole family of AWR reports (global, single instance, compare period, global compare period, AWR SQL report, PDB report). The AWR report tends to be useful when problems are widespread.

If the problems affect a subset of users or queries, the AWR SQL report, the SQL Monitor report, and even the SQL Details report tend to be extremely useful.

The ASH report can be used in either case: it has active session history for the entire instance or database, and it can also be filtered by specific users, SQL statements, or other dimensions of the session.

The Exadata sections available in the AWR report show the statistics as maintained and collected by the storage cells, which means they show activity on the storage cells regardless of instance or database. The scope of the Exadata statistics is therefore typically larger than the scope of the AWR report itself, and as such they are not included in the AWR PDB report or the AWR SQL report.

6. Understand performance data sources

The performance reports summarize performance data that is readily available in the database through dynamic performance views, also known as v$ views. There are different types of measurements and statistics exposed by these views:

a. Measured/counted - this includes most of what is exposed in AWR, including the Exadata information.
b. Derived (metrics) - these statistics are derived or calculated from the measured/counted statistics to produce per-second or per-transaction rates, and are typically exposed in v$*metric* views (such as v$sysmetric, v$sysmetric_history, etc.).
c. Sampled - such as ASH. Active sessions are sampled at regular intervals and can be used to determine how time in the database is spent. ASH is an extremely useful utility; however, because it is sampled, you need to use it wisely. Specifically, beware of "Bad ASH Math", as has been presented extensively by John Beresniewicz and Graham Wood.

On Exadata, there are additional data sources as well:

a. ExaWatcher - granular data (every 5 seconds), including both OS statistics and cellsrv statistics.
b. Cell metric history - both cumulative and derived (per second, per request) statistics, as measured by the cell software.

7. On Exadata, performance tuning methodologies do not change

Use DB time. Even though the Exadata statistics are exposed in the AWR report, confirm you're actually running into an I/O issue first, as reflected by the database wait events, before diving into the Exadata statistics.

Continue practicing good database design principles. We see more and more cases of applications that ignore them, as most of the time users can get away with it thanks to the smart features that Exadata provides. However, you will get more from your system if you follow good database design principles.
For example, just because it is running on Exadata doesn't mean everything should be a smart scan. Review your schema to see whether indexes or partitioning are appropriate based on the access patterns of your SQL statements. Also, having better storage on Exadata is not going to address issues that aren't storage related. Poor connection management or repeated parsing are application-level issues that will still occur even with a superior storage solution. Similarly, row-by-row processing (or, as coined by Tom Kyte, "row by row = slow by slow") has tremendous overhead compared to set processing, and is not necessarily going to be improved by a better storage solution.

8. Bad SQL is bad SQL

A badly written SQL statement will execute poorly regardless of the environment it's running on. The cost-based optimizer can only attempt to find the optimal execution plan for the SQL as it is written. Running a badly written SQL statement on Exadata may give you a boost in terms of offloaded processing or faster I/O, but it will still not perform as well as a well-written SQL statement with a good execution plan. For example, a SQL statement that accesses a table multiple times, when it could be done once, results in unnecessary extra processing. Another example is applications that execute SQL statements without filter predicates and instead rely on the application to filter data out. In such cases, it is better to reduce the data set being processed by the database up front, rather than relying on the application: the latter typically results in unnecessary processing of large data sets in the database.

9. Data-driven analysis

Oftentimes, we see users make changes based on gut feel, making assumptions about where the issues are without looking at the multitude of performance data available.
Or, users choose to ignore the available data, especially if it doesn't support what they believe is occurring. Instead, use the available data to formulate a hypothesis on why the performance problem is occurring, and then make your tuning recommendations based on that.

10. Don't reinvent the wheel, update the cart

Keep up with the new features that the database provides, and use them wisely rather than doing things the way they've always been done. For example, with Database 19c's real-time statistics and automatic indexing, the database can index itself and keep statistics current using machine learning, reducing the reliance on manual intervention such as locking statistics, using stored outlines, and so on. Similarly, if you have an opportunity to use a best-in-class solution from a vendor, such as Exadata for the Oracle Database, use it! The system has been designed by professionals; it removes a lot of the guesswork and reduces mundane tasks, allowing you to focus on higher-value projects and innovation.

So there you have it; we hope these 10 tips help. If you have questions or comments, Cecilia will be joining Gavin and Cris at the Exadata Office Hours chat in March, so make sure you register for that; there'll be a link to the recording there too if you miss it.
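As a footnote to tip 8, the overhead of row-by-row processing is easy to demonstrate even outside Oracle. Below is a minimal, hypothetical sketch using Python's built-in sqlite3 module, purely so the example is self-contained; the table name and row count are made up, and the absolute timings will vary by machine:

```python
import sqlite3
import time

# Hypothetical illustration of "row by row = slow by slow":
# load the same rows one statement at a time vs. in one set-oriented call.
rows = [(i, f"name-{i}") for i in range(50_000)]

def row_by_row(conn):
    cur = conn.cursor()
    for r in rows:                      # one statement dispatch per row
        cur.execute("INSERT INTO t VALUES (?, ?)", r)
    conn.commit()

def set_based(conn):
    # one batched call; the engine iterates the row set internally
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    conn.commit()

def timed(fn):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
    start = time.perf_counter()
    fn(conn)
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return elapsed, count

slow, n1 = timed(row_by_row)
fast, n2 = timed(set_based)
assert n1 == n2 == 50_000               # same result either way
print(f"row-by-row: {slow:.3f}s  set-based: {fast:.3f}s")
```

Runs of this sketch typically show the set-based version finishing noticeably faster, since each per-row call pays statement-dispatch overhead; against a real client/server database, where every row-by-row call is a network round trip, the gap is far larger.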


Exadata System Software Updates - February 2020

To give you a single place to keep up with Exadata technical news, we'll post Exadata System Software releases here on blogs.oracle.com/exadata as soon as they are available. This post details the February 2020 releases.

Exadata System Software Update 19.3.5.0.0

Exadata System Software 19.3.5.0.0 Update is now generally available. 19.3.5.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.3.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 19.3.4.0.0 was released in January 2020
- Critical Linux security updates
- Java Security Update (CPUJan2020)
- Updated ILOM firmware on X8-2 servers with the latest important fixes and security updates
- Updated Persistent Memory firmware on X8 servers

Customers reviewing Exadata Storage Software Release 19.3.0.0.0 should uptake 19.3.1.0.0 or higher. For further information, see:

- Exadata 19.3.5.0.0 release and patch (30886173) (Doc ID 2628344.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Exadata System Software Update 19.2.11.0.0

Exadata System Software 19.2.11.0.0 Update is now generally available. 19.2.11.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.2.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 19.2.10.0.0 shipped in January 2020
- Critical Linux security updates
- Java Security Update (CPUJan2020)
- Updated ILOM firmware on X8-2 servers with the latest important fixes and security updates

19.2.11.0.0 is the best upgrade target for customers who want to move to Oracle Linux 7. Customers upgrading to Oracle Database 19c will have to upgrade Exadata to 19.2.X.0.0. For further information, see:

- Exadata 19.2.11.0.0 release and patch (30886159) (Doc ID 2628343.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Exadata System Software Update 18.1.25.0.0

Exadata System Software 18.1.25.0.0 Update is now generally available. 18.1.25.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 18.1.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 18.1.24.0.0 shipped in January 2020
- Critical Linux security updates
- Java Security Update (CPUJan2020)

18.1.25.0.0 is the best upgrade target for customers who want to continue to use Oracle Linux 6. 12.2.1.1.8, which was released in July 2018, is the last patch in the 12.2.x line, so customers should plan to update to 18.1.x.0.0.

Important considerations before upgrading to 18.x

Please refer to MOS Document 888828.1 for important considerations before upgrading to the latest Exadata System Software release. As stated there, before updating database servers (non-OVM and OVM domU) to Exadata 18.1.5 or higher, ensure ACFS drivers that support the most efficient CVE-2017-5715 (Spectre Variant 2) mitigation are installed. These updated ACFS drivers, required for Exadata versions >= 18.1.5.0.0 and >= 12.2.1.1.7, are included in the July 2018 quarterly database releases. Quarterly database releases from April 2018 and earlier still require a separate ACFS patch. See MOS Document 2356385.1 for details.

MOS Document 2357480.1 discusses the performance impact of mitigation measures against CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 on Oracle Database, Oracle Exadata and Oracle Zero Data Loss Recovery Appliance.

Customers who have customized database servers by installing third-party kernel drivers must contact the third-party vendor to obtain updated drivers. See MOS Document 2356385.1 for details.

For further information, see:

- Exadata 18.1.25.0.0 release and patch (30886149) (Doc ID 2629373.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Huge thanks to our Exadata Sustaining Team, who provide these releases to protect and enhance the value of your Exadata investments. We are always interested in your feedback. You are welcome to engage with us via comments here.

Related posts

- Exadata System Software Updates - January 2020 (January 2020)
- What's New with Oracle Database 19c and Exadata 19.1 (February 2019)
- Exadata System Software 19.1 (October 2018)
- Introducing Exadata Database Machine X8M (September 2019)
- Introducing Exadata Database Machine X8 (June 2019)


Exadata System Software Updates - January 2020

Welcome to 2020! I can't believe we're 1/12th of the way through already... To give you a single place to keep up with Exadata technical news, we'll post Exadata System Software releases here on blogs.oracle.com/exadata as soon as they are available. This post details the January 2020 releases.

Exadata System Software Update 19.3.4.0.0

Exadata System Software 19.3.4.0.0 Update is now generally available. 19.3.4.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.3.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 19.3.3.0.0 was released in December 2019
- Critical Linux security updates
- WebLogic Server security updates
- Updated ILOM firmware on X2, X4, X5 and X7 servers with the latest important fixes and security updates
- New firmware update for M2 SSD 240GB drives

Customers reviewing Exadata Storage Software Release 19.3.0.0.0 should uptake 19.3.1.0.0 or higher. For further information, see:

- Exadata 19.3.4.0.0 release and patch (30752525) (Doc ID 2620008.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Exadata System Software Update 19.2.10.0.0

Exadata System Software 19.2.10.0.0 Update is now generally available. 19.2.10.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 19.2.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 19.2.9.0.0 shipped in December 2019
- Critical Linux security updates
- WebLogic Server security updates
- Updated ILOM firmware on X2, X4, X5 and X7 servers with the latest important fixes and security updates
- New firmware update for M2 SSD 240GB drives

19.2.10.0.0 is the best upgrade target for customers who want to move to Oracle Linux 7. Customers upgrading to Oracle Database 19c will have to upgrade Exadata to 19.2.X.0.0. For further information, see:

- Exadata 19.2.10.0.0 release and patch (30752520) (Doc ID 2619987.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Exadata System Software Update 18.1.24.0.0

Exadata System Software 18.1.24.0.0 Update is now generally available. 18.1.24.0.0 is a maintenance release that adds critical bug fixes and security fixes on top of 18.1.X releases. It contains the following:

- Exadata Software updates for important bugs discovered after 18.1.23.0.0 shipped in December 2019
- Critical Linux security updates
- WebLogic Server security updates
- Updated ILOM firmware on X2, X4, X5 and X7 servers with the latest important fixes and security updates
- Updated firmware for IB Controller versions beginning with X4 servers

18.1.24.0.0 is the best upgrade target for customers who want to continue to use Oracle Linux 6. 12.2.1.1.8, which was released in July 2018, is the last patch in the 12.2.x line, so customers should plan to update to 18.1.x.0.0.

Important considerations before upgrading

Please refer to MOS Document 888828.1 for important considerations before upgrading to the latest Exadata System Software release. As stated there, before updating database servers (non-OVM and OVM domU) to Exadata 18.1.5 or higher, ensure ACFS drivers that support the most efficient CVE-2017-5715 (Spectre Variant 2) mitigation are installed. These updated ACFS drivers, required for Exadata versions >= 18.1.5.0.0 and >= 12.2.1.1.7, are included in the July 2018 quarterly database releases. Quarterly database releases from April 2018 and earlier still require a separate ACFS patch. See MOS Document 2356385.1 for details.

MOS Document 2357480.1 discusses the performance impact of mitigation measures against CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 on Oracle Database, Oracle Exadata and Oracle Zero Data Loss Recovery Appliance.

Customers who have customized database servers by installing third-party kernel drivers must contact the third-party vendor to obtain updated drivers. See MOS Document 2356385.1 for details.

For further information, see:

- Exadata 18.1.24.0.0 release and patch (30752513) (Doc ID 2619988.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

That's all for now... Huge thanks to our Exadata Sustaining Team, who provide these releases to protect and enhance the value of your Exadata investments. We are always interested in your feedback. You are welcome to engage with us via comments here.

Related posts

- What's New with Oracle Database 19c and Exadata 19.1 (February 2019)
- Exadata System Software 19.1 (October 2018)
- Introducing Exadata Database Machine X8M (September 2019)
- Introducing Exadata Database Machine X8 (June 2019)


2019: A Year in Review for the Exadata Family

Back in 2018, we celebrated a major milestone, the 10th anniversary of Exadata. As 2019 comes to an end, I thought it would be timely to have a quick look back to see what the Exadata family has been up to in its 11th year. This year we celebrated another major undertaking: the release of two Exadata models in the same year. An amazing achievement! But I'm getting ahead of myself; let's start at the beginning.

In February, Exadata became the first platform to welcome Oracle Database 19c, the much anticipated long-term support version of the Oracle Database. Along with the supporting Exadata 19.1 software, version 19 enabled many new database and platform features, including some Exadata-only Machine Learning features such as Automatic Indexing and Real-Time Statistics.

Then, in March, Exadata X8 was launched, ushering in the latest Intel Cascade Lake CPUs, 14TB drives, and a new "Extended" Storage Server, enabling you to store historic and regulatory data without needing to offload it outside the database. This was a solid step forward in hardware performance that, together with Exadata 19.2, continued driving innovation across Analytics, OLTP, and Consolidation workloads, along with management and automation. And throughout the year, monthly software releases continued to ensure critical and security fixes were applied in line with Oracle's longstanding commitment to security.

Just six short months after the launch of X8, at Oracle OpenWorld San Francisco in September, Larry Ellison announced Exadata X8M, and the game changed again. After many years of close collaboration with Intel, Exadata X8M is the epitome of why so many customers trust and rely on Oracle for their database needs. Rather than accept the norm, Exadata Engineering spent the time working with Intel to investigate, research, and develop the best architecture to take advantage of this leading-edge technology.
The resulting combination of Persistent Memory, RDMA over Converged Ethernet network fabric, and Exadata 19.3 software saw a dramatic increase in real-world database performance.

Other members of the Exadata family had significant milestones during 2019 as well. Another announcement from Larry during #OOW19 was Oracle Generation 2 Exadata Cloud at Customer, which now leverages the same powerful and sophisticated control plane as the Oracle Public Cloud. This continues the push towards a simplified, agile, and elastic Oracle Cloud, with the power of Oracle Exadata hardware, managed by Oracle in your data center. Recovery Appliance also had a busy year, releasing ZDLRA X8 and ZDLRA X8M in step with its cousin Exadata, continuing the critical partnership that ensures the Oracle Database is safe and secure.

2020 will see additional features, enhancements, and innovations for the Exadata family of products, including more big news for Exadata Cloud at Customer, Exadata Cloud Service, Autonomous Database, and of course Exadata's flagship on-prem hardware, the Exadata Database Machine.

A big thank you to our Support, Services, Operations, Field, Marketing, Sales, and Engineering teams. A special thank you to our partners, and especially you, our customers, for a great and productive 2019. As we move into the new decade, and face new challenges and new technologies, we look forward to working and collaborating with you. Have a happy and safe holiday season. All the best for the new year. See you in 2020.


High Performance Database Deployment with Resource Manager

High performance database deployment relies on effective management of system resources. Only Oracle provides fully integrated management of system resources, including the Oracle database, server compute resources, and I/O, on the Exadata platform.

Demo Video

The Resource Manager demo video shows 4 pluggable databases (PDBs) within a container database (CDB) sharing a single Exadata quarter rack. The demo shows how Resource Manager controls how system resources, including CPU and I/O, are shared across databases. The full demo can be viewed here: https://youtu.be/yZtFBPIEdYk

Across Databases & Within Databases

The two major use-cases for Oracle Resource Manager are to manage resource allocations across databases and within databases. Across databases, Resource Manager ensures each database receives its allotted resources and prevents the "noisy neighbor" problem. Within a single database, Resource Manager uses Consumer Groups to prevent users or jobs from consuming too many resources.

Virtual Machines for Isolation

Oracle strongly recommends AGAINST using Virtual Machine (VM) technology as a resource management vehicle. VMs should be used for ISOLATION where necessary, whereas Resource Manager (as the name implies) is for resource management. Proliferation of VMs simply produces virtual sprawl, and Oracle Resource Manager is easier, more flexible, more dynamic, and more effective. Virtual Machines should be used where isolation is required for mission-critical applications, for Data Governance purposes where sensitive data requires greater security, and where maintenance schedules dictate isolation.

Demo Workload

The demo uses 4 databases running simulated workloads created using Swingbench from Dominic Giles. One of these databases (PDB1) runs a simulated Data Warehouse workload, while the remaining 3 databases (PDB2, 3, and 4) simulate OLTP workloads. The Swingbench tool can be downloaded from Dominic's web site: http://dominicgiles.com/swingbench.html

Container Database Home

The demo begins at the Container Database Home page, which shows activity on the container, including the 4 pluggable databases. Notice that all of the databases are consuming equal amounts of resource on the system, as shown in the consolidated (container-level) Active Sessions graph. This is because Resource Manager is constraining all of these databases to the same amount of resource. The CPU graph (in green at bottom left) peaks at approximately 40% CPU usage due to the Resource Manager plan.

High Scheduler Waits

Resource Manager is throttling these databases, so we see high waits for the "scheduler" in the Average Active Sessions detailed performance graph. This same wait event is shown as "resmgr: cpu quantum" in some tools, which simply means sessions are waiting for Resource Manager (resmgr) to allocate a "quantum" (or quantity) of CPU time. Based on this information, we know the users of these databases are requesting more resources than we have allocated to them.

Available System Resources

We can easily see that the system has sufficient free resources, including CPU and I/O, by looking in Oracle Enterprise Manager (OEM) as well as by using tools such as the Linux "top" command. This tells us that we can certainly allocate more resources to these databases to improve performance. Output from the top command shows 29.4% user, 4.6% system, and 64.7% idle time. This system clearly has extra capacity available.

OEM will show the same information, although OEM's collection interval means that the information is slightly delayed and is averaged over the collection period. There may be slight variations in the information displayed, but it's otherwise telling us the same story: this system has sufficient free capacity. OEM also shows us the I/O utilization on the system at the Exadata storage layer.
We see that the system is using a maximum of around 18% of Exadata I/O capacity. It's possible to closely manage CPU and I/O separately on an Exadata system, but in this demonstration we are using a SINGLE setting for both: the Exadata storage inherits the same percentage allocation of I/O resources used at the CPU layer. The tight integration of Database Resource Manager (DBRM) with the Exadata I/O Resource Manager (IORM) means that only a single setting is required to manage the entire stack of resources.

Workload Characteristics

This system includes 4 databases with different workload characteristics: a Data Warehouse (PDB1) and 3 OLTP databases (PDB2, PDB3, and PDB4). The Data Warehouse workload is clearly I/O bound, as we can see from the blue in its activity graph. The remaining 3 databases all have a similar workload profile that is more CPU bound, but with high waits on CPU, shown in orange.

Based on this information, we know the Resource Plan needs to be changed for this system. The Data Warehouse needs significantly more resources, including CPU, parallel servers, and I/O. We will also be over-provisioning the resources slightly to allow these databases to take advantage of quiet periods during the day or week.

Resource Plans

In this demo, we are using CDB Resource Plans to control the allocation of resources across databases. It's also possible to manage resources using the "Instance Caging" feature of Oracle Resource Manager, but that approach is not shown in this demo. Instance Caging is simply another way to denote how much resource is allocated to a particular database, using the CPU_COUNT setting. As with Resource Plans, IORM allocates the same percentage of I/O resources to each database based on the CPU allocation (whether specified through a Resource Plan or CPU_COUNT).
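To make the over-provisioning arithmetic concrete, here's a small, hypothetical Python sketch of the percentage math only (not actual DBRM/IORM behavior), using the allocations the demo activates: 70% for the Data Warehouse PDB and 20% for each of the 3 OLTP PDBs, 130% in total. When every PDB is busy, the effective split is normalized to 100%; during quiet periods, the active PDBs absorb the spare capacity.

```python
# Demo plan: 70% + 3 x 20% = 130% of capacity (over-provisioned on purpose).
plan = {"PDB1": 70, "PDB2": 20, "PDB3": 20, "PDB4": 20}   # percent of system

def effective_shares(plan, active):
    """Illustrative CPU split among active PDBs when demand exceeds capacity:
    each busy PDB's allocation is scaled so the shares sum to 100%."""
    total = sum(plan[p] for p in active)
    return {p: round(100 * plan[p] / total, 1) for p in active}

# All four PDBs busy: 130% of allocation squeezed into 100% of capacity,
# so PDB1 gets 70/130 of the machine and each OLTP PDB gets 20/130.
print(effective_shares(plan, ["PDB1", "PDB2", "PDB3", "PDB4"]))

# Quiet period: only PDB1 and PDB2 are active, so they absorb the
# capacity the idle PDBs aren't using.
print(effective_shares(plan, ["PDB1", "PDB2"]))
```

With all four PDBs active, the split works out to roughly 53.8% for PDB1 and 15.4% for each OLTP PDB; with only two active, PDB1 rises to about 77.8%. This is why an over-provisioned plan still behaves sensibly under full load while letting busy databases benefit from quiet periods.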
The video shows the currently active Resource Plan; we then activate a new Resource Plan that gives 70% of resources (CPU and parallel processes) to the Data Warehouse, and 20% to each of the remaining 3 OLTP databases. Of course, this means the system is over-provisioned at 130% of capacity. That over-provisioning means the databases can take advantage of quiet periods.

Immediate Impact!

Resource Manager changes take effect immediately. We can see this using a command such as "top", while OEM takes a while to reflect the changes. We could set a shorter OEM data collection interval, but this simply generates unnecessary overhead on the system. Looking at "top", we can see the system is now running at 65.9% user, 12.0% system, and 16.3% idle. The databases clearly started using more CPU as soon as the new plan was activated.

OEM Starts Showing Changes

Within about a minute, we start seeing the Resource Plan changes taking effect in OEM. The Cluster Database Home shows higher usage of CPU resources, and the detailed Activity Class graph for PDB1 shows the database waits on I/O starting to disappear; if we allowed this to run longer, we would see nearly all of the blue disappear from the graph. We see a similar impact on the OLTP databases (PDB2, PDB3, and PDB4), where the waits have started to lessen: much less orange in the graph, and more time "on CPU" shown in green. If we allowed the demo to run longer, you would eventually see most of the orange disappear as well. We decided that the Data Warehouse was more critical for the purposes of this demo, so these OLTP databases will still have some waits for CPU.

Scheduler Waits Almost Gone

Recall the wide band of "scheduler" waits from earlier in this demo.
Those waits have almost entirely disappeared, and the databases are making much better use of the available system resources.

I/O Resources Follow CPU Settings

As noted previously, the Exadata I/O Resource Manager (IORM) inherits the same percentage allocations set in the Database Resource Manager.  This applies to Resource Plans as well as Instance Caging (setting CPU_COUNT).  There is no need to set IORM allocations separately if you simply want to match the DBRM resource allocations; just set the IORM objective to "auto" for IORM to follow the DBRM percentages.

Conclusion

Managing resources is made simpler using the built-in resource management capabilities of Exadata, including the Oracle Database Resource Manager (DBRM) and its companion, the Exadata I/O Resource Manager (IORM).  This capability is much more effective, and avoids the heavy administrative burden of using Virtual Machines for the purpose of Resource Management.  Oracle recommends using VMs for purposes of ISOLATION where necessary, and Resource Manager where it fits best.  In short, Resource Manager is: Integrated, Dynamic, Easy to Manage.

Be sure to watch for additional content on this and other topics related to Oracle database and Exadata.

See Also...

Resource Manager Demo #2
Resource Manager Demo #3



Introducing Exadata X8M: In-Memory Performance with All the Benefits of Shared Storage for both OLTP and Analytics

Oracle Exadata Database Machine X8M has just been announced at Oracle OpenWorld by Larry Ellison (stay tuned for the link), and is available now. Building on the Exadata X8 state-of-the-art hardware and software, the Exadata X8M family adopts two new cutting-edge technologies: RDMA over Converged Ethernet (RoCE) network fabric, enabling 100 Gb/sec RDMA, and Persistent Memory, adding a new shared storage acceleration tier. Exadata X8M delivers record-breaking performance, attaining 16 Million OLTP read IOPS (8K I/Os) and OLTP I/O latency under 19 microseconds. It's not just the new components: Exadata's ability to evolve its architecture to incorporate new technologies, integrated with co-designed software enhancements, multiplies the value of the individual component technologies. In Exadata X8M, data movement is accelerated by faster RDMA (via RoCE) and a new persistent memory tier, using database-aware protocols and software.

RDMA over Converged Ethernet (RoCE) Network Fabric

Remote Direct Memory Access (RDMA) is the ability for one computer to access data on a remote computer without OS or CPU involvement: the network card directly reads/writes memory with no extra copying or buffering, resulting in very low latency. RDMA was introduced to Exadata with InfiniBand and is a foundational part of Exadata's high-performance architecture. RDMA enables several unique Exadata features, such as Direct-to-Wire Protocol and Smart Fusion Block Transfer. RDMA over Converged Ethernet (RoCE) is a set of protocols defined by an open consortium, developed in open source, and maintained in upstream Linux. RoCE's protocols enable InfiniBand RDMA software to run on top of Ethernet. This allows the same software to be used at the upper levels of the network protocol stack, while transporting the InfiniBand packets across Ethernet as UDP over IP at the lower level.
As the API infrastructure is shared between InfiniBand and Ethernet, all existing InfiniBand RDMA benefits are also available on RoCE, including over a decade's worth of performance engineering on Exadata.

Fig.1 Exadata X8M Shared Network Heritage

The Exadata RoCE network fabric provides transparent prioritization of traffic by type, ensuring the best performance for critical messages requiring the lowest latency. Low latency messages, such as cluster heartbeat, transaction commits, and cache fusion, are not slowed by higher throughput messages (such as backups, reporting, or batch messages).  The Exadata RoCE network also optimizes communications by ensuring packets are delivered on the first try, without costly retransmissions. It avoids packet drops by using RoCE protocols to manage traffic flow, requesting that the sender slow down if the receiver's buffer is full. Through smart Exadata System Software 19.3.0, Exadata X8M also practically eliminates database stalls by immediately detecting server failures. Server failure detection normally requires a long timeout to avoid false server evictions from the cluster, because it is hard to distinguish between a failed server and a slow heartbeat response due to a busy CPU. Exadata X8M Instant Failure Detection is not affected by OS or CPU response times, as it uses hardware-based RDMA to quickly confirm server response: four RDMA reads are sent to the suspect server across all combinations of source/target ports. If all four reads fail, the server is evicted from the cluster. If any port responds, the hardware is available, even if the software is slow.

Fig.2 Exadata X8M Instant Failure Detection

Persistent Memory Acceleration

Persistent memory is a new silicon technology, adding a distinct storage tier of performance, capacity, and price between DRAM and Flash. For the Exadata X8M release, 1.5 TB of persistent memory is added to High Capacity and Extreme Flash Storage Servers.
Persistent memory enables reads at memory speed, and ensures writes survive any power failures that may occur. In combination with the new RoCE 100 Gb/sec network fabric, smart Exadata System Software is able to fully leverage the benefits of persistent memory on remote storage servers via specialized data and commit accelerators.

Persistent Memory Data Accelerator

Exadata X8M Storage Servers transparently incorporate persistent memory in front of flash memory. This opens up the ability for Oracle Database to use RDMA instead of I/O to read remote persistent memory. By bypassing the network and I/O software, interrupts, and context switches, latency is reduced from 250 microseconds to less than 19 microseconds, an over-10x improvement. Adding persistent memory to the storage tier makes the aggregate performance across all storage servers dynamically available to any database on any server. And because only the hottest data is moved into persistent memory, better utilization of persistent memory capacity can be achieved. For fault tolerance, Exadata's smart software mirrors this cached data across the storage servers' persistent memory tier, protecting data from hardware failures.

Fig.3 Persistent Memory Data Accelerator

Persistent Memory Commit Accelerator

Consistent low latency for redo log writes is critical for the performance of OLTP databases. Transaction commits can complete only when redo logs are persisted, that is, permanently written to storage. With the persistent memory commit accelerator, Oracle Database 19c is able to directly place redo log records in persistent memory on multiple storage servers using RDMA. Because the database uses RDMA for writing the redo logs, redo log writes are up to 8x faster. And because the redo log is persisted on multiple storage servers, it is resilient. The persistent memory log on the storage server is not the database's entire redo log; it contains only the recently written records.
Therefore hundreds of databases can share a pool of buffers, enabling consolidation with consistent performance.

Fig.4 Persistent Memory Commit Accelerator

Security and management are automated. PMEMCache and PMEMLog (the software layers above the persistent memory hardware) are configured automatically on installation. Persistent memory is accessible only to databases using database access controls; no OS or local access is available, ensuring end-to-end security of data. Hardware monitoring and fault management are performed via ILOM and include the persistent memory hardware modules. And when storage servers are decommissioned or reinstalled, a secure erase is automatically run on the underlying persistent memory modules. This makes the addition of persistent memory in Exadata X8M effectively transparent.

Summary

Recall that Exadata X8, released in April 2019, hit 6.57 Million 8K read IOPS during benchmarking, breaking 6 Million IOPS within a single rack for the first time. This was a huge milestone, and confirmed that year over year Exadata continues improving the performance, cost-effectiveness, security, and availability of the highest performing, most strategic platform for Oracle Database. Exadata X8M continues this great tradition of seamlessly integrating new technologies, leveraging unique co-designed hardware and database-aware software, to further increase the advantages of the flagship platform for running the Oracle Database. While each new technology is cutting edge, over a decade of engineering excellence and thousands of engineer-years of experience ensure that your data is safe and secure, with consistently high performance and easy management, for the same price, all without requiring any application changes. Exadata X8M benefits OLTP, Analytics, and mixed workloads significantly.
For Analytics and mixed workloads, the combination of persistent memory and RDMA frees up CPU cycles on the storage servers, allowing more processing power for smart scan operations, and the 100 Gb/sec RoCE network gives higher net throughput for operations that need it. For OLTP, the near-instant retrieval of cached data from storage, the near-instant write of commit records, and the extreme increase in I/Os make Exadata X8M the best platform to run Oracle Database. For more information on Exadata X8M, see oracle.com/exadata.

Footnote

By the way, the (1) on the first image reads: "These are real-world end-to-end performance figures measured running SQL workloads with standard 8K database I/O sizes inside a single rack Exadata system, unlike storage vendor performance figures based on small I/O sizes and low-level I/O tools and are therefore many times higher than can be achieved from realistic SQL workloads. Exadata’s performance on real database workloads is orders of magnitude faster than traditional storage array architectures, and is much faster than current all-flash storage arrays, whose architecture bottlenecks on flash throughput."
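Stepping back to the Instant Failure Detection logic described earlier, the eviction decision can be sketched in a few lines. This is purely illustrative; the function and port names are assumptions, not Exadata's actual implementation:

```python
from itertools import product

def is_server_alive(rdma_read, src_ports, dst_ports):
    """Sketch of the idea behind Instant Failure Detection: issue an RDMA
    read over every source/target port combination, and declare the server
    dead only if *every* probe fails."""
    for src, dst in product(src_ports, dst_ports):
        if rdma_read(src, dst):   # hardware answered: the NIC is reachable,
            return True           # even if the OS or software is slow
    return False                  # all probes failed: evict from the cluster

# Two ports per side gives the four probes described in the post.
# Here rdma_read is a stand-in lambda simulating one responsive path.
print(is_server_alive(lambda s, d: (s, d) == ("src2", "dst1"),
                      ["src1", "src2"], ["dst1", "dst2"]))
```

The key property, as the post explains, is that the check depends only on the network hardware responding, so a busy CPU cannot cause a false eviction.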


Announcing Gen 2 Exadata Cloud at Customer

Oracle is pleased to announce immediate availability of Gen 2 Exadata Cloud at Customer.  Exadata Cloud at Customer has been one of Oracle’s most popular cloud services.  It provides the performance, scalability, elasticity, and security of on-premise Oracle Exadata, but in a cloud service.  It provides cloud economics, infrastructure managed by Oracle, and cloud automation, exactly like the related Exadata Cloud Service, but in your data center, behind your firewall. In this post, I am going to highlight what’s new in Gen 2 Exadata Cloud at Customer.  If you are not familiar with the previous Gen 1 Exadata Cloud at Customer service, you can find more information in the extensive data sheet here. Gen 2 Exadata Cloud at Customer brings new hardware, a new management infrastructure, new connectivity, and even a new database.  It is now based on Exadata X8, the latest version of the Exadata platform.  Management is new and simplified, with the Gen 2 public cloud managing your on-premise Exadata service.  We give you more control over connecting the system to your client and backup networks, and, lastly, provide full support for Oracle Database 19c.

New X8 Hardware

The new hardware brings the usual benefits: faster CPUs, more cores, and more storage.  Each compute server in Gen 2 Exadata Cloud at Customer now has two 26-core CPUs, with 50 cores available per server for user VMs (Quarter, Half, Full Rack; see Table 1 for more specifications).  This beats the 48 cores available for a bare metal on-prem system, ensuring you will have no problems migrating your workloads to Cloud at Customer.  These new Intel Cascade Lake CPUs feature in-silicon fixes for the Spectre and Meltdown vulnerabilities, eliminating the software workarounds required for older chips.  Just like in the X7 version, we connect those database servers to storage servers using a 40 Gb/sec InfiniBand fabric, providing high performance and low latency.
In the storage servers, we now include the latest 24-core Intel CPUs, a 140% improvement over the previous systems.  In fact, this is even 50% more cores than in on-premise Exadata.  Lastly, we bump the HDDs from 10TB to 14TB.  With more cores and more storage, now there’s no reason not to move to Cloud at Customer.

Table 1:  Gen 2 Exadata Cloud at Customer Shapes

New Management Infrastructure

A big change is the new management infrastructure.  With Gen 1, we shipped an entire rack of control plane equipment to your data center to manage the Exadata Cloud at Customer.  That took up a tile in your data center, and consumed power and cooling.  It was also billed as a separate infrastructure service and was thus a direct expense.  Now with Gen 2 Exadata Cloud at Customer, the management control plane is in the public cloud; there is no separate rack of management equipment.  What does this mean for you?  It means Gen 2 is simpler, deploys faster, and costs less.  You’ll get new features delivered faster, as you won’t need to wait for a complex control plane upgrade; that will just magically happen in the cloud.  If you are a hybrid on-prem cloud and public cloud customer, you will get an identical user experience across all your services.  Both use the same UI, same APIs and SDKs, and run the same database (the latter being true for all on-prem and Cloud database products and services).  As you transition between on-prem cloud and the public cloud, there will be nothing new to learn, and all your scripts and tools written for one environment should work in the other.  From the console where you manage your cloud resources, you’ll be able to see and manage both public cloud and Cloud at Customer services.  And, lastly, the new management infrastructure has finer-grained permissions and controls.  Customers sharing a cloud tenancy across multiple services and databases are now able to assign user roles and more effectively protect and isolate their resources.
One concern that always comes up when thinking about public cloud management of an on-premise system is “what happens if the internet is down?”  Be assured that we’ve identified critical operations like scaling, backup, and restore, and have built in local capability to perform those functions even if the internet is down.

Your Data is Secure

Is managing your system from the public internet secure?  Oracle security experts have gone to great pains to ensure the service is secure and your critical data is safe.  We establish a secure tunnel from the ExaCC to the public cloud, and all control plane traffic, telemetry data, and operations access flows through the tunnel.  Just as with previous versions, we isolate control and management network traffic from your data, and limit operations access to only the underlying infrastructure.  Only you can access the database VMs and the databases running inside them, including the transparent data encryption keys.

Easier Connections to Your Network

We’ve simplified connecting the Gen 2 Exadata Cloud at Customer to your client and backup networks.  You can directly connect from your switches to the database servers via layer 2.  This gives you the flexibility to run VLANs directly from your application servers to the database servers, or to independently route client and backup traffic to different switches.  All servers support your choice of 10Gb copper or 25Gb fiber network connections.

Support for Oracle Database 19c

One more important new feature is support for Oracle Database 19c.  This database version will have long-term support from Oracle, and many customers will want to move from other long-term supported versions like 11.2.0.4 and 12.1.0.2 directly to Oracle Database 19c.  The creation and lifecycle management of Oracle Database 19c is fully supported by Gen 2 Exadata Cloud at Customer through its cloud automation.
Of course, Gen 2 Exadata Cloud at Customer also provides full support for older supported releases, including 11.2.0.4, 12.1.0.2, 12.2.0.1, and 18c.

Autonomous Ready

There’s one more key feature of Gen 2 Exadata Cloud at Customer: it will have the ability to run Oracle Autonomous Database Cloud at Customer when available in the future.  This brings all the benefits of Autonomous Database Dedicated to your data center.  It will fully automate and manage the database VMs and the databases running in those VMs, providing a self-driving, self-securing, and self-repairing database behind your firewall. Now in its second generation, Gen 2 Exadata Cloud at Customer extends the benefits of the cloud to databases that are restricted to your data center.  With improvements in hardware, manageability, and database support, the new generation of Cloud at Customer is ready for your most critical workloads.  It’s also ready for the future, and will support Autonomous Database Cloud at Customer when available.  Find out more at http://oracle.com/exadata.

Are you looking for more Gen 2 ExaC@C blog posts?

Gen 2 Exadata Cloud@Customer New Features: Shared ORACLE_HOME (June, 2020)
Gen 2 Exadata Cloud@Customer New Features: DB Sync Part-1 (June, 2020)
Gen 2 Exadata Cloud@Customer New Features: Scaling OCPUs Without Cloud Connectivity (June, 2020)
Gen 2 Exadata Cloud@Customer New Features: Multiple VMs per Exadata System (June, 2020)


Win a VIP Ticket to Oracle Openworld's CloudFest.19

Want to win a VIP ticket to the #OOW19 CloudFest.19 concert featuring John Mayer and Flo Rida? Be the 1st to take a selfie with me, @GavinAtHQ, in front of the Exadata at the demo booth (ADB-008) and post it on Twitter to win. Don't forget to tag #ExadataContest #OOW19Contest and @Intel @OracleExadata @GavinAtHQ @oracleopenworld. Big thanks to Intel for providing the VIP ticket! See terms and conditions below.

Terms & Conditions:

PRIZE: The prize will be one (1) VIP ticket to CloudFest.19.
GIVEAWAY PERIOD: The giveaway period begins on Monday 16th at 9:00am and finishes on Wednesday 18th at 6:00pm.
ELIGIBILITY: Must be a full delegate of Oracle Openworld 2019 and be physically present at the Exadata demo booth to be eligible to win.  No purchase necessary. Must be 18 or older.  Government employees, Oracle employees and their family members, and residents of Quebec, Italy, and countries under US embargo are not eligible to participate. There will only be 1 prize awarded.
WINNER SELECTION: The winner will be the first eligible delegate to post a selfie taken with Gavin Parish, Exadata Product Manager (@GavinAtHQ), in front of the Exadata machine at demo booth ADB-008 of Oracle Openworld 2019, using the hashtag string "#ExadataContest #OOW19Contest" and tagging the twitter accounts @Intel @OracleExadata @GavinAtHQ @OracleOpenworld.
WINNER NOTIFICATION: If I hand you the ticket, you’ve won. No discussion will be entered into; there is no second prize.
OTHER RULES: Not transferable for cash; one winner only. @GavinAtHQ is not responsible for delegates’ actions at CloudFest.19.


A Simple Guide to Exadata at Oracle Openworld 2019

There's a lot going on about Exadata at Oracle OpenWorld this September in San Francisco! An Exadata keyword search in the Session Catalog returns several dozen curated sessions where you can learn from and interact with a range of experts, including customers, partners, and consultants, as well as product managers and developers. In this blog post we highlight the Exadata sessions from Oracle Development experts: the product managers and developers who define, design, and implement Exadata software and hardware. Sessions range from technology updates to tips and tricks to best practices, and include panel discussions on Exadata present and future. Even our suggested list of sessions is long, so this graphic might help:

Monday

Brian Spendolini and Sravan Sunkaranam review the DOs and DON'Ts of Exadata Cloud, with Edson Morales reporting on Office Depot's experiences with Exadata Cloud Service. CON5226: Oracle Database Exadata Cloud Do's and Don'ts (10:00 AM, Room 156A).  Then, just before the Monday keynote, Cecilia Grant from Exadata Engineering has a 20-minute session at the Cloud Mini Theater (near the Exadata Showcase) on performance tuning the Oracle Database on Exadata. THT6610: Exadata Performance Expert Tips and Tricks (2:15 PM, Cloud Infrastructure Theater).

Tuesday

On Tuesday, we have our senior technical management on the #ExadataPowerHours! Juan Loaiza (EVP, Systems Technology) will discuss the Exadata strategy and roadmap for new technologies, Cloud, and On-Premises; this is where Juan announces what's coming for Exadata. PRO4859: Exadata: Strategy and Roadmap for New Technologies, Cloud, and On-Premises (3:15 PM, Room 215/216). Stay in the room, as Kodi Umamageswaran (SVP, Exadata Development) does a deep dive of the Exadata architecture and internals, and elaborates on Juan's announcements. TRN4863: Exadata: Architecture and Internals Technical Deep Dive (4:15 PM, Room 215/216).
If you're interested in Exadata in the Cloud, then don't miss Jeffrey Wright and Rachna Thusoo presenting the fundamentals of the next-generation Exadata Cloud at Customer. CON5222: Next-Generation Oracle Exadata Cloud at Customer 101 (3:15 PM, Room 155D).

Wednesday

Kodi takes the stage again on Wednesday morning, this time with Shasank Chavan (VP, In-Memory Technologies). They'll be discussing the potent combination of Database In-Memory technologies on Exadata. CON4752: Oracle Database In-Memory Meets Exadata: Extreme Performance Unleashed (9:00 AM, Room 211).  Brian's also back first thing Wednesday morning, this time going deeper into the technology innovations behind Exadata Cloud. TRN4861: Oracle Database Exadata Cloud Service: Technical Deep Dive (9:00 AM, Room 215/216). Right after Brian, Manish takes the stage to dive deeper into the next-generation Exadata Cloud at Customer. TRN4862: Oracle Exadata Cloud at Customer: Technical Deep Dive and What's Next (10:00 AM, Room 215/216). Michael Brown also takes the stage with Vodafone to discuss Vodafone's global deployment of Exadata in support of their Oracle database standardization; Stephen Bendall and Felipe Viveros join Michael on stage. CON6491: Vodafone’s Experience with More Than 90 Exadata Systems (10:00 AM, Room 151D). After all this, you'll need some lunch, but make sure you're back to listen to Michael Nowak from the Oracle MAA Solutions team, who will go through Exadata MAA best practices and recommendations. TRN4868: Exadata: Maximum Availability Best Practices and Recommendations (12:30 PM, Room 215/216). Then you can see me (Gavin Parish) in a quick 20-minute theater session on next-generation Exadata. THT6611: Next-Generation Exadata (4:00 PM, Cloud Infrastructure Theater). To round out the day, Brian's back (again! something tells me he loves his job!)
with fellow Product Manager Glen Hawkins and customer Pradeep Chowdhury from Kaiser Permanente to discuss MAA best practices for Exadata Cloud. TRN4848: Oracle Maximum Availability Architecture (MAA): Best Practices for Oracle Cloud (4:45 PM, Room 215/216).

Thursday

Make sure your alarm clock is set to loud on Thursday morning after the Wednesday party, so as not to miss Bob Thome and Lawrence To discussing best practices for Exadata Cloud deployments. PRO4864: Best Practices for Oracle Exadata Cloud Deployments (9:00 AM, Room 215/216). Then, Jia Shi joins me (Gavin) to talk about next-generation In-Memory technologies for Exadata. You don't want to miss the cool new architecture and technology innovations (although I may be biased...). PRO4865: Exadata: Next-Generation In-Memory Technologies (10:00 AM, Room 215/216). Competing for your time, when you should be in my session, is Jeffrey Wright, back to discuss data security in Exadata Cloud at Customer... (hmm, sounds important, I wonder if Jia can take our session by herself). PRO4867: Oracle Exadata Cloud at Customer: Data Security 101 (10:00 AM, Room 213). Rounding out the Exadata Product Management sessions for OOW19 this year is Markus Michalewicz, on stage with Mauricio Feria to discuss the most impactful features from Oracle Database 18c and 19c; tips and tricks aplenty in this session, I hear. TIP4855: Best Practices for the Most Impactful Oracle Database 18c and 19c Features (2:15 PM, Room 205). Also, don't forget to drop by the Engineered Systems Showcase to get your selfie with an Exadata machine and chat with Exadata experts. Hope to see you there!



Elastic Expansions Now Possible with Exadata Cloud at Customer 18.4.6

Exadata Cloud at Customer 18.4.6

We are pleased to announce the availability of Oracle Exadata Cloud at Customer release 18.4.6. This release introduces Elastic Database and Storage Expansion, a long-awaited feature for (Gen 1) Exadata Cloud at Customer. This elastic functionality gives new customers the option to order a custom-sized Exadata Cloud at Customer configuration that exactly meets their workload requirements, and allows existing customers to expand their current configurations with just the right type of resource (compute, storage, or both). Release 18.4.6 also enhances the VM Cluster Subsetting feature that was introduced in release 18.1.4.4. It is also the first release of Exadata Cloud at Customer that supports Exadata platform version 19.x, building a foundation for supporting Oracle Database 19c in the future. For complete information, please see the Exadata Cloud at Customer documentation. This post covers the highlights of the above marquee features, additional enhancements introduced with this release, and how customers can benefit from these features.

Elastic Compute Server and Storage Server Expansion

Feature Description: Elastic Database and Storage Expansion allows customers to add individual compute and storage servers on top of the standard Base System, Quarter Rack, and Half Rack shapes. The additions can be one or more compute servers, one or more storage servers, or a combination of both. The maximum number of compute servers allowed in the ExaCC cabinet is eight, and the maximum number of storage servers allowed is twelve.

Business Benefits: The Elastic Expansion feature will benefit both new and existing customers. New customers are now able to order custom-sized configurations that do not conform to the standard Base System, Quarter Rack, or Half Rack shapes.
For example, if a new customer's workload assessment reveals that they need 3 compute servers and 5 storage servers in their Exadata Cloud at Customer rack, they will be able to order exactly that configuration. This was not possible until today. Similarly, existing customers can now add as many individual storage and compute servers as needed, as long as they stay within the prescribed maximum limits.

VM Cluster Subsetting V2

Feature Description: The original VM Cluster Subsetting feature, introduced in Exadata Cloud at Customer release 18.1.4.4, allowed customers to create up to eight VM Clusters using a subset of the available compute servers in the rack. For example, with the original feature, a customer could create a three-node VM Cluster in a four-node Half Rack ExaCC. However, the customer was not able to modify an existing VM Cluster by adding or removing a node. VM Cluster Subsetting V2 in the 18.4.6 release allows customers to expand or reduce the number of compute nodes in an existing VM Cluster. Clusters created using the VM Cluster Subsetting feature utilize storage from all storage servers in the rack, spreading the I/O across all servers for faster performance.

Business Benefits: This feature offers customers a method to implement efficient consolidation while incorporating an isolation strategy. There are many reasons why customers may want to create clusters that are smaller than the total available compute servers in the rack. Customers may want to create a smaller cluster to host databases that have low resource and scalability requirements, or to host a smaller number of databases that require isolation from the rest of the workload. Each VM Cluster has dedicated client and backup networks, isolating network traffic between VM Clusters. Finally, resource requirements are constantly changing in production environments.
The ability to stretch or shrink an existing cluster by adding and removing nodes provides agility, allowing customers to ensure optimal utilization of available resources.

Exadata 19.2.4 and Oracle Linux 7.5 Support

Feature Description: The Exadata Cloud at Customer 18.4.6 release supports Exadata software 19.2.4. With this Exadata release, Oracle Linux 7.5 is the operating system installed on newly provisioned database servers and storage servers. Exadata software 19.x is also a requirement for creating Oracle Database 19c on Exadata.

Business Benefits: This release lays the foundation for 19c database support by supporting Oracle Linux 7.5.

Other Enhancements

Another key customer requirement we received was the ability to associate a specific network subnet with a specific cluster, generally driven by a business requirement to keep certain databases in a specific network. The 18.4.6 release introduces a feature to do exactly this. While creating a VM cluster, the Create VM Cluster dialog box now has a drop-down listing all available client networks; customers can select the network they wish to associate with the cluster being created. An additional enhancement introduced with the 18.4.6 release is the integration of Oracle Wallet with Exadata Cloud at Customer. This combination provides a greater level of security for the Exadata Cloud at Customer service, providing encrypted passwords for internal components. We encourage you to give these new features a try once your service is upgraded. We hope you will find they increase your overall agility, and enable you to better realize the benefits of Exadata Cloud at Customer. For more information, please see the Exadata Cloud at Customer documentation.
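To make the elastic expansion limits concrete, here is a small sketch (my own illustration, not the cloud control plane's actual validation logic) of checking an expansion request against the stated cabinet maximums of eight compute and twelve storage servers:

```python
MAX_COMPUTE_SERVERS = 8    # maximum compute servers per ExaCC cabinet
MAX_STORAGE_SERVERS = 12   # maximum storage servers per ExaCC cabinet

def expansion_is_valid(compute, storage, add_compute=0, add_storage=0):
    """Toy check: an elastic expansion request is valid as long as the
    resulting totals stay within the cabinet limits."""
    return (compute + add_compute <= MAX_COMPUTE_SERVERS
            and storage + add_storage <= MAX_STORAGE_SERVERS)

# The custom shape from the post (3 compute + 5 storage) fits the limits.
print(expansion_is_valid(0, 0, add_compute=3, add_storage=5))

# A 4-compute-node rack can still add 4 more compute servers, but not 5.
print(expansion_is_valid(4, 7, add_compute=4), expansion_is_valid(4, 7, add_compute=5))
```

The real service of course checks far more (networking, licensing, hardware availability); this only illustrates the published server-count limits.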



Scaling Database Hardware

I stumbled across this rather convoluted statement from a storage vendor: "The HBAs, HCAs, or NICs on a host must support the type of port (SAS, InfiniBand, iSCSI, or Fibre Channel) to which they connect on the controller-drive tray. For the best performance, the HBAs, HCAs, or NICs should support the highest data rate supported by the HICs to which they connect." It got me wondering: how scary can scaling database hardware get? And how do we ensure that (the inevitable) hardware obsolescence won’t leave us scouring online auction sites for that last compatible component? If you have to scale a bespoke configuration, I wish you luck. All I can suggest is test, test, test; have a valid backup; and keep a bulletproof back-out plan in case it all goes awry. If you’re on Exadata and need to scale, you’re in a much better situation, because you are part of a carefully curated ecosystem. All you need to decide is what you want to scale, which follows from your reasons to expand. For example, you may be running low on storage for your data warehouse. Or you may need more cores to consolidate another round of databases into your Exadata. You don't have to review and reassemble the entire stack of components, hoping for the best. Every Exadata configuration has been tested thoroughly by a team of top-notch engineers. The hardware has been assembled in a meticulously run factory that produces hundreds of identical configurations every month. And you have 24/7 access to a single point of response for any and every issue that may arise. To expand Exadata hardware, follow these steps (see diagram):

1. Start with what you have
2. Add Database and/or Storage Servers to meet your increased requirements
3. If a full rack is not enough, add another rack and repeat 2 (and maybe 3) until you get to where you need to be

There is no 4: I wanted to add that 2 and 3 scale linearly, which is why this is so easy. That's it, you're done.
The following sections elaborate how to expand Exadata in specific scenarios.

Database CPU Scaling

One of the most common expansion use cases is needing to increase the number of CPUs to run your database workload. There are a few ways this can be achieved:

Capacity on Demand - If you are running Capacity on Demand licensing on your Exadata with a subset of cores active, you can purchase more licenses and activate additional cores in your existing servers. Though a server reboot will be required, it can be done in a rolling fashion across the cluster to avoid database outages. See the documentation for details.

In-Rack Additional Database Servers - If you've already activated all cores on your database servers, there are a few paths you can take to add more CPUs, depending on your current configuration:

Eighth Rack - The Eighth Rack, our smallest footprint, consists of two single-CPU database servers plus storage servers. Expand via the Eighth Rack to Quarter Rack Database Server Upgrade, which adds a second CPU to each database server, doubling the available core count. The servers need downtime to add the hardware; however, in a cluster this can be done in a rolling fashion so your database doesn't see an outage. After this upgrade you have the equivalent of a pair of Quarter Rack database servers, and you can scale CPU further by adding additional database servers (see next item).

Quarter and/or Elastic Rack - Earlier generations of Exadata required predefined expansion configurations, i.e., eighth to quarter, quarter to half, half to full, with N database servers and M storage servers in each step up. As of X5 (and also available to X4 upgrades), scaling is more flexible: just add one or more additional database servers of the latest generation. These can then be used to rebalance existing workload and/or take on additional database workload.
Existing servers continue running and the new database servers are "hot-added" to the setup. There are a few prerequisites, e.g., ensuring you're running the latest Exadata System Software version (see the documentation). Unlike the quote at the top, there's no need to hunt for compatible NICs or figure out what firmware to use on the internal RAID controller: Exadata's smart System Software ensures the correct firmware is loaded, and compatibility between all components has been tested and verified.

Multi-rack - If you are at capacity within one rack, just add another rack. Any combination of Database and Storage Servers can be connected across the InfiniBand fabric.

CPU scaling using Capacity on Demand, add-on database servers, and multi-racking lets you scale the number of CPUs available for database workload from a minimum of two into the thousands.

Database Memory Scaling

Maybe you've got enough cores, but you want to squeeze in some extra PDBs and need more memory on the database server, or you realize you could give your application an extra shot in the arm by using Database In-Memory. The latest generations of Exadata use DDR4 memory and can scale up to 1.5TB per database node for the 2-socket variant, raising the memory-to-core ratio from a healthy 8 GB to 32 GB per core. For the 8-socket variant, the memory can be doubled from 3TB to a whopping 6TB, raising the ratio from 16 GB to 32 GB per core. In-Memory, here we come!

Again, how far you can go depends on your current configuration, but adding memory couldn't be easier. For Exadata X7 and X8, there's a single memory kit (12 x 64GB DIMMs) for all occasions; you just need to determine the number of kits.
The table shows how this works:

  What I have                                           What I want       What I add
  N Exadata X7-2/X8-2 database nodes, 384GB per node    768GB per node    N x 1 memory kit
  N Exadata X7-2/X8-2 database nodes, 384GB per node*   1.5TB per node    N x 2 memory kits
  N Exadata X7-2/X8-2 database nodes, 768GB per node    1.5TB per node    N x 1 memory kit
  N Exadata X7-8/X8-8 database nodes, 3TB per node      6TB per node      N x 4 memory kits

* If you are starting with Eighth Rack database nodes, you are limited to a maximum of 768GB per node.

Earlier generations have somewhat different memory kit configurations, but upgrading is still simple. As with previous hardware upgrades, individual server outages are required, but upgrades can be done in a rolling fashion to ensure continued database service.

Client Network Scaling

There are a few situations where additional physical network connections to the database server are required. Perhaps your new security policy requires that backup networks be physically isolated, or you are expanding into another network segment and don't want to disrupt existing VLAN setups. Whatever the reason, it is easy with Exadata. As described in the documentation, there is a free PCIe slot that can be used to expand the networking configuration. In Exadata X8, the options are:

- The Oracle Quad Port 10GBase-T card, which provides 4 x RJ45 ports of 10Gb
- The Oracle Dual Port 25Gb Ethernet Adapter, which provides 2 x SFP28 ports of 25/10Gb

Note: the free PCIe slot is not available in the Eighth Rack configuration.

The database server automatically detects the new card after installation and exposes the additional network interfaces. As with CPU scaling, the installation requires a database server shutdown, but again, rolling maintenance ensures no database outages.

Storage Capacity Scaling

Probably the most common scenario in scaling database hardware is adding storage.
I don't think I've ever met a customer who achieved a balance between incoming and outgoing data and thus avoided the need to add storage (if you have, please let me know via a comment below). Exadata's storage solution is second to none: it maintains truly linear performance when scaling up, and achieves massive compression (averaging around 10:1) with Hybrid Columnar Compression.

Adding storage capacity couldn't be easier (are you seeing a pattern here?). No matter what generation of Exadata hardware you have under support, you just add current-generation storage servers and either create a new ASM disk group or extend your existing ASM disk groups (see this MAA white paper for best practices), and keep on going. You can add storage via any current-generation High Capacity (HC), Extreme Flash (EF), or Extended (XT) storage server, per your cost and performance/capacity considerations.

As with database scaling, if you hit the physical limit of a rack, add a storage expansion rack to the network fabric. The Storage Expansion Rack (SER for short) is exactly that: an Exadata rack dedicated to storage. (It's pretty much an Exadata Elastic rack without the database servers, with the spine switch added in by default. See details here.)

Storage Server Memory Scaling

By the way, the memory kits mentioned above for database servers can also be applied to the X7 and X8 High Capacity (HC) and Extreme Flash (EF) storage servers, bumping the memory from 192GB to 768GB per server. Why scale storage memory? Because the storage server software can put the additional memory to work for performance.

Exadata Scales for Best Performance and Reliability

Exadata is engineered for maximum flexibility, and its technical rigor delivers legendary performance and reliability. While it is simple to flexibly add capacity as demand requires, Exadata also ensures that during hardware expansion, replacement, and maintenance, the system remains up, with minimal performance impact.

We are always interested in your feedback.
You are welcome to engage with us via Twitter @GavinAtHQ or @ExadataPM and by comments here.
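The capacity-on-demand and storage steps above can be sketched from the command line. This is a hedged sketch, not step-by-step instructions: the core count, cell IP pattern, and disk group names are hypothetical, and the exact procedure for your generation is in the Exadata documentation.

```shell
# Capacity on Demand: activate more cores on a database server (run via
# DBMCLI on each node, rolling, one node at a time). Values illustrative.
dbmcli -e "LIST DBSERVER attributes coreCount, pendingCoreCount"
dbmcli -e "ALTER DBSERVER pendingCoreCount = 24"
# The new count takes effect after the server reboots; the rest of the
# cluster keeps serving the database while this node restarts.

# Storage scaling: after new storage servers are configured, extend an
# existing ASM disk group with their grid disks (names are hypothetical).
sqlplus / as sysasm <<'SQL'
ALTER DISKGROUP DATA ADD DISK 'o/192.168.10.*/DATA_CD_*_cell15'
  REBALANCE POWER 32;
SQL
```

ASM rebalances data onto the new disks online, so the databases keep running throughout.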



Introducing Exadata Database Machine X8: The Foundation for Mission Critical On-Premises, Cloud, and Autonomous Databases

We are excited to introduce the Exadata Database Machine X8 family, the latest in over a decade of innovation integrating state-of-the-art hardware with revolutionary software advances that continue to redefine database technologies. Exadata X8 further improves on the performance, cost-effectiveness, security, and availability that enterprise customers require and expect from Exadata, with no change in price from the previous generation, and adds a lower-cost storage option.

The X8 hardware release updates the 2- and 8-socket database (compute) servers and the intelligent storage servers with the latest Intel chips and significant improvements in storage. It also adds a new lower-cost storage server that extends Exadata benefits to low-use data. Exadata Database Machine X8 is the new standard of excellence, delivering the fastest performance for all database workloads in the most efficient manner.

While this post focuses on Exadata hardware, Exadata's many software innovations, co-designed and integrated at all levels of hardware and OS, multiply Exadata's advantages way beyond what state-of-the-art hardware alone can deliver. For instance, Exadata System Software 19.1 includes Automatic Performance Monitoring that proactively detects and reports anomalies in CPU, memory, and networking. Exadata-enabled enhancements that deliver even better performance and availability for database in-memory functionality are another example of integrating software and hardware innovation. In 2018 alone, 20+ Exadata software innovations improved Oracle Database analytics, transaction processing, consolidation, and security, as well as management and upgrades. All new Exadata features and enhancements are available on all supported Exadata generations.

The remainder of this post elaborates on the new hardware capabilities, licensing, pricing, and software requirements.
You can also watch the X8 highlights (video, 7 min) or the full X8 overview (video, 23 min) presented by Juan Loaiza, the technical leader responsible for the Exadata family of products and cloud services.

Exadata X8-2 Database Server

Exadata X8 two-socket database servers use the latest twenty-four core Intel Xeon 8260 processors. These processors implement fixes for the Spectre and Meltdown security vulnerabilities, eliminating the slowdown due to software mitigations. Each database server's local storage is doubled from prior generations with four 1.2 TB HDDs. This 100% increase will especially benefit consolidated environments with multiple virtual machines or Oracle homes. Database servers ship with a default configuration of 384 GB DDR4 DRAM that can be expanded up to 1.5 TB, with memory installed in the factory or at the customer's premises. Ethernet connectivity can also be expanded by adding a quad-port copper 10 Gigabit per second card or an additional dual-port optical 25 Gigabit per second card.

Exadata X8-8 Database Server

Exadata X8 eight-socket database servers use the latest twenty-four core Intel Xeon 8268 processors. These processors also implement fixes in silicon to mitigate the Spectre and Meltdown vulnerabilities. Database servers ship with a default configuration of 3 TB DDR4 DRAM that can be expanded up to 6 TB per server and 18 TB per rack. While Exadata excels at database workloads such as transaction processing and data warehousing, the new Exadata Database Machine X8-8, with up to 18 TB of DRAM memory, can run the largest workloads, consolidate hundreds of databases, or run massive databases entirely in-memory.

Exadata X8-2 Intelligent Storage Server High Capacity (HC)

Exadata X8-2 HC intelligent storage servers adopt the latest 2-socket sixteen-core Intel Xeon 5218 processors, a 60% increase in processing available for Smart Scan compared to X7.
Each HC storage server has 12 Helium-filled 14 TB drives, with a total raw capacity of 168 TB, a 40% increase over X7. Each HC storage server also includes four 6.4 TB Sun Accelerator Flash F640 NVMe PCIe cards, for 25.6 TB of total flash cache.

Exadata X8-2 Intelligent Storage Server Extreme Flash (EF)

Exadata X8-2 EF storage servers adopt the latest 2-socket sixteen-core Intel Xeon 5218 processors, a 60% increase in processing available for Smart Scan compared to X7. Each EF storage server includes eight 6.4 TB Sun Accelerator Flash F640 NVMe PCIe cards, for a total of 51.2 TB flash capacity.

Introducing Exadata X8-2 Storage Server Extended (XT)

We are also adding a significantly lower-cost storage server to the Exadata family: Exadata X8-2 Storage Server Extended (XT). XT storage servers extend the benefits of Exadata to less-accessed data: historical data, data kept for compliance or regulatory purposes, and archival data. The XT server includes twelve 14 TB SAS disk drives with 168 TB total raw disk capacity, and achieves the same capacity at lower cost than High Capacity servers by forgoing flash drives and reducing processor sockets from two to one. Hybrid Columnar Compression is included, while licensing Exadata System Software is optional for the XT server, enabling customers to lower storage costs further for low-access data. XT servers can be added to any supported Exadata rack; a minimum of two servers must be added initially to each rack for normal redundancy, while three servers are required for high redundancy. We will cover XT servers at length in a dedicated post soon.

New Exadata X8-2 Storage Expansion Rack

The Exadata Storage Expansion Rack adopts the latest Exadata X8-2 HC and EF servers. Additional HC, EF, and XT storage servers can be added to any existing or new storage expansion rack.

New 14 TB Disk Swap Kits

Exadata X8 updates the disk swap kits to twelve 14 TB 7200 RPM drives, a 75% increase compared to the previous 8 TB disk swap kits.
The 14 TB disk kits support the X4, X5, X6, and X7 generations of HC intelligent storage servers.

Exadata X8 continues to lead in database performance

Exadata provides the best database performance by any and every measure, and continues to improve with each generation. A single X8 rack can achieve a record-breaking 560 GB/sec of data scanned from SQL, a 60% increase over X7. Exadata delivers 6.5 million 8K read IOPS and 5.7 million 8K write IOPS from SQL, and can sustain a throughput of 3.5 million IOPS while keeping I/O latency under 250 μsec. X8 also delivers 25% more IOPS per storage server than X7, reducing the cost of elastic configurations where storage I/Os are the bottleneck. And unlike storage arrays, Exadata performance scales linearly as more racks are added.

License cores as workloads require

Exadata X8 allows users to limit the number of active cores in the database servers in order to restrict the number of required database software licenses. At installation time, a minimum of 14 database server cores must be enabled on the X8-2 servers, and a minimum of 56 on the X8-8 servers, to limit Database and Options licensing costs. The reduction of processor cores is implemented during software installation using the Oracle Exadata Deployment Assistant (OEDA). The number of active cores can be increased at a later time, when more capacity is needed, but not decreased. The minimum number of active cores required is unchanged from Exadata X5, X6, and X7.

The increased capacity and higher performance of Exadata X8 come with no change in price compared to the X7 generation. The new Exadata X8-2 Storage Server Extended (XT) is priced at a 70% reduction from the standard Exadata intelligent storage servers.

Exadata System Software 19.2.0.0.0

The Exadata X8 family requires Exadata System Software release 19.2.0.0.0 at a minimum.
Exadata 19.2 builds on all the innovations delivered in Exadata 19.1 with additional software support for the new hardware servers and components.

Summary

Exadata continues to be the most strategic platform for Oracle. The new Exadata X8 is the most secure, most highly available, and fastest database platform ever built. With the combination of unique-to-Exadata functionality in Oracle Database 19c, Exadata X8 is the foundation for the Autonomous Database and the ideal platform for Oracle databases.

We are always interested in your feedback. You are welcome to engage with us via Twitter @ExadataPM and by comments here.



In-Memory and Exadata: A Potent Combination

The Oracle Database supports a broad range of data formats, access methods, and programming languages under common administration, security policies, availability, and transactional consistency, for all kinds of workloads. For analytical workloads especially, in-memory technologies deliver dramatic speed and space efficiencies, and Exadata goes further with unique in-memory enhancements. In this post we discuss two Exadata-only in-memory features, including short videos where Shasank Chavan (VP, In-Memory Technologies) explains how each works.

In-Memory Availability and Performance

Data stored in in-memory columnar (IMC) format is simply a columnar representation of data also stored in the traditional row format, and therefore enjoys the protections afforded regular database blocks, as the IMC store can always be reconstructed from the relevant blocks should the need arise. However, some applications cannot tolerate the performance impact of re-populating a node's IMC store, for example after a node crash. Exadata offers the ability to specify that columnar data be duplicated to another database node, so that if a node crashes, the application is transparently redirected to a surviving node. In this 3 min video Shasank Chavan explains the In-Memory fault tolerance features unique to Exadata. Duplicate mode provides for some or all of the data in a node's IMC store to be duplicated to another node. With full duplication, a table can be replicated in all the nodes, enabling even higher performance, as all access to that table is local to each node.

Standby Database Extends In-Memory Capacity

As a pillar of database high availability, Active Data Guard complements its protection role with the ability to run workloads on a standby database. Analytical reports are often offloaded to standby databases, making them good candidates for in-memory functionality.
Exadata enables populating the in-memory store of the standby database with content different from that of the in-memory store of its corresponding primary database. In this 2 min video Shasank Chavan discusses the Active Data Guard enhancements for In-Memory on Exadata, which add to the in-memory capacity of the primary by using the standby for different data.

Exadata Delivers the Best Database In-Memory Experience

The Oracle database offers class-leading in-memory functionality that coexists with and enhances all other database functionality, without any compromises. Because it is designed and engineered by the database team, Exadata fully integrates in-memory technologies to deliver functionality that is simply unattainable on general-purpose systems.

See also the In-Memory blog, the Maximum Availability Architecture blog, and our previous post on how high availability is built into Exadata. This paper discusses multimodel database management systems. For further technical material, see the In-Memory, Data Guard, MAA, and Exadata sites.

We are always interested in your feedback. You are welcome to engage with us via Twitter @ExadataPM and by comments here.
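The duplication and standby behaviors discussed above are controlled with Exadata-only clauses of the familiar INMEMORY syntax. A minimal sketch, run in SQL*Plus; the table and service names are hypothetical:

```shell
sqlplus / as sysdba <<'SQL'
-- Keep a second copy of this table's columnar data on another RAC node,
-- so a node failure does not force repopulation
ALTER TABLE sales INMEMORY DUPLICATE;

-- Or replicate the table into every node's IM column store for
-- node-local access everywhere
ALTER TABLE dims INMEMORY DUPLICATE ALL;

-- On Active Data Guard, populate different data on the standby by tying
-- population to the service where the table is accessed
ALTER TABLE history INMEMORY DISTRIBUTE FOR SERVICE reporting_svc;
SQL
```

On non-engineered systems the DUPLICATE clauses are ignored, which is exactly the Exadata-only distinction the post describes.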


How Oracle Builds Maximum Availability into Exadata

This post offers an insider's view of how the Exadata team thinks about high availability, with three examples of availability challenges and how Exadata addresses them. Each section below describes an availability challenge and includes a video where Michael Nowak (Architect, MAA) explains the solution and discusses how Oracle technical staff continually identify and address availability challenges.

Ensuring I/O Problems Do Not Affect Service Quality

Slow or hung I/Os and sick disks are a fact of life, and Exadata implements a range of machine learning and other techniques to identify and remedy problematic I/Os. This enables Exadata to maintain service levels in the face of these real-life problems. In this 4 min video, Michael Nowak explains how storage servers detect and cancel or repair slow and hung I/Os and confine sick disks, and how database servers cooperate with storage servers to deal with undetected issues via I/O latency capping.

Data Availability Requires Protecting Storage and Its Software

When thinking about protecting data and keeping it highly available, a first consideration is to introduce redundancy in how and where the data are stored. This addresses failure risks for individual (nonvolatile) storage devices (e.g., magnetic disks and flash memory). It is just as important to ensure the availability of the system managing the storage devices. As of Exadata X7, each storage server includes two redundant M.2 solid state drives to house the operating system and the Exadata storage server software. When needed, an M.2 drive can be replaced online, with redundancy provided via Intel RSTe RAID technology, while the storage server continues to service the application. In this 3 min video, Michael Nowak explains how this solution evolved and why it is an important improvement.
Beyond Software and Hardware Redundancy: Operator Error

Operator errors create challenges to availability beyond hardware and software remedies; for example, a data center operator may mistakenly remove a disk at a time when its removal would compromise storage redundancy. Starting with Exadata X7, storage servers include a "do-not-service" LED that warns datacenter personnel not to shut down a storage cell when doing so would compromise storage redundancy. In this 4 min video, Michael Nowak explains ASM disk partnering and how it drives this LED warning light.

Exadata: Built-in High Availability

For the Exadata product team at Oracle, high availability is a fundamental design principle and an ongoing commitment. Exadata embodies the leading edge of Oracle's Maximum Availability Architecture, a best-practices blueprint based on proven Oracle high availability technologies, end-to-end validation, expert recommendations, and customer experiences (see also the technical overview and MAA blog). You can learn more about Exadata here and of course by perusing this blog.

We are always interested in your feedback. You are welcome to engage with us via Twitter @ExadataPM and by comments here.
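Some of the health state behind these mechanisms is visible from the CellCLI command line on a storage server. A hedged sketch; the predicates are illustrative examples, not a complete health-check procedure:

```shell
# Disks the storage software has flagged as not in a normal state
# (e.g., confined or failed)
cellcli -e "LIST PHYSICALDISK WHERE status != normal DETAIL"

# Before taking a cell offline for maintenance, check ASM's verdict:
# any grid disk listed here cannot be safely deactivated right now
# without compromising redundancy
cellcli -e "LIST GRIDDISK ATTRIBUTES name, asmDeactivationOutcome WHERE asmDeactivationOutcome != 'Yes'"
```

The second check is the software counterpart of the do-not-service LED: both are driven by ASM disk partnering.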



What’s New with Oracle Database 19c and Exadata 19.1

Oracle Database 19c is now available on Oracle Exadata, in addition to LiveSQL.oracle.com. Database 19c is the final release of the Oracle Database 12c family of products and has been nominated as the extended support release (in the old version naming scheme, 19c is equivalent to 12.2.0.3). Though we encourage our Exadata customers to upgrade to the latest version of the Oracle Database to benefit from all our continuous innovation, we understand that customers may take up new versions more slowly. Release 19c comes with 4 years of premium support and an additional 3 years of extended support. As always, curious readers can refer to MOS Note 742060.1 for details on the support schedule. Dom Giles's blog discusses Database 19c in detail. This post focuses on the unique benefits of Database 19c on Exadata, the best platform for running the Oracle Database. Before that, it's useful to quickly go over some highlights of Exadata System Software 19.1, which is required to upgrade to Oracle Database 19c.

Exadata System Software 19.1 Highlights

Exadata System Software 19.1, generally available last year, was one of the most ground-breaking Exadata software releases to date (see post and webcast), and is required to run Database 19c on Exadata. Upgrading to Exadata 19.1 also upgrades the operating system to Oracle Linux 7 Update 5 in a rolling fashion, without requiring any downtime. The most popular innovation of Exadata 19.1 was Automated Performance Monitoring, which combines machine learning techniques with the deep lessons learned from thousands of mission-critical real-world deployments to automatically detect infrastructure anomalies and alert software or administrators so they can take corrective action.

Better Performance with Unique Optimizer Enhancements

For many years, optimizing the performance of the Oracle database required bespoke tuning by performance experts with the help of the Oracle optimizer.
Database 19c introduces Automatic Indexing, an expert system that emulates a performance expert. It continually analyzes executing SQL and determines which existing indexes are useful and which are not. It also creates new indexes as it deems them useful, based on the executing SQL and the underlying tables. To learn more about this unique capability, please read Dom's blog. Automatic Indexing continually learns and keeps tuning the database as the underlying data model or usage patterns change.

Some of the most critical database systems in the world run on Exadata. Tuning these critical systems requires capturing the most current statistics, but capturing statistics is a resource-intensive task that impinges on the operation of these systems. Database 19c solves this dilemma by introducing Real-Time Statistics: statistics can now be collected in real time as DML operations insert, update, or delete data.

More In-Memory Database Innovations

The future of analytics is in-memory, and Exadata is the ideal platform for in-memory processing. Exadata's unique capability to execute vector instructions against in-memory columnar formatted data in flash makes it possible to use In-Memory technology for all your data sets, not just the most critical ones. Every Database and Exadata software release continues to extend the capabilities of In-Memory technology on Exadata. Database 19c unlocks another unique capability for In-Memory: Memoptimized Rowstore - Fast Ingest. Some modern applications, such as Internet of Things (IoT) applications, need to process high-frequency data streams generated by a potentially large number of data sources (e.g., devices and sensors). The Memoptimized Rowstore - Fast Ingest feature enables fast data inserts into an Oracle database: "deferred inserts" are buffered and written to disk asynchronously by background processes.
This enables the Oracle database to easily keep up with the high-frequency, single-row data inserts characteristic of modern data streaming applications.

Summary

Enabled by unique-to-Exadata functionality, Oracle Database 19c delivers substantial performance and ease-of-management improvements for all workloads. Machine learning complements lessons from real-world deployments to monitor performance and provide safe and efficient optimizations. In-memory enhancements enable the best analytics functionality, and support for deferred inserts enables fast online operations.

We are always interested in your feedback. You are welcome to engage with us via Twitter @ExadataPM and by comments here.
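As a rough sketch of how the 19c features discussed above are switched on: the table name and column values below are hypothetical, and the exact prerequisites (licensing, platform) are in the 19c documentation.

```shell
sqlplus / as sysdba <<'SQL'
-- Automatic Indexing: let the expert system both create and use
-- indexes automatically
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Memoptimized Rowstore - Fast Ingest: mark a table for deferred,
-- buffered inserts
ALTER TABLE sensor_readings MEMOPTIMIZE FOR WRITE;

-- A streaming client then requests the fast-ingest path with a hint;
-- background processes persist the rows asynchronously
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO sensor_readings
  VALUES (1, SYSDATE, 20.5);
SQL
```

Real-Time Statistics needs no such switch; on Exadata it is gathered automatically during conventional DML.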


Exadata Cloud at Customer 18.1.4.2 and 18.1.4.4

Exadata Cloud at Customer Product Management is pleased to announce the immediate availability of Exadata Cloud at Customer release 18.1.4.4. This release follows release 18.1.4.2, introduced in October, and together they include important new features that provide an improved management experience and better control over resource utilization. For complete information, please see the Exadata Cloud at Customer documentation. This post covers the highlights of each release, and then discusses some of their benefits.

Release 18.1.4.2

- Assisted Patching of User Domain OS. You can now apply user domain (DomU) operating system patches to the compute nodes by using the Web-based user interface in the Oracle Database Cloud Service Console. This simplifies customer management of the database VMs.
- Monitoring and managing Exadata Storage Servers using the ExaCLI utility. You can use the ExaCLI utility to perform monitoring and management functions on the Exadata Storage Servers associated with your Exadata Cloud at Customer instance. This gives customers visibility into advanced metrics on their cell servers, and even finer-grained control of IO resource plans than is available via the service console.
- Support for Infrastructure/PaaS Split Subscriptions. This release introduces support for a new subscription model for Exadata Cloud at Customer. Under this model, you can subscribe to the Exadata Cloud at Customer infrastructure using a term-based subscription and enable the compute node OCPUs as required using universal credits. This model also removes the minimum OCPU threshold that previously existed and enables ultimate flexibility to manage OCPU usage and subscription costs.
- Shared Oracle Home Dropdown List. When creating a database that will use a shared home, you can now just select the shared home to use from a dropdown list. No need to copy and paste (or type) the name of the shared home you wish to use.
- Delegation of Administrative Privileges. This release enables additional roles to allow delegation of administration privileges. Any administrator with the role ExadataCM Service Based Entitlement Administrator is now allowed to manage the Exadata virtual machine clusters and manage service usage.
- Cloud Notification Service. This release extends integration with the Cloud Notification Service, which provides notification for events that affect the status of system components. This proactively alerts customers about issues with their service via the service dashboard or via email subscriptions.

Release 18.1.4.4 (there is no release 18.1.4.3)

- OCPU Oversubscription. CPU oversubscription enables you to allocate more virtual CPU cores to your VM clusters than the total number of physical CPU cores available to the service instance, which lets you better utilize compute node CPU resources shared across multiple VMs.
- VM Cluster Subsetting. You can configure a virtual machine (VM) cluster that contains a subset of the available compute nodes (database servers). This gives you more control over how CPU, memory, and local storage are used on an individual node, as they need no longer be allocated to every cluster in the machine.
- Bursting REST API. You can use the Oracle My Services REST API to perform management functions on the Exadata Cloud at Customer instance. Customers can now create automation that scales their service up or down depending on their specific requirements.
- Separation of Administrative Privileges. Service administration can now be assigned by service instance (machine), allowing multiple administrators to each have privileges only for the service instances they need to manage.

Some of the cool things you can do with the new functionality round out the remainder of this post. The additional management functionality enables you to better utilize your Exadata system, while simplifying its administration and use.
Consider the use case of a test/dev environment on a quarter, half, or full rack. Cloning and shared homes (you're going to love that new Shared Oracle Home Dropdown List) make it easy to create lots of test databases without consuming a lot of storage. But each database will consume resources, and without subsetting, it will consume memory and CPU from every node in the cluster. Does a developer need an 8-node test cluster to do development? Probably not, and with VM Cluster Subsetting, you can create a one- or two-node cluster for testing, consuming memory and CPU only from the nodes needed for testing. You can also deploy databases with large SGA requirements in their own subset cluster, freeing up resources on the other servers for other work.

Another benefit is improved isolation: you can deploy a set of databases for one department on a virtual cluster hosted on one set of physical nodes, and another set of databases for a different department on a virtual cluster hosted on a different set of physical nodes. You can now leverage the power of a larger half or full rack service, while still enforcing isolation requirements. Many customers have dedicated Exadata services supporting different user communities and would like to isolate the workloads using multiple virtual clusters, yet still share CPU resources across those communities to increase overall utilization. With OCPU Oversubscription, CPU is treated like a large resource pool, shared across all VMs, providing both isolation and resource sharing.

Another eagerly awaited feature is the ability to manage bursting. Using the Bursting REST API, customers can create their own automation to scale their service up and down based on usage metrics and policies. This makes it easier to use, and pay for, only those resources they require, ultimately reducing costs.
The new Infrastructure/PaaS Split Subscription model further enhances this benefit, as it eliminates any restrictions on scaling the service up and down as needs dictate. We encourage you to give these new features a try once your service is upgraded. We hope you will find they increase your overall agility, and will enable you to better realize the benefits of Exadata Cloud at Customer. For more information, please see the Exadata Cloud at Customer documentation.


2018: A Landmark Year for Exadata

As we celebrated Exadata’s 10th anniversary in 2018, Exadata’s ongoing commercial success and fast pace of innovation signal that this fabulous journey is just starting. This year Exadata continued to raise the bar for database availability, manageability, and performance. During 2018 Exadata gained Database 18c availability, Oracle Linux 7 support, and new machine learning capabilities to help automate system management. We also accelerated software releases to a monthly cadence for faster security and critical fixes, in line with Oracle’s longstanding commitment to security. In addition, Exadata Cloud got a major functionality bump with Exadata Cloud at Customer Release 18.1.4. Besides continual innovation in software, a key to Exadata’s success is our commitment to integrate and leverage new technologies. We look forward to embracing Intel’s groundbreaking Optane DC Persistent Memory in our Database and Exadata architecture, and in 2018 we announced a joint hardware beta with Intel that enables our customers to discover the potential of persistent memory for their own use cases. Our innovations are meaningful only insofar as they are proven in the field by real customers with real workloads, and our customers are adopting them at a steady pace, whether it be Exadata Snapshots, Database In-Memory, or Exadata Cloud. Exadata’s superior capabilities and value are validated by its deep and wide adoption in the marketplace. Today 3 out of 4 Fortune Global 100 companies run Exadata, and a quarter have already adopted Exadata Cloud; the ongoing addition of highly differentiated Autonomous services will further this momentum. 2019 will see more innovations in Autonomous Database and the release of Database 19c, the most advanced database in the world. And we will continue to bring more hardware and software innovations to Exadata, including persistent memory.
We couldn’t have accomplished all of this without the support and engagement of our customers, our partners, and our own field, support, and marketing teams. A big thank you from all of us at Oracle Development. We wish you happy holidays and all the best for the new year. See you in 2019.


Exadata Persistent Memory Accelerator: Partnering with Intel on Optane DC Persistent Memory

To address the increasingly complex needs around data-centric workloads, Oracle has partnered with Intel to bring solutions that can move, store, and process data better and faster than previously possible. Customers increasingly rely on Exadata as their database platform of choice as enterprise workloads continue to grow and requirements become more stringent. Since its inception 10 years ago, Exadata has continued to meet and exceed the demands of all database workloads, delivering performance, value, and reliability unmatched by any other solution. The advantages of Exadata continue to grow because, besides benefiting from improvements in the underlying hardware components, Exadata continues to develop and add smart database-aware software, and to adopt novel technologies as they become available. Engineering software and hardware together enables deep integration of novel technologies as soon as they meet Oracle’s stringent requirements. The Oracle team has seamlessly incorporated Intel® Optane™ DC persistent memory to develop the Persistent Memory Accelerator for Exadata. The Persistent Memory Accelerator provides direct access to persistent memory via RDMA. Workloads that require ultra-low response times, such as stock trading and IoT, can now operate on consistent data stored in the database with no application overhead. This innovation in Exadata extends the value of Intel® Optane™ technology beyond the fast caching and storage of today’s PCIe NVMe Flash storage, to a new, breakthrough persistent memory tier for larger, faster, data-center-scale processing. This close and fruitful technology collaboration between Intel and Oracle sets a new bar for platform innovation serving the unique requirements of data-centric workloads.
As our customers' workloads drive ever larger in-memory footprints and expect lower latency for transactional applications and real-time analytics, the addition of Intel® Optane™ DC persistent memory enables Exadata to meet and exceed their needs, by increasing the memory available for compute-oriented tasks, and by enriching the memory hierarchy of storage servers with RDMA-enabled Intel® Optane™ DC persistent memory for increased locality and direct access. Because all our innovations are meaningful only insofar as they are proven in the field by real customers with real workloads, we are inviting customers to partner with Intel and Oracle to evaluate Exadata with Intel® Optane™ DC persistent memory. Please fill out this form to register your interest.


Exadata Database Machine

Exadata System Software 19.1 – The Foundation for the Autonomous Database

Exadata System Software Release 19.1.0.0.0 is now generally available. This release lays the foundation for the Autonomous Database and improves every aspect of Exadata with 20+ features and enhancements. All new features and enhancements are available on all supported Exadata generations, protecting customer investment across all Exadata deployment models: on-premises, Cloud Service, and Cloud at Customer. In this post we outline key 19.1.0.0.0 enhancements for automated performance management, improved operations via Oracle Linux 7, and improved availability and security. We also discuss new improvements for even faster Smart Scan performance. Later posts will discuss individual features, and the impatient can also review the full documentation or tune in to this webcast.

Automated, Cloud-scale Performance Monitoring

The ability to automatically detect and report anomalies, so that software or people can take corrective action, is crucial to cloud-scale autonomous infrastructure. Release 19.1.0.0.0 automates monitoring of CPU, memory, I/O, file system, and network; to benefit, customers need only configure the notification mechanism. This automation combines machine learning techniques with the deep lessons learned from thousands of mission-critical real-world deployments. For example, Exadata can detect that hogging of system resources is affecting database performance, identify the culprit runaway process, and issue an alert, without any pre-existing setup.

Oracle Linux 7

With Release 19.1.0.0.0, both database and storage servers now run Oracle Linux 7 Update 5, which brings further performance and reliability improvements. Upgrading to Oracle Linux 7 does not require re-imaging the database servers, and can be done in a rolling fashion, without requiring any downtime.
Our customers can now easily upgrade to the most modern operating system for running the Oracle Database with no outage and no re-imaging, period. Oracle Linux 7 enables Exadata servers to boot faster, and it improves Ksplice and kexec, key enablers of nonstop Exadata operation. Also, with Oracle Linux 7, Exadata moves to chrony to further reduce jitter via better and faster time synchronization with NTP servers.

Even Higher Availability

For OLTP workloads, Release 19.1.0.0.0 makes the addition of new Flash devices to Exadata more transparent, for even higher availability. When a Flash card is replaced, I/O can be delayed as data must be read from disk. This delay is now mitigated by prefetching, in the background, the most frequently accessed OLTP data into the Flash cache. The key is using algorithms similar to the ones introduced for In-Memory OLTP Acceleration, which ensure that the blocks most important to the database are fetched first. This feature provides better application performance after the failure of a Flash device or a storage server by partially duplicating hot data in the Flash cache of multiple storage servers. It works for write-back Flash cache (as there is no secondary mirroring for write-through Flash cache), and it is useful for OLTP workloads, as it does not cache scan data.

Even Better Security

Release 19.1.0.0.0 security improvements include support for the Advanced Intrusion Detection Environment (AIDE), enhanced Secure Eraser, and the addition of access control lists to the Exadata RESTful service. See the documentation for the full list of security improvements. AIDE tracks and ensures the integrity of files on the system to detect system intrusions. Secure Eraser now supports erasing data disks only (i.e., excluding system disks), and erasing individual devices. On supported hardware, Secure Eraser is automatically started during re-imaging, which simplifies re-imaging without a performance penalty.
Also, when users specify the disk erasure method 1pass, 3pass, or 7pass, Exadata System Software 19.1.0.0.0 uses Secure Eraser if the hardware supports it, for a dramatically shorter erasure time, especially on today’s very large disks. Customers can now specify a list of IP addresses or IP subnet masks to control access to the Exadata RESTful service, or they can disable access altogether. Another security-related change is that Smart Scan and ExaWatcher processes now run under new, less privileged, operating system users. For details, see the documentation.

Even Faster Smart Scans

We call out further improvements to Smart Scan, a flagship innovation in the first Exadata Database Machine, as a good example of Oracle’s commitment to continuing technical innovation. Release 19.1.0.0.0 increases Smart Scan performance by enabling checksum computation at the column level. Checksum validation is key to Exadata’s comprehensive error detection, and it may happen during data storage or retrieval. For in-memory columnar format data in the Exadata Smart Flash Cache, the checksum is now computed and validated at the column level rather than for the full block, enabling two important optimizations.

Selective checksum computation: When Smart Scan reads a Flash block checksum, it now performs checksum verification only on the Compression Units (CUs) of the columns referenced by the scan, ignoring other columns in the cache line, reducing CPU usage. For example:

SQL> SELECT temperature FROM weather WHERE city = 'NASHUA' AND weather = 'SNOW' AND weather_date BETWEEN '01-JUN-18' AND '30-DEC-18';

Checksums are computed only on columns temperature, city, weather, and weather_date, even though the table may have many other columns.

Just-in-time checksum computation: Checksums are computed only when and if a column is processed.
For example:

SQL> SELECT temperature FROM weather WHERE city = 'REDWOOD CITY' AND weather = 'SNOW' AND weather_date BETWEEN '01-JUN-18' AND '30-DEC-18';

Since city = 'REDWOOD CITY' and weather = 'SNOW' returns no results, no checksum is computed for columns weather_date and temperature. Less data on which to perform checksums reduces CPU usage. This feature is automatically enabled when you configure the INMEMORY_SIZE database initialization parameter and upgrade to Oracle Exadata System Software release 19.1.0.0.0. For more details, see the documentation.

Summary

This post covered the main innovations and enhancements in the new Exadata 19.1.0.0.0 software release. You get everything outlined here, and more, just by upgrading your Exadata software, with no additional costs. Heck, you don't even need to do that yourself: Oracle Platinum Services or our Cloud Operations team will take care of it at no charge. We are always interested in your feedback. You are welcome to engage with us via Twitter @ExadataPM and by comments here.


Exadata Cloud for Small and Medium Businesses

Information Technology Requirements for Small and Medium Businesses

Like their large-company brethren, Small and Medium Businesses (SMBs) are concerned about profitability, growth, lowering costs, improving productivity, and business continuity (data protection, disaster recovery, etc.). While the scale of their IT is smaller than that of large enterprises, SMBs must increasingly meet the same stringent IT requirements expected of large enterprises. As SMBs cannot justify a complex IT organization (for example, most SMB IT organizations report to the CFO), they value solutions that meet their requirements with a small number of vendors and reduced management complexity. In addition, SMBs are increasingly considering cloud solutions because they enable SMBs to:

Pay for IT capabilities as needed, as operational expenditure (OpEx) instead of capital expenditure (CapEx)
Integrate with existing applications and the existing IT environment
Increase the productivity of their IT

The database is the cornerstone of IT, and Exadata is the best proven platform to support both mission-critical database workloads and database consolidation. Exadata Cloud offerings enable SMBs to take full advantage of Exadata for all their database needs: from mission-critical databases to database consolidation, as well as the management and provisioning of multiple "small" databases.

Exadata Cloud (Cloud at Customer and Cloud Service) Benefits

Exadata delivers performance, availability, scalability, and better manageability for all database workloads at lower total cost while delivering improved productivity to application users. Exadata Cloud benefits for SMBs include:

Pay for the capabilities you need as operational expenses (OpEx), keeping more capital for your core business. Exadata Cloud brings a much lower upfront cost than acquiring the hardware, enabling rapid adoption among this customer set.
Oracle’s Universal Cloud Credits with Exadata Cloud eliminate the core minimums associated with Exadata Cloud non-metered subscriptions. (In the non-metered model, Oracle required customers to subscribe to a minimum number of OCPUs per month; with Universal Credits there is no longer a minimum subscription.) This dramatically decreases the upfront cost. Additionally, compute cores can be expanded and contracted as needed with no cost penalty, enabling customers to match capacity to workload and optimize costs.

Integrate with your existing applications and IT environment. Because it delivers the Oracle Database, Exadata Cloud integrates seamlessly with existing applications and into the existing IT environment. Applications remain unchanged, because the Oracle Database is the same database whether on premises or in the cloud. What does change is that infrastructure work is now offloaded from the IT organization to Oracle. Oracle's Exadata Cloud delivers database services according to a Service Level Objective, and Oracle manages database servers, networking, and storage servers as part of Exadata. Exadata Cloud services are designed to keep the database service up and running through all events and updates so that you (the customer) can focus on value-added tasks, innovation, and running the business. No longer will IT need to respond to a flashing red light; for example, for Cloud at Customer you'll be notified of Oracle's need to access the machine in the event there is a component failure. If there is a need to take a database node offline, you'll be notified, and with RAC the service will continue to run.

Increase productivity by reducing your IT management workload. A hurdle in the SMB market with an on-premises solution was the need for the customer to administer and maintain Exadata and to develop those resources. With Exadata Cloud, Oracle administers and maintains the Exadata infrastructure, leaving the database administration to the customer.
Exadata Cloud optimizes SMB productivity by enabling existing DBAs to operate at higher efficiency and allowing those DBAs to focus on running the business and on innovation.

Customer Experience

Small and Medium Business customers that have adopted Exadata Cloud are able to obtain the same benefits that large businesses have experienced over the last decade with Exadata:

Performance and scale: meet growth demands, whether organic growth or growth through mergers and acquisitions
Lowest total cost of ownership: Exadata was often perceived as too expensive by SMBs
Lower IT costs through database consolidation and IT standardization
Productivity: Oracle runs the infrastructure; customers focus on the business and innovation
Database options: the ability to take advantage of all database options through PaaS subscriptions
Business continuity through high availability and disaster recovery
Security of Cloud at Customer behind your firewall

The move to a cloud consumption model for Exadata opens up Exadata’s ability to improve productivity and resource utilization to Small and Medium Businesses where Exadata might have been out of reach in the past. Find out more: Exadata overview, case studies (PDF) (University Salzburg; video: Orbitz, WestJet, Customers section here), business value (IDC paper), Exadata Cloud Service (video: Macy's, Forbes article: Irish Postal Services) and Cloud at Customer (video: Glintt), Exadata Database Machine technical overview.


Sizing for the Cloud: Optimize Exadata Cloud Costs Using Universal Credits

The Universal Credits model allows Oracle to dramatically alter cloud costs and deployment scenarios. Using Universal Credits, all OCPUs are treated the same, and all OCPUs can be elastically scaled to meet compute demand. This enables the most cost-effective Exadata Cloud deployments that Oracle has offered to date. Databases, including consolidated databases, have a typical workload that is much less than the peak workload. In this post we discuss how workload variability offers the opportunity for dramatic cost savings when using Universal Credits, with sizing appropriate to different scenarios.

Production Workloads

Corporations are busiest in the weeks approaching and just after the quarter close; retailers are busiest from October through January for the holiday season; manufacturers are busiest during a planning cycle. All businesses have peak activity periods. The rest of the time workloads remain at their typical non-peak level, but on-premises customers must still size their databases for the peak workload plus some extra safety buffer. As an example, say that a typical production workload requires up to 16 cores, with a peak workload of 32, plus 4 extra cores. This results in 36 cores provisioned and licensed when only 16 are needed the majority of the time.

Disaster Recovery

Assume that the customer provisions database disaster recovery (DR) at the same size as production: 36 cores. However, for the vast majority of the time the DR servers are applying redo from production, while waiting for an event that will shift the production workload to DR. Applying redo is a light workload compared to production, and the amount of compute capacity required is a function of the amount of redo. If the redo workload requires 6 cores, an additional 30 cores will be provisioned and licensed which will sit idle waiting for a rare DR event.
Development and Test (Dev/Test)

We see Dev/Test environments that vary in size from 1x to 10x the size of their corresponding production environment. Many Dev/Test activities do not exercise the full potential of the production environment, so compute resources are often idle. Dev/Test can often tolerate lower performance than production, as long as functionality is correct. The Dev/Test environment does require at least the peak-performance production size for full peak-performance testing, so let us assume 36 cores in this case. Note that the rest of the time fewer than 36 cores are required.

Elastic Sizing with Universal Credits

We bring together the different requirements outlined above in an example use case from a financial institution. They want to migrate from a commodity environment to a consolidated environment on Exadata Cloud at Customer. While the peak workload requires 164 OCPUs, the typical workload for 95% of the time is only 68 OCPUs. Said differently, in a month of 744 hours, 707 hours require 68 OCPUs and the remaining 37 hours require an additional 96 OCPUs (obtained as elastic OCPUs or bursting) to meet the peak workload requirement of 164 OCPUs. We can now compare sizing scenarios: sizing 68 OCPUs for the typical workload plus an extra 96 OCPUs for the 37 peak hours using Universal Credits results in a 49% decrease in price compared to 164 OCPUs for all 744 hours. The significant savings of elastic sizing with Universal Credits warrants a serious look at the Exadata Cloud deployment model.
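The OCPU-hour arithmetic behind this comparison can be sketched as follows. Note this models raw OCPU-hours only; the 49% figure quoted above reflects actual pricing, which depends on the applicable rate card.

```python
# Compare OCPU-hours for flat peak sizing vs. elastic sizing with
# Universal Credits, using the workload figures from the post.
HOURS_IN_MONTH = 744

flat = 164 * HOURS_IN_MONTH               # sized for peak, all month
elastic = 68 * HOURS_IN_MONTH + 96 * 37   # base all month + burst for 37 peak hours

print(flat)                               # 122016 OCPU-hours
print(elastic)                            # 54144 OCPU-hours
print(round(1 - elastic / flat, 2))       # 0.56 reduction in raw OCPU-hours
```

The raw OCPU-hour reduction comes out higher than the quoted 49% price decrease, which is expected since burst and committed capacity are not necessarily priced identically.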


Snapshots in the Exadata Cloud

Building on the previous blog entry by Gurmeet (Exadata Snapshots: Simple, Secure, Space Efficient), we are going to see how we can create snapshots in the Exadata Cloud.

Snapshot PDBs

With the Exadata Cloud (Cloud Service and Cloud at Customer) we have two ways to create snapshots on our services. The first method is via SQL*Plus and works exactly like what was shown in the previous blog post. You need to have created a SPARSE disk group when you initially provisioned the service. Once we log into our Exadata Cloud and choose a database to work with, we can choose a PDB we want to create a clone from. We can see what PDBs are available with the following SQL:

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 EIGHTPDB                       READ WRITE NO

We see we have the EIGHTPDB to work with. We first need to shut the PDB down on all nodes:

SQL> alter pluggable database EIGHTPDB close instances=all;

Pluggable database altered.
Then we put it into READ ONLY mode:

SQL> alter pluggable database EIGHTPDB open read only instances=all;

Pluggable database altered.

And now we create our PDB snapshot copy using the SPARSE disk group:

SQL> create pluggable database CLONEPDB from EIGHTPDB tempfile reuse create_file_dest='+SPRC4' snapshot copy keystore identified by "PASSWORD";

Pluggable database created.

Being a cloud database, TDE is on by default, thus we need the keystore identified by "PASSWORD" clause, with PASSWORD being the one you supplied upon database creation. Finally, open the snapshot PDB:

SQL> alter pluggable database CLONEPDB open read write instances=all;

Pluggable database altered.

That's it: we have a PDB snapshot copy that uses the SPARSE disk group for changes.

Snapshot Masters

The second method is taking a snapshot master in the UI or via a REST service. Once we have that snapshot master, we can create sparse clones using the SPARSE disk group. Here is how you do just that. In the UI, view the database details and click the Administration tab on the left side of the page, then click the Snapshot subtab. Here we can see the Snapshot Master details. You can click the Create Snapshot Master button to bring up the create modal. In this modal we can create our snapshot master by giving it a name, a database name, and a password.
We can use the node subsetting feature by clicking the Hostnames field and selecting which nodes of the service we want this snapshot to run on. On the bottom of this modal is an ACFS checkbox. Using this checkbox we place this clone's Oracle binaries on an ACFS mount, saving space. This isn't for a production environment, but it works well for test and development. Lastly, we can choose to clone the source Oracle Home rather than create a new one. Once you fill in this modal, click Create to start the snapshot creation process. Once the snapshot is created, we can see it in the UI. Use the popup menu on the left to create a clone from this snapshot master. This clone uses the SPARSE disk group for any changes to the read-only snapshot master. Once the clone is created, you can use it just as you would any database on the Exadata Cloud Service. To see a video of this in action, click here.



Exadata Snapshots: Simple, Secure, Space Efficient

Database snapshots are widely used to re-create realistic workloads for database testing and development (test/dev). Almost all enterprise-class storage vendors support snapshot functionality that can create space-efficient point-in-time copies. However, using general-purpose snapshots for database snapshots requires compromises for test/dev activities and incurs extra process and management complexity as compared to database-aware snapshots. In this post we examine the requirements for database snapshots and contrast general-purpose snapshots with Exadata’s database-aware snapshots.

Database testing and development with general-purpose snapshots

Typical database test/dev requires re-creating the production environment as efficiently and simply as possible, often for multiple concurrent activities, with minimal disruption to production, and subject to data protection requirements (e.g., personal information, credit card numbers, etc., cannot percolate outside the production environment). Most general-purpose snapshot implementations migrate data from their primary (production) storage array to a secondary platform. Because a single production environment often requires multiple test/dev environments, secondary storage arrays and platforms typically can't afford to have the same performance or availability characteristics as the production environment. Also, any off-database redaction process introduces additional users and software into the set of entities that must be trusted. These compromises and additional procedures add complexity and reduce the quality of test/dev activities. Besides these snapshot issues, general-purpose storage arrays require ongoing, specialized administration. Database Administrators (DBAs) manage database snapshots with tools provided by the storage array vendor, third-party data management tools, or Oracle’s own Enterprise Manager (OEM).
While OEM simplifies snapshot management, general-purpose storage arrays also require significant non-database-related (storage) management, including creation of volumes or LUNs, capacity management, backup of the array, software upgrades, hardware maintenance tasks, and storage-level user control and access. In sum, while general-purpose snapshots are useful, they entail significant compromises that limit the quality and value of test/dev activities.

Do all-flash arrays enable first-class database testing and development?

Some all-flash array vendors propose to host test and development databases and production databases in the same primary (production) all-flash storage array. While this approach addresses the performance and availability compromises discussed in the previous section, it introduces a new set of issues. Combining production and test/dev workloads on the same storage array requires careful planning and active management of the database systems and the storage array, as test/dev databases can easily command a large portion of the array's resources and degrade the performance of the production database. One problem is that most all-flash arrays have low throughput, single-digit GB/s at best, which can be saturated by a handful of test databases running data warehousing queries in the shared storage array. More broadly, general-purpose storage arrays are unaware of the context in which read and write requests are issued by the database, and are thus limited in their ability to combine workloads intelligently. For example, if a production database submits a log write while a test/dev database is issuing a long batch of writes, an all-flash array is unable to identify and prioritize the log write over the batch writes. Combining test/dev with production workloads on the same all-flash storage array trades improvements in dev/test environments for risks to the production workloads.
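To make the log-write example concrete, here is a purely illustrative toy (not Exadata's actual I/O Resource Management) showing how a database-aware layer that knows a redo write is latency-critical can service it ahead of earlier bulk test/dev writes, while a context-blind array would serve them in arrival order:

```python
import heapq

# Toy priority queue: lower number = higher priority. A database-aware
# storage layer tags redo log writes as latency-critical, so they jump
# ahead of bulk test/dev writes already queued.
REDO_WRITE, BATCH_WRITE = 0, 1

queue = []
seq = 0  # arrival order, used to break ties fairly

def submit(priority, label):
    global seq
    heapq.heappush(queue, (priority, seq, label))
    seq += 1

for i in range(3):
    submit(BATCH_WRITE, f"test/dev batch write {i}")
submit(REDO_WRITE, "production redo log write")   # arrives last...

first = heapq.heappop(queue)[2]                   # ...but is serviced first
print(first)
```

A context-blind FIFO would have serviced the three batch writes first, delaying the commit that depends on the redo write; this is the essence of the database-aware scheduling argument above.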
And all-flash storage arrays are still inherently limited in their throughput.

Database-aware snapshots

The solution is to make snapshots as close to the database, in functionality and location, as possible. Exadata provides a database-aware snapshot solution that is simple, space efficient, and secure, and it doesn't compromise the availability and performance of the production system.

Simple: An Exadata Snapshot is based on a test master, which is a full clone of the source database. From a single test master you can create multiple Exadata Snapshots with minimal additional storage and minimal effort. It takes one SQL command to create a snapshot database, and it is created in minimal time:

SQL> create pluggable database PDB1S1 from PDB1TM1 tempfile reuse create_file_dest='+SPARSE' snapshot copy;

Each snapshot represents a complete Oracle database and can take advantage of all Exadata features: Smart Scan, Smart Flash Cache, and Smart Flash Logging, to name a few. The Exadata documentation explains the process in great detail. Exadata Snapshots are integrated with Enterprise Manager as well, and customers can use EM to create snapshots in a few clicks. This demo gives a sneak peek at the integration between Enterprise Manager and Exadata Snapshots. Exadata Cloud offerings also allow users to create a snapshot with just a few clicks (more on this in a later post).

Space Efficient: Exadata Snapshots are built on sparse disk groups and use physical storage in proportion to the new changes made to the database. In addition, hierarchical snapshots enable you to create snapshots from snapshots, and any of these snapshots can act as a sparse test master. You can also take a sparse backup of a snapshot, preserving only the new changes introduced in that database. In fact, Rent-A-Center, an Exadata customer, achieved ~30X space savings by implementing Exadata Snapshots.

Secure: Each Exadata Snapshot inherits the same security model as the primary database.
In addition, the test master database owner can delete or mask sensitive data in the test master before making it available to non-privileged users in the snapshot database. Exadata's comprehensive, database-aware resource management ensures that snapshots do not interfere with the performance of production workloads, and test/dev environments also benefit from Exadata's high availability. In later blog posts we will discuss best practices for creating and managing snapshots, sizing the system to take advantage of them, as well as how snapshots are deployed in Exadata Cloud. Next: Snapshots in the Exadata Cloud.
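As a hedged illustration of that masking step (the table and column names below are entirely hypothetical, not part of any Exadata tooling), the owner might scrub sensitive fields in the test master before any snapshots are taken from it:

```sql
-- Hypothetical example: mask sensitive columns in the test master PDB
-- before creating snapshots. Table and column names are invented.
alter session set container = PDB1TM1;
update customers
   set ssn   = NULL,
       email = 'masked@example.com';
commit;
```

Every snapshot created afterwards inherits the masked data, so non-privileged users never see the original values.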


Announcing Exadata Cloud at Customer Release 18.1.4: Enhanced Database Consolidation

We are excited to announce general availability of the latest update for Exadata Cloud at Customer. This new release (18.1.4) brings customers some of their most requested features, features that unlock Exadata Cloud at Customer's true potential as a database consolidation platform. The key features in this release include:

- Support for multiple VM Clusters in a single Exadata Machine
- Network isolation between the VM Clusters
- Shared Oracle Homes
- Snapshots and Sparse Clones for test/dev
- Restoration from a cloud backup
- Support for Oracle Database 18c, the latest release of our flagship database

Together, these new features solidify Exadata Cloud at Customer as the best database consolidation platform for any customer. Unlike traditional public cloud solutions, Exadata Cloud at Customer runs in your data center, able to host all your data, even data with strict privacy and security requirements, while still providing the agility and economic benefits of a cloud solution. Support for all features is integrated into the easy-to-use web-based UI, making their use simple and intuitive. How do these new features benefit you as you consolidate? First of all, multiple VM clusters within the environment provide better isolation. Better isolation means you can consolidate databases that, according to your security and privacy policies or even generally accepted best practices, should not be in the same cluster. For example, different business units or data sets may legally require separation, to ensure data is accessible by authorized users only. Production data can be segregated in a different cluster environment than test/dev, preventing administrators from confusing production data with almost identical test data and eliminating undesired impacts on production workloads due to testing operations. New network isolation features ensure network data is protected from users on other clusters. Data destined for one cluster cannot be intercepted by another.
The new release also introduces shared Oracle Homes. Although the Exadata platform can consolidate hundreds of databases, Oracle Homes take up space on the local disk drives, and if each database required its own home, you'd be limited by the space on the disk. In addition, managing and maintaining hundreds of Oracle Homes would become a burden. Maximum Availability Architecture (MAA) recommends fewer than 10 Oracle Homes on a single server. This is a tradeoff between space and manageability on one hand, and the flexibility to independently upgrade databases on the other. Now, with this release, you can follow these best practices with Exadata Cloud at Customer. One common consolidation use case is creating test and development environments. The new release makes Exadata Cloud at Customer a great test/dev platform. Using the snapshot and sparse clone feature, you can quickly and effortlessly create many test/dev environments in a single cluster without taking up a lot of space. Each sparse clone only requires the space to record changes from the master. Creating clones is thus fast and space efficient. Now each developer and tester can have their own independent test and development environment. One of the biggest challenges of consolidation is migrating the many databases. The new release makes it easier for you to get your databases onto the new platform. Simply back them up using Oracle Database Backup Cloud Service, and you can restore them to the Exadata Cloud at Customer with little fuss. Leverage the easy-to-use wizard in the control plane, or implement custom automation and workflows with the REST API. Finally, this release enables Oracle Database 18c on the Exadata Cloud at Customer. This is the latest release of the Oracle database, and it contains a great many features to support consolidation of databases on the platform. Improve consolidation density by packing pluggable databases (PDBs) into shared container databases.
Improve overall performance with Database In-Memory. All Oracle Database 18c features are available on Exadata Cloud at Customer. The new features in this Exadata Cloud at Customer release will all help you more easily and safely adopt Oracle Database 18c. Support for multiple VM Clusters will help anyone wishing to first test their applications with this new release. You can create an independent 18c cluster for test/dev, while your production applications continue to run on an existing 12.2 cluster. Restore a copy of your production database to your test/dev environment using the new cloud backup restoration feature, then use sparse clones to give each developer or tester their own copy of the database. There is no need to worry about running out of storage cell space with sparse clones, or local disk space with shared Oracle Homes. New deployments will get the latest release automatically, and existing deployments will be contacted by Oracle Cloud Operations to schedule the update. I know many existing customers are anxiously awaiting these new features to enable the next step in their journey to simplify and consolidate their IT infrastructure. For those of you new to Exadata Cloud at Customer, you can find more information on Oracle.com
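For readers new to Multitenant, packing databases into a shared container takes only a couple of statements. This is a generic sketch of the technique (the PDB name, admin credentials and file paths are illustrative choices, not values the service requires):

```sql
-- Generic Multitenant sketch; SALESPDB and the paths are illustrative.
-- Run as a common user in the container database (CDB).
create pluggable database SALESPDB
  admin user pdbadmin identified by "ChangeMe_1"
  file_name_convert = ('/pdbseed/', '/salespdb/');

alter pluggable database SALESPDB open;
```

Each PDB created this way shares the container's background processes and memory, which is where the consolidation density gains come from.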


Oracle Database Exadata Cloud Service

Not every company can own and staff a data center; there is a lot involved. So where can you turn if you need the highest-performing and most available infrastructure for running Oracle Database? Well, the Exadata Cloud Service, of course. Now, while we can have a bit of fun with the beginning of this post, it brings some honesty as well. A good majority of our Exadata Cloud Service customers are new to Exadata. They have never owned one on premises and are able to start their engineered systems journey with the cloud. So why is that? How is the cloud making owning and using an Exadata accessible to more and more customers? This entry, as well as a few follow-up posts, will go into precisely how this is possible. Let's start from the beginning: provisioning. With an on-premises Exadata, you have some setup steps to perform before you are up and running. You also have shipping times. We are looking at a couple of weeks. Again, nothing too long, but it will require, as we stated before, data center space, power and cooling, as well as cabling, to get started. What if I told you that with the Exadata Cloud Service, I could have you up and running in 4 hours? No data center or prior knowledge needed? This is the Exadata Cloud Service, and one of the reasons that anyone who wants the power of an Exadata for their database workload can have one. Once you get your welcome to the Oracle Cloud email and you log into your account, you can start creating a service immediately. Just click the Create Service Instance button and off we go. The first and only page in our Create Service Instance flow asks you to name the service and to decide how you want to back up your database. Start by naming your service, then selecting the Region where you purchased your ExaCS. Today customers can choose from four different data centers, with more coming soon. The Rack Size lets you pick from the shapes you purchased: quarter, half or full. Lastly is backing up the service. We have two options today.
We can use local and cloud backups, or cloud-only backups. By checking this checkbox, we split up the grid disks to allow seven days of backups. This will allow you to not only back up quickly, but also restore and create clones from backups. More on this later. Once you select these options, you are ready to create your service. Click the Create button and, in about 2 hours for a quarter rack shape, you are ready to start creating databases. Speaking of creating databases, let's look at that now. On the create database page, all we need to do is click the Create button. On the first page, you are asked for some basic information. First, name the database service and select the location or ExaCS you want it to be placed on. Some customers have multiple ExaCS instances, and the Exadata System dropdown will allow you to select the exact system you would like to create the database on. The next option to note here is the Database Version option. With the Exadata Cloud Service, you can create an 11.2, 12.1 or 12.2 database. And not just one, but multiple databases of multiple versions. The next page of the Create Database wizard is the details page. We ask that you provide the service with a public key for which only you have the matching private key. The tooling will use that public key to secure the service so that only you, the holder of the matching private key, have access and no one else. On this page, just enter some basic information such as the password for the sys and system users, set the character set of the database, and select how you want to back up this particular database. We have three choices: both cloud and local, cloud only, and none. If this is a development database, or maybe a database where you want to test out a cool new 12.2 feature, you may not want backups, so choose none. Have a database that is going to be used for production or disaster recovery? Choose one of the other two backup options.
When using cloud backups, just provide the details of your Oracle Database Backup Cloud Service and the tooling will automatically set it up for you. That's it! Hit create and, in about 2 hours, you will have a database running on all the nodes of your ExaCS shape, ready to be used. Want to create additional databases? Follow the same steps and, in about 20 minutes for a quarter rack shape, you will be ready to start using it.
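If you have never generated the key pair the wizard asks for, a standard OpenSSH key works. This is a minimal sketch; the file name and comment are arbitrary choices, not values the service requires:

```shell
# Generate an RSA key pair with an empty passphrase; the file name
# "exacs_key" and the comment "exacs-access" are illustrative only.
ssh-keygen -t rsa -b 2048 -N "" -f exacs_key -C "exacs-access"

# The PUBLIC half (.pub) is what you paste into the wizard;
# keep the private key (exacs_key) to yourself.
cat exacs_key.pub
```

Only the public key ever leaves your machine, which is exactly why the tooling can guarantee that only the holder of the private key gets access.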


Announcement: Oracle Database 12c Release 2 on Exadata and SuperCluster

The last 10 days have been very exciting for Exadata. First we released Exadata Software 12.2.1.1.0 with over 30 features and enhancements. Then we launched Exadata SL6, unleashing the power of Software in Silicon for Exadata systems. Now we are releasing the on-premises version of Oracle Database 12c Release 2 exclusively on Exadata and SuperCluster. Oracle Database 12c Release 2 enhances every aspect of the Oracle Database. Let's take a look at some highlights: Oracle Multitenant gets a boost with support for Application Containers. Multitenant users can now manage memory resources for their Pluggable Databases in addition to CPU and IO resources. Database In-Memory gets faster joins. In-Memory users can now run in-memory queries on JSON data. In-Memory formats are now persisted in storage, thus enabling faster restarts. This release also makes the database more secure, as tablespaces can now be encrypted online without the need to export/import existing unencrypted tablespaces. Oracle Database 12c Release 2 also includes the much-anticipated Oracle Native Sharding. There are hundreds of other features that make the database more secure, more available, faster, easier to manage, and more developer friendly. Oracle Database 12c Release 2 further differentiates the Exadata platform. Building on Exadata's unique resource management capabilities and IO optimizations, the following two features are exclusively available on Exadata and SuperCluster: In-Memory processing on Active Data Guard standby databases, and support for more than 252 Pluggable Databases per Container Database. This release also enables support for many unique Exadata features such as hierarchical snapshots, Smart Scan offload for compressed indexes, Smart Scan enhancements to XML operators, end-to-end IO latency capping, and extended distance clusters.
For more details please refer to this slide deck, tune in to this webcast, or read our prior blog post. To take advantage of the full potential of Oracle Database 12c Release 2, we encourage customers to upgrade to the latest Exadata Software release 12.2.1.1.0 before upgrading their database to 12c Release 2. Exadata Software 12.2.1.1.0 supports Smart Scan functionality for Oracle Database 12c Release 2 and is available via OSDC. For the Oracle Database 12c Release 2 release schedule on platforms other than Exadata and SuperCluster, please refer to MOS Note 742060.1.


Exadata SL6: A new Era in Software/Hardware Co-Engineering

Since the launch of Exadata in 2008, Intel x86 processors have powered Exadata database servers. Today we are expanding our Exadata family with the addition of Exadata SL6, our first offering with SPARC M7 processors. SL stands for "SPARC Linux". The Exadata Database Machine SL6 is nearly identical to the x86-based Exadata, except it uses Oracle SPARC T7-2 database servers based on the SPARC M7 processor. SPARC M7 is the world's most advanced processor to run Oracle databases, and the first to use a revolutionary technology from Oracle referred to as Software in Silicon. Software in Silicon technology is a breakthrough in microprocessor and server design, enabling databases to run faster and with unprecedented security and reliability. Even though the database servers are based on SPARC processors, Exadata SL6 runs the exact same Linux operating system as x86-based Exadata systems. This blog post discusses the highlights of the product as well as our motivation behind it and what this announcement means to our existing product line. To learn more about the finer details, please refer to the Exadata SL6 datasheet, and please watch Juan Loaiza explaining Exadata SL6 and its benefits in this short video. Now let's take a deeper look into Software in Silicon technology. Software in Silicon comprises three very unique technology offerings: SQL in Silicon, Capacity in Silicon and Security in Silicon. Each of these topics is worthy of its own individual blog post, but for this write-up we will focus on the highlights. The SPARC M7 processor incorporates 32 on-chip Data Analytics Accelerator (DAX) engines that are specifically designed to speed up analytic queries. The accelerators offload in-memory query processing and perform real-time data decompression; capabilities that are also referred to as SQL in Silicon and Capacity in Silicon, respectively.
The DAX SQL in Silicon engines accelerate analytic queries by independently searching and filtering columns of data while the normal processor cores run other functions. Using Capacity in Silicon, processors perform data decompression at full memory speeds, allowing large volumes of in-memory data to be kept in highly compressed format without incurring additional overhead for decompression when accessed. This means you can drastically increase the processing speed of analytics, allowing your applications to optimally process massive amounts of data using the Oracle Database 12c In-Memory option, while executing OLTP transactions at top speed at the same time. The Security in Silicon functions of SPARC M7 continuously perform validity checks on every memory reference made by the processor without incurring performance overhead. Security in Silicon helps detect buffer overflow attacks made by malicious software, and enables applications such as the Oracle Database to identify and prevent erroneous memory accesses. The Security in Silicon technologies also encompass cryptographic instruction accelerators, which are integrated into each processor core of the SPARC M7 processor. These accelerators enable high-speed encryption/decryption for more than a dozen industry-standard ciphers. In addition, SPARC M7 is the world's fastest conventional microprocessor. With an industry-leading 32 cores per processor running at 4.1 GHz, it has shattered virtually all performance benchmarks. Faster processors allow you to run bigger workloads on smaller systems, thus lowering your software and personnel costs. Exadata SL6 ushers in a new era of hardware and software co-engineering through transformational Software in Silicon technology and the pure performance power of SPARC M7, combined with Exadata storage and networking. Faster processors enable much higher throughput, which reduces the number of servers required to run a task.
Exadata SL6 also delivers 20-30% more IOs than an equivalent Exadata X6-2 configuration, and hence further lowers the total cost of ownership of the solution. Exadata SL6 also offers elastic configurations. Customers can start with an eighth rack and incrementally add compute or storage as and when needed. In a future post we will break down the cost analysis for SL6. The storage and networking subsystem in Exadata SL6 is identical to Exadata X6-2. It uses the same InfiniBand infrastructure and High Capacity and Extreme Flash storage servers. Since Exadata SL6 runs the exact same Oracle Linux as its x86 counterpart, it supports all the same APIs and interfaces. The platform becomes agnostic for most developer tasks and operations. In summary, Exadata SL6 is the best-of-all-worlds database system. It uses the fastest processors, SPARC M7, with the fastest storage, Exadata Storage. It uses ultra-fast, low-latency InfiniBand networking with specialized database-aware protocols. Using SQL in Silicon it is able to deliver analytics at silicon speeds. Capacity in Silicon further increases the amount of data against which these analytics routines run. All access to system memory is secured using Security in Silicon, making this system one of the world's most secure. To top it all off, it leverages Exadata Software with thousands of database-focused innovations, such as Smart Scan and Smart Flash Cache, that not only provide ground-breaking database performance but also deliver carrier-grade high availability. And all this goodness comes at the exact same list price as Exadata X6-2. One might ask how Exadata SL6 fits in with similar offerings such as Oracle SuperCluster. SuperCluster also offers all the goodness of the SPARC M7; in fact it was a pioneer. SuperCluster runs Solaris and, with all the advantages of Exadata Storage, ZFS and many other optimizations, is your ideal platform to run the most demanding database applications.
SL6 broadens our portfolio to offer the same SPARC M7 and Exadata Storage goodness on a Linux platform. Both products have their own differentiators and their own target use cases. At the end of the day it gives you, the customer, more choices to consume all the innovation our products provide. We are fully committed to x86-based Exadata systems as well; they are our flagship offering and we are not digressing from that path. Exadata SL6 will continue to co-exist with its older cousins SuperCluster and Exadata X6. A little bit of sibling rivalry makes families tighter and stronger.


Exadata 12.2.1.1.0: The Best Database Cloud Platform Just Got Better

We are announcing the general availability of Exadata Software Release 12.2.1.1.0. This is a very content-rich release with over 30 features and enhancements that improve every aspect of Exadata: faster analytics, high-throughput low-latency transaction processing, and massive consolidation of mixed database workloads. Compelling enhancements in this release make the platform more secure and easier to manage. Most of these new features are available on all supported Exadata hardware generations, thus ensuring complete investment protection in all Exadata deployment types: on-premises, Cloud Service, or Cloud Machine. It's difficult to do justice to all the enhancements in a single post, so we'll discuss some highlights in this one, followed by a series of posts discussing individual features. You can always engage with us through this platform or find us on Twitter @ExadataPM. The Exadata documentation describes the features in detail, and we also recorded a short webcast covering the highlights. Exadata 12.2.1.1.0 implements query offload capabilities for Oracle Database 12.2.0.1. All smart IO features - such as Smart Scan and Smart Flash Cache - are now extended to Oracle Database 12.2, in addition to existing support for 12.1 and 11.2. Customers can run all three database versions simultaneously on the same platform and leverage all of Exadata's offload capabilities. At the same time they can also host 10.2 databases via ACFS, though we hope these customers will upgrade to Oracle Database 12c soon. Let's take a look at some of the features of this release in more detail. Starting with analytics, Smart Scan gets an uplift with the ability to scan compressed indexes. Queries involving compressed index scans benefit from this feature. Additionally, XMLExists and XMLCast can now be offloaded for LOBs as well.
Speaking of LOBs, the operators LENGTH, SUBSTR, INSTR, CONCAT, LPAD, RPAD, LTRIM, RTRIM, LOWER, UPPER, NLS_LOWER, NLS_UPPER, NVL, REPLACE, REGEXP_INSTR and TO_CHAR are now offloaded to the Exadata Storage Servers. Storage Indexes are a hidden, not often talked about, yet one of the most impactful features of Exadata. With just simple min/max summary information about a column, Storage Indexes help users avoid tons of unnecessary IO, thus improving analytic performance many fold. In this release, Storage Indexes get a makeover as well. In addition to holding min/max information, Storage Indexes will now also store set-membership information about the column. One might wonder, what is this "set-membership" information? Let's dig a little deeper. Min/max works very well for columns with numeric values, dates, or high cardinality, but with low-cardinality columns it might not work as well. Consider, for example, a column that represents states within the U.S. A simplistic min/max approach will have a min of Alabama and a max of Wyoming, and almost all queries will result in an IO, because every state falls within the min/max range irrespective of whether that particular state's data is actually present in the region. Set-membership addresses this use case. In addition to the min/max values, set-membership stores a Bloom filter representation of the hashed values of all distinct entries in the region. Storage Indexes and Bloom filters are a perfect match. Bloom filters are very good at telling you what is not part of a dataset, and Storage Indexes are trying to avoid IOs to a region that will not yield a meaningful result.
Merging all of this together, Storage Indexes will now look up a Bloom filter to determine whether an IO is necessary for columns with low cardinality, and all of this happens dynamically in a very efficient manner. Along with the set-membership enhancement we also removed the eight-column constraint for Storage Indexes, so we now store summary information for twenty-four columns. How Storage Indexes share space between min/max and set-membership data is a subject for a blog post of its own, so stay tuned. Shifting focus to in-memory databases: Exadata with In-Memory Fault Tolerance is an ideal platform for running Oracle Database In-Memory. In fact, I have yet to meet a customer who needs the performance of an in-memory database and can sustain an outage if something goes wrong with the server running the instance. In-memory databases derive their phenomenal performance partly from their ability to utilize SIMD vector processing at the CPU level, and the ability to interact with columnar datasets in memory. Until today, in-memory databases have been constrained by the compute and memory resources of their database server. This software release extends the benefits of Oracle Database In-Memory beyond the conventional boundaries of the database server to Exadata storage. This is uniquely possible because Exadata uses a server-centric approach to storage. Our storage servers are very similar in architecture to most 2-socket compute servers; we use the same family of microprocessors and have the same SIMD capabilities as the compute server. Herein lies the opportunity: if we can use SIMD vector processing at the storage tier and redesign our flash cache to use In-Memory Columnar formats, then we can deliver database performance at the speed of in-memory processing, but with the capacity and economics of flash storage. That's exactly what we are doing in this software release.
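To make the min/max versus set-membership contrast concrete, here is a toy sketch in Python. This is not Exadata code; the filter size, hash count and hashing scheme are our own illustrative choices:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, value):
        # Derive k independent bit positions from SHA-256 of "i:value".
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{value}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, value):
        for pos in self._positions(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        # No false negatives; false positives are possible but rare.
        return all((self.bits >> pos) & 1 for pos in self._positions(value))

# One storage "region" holding a low-cardinality column of U.S. states.
region = ["Alabama", "Georgia", "Texas", "Wyoming"]

# min/max pruning: nearly every state sorts between Alabama and Wyoming,
# so the region can almost never be skipped.
lo, hi = min(region), max(region)
print(lo <= "Ohio" <= hi)          # True: min/max cannot skip the IO

# Set-membership pruning: the Bloom filter can show a value is absent.
bf = BloomFilter()
for state in region:
    bf.add(state)
print(bf.might_contain("Texas"))   # True: the IO is needed
print(bf.might_contain("Ohio"))    # almost certainly False: IO skipped
```

The asymmetry is the whole point: a negative answer from the filter is always correct, so the scan can safely skip the region, while a (rare) false positive merely costs one unnecessary IO.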
In fact, we have extended the technology to use the offload capabilities in our storage to deliver in-memory joins and aggregation at the storage tier as well. But wait, it gets better. Oracle Database 12.2 on Exadata also uniquely enables us to run in-memory processing on an Active Data Guard standby database, thus further enhancing the return on investment of Active Data Guard. One interesting tidbit: our published performance numbers do not even account for possible performance improvements due to technologies like Storage Indexes or all the optimizations built in for Oracle Database In-Memory. So the platform can deliver much more analytic performance than advertised. Analytic workloads only make up a portion of jobs on Exadata. OLTP and mixed-workload consolidation are the other popular workloads. Let's discuss enhancements in this release focused on delivering better transaction processing on Exadata. OLTP databases are much more impacted by variable performance of the IO subsystem. A few releases ago we introduced a series of features that targeted variable performance of storage devices in the form of IO latency capping. We also introduced "instant death detection" of compute nodes via the InfiniBand infrastructure. With this new release we are extending that portfolio to address rare but important IO latency issues that can arise between compute and storage servers. Sometimes, due to outliers in the networking stack or hardware issues with a storage server, an IO can experience a prolonged response time. In such rare cases the Exadata Database Server will detect the anomaly and redirect read IOs to another storage server. This maintains consistently low response times, which is very critical for an OLTP database. We also added a lot of features to enhance consolidation on the platform.
By leveraging the Database 12c Multitenant option combined with resource management, we are able to uniquely deploy more than 252 Pluggable Databases (up to 4,000) per Container Database on an Exadata system running Oracle Multitenant. Given the scale of consolidation on Exadata, it's not uncommon to see hundreds of databases being deployed on one Exadata system, and this combination will enable previously unseen consolidation densities on Exadata. A lot of these databases could be test/dev databases, and hence just implementing efficiencies at the compute tier will not suffice. Rather, the efficiencies have to extend to the storage tier as well. To address this we are enhancing the implementation of snapshots on Exadata. As you might know, Exadata snapshots are snapshots developed by database engineers for database engineers. You get the performance of Exadata with the storage efficiencies of a copy-on-write solution. This software release implements hierarchical snapshots (snapshots of snapshots) on Exadata storage. A hierarchical architecture enables real-time updates for the test master database, thus allowing tenants to periodically refresh test datasets without incurring additional storage costs. In addition, the release also enables sparse backups of the snapshots, passing down the storage savings to secondary storage as well. No discussion about Exadata is complete if we don't include the availability and manageability aspects of the system. However, improving the availability of an already "five-nines" system is a tough ask. We took that on as a challenge and delivered some unique optimizations in this release. With a server-centric approach to storage, we have compute resources that can be deployed to recover from a possible hardware error. This release allows users to dynamically allocate more compute resources when data needs to be redistributed across devices in the event of a failure.
In addition, we restore critical information first, as the system has a very clear understanding of what's most important in terms of availability. To add to this, we are now also leveraging data in our flash cache to speed up restoration of redundancy (mirroring) and availability in the event of a media failure. On the manageability front, we challenged ourselves to deliver on many fronts: upgrades, security, monitoring and more. We improved software upgrade times by 2x over the prior major release 12.1.3.3.0, which makes for an overall improvement of 5x in the last 12 months. In addition to upgrade times, we also improved ExaWatcher by introducing graphs, and added the ability for Oracle Installer to install quorum disks: bye-bye, quorum disk utility. IP addresses, DNS settings and NTP configurations can now be changed online, so you can bring new tenants on board easily. We now offer "full stack secure erase", so you can take tenants offline easily and securely as well. There are more advantages to secure erase than mentioned here, but we'll address them in a future post. A lot of our customers are developing management frameworks using the REST interface. To facilitate this we are extending our REST service to manage database servers in addition to storage servers. Hopefully this post gives you an overview of the innovations and enhancements we are delivering with the new Exadata 12.2.1.1.0 software release. Subsequent posts will go into more detail about these features. Almost everything discussed in this post comes at no extra cost to you; just upgrade your Exadata software and you are good to go. Heck, you don't even need to commit human resources to do that; Oracle Platinum Services will take care of it for you. No charge.


Oracle Database Exadata Cloud Machine now available

Continuing with our cloud innovation, we are very pleased to announce the general availability of Oracle Database Exadata Cloud Machine (ExaCM). For the first time in IT, you can now deploy the most powerful Oracle Database - with all options and features - on the most powerful database platform - Exadata - with all the cloud computing benefits, at your data center, behind your firewall. The associated Exadata infrastructure is managed by Oracle Cloud experts, which means you can focus on your primary business instead of having your limited IT resources stretched by mundane tasks such as infrastructure updates. If you have cloud computing on your strategic IT roadmap but couldn't make that leap for whatever reason, now, with ExaCM, you can embrace the cloud without having to compromise on SQL functionality, performance, availability, or transactional integrity. For example: you are considering a tech refresh and are starting to like the cloud pay-as-you-go model. You are also just starting to think about migrating some of your Oracle Databases to the public cloud. And right then, as Murphy's Law would have it, you are told that company policy says those databases cannot leave the company data center, or even your country's political territory. These policies may have something to do with data sovereignty / data residency requirements. Or, on a more technical level, you identified these databases, but then you find out that there are all these legacy applications talking to them, whose developers are no longer with the company, and it would take a superhuman effort to migrate those applications to a cloud service. So this is what Exadata Cloud Machine is about: it brings our cloud to you. But as the TV ad goes - we ain't stopping there! You see, an inherent beauty of our Oracle Database cloud strategy is that no changes to applications or data models are required either.
This means that if you are running Oracle Databases on-premises today, rest assured your investment is protected: you can continue to do so, while at the same time starting your cloud journey, whether in Oracle's public cloud data centers or in your own data center, without having to make any compromises. And with Exadata, it doesn't matter whether the workload is OLTP, analytics, or a consolidation effort; it's the same unified platform. This is what we refer to as a consistent experience across our various Exadata deployment models: on-premises Exadata, Exadata Cloud Service, or Exadata Cloud Machine. Since it is the same Oracle Database, you can leverage the same in-house Oracle skillset that you have accumulated over many, many years. And because it is Exadata, it is the same platform that thousands of other customers have deployed across all verticals around the world, and maybe even you yourself. This is what investment protection is all about. Interested? Please get started here. The Exadata Cloud Machine datasheet is located here. There will be additional interesting updates on our Exadata Cloud strategy in this blogspace, so please stay tuned. Meanwhile, let us know what your enterprise cloud journey looks like!


Exadata: Database Engineering at Cloud Scale

In our neck of the woods here in Silicon Valley, it’s not uncommon to find a lot of startups working on unique challenges regarding database engineering. Some are attacking the scalability aspect, some are attempting to reduce the latency of an OLTP database, others are trying to make the data warehouse run faster, while others are focused on bringing the latest innovations in silicon technology to database systems. Exadata is the culmination of all those challenges and then some. We are constantly pushing the technology on three frontiers - performance, availability and scalability - while constantly reducing the cost of system ownership. Our mandate is to build the world’s fastest, most available and most cost effective systems. This mandate usually leads us to adopt newer hardware technologies and harden existing ones, to challenge the conventional wisdom of solving problems, and to be proactive and engineer for the future needs of our customers rather than narrowly focusing on the problem of the day. Speaking of our customers, we might have the industry’s widest spectrum. Some are running petabyte-scale warehouses with billions of queries per day; some are running mission critical transactional databases with billions of updates per day (yeah, that’s right, billions with a B); while some are just trying to reduce the cost of their test and dev platform by consolidating databases. Over the last year alone I have met nearly one hundred Exadata customers worldwide and from every industry vertical. Any geography you pick, you’ll find Exadata well represented. One heavy-user government customer we partnered with during the early days of Exadata shared their business problem with us. They were building a system to screen all the cargo that enters the country every day to identify malicious shipments. Such a system, for each individual cargo item, has to query multiple databases, crosslink varied sets of information, come up with a decision and update various systems instantaneously.
Now repeat this workflow for billions of items with seasonal fluctuations. And one last thing: the system has to remain up all the time, even through planned and unplanned maintenance. When an issue with the system you work on has the potential to make breaking news on CNN even before it hits your mailbox, then you have to think differently. These customers not only help make our products better but also collectively push the technology to the next frontier. That's why every startup in this space aspires to beat our performance numbers and a lot of incumbents are constantly trying to match our availability and scalability stats. Exadata has increasingly become the platform of choice for the consolidation of Oracle databases. Consolidating databases on a single platform makes a lot of sense from an operational expense and manageability perspective. To some, consolidation may mean packing a dozen databases running the same type of workload on some kind of converged infrastructure. When we looked at it, we challenged ourselves to design a system that allows customers to run competing workloads without compromising performance or availability. OLTP and data warehouse workloads have very different constraints and very different IO and CPU usage patterns, so running them concurrently requires some precision engineering. You need to make sure the data warehouse workloads don't consume the entire IO bandwidth. At the same time, the latency sensitive OLTP workloads must get their required TLC. But running hundreds of these workloads on a single system requires a completely different level of engineering. It exposes a whole new set of problems and challenges, some never before addressed by the industry. It's not uncommon to find customers running over 100 databases at full throttle on an eighth rack Exadata, our smallest configuration.
If you are from our neck of the woods - Silicon Valley and the San Francisco Bay Area - then you have a unique opportunity to hear how we approach and solve these unique challenges: in particular, how we shave microseconds off an OLTP IO, achieve hundreds of gigabytes per second of scan bandwidth while maintaining five-nines availability, and do all of the above at scale. Kodi Umamageswaran, Vice President of Exadata Development and one of the people who took Exadata from an idea to an industry-leading product, will speak at the Northern California Oracle User Group on August 17, 2016. You can register here for the conference.


Applications and Exadata: Why Select Exadata

Exadata is now in its sixth generation, with thousands of Exadatas deployed at thousands of customers. The growth rate remains strong as systems become more powerful, pricing remains relatively flat, and acquisition models (Capacity on Demand and Infrastructure as a Service Private Cloud) have made acquiring Exadata more attractive. There are many reasons why customers and partners choose to run their application's database on Exadata. Most of the reasons are focused on performance, and we'll see that performance takes many forms. We'll briefly investigate some of the reasons, outside of simplifying IT, why customers and partners decide to run their application databases on Exadata.

Reliability and Availability

This largely has to do with application downtime, and manifests itself through the implementation of Oracle's Maximum Availability Architecture on Exadata. A new Exadata customer (Fortune 400) recently offered that they were a best of breed Dell and EMC shop prior to selecting Exadata: "We did not buy Exadata for performance (speed). We bought it (Exadata) because we can't afford to have our ERP system down every week." Because the database is always up and running, the customer sees tremendous improvements in productivity in both IT and its lines of business.

Scalability

Exadata, with its scale out architecture, demonstrates excellent and often near linear scalability. With Exadata, customers know what to expect as their application workload grows. A major retailer was rolling out new retail management and check out software across 1500 stores. At 500 stores the existing compute and storage environment suffered a "melt down" and could not support even those 500 stores, let alone the 1500 required. This retail customer had prior Exadata experience in their supply chain and quickly moved to Exadata to achieve the needed performance and scalability.
In this situation, getting Exadata up and running quickly was essential to meeting their roll out plans (see Time to Deploy Applications).

Time to Deploy Applications

Oracle delivers Exadata hardware and software pre-configured, pre-tested and pre-tuned. This upfront engineering dramatically decreases the time to stand up Exadata and get the database(s) and application(s) operational. With a single patch from Oracle for planned maintenance, customers now test only once, as opposed to testing after each patch is applied. Further, applications run on Exadata unchanged, resulting in straightforward migrations, testing and turn up. Sherwin Williams reported that it reduced scheduled application downtime from 200 hours (budgeted) to 65 hours as it upgraded from E-Business Suite 11i to E-Business Suite R12. Productivity gains and new business processes are implemented sooner, along with realizing the benefits of the applications sooner.

Supportability

Because Exadata is pre-configured and deployed Exadata systems look alike, Oracle has an incomparable ability to monitor, maintain and support Exadata. This enables Oracle to offer Platinum Support. Sprint replaced an aging best of breed system with Exadata and observed: "(Exadata is) A platform that is stable enough where I don't get a call in the middle of the night." And followed with: "One of the things that helped us a great deal was support - Platinum Support Services that come with Exadata. Oracle has notified us before we actually realized we were having a disk problem." Pulte Group, a large US based home builder, and many other Exadata customers have shared similar experiences.

Performance

Applications that run on Exadata see dramatic boosts in performance, and these boosts lead to gains realized by the various lines of business. An insurance company can load its data warehouse with Exadata 4 times a day instead of once a day, so it has fresh data for its analyses.
Sprint improved the productivity of its call center operators by at least 30% with faster query responses. A large grocer was able to negotiate lower pricing and better delivery from its suppliers with faster, more accurate and longer forecasts. Exadata performance manifests itself in many ways: from simplifying IT to allowing customers to work faster and smarter with real time business intelligence, deeper analysis, near real-time batch processing and Internet scale OLTP. The performance gains deliver clear business benefits to various lines of business within an organization, reducing costs and creating new opportunities.


10 reasons to run Database In-Memory on Exadata

Exadata is touted by Oracle as the “best platform for running the Oracle Database”. The virtues of Exadata for dramatically improving database I/O performance are well documented and overwhelmingly supported by customer references. But is Exadata also the best platform for running Oracle Database In-Memory? After all, by definition an in-memory database avoids I/O by holding data in DRAM. Doesn’t that mean Exadata has no benefits for running an in-memory database compared to similar x86 systems? The simple answer is “no”. Exadata is definitely the best platform for running Oracle Database In-Memory, at least for any important database. If your in-memory database is small enough to fit entirely in DRAM, you can tolerate periods when the data isn’t available, and all you wish to do is read-only analytics and occasional data loads, then Exadata may be overkill. It’s unlikely you would be considering Exadata for such a situation anyway. Let’s discuss the 10 reasons to run Database In-Memory on Exadata:

1. On Exadata, you can configure an in-memory database to be fault-tolerant. In-memory fault-tolerance is only available on Oracle Engineered Systems.

2. Your database can exceed the size of DRAM and scale to any size across memory, flash and disk, with complete application transparency.

3. When your data doesn’t all fit in memory, you still get outstanding analytics performance on Exadata from disk and flash. For the same reasons, populating the in-memory data store from storage is very fast.

4. OLTP is fastest on Exadata, and you can run in-memory analytics directly against an OLTP database. Exadata enables real-time analytics better than any other platform.

5. Exadata is an ideal database consolidation platform, and Oracle Database In-Memory enhances that value even further. In many ways Oracle Database In-Memory “completes” Exadata by applying in-memory performance techniques that are similar to those used on disk and flash. The entire storage hierarchy (DRAM, flash, disk) delivers maximum value. Adding Oracle Database In-Memory to an existing Exadata system makes tremendous sense.

6. Exadata is highly optimized for RAC clustering, with special protocols over InfiniBand for low-latency cluster interconnect. RAC clustering is how Database In-Memory fault-tolerance is achieved and how large databases scale out in memory. No other platform supports RAC with In-Memory like Exadata does.

7. On Exadata, the use of Oracle VM Trusted Partitions is allowed, reducing software license costs. This is especially relevant for database options that require less compute power than a full server.

8. Exadata elastic configurations, introduced in Exadata X5, enable custom configurations that are perfectly tailored to specific workloads. If you need more memory but not a lot of storage capacity, you can create that configuration and only purchase the Exadata servers needed for the job.

9. Exadata is the development platform for Oracle Database In-Memory. Issues are discovered and fixed on Exadata first.

10. Exadata is also the primary platform for Oracle Database testing, HA best practices validation, integration and support. The same reasons it is the best platform for Oracle Database apply to Oracle Database In-Memory.


Controlling Software Costs on Exadata

Users of Exadata Database Machines can now take advantage of several techniques to control database software license costs, including Capacity on Demand (CoD), Elastic Configurations, and Trusted Virtual Machine Partitions. Exadata Capacity on Demand allows users to disable a subset of the cores on Exadata database servers to reduce licensing requirements. Up to 60% of the cores on an Exadata X5-2 Database Server can be disabled. In other words, with Exadata Capacity on Demand, users only need to license 14 cores per Exadata X5-2 Database Server even though there are 36 physical cores, greatly reducing the number of software licenses required per server. Exadata Capacity on Demand allows additional cores to be enabled and licensed as needed, and Exadata comes with all the necessary software tools to provision and manage Capacity on Demand. On other x86 servers, all the physical cores on the machine must be licensed. This means that customers must plan for the maximum possible core usage over several years, and carefully deploy a server whose number of cores comes close to this instead of deploying standard sized servers. In contrast, Exadata customers can enable just the cores they need at deployment time, and enable additional cores as they need them. Exadata Database Machines can now be elastically expanded by adding compute and storage servers one at a time as they are needed, extending the concept of deploying just the processors that are needed from the cores within a server to entire servers. Exadata now supports Oracle Virtual Machines (OVM). OVM is based on the industry standard Xen virtualization technology that is also the basis of Amazon.com's cloud. OVM can be used to pin virtual machines to specific cores to reduce licensing requirements in a manner similar to CoD. Further, OVM on Exadata implements "Trusted Partitions".
Trusted Partitions are much easier to use, and more flexible and dynamic, than core pinning, and they enable licensing of software on Exadata by virtual machine instead of by physical core. One benefit of licensing by virtual machine is that it facilitates consolidation of databases running different database options, since the options can be licensed on just the virtual machines using them instead of on every physical core in the server. To learn more about Trusted Partitions on Exadata, please refer to the Oracle Partitioning Document. Note that unlike some other virtualization solutions, there is no license charge for OVM software or management, and no additional support fees for OVM, since OS and VM support is included with standard hardware support. Our customer base and the analyst community have responded very positively to the unique direction we have taken in providing better flexibility and improved cost on Exadata. Please refer to the Gartner article "New X5 Generation Will Bring Pricing Improvements to Oracle Exadata" that discusses these topics. For an overview of Exadata X5 and how it enables you to spend less, please see the X5 launch video. Specific comments on how to "Spend Less by Paying Less" can be found in this section of the video.
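To make the Capacity on Demand arithmetic concrete, here is a small back-of-the-envelope sketch (not an Oracle tool; the helper name is made up for illustration) using the numbers from this post: an X5-2 database server has 36 physical cores, of which only 14 need to be licensed with CoD.

```python
def cod_license_savings(physical_cores: int, licensed_cores: int) -> float:
    """Fraction of per-server core licenses avoided by enabling only a
    subset of cores via Capacity on Demand (illustrative helper)."""
    return 1 - licensed_cores / physical_cores

# Exadata X5-2 database server: 36 physical cores, 14 licensed with CoD.
savings = cod_license_savings(36, 14)
print(f"{savings:.0%}")  # roughly 61% fewer cores to license per server
```

The same helper can be reused for other configurations as additional cores are enabled and licensed over time.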


Prereq Dependency Check Added for Exadata DB Server Updating

This blog update gives some background on new functionality added to the Exadata dbnodeupdate.sh utility. The recently added 'Prereq Dependency Check' feature in dbnodeupdate.sh eliminates unexpected package dependency problems during an update by detecting, in the prerequisite check phase, the dependency issues that would otherwise surface when the update is performed. (Linux package dependency issues sometimes happen after operators customize the db server by installing additional software packages or newer releases of packages.) The recommended use of this feature is to run dbnodeupdate.sh with the -v flag (to verify prereqs only) days before the scheduled maintenance. dbnodeupdate.sh will report dependency problems that must be resolved before proceeding with the update. Note that the dependency check functionality is also run by default prior to performing the update. dbnodeupdate.sh will now also list what packages will be removed for your update. Some details: For updates starting from 11.2.3.1 - 11.2.3.2.2 to any release earlier than 11.2.3.3.0, the dependency check is validated against 'standard' dependencies. For updates starting from 11.2.3.1 - 11.2.3.2.2 to any release equal to or later than 11.2.3.3.0, the dependency check is first validated against 'exact' RPM dependencies. If the 'exact' RPM dependency check passes, it is assumed the 'minimum' RPM dependency check will also pass. If the 'exact' RPM dependency check fails, then the 'minimum' RPM dependency check is run. Updates starting from release 11.2.2.4.2 do not have Prereq Dependency Check functionality. dbnodeupdate.sh will report what checks were executed for your update and which of them passed or failed. If the dependency check is executed as part of dbnodeupdate.sh -u and only the 'minimum' RPM dependency check passes, then the update target will implicitly be changed to 'minimum' (which is equal to dbnodeupdate.sh -m).
If the dependency check is executed as part of dbnodeupdate.sh -u and both the 'exact' and 'minimum' RPM dependency checks fail, then the operator will not be able to proceed with the update. For dependency checks that fail, a separate report is generated. This report highlights the failing package. The operator can then decide to either remove, install or update the failing package, depending on what works best for that particular server. Examples: Prereq run here - this is a prereq-only run. Notice the ''Exact' package dependency check failed' and the ''Minimum' package dependency check succeeded'. Failing dependencies here - for more details on what package causes the problem and what can be done to resolve it. Update scenario here - see the same dependency checks and notice 'Update target switched to 'minimum''. Existing backups of the current image overwritten by default: existing backups of the current image on the inactive lvm will be overwritten by default. You can decide to skip (and retain) the backup by using the "-n" flag.  Rene Kundersma
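As a sketch, the recommended flow above might look like the following; the ISO file name is just an example (taken from a later post in this series), and the flags are as described here:

```shell
# Days before the maintenance window: run the prerequisite checks only
# (-v), pointing -l at the update repository (an ISO zip in this example).
./dbnodeupdate.sh -u -l ./p17809253_112330_Linux-x86-64.zip -v

# During the window: perform the update itself. The same dependency
# checks run again by default before any packages are changed.
./dbnodeupdate.sh -u -l ./p17809253_112330_Linux-x86-64.zip
```

Running the prereq-only pass early leaves time to remove, install or update any packages the dependency report flags.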


Updating database servers using dbnodeupdate.sh - part 3

When running dbnodeupdate.sh to update a db server, the "-l" argument is always required to specify the source location of the new release/update. From here on we will call this location the 'repository'. The repository comes in two different flavors: as an ISO and as an Oracle ULN channel. Let's start with the ULN channel.

Exadata ULN Channels

For the different Exadata releases starting with 11.2.3.1.0, a corresponding 'base channel' is available on linux.oracle.com (ULN). Exadata customers can register with ULN using their CSI, subscribe to the channel (e.g. patch/release) they require and then synchronize one or more channels to a local (in-house) repository. This local repository can then be made available to the internal data center by publishing it via a web server. Note: it is not recommended to use an Exadata db server as the local YUM repository. Instructions on how to subscribe a system to ULN and synchronize to a local repository are the same as for Oracle Linux and Oracle VM, so generic instructions can be found on OTN here. The README of a specific Exadata release will always mention what channels are made available for that release. You can also find this information via MOS 888828.1 and in 1473002.1. In addition to the 'base channel', there is also a 'latest channel'. Currently for Exadata there is a latest channel for the 11.2 and 12.1 releases. The content of the 'latest' channel will never remain the same (unlike the 'base' channel) as long as there are updates for that 11.2 or 12.1 release. When, for example, a new Exadata 12 release is published, it will be added to the existing latest channel (in addition to a 'base channel' also being made available). This is the primary reason the 'latest' channel is much larger (and takes more time to synchronize) than a base channel. For Exadata installations on a release later than 11.2.3.2.1, the 'latest' channel brings additional options.
With the latest channel it's now possible to specify what release you want to update to. For example, when on 11.2.3.3.0 and planning to update to a later (but not the latest) 12.1 release, just as an example, you can use the 'latest' channel and the "-t" flag to specify what release you want to update to. Note that this can only be done with the 'latest' channel, and that without the "-t" argument the db server will by default be updated to the most recent release it can find in that channel. Of course there is also the option to just grab the 'base' channel and update without specifying any "-t" option. Examples: updating with a latest channel specifying no argument (the latest release in the channel will be used) here; updating with the latest channel to a specific release that does not exist (a typo) here; updating to a specific release here.

Exadata channel as ISO

For those not able or willing to synchronize repositories with Oracle ULN, there is also an ISO image available. The ISO image is built (and zipped) by Oracle and is only available for 'base' channel content. An ISO is ~1GB, about the same size as the sum of all packages of the corresponding base channel on ULN.
Using ISO or ULN

From an 'update' perspective there isn't much difference between using an ISO and an http repository; only the location (-l) changes. For local YUM repositories (synchronized from Oracle ULN): ./dbnodeupdate.sh -u -l http://myrepo/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/ For using an ISO (example with the 11.2.3.3.0 iso): ./dbnodeupdate.sh -u -l ./p17809253_112330_Linux-x86-64.zip The ISO file should not be unzipped, and there is no need to make a local 'loop mount' to use the ISO - this is all done by the dbnodeupdate.sh script.

Validations

For each type of repository some validation checks are done to see if it is a usable repository: checks for expected files, and whether the Exadata release available in the repository is later than the one currently installed - because if not, an update would not be possible. When specifying an http repository it's required to specify the top level directory containing the 'YUM metadata repository directory'. Typically this is the directory that has the 'repodir' directory in it (see example here). When an http location cannot be identified as a valid repository, you will see a suggestion on how to locate the right url. Rene Kundersma
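Putting the repository flavors together, a hypothetical run against a local repository synchronized from the 'latest' channel, pinned to a specific target release with "-t", might look like this (the URL path is illustrative - check your own repository layout):

```shell
# Update from a local YUM repository built from the ULN 'latest'
# channel; -t pins the target release instead of defaulting to the
# most recent release found in the channel.
./dbnodeupdate.sh -u \
    -l http://myrepo/yum/unknown/EXADATA/dbserver/11.2/latest/x86_64/ \
    -t 11.2.3.3.0
```

Remember that "-t" only works against a 'latest' channel; a 'base' channel contains exactly one release, so no target needs to be specified.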


Updating database servers using dbnodeupdate.sh - part 2

Within the Oracle Exadata Database Machine documentation and READMEs you will generally find two types of database server OS backup: the Exadata Database Machine Owners Guide (Chapter 7) has instructions to create backups stored outside of the dbserver, for example on an NFS mount (see 'Creating a Snapshot-Based Backup of Oracle Linux Database Server'), and dbserver_backup.sh, which creates a local copy of your active lvm. In this post I will explain the background and usage of both backups and how they integrate with dbnodeupdate.sh.

dbserver_backup.sh

For backing up and rolling back Exadata dbserver OS updates, the dbserver_backup.sh script is used by dbnodeupdate.sh. By default the dbserver_backup.sh script is executed for each upgrade. When executed (either manually or via dbnodeupdate.sh), the dbserver_backup.sh script creates a small snapshot of the 'active' sys lvm. The active sys lvm is the primary lvm that your current OS image is running on. For example:

[root@mynode ~]# imageinfo

Kernel version: 2.6.39-400.126.1.el5uek #1 SMP Fri Sep 20 10:54:38 PDT 2013 x86_64
Image version: 11.2.3.3.0.131014.1
Image activated: 2014-01-13 13:20:52 -0700
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys2

In the above example the active lvm is /dev/mapper/VGExaDb-LVDbSys2. The snapshot is created to have a consistent view of the root filesystem while the backup is made. After the snapshot is created, it is mounted by the same script and its contents are copied over to the inactive lvm. For lvm enabled systems, there are always two 'sys' lvm's: "VGExaDb-LVDbSys1" and "VGExaDb-LVDbSys2". VGExaDb-LVDbSys2 will automatically be created (on lvm enabled systems) if it does not exist yet. For the example above, the 'inactive' lvm will be VGExaDb-LVDbSys1. Now, depending on how many files there are in the root (/) filesystem (based on your active sys lvm), the backup times may vary.
Previous Grid and Database home installation zip files in /opt/oracle.SupportTools/onecommand will make the backup take longer (not the restore, which I will explain later). The same goes for those who have many small files (like mail messages in /var/spool) - the backup may take longer. One of the first steps the dbnodeupdate.sh script will do when executed is make a backup with this script. Now, if you want to shorten your downtime and make this backup before the start of your planned maintenance window, you have two options: either execute the dbserver_backup.sh script yourself, or use dbnodeupdate.sh with the "-b" flag to only make a backup beforehand. Example making a backup with dbnodeupdate.sh here (see 'Backup only' for 'Action'). When you then have the downtime for planned maintenance and already have the backup, you can let dbnodeupdate.sh skip the backup using the "-n" flag. Example skipping a backup with dbnodeupdate.sh here (see 'Create a backup: No'). Both sys lvm's are 30GB each. The snapshot that will be created is ~1GB. It is recommended to keep this in mind when claiming the free space in the volume group to make your /u01 filesystem as big as possible (the script checks for 2GB free space in the volume group). Now, when the update proceeds, the current active lvm will remain the active lvm. This is different from what happens on the cells, where the active lvm becomes inactive with an update. Typically you will only switch active sys lvm's when a rollback needs to be done on a db server, for example when an upgrade from 11.2.3.3.0 to 12.1.1.1.0 needs to be rolled back. What happens then is nothing more than 'switching' the filesystem labels of the sys lvm's, updating grub (the bootloader) and restoring the /boot directory (backed up earlier, also by dbnodeupdate.sh). A next boot will then have the previously inactive lvm as the active lvm.
Rolling back with dbnodeupdate.sh works as in the example here (a rollback from 11.2.3.2.1 to 11.2.2.4.2). After booting the node, it's recommended to run dbnodeupdate.sh again with the "-c" flag to relink the Oracle homes. Remarks: It's important to make a new backup before attempting a new update. The above covers only the sys lvm's; this means custom partitions, including /u01, are not backed up. For regular node updates this is enough to roll back to a previous release, but it's recommended to also have backups of other filesystems in line with your requirements. Nodes deployed without lvm will not have this option available. Rolling back db servers to previous Exadata releases with this procedure does not roll back the firmware.

Backup / restore procedure owners guide chapter 7

The backup made with the procedure in Chapter 7 of the Oracle Exadata Database Machine Owners Guide covers total node recovery. Like the dbserver_backup.sh procedure, a snapshot is used for a consistent view, but in this scenario a copy is placed outside of the db server (via NFS in this example). This procedure allows you to back up every filesystem you require. In case of emergency - such as a non-bootable system - the node can be booted with the diagnostic iso. For non-customized partitions an interactive script will then prompt you for backup details and recover the node completely. Steps for customized partitions (which are almost the same) can also be found in the owners guide.

Advantages / Disadvantages

Each type of backup serves a different goal, and these are just examples - customized backup and restore scenarios are of course also possible. The procedure described in the owners guide requires external storage, while the dbserver_backup.sh script uses space on the node - but that is also where the risk is. The backup made with dbserver_backup.sh works well for the purpose of rolling back upgrades.
With the automation of dbnodeupdate.sh, rollbacks can be done simply and quickly. However, loss of critical partitions and/or filesystems is not covered by this type of backup, so you may want to combine both types of OS backup. The general recommendation is to use the default built-in backup procedure when running dbnodeupdate.sh to make easy rollback possible, but also back up the entire OS and customized filesystems outside of the database server at an interval based on your own requirements. Rene Kundersma
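The flags discussed in this post might combine into a maintenance flow like this sketch (the ISO name is illustrative and reused from part 3 of this series):

```shell
# Ahead of the maintenance window: create only the local lvm backup,
# copying the active sys lvm to the inactive one ('Backup only' action).
./dbnodeupdate.sh -b

# During the window: run the update, skipping (and retaining) the
# backup already taken above, which shortens the downtime.
./dbnodeupdate.sh -u -l ./p17809253_112330_Linux-x86-64.zip -n

# After the post-update (or post-rollback) reboot: relink the Oracle homes.
./dbnodeupdate.sh -c
```

Because the "-n" run overwrites nothing, the pre-made backup on the inactive lvm remains available for a rollback; the external Chapter 7 backup still covers loss of custom partitions such as /u01.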


Updating database servers using dbnodeupdate.sh - part 1

In this and future posts I am planning to describe some new functionality and background of dbnodeupdate.sh, starting with Oracle Exadata Database Machine release 11.2.3.3.0. Some of this functionality is directly available to the operator via the interface and can actually be used via an argument; however, some of the recent changes are made to make patching even easier and to reduce human error and downtime. You may also find some of the 'new features' described in MOS 1553103.1 'Exadata Database Server Patching using the DB Node Update Utility'. Exclusion/Obsolete list: With updates to Exadata 11.2.3.3.0 or later, some packages on the database server will become obsolete. When updating a db server, the dbnodeupdate.sh script will mention an 'RPM exclusion list' and an 'RPM obsolete list' in its confirmation screen. The 'RPM obsolete list' lists all the packages that will be removed by default during the update to 11.2.3.3.0 (or later) when no action is taken by the operator. As an example - click here. If you would like to find out first which obsolete packages will be removed, choose 'n' when prompted to 'Continue ? [Y/n]'. This will stop your current patching session. Then look at the contents of the freshly created 'obsolete list' (it should have the runid of your dbnodeupdate session in its header). Example here. All the packages listed in the '/etc/exadata/yum/obsolete.lst' file will be removed by default. This has multiple reasons: mainly, these packages are no longer required for Exadata functioning, or they are considered a security risk. In case you would like to keep, for example, the 'java' package, you should create an 'exclusion file', which is '/etc/exadata/yum/exclusion.lst', and put the 'java*openjdk' rpm name (or wildcard) in it. Example:

[root@mynode u01]# cat /etc/exadata/yum/exclusion.lst
java-*-openjdk

When dbnodeupdate.sh is restarted you will see that the 'exclusion file' is detected (an example here).
All packages you have put in the 'exclusion file' will still be listed in the obsolete file, but will not be removed during the update to 11.2.3.3.0 or later when the confirmation screen says 'RPM exclusion list: In use (rpms listed in /etc/exadata/yum/exclusion.lst)'. Frequent releases of dbnodeupdate.sh - keep an eye on it: dbnodeupdate.sh and its MOS note were designed to be released frequently and quickly when needed. This way dbnodeupdate.sh can provide workarounds for known issues, emergency fixes, new features and best practices to the operator relatively quickly. This reduces the risk of people not keeping up to date with 'known issues' or not being sure whether they apply to them. Basically the same idea as with the patchmgr plugins. Also, unlike on the storage servers, some customization can be done on the db servers; best practices and lessons learned from this with regard to patching may also benefit your environment. For those engineers involved with upgrading Exadata database nodes, I'd like to emphasize to always check for the most recent release of dbnodeupdate.sh when doing a db server update. I have already seen some people watching the releases closely, which is definitely a good thing. Rene Kundersma
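To illustrate how a wildcard entry in exclusion.lst matches rpm names, here is a minimal shell sketch. It uses plain shell glob matching on an entry held in a variable; the real list lives in /etc/exadata/yum/exclusion.lst, and dbnodeupdate.sh's actual matching implementation may differ.

```shell
# Glob-style match of an rpm name against an exclusion.lst entry (sketch).
EXCL='java-*-openjdk'      # example wildcard entry from exclusion.lst
matches() {
  case "$1" in
    $EXCL) echo "excluded: $1" ;;   # would be kept on the node
    *)     echo "kept for removal: $1" ;;
  esac
}
matches java-1.6.0-openjdk   # → excluded: java-1.6.0-openjdk
matches sendmail             # → kept for removal: sendmail
```

The unquoted `$EXCL` in the `case` pattern is what makes the wildcard take effect.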


Enhanced lights-out patching for Exadata Storage Cells

The recently released 11.2.3.3.0 Oracle Exadata version comes with multiple enhancements in the patchmgr utility. For those who don't know what the patchmgr utility is: patchmgr is the tool Exadata Database Administrators use to apply (or roll back) an update to the Oracle Exadata Storage Cells. The enhanced patchmgr has an option to send the operator an email for the most significant patch and rollback state changes. This eliminates the need to monitor the screen while the update or rollback is in progress. In order to send the email you need to specify values for the '-smtp_from' and '-smtp_to' flags. Example as follows:

./patchmgr -cells ~/cell_group -patch -rolling \
 -smtp_from dbma@yourdomain.com \
 -smtp_to your-email@yourdomain.com

Patchmgr uses the sendmail package, which is installed by default on the Exadata Database Server. It will start sendmail if it's not already started, and assumes it's configured to deliver email for the domains you specify. You will recognize the format of the alerts patchmgr sends, as they have the same formatting as the ASR emails. The emails you receive when you enable this option can report the following states for a cell: Started, Start Failed, Waiting, Patching, Rolling Back, Failed, Succeeded, Not Attempted. The majority of the above states are probably self-explanatory, but explaining 'Not Attempted' may help: 'Not Attempted' will be the final state of a cell when patching or rolling back (in a rolling fashion) has failed on a cell before it. In that case the patching stops and the remaining cells (which have not been touched yet) end in the state 'Not Attempted'. So for example: imagine you are patching cel1, cel2 and cel3. When cel1 fails patching, the end state for cel2 and cel3 will be 'Not Attempted'. The following example will give you the idea: patching starts in a rolling fashion from 11.2.3.2.0 to 11.2.3.3.0 for cel1, cel2 and cel3.
Note: this is an example of the type of state changes you will see; actual patch timings are not correct and not relevant for this example. Also, actual states or formatting may change in your release.

1. patchmgr was started in a rolling fashion. The initial state for all cells is 'Started'. This should be seen as the initialization phase. You will receive an email as follows:
2. The actual patching has begun. Since this is in rolling fashion, patchmgr starts with the first cell listed, which is cel1. The other two cells wait until cel1 finishes.
3. After some time, patching of cel1 has completed. You will receive another status update in your inbox stating cel1 was updated successfully.
4. patchmgr continues with the next cell.
5. When it succeeds (or fails) you will receive a state update via mail.
6. The last cell to be patched is cel3:
7. The last email you receive states that patching was completed and lists the end state for all cells.

In case patching or rolling back fails, you will receive a pointer to a release-specific MOS note where you may find additional information. Rene Kundersma
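The 'Not Attempted' behaviour described above can be sketched as a small state loop. This is a toy simulation, not patchmgr itself: patch_cell is a hypothetical stand-in that pretends cel1 fails, so the remaining cells end up 'Not Attempted'.

```shell
# Toy simulation of rolling patching states; real work is done by patchmgr.
patch_cell() { [ "$1" != "cel1" ]; }   # hypothetical: pretend cel1 fails
failed=0
for cell in cel1 cel2 cel3; do
  if [ "$failed" -eq 1 ]; then
    echo "$cell: Not Attempted"        # rolling patching stopped earlier
  elif patch_cell "$cell"; then
    echo "$cell: Succeeded"
  else
    echo "$cell: Failed"
    failed=1                           # stop attempting remaining cells
  fi
done
```

Running this prints cel1 as Failed and cel2/cel3 as Not Attempted, matching the scenario in the text.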


Demo on Data Guard Protection From Lost-Write Corruption

Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. When lost writes occur, an I/O subsystem acknowledges the completion of the block write even though the write I/O did not occur in the persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures and volume manager errors. In the demo, a data block lost write occurs when an I/O subsystem acknowledges the completion of the block write while in fact the write did not occur in the persistent storage. When a primary database lost-write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) will stop and the standby will signal an ORA-752 error to explicitly indicate a primary lost write has occurred (preventing corruption from spreading to the standby database). Links: MOS (1302539.1) "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration" Demo MAA Best Practices Rene Kundersma
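As context for the demo, lost-write detection in a Data Guard configuration is controlled by the DB_LOST_WRITE_PROTECT initialization parameter. A minimal configuration sketch follows; the parameter is real, but consult MOS 1302539.1 for the recommended settings on the primary and standby in your configuration.

```
# init parameter sketch: enable lost-write detection (primary and standby).
# TYPICAL records block read information for read-write tablespaces;
# FULL also covers read-only tablespaces, at additional overhead.
DB_LOST_WRITE_PROTECT=TYPICAL
```

With this set on both sides, the standby can compare redo against its own block versions and raise ORA-752 as described above.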


Exadata X3, 11.2.3.2 and Oracle Platinum Services

Oracle recently announced an Exadata hardware update. The overall architecture remains the same, but some interesting hardware refreshes have been done, especially for the storage server. Each cell will now have 1600 GB of flash, which means an X3-2 full rack will have 20.3 TB of total flash! For all the details I would like to refer to the Oracle Exadata product page: www.oracle.com/exadata Together with the announcement of the X3 generation, a new Exadata release, 11.2.3.2, was made available. New Exadata systems will be shipped with this release and existing installations can be updated to it. As always there is a storage cell patch and a patch for the compute node, which again needs to be applied using YUM. Instructions and requirements for patching existing Exadata compute nodes to 11.2.3.2 using YUM can be found in the patch README. Depending on the release you have installed on your compute nodes, the README will direct you to a particular section in MOS note 1473002.1. MOS 1473002.1 should only be followed together with the instructions from the 11.2.3.2 patch README. As with 11.2.3.1.0 and 11.2.3.1.1, instructions are included to prepare your systems to use YUM for the first time in case you are still on release 11.2.2.4.2 or earlier. You will also find these One Time Setup instructions in MOS note 1473002.1. By default, all compute nodes that are updated to 11.2.3.2.0 will now have the UEK kernel. Before 11.2.3.2.0, the 'compatible kernel' was used for all compute nodes other than X2-8. For 11.2.3.2.0, customers have the choice to replace the UEK kernel with the Exadata standard 'compatible kernel', which is also in the ULN 11.2.3.2 channel. The recommendation is to use the kernel that is installed by default. One of the other great new things 11.2.3.2 brings is Writeback FlashCache (wbfc). By default wbfc is disabled after the upgrade to 11.2.3.2.
Enable wbfc after patching on the storage servers of your test environment and see the improvements this brings for your applications. Writeback FlashCache can be enabled by dropping the existing FlashCache, stopping the cellsrv process and changing the flashCacheMode attribute of the cell. Of course, stopping cellsrv can only be done in a controlled manner. Steps:

drop flashcache
alter cell shutdown services cellsrv (again, cellsrv can only be stopped in a controlled manner)
alter cell flashCacheMode = WriteBack
alter cell startup services cellsrv
create flashcache all

Going back to WriteThrough FlashCache is also possible, but only after flushing the FlashCache: alter cell flashcache all flush. The last item I'd like to highlight is already from a while ago, but a great thing to emphasize: Oracle Platinum Services. On top of the remote fault monitoring with faster response times, Oracle has included update and patch deployment services. These services are delivered by Oracle Advanced Customer Support at no additional cost for qualified Oracle Premier Support customers. References: 11.2.3.2.0 README Exadata YUM Repository Population, One-Time Setup Configuration and YUM upgrades 1473002.1 Oracle Platinum Services René Kundersma


New channels for Exadata 11.2.3.1.1

With the release of Exadata 11.2.3.1.0 back in April 2012, Oracle deprecated the minimal pack for the Exadata Database Servers (compute nodes). From that release on, the Linux Database Server updates are done using ULN and YUM. For the 11.2.3.1.0 release the ULN exadata_dbserver_11.2.3.1.0_x86_64_base channel was made available, and Exadata operators could subscribe their system to it via linux.oracle.com. With the new 11.2.3.1.1 release two additional channels are added:

a 'latest' channel (exadata_dbserver_11.2_x86_64_latest)
a 'patch' channel (exadata_dbserver_11.2.3.1.0_x86_64_patch)

The patch channel has the packages that are new or updated in 11.2.3.1.1 relative to the base channel. The latest channel has all the packages from the 11.2.3.1.0 base and patch channels combined. From here there are three possible situations a Database Server can be in before it can be updated to 11.2.3.1.1:

1. Database Server is on Exadata release < 11.2.3.1.0
2. Database Server is patched to 11.2.3.1.0
3. Database Server is freshly imaged to 11.2.3.1.0

In order to bring a Database Server to 11.2.3.1.1, the same YUM-based approach can be used in all three cases, with some minor differences. For Database Servers on a release < 11.2.3.1.0 the following high-level steps need to be performed: subscribe to el5_x86_64_addons, ol5_x86_64_latest and exadata_dbserver_11.2_x86_64_latest; create a local repository; point the Database Server to the local repository*; install the update. (* During this process a one-time action needs to be done; details in the README.) For Database Servers patched to 11.2.3.1.0: subscribe to the patch channel exadata_dbserver_11.2.3.1.0_x86_64_patch; create a local repository; point the Database Server to the local repository; update the system. For Database Servers freshly imaged to 11.2.3.1.0: subscribe to the patch channel exadata_dbserver_11.2.3.1.0_x86_64_patch; create a local repository; point the Database Server to the local repository; update the system. The difference between 'situation 2' (Database Server is patched to 11.2.3.1.0) and 'situation 3' (Database Server is freshly imaged to 11.2.3.1.0) is that in situation 2 the existing Exadata-computenode.repo file needs to be edited, while in situation 3 this file does not exist and needs to be created or copied. Another difference is that you will end up with more OFA packages installed in situation 2, because none are removed during the updating process. The YUM update functionality with the new channels is a great enhancement to the Database Server update procedure. As usual, the updates can be done in a rolling fashion, so no database service downtime is required. For detailed and up-to-date instructions always see the patch README's: 1466459.1, patch 13998727, 888828.1. Rene Kundersma
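For 'situation 2', the existing /etc/yum.repos.d/Exadata-computenode.repo is edited to point at the patch channel. The following is an illustrative sketch only: the section name matches the channel name from the text, but the baseurl shown is an assumption for a local repository; see the patch README for the real contents.

```
# /etc/yum.repos.d/Exadata-computenode.repo (illustrative sketch)
[exadata_dbserver_11.2.3.1.0_x86_64_patch]
name=Exadata database server 11.2.3.1.0 patch channel
baseurl=http://local-yum-server/yum/exadata_dbserver_11.2.3.1.0_x86_64_patch/
gpgcheck=1
enabled=1
```

In 'situation 3' a file with the same shape would be created from scratch (or copied) rather than edited.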


Updating Exadata Compute Nodes using ULN and YUM starting 11.2.3.1

With this post an update on Exadata planned maintenance: the new 11.2.3.1 ULN updating procedure for Exadata Compute Nodes. As you may already know, starting with Oracle Exadata Storage Server release 11.2.3.1 the 'minimal pack' is deprecated. From now on the (Linux) Compute Nodes will use Yellowdog Updater, Modified (YUM) to apply the new updates as rpm packages. These rpm packages that come from ULN may also contain firmware updates, which will be applied in the same updating procedure. In order to update the Exadata Compute Nodes via YUM, you need a repository to download the software from. Of course Oracle provides its customers ULN access to make this really easy. It may however happen that this requirement for http access to a web location from the Compute Nodes is not possible for some reason. For these situations Oracle worked out an alternative solution. In this post I'd like to discuss the solution of setting up a YUM repository located in the customer's data center: a local system that can download the updates from ULN and act as a YUM repository for the Compute Nodes, a 'man in the middle' between ULN and the Compute Nodes. For installations planning to set up an internal/local YUM repository and currently not on 11.2.3.1, there are two notes that need to be reviewed carefully before the patching starts. README for patch 13741363: especially chapter 3, "Performing One-Time Setup on Database Servers of Oracle Exadata Database Machine". The steps described here are one-time-only steps to prepare and populate the local YUM repository server that will be used for ALL the Compute Nodes. Best is to install the YUM repository server on a separate Linux machine running OEL 4 or OEL 5. Basically the steps are: go to http://linux.oracle.com, use your hardware CSI for the registration steps, register the YUM repository server, subscribe to the right channels and populate the repository.
After the download is finished the repository is built; now this 'local' repository needs to be made available over http for the Compute Nodes to download from on the local network. After the setup of the repository, V2/X2-2 and X2-8 Compute Nodes require a one-time setup so they are able to use YUM from then on. The one-time steps remove a set of packages that can cause the further setup to fail, but also add some packages to support the installations to be done using YUM onwards. Please pay close attention to one of the most important steps of this one-time setup: editing the repository configuration file /etc/yum.repos.d/Exadata-computenode.repo and making sure it points to your local YUM repository if you don't have direct ULN access. README for 13536739: especially chapter 6, "Updating Oracle Linux Database Servers in Oracle Exadata Database Machine". After setting up the repository and enabling the Compute Nodes to use YUM, there is one thing left to do: the update itself. A key step in this process is enabling each Compute Node to use the new repository. After this, some packages (ofed) may need to be downgraded depending on your installation, and some checks for kernel versions need to be done. From then on the system can be updated using a simple 'yum install' to install the main Exadata database server rpm with all dependent packages. At the end of the installation the Compute Node is rebooted automatically. At this moment I have to make some remarks/disclaimers: please see the notes for 13741363 and 13536739 for the exact steps; I am only highlighting the overall procedure of setting up a local repository and configuring the Compute Nodes to use it. If you are able to connect your Compute Nodes to ULN directly, there is no need to set up a repository and the related steps can be skipped. For X2-2 (and earlier) and X2-8 the 'updating Oracle Linux Database Server' steps are slightly different.
Oracle has provided 'helper' scripts to automate the steps described above, making it even easier. The YUM updating procedure only applies to Linux Compute Nodes with images newer than 11.2.2.4.2 (for updates to versions lower than 11.2.3.1 you still need to use the minimal pack). Rene Kundersma
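The image-version constraint in the last remark (YUM updating only for images newer than 11.2.2.4.2) can be checked with a small sketch using GNU sort's version ordering. The function name is hypothetical; use imageinfo to find the actual image version on a node.

```shell
# Sketch: does this image version qualify for the YUM updating procedure?
# (Applies only to images newer than 11.2.2.4.2, per the note above.)
qualifies() {
  [ "$1" != "11.2.2.4.2" ] &&
  [ "$(printf '%s\n' "$1" 11.2.2.4.2 | sort -V | tail -n1)" = "$1" ]
}
qualifies 11.2.3.1.0 && echo "use YUM" || echo "use minimal pack"   # → use YUM
qualifies 11.2.2.3.5 && echo "use YUM" || echo "use minimal pack"   # → use minimal pack
```

sort -V orders dotted version strings numerically, so the newer version sorts last.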


11.2.0.3 available for the Oracle Exadata Database Machine

This post is about upgrades. The first thing I'd like to mention is the availability of 11.2.0.3 for the Oracle Exadata Database Machine as of today (Jan 3rd 2012). With this, it is now possible to upgrade your Oracle Exadata Database Machine to 11.2.0.3. To best help you with the upgrade we have released a MOS note that describes the prerequisites and steps to go from 11.2.0.1 or 11.2.0.2 installations to 11.2.0.3. Please see the MOS note for the best approach on how to apply this upgrade in your environment. The MOS note applies to both Solaris 11 Express and Linux installations and upgrades both the Grid Infrastructure and the Database home on V2, X2-2 and X2-8. Please see MOS note "11.2.0.1/11.2.0.2 to 11.2.0.3 Database Upgrade on Exadata Database Machine" (Doc ID 1373255.1) for more details. For V1 users we have recently also published MOS note 888834.1. This document contains the steps to upgrade the HP Oracle Database Machine to Oracle Exadata Storage Server Software 11.2.X. The steps include upgrading Oracle Cluster Ready Services (CRS) 11.1.0.7.X and Oracle Automatic Storage Management (ASM) 11.1.0.7.X to Oracle Grid Infrastructure 11.2.0.2, and Oracle RDBMS from 11.1.0.7 to Oracle RDBMS 11.2.0.2. After upgrading to 11.2.0.2 on V1 hardware using MOS note 888834.1, MOS note 1373255.1 can be used to upgrade to 11.2.0.3. Please see MOS note "HP Oracle Database Machine 11.2 Upgrade" (Doc ID 888834.1). René Kundersma


Oracle Enterprise Manager Cloud Control 12c Setup Automation kit for Exadata

With this post I'd like to mention the availability of the Enterprise Manager 12c setup automation kit for Exadata, and explain how easy it is to deploy the agent for a complete Exadata rack of any size with all its components. After the discovery the system can be managed by EM; the deployment I did took only 30 minutes for a quarter rack with all components. You can obtain the "Oracle Enterprise Manager Cloud Control 12c Setup Automation kit for Exadata" from MOS by searching for patch 12960596. The deployment of the agent starts with the Exadata configurator sheet. This is the sheet ACS already used for the Exadata deployment. The sheet now also creates a configuration output file for automatic agent and OMS deployment. The file I am talking about is called em.params. This file can be used as input for the OMS or EM 12c deployment scripts. In this posting I will only discuss the agent deployment. The em.params file will have the following information:

# This is em.param file
# Written : 26-10-2011
EM_VERSION=1.0
OMS_LOCATION=REMOTE_CORP_GC
EM_BASE=/u01/app/oracle/product/EMbase
OMS_HOST=my-host.us.oracle.com
OMS_PORT=4900
EM_CELLS=(cel12 cel13 cel14)
EM_COMPUTE_ILOM_NAME=(db07-c db08-c)
EM_COMPUTE_ILOM_IP=(a.b.c.d a.b.c.e)
machinemodel="X2-2 Quarter rack"
EM_USER=oracle
EM_PASSWORD=passwd
swikvmname=sw-kvm
swikvmip=a.b.c.f
swiipname=sw-ip
swiipip=a.b.c.g
swiib2name=sw-ib2
swiib2ip=a.b.c.d
swiib3name=sw-ibh
swiib3ip=a.b.c.i
pduaname=pdu1
pduaip=a.b.c.j
pdubname=pdu2
pdubip=a.b.c.k

Of course, the names and IP numbers are changed in this example to hide any specific information. When you install the kit, the configuration file is expected in /opt/oracle.SupportTools/onecommand; this is where your OneCommand deployment files will be anyway. During installation some additional checks will be done, like:

Trying to ping Oracle Management Server Host my-host.us.oracle.com
Checking oracle user info
Searching for KVM Switch information by key swikvmname from /opt/oracle.SupportTools/onecommand/em.param
Searching for KVM Switch IP information by key swikvmip from /opt/oracle.SupportTools/onecommand/em.param
Searching for Cisco Switch information by key swiipname from /opt/oracle.SupportTools/onecommand/em.param
Searching for Cisco Switch IP information by key swiipip from /opt/oracle.SupportTools/onecommand/em.param
Searching for IB2 Switch information by key swiib2name from /opt/oracle.SupportTools/onecommand/em.param
Searching for IB2 Switch IP information by key swiib2ip from /opt/oracle.SupportTools/onecommand/em.param
Searching for IB3 Switch information by key swiib3name from /opt/oracle.SupportTools/onecommand/em.param
Searching for IB3 Switch IP information by key swiib3ip from /opt/oracle.SupportTools/onecommand/em.param
Searching for PDUA Name information by key pduaname from /opt/oracle.SupportTools/onecommand/em.param
Searching for PDUA IP information by key pduaip from /opt/oracle.SupportTools/onecommand/em.param
Searching for PDUB Name information by key pdubname from /opt/oracle.SupportTools/onecommand/em.param
Searching for PDUB IP information by key pdubip from /opt/oracle.SupportTools/onecommand/em.param
Searching for ILOM Names information by key EM_COMPUTE_ILOM_NAME from /opt/oracle.SupportTools/onecommand/em.param
Searching for ILOM IPs information by key EM_COMPUTE_ILOM_IP from /opt/oracle.SupportTools/onecommand/em.param
Verifying ssh setup..
Verifying SSH setup for root

When the validations are done, the 12.1.0.1.0_AgentCore_226.zip file is transferred to the compute nodes and the 12c agent is installed. Like the 11.1 installation, still no agents are installed on the storage cells; EM uses the compute nodes as a proxy to the cells, and it's up to you whether you let EM choose which nodes or choose yourself. After deployment, there are three easy steps left in the EM GUI:

Exadata discovery
GI (Cluster) discovery
RAC discovery

The discovery steps above are guided by a clear GUI. All the discovery process needs is access to a compute node; from there all the Exadata components are discovered automatically. The file EM needs to be available for that is /opt/oracle.SupportTools/onecommand/databasemachine.xml. After installation of the kit and discovery in EM 12c, you have a nice overview of the rack with all the components in it, easy to drill down and manage from there. The example below is a quarter rack. René Kundersma Oracle MAA Development
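The kit's lookups ("Searching for ... by key ... from em.param") amount to reading key=value pairs from the file. A minimal sketch of such a lookup, using a temporary copy with sample values (the real file lives in /opt/oracle.SupportTools/onecommand):

```shell
# Sketch: look up keys from an em.params-style file (sample values below).
cat > /tmp/em.params <<'EOF'
EM_VERSION=1.0
OMS_HOST=my-host.us.oracle.com
OMS_PORT=4900
EM_CELLS=(cel12 cel13 cel14)
EOF
get_param() { sed -n "s/^$1=//p" /tmp/em.params; }
echo "OMS: $(get_param OMS_HOST):$(get_param OMS_PORT)"   # → OMS: my-host.us.oracle.com:4900
```

get_param is a hypothetical helper for illustration, not part of the kit itself.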


Software Updates, Best Practices and Notes

With a very busy time behind and ahead of me, I'd still like to mention a couple of documents we recently published. OPlan is now available for Linux as well as Solaris X86-64 for the Oracle Exadata Database Machine. The current version of OPlan is 11.2.0.2.5. For more details, see my previous post. The two notes mentioned below explain how OPlan can be used and demonstrate how OPlan works to clone the Grid Infrastructure on the Database Machine, patch the cloned home and switch to it. This is called out-of-place patching. MOS Note 1306814.1 - Oracle Software Patching with OPLAN MOS Note 1136544.1 - Minimal downtime patching via cloning 11gR2 ORACLE_HOME directories Supported software versions for Exadata are still listed in MOS note 888828.1; this is where you will find a reference to the latest Exadata Storage Server software release: 11.2.2.3.5, available via patch 12849110. Please see note 1334254.1 for the details. For the Database Server, the latest patch made available for 11.2.0.2 is Bundle Patch 10, available via patch 12773458 (Linux version). Please know that BPs can be applied by EM, which makes patching easier. For ASR we now have release version 3.3; details can be found in note 1185493.1. The latest PDU metering unit firmware is 1.04 and available as patch 12871297. For MAA best practices related to the Database Machine we released a really good document called "Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles", which you can find here. Also related to best practices is the document "MAA Best Practices for Oracle Exadata Database Machine", which you can find here. Rene Kundersma


Failover capability for plugins Exadata & EMGC Rapid deployment

With this entry two items around Exadata management. First: a pointer to My Oracle Support note 1110675.1. This note is called "Oracle Database Machine Monitoring Best Practices" and has a lot of information on Exadata management. In this note you will read about the plugins we have to monitor the various Exadata components and how to configure them; you will also read about creating a 'Dashboard'. The topic I'd like to cover here is the steps to set up automated fail-over of targets. This is because, by default, a target monitored using a plug-in in Enterprise Manager Grid Control is bound to the agent used during the initial installation. So what you basically want to do is make sure another agent (on another node) is available to take over the monitored targets when the first agent fails, providing HA for your monitoring. For this I'd like to mention two items: "Package to add failover capability for Plug-ins" (bottom of note 1110675.1) and OEM_Exadata_Dashboard_Deployment_v104.pdf. In the file 'OEM_Exadata_Dashboard_Deployment' you read about the following steps: adding the PL/SQL package into the EM repository (the package mentioned above), registering the package, setting up agents the target can use for fail-over, creating a failover method and notification rule, and creating a fail-back method and notification rule. Once you have executed the steps above, the second agent should be able to take over the targets initially assigned to the first agent when the first fails. You can test this by stopping the first agent; in Grid Control you will find the targets under the second agent. Making sure the monitored targets can fail over to a surviving agent is recommended, but going that route, it would also make sense to make sure Grid Control (OMS) and its database are both HA. The second topic for this entry is the "EM 11GC Setup automation kit" for Exadata. With the latest Oracle Exadata Database Machine Configurator a new output file (em.param) is generated.
This file can be used as input for installing Oracle Enterprise Manager Grid Control Agents on the Oracle Exadata Database Machine. With this kit, Exadata customers that have an existing Oracle Enterprise Manager Grid Control environment can have agents set up rapidly by the Oracle or partner teams doing the Exadata deployment. For the sake of completeness: Oracle Enterprise Manager Grid Control is not a requirement for Exadata but a recommendation. References: MOS 1110675.1 Demo: here Updates: Oracle Enterprise Manager Grid Control Setup Automation kit for Exadata Kit Rene Kundersma


Applying bundle patches on Exadata using Enterprise Manager Grid Control

With this post a small first note on applying Exadata Bundle Patches using Enterprise Manager Grid Control 11g. Did you know:

Exadata BPs can be applied in a rolling fashion using Grid Control, comparable to 'opatch auto'
The patching procedure always updates OPatch to the latest version automatically
To some extent, conflicts are checked to prevent problems during patching
Before applying a patch you can run 'analyze' only and verify whether the BP can be applied
When a SQL script has to run as part of the BP, that's also taken care of

Interested? Here are some resources to help you forward: Driving Database Patch Management with EM 11g; Configuration management and provisioning of Sun Oracle Exadata Database Machine Using Enterprise Manager. More important, check first for required patches and which BPs are tested: Grid Control OMS and Agent Patches required for setting up Provisioning, Patching and Cloning (Doc ID 427577.1); Patch Oracle Exadata Database Machine via Oracle Enterprise Manager 11gR1 (11.1.0.1) (Doc ID 1265998.1); Patch Requirements for Setting up Monitoring and Administration for Exadata (Doc ID 1323298.1). Still, there's only one single source of truth when it comes to which patches are recommended for Exadata: Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions (ID 888828.1). A small demo of a comparable 'RAC Rolling patch' can be found on OTN. Rene Kundersma


Patching for Exadata: introducing oplan

With this update I would like to introduce you to Oplan. Oplan is a utility that facilitates the application of bundle patches on Exadata compute nodes via OPatch. It helps you with the patch process by generating step-by-step instructions that tell you how to apply a bundle patch in your environment. Oplan is supported from release 11.2.0.2 and available as a download here.

The steps to install Oplan are straightforward:

1. Download Oplan to one of your Oracle homes (for example the GI home) as the Oracle home owner, say /u01/app/11.2.0/grid/oplan
2. Append the Oplan directory to your path
3. As the Oracle home owner, generate the Oplan installation instructions, for example:

oplan generateApplySteps /home/oracle/11828582 (where 11828582 is BP5)

Oplan will now generate the specific installation instructions for your environment. You will notice that Oplan provides you with several options for how to apply the patch. In my case Oplan offered:

- in-place instructions, using auto patch or not
- in-place instructions, rolling or non-rolling
- out-of-place patching instructions

For each 'strategy' Oplan lists the advantages and disadvantages. For example, an out-of-place instruction may cost you more time, but is easier to roll back. The number of steps for each option is also listed. Once you have made up your mind and chosen a specific patch strategy, Oplan will give you step-by-step instructions.

Each patch plan generated by Oplan begins with a pre-patch apply phase, then a patch apply phase, and ends with a post-patch apply phase. In the pre-patch apply phase the current situation of the Oracle homes to be patched is verified. Some checks that are done:

- The version of OPatch is verified
- A check to see that there are no pending conflicts

Next to this, OCM is configured. In the patch apply phase the actual patch is applied. This is followed by the post-patch apply phase, where srvctl is used to reconfigure the new Oracle home to be used (in my case) and /etc/oratab is modified. For all of this, Oplan generates instructions for all of the nodes in the cluster, even how to copy over the patch itself.

Last but not least, some small notes:

- Oplan can create rollback instructions for the patch
- There is no support for Oracle Data Guard
- Patch README files should be used to double-check the proposed actions; in case of any conflict, the patch README should be considered the truth

Altogether, Oplan comes in very handy by summarizing the different patch strategies available. When you have made your choice, Oplan tells you exactly which commands to execute. This limits errors and reduces the time it takes to prepare. Please see note 1306814.1 and check it out yourself.

Rene


ASR / SNMP on Exadata

Recently I worked with ASR on Exadata for multiple customers. ASR is great functionality that enables your systems to alert Oracle when hardware failures occur. Sun hardware has been using ASR for some time, and since 2009/2010 it is also available for Exadata. My goal is not to rewrite the documentation, so for general information I refer to this link.

So, what is this note about? It is about two things I experienced while setting up ASR. I would like to share my experience so others can be successful with ASR quickly as well. (It is however expected that things will be updated in the latest documentation.)

First, imagine yourself configuring SNMP traps to be sent to ASR. In this situation, be sure not to erase any existing SNMP subscriber settings, for example the subscription to Enterprise Manager Grid Control or whatever else you already subscribed to. So, when you have documentation stating to execute "cellcli -e alter cell snmpSubscriber=(host=, port=)", be sure to include the existing snmpSubscribers when they exist. The syntax allows this:

snmpSubscriber=((host=host [,port=port] [,community=community][,type=ASR])[,(host=host[,port=port][,community=community][,type=ASR])...)

Second, when configuring snmpSubscribers using dcli you have to work with a backslash to escape characters. Be sure to verify your SNMP settings after setting them, because you might end up with a backslash in the 'asrs.state' file, stating 'public\' instead of 'public'. Having the extra backslash after the word 'public' of course doesn't help when sending SNMP traps:

dcli -g dbs_group -l root -n "/opt/oracle.cellos/compmon/exadata_mon_hw_asr.pl -validate_snmp_subscriber -type asr"
cn38: Sending test trap to destination - 173.25.100.43:162
cn38: (1). count - 50 Failed to run "/usr/bin/snmptrap -v 2c -c public\ -M "+/opt/oracle.cellos/compmon/" -m SUN-HW-TRAP-MIB 173.25.100.43:162 "" SUN-HW-TRAP-MIB::sunHwTrapTestTrap sunHwTrapSystemIdentifier s " Sun Oracle Database Machine secret" sunHwTrapChassisId s "secret" sunHwTrapProductName s "SUN FIRE X4170 SERVER" sunHwTrapTestMessage s "This is a test trap. Exadata Compute Server: cn38.oracle.com ""
cn38: getaddrinfo: +/opt/oracle.cellos/compmon/ Name or service not known
cn38: snmptrap: Unknown host (+/opt/oracle.cellos/compmon/)

Altogether, ASR is a great addition to Exadata that I highly recommend. Some excellent documentation on the implementation details is available on My Oracle Support. See "Oracle Database Machine Monitoring (Doc ID 1110675.1)".

Rene Kundersma
Technical Architect
Oracle Technology Services
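The escaping pitfall above can be reproduced without any Exadata hardware. This is a minimal bash sketch (the values are made up for illustration, no cellcli or dcli involved) showing how a stray backslash that survives shell quoting corrupts the community string, exactly as seen in the asrs.state file:

```shell
# Stray-backslash demo: the value that should be "public" arrives with
# the escape character still attached, as in the failing snmptrap run.
intended="public"
actual='public\'            # what over-escaped dcli quoting produced

if [ "$actual" = "$intended" ]; then
  printf '%s\n' "community string OK"
else
  printf '%s\n' "community string corrupted: <$actual>"
fi
```

Running it prints the corrupted value with the trailing backslash, which is the tell-tale to look for when validating the SNMP subscribers.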


11gR2 Data Guard: Restarting DUPLICATE After a Failure

One of the great new features that comes in very handy now that databases get larger and larger is RMAN's capability to duplicate from an active database, and even to restart a duplicate when it fails. Imagine the problem I had lately: I used the duplicate from active database feature and had to wait some 6 hours before all datafiles were transferred. At the end of the process an error occurred because of the syntax. While this error was easy to solve, I was afraid I would have to redo the complete procedure and transfer the 2.5 TB again.

Well, 11gR2 RMAN surprised me when I re-ran my command with the following output:

Using previous duplicated file +DATA/fin2prod/datafile/users.2968.719237649 for datafile 12 with checkpoint SCN of 183289288148
Using previous duplicated file +DATA/fin2prod/datafile/users.2703.719237975 for datafile 13 with checkpoint SCN of 183289295823

Above I show only a small snippet, but what happened is that RMAN smartly skipped all files that were already transferred! The documentation says this:

"RMAN automatically optimizes a DUPLICATE command that is a repeat of a previously failed DUPLICATE command. The repeat DUPLICATE command notices which datafiles were successfully copied earlier and does not copy them again. This applies to all forms of duplication, whether they are backup-based (with and without a target connection) or active database duplication. The automatic optimization of the DUPLICATE command can be especially useful when a failure occurs during the duplication of very large databases. If a DUPLICATE operation fails, you need only run the DUPLICATE again, using the same parameters contained in the original DUPLICATE command."

Please see chapter 23 of the 11g Release 2 Database Backup and Recovery User's Guide for more details. By the way, be very careful with the duplicate command: a small mistake in one of the 'convert' parameters can potentially overwrite your target's controlfile without prompting!

Rene Kundersma
Technical Architect
Oracle Technology Services
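After a restarted DUPLICATE it is useful to see at a glance how much work was skipped. This bash sketch greps an RMAN log for the optimization message shown above (the sample log is inlined here, taken from the snippet in this post; on a real run you would point it at your RMAN output file):

```shell
# Count how many datafiles a restarted DUPLICATE skipped, by grepping
# the RMAN output for the "Using previous duplicated file" message.
log=$(mktemp)
cat > "$log" <<'EOF'
Using previous duplicated file +DATA/fin2prod/datafile/users.2968.719237649 for datafile 12 with checkpoint SCN of 183289288148
Using previous duplicated file +DATA/fin2prod/datafile/users.2703.719237975 for datafile 13 with checkpoint SCN of 183289295823
EOF

skipped=$(grep -c '^Using previous duplicated file' "$log")
echo "datafiles skipped on restart: $skipped"
```

For the two sample lines this reports 2 skipped datafiles; against a 2.5 TB duplicate the count tells you immediately whether the restart optimization kicked in.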


Installation and testing RAC One Node

Okay, time to write something nice about RAC One Node. In order to test RAC One Node on my laptop, I just:

- installed Oracle VM 2.2
- created two OEL 5.3 images

The two images are fully prepared for Oracle 11gR2 Grid Infrastructure and 11gR2 RAC, including four shared disks for ASM and private NICs. How would I test this all so fast without virtualization?! In order to view the captures best, I recommend zooming in a couple of times (Ctrl-++).

After installation of the Oracle 11gR2 Grid Infrastructure and a "software only installation" of 11gR2 RAC, I installed patch 9004119, as you can see in the opatch lsinv output. This patch has the scripts required to administer RAC One Node; you will see them later. At the moment we have them available for Linux and Solaris. I can imagine the next major patch set of 11gR2 will have these scripts available by default.

After installation of the patch, I created a RAC database with an instance on one node. Please note that the "Global Database Name" has to be the same as the SID prefix and should be less than or equal to 8 characters.

When the database creation is done, I first create a service. This is because RAC One Node needs to be "initialized" each time you add a service. After creating the service, a script called raconeinit needs to run from $RDBMS_HOME/bin. This is a script supplied by the patch; it will configure the database to run on other nodes. If you run raconeinit again after initialization, you will see the configuration is already in place.

So, now the configuration is ready and we are ready to run 'Omotion' and move the service around from one node to the other (yes, VM competitor: the service is available during the migration, nice right?). Omotion is started by running Omotion; with Omotion -v you get verbose output. During the migration with Omotion you will see two instances active (RKDB_1 and RKDB_2), and after the migration there is only one instance left, on the new node.

Of course, regular node failures will also initiate a failover, all covered by the default Grid Infrastructure functionality. One thing I noticed is that if you kill a node that runs an active instance, the instance is failed over nicely by RAC One Node, but the name of the instance failing over stays the same, so this is different behavior than the migration.

P.S. 1: Another funny thing I noticed is that after installing 11gR2 Grid Infrastructure, Oracle removes some essential lines from grub.conf. What you get when you try to start a VM after this is: "error: boot loader didn't return any data". Luckily Oracle creates a backup of the grub.conf in /boot/grub named grub.conf.orabackup, so you need to restore that file in the VM images itself. This can be done with the great lomount option. First make sure you create the entry in the fstab of your Oracle VM server:

/var/ovs/mount/8BF866167A1746FE8FFA0EAA20939C55/running_pool/GRIDNODE01/System.img /tmp/q ext3 defaults 1 1

Then execute: lomount -diskimage ./System.img -partition 1

This mounts the image to /tmp/q so you can restore the file.

P.S. 2: I would like to demo some of this; hopefully I can do so at the next Planboard Symposium, June 8. See this link.

Rene Kundersma
Oracle Technology Services, The Netherlands


Using DNFS for test purposes

Because of other priorities, such as bringing the first V2 Database Machine in the Netherlands into production, I spent less time on my blog than planned. I do however like to tell some things about DNFS, the built-in NFS client we have in the Oracle RDBMS since 11.1. What DNFS is and how to set it up can all be found here. As you see, this documentation is actually the "Clusterware Installation Guide". I think that is weird; I would expect this to be part of the Admin Guide, especially the "Tablespace" chapter.

I do however want to show what I did not find in the documentation that quickly (and solved after talking to my famous colleague "the prutser"). First, a quick setup:

1. The standard ODM library needs to be replaced with the NFS ODM library:

[oracle@rkdb01 ~]$ cp $ORACLE_HOME/lib/libodm11.so $ORACLE_HOME/lib/libodm11.so_stub
[oracle@rkdb01 ~]$ ln -s $ORACLE_HOME/lib/libnfsodm11.so $ORACLE_HOME/lib/libodm11.so

After changing to this library you will notice the following in your alert.log:

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 2.0

2. The intention is to mount the datafiles over normal NAS (like NetApp). But in case you want to test yourself and use an exported NFS filesystem, it should look like the following:

[oracle@rkdb01 ~]$ cat /etc/exports
/u01/scratch/nfs *(rw,sync,insecure)

Please note the "insecure" option in the export, since you will not be able to use DNFS without it if you export a filesystem from a host. Without the "insecure" option the NFS server considers the port used by the database "insecure" and the database is unable to acquire the mount. This is what you will see in the alert.log when creating the file:

Direct NFS: NFS3ERR 1 Not owner. path rkdb01.nl.oracle.com mntport 930 nfsport 2049

3. Before configuring the new Oracle stanza for DNFS we still need to configure a regular kernel NFS mount:

[root@rkdb01 ~]# cat /etc/fstab | grep nfs
rkdb01.nl.oracle.com:/u01/scratch/nfs /incoming nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

4. Then a so-called Oracle 'oranfstab' needs to be created that specifies the available exports to use:

[oracle@rkdb01 ~]$ cat /etc/oranfstab
server: rkdb01.nl.oracle.com
path: 192.168.1.40
export: /u01/scratch/nfs mount: /incoming

5. Create a tablespace with a datafile on the NFS location:

SQL> create tablespace rk datafile '/incoming/rk.dbf' size 10M;

Tablespace created.

As said, be aware that if you do not specify the insecure option (like I did at first) you will still see output from the query v$dnfs_servers:

SQL> select * from v$dnfs_servers;

ID SVRNAME              DIRNAME          MNTPORT NFSPORT WTMAX RTMAX
-- -------------------- ---------------- ------- ------- ----- -----
 1 rkdb01.nl.oracle.com /u01/scratch/nfs 684     2049    32768 32768

But querying v$dnfs_files and v$dnfs_channels will not return any result, and indeed, you will see the following message in the alert log when you create a file:

Direct NFS: NFS3ERR 1 Not owner. path rkdb01.nl.oracle.com mntport 930 nfsport 2049

After correcting the export with the "insecure" option:

SQL> select * from v$dnfs_files;

FILENAME         FILESIZE PNUM SVR_ID
---------------- -------- ---- ------
/incoming/rk.dbf 10493952 20   1

Rene Kundersma
Oracle Technology Services, The Netherlands
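The "insecure" pitfall above is easy to lint for before bouncing the database. This is a minimal bash sketch (it checks a scratch copy rather than the real /etc/exports, and the export path is the one from this post) that flags exports DNFS would fail on:

```shell
# Flag NFS exports that lack the "insecure" option, which Direct NFS
# needs when exporting from a plain Linux host (the database connects
# from an unprivileged port). Uses a scratch file, not /etc/exports.
exports=$(mktemp)
echo '/u01/scratch/nfs *(rw,sync)' > "$exports"

check_export() {
  case "$1" in
    *insecure*) echo "OK for DNFS: $1" ;;
    *)          echo "WARNING, DNFS may fail (add insecure): $1" ;;
  esac
}

while read -r line; do check_export "$line"; done < "$exports"
```

For the sample line above this prints the WARNING; adding "insecure" to the option list flips it to OK.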


Adding iSCSI storage without restarting the iSCSI service

A colleague asked me how to add iSCSI storage without stopping the iSCSI service itself. Below is how this works. I used tgt as the iSCSI service.

Step 1: Create an iSCSI target (this is what you do on the source):

[root@gridnode01 ~]# tgtadm --lld iscsi --op new --mode target --tid 1 -T 192.168.1.172:vol1
[root@gridnode01 ~]# dd if=/dev/zero of=/tmp/vol1.dd bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0285 seconds, 368 MB/s
[root@gridnode01 ~]# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /tmp/vol1.dd
[root@gridnode01 ~]# tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

Step 2: Install the iSCSI initiator utilities on the target node and configure the service:

[root@gridnode02 ~]# rpm -i iscsi-initiator-utils-6.2.0.868-0.7.el5.i386.rpm
[root@gridnode02 ~]# chkconfig iscsi on
[root@gridnode02 ~]# service iscsi start
iscsid is stopped
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: iscsiadm: No records found!
[ OK ]

Step 3: Discover the target just created and log in to the portal:

[root@gridnode02 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.171
192.168.1.171:3260,1 192.168.1.172:vol1
[root@gridnode02 ~]# iscsiadm --mode node --targetname 192.168.1.172:vol1 --portal 192.168.1.171:3260 --login
Logging in to [iface: default, target: 192.168.1.172:vol1, portal: 192.168.1.171,3260]
Login to [iface: default, target: 192.168.1.172:vol1, portal: 192.168.1.171,3260]: successful

Now, let's see the new SCSI disk coming in:

[root@gridnode02 ~]# tail -f /var/log/messages
Jan 10 21:01:19 gridnode02 kernel: sda: Write Protect is off
Jan 10 21:01:19 gridnode02 kernel: SCSI device sda: drive cache: write back
Jan 10 21:01:19 gridnode02 kernel: SCSI device sda: 20480 512-byte hdwr sectors (10 MB)
Jan 10 21:01:19 gridnode02 kernel: sda: Write Protect is off
Jan 10 21:01:19 gridnode02 kernel: SCSI device sda: drive cache: write back
Jan 10 21:01:19 gridnode02 kernel: sda: unknown partition table
Jan 10 21:01:19 gridnode02 kernel: sd 0:0:0:1: Attached scsi disk sda
Jan 10 21:01:19 gridnode02 iscsid: received iferror -38
Jan 10 21:01:19 gridnode02 last message repeated 2 times
Jan 10 21:01:19 gridnode02 iscsid: connection1:0 is operational now

[root@gridnode02 ~]# fdisk -l /dev/sda
Disk /dev/sda: 10 MB, 10485760 bytes
1 heads, 20 sectors/track, 1024 cylinders
Units = cylinders of 20 * 512 = 10240 bytes
Disk /dev/sda doesn't contain a valid partition table

Now, add another volume on the source:

[root@gridnode01 ~]# tgtadm --lld iscsi --op new --mode target --tid 2 -T 192.168.1.172:vol2
[root@gridnode01 ~]# dd if=/dev/zero of=/tmp/vol2.dd bs=1M count=10
[root@gridnode01 ~]# tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /tmp/vol2.dd
[root@gridnode01 ~]# tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL

On the target, re-run the discovery:

[root@gridnode02 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.171
192.168.1.171:3260,1 192.168.1.172:vol1
192.168.1.171:3260,1 192.168.1.172:vol2

And log in again, on the new target:

[root@gridnode02 ~]# iscsiadm --mode node --targetname 192.168.1.172:vol2 --portal 192.168.1.171:3260 --login
Logging in to [iface: default, target: 192.168.1.172:vol2, portal: 192.168.1.171,3260]
Login to [iface: default, target: 192.168.1.172:vol2, portal: 192.168.1.171,3260]: successful

Let's check the new disk:

[root@gridnode02 ~]# tail /var/log/messages
Jan 10 21:04:33 gridnode02 kernel: SCSI device sdb: drive cache: write back
Jan 10 21:04:33 gridnode02 kernel: SCSI device sdb: 20480 512-byte hdwr sectors (10 MB)
Jan 10 21:04:33 gridnode02 kernel: sdb: Write Protect is off
Jan 10 21:04:33 gridnode02 kernel: SCSI device sdb: drive cache: write back
Jan 10 21:04:33 gridnode02 kernel: sdb: unknown partition table
Jan 10 21:04:33 gridnode02 kernel: sd 1:0:0:1: Attached scsi disk sdb
Jan 10 21:04:33 gridnode02 kernel: sd 1:0:0:1: Attached scsi generic sg3 type 0
Jan 10 21:04:33 gridnode02 iscsid: received iferror -38
Jan 10 21:04:33 gridnode02 last message repeated 2 times
Jan 10 21:04:33 gridnode02 iscsid: connection2:0 is operational now

[root@gridnode02 ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 10 MB, 10485760 bytes
1 heads, 20 sectors/track, 1024 cylinders
Units = cylinders of 20 * 512 = 10240 bytes
Disk /dev/sdb doesn't contain a valid partition table

Rene Kundersma
Oracle Technology Services, The Netherlands
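The discover-and-login pair used twice above can be wrapped in a small helper. This bash sketch (the portal and target names are the example values from this post) only prints the commands it would run, so it can be tried on any machine without tgt or an initiator installed:

```shell
# Dry-run helper: print the iscsiadm commands needed to pick up a new
# target without restarting the iscsi service. Nothing is executed.
attach_target() {
  portal="$1"   # e.g. 192.168.1.171:3260
  target="$2"   # e.g. 192.168.1.172:vol2
  echo "iscsiadm -m discovery -t sendtargets -p ${portal%:*}"
  echo "iscsiadm --mode node --targetname $target --portal $portal --login"
}

attach_target 192.168.1.171:3260 192.168.1.172:vol2
```

Dropping the two echo prefixes turns it into the real thing; keeping the discovery and login together avoids the "No records found" state shown in the service start output.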


Relocating Grid Infrastructure (part 2)

In my previous post, I moved the 11gR2 Grid Infrastructure (GI) home to another location. Unfortunately, as I showed, during my actions the re-run of root.sh caused the diskgroup holding the clusterware files (voting disk / OCR) to be recreated. Recreating this diskgroup would mean losing my database data in it. My colleagues from the development organization helped me out and told me how I could solve this. Below you can find my notes. Please note: I am NOT saying this is a supported action. If you perform these actions, you are on your own. I can only recommend you test things first.

The current location for the GI home is /u01/app/11.2.0/grid:

[root@rac1 ~]# cat /etc/oratab | grep app
+ASM1:/u01/app/11.2.0/grid:N # line added by Agent
ORCL:/u01/app/oracle/product/11.2.0/dbhome_1:N # line added by Agent

So, first let's stop the whole clusterware stack (on both nodes):

[root@rac1 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
[root@rac1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2673: Attempting to stop 'ora.eons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2677: Stop of 'ora.eons' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'rac1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'rac1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

Now, create a new directory for the GI home, and move the GI home into it:

[root@rac1 ~]# mkdir /u01/rk
[root@rac1 ~]# mv /u01/app/11.2.0/grid /u01/rk
[root@rac1 ~]# . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base for ORACLE_HOME=/u01/rk/grid is /u01/app/oracle

Now, edit the file crsconfig_params so that it reflects the new GI home path:

vi $ORACLE_HOME/crs/install/crsconfig_params => and change ORACLE_HOME=/u01/rk/grid

If you would run $OH/crs/install/rootcrs.pl -patch -hahome /u01/rk/grid now, you would be in trouble, like I was. This is what my alert file said:

[ohasd(26079)]CRS-1339:Oracle High Availability Service aborted due to an unexpected error [Failed to initialize Oracle Local Registry]. Details at (:OHAS00106:) in /u01/rk/grid/log/rac1/ohasd/ohasd.log.

So, what was in my ohasd.log:

2010-01-05 14:48:04.516: [ OCROSD][3046704848]utopen:6m':failed in stat OCR file/disk /u01/app/11.2.0/grid/cdata/rac1.olr, errno=2, os err string=No such file or directory
2010-01-05 14:48:04.516: [ OCROSD][3046704848]utopen:7:failed to open any OCR file/disk, errno=2, os err string=No such file or directory
2010-01-05 14:48:04.516: [ OCRRAW][3046704848]proprinit: Could not open raw device

As you can see, I forgot to change the location of my OLR, so let's do it:

vi /etc/oracle/olr.loc and change:
olrconfig_loc=/u01/rk/grid/cdata/rac1.olr
crs_home=/u01/rk/grid

After this, I ran the command again and succeeded:

[root@rac1 grid]# $OH/crs/install/rootcrs.pl -patch -hahome /u01/rk/grid
2010-01-05 14:58:28: Parsing the host name
2010-01-05 14:58:28: Checking for super user privileges
2010-01-05 14:58:28: User has super user privileges
Using configuration parameter file: crs/install/crsconfig_params
CRS-4123: Oracle High Availability Services has been started.

See the new status:

[root@rac1 grid]# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.eons
               ONLINE  OFFLINE      rac1
               ONLINE  OFFLINE      rac2
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  OFFLINE      rac1
               ONLINE  OFFLINE      rac2
ora.registry.acfs
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        OFFLINE OFFLINE
ora.orcl.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac1

Rene Kundersma
Oracle Technology Services, The Netherlands
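The OLR fix above boils down to rewriting two paths in olr.loc. Here is a minimal bash sketch of that edit, run against a scratch copy so it can be tried safely (on a real node you would back up and edit /etc/oracle/olr.loc itself, with the stack down):

```shell
# Rewrite olr.loc to point at the relocated Grid home.
# Works on a scratch copy; the paths are the ones from this post.
olr=$(mktemp)
cat > "$olr" <<'EOF'
olrconfig_loc=/u01/app/11.2.0/grid/cdata/rac1.olr
crs_home=/u01/app/11.2.0/grid
EOF

old=/u01/app/11.2.0/grid
new=/u01/rk/grid
sed -i "s#$old#$new#g" "$olr"   # '#' delimiter, since paths contain '/'
cat "$olr"
```

After the sed both lines reference /u01/rk/grid, which is exactly the state that lets rootcrs.pl -patch find the OLR and start OHAS.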


Relocating Grid Infrastructure

Below I will describe the actions one needs to perform when the Oracle 11gr2 Grid Infrastructure ORACLE HOME needs to be moved to a new location.Please note:- You will loose already registered resources (like databases) from OCR (they need to be added back again).- Default LISTENER needs to be re-configured (re-run netca from Grid Inf. home)- You will have downtime during the action- Your ASM diskgroup that holds your cluster disks will be recreated !So, please again, note the ASM diskgroup will be recreated. In case your data is there, you will loose it.These are my steps:1. On all nodes, but the last run the delete force command. This will stop all cluster resources and deconfigure the Oracle clusterware stack on the node.[root@rac1 ]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -delete -forceOutput:2009-12-29 14:52:16: Parsing the host name2009-12-29 14:52:16: Checking for super user privileges2009-12-29 14:52:16: User has super user privilegesUsing configuration parameter file: ./crsconfig_paramsPRCR-1035 : Failed to look up CRS resource ora.cluster_vip.type for 1PRCR-1068 : Failed to query resourcesCannot communicate with crsdPRCR-1070 : Failed to check if resource ora.gsd is registeredCannot communicate with crsdPRCR-1070 : Failed to check if resource ora.ons is registeredCannot communicate with crsdPRCR-1070 : Failed to check if resource ora.eons is registeredCannot communicate with crsdACFS-9200: SupportedCRS-4535: Cannot communicate with Cluster Ready ServicesCRS-4000: Command Stop failed, or completed with errors.CRS-4544: Unable to connect to OHASCRS-4000: Command Stop failed, or completed with errors.Successfully deconfigured Oracle clusterware stack on this node2. On the last node in run command again with the "-lastnode" option. [root@rac2 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -delete -force -lastnode3. On both nodes create a new directory and move the ORACLE HOME to there: [root@rac1-2 ~]# mkdir /u01/rk; mv /u01/app/11.2.0/grid /u01/rk4. 
On both nodes create a new symlink for JRE; [root@rac1-2 ~]# cd /u01/rk/grid; rm -f JRE; ln -s /u01/rk/grid/jdk/jre/ JRE5. As oracle run clone.pl, this will change all the hard coding: export ORACLE_BASE=/u01/app/oracle export ORACLE_HOME=/u01/rk/grid cd /u01/rk/grid/clone/bin; run then: perl clone.pl ORACLE_HOME=$ORACLE_HOME output like this:[oracle@rac1 bin]$ perl clone.pl ORACLE_HOME=/u01/rk/grid ORACLE_BASE=/u01/app/oracle./runInstaller -clone -waitForCompletion "ORACLE_HOME=/u01/rk/grid" "ORACLE_BASE=/u01/app/oracle" -defaultHomeName -silent -noConfig -nowait Starting Oracle Universal Installer...Checking swap space: must be greater than 500 MB. Actual 1955 MB PassedPreparing to launch Oracle Universal Installer from /tmp/OraInstall2009-12-29_03-09-42PM. Please wait ...Oracle Universal Installer, Version 11.2.0.1.0 ProductionCopyright (C) 1999, 2009, Oracle. All rights reserved.You can find the log of this install session at: /u01/app/oraInventory/logs/cloneActions2009-12-29_03-09-42PM.log.................................................................................................... 100% Done.Installation in progress (Tuesday, December 29, 2009 3:10:15 PM EST)......................................................................... 73% Done.Install successfulLinking in progress (Tuesday, December 29, 2009 3:10:19 PM EST)Link successfulSetup in progress (Tuesday, December 29, 2009 3:11:39 PM EST)................. 100% Done.Setup successfulEnd of install phases.(Tuesday, December 29, 2009 3:13:38 PM EST)WARNING:The following configuration scripts need to be executed as the "root" user./u01/rk/grid/root.shTo execute the configuration scripts: 1. Open a terminal window 2. Log in as "root" 3. Run the scriptsRun the script on the local node first. 
After successful completion, you can run the script in parallel on all the other nodes.The cloning of OraHome1 was successful.Please check '/u01/app/oraInventory/logs/cloneActions2009-12-29_03-09-42PM.log' for more details.6. Verify your crsconfig_params (/u01/rk/grid/crs/install/crsconfig_params) and make sure this file is available on both nodes.7. Relink the grid infra home on both nodes As root:# cd /u01/rk/grid/crs/install# perl rootcrs.pl -unlockAs the grid infrastructure for a cluster owner:$ export ORACLE_HOME=cd /u01/rk/grid$ cd /u01/rk/grid/bin/relink8. On each node run root.sh, begin with node1: cd /u01/rk/grid; ./root.sh Check /u01/rk/grid/install/root_rac1_2009-12-29_15-37-11.log for the output of root script run rootcrs as requested on each node:/u01/rk/grid/perl/bin/perl -I/u01/rk/grid/perl/lib -I/u01/rk/grid/crs/install /u01/rk/grid/crs/install/rootcrs.plOutput node1:[root@rac1 grid]# /u01/rk/grid/perl/bin/perl -I/u01/rk/grid/perl/lib -I/u01/rk/grid/crs/install /u01/rk/grid/crs/install/rootcrs.pl2009-12-29 16:05:51: Parsing the host name2009-12-29 16:05:51: Checking for super user privileges2009-12-29 16:05:51: User has super user privilegesUsing configuration parameter file: /u01/rk/grid/crs/install/crsconfig_paramsLOCAL ADD MODE Creating OCR keys for user 'root', privgrp 'root'..Operation successful.Adding daemon to inittabCRS-4123: Oracle High Availability Services has been started.ohasd is startingCRS-2672: Attempting to start 'ora.gipcd' on 'rac1'CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'CRS-2676: Start of 'ora.gipcd' on 'rac1' succeededCRS-2676: Start of 'ora.mdnsd' on 'rac1' succeededCRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeededCRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeededCRS-2672: Attempting to start 'ora.cssd' on 'rac1'CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'CRS-2676: Start of 'ora.diskmon' on 
'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
Successful addition of voting disk 034bbf3dcd1f4f9fbf1afa38db67caad.
Successfully replaced voting disk group with +DATA.
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name   Disk group
--  -----    -----------------                ---------   ---------
 1. ONLINE   034bbf3dcd1f4f9fbf1afa38db67caad (/dev/sdb1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676:
Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded
rac1 2009/12/29 16:11:02 /u01/rk/grid/cdata/rac1/backup_20091229_161102.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ...
succeeded

Output node 2:

[root@rac2 grid]# /u01/rk/grid/perl/bin/perl -I/u01/rk/grid/perl/lib -I/u01/rk/grid/crs/install /u01/rk/grid/crs/install/rootcrs.pl
2009-12-29 16:13:33: Parsing the host name
2009-12-29 16:13:33: Checking for super user privileges
2009-12-29 16:13:33: User has super user privileges
Using configuration parameter file: /u01/rk/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
rac2 2009/12/29 16:16:42 /u01/rk/grid/cdata/rac2/backup_20091229_161642.olr
Preparing
packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

9. Finished:

[root@rac1 trace]# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.eons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.registry.acfs
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        OFFLINE OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac1

Rene Kundersma
Oracle Technology Services, The Netherlands
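A note on checking the final state: the eyeball check of the crsctl listing can also be scripted. Below is a minimal sketch, not part of the original post; the check_crs_states helper name and the awk filter are my own assumptions. It scans the output of crsctl status resource -t and flags any resource reported as INTERMEDIATE or UNKNOWN (ora.gsd and ora.oc4j being OFFLINE is normal in 11.2, so plain OFFLINE is not flagged).

```shell
# Sketch (hypothetical helper): print any line of "crsctl status resource -t"
# whose state field is INTERMEDIATE or UNKNOWN. Plain ONLINE/OFFLINE lines
# pass silently.
check_crs_states() {
  awk '{
         for (i = 1; i <= NF; i++)
           if ($i == "INTERMEDIATE" || $i == "UNKNOWN") { print; break }
       }'
}

# Usage on a cluster node (grid home path as used in this article):
#   /u01/rk/grid/bin/crsctl status resource -t | check_crs_states
```

An empty result means no resource is stuck in a transitional or unknown state.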

Below I will describe the actions one needs to perform when the Oracle 11gR2 Grid Infrastructure ORACLE_HOME needs to be moved to a new location. Please note:
- You will lose already registered...