Scaling Database Hardware

I stumbled across this rather convoluted statement from a storage vendor:

The HBAs, HCAs, or NICs on a host must support the type of port (SAS, InfiniBand, iSCSI, or Fibre Channel) to which they connect on the controller-drive tray. For the best performance, the HBAs, HCAs, or NICs should support the highest data rate supported by the HICs to which they connect.

which got me wondering, how scary can scaling database hardware get? And how do we ensure that (the inevitable) hardware obsolescence won’t leave us scouring online auction sites for that last compatible component?

If you have to scale a bespoke configuration, I wish you luck. All I can suggest is test, test, test; have a valid backup; and keep a bulletproof back-out plan in case it all goes awry.

If you’re on Exadata and need to scale, you’re in a much better situation, because you are part of a carefully curated ecosystem. All you need to decide is what you want to scale, which follows from your reasons to expand. For example, you may be running low on storage for your data warehouse. Or you may need more cores to consolidate another round of databases into your Exadata. You don't have to review and reassemble the entire stack of components, hoping for the best. Every Exadata configuration has been tested thoroughly by a team of top-notch engineers. The hardware has been assembled in a meticulously run factory that produces hundreds of identical configurations every month. And you have 24/7 access to a single point of contact for any and every issue that may arise.

To expand Exadata hardware, follow these steps (see the diagram below):

(Diagram: Exadata elastic configurations scale compute and storage)

  1. Start with what you have
  2. Add Database and/or Storage Servers to meet your increased requirements
  3. If a full rack is not enough, add another rack and repeat step 2 (and maybe step 3) until you get to where you need to be
  4. There is no step 4: I just wanted to add that steps 2 and 3 scale linearly, which is why this is so easy

That's it, you're done. 

The following sections elaborate on how to expand Exadata in specific scenarios.

Database CPU Scaling

One of the most common expansion use cases is needing to increase the number of CPUs to run your database workload. There are a few ways this can be achieved:

Capacity on Demand - If you are running Capacity on Demand licensing on your Exadata with only a subset of cores active, you can purchase more licenses and activate additional cores in your existing servers. Though a server reboot is required, it can be done in a rolling fashion across the cluster to avoid database outages. See the documentation for details.
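If it helps to picture the rolling flow, here is a minimal sketch in Python. It is not the official procedure: the host names and target core count are made up, and while coreCount and pendingCoreCount are the DBMCLI attributes the Capacity on Demand documentation refers to, verify the exact steps for your software version.

```python
#!/usr/bin/env python3
"""Rough sketch of a rolling Capacity on Demand core increase.

NOT the official procedure -- follow the Exadata documentation.
Host names are hypothetical; verify the dbmcli attribute names
(coreCount, pendingCoreCount) against your software version.
"""
import subprocess

DB_SERVERS = ["dbadm01", "dbadm02"]   # hypothetical host names
TARGET_CORES = 24                      # newly licensed core count per server

def dbmcli(host: str, command: str) -> str:
    """Run a dbmcli command on a database server over ssh."""
    out = subprocess.run(
        ["ssh", host, "dbmcli", "-e", command],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

for host in DB_SERVERS:
    current = dbmcli(host, "LIST DBSERVER attributes coreCount")
    print(f"{host}: active cores = {current}")
    # Stage the new core count; it takes effect after the next reboot.
    dbmcli(host, f"ALTER DBSERVER pendingCoreCount = {TARGET_CORES}")
    # Reboot this server here -- one at a time, after draining its
    # database instances -- so the cluster keeps serving the workload.
```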

In-Rack Additional Database Servers - If you’ve already activated all cores on your database servers, there are a few paths you can take to add more CPUs, depending on your current configuration:

  • Eighth Rack - The Eighth Rack, our smallest footprint, consists of two single-CPU database servers plus storage servers. Expand via the Eighth Rack to Quarter Rack Database Server Upgrade, which adds the second CPU to each database server and gives you access to the additional cores. Both database servers get this upgrade, so the available core count doubles. The servers need downtime to add the hardware; however, in a cluster this can be done in a rolling fashion so your database doesn’t see an outage. After this upgrade you have the equivalent of a pair of Quarter Rack database servers, and you can scale CPU further by adding additional database servers (see the next item).
  • Quarter and/or Elastic Rack - Earlier generations of Exadata required predefined expansion configurations, i.e., eighth to quarter, quarter to half, half to full, with N Database Servers and M Storage Servers in each step up. As of X5 (and also available to X4 upgrades), scaling is more flexible. All you need to do is add one or more additional Database Servers of the latest generation. These can then be used to balance existing workload and/or add additional database workload. Existing servers continue running and the new database servers are "hot-added" to the setup. There are a few prerequisites, e.g., ensuring you're running the latest Exadata System Software version (see documentation). Unlike the initial quote at the top, there's no need to look for compatible NICs, or figure out what firmware to use on the internal RAID controller. Exadata's smart System Software ensures the correct firmware is loaded, and compatibility between all components has been tested and verified.
  • Multi-rack - If you are at capacity within one rack, just add another rack. Any combination of Database and Storage Servers can be connected across the InfiniBand fabric.

CPU scaling using Capacity on Demand, add-on Database Servers, and multi-racking allows you to scale the number of CPUs available for database workload from a minimum of two into the thousands.
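To put rough numbers behind that range, the arithmetic is just multiplication. The per-rack and per-server figures below are illustrative assumptions, not data-sheet values for any particular generation.

```python
def total_db_cores(racks: int, db_servers_per_rack: int, cores_per_server: int) -> int:
    """Back-of-the-envelope count of database cores in an elastic, multi-rack setup."""
    return racks * db_servers_per_rack * cores_per_server

# Illustrative numbers only -- check the data sheet for your generation.
print(total_db_cores(racks=1, db_servers_per_rack=2, cores_per_server=48))   # a small start
print(total_db_cores(racks=8, db_servers_per_rack=19, cores_per_server=48))  # well into the thousands
```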

Database Memory Scaling

Maybe you’ve got enough cores, but you want to squeeze in some extra PDBs and need more memory on the database servers, or you realize you could give your application an extra shot in the arm by using Database In-Memory. The later generations of Exadata use DDR4 memory and can scale up to 1.5TB per database node for the 2-socket variant (bumping the CPU:RAM ratio from a healthy 8:1 to 32:1). For the 8-socket variant, the memory can be doubled from 3TB to a whopping 6TB, increasing the CPU:RAM ratio from 16:1 to 32:1. In-Memory, here we come!

Again, how far you can go depends on your current configuration, but adding memory couldn't be easier. For Exadata X7 and X8 there's a single memory kit (12 x 64GB DIMMs) for all occasions; you just need to determine the number of kits. The table shows how this works:

 
What I have | What I want | What I add
N x Exadata X7-2/X8-2 Database Nodes with 384GB per node | 768GB per node | N x 1 memory kits
N x Exadata X7-2/X8-2 Database Nodes with 384GB per node* | 1.5TB per node | N x 2 memory kits
N x Exadata X7-2/X8-2 Database Nodes with 768GB per node | 1.5TB per node | N x 1 memory kits
N x Exadata X7-8/X8-8 Database Nodes with 3TB per node | 6TB per node | N x 4 memory kits

* If you are starting with Eighth Rack database nodes, you are limited to max 768GB per node.
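If you prefer code to tables, the same information can be captured in a small lookup. This is just the table above restated in Python, with hypothetical function names.

```python
# Memory-kit lookup, straight from the table above (per X7/X8 database node).
# Keys are (current GB per node, target GB per node); values are kits per node.
KITS_PER_NODE = {
    (384, 768): 1,
    (384, 1536): 2,     # not available on Eighth Rack nodes (max 768GB per node)
    (768, 1536): 1,
    (3072, 6144): 4,    # X7-8/X8-8: 3TB to 6TB per node
}

def kits_to_order(nodes: int, current_gb: int, target_gb: int) -> int:
    """Total memory kits for N database nodes, per the table above."""
    return nodes * KITS_PER_NODE[(current_gb, target_gb)]

# Example: four X7-2 nodes going from 384GB to 1.5TB each -> 8 kits.
print(kits_to_order(nodes=4, current_gb=384, target_gb=1536))
```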

Earlier generations have somewhat different memory kit configurations, but upgrading is still simple. As with the previous hardware upgrades, individual server outages are required, but they can be done in a rolling fashion to ensure continued database service.

Client Network Scaling

There are a few situations where additional physical network connections to the database servers are required. Perhaps a new security policy requires that backup networks be physically isolated, or you are expanding into another network segment and don't want to disrupt existing VLAN setups. Whatever the reason, it is easy with Exadata. As described in the documentation, there is a free PCIe slot that can be used to add network connectivity. In Exadata X8, the options are:

  • The Oracle Quad Port 10GBase-T card - which provides 4 x RJ45 ports at 10Gb
  • The Oracle Dual Port 25 Gb Ethernet Adapter - which provides 2 x SFP28 ports at 25/10Gb

Note: the free PCIe slot is not available in the Eighth Rack configuration.

The database server automatically detects the new card after installation and exposes the additional network interfaces. Similar to CPU scaling, the installation requires a database server shutdown, but again, rolling maintenance ensures no database outages.
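If you want to sanity-check that the new ports are visible after the rolling maintenance, a quick, generic look at Linux sysfs is enough. This sketch assumes nothing Exadata-specific, and the interface names you'll see depend on the card and slot.

```python
import os

# List the network interfaces the OS currently exposes; after installing the
# extra card (and the rolling reboot), the new ports should show up here.
# Interface names vary by card and slot, so treat this purely as a check.
for iface in sorted(os.listdir("/sys/class/net")):
    print(iface)
```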

Storage Capacity Scaling

Probably the most common scenario in scaling database hardware is adding storage. I don't think I've ever met a customer who achieved a balance between incoming and outgoing data and thus avoided the need to add storage (if you have, please let me know via a comment below).

Exadata's storage solution is second to none: it maintains truly linear performance when scaling up and achieves massive compression (averaging around 10:1) with Hybrid Columnar Compression.

Adding storage capacity couldn't be easier (are you seeing a pattern here?). 

No matter which generation of Exadata hardware you have under support, you just add current-generation storage servers and either create a new ASM disk group or extend your existing ASM disk groups (see this MAA white paper for best practices), and keep on going.

You can add storage via any current generation High Capacity, Extreme Flash, or Extended Storage Server (XT) per your cost and performance/capacity considerations. 
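For a rough sizing sanity check before you order, you can reason about raw versus usable versus effective capacity along these lines. The mirroring factors are standard ASM normal and high redundancy, the per-server raw figure is purely illustrative, and the 10:1 figure is just the average quoted above, not a promise for your data.

```python
def usable_tb(raw_tb_per_server: float, servers: int, redundancy: str = "high") -> float:
    """Rough usable capacity after ASM mirroring (ignores free-space reserves)."""
    mirror = {"normal": 2, "high": 3}[redundancy]   # 2-way vs. 3-way mirroring
    return raw_tb_per_server * servers / mirror

def effective_tb(usable: float, hcc_ratio: float = 10.0) -> float:
    """Effective capacity if the data compresses at the given HCC ratio."""
    return usable * hcc_ratio

# Illustrative raw capacity per storage server -- check the data sheet for yours.
usable = usable_tb(raw_tb_per_server=168.0, servers=3, redundancy="high")
print(f"usable ~{usable:.0f} TB, effective ~{effective_tb(usable):.0f} TB at ~10:1 HCC")
```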

As with database scaling, if you hit the physical limit of a rack, add a storage expansion rack to the network fabric. The Storage Expansion Rack (SER for short) is exactly that: an Exadata rack dedicated to storage. (It's pretty much an Exadata Elastic Rack without the database servers, and with the spine switch added in by default. See details here.)

Storage Server Memory Scaling

By the way, the memory kits I mentioned above for the Database Servers can also be applied to the X7 and X8 High Capacity (HC) and Extreme Flash (EF) Storage Servers, bumping the memory from 192GB to 768GB per server. Why scale storage memory? Because this

Exadata Scales for Best Performance and Reliability

Exadata is engineered for maximum flexibility, and that technical rigor delivers legendary performance and reliability. While it is simple to add capacity as demand requires, Exadata also ensures that during hardware expansion, replacement, and maintenance the system remains up, with minimal performance impact.

We are always interested in your feedback. You are welcome to engage with us via Twitter @GavinAtHQ or @ExadataPM and by comments here.
