I stumbled across this rather convoluted statement from a storage vendor:
> The HBAs, HCAs, or NICs on a host must support the type of port (SAS, InfiniBand, iSCSI, or Fibre Channel) to which they connect on the controller-drive tray. For the best performance, the HBAs, HCAs, or NICs should support the highest data rate supported by the HICs to which they connect.
which got me wondering, how scary can scaling database hardware get? And how do we ensure that (the inevitable) hardware obsolescence won’t leave us scouring online auction sites for that last compatible component?
If you have to scale a bespoke configuration, I wish you luck. All I can suggest is: test, test, test; have a valid backup; and have a bulletproof back-out plan in case it all goes awry.
If you’re on Exadata and need to scale, you’re in a much better situation, because you are part of a carefully curated ecosystem. All you need to decide is what you want to scale, which follows from your reasons to expand. For example, you may be running low on storage for your data warehouse. Or you may need more cores to consolidate another round of databases into your Exadata. You don't have to review and reassemble the entire stack of components, hoping for the best. Every Exadata configuration has been tested thoroughly by a team of top-notch engineers. The hardware has been assembled in a meticulously run factory that produces hundreds of identical configurations every month. And you have 24/7 access to a single point of contact for any and every issue that may arise.
To expand Exadata hardware, follow these steps (see diagram):
That's it, you're done.
The following sections elaborate on how to expand Exadata in specific scenarios.
One of the most common expansion use cases is the need to increase the number of CPUs available to your database workload. There are a few ways to achieve this:
Capacity on Demand - If you are running Capacity on Demand licensing on your Exadata with only a subset of cores active, you can purchase more licenses and activate additional cores in your existing servers. A server reboot is required, but it can be done in a rolling fashion across the cluster to avoid database outages. See the documentation for details.
In-Rack Additional Database Servers - If you’ve already activated all cores on your database servers, there are a few paths you can take to add more CPUs, depending on your current configuration:
CPU scaling using Capacity on Demand, add-on database servers, and multi-racking lets you scale the number of CPUs available for database workloads from a minimum of two into the thousands.
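The arithmetic behind these options is straightforward. Below is a minimal Python sketch of the two scaling paths; the per-server core counts used here (14 active cores under Capacity on Demand, 48 cores fully activated) are illustrative assumptions, not official figures for any particular model:

```python
# Back-of-the-envelope sketch of the CPU scaling paths described above.
# Core counts per server are illustrative assumptions; check the Exadata
# data sheet for your generation before planning an expansion.

def total_active_cores(servers: int, active_cores_per_server: int) -> int:
    """Total database cores across all database servers."""
    return servers * active_cores_per_server

# Capacity on Demand: activate more cores on the servers you already have.
before = total_active_cores(servers=2, active_cores_per_server=14)
after_cod = total_active_cores(servers=2, active_cores_per_server=48)

# Adding database servers: more servers at the same per-server core count.
after_add = total_active_cores(servers=4, active_cores_per_server=48)

print(before, after_cod, after_add)  # 28 96 192
```

Either path changes only one variable at a time, which is what makes the rolling, outage-free expansion possible.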
Maybe you’ve got enough cores, but you want to squeeze in some extra PDBs and need more memory on the database server, or you realize you could give your application an extra shot in the arm with Database In-Memory. The later generations of Exadata use DDR4 memory and can scale up to 1.5TB per database node for the 2-socket variant, bumping memory per core from a healthy 8GB to 32GB. For the 8-socket variant, memory can be doubled from 3TB to a whopping 6TB per node, increasing memory per core from 16GB to 32GB. In-Memory here we come!
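The memory-per-core figures are easy to check. This quick sketch assumes 48 cores for the 2-socket servers and 192 cores for the 8-socket servers; confirm the core counts against the data sheet for your exact model:

```python
# Verifying the memory-per-core arithmetic above. The core counts
# (48 for 2-socket, 192 for 8-socket) are assumptions for illustration.

def gb_per_core(total_gb: int, cores: int) -> float:
    """Memory per core in GB for a given database server configuration."""
    return total_gb / cores

print(gb_per_core(384, 48))    # 8.0  -> 2-socket starting point
print(gb_per_core(1536, 48))   # 32.0 -> 2-socket maxed out at 1.5TB
print(gb_per_core(3072, 192))  # 16.0 -> 8-socket base at 3TB
print(gb_per_core(6144, 192))  # 32.0 -> 8-socket doubled to 6TB
```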
Again, how far you can go depends on your current configuration, but adding memory couldn't be simpler. For Exadata X7 and X8 there is a single memory kit (12 x 64GB DIMMs) for all occasions; you just need to determine the number of kits. The table shows how this works:
|What I have|What I want|What I add|
|---|---|---|
|N Exadata X7-2/X8-2 database nodes with 384GB per node|768GB per node|N x 1 memory kits|
|N Exadata X7-2/X8-2 database nodes with 384GB per node*|1.5TB per node|N x 2 memory kits|
|N Exadata X7-2/X8-2 database nodes with 768GB per node|1.5TB per node|N x 1 memory kits|
|N Exadata X7-8/X8-8 database nodes with 3TB per node|6TB per node|N x 4 memory kits|

* If you are starting with Eighth Rack database nodes, you are limited to a maximum of 768GB per node.
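The table translates directly into a small lookup. Here is a sketch in Python, with the kit counts taken straight from the table and capacities expressed in GB (1.5TB as 1536, 3TB as 3072, 6TB as 6144):

```python
# The memory-kit table above, expressed as a lookup keyed on
# (current GB per node, target GB per node). Values come straight
# from the table; other combinations are out of scope for this sketch.

KITS_PER_NODE = {
    (384, 768): 1,     # X7-2/X8-2: 384GB -> 768GB
    (384, 1536): 2,    # X7-2/X8-2: 384GB -> 1.5TB (not Eighth Rack)
    (768, 1536): 1,    # X7-2/X8-2: 768GB -> 1.5TB
    (3072, 6144): 4,   # X7-8/X8-8: 3TB -> 6TB
}

def kits_needed(nodes: int, current_gb: int, target_gb: int) -> int:
    """Total 12 x 64GB memory kits for an upgrade across `nodes` servers."""
    return nodes * KITS_PER_NODE[(current_gb, target_gb)]

# Example: four 2-socket nodes going from the base 384GB to 1.5TB each.
print(kits_needed(nodes=4, current_gb=384, target_gb=1536))  # 8
```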
Earlier generations have somewhat different memory kit configurations, but upgrading is still simple. As with the hardware upgrades above, individual server outages are required, but they can be done in a rolling fashion to ensure continued database service.
There are a few situations where additional physical network connections to the database server are required. Perhaps your new security policy requires that backup networks be physically isolated, or you are expanding into another network segment and don't want to disrupt existing VLAN setups. Whatever the reason, it is easy with Exadata. As described in the documentation, there is a free PCIe slot that can be used to expand the networking configuration. In Exadata X8, the options are:
Note: the free PCIe slot is not available in the 1/8th rack configuration.
The database server automatically detects the new card after installation and exposes the additional network interfaces. Similar to CPU scaling, the installation requires a database server shutdown, but again, rolling maintenance ensures no database outages.
Probably the most common scenario in scaling database hardware is adding storage. I don't think I've ever met a customer who achieved a balance between incoming and outgoing data and thus avoided the need to add storage (if you have, please let me know via a comment below).
Exadata's storage solution is second to none: it maintains truly linear performance when scaling up, and achieves massive compression (averaging around 10:1) with Hybrid Columnar Compression.
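To see what a ~10:1 ratio means for capacity planning, a quick back-of-the-envelope calculation helps. Treat this as illustration only; actual compression ratios vary widely with your data:

```python
# Rough effective-capacity arithmetic for Hybrid Columnar Compression,
# using the ~10:1 average ratio mentioned above. Real ratios depend
# heavily on the data, so size conservatively.

def effective_capacity_tb(raw_tb: float, compression_ratio: float = 10.0) -> float:
    """Logical data volume that fits in `raw_tb` of physical storage."""
    return raw_tb * compression_ratio

# 100TB of raw storage can hold roughly 1PB of uncompressed data at 10:1.
print(effective_capacity_tb(100))  # 1000.0
```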
Adding storage capacity couldn't be easier (are you seeing a pattern here?).
No matter what generation of Exadata hardware you have under support, you just add current-generation storage servers and either create a new ASM disk group or extend your existing ASM disk groups (see this MAA white paper for best practices), and keep on going.
You can add storage via any current-generation High Capacity, Extreme Flash, or Extended (XT) storage server, per your cost and performance/capacity considerations.
As with database scalability, if you hit the physical limit of a rack, add a storage expansion rack to the network fabric. The Storage Expansion Rack (SER for short) is exactly that: an Exadata rack dedicated to storage. (It's essentially an Elastic-model Exadata rack without the database servers, and with the spine switch added by default. See details here.)
By the way, the memory kits I mentioned above for the database servers can also be applied to the X7 and X8 High Capacity (HC) and Extreme Flash (EF) storage servers, bumping the memory from 192GB to 768GB per server. Why scale storage memory? Because this!
Exadata is engineered for maximum flexibility, while its technical rigor delivers legendary performance and reliability. It is simple to flexibly add capacity as demand requires, and during hardware expansion, replacement, and maintenance the system remains up, with minimal performance impact.