Pat Shuff's Blog

  • PaaS
    June 20, 2016

database option - multi tenant

Before we dive into what multi-tenant databases are, let's take a step back and define a few terms. With an on premise system we can have a computer loaded with a database or a series of databases. Historically the way that this was done was by booting the hardware with an operating system and loading the database onto the operating system. We load the OS onto the root file system or "/" in Unix/Solaris/Linux. We create a /u01 directory to hold the ORACLE_HOME or binaries for the database. Traditionally we load the data into /u02 or keep everything in /u01. Best practices have shown us that splitting the database installation into four parts is probably a good idea. Keeping everything in the root partition is not a good idea because you can fill the disk with database files and lock up the operating system at the same time. We can put the binaries into /u01 and do a RAID-5 or RAID-10 stripe for these binaries. We can then put all of our data into /u02 and place the /u02 file system on flash disk or other high speed disk to improve performance since this area has high read and write performance requirements. We can RAID-5 or RAID-10 this data to ensure that we don't lose data, or use a more advanced striping technology provided by a hardware disk vendor. We then put our backups into /u03 and do a simple mirror for this partition. We can go with a lower performing disk to save money on the installation, keep data for a few days/weeks/months, and then delete it as we accumulate multiple copies of this data. We might replicate it to another data center or copy the data to tape and put it into cold storage for compliance requirements as well as disaster recovery fallbacks. If we are going to replicate the data to another data center we will create a /u04 area for change logs and redo logs that will be shipped to our secondary system and applied there to reduce recovery time. Backups give us recovery to the last backup. A live system running Data Guard or Active Data Guard gets us back to within a few seconds or a transaction or two rather than hours or days.
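As a rough sketch of how this layout maps into the database configuration, the data and backup areas can be pointed at the /u02 and /u03 file systems through initialization parameters. The paths and sizes below are illustrative, not values from any particular installation.

-- Point new data files at /u02 and the recovery/backup area at /u03 (illustrative paths and sizes)
ALTER SYSTEM SET db_create_file_dest = '/u02/app/oracle/oradata' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size = 500G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u03/app/oracle/fast_recovery_area' SCOPE=BOTH;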

The biggest problem with this solution is that purchasing a single system to run a single database is costly and difficult to manage. We might be running at 10% processor utilization the majority of the time but run at 90% utilization for a few hours a week or a few days a month. The system is idle most of the time and we are paying for the high water mark rather than the average usage. Many administrators overload a system with databases that have different peak usage times and run multiple database instances on the same box. If, for example, our accounting system peaks on the 25th through the 30th and our sales system peaks on the 5th through the 10th, we can run these two systems on the same box, resource limit each instance during the peak periods, and let them run at 20% the rest of the month. This is typically done by installing two ORACLE_HOMEs in the /u01 directory. The accounting system goes into /u01/app/oracle/production/12.1.0/accounting and the sales system goes into /u01/app/oracle/production/12.1.0/sales. Both share the /u02 file system and put their data into /u02/app/oracle/oradata/12.1.0/accounting and /u02/app/oracle/oradata/12.1.0/sales. Backups are done to two different locations and the replication and redo logs similarly go to different locations.


Having multiple ORACLE_HOMEs has been a way of solving this problem for years. The key drawback is that patching can get troublesome if specific options are used or installed. If, for example, both databases use ASM (Automatic Storage Management), you can't patch one database without patching ASM for both. This makes patch testing difficult on production systems because suddenly sales and accounting are tied together and upgrades have to be done at the same time.

Virtualization introduced a solution to this by allowing you to install different operating systems on the same computer and sublicense the software based on the virtual processors assigned to the application. You are suddenly able to separate the storage interfaces and operating system patches and treat these two systems as two separate systems running on the same box. Unfortunately, the way that the Oracle database is licensed has caused problems and tension with customers. The software does not contain a license key or hardware limit and will run on whatever is available. Virtualization engines like VMware and Hyper-V allow you to soft partition the hardware and dynamically grow with demand. This is both good and bad. It is good because it makes it simpler to respond to increased workloads. It is bad because licensing is suddenly flexible and Oracle treats the maximum number of cores in the cluster as the high water mark that needs to be licensed. This is called soft partitioning. Operating systems like Solaris and AIX have hard partitions, and virtualization engines like Oracle VM and Xen provide hard partitions. Customers have traditionally solved this by running an Oracle instance on a single socket or dual socket system to limit the core count. This typically means that the most critical data is running on the oldest and slowest hardware to limit price. Alternatively they run the database on a full blade and license all cores in that blade. This typically causes a system to be overlicensed and underutilized. The admin might limit the core count to 8 cores but there could be 32 cores in the blade and all 32 cores must be licensed. Using a virtualization engine to limit the resources between database instances is not necessarily practical and does not provide fine enough resolution. Going with multiple ORACLE_HOME locations has been a growing trend since you have to license all of the cores based on current licensing policies.

Another big problem with the multiple ORACLE_HOME or multiple operating system approach is that you have multiple systems to manage and patch. If we use the 32 core system to run four application database instances under this model we have four sets of patches to apply across the virtualization engine, the operating systems, and the databases. An optimum solution would be to run one operating system on all 32 cores, run the four databases from one ORACLE_HOME, and resource limit each instance so that it doesn't become a noisy neighbor for the other three. We can then use the resource manager to assign shares to each instance and limit the processor, memory, and network bandwidth based on rules so that noisy neighbors don't stop us from getting our job done. We get our shares and can be the noisy neighbor if no one else is using resources.
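As a sketch of the instance-level limiting described here, each instance can be capped with instance caging, which combines a CPU limit with an active resource manager plan. The plan name and core count below are illustrative.

-- Cap this instance at 8 cores (illustrative); instance caging requires an active resource plan
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH;
ALTER SYSTEM SET cpu_count = 8 SCOPE=BOTH;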

With the 12c release of the database, Oracle introduced an option called multi-tenant. Let's think of a company like SalesForce.com. They don't spin up a new instance for each company that they do business with. They don't install a new ORACLE_HOME for each company. They don't spin up a new operating system and install a new database instance for each company. This would not make economic sense. A five person company would have to spend about $3K/month with SalesForce to cover just the cost of the database license. On the flip side, custom code must be written to keep a user from company A from reading customer contact information belonging to company B. A much simpler way would be to spin up a pluggable database for company A and another for company B. No custom code is required since the records for the two companies are stored in different directories and potentially different disk locations. If we go back and look at our partitioning blog entry we notice that we have our data stored in /u02/app/oracle/oradata/ORCL/PDB1. The ORCL directory is the location of our container database. This contains all of the configuration information for our database. We define our listener at this location. We create our RMAN backup scripts here. We define our security and do auditing at this level. Note that we have a PDB1 subdirectory under this. This is our pluggable database for company A. We would have a PDB2 for company B, and the system01.dbf file in that directory is different from the system01.dbf file located in the PDB1 directory. This allows us to create unique users in both pluggables and not have a global name issue. With SalesForce all usernames must be unique because users are stored in a master database. I cannot, for example, create a user called backupadmin to log in to company A and back up the data set if a user with that same name is already defined for any other company worldwide. This creates scripting problems. We can't create a single backup script that works across all companies and must create a unique user and script for each company.
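A minimal sketch of what this looks like with pluggables; the PDB and user names here are made up, but the point is that the same local username can be created independently in each pluggable.

-- Create the same local user in two different pluggables (illustrative names and passwords)
ALTER SESSION SET CONTAINER = pdb1;
CREATE USER backupadmin IDENTIFIED BY "CompanyA_Backup#1";
GRANT CREATE SESSION TO backupadmin;

ALTER SESSION SET CONTAINER = pdb2;
CREATE USER backupadmin IDENTIFIED BY "CompanyB_Backup#2";
GRANT CREATE SESSION TO backupadmin;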

The main concept behind the multi-tenant option is to allow you to run more databases on the same box and reduce the amount of work required to support them. By putting common tasks like backup and restore at the container level, all pluggables on this system are backed up in a central location but separated by pluggable so that there is no data mingling. Data can be replicated quickly and easily without having to resort to backup and restore onto a new instance. The system global area (SGA), which caches data blocks and compiled SQL statements, is shared at the container database level. Each pluggable database's sessions get their own program global area (PGA) allocations for private work areas such as sorts and session state.

Note that we have one redo log and undo area. As changes are made they are copied to a secondary system. We don't have to configure Data Guard for each pluggable instance, only for the container database. When we plug an instance into a container it inherits the properties of the container. If we had a container configured to be RAC enabled, all pluggables in that database instance would be RAC enabled. We can use the resource manager in the container database to limit the shares that each pluggable instance gets and reduce the noisy neighbor overlap that happens on a virtual machine configuration. We also reduce the patching, backup, and overall maintenance required to administer the database instance.
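As a sketch of how those shares might be assigned, a CDB resource plan can give each pluggable a weight and an upper limit. The plan name, pluggable names, and numbers below are illustrative.

-- Illustrative CDB resource plan: salespdb gets three shares, acctpdb gets one
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'tenant_plan',
    comment => 'share CPU between pluggables');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'tenant_plan',
    pluggable_database => 'salespdb',
    shares             => 3,
    utilization_limit  => 70);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'tenant_plan',
    pluggable_database => 'acctpdb',
    shares             => 1,
    utilization_limit  => 40);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- Activate the plan in the container database
ALTER SYSTEM SET resource_manager_plan = 'tenant_plan';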

To create a pluggable instance we need to make sure that we have requested the High Performance or Extreme Performance Edition of the database service. The Standard Edition and Enterprise Edition do not include multi-tenant. It is important to note that to get this same feature on Amazon you cannot use RDS because that service prohibits you from using this option. You must use IaaS and go with Amazon EC2 to get this feature to work. Microsoft Azure does not offer the Oracle database at the platform level so your only option is Azure Compute.

The pluggable creation is simple and can be done from the command line through sqlplus. The 12c Database Documentation details this process.

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password;
or
CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password ROLES=(DBA);
or more complex
CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password
STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M)
DEFAULT TABLESPACE sales
DATAFILE '/disk1/oracle/dbs/salespdb/sales01.dbf' SIZE 250M AUTOEXTEND ON
PATH_PREFIX = '/disk1/oracle/dbs/salespdb/'
FILE_NAME_CONVERT = ('/disk1/oracle/dbs/pdbseed/', '/disk1/oracle/dbs/salespdb/');

Note that we can make the creation simple or define all of the options and file locations. In the last example we create the pluggable instance by cloning the existing pdbseed. In our example this would be located in /u02/app/oracle/oradata/ORCL. We would pull from the pdbseed directory and push into the salespdb directory. All three examples would do this but the third details all options and configurations.
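One follow-up step worth noting: a newly created pluggable starts out in MOUNTED mode, so it has to be opened before anyone can connect to it. A minimal sketch (SAVE STATE is available from 12.1.0.2 onward):

-- Open the new pluggable; SAVE STATE (12.1.0.2+) keeps it open across container restarts
ALTER PLUGGABLE DATABASE salespdb OPEN;
ALTER PLUGGABLE DATABASE salespdb SAVE STATE;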

When we create the instance from the sqlplus command line without specifying file names and locations, the database picks the directory name for the new pluggable on its own, which is why we might want to use the more complex configuration. When we executed the simple form from the command line we got a long string of numbers for the directory name of our new pluggable instance called salespdb.
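To see where the files actually landed, a query along these lines from the container root lists the data files for each pluggable; the output paths will vary by system.

-- Show which data files belong to which pluggable (run as a common user in the container root)
SELECT p.name AS pdb_name, f.name AS file_name
  FROM v$pdbs p
  JOIN v$datafile f ON f.con_id = p.con_id
 ORDER BY p.name;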


We could do the same thing through SQL Developer and have it guide us through the naming steps. It prompts us for the new file name and shows where the seed is coming from. We could just as easily have cloned salespdb and used it as our foundation rather than creating one from the pdbseed. We right click on the container database header and it prompts us to create, clone, or unplug a pluggable. If we select create we see the following sequence.



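For reference, the command line equivalents of the clone and unplug actions look roughly like this; the names and paths are illustrative, and on 12.1 the source pluggable has to be opened read-only before it can be cloned.

-- Clone an existing pluggable (source must be opened read-only in 12.1)
ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE salespdb OPEN READ ONLY;
CREATE PLUGGABLE DATABASE devpdb FROM salespdb
  FILE_NAME_CONVERT = ('/disk1/oracle/dbs/salespdb/', '/disk1/oracle/dbs/devpdb/');

-- Unplug a pluggable so it can be plugged into another container
ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE salespdb UNPLUG INTO '/disk1/oracle/dbs/salespdb.xml';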
One thing that we did not talk about is economics. If you want to run multi-tenant on premise you need to purchase a database license at $47.5K per two processors and the multi-tenant option at $23K per two processors as well. This comes in at $60.5K for the license and $13,310 per year for support. Using our four year cost of ownership this comes in at roughly $2,495 per month for the database license. The High Performance Edition comes in at $4K per month. Along with this you get about $5K worth of additional features like diagnostics, tuning, partitioning, compression, and a few other features that we have not covered yet. If you are going to run these options on Amazon or Azure you will need to budget the $2.5K per month for the database license, and more if you want the other features, on top of the processor and storage costs for those cloud services. You should also budget the outgoing data charges that you do not have to pay for with the non-metered database service in the Oracle Cloud. Going with the multi-tenant option is cheaper than running the database on two servers and easier than running two ORACLE_HOME instances on the same machine. Going with the High Performance Edition gets you all of these options and offloads things like scale up, backup, initial configuration, and restart of services if a process fails.

In summary, multi-tenant is a good way of overloading services on a single server. The resource management features of the container allow us to dynamically change the allocation to a pluggable database, give more resources to instances that need them, and limit noisy neighbors. With the High Performance Edition and Extreme Performance Edition we get multi-tenant as a foundation for the service. Our primary interface to create a pluggable instance is SQL Developer, Enterprise Manager, or sqlplus. We can easily clone an existing instance for a dev/test replica or export an instance and plug it into another system. We will look at this more in depth tomorrow.
