Pat Shuff's Blog

  • PaaS
    June 17, 2016

database options - review and looking forward

For the past two weeks we have been looking at database as a service (DBaaS) offered as platform as a service (PaaS) on the Oracle Public Cloud. We started this journey on the 2nd of this month by looking at the differences between Standard Edition, Enterprise Edition, High Performance Edition, and Extreme Performance Edition. To quickly review, Standard Edition is the basic database with table encryption as the only option. It is a full-featured database but cannot replicate data with any tool other than copying files and RMAN backup; you can't use transportable tablespaces, Streams, Data Guard, or other replication technologies to make a system more highly available. Enterprise Edition is a more fully featured database that allows for data replication from the installation and comes with Advanced Security (TDE) as part of the base installation. This edition does not come with Data Guard but does offer transportable tablespaces, external references, and other ways of replicating data manually that Standard Edition does not contain. We then looked at High Performance Edition, which comes with
  • Transparent Data Encryption
  • Diagnostics
  • Tuning
  • Partitioning
  • Advanced Compression
  • Advanced Security
  • Data Guard
  • Label Security
  • Multitenant
  • Audit Vault
  • Database Vault
  • Real Application Testing
  • OLAP
  • Spatial and Graph

We then looked at Extreme Performance Edition, which contains all of the above plus
  • Active Data Guard
  • In Memory
  • Real Application Clusters (RAC)
  • RAC One

We then gave a brief description of each option and set the stage for the blogs that follow, which go into more detail with code samples: not only what the options are but how to use them and how they tie back to business benefits. Part of our description was a financial analysis of running a database in infrastructure as a service (IaaS) vs PaaS and the time and efficiency benefits that we get from PaaS over IaaS.

We wrapped up the week on the 3rd with a blog detailing what it takes to get a Windows desktop prepared to use a database in the cloud. The list of software is relatively generic and is not unique to Windows. We could just as easily have selected Mac OS X or Linux but selected a Windows 2012 Server running in the Oracle Public Compute Cloud as IaaS. We did this primarily so that we would have a teaching platform that can be saved with a snapshot, reloaded for hands-on classes, and accessible from a customer site to demonstrate cloud services. The software that we loaded on the Windows platform includes

  • To access cloud storage
    • Mozilla Firefox
    • RestClient extension for Firefox
    • Google Chrome
    • Postman extension for Chrome
    • CloudBerry for OpenStack (Windows only right now)
  • To access files in our instance
    • Putty
    • Filezilla
    • Cygwin (only needed on Windows)
  • To access our database instance
    • SQL Developer
    • Microsoft Visual C++ libraries (only needed on Windows)

We installed this on the Oracle Public Cloud because we have free accounts on this platform and we can keep them persistent. It would normally cost $150/month to keep this instance active if we purchased it as IaaS. We could just as easily have done this on Amazon EC2 or Microsoft Azure at a similar cost. We provisioned 30 GB of disk for the operating system and 10 GB for the binaries. We requested 2 vCPUs with 30 GB of RAM. If we were doing this on Amazon or Azure we probably would have gone for a smaller memory footprint, but this is the base configuration for IaaS and Windows with 2 vCPUs in the Oracle Public Cloud. The idea is that a class of 15-30 people can log into this system with different user names and do minimal configuration to get started on workshops. We can refresh the users but not refresh the system for classes the following day or week. To provision this instance we went to the Oracle Cloud Marketplace to get a pre-configured Windows 2012 Server instance. We then downloaded the list of software above and installed it on the desktop.

On the 6th we dove into database partitioning to see how we can reduce storage costs and improve performance by fitting the active data into memory instead of the years or months of data that a typical select statement filters away. We talked about using partitioning to tier storage on premise and how this makes sense in the cloud but does not have as much impact as it does on premise. We talked about different partitioning strategies and how it can be beneficial to use tools like Enterprise Manager and the partition advisor to look at how you are accessing the data and how you might partition it to improve performance. On the 7th we looked at code samples for partitioning and talked about table extents and file system storage. We talked about the drawbacks to Amazon RDS and how not having file system access, having to use custom system calls, and not having sys access to the database causes potential issues with partitioning. We walked through a range partition example where we segmented the data by date and stored the different date ranges in different tablespaces.
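The range partition pattern we walked through looks roughly like the sketch below. The table, column, and tablespace names here are illustrative stand-ins, not the exact ones from the walkthrough, and the tablespaces are assumed to exist already:

```sql
-- Illustrative range partitioning by date, with each partition placed
-- in its own tablespace so older ranges can sit on cheaper storage.
CREATE TABLE orders (
  order_id    NUMBER,
  order_date  DATE,
  amount      NUMBER(10,2)
)
PARTITION BY RANGE (order_date) (
  PARTITION p2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','DD-MON-YYYY')) TABLESPACE parts2015,
  PARTITION p2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY')) TABLESPACE parts2016,
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)                             TABLESPACE parts_current
);

-- A query that filters on the partition key lets the optimizer prune
-- to a single partition instead of scanning the whole table:
SELECT SUM(amount)
  FROM orders
 WHERE order_date >= TO_DATE('01-JAN-2016','DD-MON-YYYY')
   AND order_date <  TO_DATE('01-JAN-2017','DD-MON-YYYY');
```

Partition pruning is what delivers the performance win described above: only the active partition needs to fit in memory.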

On the 8th we focused on database compression. This in conjunction with partitioning allows us to take older data that we typically don't access and compress for query or historical storage. We talked about the different compression methodologies

  • using historic access patterns (heat map and ADO options)
  • using row compression (by analyzing update and insert operations as they occur)
  • file compression (duplicate file links and file compression of LOBs, BLOBs, and CLOBs)
  • backup data compression
  • Data Guard compression of redo logs before transmission
  • index compression
  • network transmission compression of results to client systems
  • hybrid columnar compression (Exadata and ZFS only)
  • storage snapshot optimization (ZFS only)

We did not really dive into the code for compression but referred to a variety of books and blogs that have good code samples. We did look at the compression advisor and talked about how to use it to estimate the compression ratio you could achieve, since your mileage will vary with your data. On the 9th we dove into an Oracle by Example tutorial on compression and followed the example using DBaaS. The examples that we followed were for an 11g instance but could have been done in a 12c instance if we had the demo tables installed on the 12c instance.
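The compression advisor is exposed through the DBMS_COMPRESSION package. A sketch of the 11g-style call follows; the schema, table, and scratch tablespace names are placeholders, and the parameter names and compression-type constants differ slightly in 12c:

```sql
-- Estimate the OLTP compression ratio for an existing table by sampling
-- its rows into a scratch tablespace (11g-style DBMS_COMPRESSION call).
SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',                        -- scratch tablespace for sampling
    ownname        => 'SH',
    tabname        => 'SALES',
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP, -- COMP_ADVANCED in 12c
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated ratio: ' || l_cmp_ratio ||
                       ' (' || l_comptype_str || ')');
END;
/
```

The advisor samples real data, so the estimate reflects your actual row content rather than a generic rule of thumb.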

On the 10th we focused on database tuning options and dove into how to use SQL Tuning Advisor. In the example we used, the from clause referenced an extra table that was not needed by the select statement, which caused an unnecessary full table scan. The example was again for 11g to utilize the sample database that we have installed but could just as easily have worked with 12c. On the 13th we dove a little deeper into tuning with a focus on Enterprise Manager and the Performance Advisor utilities. We followed the Oracle by Example Tuning Tutorial to diagnose a performance problem with a SQL statement.
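From the command line, the same advisor workflow can be driven with the DBMS_SQLTUNE package. The SQL text below is a hypothetical stand-in for the statement we tuned, not the one from the tutorial:

```sql
-- Create and execute a SQL Tuning Advisor task for a single statement.
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
    sql_text   => 'SELECT c.cust_last_name, s.amount_sold
                     FROM sh.customers c, sh.sales s
                    WHERE c.cust_id = s.cust_id',
    task_name  => 'demo_tuning_task',
    time_limit => 60);                  -- seconds the advisor may spend
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'demo_tuning_task');
END;
/

-- Review the findings and recommendations.
SET LONG 100000 LONGCHUNKSIZE 100000 LINESIZE 200
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('demo_tuning_task') FROM dual;
```

The report flags issues like the unneeded table reference above and often recommends a SQL profile, new indexes, or a rewritten statement.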

On the 14th we looked at transparent data encryption (TDE) and how to enable and disable table encryption in the cloud. We talked about the risks of not encrypting by default and tried to draw lessons from Target Corporation, where failure to protect credit card data with encryption led to job losses across the company.
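Once the wallet/keystore is open (in DBaaS it is configured for you at provisioning), enabling TDE comes down to a few statements. The object names below are illustrative, and the tablespace example assumes Oracle Managed Files so no datafile path is given:

```sql
-- Column-level encryption: only the sensitive column is encrypted on disk.
CREATE TABLE customers_demo (
  cust_id  NUMBER,
  card_no  VARCHAR2(19) ENCRYPT USING 'AES256'
);

-- Tablespace-level encryption: everything stored in the tablespace is
-- encrypted at rest, with no per-table changes needed.
CREATE TABLESPACE secure_ts
  DATAFILE SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```

Tablespace encryption is usually the simpler default in the cloud: move a table into the encrypted tablespace and it is protected without rewriting its DDL.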

On the 15th we looked at the backup and restore utilities in the cloud and how they differ from traditional RMAN utilities. You can use RMAN just like you do today, but there are also automation tools that monitor RMAN and kick off alternate processes if the backup fails, as well as tools to help restore for those unfamiliar with RMAN and the depths and details of this powerful package.

Today we are reviewing what we have done in the last week and a half and looking forward to the next week or two as we finish out the database options. On Monday we are going to dive into multitenant and talk about pluggable databases. This discussion will probably spill over into Tuesday, with an overview on Monday and code samples and demos on Tuesday. We will need to use a 12c database since this is a new feature specific to 12c. We might split our code samples, using SQL Developer to clone and manage PDBs on Tuesday and covering the same functionality with Enterprise Manager on Wednesday. Following this discussion we will do a high level discussion on Data Guard and look at the change log and log replication strategies that can be used for physical and logical replication. The following days we will look at code samples and configurations from the command line, Enterprise Manager, and SQL Developer. We will look at what it takes to set up a primary on premise and a standby in the cloud. We will also look at what is required to have both in the cloud and what it takes to flip the primary and standby to emulate a failure or maintenance action, then flip the two back.
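As a preview of the multitenant discussion, cloning a pluggable database from SQL*Plus is nearly a one-line operation. The PDB names here are hypothetical, and the example assumes Oracle Managed Files so no FILE_NAME_CONVERT clause is needed:

```sql
-- Clone an existing PDB and open the copy (requires a 12c container database).
CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;
ALTER PLUGGABLE DATABASE pdb2 OPEN;

-- List the containers and their current states.
SELECT name, open_mode FROM v$pdbs;
```

SQL Developer and Enterprise Manager wrap this same operation in a wizard, which is what we plan to walk through in the Tuesday and Wednesday posts.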

Once we cover Data Guard we will be in a position to talk about Real Application Testing. In essence, Data Guard copies all of the writes that happen on the primary and replays them on the standby. Real Application Testing records the reads as well and replays the reads and writes to help measure performance differences between configurations. This is good for code change testing, patch testing, configuration change testing, and other compare/contrast changes to your production system in a safe environment.
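The capture side of Real Application Testing is driven by the DBMS_WORKLOAD_CAPTURE package. A minimal sketch, with a placeholder directory path and capture name, looks like:

```sql
-- Directory object where the captured workload files will be written.
CREATE DIRECTORY capture_dir AS '/u01/app/oracle/capture';

-- Start recording all database activity (reads and writes).
BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'demo_capture',
    dir      => 'CAPTURE_DIR',
    duration => 600);   -- stop automatically after 10 minutes
END;
/

-- Or stop the capture explicitly before the duration expires.
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/
```

The captured files are then preprocessed and replayed on the test system with the companion DBMS_WORKLOAD_REPLAY package, which is where the compare/contrast measurements come from.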

Once we finish the high availability and data replication options we will dive into the OLAP and Spatial options. OLAP reorganizes the data for data warehouse analysis, and Spatial allows you to run geographic select statements like "show me all crimes that happened within a mile of this address." Both are optimizations on select statements to help optimize usage of the database in specific instances.
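The "within a mile of this address" style of query maps to the SDO_WITHIN_DISTANCE operator in Oracle Spatial. The table, column, and coordinates below are hypothetical; the address would first be geocoded to a longitude/latitude point:

```sql
-- Find incidents within one mile of a point, using longitude/latitude
-- coordinates (SRID 8307, the common geodetic coordinate system).
SELECT c.case_id, c.description
  FROM crimes c
 WHERE SDO_WITHIN_DISTANCE(
         c.location,
         SDO_GEOMETRY(2001, 8307,
                      SDO_POINT_TYPE(-97.74, 30.27, NULL), NULL, NULL),
         'distance=1 unit=MILE') = 'TRUE';
```

With a spatial index on the location column, this runs as an index-driven proximity search rather than a distance computation against every row.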

We will wrap up our coverage by looking at Audit Vault and Database Vault. Both of these options are additional levels of security that not only help us protect data but restrict and track access to data. Many financial and healthcare institutions require interfaces like this to show separation of duty as well as traceability to see who accessed what when.
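Audit Vault consolidates and reports on audit data that the database itself collects. As an illustration of the underlying database-side mechanics, 12c unified auditing lets you declare and enable a tracking policy in two statements; the policy and object names here are illustrative:

```sql
-- Track who reads or changes a sensitive table, and when.
CREATE AUDIT POLICY card_access_pol
  ACTIONS SELECT ON sh.customers_demo,
          UPDATE ON sh.customers_demo;

AUDIT POLICY card_access_pol;

-- Review the captured activity from the unified audit trail.
SELECT dbusername, action_name, event_timestamp
  FROM unified_audit_trail
 WHERE unified_audit_policies = 'CARD_ACCESS_POL';
```

This is the "who accessed what, and when" traceability that financial and healthcare institutions require; Database Vault adds the complementary restriction side, blocking even privileged users from the data itself.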

Once we finish the High Performance Edition options we will dive into the Extreme Performance Edition options looking at Active Data Guard, In Memory, RAC and RAC One. Going through all of the options will probably take us through the month of June. We will probably look at the other PaaS options listed in cloud.oracle.com starting some time in July and see how they relate or differ from the DBaaS services that we are currently covering.
