Wednesday Jun 29, 2016

last blog for a while

This will be my last blog entry for a while. We have been going through the different database options for Platform as a Service, and my intent was to look at each of them individually. Unfortunately, I am in the process of changing groups within Oracle and the process is taking longer than expected. My hope is to restart this blog on blogs.oracle.com/iaas and blogs.oracle.com/paas. I have had fun on this journey that started back in April, and hopefully everyone has enjoyed the trip as well. My Oracle email will probably be turned off after June 30th, so if you need to get in touch with me use the same format for email but at gmail.com. My hope is to resume this blog at the other two locations starting in August. pat

Tuesday Jun 28, 2016

database option - Spatial and Graphics

Today we are going to focus on the Spatial and Graphics option of the Oracle Database. Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. The geospatial data features of Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence.

The Spatial home page is a good starting point to learn more about the technology. Books that cover this topic are

Note that most of these books are three years old or older. Spatial has not changed much between 11g and 12c, so the older books are still relevant. The key to the Spatial component is being able to define objects using geospatial tags. To achieve this, Oracle extended the database with the SDO_GEOMETRY data type. It is used just like an INTEGER or CHAR declaration for a variable, but it contains latitude and longitude elements that define where something is located. Some sample code that we can lift from the Pro Oracle Spatial book looks like
SQL> CREATE TABLE us_restaurants_new
(
  id        NUMBER,
  poi_name  VARCHAR2(32),
  location  SDO_GEOMETRY    -- New column to store locations
);
This creates a table that defines an entry that helps us find where the restaurant is located. We can populate this entry with
SQL> INSERT INTO  us_restaurants_new  VALUES
(
  1,
  'PIZZA HUT',
  SDO_GEOMETRY
  (
    2001,  -- SDO_GTYPE attribute: "2" in 2001 specifies dimensionality is 2.
    NULL,    -- other fields are set to NULL.
    SDO_POINT_TYPE  -- Specifies the coordinates of the point
    (
      -87,  -- first ordinate, i.e., value in longitude dimension
      38,  -- second ordinate, i.e., value in latitude dimension
      NULL  -- third ordinate, if any
    ),
    NULL,
    NULL
  )
);
This inserts an entry for restaurant number 1, labeled PIZZA HUT, with a location defined by a point at -87, 38 (longitude, latitude). Note that these are locations defined in relation to a map coordinate system. We use the SDO_GTYPE to define what type of geometry we are using and how we are describing the location for this store.

The key benefit to this is that we can define restaurants and things like interstates in the same coordinate system. We can then query the database and ask, for example, for the restaurants nearest to an interstate. This is done with the following query

SQL> SELECT poi_name
FROM
  (
    SELECT poi_name,
      SDO_GEOM.SDO_DISTANCE(P.location, I.geom, 0.5) distance
    FROM us_interstates  I, us_restaurants  P
    WHERE I.interstate = 'I795'
      ORDER BY distance
  )
WHERE ROWNUM <= 5;

POI_NAME
-----------------------------------
PIZZA BOLI'S
BLAIR MANSION INN DINNER THEATER
KFC
CHINA HUT
PIZZA HUT
The select statement computes the distance between the interstate labeled I795 and every restaurant in the database and returns the five closest (note that the third argument to SDO_GEOM.SDO_DISTANCE is the tolerance, not a distance limit). Note that we could instead have selected an address on the interstate and found everything less than a specified distance away. This is typically how something like Google Maps works. It reads your current location from your phone as a latitude, longitude, and elevation and shows you matches for your search term close to you. This allows you to easily find banks, places to eat, places to get gas, or an address that you are trying to get to.
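
The query above ranks restaurants by distance and keeps the five closest. To instead filter to everything within a fixed distance, such as half a mile, Oracle Spatial provides the SDO_WITHIN_DISTANCE operator, which requires a spatial index on the column being searched. A sketch using the same tables:

SQL> SELECT P.poi_name
FROM us_interstates I, us_restaurants P
WHERE I.interstate = 'I795'
  AND SDO_WITHIN_DISTANCE(P.location, I.geom,
        'distance=0.5 unit=mile') = 'TRUE';

The operator returns 'TRUE' for each indexed geometry within the requested distance of the search geometry, so no ORDER BY or ROWNUM trimming is needed.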

We can not only look for distances relative to a point or a line (as we did with the interstate) but can also draw shapes around an object and look for things that fall inside or outside the shape. For example, if we get the GPS points for a park, we can draw a shape that defines the park using latitude and longitude points. We can then look for related objects inside the park, outside the park, or within a few feet of the park. This helps police, for example, look at crimes that happen around a park and react appropriately. In the database we define an object with spatial data and draw a shape around the object; a simple example is a bounding box. Before we can query the column, we first register its metadata, which records the dimension bounds and tolerance. The code to do this looks like

SQL> INSERT INTO  USER_SDO_GEOM_METADATA  VALUES
(
  'CUSTOMERS',      -- TABLE_NAME
  'LOCATION',       -- COLUMN_NAME
  SDO_DIM_ARRAY     -- DIMINFO attribute for storing dimension bounds, tolerance
  (
    SDO_DIM_ELEMENT
    (
      'LONGITUDE',  -- DIMENSION NAME for first dimension
      -180,         -- SDO_LB for the dimension
      180,          -- SDO_UB for the dimension
      0.5           -- Tolerance of 0.5 meters
    ),
    SDO_DIM_ELEMENT
    (
      'LATITUDE',   -- DIMENSION NAME for second dimension
      -90,          -- SDO_LB for the dimension
      90,           -- SDO_UB for the dimension
      0.5           -- Tolerance of 0.5 meters
    )
  ),
  8307              -- SRID value for specifying a geodetic coordinate system
);
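Registering the metadata is only half of the setup; spatial operators also need a spatial index on the column. A minimal sketch for the CUSTOMERS table above (the index name is illustrative):

SQL> CREATE INDEX customers_location_idx
  ON customers (location)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX;
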
You can define a data type as
  • Point
  • Line string
  • Polygon
  • Polygon with a hole
  • Collection (a combination of all of the above)
  • Compound line string
  • Compound polygon
  • 3D composite surface
  • 3D simple solid
  • 3D compound solid
  • 3D collection (a combination of all 3D objects)
When you define a spatial object it uses the SDO_GEOMETRY structure. This structure contains an SDO_GTYPE that defines whether the object is 2d or 3d as well as the geometry type (0 = Uninterpreted type, 1 = Point, 5 = Multipoint, 2 = Line, 6 = Multiline, 3 = Polygon/surface, 7 = Multipolygon/multisurface, 4 = Collection, 8 = Solid, 9 = Multisolid). An entry of 2001 would be a 2d object, designated by the 2, that is a single point, designated by the 1. If this entry were 2002 it would be a 2d object that is a series of points forming a line, designated by the second 2. The SDO_SRID specifies the spatial reference system, or coordinate system, for the geometry. We can have a relative coordinate system or use latitude and longitude for coordinates. The SDO_POINT attribute specifies the location of a point geometry, such as the location of a customer. This gives us a reference point to work from, and the rest of the data is the relative information based on the SDO_SRID. For example, we can draw a polygon defining a park starting at the northwest corner of the park. The SDO_POINT will provide the northwest corner of the park. The SDO_ELEM_INFO and SDO_ORDINATES attributes describe the polygon around the park. For more detailed examples, look at Chapter 3 of Pro Oracle Spatial for Oracle Database 11g.
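
Putting these attributes together, the simple box mentioned earlier can be written as a rectangle polygon: an SDO_GTYPE of 2003 (2d polygon), an SDO_ELEM_INFO element type of 1003 (exterior ring) with interpretation 3 (rectangle), and just two ordinate pairs for the lower-left and upper-right corners. A sketch with illustrative coordinates, assuming a table with an id, a name, and an SDO_GEOMETRY column:

SQL> INSERT INTO customers VALUES
(
  1,
  'PARK BOUNDARY',
  SDO_GEOMETRY
  (
    2003,                              -- 2d polygon
    8307,                              -- geodetic longitude/latitude SRID
    NULL,
    SDO_ELEM_INFO_ARRAY(1, 1003, 3),   -- exterior ring, rectangle
    SDO_ORDINATE_ARRAY(-87.1, 38.0,    -- lower-left corner
                       -86.9, 38.2)    -- upper-right corner
  )
);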

We are not going to go into deep detail on how to program Spatial. It is recommended that you look at

It is important to note that Spatial is an optional package that runs on the Enterprise Edition database. If you are going to run this in the cloud you need to use the High Performance Edition or Extreme Performance Edition. If you are going to run this on IaaS you need to purchase the option on top of your database license, and the processor metrics need to match. For example, if you run on a 2 virtual core system in the cloud, you need a 2 virtual core license for Spatial as well. You cannot run Spatial on Amazon RDS because Amazon disables this feature.

In summary, Spatial and Graph is an optional package that helps you do location-based queries against a database. Spatial is not unique to Oracle, but the structures and select statements typically do not port across database types; the option does work with products like Golden Gate to replicate data into other databases' spatial structures. Spatial is a very powerful package that simplifies select statements that would otherwise be very complex, such as finding the distance between objects, the distance from a line (a highway, for example), or the distance from a polygon (a park, for example). If your application needs Spatial you need to select the High Performance or Extreme Performance editions.

Monday Jun 27, 2016

Safari Books Diversion

Today I am going to step back and look at something relatively simple but very powerful. One thing that my company provides for employees is a subscription to technology books online. I tend to reference these books in my blog entries mainly because I find them to be good reference material. When I write a blog entry I try to first address the topic from the mindset of a sales rep. I address the simple questions around what the technology is, why it is relevant, and how much it will cost me. If possible the discussion also blends in a compare and contrast with another solution from another vendor. The second post is a technology how-to targeted at the pre-sales engineer or consultant. To dive deeper I typically pull material from books, whitepapers, hands on tutorials, and demos. Safari Books OnLine is my virtual library. Years ago I would go to Barnes and Noble, Fry's, or Bookstop and purchase a book. Safari pricing starts at $400/year for individuals or teams and is flexible for corporations. If you break this down, you need to read about 8-10 books a year to break even. If you read fewer than that, purchase them from Amazon. If you read more than that or just want to read a chapter or two, subscribe to Safari.

Two of the features that I like about this interface are the search engine and the index engine. The search engine looks inside of books, allows you to sort by relevance and date, and lets you search within a search. For example, if I do a search for jumpstart I get 3070 references. If I add solaris to the search I get 101 results. Note on the left there are three books written in 2016 and two books written in 2015. We can narrow our search and look for recent books that talk about the JumpStart technology provided with Solaris. True, this might not be a relevant topic to you, but it is an example of how to find a hard-to-find topic in your virtual library.

We can add this search index to our favorites by clicking on the Add to Favorites button and selecting a topic list to add to. In this example we add a JumpStart book from 2001 to our Solaris book list.

We can look at more relevant publications and find something related to Solaris 11.2. We see the relevant information in the search index and when we click on the book it takes us to the relevant chapter. Note the highlighted text from our search. I find this is a good way of researching a topic that I need to learn more about or finding tutorials or examples of how to do something.

One of the nice things about search indexes or lists is that you can share a list with other people and look at other people's lists. By looking at your Favorites and Folders, you can see the topics that interest you along with the books you have saved on that virtual shelf.

One of the nice things is that you can look at the shelves of other users. If you click on Shared List and search for your shelf title, you get a list of other users' shelves. In this example we searched for Solaris and got five shelves that other users are maintaining.

We can subscribe to these shelves and add it to our favorites. This is done by clicking on the Following "+" sign. It adds the shelf to your Favorites list on the left. Note that we are following the "Solaris stuff" folder.

We can also add this as an RSS feed and get updates when the shelf changes. We copy the RSS feed link and add it to our news reader or Thunderbird email interface.

If we add this to our Thunderbird reader we get an email interface showing updates and new books added to the shelf. We don't need to go check the list on a regular basis but can instead look at the newsfeed section of our mail browser.

I hope this simple diversion was a good break from our dive into DBaaS and PaaS. Being able to do more than just a simple Google search is typically required to find examples and tutorials. Books historically have been a good place to find this and having access to not only my virtual bookshelf but other people's bookshelves where they sort and index things is a good thing. The $400 cost might be a bit prohibitive but the freedom is a good thing. Given that my company provides this subscription at no cost to me, I will continue to use this and read technology books on an airplane in offline mode and search as I am creating blog entries.

Friday Jun 24, 2016

database option - Data Guard part 2

Normally the part two of the option pack has been a hands on tutorial or examples liberally lifted from Oracle by Example, OpenWorld hands on labs, or the GSE Demo environment. I even went to an internal repository site and the database product manager site, and all of the tutorials were for an existing database or labs on a virtual machine. None of these were easy to replicate and all of them assumed a virtual machine with a pre installed instance. None of the examples were from 2015 or 2016. If anyone knows of a hands on tutorial that uses the public cloud as at least one half of the example, let me know. There is a really good DR to the Cloud whitepaper that talks about how to set up Data Guard to the cloud, but it is more of a discussion than a hands on tutorial. I typically steal screen shots and scripts from demo.oracle.com, but the examples that exist in the GSE demo pool use Enterprise Manager 10g, servers inside of Oracle running an early version of 11g, or require a very large virtual image download. The closest thing that I could find as a hands on tutorial is Oracle by Example - Creating a Physical Standby. For this blog we will go through this tutorial and follow along with the Oracle Public Cloud as much as possible.

Step One is to create an 11g database. We could do this on 12c but the tutorial uses 11g. If anyone wants to create a 12c tutorial, contact me and we can work on a workshop together. We might even be able to get it into the hands on labs at OpenWorld or Collaborate next year. Rather than going through all of the steps to create an 11g instance I suggest that you look at the May 4th blog entry - Database as a Service. Select 11g and High Performance Edition. We will call this database instance PRIM rather than ORCL. Your final creation screen should look like

We want to create a second database instance. We will call this one ORCL and select High Performance Edition and 11g. The name does not matter as long as it is different from the first one. I am actually cheating on the second one and using a database instance that I created weeks ago.

It is important to note, while we are waiting on the database to finish, that we can repeat this in Amazon but need to use EC2 and S3. We can also do this in Azure, but in Azure Compute. We will need to provide a perpetual license along with Advanced Security and potentially Compression if we want to compress the change logs when we transmit them across the internet. It is also important to remember that there will be an outbound charge when going from one EC2 or Azure Compute instance to the other. If we assume that we have a 1 TB database that changes 10% per day, we will ship 100 GB daily; being conservative and counting only weekday updates, we would expect about 2 TB of outbound transfer a month. Our cost for this EC2 service comes in at $320/month. If we use our calculations from our Database Options blog post we see that the perpetual license amortized over 4 years is $2,620/month. This brings the cost of the database and only Advanced Security to $2,940/month. If we amortize this over 3 years the price jumps to $3,813/month. When we compare this to the Oracle High Performance Edition at $4K/month it is comparable, but with High Performance Edition we also get eight other features like partitioning, compression, diagnostics, and tuning. Note in the calculator that the bulk of the cost is outbound data transfer. It would be cheaper to run this with unmetered compute services in the Oracle cloud at $75/month.

If we follow the instructions in DR to Oracle Cloud whitepaper we see that the steps are

  1. Subscribe to Oracle Database Cloud Service
  2. Create an Oracle instance
  3. Configure Network
  4. Encrypt Primary Database (Optional)
  5. Instantiate Data Guard Standby
  6. Perform Data Guard health check
  7. Enable Runtime Monitoring
  8. Enable Redo Transport Compression (Optional)
So far we have done steps one and two. When the database creation has finished, we perform step 3 by going into the compute console and opening up port 1521 for the dblistener service. We go to the compute service and look for our database instance name; in our example we hover over the service and find the dblistener service for prs11gPRIM. We select update and enable the port. Given that these are demo accounts we really can't whitelist the ip addresses and can only open the port to the public internet or nothing. We do this for the primary and the standby database.

Once we have this configured we need to look up the ip addresses for prs11gPRIM and prs11gHP. With these ip addresses we can ssh into the compute instances and create a directory for the standby log files. We could create these directories in the /u02 partition with the data or the /u03 partition with the backups, but I suggest that you put them in the /u04 partition with the archive and redo logs. Once we have created these directories we can follow along with the Oracle By Example Physical Data Guard example starting at step 3. The network configuration is shown on page 12 of the DR to Oracle Cloud whitepaper. We can also follow along with this using prs11gPRIM as the primary and prs11gHP as the standby. Unfortunately, after step 5 the instructions stop showing commands and screen shots to finish the configuration. We are forced to go back to the OBE tutorial, modify the scripts that they give, and execute the configurations with the new names.
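
Creating the standby log file directory is just a couple of shell commands once you have the ip addresses. A sketch, with an illustrative path under /u04 (match it to your instance layout):

$ ssh oracle@<standby-ip-address>
$ mkdir -p /u04/app/oracle/oradata/standbylog
$ chmod 750 /u04/app/oracle/oradata/standbylog

Repeat the same commands on the primary so both systems have matching directory structures.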

Again, I am going to ask if anyone has a full tutorial on this using cloud services. It seems like every example goes half way and I am not going to finish it in this blog. It would be nice to see a 12c example and see how a pluggable database automatically replicates to the standby when it is plugged in. This would be a good exercise and workshop to run. My guess is this would be a half day workshop and could all be done in the cloud.

Thursday Jun 23, 2016

database option - RMAN

Technically, database backup is not an option with database cloud services; it is bundled into the service as it is with on premise systems. Previously, we talked about backup and restore through the database cloud console. Unfortunately, before we can talk about Data Guard and how to set it up we need to dive a little deeper into RMAN. The first step in setting up Data Guard is to replicate data between two database instances. The recommended way of doing this is with RMAN. You can do this with the backup and recover commands or the duplicate command. We will look primarily at the duplicate option.

The topic of RMAN is a complex and challenging subject to tackle. There are many configuration options and ways to set up backups and data sets as well as many ways to recover rows, tables, or instances. Some books on RMAN include

Fortunately, to set up Data Guard, we don't need to read all of this material but just need to know the basics. Unfortunately, we can't just click a button to make Data Guard work and automatically set up the relationships, replicate the data, and start log shipping. The key tool that we use to get the backup from the primary to the standby is RMAN. We can execute this from the primary or the standby because RMAN provides a mechanism to remotely call the other system, assuming that port 1521 is open between the two database instances.
$ rman target user1/password1@system1 auxiliary user2/password2@system2
In this example user1 on system1 is going to backup the instance that it default connects to and replicates to system2 using the user2 credentials. This command can be executed on either system because we are specifically stating what the source is with the name target and what the standby is with the name auxiliary. Once we connect we can then execute
rman> duplicate target database for standby from active database;
What this will do is replicate the database on system1 and push it to system2. The command also establishes a point in time from which changes to system1 are shipped forward once you enable Data Guard. According to Oracle Data Guard 11gR2 Administration Beginner's Guide (Chapter 2) the output of this command should look something like
Starting Duplicate Db at 26-JUL-12
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19 device type=DISK
...
contents of Memory Script:
{
   backup as copy current controlfile for standby auxiliary format  '/u02/app/oracle/oradata/orcl/control01.ctl';
   restore clone controlfile to  '/u02/app/oracle/flash_recovery_area/orcl/control02.ctl' from
 '/u02/app/oracle/oradata/orcl/control01.ctl';
}
executing Memory Script
....
sql statement: alter database mount standby database
...
Starting backup at 26-JUL-12
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
output file name=/u02/app/oracle/oradata/orcl/system01.dbf tag=TAG20120726T160751
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:04
channel ORA_DISK_1: starting datafile copy
...
sql statement: alter system archive log current
contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=2 STAMP=789667774 file name=/u02/app/oracle/oradata/orcl/system01.dbf
...
Finished Duplicate Db at 26-JUL-12
In this example we copied the system01.dbf file from system1 across the network connection and wrote it to /u02/app/oracle/oradata/orcl/system01.dbf on system2.

Let's take a step back and talk about RMAN a little bit to understand what is going on here. If we look at Oracle RMAN Pocket Reference it details some of the benefits of using RMAN over file system copy or disk cloning to backup a database. These include

  • Incremental backups that only copy data blocks that have changed since the last backup.
  • Tablespaces are not put in backup mode, thus there is no extra redo log generation during online backups.
  • Detection of corrupt blocks during backups.
  • Parallelization of I/O operations.
  • Automatic logging of all backup and recovery operations.
  • Built-in reporting and listing commands.
I would add
  • Compression of data as it is written
  • Encryption of data as it is written
  • Tiered storage of backups to disk and secondary target (cheaper disk, cloud, tape, etc)
When RMAN uses a recovery catalog, its metadata lives in a schema within the catalog database. The tables (and supporting objects) within that schema contain data pertaining to RMAN backup and recovery operations performed on the target: details about the physical structure of the target database, a log of backup operations performed on the target database's datafiles, control files, and archived redo log files, as well as stored scripts containing frequently used sequences of RMAN commands.

When we execute a backup command we create a backup set that is written to the recovery catalog. The backup set is given a tag that we can reference and restore from. If we do daily incrementals we might want to use a part of the date to create the tag. We can restore to a specific point or date in time from our incremental backups.
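
For example, a daily level 1 incremental tagged with the date, and a restore from that specific tag, might look like the following (the tag name is illustrative):

RMAN> backup incremental level 1 database tag 'incr_20160623';
RMAN> restore database from tag 'incr_20160623';

The tag gives us a handle on the backup set so a restore can name exactly which point we want to come back to.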

If we are worried about usernames and passwords being passed in via the command line or embedded in scripts, we can store the password in a password file with the orapwd command. This creates a username/password pair and stores it where RMAN can easily pull it. We do need to give the rmanadmin user rights to execute as SYSDBA, but this is easily done with a grant command. Once we do this we can drop the password from the rman command and only pass in the username@system parameter. The key reason that you might want to do this is that invoking the command from the command line with the password exposes the password through the ps command, which can be executed by any user. Using the orapwd password file helps hide this password.
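
A sketch of the commands involved (the file location follows the Linux $ORACLE_HOME/dbs convention; the password is illustrative, and rmanadmin follows the user name used above):

$ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=MySecretPw entries=5
SQL> GRANT SYSDBA TO rmanadmin;
$ rman target rmanadmin@system1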

The nice thing about RMAN is that you can backup and restore parts rather than all of a database. You can execute

RMAN> backup tablespace system, user;
RMAN> backup datafile '/u01/app/oracle/oradata/ORCL/system01.dbf';
RMAN> backup incremental level 4 cumulative database skip readonly;
which will back up the user and system tablespaces, back up the system01.dbf file and all of the tables that it includes, and do a backup of the data that has changed since the last level 4 backup, using previous lower level backups to aggregate changes into the current backup. Note that these three commands do significantly different things. We can look at what we have backed up using
RMAN> list backup;
to see our backup sets and where they are stored. When we looked at the database backup cloud service we went through a backup and recovery.

If we had run a list backup after this backup we would have noticed that the data written to SBT_TAPE was really written to cloud storage and potentially to local disk. We can then point our standby system to this backup set and restore it into our database instance. This is done by importing the catalog or registering the target when we do the backup. The registration is done with a command like

$ rman target / catalog rman_901/rman_901@rman_catalog
where we are backing up a local database, signified by the "/", and connecting to a recovery catalog database through the connect string rman_catalog with username rman_901 and password rman_901.
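
After connecting with a recovery catalog for the first time, the target database still has to be registered before its backups are recorded there; report schema is a quick way to confirm the catalog knows about the target. A sketch:

RMAN> register database;
RMAN> report schema;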

My recommendation is to look at chapter 12 of Oracle Database 12c Oracle RMAN Backup & Recovery because it details how to use the duplicate option for RMAN. This is key to setting up Data Guard because it replicates a database from a primary system onto a standby system just prior to starting the Data Guard services. The command could be as simple as

RMAN> duplicate target database to standby1;
This will duplicate your existing instance from on premise to a cloud or other on premise instance identified by the label standby1. This typically correlates to an ip address of a secondary system and could be a short name like this or a fully qualified domain name. We could get more complex with something like
RMAN> duplicate target database to standby1 
pfile=/u02/app/oracle/oradata/ORCL/init.ora
log_file_name_convert=('primary','standby');
This does the same thing as the previous command, but it reads the init.ora file for the ORCL instance and uses log_file_name_convert to rewrite any log file path containing primary on our existing system to the matching path containing standby on our target standby1 system. This is an easy way to remap file locations prior to setting up a Data Guard relationship. The steps recommended to create and configure an RMAN copy are
  1. On your standby server, build your auxiliary database directory structures (aka your target directory)
  2. On your primary server, make a copy of the target init.ora file so that it can be moved to the standby server.
  3. Move the target init.ora file to the auxiliary site with scp or other software to copy files.
  4. Start or restart the standby instance in NOMOUNT mode
  5. Configure the listener.ora at the standby site
  6. Configure the tnsnames.ora file at the primary site
  7. Create a password file at the standby server
  8. Move the FRA files from primary to standby
  9. From the primary system, run your duplicate command within RMAN
You can add parameters to allow for parallel copies of the data. You probably should not compress or encrypt the data since we will be pulling it from the backup and writing it into a database. We could potentially compress the data but it will not compress the data on the target system, only compress it for transmission across the internet or local network.
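
Steps 2 through 4 of the list above reduce to a handful of commands. A sketch, assuming an instance named ORCL and a standby host reachable as standby1 (both names are illustrative):

$ scp $ORACLE_HOME/dbs/initORCL.ora oracle@standby1:$ORACLE_HOME/dbs/
$ ssh oracle@standby1
$ export ORACLE_SID=ORCL
$ sqlplus / as sysdba
SQL> startup nomount pfile='?/dbs/initORCL.ora';

The "?" in the pfile path is SQL*Plus shorthand for ORACLE_HOME; with the auxiliary instance started in NOMOUNT mode, the duplicate command can connect to it and build the standby.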

In summary, we needed to dive a little deeper into RMAN than we did before. RMAN is needed to duplicate data from our primary to the target prior to log shipping. There are some complexities associated with RMAN, and the steps needed to get a secondary site ready with RMAN are not trivial; they need an experienced operating system admin and DBA to get working. One of the new features of provisioning a cloud database service is a checkbox to create a Data Guard replica in another data center, and installing a 12.2.2 database instance is also rumored to have a clone to cloud with Data Guard checkbox. As you install a new on premise or in cloud database, these complex steps are done behind the scenes for you, as you would expect from a platform as a service model. Amazon claims to do this with site to site replication and restarting the database in another zone if something happens, but that solution requires a reconnection from your application server and forces your users to reauthenticate and reissue commands in flight. Using Data Guard allows your application server to connect to both your primary and standby databases. If the primary fails or times out, the application server automatically connects to the standby for completion of the request. All of this is dependent upon RMAN working and replicating data between two live databases so that log shipping can assume that both servers are in a known state with consistent data on both systems.

Wednesday Jun 22, 2016

database option - Data Guard

To steal liberally from Larry Carpenter's book on Data Guard, Data Guard is a product of more than 15 years of continuous development. We can trace the roots of today’s Data Guard as far back as Oracle7 in the early 1990s. Media recovery was used to apply archived redo logs to a remote standby database, but none of the automation that exists today was present in the product.

Today we are going to look at the material on Data Guard and discuss the differences between Data Guard, Active Data Guard, and Golden Gate. We are going to look at what it takes to replicate from an on premise system to the cloud and from the cloud to an on premise system. It is important to know that you can also synchronize between two cloud instances but we will not cover this today.

If we look at the books that cover this topic they include

Note that there are no 12c-specific books written on Data Guard. This is primarily due to the technology not changing significantly between the 11g and 12c releases. The key new feature in 12c is far sync support, which we will cover more later. There are also books written on Active Data Guard and Golden Gate. If we take a step back and look at high availability, Data Guard is used to provide this functionality between systems. Oracle Data Guard provides the management, monitoring, and automation software to create and maintain one or more standby databases to protect Oracle data from failures, disasters, human error, and data corruption while providing high availability for mission critical applications. Data Guard is included with Oracle Database Enterprise Edition on premise and with the High Performance Edition in the cloud. Oracle Active Data Guard is an option for Oracle Database Enterprise Edition and is included in the Extreme Performance Edition in the cloud.

The home page for Data Guard provides links to white papers

There are also a significant number of blogs covering high availability and Data Guard. My recommendation would be to attend the Oracle Education Class or follow one of the two tutorials that cover Basic Data Guard Features and Active Data Guard Features. In both of these tutorials you learn how to use command line features to configure and setup an active - standby relationship between two databases. Larry Carpenter has done a really good job of detailing what is needed to setup and configure two database instances with these tutorials. The labs are a bit long (50+ pages) but cover the material very well and work with on premise systems or cloud systems if you want to play.

The key concepts around Data Guard are the mechanisms for replication and how logs are shipped between systems. The basic foundation of Data Guard centers around replication of changes. When an insert or update is made to a table, this change is captured by the log writer and replicated to the standby system. If the replication mechanism is physical replication the data blocks changed are copied to the standby system. If the replication mechanism is logical replication the sql command is copied to the standby system and executed. Note that the select or read statements are not recorded and copied, only the commands that write to storage or update information in the database. By capturing the changes and shipping them to the standby system we can keep the two systems in synchronization. If a client is trying to execute a select statement on the primary database and the primary fails or goes offline, the select statement can be redirected to the standby for the answer. This results in seconds of delay rather than minutes to hours as is done with disk replication or recovery from a backup. How the replication is communicated to the standby system is also configurable. You can configure a write and release mechanism or a wait for commit mechanism. With the write and release mechanism, the logs are copied to the standby system and the primary system continues operation. With the wait for commit mechanism the primary stalls until the standby system commits the updates.
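These two mechanisms map directly to Data Guard's redo transport attributes. As a hedged sketch (the service and db_unique_name "standby" are placeholders), the primary is configured roughly like this from sqlplus:

```sql
-- write and release: ship asynchronously, don't wait for the standby
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

-- wait for commit: the primary stalls until the standby acknowledges the redo
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
```

The choice between the two is the classic trade of commit latency on the primary against the amount of data you can afford to lose on failover.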

Significant improvements were made in 11g with the log writer service (LNS) and the redo apply service (RNS). The LNS has the ability to delay the shipping of the logs in the asynchronous update mode and can compress the logs. The RNS knows how the LNS is configured and can decompress the logs and apply them as before. This delay allows the LNS to watch for network congestion and ship the logs when the network is less loaded. The compression makes the packet size smaller to reduce contention on the network and make the replication more efficient. It is important to note that you can have a single LNS writing to multiple RNS targets, allowing for replication not just in a one to one configuration but in a one to many configuration. It is also important to note that this technology is different from the table cloning or data masking and redaction that we talked about earlier. The assumption is that there is a master copy of the data on the target system and we only ship changes between the systems when an update occurs on the primary.

The key difference between Data Guard and Active Data Guard is the state of the target database. With Data Guard, the database can not have any active sessions other than the RNS agent. You can not open the database read only to do backups or analytics; having active sessions blocks the RNS agent from committing the changes into the database. Active Data Guard solves this problem. The RNS agent understands that there are active connections and can communicate changes to the active sessions if they are reading data from updated areas. A typical SQL connection uses buffering to minimize reads from the disk. Reads are done from an in memory buffer to speed up requests. The problem with reading data on a standby system is invalidation of these buffers. With Data Guard, there is no mechanism to invalidate the buffers of sessions on other connections. With Active Data Guard, these mechanisms exist and updates are not only written to the disk but the cache for the other connections is updated as well.
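On the standby this difference shows up in how the database is opened. A sketch of enabling real-time query on an Active Data Guard standby, run as SYSDBA on the standby:

```sql
-- stop redo apply, open the standby read only, then restart apply;
-- with the Active Data Guard option the standby now serves queries
-- while changes continue to be applied underneath the open sessions
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```

Without the Active Data Guard license, the same OPEN READ ONLY works only while apply stays stopped, which is why plain Data Guard standbys fall behind when you open them for reporting.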

Golden Gate is a more generic case of Active Data Guard. One of the limitations of Data Guard is that you must have the same chip set, operating system, and database version for replication. Translations are not done when changes are shipped from primary to standby. You can't, for example, replicate from a Sparc server to an X86 server running the same version of the Oracle database. One uses big endian while the other uses little endian to store the bits on disk. Physical replication between these two systems would require a byte translation of every change. Data Guard does not support this but Golden Gate does. Golden Gate allows you to not only ship changes from one database instance to a different chip architecture but to a different chip architecture on a different operating system running a different database. Golden Gate was originally created to replicate between database engines so that you could collect data with SQL Server and replicate the data to an Oracle database or MySQL database, letting you do analytics on a different database engine than your data collection engine. With Golden Gate there is a concept similar to the LNS and RNS but the agents are more intelligent and promote the data type to a master view that can be translated into the target type. When we define an integer it might mean 32 bits on one system but 64 bits on another system. Golden Gate is configured to pad from 32 to 64 bits and truncate from 64 to 32 bits appropriately based on your use cases and configurations.

To replicate between two systems we basically need an open port between the primary system and the standby system to ship the logs. We also need a landing area to drop the change logs so that the RNS can pick up the changes and apply them. This prohibits Amazon RDS from enabling Data Guard, Active Data Guard, or Golden Gate since you do not have file system access. To run Data Guard in Amazon or Azure you need to deploy the Oracle database on a compute or IaaS instance and purchase the perpetual license with all of the options associated with the configuration. The beautiful thing about Data Guard is that it uses the standard port 1521 to communicate between the servers. There are special commands developed to configure and set up Data Guard that bridge the two systems. As data is transmitted it is done over port 1521 and redirected to the RNS agent. We can either open up a network port in the cloud or create an ssh tunnel to communicate with our standby in the cloud. The communication works in both directions so we can flip which is primary and which is standby with a command or a push of a button in Enterprise Manager.
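As a sketch of the ssh tunnel approach (the host address, key file, alias name, and service name below are all hypothetical), the client forwards local port 1521 to the cloud instance and then points a tnsnames.ora alias at the local end of the tunnel:

```
# forward local port 1521 to the standby in the cloud (putty does the
# same thing through its Tunnels configuration page):
#   ssh -i mykey -L 1521:localhost:1521 opc@cloud-standby-ip

# tnsnames.ora entry that connects through the tunnel
STBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL))
  )
```

Anything that can resolve the STBY alias then talks to the cloud database as if it were local, which is all the Data Guard transport needs.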

The important conversation to have about data protection is not necessarily "do I have a copy of it somewhere else?" We can do that with RMAN backups or file backups to replicate our data in a safe and secure location. The important conversation to have is how long can we survive without access to the data. If an outage will cost us thousands per minute, we need to look at more than file replication and go with parallel database availability. Data Guard provides this mechanism to keep an active database in another location (on premise or in the cloud) and provides not only a disaster recovery solution but a way of offloading services from our primary production system. We can break the replication for a few hours and stop the redo apply on the standby while we do a backup. The logs will continue to be shipped, just not applied. When the backup is finished we grind through the logs and apply the changes to the standby. We have a window of vulnerability but we have this while we are running backups on our primary system as well. We can now offload the backups to our standby system and let the primary continue to run as needed without interruption. In effect this takes all of the small changes that happen throughout the day and ships them to a secondary system so there is only a trickle effect on performance. If we do an incremental backup at night we basically block the system while we ship all of these changes at once.

In summary, Data Guard is included with the High Performance Edition of the database and is a free part of any on premise Enterprise Edition database. Active Data Guard is included with the Extreme Performance Edition of the database and can be matched to synchronize an on premise or in cloud database that is also licensed to run Active Data Guard. There is a ton of reference material available on how Data Guard, Active Data Guard, and Golden Gate work. There are numerous tutorials and examples on how to configure and set up the service. It is important to know that you can use the cloud for this replication, and a DR to the Cloud whitepaper is available detailing how to do this.

Tuesday Jun 21, 2016

database option - multi tenant part 2

Yesterday we looked at the concept behind multi-tenant and talked about the economics and operational benefits of using the option. Today we are going to look at examples and technical details. Fortunately, this is a hot topic and there are a ton of interviews, tutorials, and videos showing you how to do this. The best place to start is Oracle Technology Network - multitenant. This site lists seven offerings from Oracle Education, six online tutorials, and four video demos. Unfortunately, most books are a little light on this topic and cover it briefly in a chapter buried near the end. The most recent books cover this topic directly. Two of these books are pre-order and the third is only a few months old. The other books talk about it as an abstract term with little or no examples. Safari Books does not have many that cover this subject because the topic is so new and few books have been published on it.

The Oracle Base Blog has a series of postings about multitenant and does a really good job of showing diagrams and sample code. There is a significant amount of information at this site (24 posts) looking at the subject in depth. I normally provide a variety of links to other bloggers but I think that this work is good enough to deserve top billing by itself.

Internal to Oracle the GSE Demo pages have a few demos relating to multi-tenant.

  • PaaS - Data Management (Solutions Demo) has a hands on tutorial in the cloud
  • DB12c Multi-Tenant Workshop by William Summerhill is on retriever.us.oracle.com
  • Oracle Database 12c multitenant and in-memory workshop using OPC by Ramulu Posham on retriever.us.oracle.com

For the rest of this blog I am going to go through the workshop by Ramulu Posham because it is the most complete and does everything 100% in the cloud. We could do this on the Oracle Public Cloud using DBaaS, or a database installed in IaaS on Oracle, Amazon, or Azure. We can not do this on Amazon RDS because they disable multi-tenant and prohibit it from working.

The schedule for the workshop is

  • 9:00 - 9:15 intro
  • 9:15 -10:15 cloud intro
  • 10:30 - 2:00 multi-tenant workshop
The workshop consists of creating two pluggable database instances in the cloud and looking at pluggable creation, cloning, and snap cloning. The assumption is that you have a public cloud account and can create two 12c databases in the cloud with less than 100 GB of disk space. You can do this on two 1 OCPU 7.5 GB instances but you require High Performance or Extreme Performance Edition to get multi-tenant to work. The only software that we will need for our Windows 2012 IaaS instance is SwingBench, which helps put a load on the database and allows us to look at utilization of resources for a container database and our pluggable instances.

The flow of the workshop is shown in the following slide. We are going to create two databases, each with data loaded, and end up with both as pluggable databases in a single container instance.

Some of the more interesting slides from the presentation are shown below. The file location slide helped me understand where resources get allocated. The redo logs, for example, are part of the container database and not each pluggable. You setup Data Guard for all pluggables by configuring the container and replication happens automatically. The discussion on cloning a database is interesting because you don't need to copy all of the data. You only copy the header information and reference the same data between the original and the clone. Changes are tracked with file links as they are updated on both sides. The managing slide helped me understand that there is still a DBA for each pluggable as well as a master DBA for the container. Seeing that in a picture helped me understand it better. There are also multiple slides on resource management and shares. I pulled a representative slide as well as the summary benefits slide. This is what is covered in the first hour prior to the hands on labs starting.

To start the lab, we create a 12c High Performance instance called salessvc$GC where $GC is a substitute for each lab participant. We will use 01 as our example so we will create salessvc01.

Note that we call the database instance salessvc01, the ORACLE_SID salesc01, and the pluggable container sales01. We can not have the ORACLE_SID and the pluggable instance named the same because it will confuse the listener, so those names must be different. The creation takes between 30 minutes and an hour on our demo accounts. While we are waiting we will create a second instance with the name crmsvc01 with similar parameters using the SID of crmc01 and a pluggable container of crm01.

Once we have the ip addresses of the two instances we can create an ssh tunnel for ports 443, 5500, 80, and 1521. This is done by creating an ssh tunnel in our putty client. The presentation material for the labs goes through all of these steps with very good screen shots. We have covered all of this before and are just summarizing the steps rather than detailing each one. Refer to previous blog entries or the workshop notes on how to do this.

The sales instance looks like the screen shots below. We configure the ports and look at the directory structure in the /u02/app/oracle/oradata directory to verify that the sales01 pluggable database was created under the container database salesc01.
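Beyond looking at the directory structure, the same check can be done from sqlplus inside the container. A quick sketch, using the names from the workshop:

```sql
-- connect to the container database and list the pluggables
SELECT name, open_mode FROM v$pdbs;

-- switch the session into the new pluggable to verify it is usable
ALTER SESSION SET CONTAINER = sales01;
SHOW CON_NAME
```

Seeing sales01 in READ WRITE mode in v$pdbs confirms the pluggable was created and opened under the salesc01 container.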

Once we have the database created and ports configured we download and launch SwingBench to load data into the database and drive loads to test response time and sql performance.

We need to download SwingBench and Java 8 to execute the code properly. Once we download SwingBench we unzip it and install it with a java command.

The only true trick in the install is that we must execute lsnrctl status on the database to figure out what the listener is looking for in the connection string. We do this then type in localhost:1521 and the connection string to populate the database with SwingBench.

We repeat this process for the crmc01 instance, repeat the SwingBench install, and then unplug the soe database from the crmc01 pluggable database and plug it into the salessvc01 database service as a pluggable. The big issue here is having to unplug and copy the xml file. It requires uploading the private key and allowing ssh between the two instances. Once this is done, SwingBench is run against both instances to see if performance improves or degrades with two pluggables on the same instance. The instructions do a good job of walking you through all of this.
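The unplug and plug steps look roughly like the following from sqlplus. This is a sketch: the pluggable name soe and the xml path are assumptions for illustration, and the datafiles have to be copied so they are visible to the target container, which is why the ssh key setup between the instances is needed.

```sql
-- on the crm container: close and unplug the pluggable
ALTER PLUGGABLE DATABASE soe CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE soe UNPLUG INTO '/tmp/soe.xml';
DROP PLUGGABLE DATABASE soe KEEP DATAFILES;

-- on the sales container, after copying soe.xml and the datafiles over:
CREATE PLUGGABLE DATABASE soe USING '/tmp/soe.xml' NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE soe OPEN;
```

The xml file carries the metadata the target container needs to adopt the datafiles, which is why it travels with them.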

Overall, this is a good workshop to go through. It describes how to create pluggable containers. It describes how to unplug and clone PDBs. It is a good hands on introduction and even brings in performance and a sample program to generate a synthetic load.

Monday Jun 20, 2016

database option - multi tenant

Before we dive into what multi-tenant databases are, let's take a step back and define a few terms. With an on premise system we can have a computer loaded with a database or a series of databases. Historically the way that this was done was by booting the hardware with an operating system and loading the database onto the operating system. We load the OS onto the root file system or "/" in Unix/Solaris/Linux. We create a /u01 directory to hold the ORACLE_HOME or binaries for the database. Traditionally we load the data into /u02 or keep everything in /u01. Best practices have shown us that splitting the database installation into four parts is probably a good idea. Keeping everything in the root partition is not a good idea because you can fill up your database and lock the operating system at the same time. We can put the binaries into /u01 and do a RAID-5 or RAID-10 stripe for these binaries. We can then put all of our data into /u02 and place the /u02 file system on flash disk or other high speed disk to improve performance since this area has high read and write performance requirements. We can RAID-5 or RAID-10 this data to ensure that we don't lose data, or use a more advanced striping technology provided by a hardware disk vendor. We then put our backups into /u03 and do a simple mirror for this partition. We can go with a lower performing disk to save money on the installation and only keep data for a few days/weeks/months then delete it since we get multiple copies of this data. We might replicate it to another data center or copy the data to tape and put it into cold storage for compliance requirements as well as disaster recovery fall backs. If we are going to replicate the data to another data center we will create a /u04 area for change logs and redo logs that will be shipped to our secondary system and applied there to reduce recovery time. Backups give us recovery to the last backup. A live system running Data Guard or Active Data Guard gets us back to within a few seconds, or a transaction or two, rather than hours or days.

The biggest problem with this solution is that purchasing a single system to run a single database is costly and difficult to manage. We might be running at 10% processor utilization the majority of the time but run at 90% utilization for a few hours a week or a few days a month. The system is idle most of the time and we are paying for the high water mark rather than the average usage. Many administrators overload a system with databases that have different peak usage times and run multiple database instances on the same box. If, for example, our accounting system peaks on the 25th through the 30th and our sales system peaks on the 5th through the 10th, we can run these two systems on the same box, resource limit each instance during the peak periods, and let them run at 20% the rest of the month. This is typically done by installing two ORACLE_HOMEs in the /u01 directory. The accounting system goes into /u01/app/oracle/production/12.1.0/accounting and the sales system goes into /u01/app/oracle/production/12.1.0/sales. Both share the /u02 file system as well and put their data into /u02/app/oracle/oradata/12.1.0/accounting and /u02/app/oracle/oradata/12.1.0/sales. Backups are done to two different locations and the replication and redo logs are similarly replicated to different locations.

Having multiple ORACLE_HOMEs has been a way of solving this problem historically for years. The key drawback is that patching can get troublesome if specific options are used or installed. If, for example, both use ASM (automated storage management) you can't patch one database without patching ASM for both. This makes patch testing difficult on production systems because suddenly sales and accounting are tied together and upgrades have to be done at the same time.

Virtualization introduced a solution to this by allowing you to install different operating systems on the same computer and sublicense the software based on the virtual processors assigned to the application. You are suddenly able to separate the storage interfaces and operating system patches and treat these two systems as two separate systems running on the same box. Unfortunately, the way that the Oracle database is licensed has caused problems and tension with customers. The software does not contain a license key or hardware limit and will run on whatever is available. Virtualization engines like VMware and Hyper-V allow you to soft partition the hardware and dynamically grow with demand. This is both good and bad. It is good because it makes it simpler to respond to increased workloads. It is bad because licensing is suddenly flexible and Oracle treats the maximum number of cores in the cluster as the high water mark that needs to be licensed. This is called soft partitioning. Operating systems like Solaris and AIX have hard partitions and virtualization engines like OracleVM and Xen provide hard partitions. Customers have traditionally solved this by running an Oracle instance on a single socket or dual socket system to limit the core count. This typically means that the most critical data is running on the oldest and slowest hardware to limit price. Alternatively they run the database on a full blade and license all cores in the blade. This typically causes a system to be overlicensed and underutilized. The admin might limit the core count to 8 cores but there could be 32 cores in the blade and all 32 cores must be licensed. Using a virtualization engine to limit the resources between database instances is not necessarily practical and does not offer fine enough resolution. Going with multiple ORACLE_HOME locations has been a growing trend since you have to license all of the cores based on current licensing policies.

Another big problem with the multiple ORACLE_HOME or multiple operating system approach is that you have multiple systems to manage and patch. If we use the 32 core system to run four instances of application databases we have four patches to make for the virtualization engine, the operating systems, and the databases. An optimum solution would be to run one operating system on all 32 cores and spread the four databases with one ORACLE_HOME across each and resource limit each instance so that they don't become a noisy neighbor for the other three. We can then use resource manager to assign shares to each instance and limit the processor, memory, and network bandwidth based on rules so that noisy neighbors don't stop us from getting our job done. We get our shares and can be the noisy neighbor if no one else is using resources.

With the 12c release of the database, Oracle introduced an option called multi-tenant. Let's think of a company like SalesForce.com. They don't spin up a new instance for each company that they do business with. They don't install a new ORACLE_HOME for each company. They don't spin up a new operating system and install a new database instance for each company. This would not make economic sense. A five person company would have to spend about $3K/month with SalesForce to cover just the cost of the database license. On the flip side, custom code must be written to prevent a user from company A from reading customer contact information belonging to company B. A much simpler way would be to spin up a pluggable database for company A and another for company B. No custom code is required since the records for the two companies are stored in different directories and potentially different disk locations. If we go back and look at our partitioning blog entry we notice that we have our data stored in /u02/app/oracle/oradata/ORCL/PDB1. The ORCL directory is the location of our container database. This contains all of the configuration information for our database. We define our listener at this location. We create our RMAN backup scripts here. We define our security and do auditing at this level. Note that we have a PDB1 subdirectory under this. This is our pluggable database for company A. We would have a PDB2 for company B and the system01.dbf file in that directory is different from the system01.dbf file located in the PDB1 directory. This allows us to create identically named users in both pluggables and not have a global name issue. With SalesForce all usernames must be unique because users are stored in a master database. I can not, for example, create a user called backupadmin that allows users to log in to company A and back up the data set if there is a user defined by that same name for any other company world wide. This creates script issues and problems. We can't create a single backup script that works across all companies and must create a unique user and script for each company.

The main concept behind the multi-tenant option is to allow you to run more databases on the same box and reduce the amount of work required to support them. By putting common tasks like backup and restore at the container level, all pluggables on the system are backed up in a central location but separated by pluggable container so that there is no data mingling. Data can be replicated quickly and easily without having to resort to backup and restore onto a new instance. The system global area (SGA) is common for the container database. Each pluggable container gets its own program global area (PGA) that manages I/O buffers, compiled sql statements, and cached data.

Note that we have one redo log and undo log area. As changes are made they are copied to a secondary system. We don't have to configure Data Guard for each pluggable instance but for the container database. When we plug an instance into a container it inherits the properties of the container. If we had a container configured to be RAC enabled, all pluggables in the database instance would be RAC enabled. We can use the resource manager in the container database to limit the shares that each pluggable instance gets and reduce the noisy neighbor overlap that happens in a virtual machine configuration. We also reduce the patching, backup, and overall maintenance required to administer the database instance.
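The share allocation mentioned here is done with a CDB resource plan in the container. A sketch using the workshop's pluggable names; the plan name, share counts, and utilization limits are made-up values for illustration:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'tenant_plan',
    comment => 'divide resources between pluggables');
  -- sales01 gets three shares and at most 70% of the CPU
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'tenant_plan', pluggable_database => 'sales01',
    shares => 3, utilization_limit => 70);
  -- crm01 gets one share and at most 40% of the CPU
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'tenant_plan', pluggable_database => 'crm01',
    shares => 1, utilization_limit => 40);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- activate the plan at the container level
ALTER SYSTEM SET resource_manager_plan = 'tenant_plan';
```

The shares only matter under contention; when no one else is busy, a pluggable can use the idle capacity, which is exactly the noisy neighbor behavior described above.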

To create a pluggable instance we need to make sure that we have requested the High Performance or Extreme Performance Edition of the database. The Standard Edition and Enterprise Edition do not support multi-tenant. It is important to note that to get this same feature on Amazon you can not use RDS because they prohibit you from using this option. You must use IaaS and go with Amazon EC2 to get this feature to work. Microsoft Azure does not offer the Oracle database at the platform level so your only option is Azure Compute.

The pluggable creation is simple and can be done from the command line through sqlplus. The 12c Database Documentation details this process.

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password;

or

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password ROLES=(DBA);

or more complex

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password
  STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M)
  DEFAULT TABLESPACE sales 
    DATAFILE '/disk1/oracle/dbs/salespdb/sales01.dbf' SIZE 250M AUTOEXTEND ON
  PATH_PREFIX = '/disk1/oracle/dbs/salespdb/'
  FILE_NAME_CONVERT = ('/disk1/oracle/dbs/pdbseed/', '/disk1/oracle/dbs/salespdb/');
Note that we can make the creation simple or define all of the options and file locations. In the last example we create the pluggable instance by cloning the existing pdbseed. In our example this would be located in /u02/app/oracle/oradata/ORCL. We would pull from the pdbseed directory and push into the salespdb directory. All three examples would do this but the third details all options and configurations.

When we create the instance from the sqlplus command line without specifying file locations, the database picks a directory name for the file system on our behalf, which is why we might want to use the more complex configuration. When we executed the simple form from the command line we got a long string of numbers for the directory name of our new pluggable instance called salespdb.

We could do the same thing through sql developer and have it guide us through the renaming steps. It prompts us for the new file name showing where the seed is coming from. We could have just as easily have cloned the salespdb and used it as our foundation rather than creating one from the pdbseed. We right click on the container database header and it prompts us to create, clone, or unplug a pluggable. If we select create we see the following sequence.
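The clone path that SQL Developer walks you through maps to a single SQL statement. A sketch reusing the file locations from the example above; the clone name salesdev is made up, and note that in 12.1 the source pluggable must be open read only while it is cloned:

```sql
-- make the source pluggable read only for the duration of the clone
ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE salespdb OPEN READ ONLY;

-- clone it, remapping the datafile directory
CREATE PLUGGABLE DATABASE salesdev FROM salespdb
  FILE_NAME_CONVERT = ('/disk1/oracle/dbs/salespdb/', '/disk1/oracle/dbs/salesdev/');
ALTER PLUGGABLE DATABASE salesdev OPEN;

-- return the source to read write
ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE salespdb OPEN;
```

Cloning from an existing pluggable rather than pdbseed is how you stamp out dev/test copies of a populated database.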

One thing that we did not talk about was economics. If you wanted to run multi-tenant on premise you need to purchase a database license at $47.5K per two processors and the multi-tenant option at $23K per two processors as well. This comes in at $70.5K for the license and $15,510 per year (22%) for support. Using our four year cost of ownership this comes in at roughly $2.8K per month for the database license ($70,500 plus four years of support, divided by 48 months). The High Performance edition comes in at $4K per month. Along with this you get about $5K in additional features like diagnostics, tuning, partitioning, compression, and a few other features that we have not covered yet. If you are going to run these options on Amazon or Azure you will need to budget that roughly $2.8K for the database license, and more if you want the other features, on top of the processor and storage costs for those cloud services. You should also budget the outgoing data charges that you do not have to pay for with the non-metered database service in the Oracle Cloud. Going with the multi-tenant option is cheaper than running the database on two servers and easier than running two ORACLE_HOME instances on the same machine. Going with the High Performance Edition gets you all of these options and offloads things like scale up, backup, initial configuration, and restart of services if a process fails.

In summary, multi-tenant is a good way of overloading services on a single server. The resource management features of the container allow us to dynamically change the allocation to a pluggable database and give more resources to instances that need it and limit noisy neighbors. With the High Performance edition and Extreme Performance Edition we get multi-tenant as a foundation for the service. Our primary interface to create a pluggable instance is either SQL Developer, Enterprise Manager, or sqlplus. We can easily clone an existing instance for a dev/test replica or export an instance and plug it into another system. We will look at this more in depth tomorrow.

Friday Jun 17, 2016

database options - review and looking forward

For the past two weeks we have been looking at database as a service (DBaaS) offered as platform as a service (PaaS) on the Oracle Public Cloud. We started this journey on the 2nd of this month by looking at the differences between Standard Edition, Enterprise Edition, High Performance Edition, and Extreme Performance Edition. To quickly review, Standard Edition is the basic database with table encryption as the only option. This is a full feature database without the ability to replicate data with any tool other than copying files and RMAN backup. You can't do things like transportable table spaces, streams, Data Guard, or any other replication technologies to make a system more highly available. Enterprise Edition is a more full featured database that allows for data replication from the installation and comes with Advanced Security (TDE) as a basis for the installation. This edition does not come with Data Guard but does have the option for transportable tablespaces, external references, and ways of replicating data manually from the database that Standard Edition does not contain. We then looked at the High Performance Edition which comes with
  • Transparent Data Encryption
  • Diagnostics
  • Tuning
  • Partitioning
  • Advanced Compression
  • Advanced Security
  • Data Guard
  • Label Security
  • Multitenant
  • Audit Vault
  • Database Vault
  • Real Application Testing
  • OLAP
  • Spatial and Graphics
We then looked at Extreme Performance Edition that contains all of the above plus
  • Active Data Guard
  • In Memory
  • Real Application Clusters (RAC)
  • RAC One
We then went into a simple description of each option and prepared for the following blogs that will go into more detail and code samples of not only what the options are but look at how to use them and try to tie them back to business benefits. Part of our description was a financial analysis of running a database in infrastructure as a service (IaaS) vs PaaS and the time and efficiency benefits that we get from PaaS over IaaS.

We wrapped up the week on the 3rd with a blog detailing what it takes to get a Windows desktop prepared to use a database in the cloud. The list of software is relatively generic and is not unique to Windows. We could just as easily have selected MacOSX or Linux but selected a Windows 2012 Server running in the Oracle Public Compute Cloud as IaaS. We did this primarily so that we would have a teaching platform that can be saved with a snapshot, reloaded for hands on classes, and accessible from a customer site to demonstrate cloud services. The software that we loaded on the Windows platform includes

  • To access cloud storage
    • Mozilla Firefox
    • RestClient extension for Firefox
    • Google Chrome
    • Postman extension for Chrome
    • CloudBerry for OpenStack (Windows only right now)
  • To access files in our instance
    • Putty
    • Filezilla
    • Cygwin (only needed on Windows)
  • To access our database instance
    • SQL Developer
    • Microsoft Visual C++ libraries (only needed on Windows)
We installed this on the Oracle Public Cloud because we have free accounts on this platform and we can keep them persistent. It would normally cost $150/month to keep this instance active if we purchased it as IaaS. We could just as easily have done this on Amazon EC2 or Microsoft Azure at a similar cost. We provisioned 30 GB of disk for the operating system and 10 GB for the binaries. We requested 2 vCPUs with 30 GB of RAM. If we were doing this on Amazon or Azure we probably would have gone for a smaller memory footprint, but this is the base configuration for IaaS and Windows with 2 vCPUs in the Oracle Public Cloud. The idea is that a class of 15-30 people can log into this system with different user names and do minimal configuration to get started on workshops. We can refresh the users but not refresh the system for classes the following day or week. To provision this instance we went to the Oracle Cloud Marketplace to get a pre-configured Windows 2012 Server instance. We then downloaded the list of software and installed it on the desktop.

On the 6th we dove into database partitioning to figure out that we can reduce storage costs and improve performance by fitting active data into memory rather than the years or months of data that we typically throw away with a select statement. We talked about using partitioning to tier storage on premise and how this makes sense in the cloud but does not have as much impact as it does on premise. We talked about different partitioning strategies and how it can be beneficial to use tools like Enterprise Manager and the partition advisor to look at how you are accessing the data and how you might partition it to improve performance. On the 7th we looked at code samples for partitioning and talked about table extents and file system storage. We talked about the drawbacks to Amazon RDS and how not having file system access, having to use custom system calls, and not having sys access to the database cause potential issues with partitioning. We walked through a range partition example where we segmented the data by date and stored the different dates in different tablespaces.
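The range-partitioning idea described above can be sketched outside the database. This toy Python version routes rows to date-ranged buckets the way PARTITION BY RANGE does; the partition and tablespace names are made up for illustration:

```python
from datetime import date

# Toy range partitioning: each row lands in the first partition whose
# upper bound is greater than the row's date, mirroring PARTITION BY RANGE.
partitions = [
    (date(2015, 1, 1), "sales_2014"),   # could map to slower, cheaper storage
    (date(2016, 1, 1), "sales_2015"),
    (date.max,         "sales_2016"),   # like a MAXVALUE catch-all
]

def partition_for(row_date):
    for upper_bound, name in partitions:
        if row_date < upper_bound:
            return name
    raise ValueError(f"no partition for {row_date}")

print(partition_for(date(2015, 6, 1)))   # sales_2015
```

The payoff in the database is the same as in this sketch: a query constrained to one date range only has to touch one bucket, and cold buckets can sit on cheaper storage.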

On the 8th we focused on database compression. This in conjunction with partitioning allows us to take older data that we typically don't access and compress it for query or historical storage. We talked about the different compression methodologies

  • using historic access patterns (heat map and ADO options)
  • using row compression (by analyzing update and insert operations as they occur)
  • file compression (duplicate file links and file compression of LOBs, BLOBs, and CLOBs)
  • backup data compression
  • Data Guard compression of redo logs before transmission
  • index compressions
  • network transmission compression of results to client systems
  • hybrid columnar compression (Exadata and ZFS only)
  • storage snapshot optimization (ZFS only)
We did not really dive into the code for compression but referred to a variety of books and blogs that have good code samples. We did look at the compression advisor and talked about how to use it to estimate how your mileage could potentially vary. On the 9th we dove into an Oracle by Example tutorial on compression and followed the example using DBaaS. The examples that we followed were for an 11g instance but could have been done in a 12c instance if we had the demo tables installed on the 12c instance.

On the 10th we focused on database tuning options and dove into how to use the SQL Tuning Advisor. In the example that we used we referenced an external table that was not part of the select statement, which caused an unnecessary table index and full table scan. The example we used was again for 11g to utilize the sample database that we have installed but could just as easily have worked with 12c. On the 13th we dove a little deeper into tuning with a focus on Enterprise Manager and the Performance Advisor utilities. We followed the Oracle by Example Tuning Tutorial to diagnose a performance problem with a SQL statement.

On the 14th we looked at transparent data encryption (TDE) and how to enable and disable table encryption in the cloud. We talked about the risks of not encrypting by default and tried to draw lessons from the Target Corporation breach, where failure to protect credit card data with encryption led to job losses across the company.

On the 15th we looked at the backup and restore utilities in the cloud and how they differ from traditional RMAN utilities. You can use RMAN just like you do today and replicate your backup and restore as you do today but there are automation tools that monitor RMAN and kick off alternate processes if the backup fails. There are also tools to help restore for those unfamiliar with RMAN and the depths and details of this powerful package.

Today we are reviewing what we have done in the last week and a half and looking forward to the next week or two. We are going to finish out the database options. On Monday we are going to dive into multi-tenant and talk about pluggable databases. This discussion will probably spill over into Tuesday, with an overview on Monday and code samples and demos on Tuesday. We will need to use a 12c database since this is a new feature that is specific to 12c only. We might split our code samples into using SQL Developer to clone and manage PDBs on Tuesday and cover the same functionality with Enterprise Manager on Wednesday. Following this discussion we will do a high-level discussion of Data Guard and look at the change log and log replication strategies that can be used for physical and logical replication. The following days we will look at code samples and configurations from the command line, Enterprise Manager, and SQL Developer. We will look at what it takes to set up a primary on premise and a standby in the cloud. We will also look at what is required to have both in the cloud and what it takes to flip primary and standby to emulate a failure or maintenance action, then flip the two back.

Once we cover Data Guard we will be in a position to talk about Real Application Testing. In essence Data Guard copies all of the writes that happen on the primary and replays them on the standby. Real Application Testing records the reads as well and replays the reads and writes to help measure performance differences between configurations. This is good for code change testing, patch testing, configuration change testing, and other compare/contrast changes to your production system in a safe environment.

Once we finish the high availability and data replication options we will dive into the OLAP and Spatial options. OLAP reorganizes the data for data warehouse analysis and Spatial allows you to run geographic select statements like "show me all crimes that happened within a mile of this address." Both optimize select statements for specific classes of workload.
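The "within a mile" query that Spatial answers with operators such as SDO_WITHIN_DISTANCE boils down to great-circle distance math. A toy Python version, with made-up crime coordinates, shows the underlying calculation:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# Hypothetical crime records: (id, lat, lon)
crimes = [(1, 30.2672, -97.7431), (2, 30.2800, -97.7400), (3, 30.40, -97.60)]
home = (30.2672, -97.7431)

# "Show me all crimes within a mile of this address"
nearby = [c for c in crimes if haversine_miles(*home, c[1], c[2]) <= 1.0]
print([c[0] for c in nearby])   # [1, 2]
```

Spatial does this with spatial indexes so the database does not have to compute the distance to every row, which is the real win over a hand-rolled filter like this one.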

We will wrap up our coverage by looking at Audit Vault and Database Vault. Both of these options are additional levels of security that not only help us protect data but restrict and track access to data. Many financial and healthcare institutions require interfaces like this to show separation of duty as well as traceability to see who accessed what when.

Once we finish the High Performance Edition options we will dive into the Extreme Performance Edition options looking at Active Data Guard, In Memory, RAC and RAC One. Going through all of the options will probably take us through the month of June. We will probably look at the other PaaS options listed in cloud.oracle.com starting some time in July and see how they relate or differ from the DBaaS services that we are currently covering.

Thursday Jun 16, 2016

database option - backup and restore

Backup and recovery abilities are arguably the most critical skills required of a database administrator. Recovery Manager (RMAN) is Oracle’s standard backup and recovery tool; every Oracle DBA should be familiar with utilizing RMAN. Some DBAs use alternate tools since RMAN is an Oracle-specific tool for backing up data in a database. Alternatives include Veritas Backup Exec, Commvault Simpana, Legato NetWorker, EMC and NetApp tools, and other packages. I am not going to list books and software packages in this blog. A Safari Books search for RMAN turns up 9 books published in 2016, 16 in 2015, and 20 in 2014. There are also a ton of blogs, so I suggest that you go with your favorite blog and search for rman or backup. There are hundreds of different ways to back up a database and restore it, as well as to optimize how much space the database takes up in cold storage.

The important things to consider when you look at backup and recovery are

  • full or incremental backups
  • backing up to disk, tape, or cloud
  • replicating data to another site with disk mirroring, Data Guard, or Golden Gate
  • hot backup or cold backup along with middleware and file system backup synchronization
  • recovery point objective and recovery time objective
  • compression, deduplication, and encryption of data at rest
  • backup windows and restore windows

It is important to note that when you purchase DBaaS, independent of edition, backup is done for you based on the options you select at creation. When you create a database you can opt for no backup, local backup, or full backups. The no backup option can be used for development and test instances. We might just want a sandbox to play in and don't care about keeping the data; if we lose the data for any reason we can recreate it from our production system. When you select local backups you get incremental backups daily at 2am and a full backup Sunday morning at 2am. This gives you a seven day window for recovery so that you can restore data back to your last full backup plus daily incrementals. These backups go to the /u03 file system on the instance that you create. Nothing is copied off the instance, so a complete failure of the system could result in data loss. The full backup option does an incremental to /u03 daily and a full backup Sunday morning at 2am to /u03 as well. The full backup is also copied to the Cloud Object Storage container that you created before creating the database. When you create the database you specify the number of days that you want to retain backups. If, for example, you ask for a 90 day retention window you get 13 full backups copied to object storage. If you have a Java Cloud Service connected to this database, the Java configuration and war files are also copied to this same area. The backup is automatically done for you and can be reconfigured and run manually using the cloud-specific commands. Refer to the Using Oracle Database Cloud - Database as a Service documentation (chapter 6) to understand how to back up using the cloud commands to keep backups in sync with the automated utilities rather than running RMAN manually.
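The retention arithmetic above, one full backup per week covering a 90-day window, is just a ceiling division. A quick sketch (this is the arithmetic, not the cloud scheduler's actual logic):

```python
import math

def weekly_fulls_retained(retention_days, full_every_days=7):
    """Full backups kept to cover a retention window, assuming one
    full per week as described above. A sketch of the arithmetic only."""
    return math.ceil(retention_days / full_every_days)

print(weekly_fulls_retained(90))   # 13, matching the 90-day example
```

The same division tells you roughly how much object storage to budget: number of fulls times the compressed size of one full backup, plus the incrementals in between.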

You can generate a backup before you make a change to the database with the following command

sudo /var/opt/oracle/bkup_api/bkup_api bkup_start
This command must be executed from the opc account and not the oracle account because it needs root roles to store data and update the logs associated with cron jobs. To restore the database from the last backup you execute the following command
sudo dbaascli orec --args -latest
You can list the database backups that exist with
sudo dbaascli orec --args -list
To restore from a specific backup you execute
sudo dbaascli orec --args -pitr backup-tag
where backup-tag is the name listed with the -list command.

The dbaascli command shuts down the database, extracts and restores configuration files, prepares for recovery, performs the recovery, and restarts the database instance after recovery. You can use the rman utility to restore individual tables or tablespaces as usual but tools exist to manage the rman repository that are accessible to do scheduled backups, full backups, and restores.

One of the nice features of RMAN is that you can duplicate a database to an alternate location and restore from an alternate location. For example, you can take a backup of your on premise database and write to the Oracle Object Cloud Service. You can then create a database in the cloud using the DBaaS and load the data into this database from your backup in cloud object storage. You can also provision an on premise database, execute replication commands to configure the second database as a Data Guard replica, initiate the backup of your on premise instance, initiate the restore on your cloud instance, and configure Data Guard to maintain the log shipping, and kick off the Data Guard service on both instances. We will cover this in an upcoming blog.

Using DBaaS does get rid of the discussion of snap mirror backups. This is where you shut down the database, copy the tablespace .dbf files to another database, and ingest the dbf files into the second database. Many users like this method for backup and recovery because it works for MySQL and PostgreSQL. It is not a recommended backup and restore mechanism for an Oracle database. Synchronization issues can result if changes are being made and data is partitioned across multiple dbf files. Tools like EMC or NetApp mirroring software work well with VMware to clone data between data centers, but this technology does not extend to the cloud, and data replication is more difficult if the source and target are from different hardware vendors. The Oracle database also has problems ingesting this data at times and fails to start up from mirror instances if file consistency is not maintained between the system.dbf, user.dbf, and other tablespace files.

Data compression is also a tricky issue. The Oracle database has a compression option that it uses to compress data to minimize disk usage in the tablespace files. If you take this compressed dbf file and then try to compress it again, you typically just consume excess processor cycles, and it can take up to eight times as long to restore the data once it is doubly compressed. It is important to look not only at your backup window but at your restore window. It might take an hour to back up and compress your data but up to 8 hours to restore it. One of the key performance benefits that Oracle brings to the table is using ZFS storage as a back end for the database. Since the database knows what the ZFS storage appliance can and can't do, it can offload the compression processing and algorithms to the storage appliance. Unfortunately, this does not work for other storage vendors. The cloud database instance uses this to its advantage so data is automatically compressed when copies are made to object storage. The compression does not take processing power from the database engine but is offloaded to the storage engine.

You can customize the backup definition and parameters. All of the files are located in /var/opt/oracle/ocde/assistants/bkup/ and the bkup.cfg file defines how backup operates. It is important to note that there is a different command line set of tools if your database is a RAC configuration since you have to reconfigure files on both systems that comprise your cluster but they both use this directory and file to configure the backup process, timing, and procedures.

In summary, database backup is part of database as a service at all levels. The foundation tool that is used is RMAN. There are command-line API tools that allow you to back up and restore without having to learn RMAN. If you want to reconfigure and drill down into the depths and bowels of RMAN you can do that too. The backups are automatically encrypted using the wallet that we talked about a couple of days ago. You can also use Enterprise Manager to run RMAN for your cloud service just like you do with your on premise database instances. The important thing to remember is that you don't need to learn RMAN unless you want to. If you deploy a database on infrastructure as a service you have to learn RMAN because it is not automated or configured for you. If you use Amazon RDS, backups are done by doing a disk clone to another zone. You cannot set up Data Guard replication to the RDS instance because you don't have file system access to copy the change logs from your on premise database to the RDS storage. You can do this with EC2 and S3 but you lose the automated configurations and services that you get with database as a service. You will also need to purchase Advanced Security to do secure backups to IaaS instances from all cloud vendors, but not with DBaaS or PaaS from Oracle.

Wednesday Jun 15, 2016

database option - diagnostics

Keeping a database tuned is a full time job. Automating some of these tasks helps a DBA support more databases and reduces the time required to generate the data. Automatic report generation is a good way of getting these reports. One of the questions I constantly get asked is what is included with PaaS or DBaaS and what is done for me. With DBaaS, database tuning and diagnostics are not part of the services provided. The DBA still needs to look for processes that are holding locks, for runaway SQL statements, and for alternate execution plans and SQL tuning opportunities to make the database run faster. The Diagnostics Pack is a key tool to help with this. The services that are included with DBaaS are database and operating system patching, making sure that the database is restarted if it stops (unless you issue a shutdown command), and automated backups that you can tweak and tune in frequency and amount of data stored. Tools like ADDM, ASH, and AWR are still driven by the DBA and can be accessed from the command line through sqlplus, using SQL Developer, or Enterprise Manager. In 10g, many diagnostic tools like ASH and AWR were embedded into the database. In 11g they were automated to collect the data into a central location. In 12c reports were automated so that DBAs did not need to schedule the jobs and generate reports late at night to look at in the morning. Many of these features started in Enterprise Manager but were migrated into the database. There was some controversy with the 10g release because the collection did impact performance compared to the 9i release, but that seems to have gone away with the 11g and 12c releases. An overall architecture of the performance collection can be seen in the diagram below.

The key features to the Diagnostic Pack for the database include

  • Active Session History (ASH)
  • Automated Workload Repository (AWR)
  • Automatic Database Diagnostic Monitor (ADDM)
  • Enterprise Manager Performance reporting
  • SQL Developer Performance reporting

More information on all of these topics can be found in a variety of locations, starting with the Oracle Database documentation.

ASH

ASH statistics are enhanced to provide row-level activity information for each captured SQL statement. This information can be used to identify which part of the SQL execution contributed most significantly to the SQL elapsed time. The ASH views, accessible to DBAs, provide historical information about the active sessions in the database.

AWR

The Automated Workload Repository (AWR) reports provide baseline statistics for the database and show the database performance during specified intervals. A baseline contains performance data from a specific time period that is preserved for comparison with other similar workload periods when performance problems occur. Several types of baselines are available in Oracle Database: fixed baselines, moving window baselines, and baseline templates.
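The baseline-comparison idea is simple to sketch: preserve a known-good window of a metric and flag later windows that deviate from it. A toy version with hypothetical DB time samples (this is the concept only, not AWR's actual statistics):

```python
from statistics import mean

# Hypothetical metric windows, e.g. DB time per call in milliseconds.
baseline_db_time = [105, 98, 110, 102, 95]   # preserved known-good period
current_db_time = [180, 175, 190, 168, 185]  # window under investigation

def regressed(baseline, current, threshold=1.5):
    """Flag the current window if its mean exceeds the baseline mean
    by more than the threshold factor. A sketch of the comparison idea."""
    return mean(current) > mean(baseline) * threshold

print(regressed(baseline_db_time, current_db_time))   # True
```

AWR does far more (per-statistic baselines, moving windows, adaptive thresholds), but the comparison of a current window against a preserved baseline is the core of it.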

ADDM

DBAs and developers can use ADDM to analyze the database and generate recommendations for performance tuning. In Oracle 11g, ADDM was enhanced to perform analysis on database clusters on various levels of granularity (such as database cluster, database instance, or specific targets) over any specified time period.

You can access ADDM through SQL Developer or Enterprise Manager. To access these functions you must first enable the Diagnostics Pack which allows you access to the reports.

You can manually run the ADDM report with a command line

@?/rdbms/admin/addmrpt.sql
If you look at SQL Developer and go to the DBA navigation link and expand the database for Performance you can see the AWR and ADDM reports. Expanding on these links shows you the various reports. For the ADDM, for example, you can quickly see if there is a recommendation or not and drill down into the recommendation.

If we click on one of the findings with a Yes in the recommendation column we can look at the report and the recommendations that it has. In our example it had two suggestions for tuning. Below are samples of this report and the two recommendations.

We can look at similar information from Enterprise Manager by navigating to the Performance page and selecting the report that we want.

A typical AWR report contains an incredible amount of information about an Oracle database’s application workload behavior. When a database instance is suffering from a gc buffer busy wait event during the time period chosen for the AWR report, however, that event will usually surface as one of the Top 5 Timed Events.

With AWR you can create a baseline and look at spot performance or variation from a baseline. Page 21 of Diagnostic Pack Presentation does a good job of describing how to read and understand an AWR Report. The blog Bash DBA does a good job of walking through an AWR and looking for issues and problems in a system.

In summary, we are not going to dive deep into AWR and ADDM diagnostics. Knowing how to do this differentiates a highly paid DBA from a junior DBA. There are classes through Oracle Education - 12c Performance Management and Tuning - and other vendors that teach you how to use this option, as well as the books we mentioned above and certification exams to help show that you know this material. It is important to note that all of these tools work with platform as a service (Oracle and Amazon RDS) as well as infrastructure as a service and on-premise installations. The key difference is that the diagnostic and tuning tools are bundled with the High Performance and Extreme Performance Editions. For IaaS and on-premise you need to purchase a perpetual license, as we discussed a few blogs ago.

Tuesday Jun 14, 2016

database options - advanced security

Advanced Security and Transparent Data Encryption (TDE) stops would-be attackers from bypassing the database and reading sensitive information from storage by enforcing data-at-rest encryption in the database layer. Earlier, when we talked about partitioning and compression, we talked about tablespace files and how to manage them. If these files are stored in clear text, anyone who can access our file system could theoretically read the dbf file and do a clear text search. If, for example, we have a credit card number "1234 5678 9012 3456", we could find this string with a strings, grep, or od (octal dump) command. If we don't know the credit card number we can just do a dump on the data and see the values stored in the database. With enough trial and error we can figure out what the structure of the database might be and correlate names to credit card information. By default, all data stored in the cloud has TDE enabled. This is configured in the $ORACLE_HOME/network/admin/sqlnet.ora file. If you look at this file, you will see that ENCRYPTION_WALLET_LOCATION is defined, as is SQLNET.ENCRYPTION_TYPES_SERVER. When the database is created, a wallet is generated based on your ssh keys, allowing you to access your data. In the database that we created, the wallet files are in /u01/app/oracle/admin/ORCL/tde_wallet. There is a file called cwallet.sso that is used any time someone connects to the database through port 1521 or through the sqlplus command. If we rename this file to something else, we can connect to the database and create clear text files. Note that it is not recommended that you do this; we are doing it to highlight the differences between IaaS and PaaS as well as the need for data encryption in the cloud. With a database installation on top of compute, as is done with EC2, Azure Compute, and Oracle Compute, we have to enable TDE, configure the wallet, and configure cwallet.sso ourselves. With PaaS, this is all done for you, and you can manage the keystore and rotate key access.

Note that it is not recommended that you execute these commands. They will desecure your database and allow for clear text storage and transmission of data across the internet. We are doing this as an example.

cat $ORACLE_HOME/network/admin/sqlnet.ora
cd /u01/app/oracle/admin/ORCL/tde_wallet
ls
This should allow you to see the cwallet.sso file which enables automatic wallet connection upon login. If we want to change encryption we can first look at the parameter encrypt_new_tablespaces and see that it is set to CLOUD_ONLY, which encrypts everything. We want to change this to DDL, which says that we only encrypt if we tell it to. We first want to create a tablespace and a banking user with encryption turned on. This is done with
sqlplus / as sysdba
alter session set container=PDB1;
drop tablespace banking including contents and datafiles;
create tablespace banking datafile '/u02/app/oracle/oradata/ORCL/PDB1/banking.dbf' size 20m;
We then create a banking user as well as a paas_dba user. One is used to create an encrypted table and the other is to create a clear text table
drop user banking;
create user banking identified by "PaasBundle_1" default tablespace banking; 
grant create session to banking;
grant unlimited tablespace to banking;
drop user paas_dba;
create user paas_dba identified by "PaasBundle_1";
grant create session to paas_dba;
grant dba to paas_dba;

We now connect to the PDB1 as the banking user and create a table in our encrypted tablespace

connect banking/PaasBundle_1@PDB1
create table banking_customers (first_name varchar(20), last_name varchar(20), ccn varchar(20)) tablespace banking;
insert into banking_customers values('Mike','Anderson','5421-5424-1451-5340');
insert into banking_customers values('Jon','Hewell','5325-8942-5653-0031');
commit;
alter system flush buffer_cache;
select ccn from banking_customers;
This should create a table with two entries. The table will be encrypted and stored in the banking.dbf file. If we do a string search on this file we will not find the credit card number starting with 5421. Now that we have an encrypted table created we need to reconnect to the database and disable encryption on new tablespaces. To do this we change the parameter from CLOUD_ONLY to DDL as sysdba.
sqlplus / as sysdba
show parameter encrypt_new_tablespaces;
alter system set encrypt_new_tablespaces=DDL SCOPE=BOTH;
We can now create a new tablespace and the contents are only encrypted if we pass in the encrypt statement with the create statement. The tablespace will not be encrypted by default. We do this operation as paas_dba who has dba rights to create a tablespace and table.
connect paas_dba/PaasBundle_1@PDB1;
drop tablespace bankingCLEAR including contents and datafiles;
create tablespace bankingCLEAR datafile '/u02/app/oracle/oradata/ORCL/PDB1/bankingCLEAR.dbf' size 20m;
create table banking_Clearcustomers tablespace bankingCLEAR as select * from banking.banking_customers;
select ccn from banking_Clearcustomers;
We should get back two entries from both select statements and see two credit card numbers. We can then exit sqlplus and look at the banking.dbf and bankingCLEAR.dbf files to see if we can lift credit cards. By executing the
strings bankingCLEAR.dbf | grep 5421
strings banking.dbf | grep 5421
we see that we get the credit card number from the bankingCLEAR.dbf file; the data is stored in clear text. It is important to remember that all data should be encrypted in motion and at rest. We need to change encrypt_new_tablespaces back to CLOUD_ONLY moving forward. By default we connect to the data using the encryption wallet. We can disable this as well as turning off default encryption.
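The strings/grep experiment above can be modeled in a few lines. The single-byte XOR here is a stand-in for illustration only — TDE uses real AES under the covers — but the effect on a byte search is the same:

```python
# Toy model of the dbf search above: find a card number in clear bytes,
# then show it vanishes once the block is encrypted. The XOR "cipher" is
# purely illustrative; real TDE encrypts tablespace blocks with AES.
ccn = b"5421-5424-1451-5340"
clear_block = b"\x00\x10Mike\x00Anderson\x00" + ccn + b"\x00\x00"

def toy_encrypt(data, key=0x5A):
    return bytes(b ^ key for b in data)

encrypted_block = toy_encrypt(clear_block)

print(ccn in clear_block)       # True  -- what strings | grep finds
print(ccn in encrypted_block)   # False -- pattern gone from the raw bytes
```

This is the whole point of encrypting at rest: someone who lifts the raw file gets bytes that no string search or pattern scan can correlate back to names and card numbers.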

In summary, we have looked at what it takes to encrypt data at rest in the cloud. By default it is turned on and we don't have to do anything with platform as a service. If we are using infrastructure as a service we need to purchase the Advanced Security option, turn on tablespace encryption by changing a parameter, enable the wallet for our login, and install keys for the wallet. Platform as a service provides levels of security typically above and beyond what most customers have in their data center. The credit card loss that happened at Target a few years ago happened because they owned the Advanced Security option for the database but did not turn the feature on. An outside consultant (working on something non-IT related) got access to shared storage and pulled the dbf files onto a USB drive. They were able to get thousands of credit cards from the data center, costing the CIO and IT staff their jobs. Today we learned how to turn off TDE in the cloud to illustrate what it takes to turn it on in your data center, and we looked at the simplicity of pulling data from a dbf file if we know the data we are looking for. We could just as easily have looked for number patterns and people's names and correlated the two as valid credit card numbers.

I would like to thank the GSE team at Oracle for doing 90% of the work on this blog. I liberally hijacked the demo scripts and most of the content from demo.oracle.com, PaaS - Data Management (Solutions Demo) by Brijesh Karmakar. The GSE team creates demo environments and scripts for specific products. This demo is an Oracle internal demo and can be requested from your pre-sales consultant in your area. I took the demo code from the Database Vault and Encryption_demo_script written on 08-Jun-2016. I have to admit that looking for demo scripts on demo.oracle.com and searching for TDE was very opportune given that I wrote this blog on the 9th and it was published on the 8th.

Monday Jun 13, 2016

database option - tuning part 2

In our last entry we looked at the results of a sql tuning advisor run. We used SQL Developer to execute our code and create a tuning advisory for the code that we executed. We could have gone through Enterprise Manager and done the same thing, but against historic data rather than live data. In this blog entry we will analyze the same results using the Enterprise Manager Express that comes with database as a service in the Oracle Cloud. To connect to this service we first need to open up the network to enable connections to port 1158. This is done through the database console, or we could tunnel port 1158 over ssh to our database target ip address.
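Before pointing a browser at the console, it can be handy to confirm the port is actually reachable after changing the network rules. A minimal Python check (the address below is just the example ip used in this post; substitute your own):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, unreachable hosts, and timeouts.
        return False

# Hypothetical example: check the console port before opening the browser.
# port_open("129.152.134.189", 1158)
```

If this returns False, revisit the access rules in the database console (or your ssh tunnel) before troubleshooting the console itself.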

Once we have access to port 1158 we can connect to the Enterprise Manager Express by taking the ip address of our server, in this instance 129.152.134.189, which we got from the database console, and connecting to https://129.152.134.189:1158/em. Note that we might get a security exception since the certificate is self-signed. We need to add an exception and connect to this service. When we are prompted for a login we connect as sys with sysdba rights. Note that we cannot do this on Amazon RDS since we do not have sys access to the database in that service.

When we click on the Performance link at the top of the screen we can look at the overall performance of our system and drill down in to reports to get more information.

If we scroll down to the bottom we see a link called Advisor Central. We can follow this link and look at all of the tuning advisors that have been run and examine the results.

We can select a previously run tuning advisor and look at the recommendations that we did from SQL Developer. When we dive into the report we get a little different style report.

Note that the SQL profile recommends a 19.8% savings if we accept the profile. If we click on Recommendations and expand the information as a table rather than a graph, we see that the pk_dept reference takes up a good amount of time; since it is never actually referenced, removing it will speed up the select statement. If we click on the compare explain plans link we can see how much of a speedup we will get if we implement the new plan. What I don't see here is the recommendation to drop the dept d reference that we got from SQL Developer.

Note that the recommendation does state "Consider removing the disconnected table or view from this statement or add a join condition which refers to it" but does not specifically recommend removing dept d from the select statement as is done in SQL Developer.

If we wanted to expand upon use of the tuning advisor we could follow the Oracle by Example Tuning Tutorial and look at how to initiate tuning advisor runs through Enterprise Manager. The first thing done in this tutorial is to enable nightly tuning tasks by going into the server settings and enabling the Automated Maintenance Tasks. You first click the Configure button, then click the Configure Automatic SQL Tuning button. After you change Automatic Configuration of SQL Profiles to yes and click Apply, you get a checkbox window to select the days on which to run the tuning tasks.

Once we have this defined we can execute the following code as sys

set echo on

drop user ast cascade;
create user ast identified by ast;
grant dba to ast;
alter system flush shared_pool;
--
-- Turn off AUTOTASK
--
alter system set "_enable_automatic_maintenance"=0;
--
-- Clear out old executions of auto-sqltune
--
exec dbms_sqltune.reset_tuning_task('SYS_AUTO_SQL_TUNING_TASK');
--
-- Drop any profiles on AST queries
--
declare
  cursor prof_names is
    select name from dba_sql_profiles where sql_text like '%AST%';
begin
  for prof_rec in prof_names loop
    dbms_sqltune.drop_sql_profile(prof_rec.name);
  end loop;
end;
/
This creates a user ast. I recommend changing the password to something more complex.

We can then run a series of malformed select statements to generate some synthetic load to report upon and correct. Note that we do this as the ast user and not sys.

set echo off

select /*+ USE_NL(s c) FULL(s) FULL(c) AST */ c.cust_id, sum(s.quantity_sold) from sh.sales s, sh.customers c where s.cust_id = c.cust_id and c.cust_id < 2 group by c.cust_id;
-- The original script repeats this exact statement roughly 90 more times so that
-- the automatic tuning task has enough executions of the same statement to analyze.

Once we have the workload created we can kick off the sql tuning advisor with some different code.

set echo on

exec dbms_workload_repository.create_snapshot;

variable window varchar2(20);

begin
 select upper(to_char(sysdate,'fmday'))||'_WINDOW' into :window from dual;
end;
/

print window;

--
-- Open the corresponding maintenance window, but with other clients disabled
--

alter system set "_enable_automatic_maintenance"=1
/

exec dbms_auto_task_admin.disable( -
  'auto optimizer stats collection', null, :window);

exec dbms_auto_task_admin.disable( -
  'auto space advisor', null, :window);

exec dbms_scheduler.open_window(:window, null, true);

--
-- Close the maintenance window when sqltune is done
--

exec dbms_lock.sleep(60);

declare
  running number;
begin

  loop
    select count(*)
    into   running
    from   dba_advisor_executions
    where  task_name = 'SYS_AUTO_SQL_TUNING_TASK' and
           status = 'EXECUTING';

    if (running = 0) then
      exit;
    end if;

    dbms_lock.sleep(60);
  end loop;

  dbms_scheduler.close_window(:window);

end;
/

alter system set "_enable_automatic_maintenance"=0
/

--
-- Re-enable the other guys so they look like they are enabled in EM.
-- Still they will be disabled because we have set the underscore.
--

exec dbms_auto_task_admin.enable( -
  'auto optimizer stats collection', null, :window);

exec dbms_auto_task_admin.enable( -
  'auto space advisor', null, :window);

Note that this executes with some errors but still generates a good sql tuning advisor report. If we look back at the Advisor Central we can dive into the report and see what happened.

We can get an 8 second speedup by restructuring the sql select statements. This might or might not be worth tuning based on how many times we execute the code.

In summary, we have alternate ways of looking at sql tuning as well as a way of looking at historic data. We turned on automatic tuning reports, which does consume more resources, but if we have extra cpu cycles we can benefit from the reports. The Enterprise Manager Express that comes with database as a service is a very powerful tool. It is not a centralized utility like a central Enterprise Manager, but it can be used to automate and record reports for a single database. This console is only installed with platform as a service and must be manually added and configured if you install your own database on top of infrastructure as a service. Having this common management interface is a huge benefit to DBAs who are asked to manage and maintain instances in the cloud. The Enterprise Manager used in the cloud is the exact same version that is used for an on-premise system. If you choose to install and configure a central Enterprise Manager server, you can attach to instances in your data center as well as instances in the cloud. The only requirement is that you have file level access and sys/root access to install the agent.

Friday Jun 10, 2016

database option - tuning

Today we are going to look at using the diagnostics and tuning package that comes with the High Performance and Extreme Performance Editions of the database. We do not get these options with the Standard Edition or Enterprise Edition and if we use Amazon RDS, EC2, Oracle IaaS, or Microsoft Azure Compute to use the tuning option we must bring along a license for the options. Diagnostics are licensed at $150 per named user or $7,500 per processor. This correlates to $294 per processor per month. Options like the SQL Tuning Advisor and Automatic SQL Tuning are part of the Tuning pack option. Tuning pack is $100 per named user or $5,000 per processor. This comes in at $196 per processor per month if we use the four year amortization that we talked about last week.
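The per-month numbers above come from the four-year amortization model used earlier in this series. As a sketch of the arithmetic (the 22% annual support rate is my assumption for that model, not an Oracle list price):

```python
def monthly_cost(license_per_proc, support_rate=0.22, years=4):
    """Amortize a perpetual per-processor license plus annual support over N years."""
    total = license_per_proc + license_per_proc * support_rate * years
    return total / (years * 12)

print(round(monthly_cost(7500)))  # Diagnostics pack, per processor per month
print(round(monthly_cost(5000)))  # Tuning pack, per processor per month
```

This reproduces the $294 and $196 figures quoted above.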

There are three ways to look at the SQL Tuning Advisor. We can use Enterprise Manager on a central site and analyze historic data from days, weeks, and months back. Unfortunately, we cannot use this in conjunction with Amazon RDS. We can use the Enterprise Manager Express which is part of the database and gives you three hours of history of database performance. Again, we cannot use this in conjunction with Amazon RDS; these features are disabled as part of the Amazon installation. We can use SQL Developer to connect to the database on all platforms. This allows us to pull down real time diagnostics and look at live database performance. We will go through an Oracle by Example SQL Tuning Advisor Tutorial that details how to enable and use the tuning advisor packs. The database version that we will be using is 11g. The same steps should work with 12c because the features have not changed, and SQL Developer presents a common user interface for SQL tuning across the two versions.

The first step that we have to do is find out the ip address of our 11g database. We do this by going to the database console and looking at our instance detail.

We then create a connection to the database with SQL Developer. This is done first as the sys user with sysdba rights, connecting to the ORCL instance at the ip address of the database. We can verify that we are connected to a High Performance Edition by issuing a select statement against the v$version view.

select * from v$version;

Before we can execute step 8 in the Tuning Advisor Tutorial we must enable the user scott and set a password for the account. To do this we expand the Other Users selection at the bottom left of the screen, find the user scott, and enable the account while setting the password.

We can now connect to the 11g instance and give user scott permission to attach to the sql resources with the commands

grant advisor to scott;
grant administer sql tuning set to scott;

We then clear the existing statistics to make sure we are not looking at old artifacts but at what we are about to execute. This is done by executing

exec DBMS_STATS.DELETE_SCHEMA_STATS ('scott');

At this point we switch over to the user scott and execute a select statement

select sum(e.sal), avg(e.sal), count(1), e.deptno from dept d, emp e group by e.deptno order by e.deptno;

We can launch the SQL Tuning Advisor from the icon at the top of the screen. This opens a new tab next to the resulting output from the select statement.

The output from the tuning advisor has four parts. We can look at the statistics that were gathered, look at suggested indexes, sql profile, and restructuring statement recommendations. The index output did not say anything but the other three had recommendations.

The restructuring statement suggests that we remove the dept d definition since we really are not using it in the select statement. We then execute the following modified command

select sum(e.sal), avg(e.sal), count(1), e.deptno from emp e group by e.deptno order by e.deptno;

When we rerun the command without the dept d in the select statement we get a clean output from the SQL Advisor.

In summary, we can use Enterprise Manager, Enterprise Manager Express, or SQL Developer to run the tuning advisor. We walked through a simple example of how to do this with SQL Developer on a single select statement. We walked through the SQL Developer because it works on all cloud platforms and the Enterprise Manager solutions do not work well with Amazon RDS. With these tools we can dive into SQL performance issues, tune the database, and optimize the cloud system to utilize fewer resources and cost us less money. If we can reduce the processor count by a couple of processors that more than pays for the cost of the High Performance Edition incremental cost over the Enterprise Edition.

Thursday Jun 09, 2016

database option - compression part 2

Yesterday we looked at the different compression options that are available for a database. Today we are going to walk through an example. The example comes from Oracle by Example - Compression. This is a hands on tutorial that has you execute code in an 11g database. Note that you must create this database as a High Performance or Extreme Performance database. If you create a Standard Edition or Enterprise Edition the execution will fail with an option not available error as we saw with partitioning a couple of days ago.

To start, we create an 11g database in the Oracle Public Cloud. We create the instance, wait an hour or so, change the network configuration to open port 1521 to the public, and connect using sys as sysdba to the instance. We are going to use SQL Developer in our Windows 2012 instance to make the connection. To get the connection information, we go to the database console and get the ip address of the instance.

We then go to our sqldeveloper tool and add this database connection. We can use ssh tunneling or open port 1521 to the world to make the connection.

The first step that we are told to do is to execute the setup.sql file available via the tutorial. We are not going to execute this program but do everything by hand through sql developer. The purpose of this script is to enable the user sh, set a password, and grant privileges to the user. We can do this from SQL Developer. The code that it recommends using is

connect / as sysdba
set echo on
alter user sh identified by sh account unlock;
grant create tablespace to sh;
grant drop tablespace to sh;

First, we don't want to use such a simple password. We change this and set it to something a little more secure. We select the database instance, in our example prs11gHP, where we are connected as the sys user. We select Other Users..., select the user sh, and edit the entry. When the screen comes up to edit the user, we enable the account, set the password, grant the create tablespace and drop tablespace rights to the user, and apply. This effectively executes the script shown above.

At this point, we have a user that can create and drop tables. We now want to load the create_sales_tbls.sql code from the tutorial.

The create script first drops the existing tables. This might generate an error if the tables do not exist; the error is not significant and won't stop the rest of the script. We then create a non-compressed and a compressed table by selecting from the demo sales table, which exists if you installed the demo data during your install.

drop table sales_nocompress purge
/
drop table sales_compress purge
/
set echo on
set timing on
create table sales_nocompress
as select * from sales
/

create table sales_compress compress for all operations
as select * from sales where 1=0
/

select count(*)
from sales_compress
/
Note that the two create statements produce tables with the same structure, but the second table starts out empty. The creation of the first table takes just over 4 seconds because we pull in all of the sales table data. The second creation is nearly instant because the data is in cache and the where 1=0 clause excludes every row.

When we do the select, the row count is zero because of the where clause. We then do an insert into the compressed table to populate it with the same data as the uncompressed table. This is done by executing

@oltp_insert

set timing off
select count(*) from sales_compress
/
select count(*) from sales_nocompress
/
This executes the oltp_insert.sql code then compares the counts of the two tables to make sure they contain the same number of records. The code that is executed in the insert script is
set timing on
declare

  commit_after integer := 0 ;
  loop_variable integer ;

  cursor c_sales is
  select prod_id
  , cust_id
  , time_id
  , channel_id
  , promo_id
  , quantity_sold
  , amount_sold
  from sales ;

begin

  for r_sales in c_sales
  loop

    if commit_after = 0
    then

      loop_variable := 0 ;

      commit_after := round(dbms_random.value(1,1)) ;

    end if ;

    insert into sales_compress
    (prod_id, cust_id, time_id, channel_id, promo_id, quantity_sold, amount_sold)
    values
    ( r_sales.prod_id
    , r_sales.cust_id
    , r_sales.time_id
    , r_sales.channel_id
    , r_sales.promo_id
    , r_sales.quantity_sold
    , r_sales.amount_sold
    ) ;

    if loop_variable = commit_after
    then
      commit ;
      commit_after := 0 ;
    end if ;

    loop_variable := loop_variable + 1 ;

  end loop ;

end ;
/
We are not going to walk through this code line by line, but it inserts the same number of entries as the uncompressed table contains. The values are pulled from the sales table and inserted row by row into the compressed table, with commits sprinkled in along the way. Note that the rows get OLTP compression, since we created the table with the compress for all operations clause.

We can execute the examine_storage.sql script to see that the compressed table takes up about half the storage of the uncompressed table. We can also confirm that the table is enabled for OLTP compression by querying the table's properties with a select statement.
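The examine_storage.sql script is not reproduced here; a hedged equivalent, assuming the sh schema and the default segment names, would query the data dictionary along these lines:

```sql
-- Compare segment sizes of the two tables (run as sh).
select segment_name, bytes/1024/1024 as mb
from   user_segments
where  segment_name in ('SALES_NOCOMPRESS', 'SALES_COMPRESS');

-- Confirm the compression setting on each table.
select table_name, compression, compress_for
from   user_tables
where  table_name in ('SALES_NOCOMPRESS', 'SALES_COMPRESS');
```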

We can also look at the difference in query time by reading all of the data from the compressed and uncompressed tables. Note that the full scan of the compressed table takes about 3/4 of the time the uncompressed scan takes, because fewer blocks have to be read from storage.
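A simple way to reproduce this comparison (a sketch; exact timings will vary with cache state and instance shape) is:

```sql
set timing on
-- Full scan of the uncompressed table.
select count(*) from sales_nocompress;
-- Full scan of the compressed table; fewer blocks to read.
select count(*) from sales_compress;
set timing off
```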

In summary, we were able to create an 11g database, create compressed and non-compressed copies of a table, and compare their relative sizes and the timing of retrieving data from each. We can experiment with this data and grow the table size to see if we still get the same improvements as the table gets larger. We can try different compression algorithms to see if that affects performance or compression ratios. We did all of this in a Database as a Service public cloud instance; the only tools we needed were a SQL Developer connection and an Oracle Cloud account. We could have done this with Amazon RDS as well as EC2 and Microsoft Azure Compute. The key difference is that this experiment took about two hours to execute, and we consumed about $15 to learn and play with compression on 11g (or 12c), given that a low memory option for the database is only $6.720 per OCPU per hour. With the pay as you go option we burn less than $15 and then turn off the service. We could have uploaded our own data sets into the database instance and played with the compression advisor in a sandbox without affecting our production environment. If we were using Database Backup as a Service, we could have restored a single table from our backup and played with the compression variations and compression advisor.
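The rough cost arithmetic behind the $15 figure can be sketched as follows (the $6.720 per OCPU per hour rate is the one quoted above; the single OCPU and two-hour duration are assumptions for this example):

```python
# Hedged cost estimate for the experiment described above.
rate_per_ocpu_hour = 6.720  # low-memory database shape, USD per OCPU per hour
ocpus = 1                   # assumed: a single-OCPU instance
hours = 2                   # approximate duration of the experiment

cost = rate_per_ocpu_hour * ocpus * hours
print(f"Estimated cost: ${cost:.2f}")  # comes in under the $15 budget
```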
