Tuesday Jan 12, 2016

Monitoring Exadata Storage Server Grids with Enterprise Manager


Oracle Exadata provides customers with a tightly integrated hardware and software stack. The components work together, making the solution extremely performant. One part of the stack that is often poorly understood is the I/O subsystem. Understanding its components and setting up appropriate monitoring are key tasks for Exadata administrators. The whitepaper "Exadata Health and Resource Utilization Monitoring - Exadata Storage Server KPIs" discusses the components of the Exadata I/O stack, identifies key performance indicators, and uses Oracle's world-class monitoring solution, Enterprise Manager, to provide a holistic approach to making sure the I/O subsystem is functioning properly. The paper can be downloaded here or via MOS Note 2094648.1.

Thursday Nov 06, 2014

Tools For Generating Consistent Loads

It's finally ready. The new database machine you've spent months planning and architecting. All those shiny new components perfectly aligned for extreme performance. Will it give you the results you expect? How can you baseline and compare your new and existing systems? Pointing your application at your new machine may take some time to set up and, depending on the application's behavior, may not stress all the hardware components as you might like. You need a set of easy-to-configure scripts for load testing, and you need tools and procedures to compare old systems to new.

This is the first of three blog posts to help with that effort. In this post, I'll go over some tools you can use to generate various loads. In the next two posts in the series, I'll talk about using Enterprise Manager to evaluate and baseline loads, and strategies for comparing systems using consistent loads.

Test Suite

My current test suite includes two methods for generating loads: one leverages Swingbench, a well-known and popular load-generation tool, and the other is a solution I cooked up myself. Both sets of scripts can be altered to tailor their load characteristics. I've also included a variable-load wrapper script for each, which you can use to adjust the load over time. For example, a load test could run for a total of 5 hours, and within that 5-hour window the load could fluctuate every 30 minutes from heavy to light. The custom scripts are also flexible enough to support altering their behavior if you have a specific set of SQL/PLSQL commands you would like to run.

For this article, my database is running on an Exadata X2-2 quarter rack.

Using Swingbench

Swingbench is a great tool for quickly generating loads on an Oracle database. It's easy to set up and has many configurable options. Although Swingbench has a nice GUI for creating your test schemas and running your load, I really like the command-line interface. With the CLI you can script your interaction with Swingbench and nohup loads on remote hosts, so a load can run for hours or days without you needing to stay logged in.


If you don't have it already, download a copy of Swingbench and unzip the files on your host machine. You can run Swingbench from your database host or a remote client. If you run it on the database host itself, take its overhead into account during load measurement.

There are a few different types of schemas you can create with Swingbench, and each type has an associated XML wizard file in the bin directory to help with creating that schema. I tend to use the Order Entry (OE) schema the most, as its behavior is more representative of an OLTP system, so we will use the oewizard.xml file for this example. Open the XML file in your favorite editor and update the connection information for the system user that will create the schema, then run oewizard on the file like this...

oewizard -c oewizard.xml -cl -cs //<your_host_or_vip>/<service> -u <test_user_name> -p <test_user_pw> -ts <tablespace_name> -create -df <asm_disk_group> -part -scale 4 -debug

You can use -scale to adjust the size of your schema, which also increases the time it takes to build. A scale of 4 gives me about a 10G schema.
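If you want to confirm the resulting size, a quick query against dba_segments works; a sketch, where <test_user_name> is whatever you passed to oewizard's -u flag and the query runs as a privileged user such as SYSTEM:

```shell
# Rough check of the test schema's size after the build.
# Placeholders match the oewizard command above; dba_segments
# requires a privileged account such as SYSTEM.
sqlplus -s system@//<your_host_or_vip>/<service> <<'EOF'
SELECT ROUND(SUM(bytes)/1024/1024/1024, 1) AS size_gb
FROM   dba_segments
WHERE  owner = UPPER('<test_user_name>');
EOF
```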


When your schema is ready, edit the supplied swingconfig.xml file with your connection info and use charbench to verify your schema. 

charbench -c swingconfig.xml

With our schema ready, we can now define our load, also in the swingconfig.xml file. There are a number of parameters you can adjust to define your load. Here are the ones I find effective.

  • <NumberOfUsers>40</NumberOfUsers>
  • <MinDelay>100</MinDelay>
  • <MaxDelay>1000</MaxDelay>
  • <LogonDelay>150</LogonDelay>
  • <WaitTillAllLogon>false</WaitTillAllLogon>

MinDelay and MaxDelay specify the wait time between transactions in milliseconds. A LogonDelay helps avoid connection storms (unless that's what you want to test), and I like setting WaitTillAllLogon to false so my load starts right away and there is a nice ramp-up over time. If I want to push the system hard, I set Min/MaxDelay low and increase the number of users.
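With the parameters set, the load itself can be launched with charbench just like the verification run above. A sketch of running it in the background with nohup so it survives logout (the log-file name is arbitrary):

```shell
# Launch the Swingbench load in the background so it keeps running
# after you log out; capture all output for later review.
cd swingbench/bin
nohup ./charbench -c swingconfig.xml > load_$(date +%Y%m%d_%H%M).log 2>&1 &

# Follow progress
tail -f load_*.log
```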

Further down the swingconfig.xml file you will find descriptions of the actual transactions that will be executed. Each transaction type can be turned on/off, and its weighted value relative to the other transactions can be adjusted. This section is where you will do most of your tweaking to get the load profile you want.


Here's a Top Activity graph in Enterprise Manager showing two consecutive tests. The first test had 300 users with a Min/MaxDelay of 15/20. For the second, I decreased the Min/MaxDelay to 10/15 to increase the load, which you can see below.

Here's an example of a heavily overloaded system in which the application doesn't scale. I've set up Swingbench with 800 users connecting every 2 seconds for a slow buildup, a Min/MaxDelay of 5/15, and only the "Order Products" transactions. These transactions perform single-row inserts with no batching. Around 11:30am there are ~500 sessions and the system starts to overload. CPU has become saturated, and background processes like the database writers start to slow down, causing excessive Concurrency and Configuration waits in the buffer cache. The SGA for this test was 10G.

overload example

Variable Loads With Swingbench

To generate variable Swingbench loads over time, I've created a small Perl wrapper script, variable_load.pl, which you can use to define how long your load should run and how it varies over that time. You adjust the load by defining how many users will be connected. Here's a snippet of the script describing each parameter.

### how long you want the test to run in hours:minutes
$runtime = "00:30";

### your swingbench config file
$conf_file = 'swingconfig.xml';

### Adjust your variable loads here
### RunTime = how long this load will run in hours:minutes
### UserCount = how many user connections
### SleepTime = how many seconds to sleep before running the load, if needed
###              RunTime  UserCount  SleepTime
@swing_list = ( ["00:02", 400,       120],
                ["00:05", 200,         0],
                ["00:05", 100,         0] ); 
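Once the schedule is defined, the wrapper can be launched unattended the same way as charbench (the log-file name is arbitrary):

```shell
# Run the variable-load wrapper in the background; it drives the load
# through each RunTime/UserCount stage defined in @swing_list.
nohup ./variable_load.pl > variable_load.log 2>&1 &
tail -f variable_load.log
```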

With these settings here's what our load profile looks like.

variable load swingbench

Custom Load Generation

There have been times during my own performance testing in which I needed to generate a very specific type of load. Most recently, I needed to generate a heavy large-block IO load, so I put together these scripts in response to that need. I tried to keep them easy to set up, run, and alter if necessary. The load uses a single schema and creates a test table for each session that will be connected, so the load needs to be initialized based on the maximum number of sessions expected for testing.

Setup and Execution

  1. Download the package to your host and unzip/tar in an empty directory.
  2. Edit the load_env.sh file to setup variables for your test. This is where you will define the maximum number of test tables you will need.
  3. Run the init_load.sh script to setup your user test schema and test tables. You will be prompted for the SYSTEM user password.
  4. Run the start_load.sh script to begin your test. This script requires two parameters: the low and high test-table numbers to use, which also determine the number of sessions. This allows running a small load first and then ramping up with additional loads as needed. Examples...
    • start_load.sh 1 10 : Will run 10 sessions, using test tables 1 through 10.
    • start_load.sh 16 20 : Will run 5 sessions, using test tables 16 through 20.
    • start_load.sh 1 1 : Will run 1 session.
  5. Running stop_load.sh will kill all sessions and thus stop the load. 
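Putting the steps above together, a typical run might look like this (the archive name and table ranges are illustrative):

```shell
# 1. Unpack the scripts into an empty directory
mkdir loadtest && cd loadtest
unzip ~/load_scripts.zip        # hypothetical archive name

# 2-3. Set your test variables, then build the schema and test tables
vi load_env.sh                  # define max table count, connection info
./init_load.sh                  # prompts for the SYSTEM password

# 4. Ramp the load up in stages
./start_load.sh 1 10            # 10 sessions on tables 1-10
./start_load.sh 11 20           # 10 more sessions on tables 11-20

# 5. Tear it all down
./stop_load.sh
```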

Here's what a load of 20 users looks like. Lots of large block IO!

io load example

Custom variable loads can also be run using the variable_load.pl script found in the package. It has the same parameters to adjust as in the Swingbench script. Here's an example of a variable load that ramps up, overloads the system, then drops back down again.

variable load io

As the IO gets heavy we start seeing more contention in the log buffer. 

variable io waits


It's possible to design your own custom loads with these scripts; for example, you may need to execute a particular PL/SQL package, or perhaps test how well a SQL statement scales against a large partitioned table. This can be achieved by editing the init_object.sh and load_object.sh files.

init_object.sh : Edit this script to create or initialize any objects needed for your test. It is executed multiple times, depending on how many sessions you plan to run concurrently. If you don't have session-specific setup, you can leave an empty code block.

load_object.sh : This is the code executed for each session you define. If you have PL/SQL you want to test, this is where you would put it.
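As a rough sketch of how load_object.sh might be customized (the variable names, table naming, and the positional parameter are assumptions; match them to whatever load_env.sh actually defines):

```shell
#!/bin/bash
# Hypothetical customized load_object.sh: each session loops forever,
# running the PL/SQL you want to test against its own test table.
# $1 = test-table number passed in by start_load.sh (assumed).
TABLE_NUM=$1

while true; do
  sqlplus -s ${TEST_USER}/${TEST_PASS}@${DB_SERVICE} <<EOF
DECLARE
  n NUMBER;
BEGIN
  -- Replace this block with the SQL/PLSQL each session should run
  SELECT /*+ FULL(t) */ COUNT(*) INTO n FROM test_tab_${TABLE_NUM} t;
END;
/
EOF
done
```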

As an example, for this test I created database links for each instance and altered the script to select from our test table over the database links, creating a heavy network load. I've included this example script, load_object_network.sh, in the package zip file as well.

network load


With a set of tools for defining consistent, predictable loads, we are now ready to baseline our systems. Next in the series, I'll go over the tools available in Enterprise Manager that will help in that effort.



