Monday May 12, 2008

Optimizing Oracle Schema Analyze with CMT based servers

A common observation regarding Niagara based servers is that system maintenance and database administration tasks can run slower than on previous generations of Sun servers. While single-threaded performance may be lower, these maintenance tasks can often be parallelized, especially with a database engine as mature as Oracle. Take, for instance, the task of gathering schema statistics. Oracle offers many options for gathering schema statistics, and there are a few ways to reduce the overall gather time:
  • Increased Parallelism
  • Reduced Sample Size
  • Concurrency
Oracle has published many Metalink articles discussing sample size and its various virtues. There have also been many volumes written on optimizing the Oracle cost based optimizer (CBO). Jonathan Lewis, a member of the famous OakTable Network, has written books and multiple white papers on the topic; you can read these for insight into the Oracle CBO. While a reasonable sample size or the use of "DBMS_STATS.AUTO_SAMPLE_SIZE" can seriously reduce gather statistics times, I will leave it up to you to choose the sample size that produces the best plans.

Results

The following graph shows the total run time in seconds of a "GATHER_SCHEMA_STATS" operation at various levels of parallelism and sample size on a simple schema of 130GB. All tests were run on a Maramba T5240 with a 6140 array and two channels.

[Graph: GATHER_SCHEMA_STATS run time vs. parallelism and sample_size]


Note that if higher levels of sampling are required, parallelism can help to significantly reduce the overall runtime of the GATHER_SCHEMA_STATS operation. Of course a smaller sample size can be employed as well.

GATHER_SCHEMA_STATS options

SQL> connect / as sysdba

-- Example with 10 percent with parallel degree 32
--
SQL> EXECUTE SYS.DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME=>'GLENNF', -
     ESTIMATE_PERCENT=>10, -
     DEGREE=>32, -
     CASCADE=>TRUE);

-- Example with AUTO_SAMPLE_SIZE and parallel degree 32
--
SQL> EXECUTE SYS.DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME=>'GLENNF', -
     ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE, -
     DEGREE=>32, -
     CASCADE=>TRUE);

Note that "parallel_max_servers" must be set to at least the level of parallelism desired for the GATHER_SCHEMA_STATS operation. I typically set it higher to allow other parallel operations to get servers.

        SQL> alter system set parallel_max_servers = 128;

Finally, you can easily gather statistics on multiple schemas concurrently and in parallel by issuing GATHER_SCHEMA_STATS from multiple sessions, ensuring that the level of parallelism is set high enough to accommodate them; see the sketch below.
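
As a sketch of the concurrent approach (schema names and degree illustrative, not from the runs above), two collections could be launched from the shell in the background:

#!/bin/ksh
# Hypothetical sketch: gather statistics on two schemas concurrently,
# each with parallel degree 32. parallel_max_servers must cover both.
for owner in GLENNF HAYDEN
do
  sqlplus -s "/ as sysdba" <<EOF &
EXECUTE SYS.DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME=>'${owner}', ESTIMATE_PERCENT=>10, DEGREE=>32, CASCADE=>TRUE);
EXIT
EOF
done
wait    # block until both sessions finish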

Configuration

  • T5240 - "Maramba" CMT based server
    • 2 x UltraSPARC T2 Plus @ 1.4GHz
    • 128GB memory
    • 2 x 1GB Fibre Channel HBAs
    • 1 x 6140 storage array with 1 LUN per channel
  • Software
    • Solaris 10 Update 5
    • Oracle 10.2.0.3
    • CoolTools
  • Schema
      OWNER	 TABLE_NAME	NUM_ROWS       MB
      -------- ------------ ---------- --------
      GLENNF	 B2	       239826150    38560
      GLENNF	 B1	       237390000    32110
      GLENNF	 S2		 4706245      750
      GLENNF	 S4		 4700995      750
      GLENNF	 S5		 4699955      750
      GLENNF	 S7		 4698450      750
      GLENNF	 S8		 4706435      750
      GLENNF	 S9		 4707445      750
      GLENNF	 S10		 4700905      750
      GLENNF	 S3		 4706375      750
      GLENNF	 GTEST		 4706170      750
      
      GLENNF	 S6		 4700980      750
      GLENNF	 S1		 4705905      710
      HAYDEN	 HTEST		 4723031      750
      
      14 rows selected.
      
      OWNER	 INDEX_NAME	NUM_ROWS       MB
      -------- ------------ ---------- --------
      GLENNF	 B1_I2	       244841720    11623
      GLENNF	 B2_I2	       239784800    11451
      GLENNF	 B1_I1	       248169793     8926
      GLENNF	 B2_I1	       241690170     8589
      GLENNF	 S6_I2		 4790380      229
      GLENNF	 S3_I2		 4760090      227
      GLENNF	 S2_I2		 4693120      226
      GLENNF	 S5_I2		 4688230      224
      GLENNF	 S8_I2		 4665695      223
      GLENNF	 S4_I2		 4503180      216
      GLENNF	 S1_I2		 4524730      216
      
      GLENNF	 S9_I2		 4389080      211
      GLENNF	 S10_I2 	 4364885      209
      GLENNF	 S7_I2		 4357240      208
      GLENNF	 S2_I1		 4972635      177
      GLENNF	 S3_I1		 4849660      174
      GLENNF	 S6_I1		 4830895      174
      GLENNF	 S9_I1		 4775830      171
      GLENNF	 S7_I1		 4772975      169
      GLENNF	 S5_I1		 4648410      168
      GLENNF	 GTEST_C1	 4686790      167
      GLENNF	 S1_I1		 4661605      166
      
      GLENNF	 S4_I1		 4626965      166
      GLENNF	 S10_I1 	 4605100      164
      GLENNF	 S8_I1		 4590735      163
      
      25 rows selected.
      

Monday Apr 21, 2008

Throughput computing series: Utilizing CMT with Oracle

Since we just recently announced multi-chip CMT based servers that provide up to 128 threads in a 1U or 2U box, it seems fitting to pick up this thread on throughput computing.

The key to fully appreciating the CMT architecture with Oracle is to exploit the available threads. As I have spoken about earlier in the "throughput computing series", this can be done through "concurrency", "parallelism", or both. Oracle, being the mature product that it is, can achieve high levels of parallelism as well as concurrency.

Concurrent processing with Oracle

For examples of concurrent processing with Oracle, look at recent results on the Oracle E-Business payroll benchmark. These show that by using concurrent processes to break up the batch, you can increase batch throughput. Going from 4 to 64 processes decreased batch time from 31.53 minutes to 4.63 minutes and increased throughput by 6.8x!

With Oracle's E-Business Suite of software, you can increase the number of "concurrent manager" processes to more fully utilize the available threads on the system. Each ISV has different ways of controlling batch distribution and concurrency, so you will have to check with your various software vendors to find all the ways to increase concurrency.

Parallelism in Oracle

People often associate parallelism in Oracle with parallel query. In most cases where CMT is involved, I see a lack of understanding of how to achieve parallelism with more basic administrative functions. Oracle excels in providing parallelism for important administrative tasks.

  • Schema analyze
  • Index build/rebuild
  • Parallel loader
  • Parallel export/import with datapump

    While parallelism exists for these administrative tasks, some configuration is required. I will examine the various ways to achieve optimal throughput on these tasks with CMT based systems; a small example follows.
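
    For instance, an index rebuild accepts a parallel clause directly (index name and degree illustrative):

      SQL> -- rebuild an existing index using 32 parallel query servers
      SQL> ALTER INDEX glennf.b1_i1 REBUILD PARALLEL 32 NOLOGGING;
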
    Monday Mar 10, 2008

    Oracle db_block_checksum performance bug

    We recently ran across a pretty strange performance bug in the checksum function of Oracle. This bug (6814520) causes excessive CPU to be used by the checksum routine. The checksum feature is enabled by the db_block_checksum=TRUE parameter; as of Oracle 10gR2, "TRUE" is the default setting. The magnitude of the CPU overhead depends on the SPARC processor architecture.

    Chip     %overhead
    ----     ---------
    SPARC64       250%  
    USIV           45%  
    ------------------
    w/patch         8% 
    

    Oracle released a patch via Metalink to address this situation. The patch is for 10.2.0.3 installations; the fix will be included in 11.1.0.7, 10.2.0.4, and 10.2.0.5.

    If you are unsure whether or not you are hitting this bug, you can easily alter this parameter on the fly:
      SQL> alter system set db_block_checksum=FALSE;
    Warning: this disables the checksum feature, and blocks written while it is set to FALSE will not contain checksum information.

    Regardless of whether you are currently hitting the bug or not, the recommendation is:
      INSTALL "6814520" WHEN USING ORACLE 10.2.0.3 !!!

    Thursday Feb 14, 2008

    Ensuring directIO with Oracle 10.2.0.3 on Solaris UFS filesystems

    I usually dislike blog entries that have nothing to say other than repackaging bug descriptions and offering them up as knowledge, but in this case I have made an exception, since the full impact of this bug is not described elsewhere.

    There is a fairly nasty Oracle bug in 10.2.0.3 that prevents the use of directIO on Solaris. Metalink note "406472.1" describes the failure modes but fails to mention the performance impact if you use "filesystemio_options=setall" without the mandatory patch "5752399" in place.
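
    A quick way to check whether the patch is already in place is to search the patch inventory (a sketch, assuming OPatch lives in its usual $ORACLE_HOME/OPatch location):

      $ $ORACLE_HOME/OPatch/opatch lsinventory | grep 5752399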

    This was particularly troubling to me, since we have been recommending the use of "setall" for years to ensure all the proper filesystem options are set for optimal performance. I just finished working a customer situation where this patch was not installed and their critical batch run-times were nearly 4x as long... not a pretty situation. OK, so bottom line:
      MAKE SURE YOU INSTALL "5752399" WHEN USING ORACLE 10.2.0.3 !!!

    Wednesday Jan 16, 2008

    Throughput computing series: System Concurrency and Parallelism

    Most environments include some open source SW as part of the application stack. Depending on the packages, this can take a fair amount of time to configure and compile. To speed the install process, parallelism can easily be used to take advantage of the throughput of CMT servers.

    Let us consider the following five open source packages:
    • httpd-2.2.6
    • mysql-5.1.22-rc
    • perl-5.10.0
    • postgresql-8.2.4
    • ruby-1.8.6

    The following experiments time the installation of these packages in serial, parallel, and concurrent fashion.

    Parallel builds

    After the "configure" phase is complete, these packages are all compiled using gmake. This is where parallelism within each job can be used to speed the install process. By using the "gmake -j" option, the level of parallelism can specified for each of the packages. This can dramatically improve the overall compile time as seen below.

    Compile times *without* concurrency
      • Jobs were run in a serial fashion, but with parallelism within each job.
      • 79% reduction in compile time at 32 threads/job.

    Concurrency and Parallelism

    The build processes for the various packages cannot each be parallelized perfectly. In fact, the best overall gain for any of the packages is 6x with 32 threads. This is where concurrency comes into play: if we start all the compiles at the same time and use parallelism as well, the overall build time drops further, as in the sketch below.
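
    A minimal ksh sketch of this approach (directory and log names illustrative):

      #!/bin/ksh
      # start all five builds at once; each build itself runs 32-way parallel
      for pkg in httpd-2.2.6 mysql-5.1.22-rc perl-5.10.0 postgresql-8.2.4 ruby-1.8.6
      do
        ( cd ${pkg} && gmake -j 32 > ../${pkg}.build.log 2>&1 ) &
      done
      wait    # return when every background build has completed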

    Compile times with concurrency and parallelism
      • All 5 jobs were run concurrently with 1 and 32 threads/job.
      • 88% overall reduction in compile time from serial to parallel with concurrency.
      • 42% reduction in compile time over parallel jobs run serially.

    Load it up!

    Hopefully, this helps describe how to achieve better system throughput through parallelism and concurrency. Sun's CMT servers are multi-threaded machines capable of a high level of throughput. Whether you are building packages from source or installing pre-built packages, you have to load up the machine to see throughput.

    Monday Jan 14, 2008

    Throughput computing series: Parallel commands (pbzip2)

    In this installment of the throughput computing series, I will explore how to get parallelism from the system point of view. The system administrator who first begins to configure the system will start forming impressions from the moment the shrink wrap comes off the server. First impressions and potential parallel options will be explored in this entry.

    Off with the shrink-wrap... on with the install

    Unfortunately, most installation processes involve a fair number of single-threaded procedures. As mentioned before, the CMT processor is designed to optimize the overall throughput of a server - often to the detriment of single-threaded processes. There are several schools of thought on this. The first is: why bother - the install process happens but once and it really doesn't matter. That is true for most typical environments. But the current trend toward grid computing and virtualization makes "time to provision" a critical factor. To help speed provisioning, there are some things that can be done using parallelized commands and concurrency.

    pbzip2 to the rescue

    A very common, time-consuming part of provisioning is the packing/unpacking of SW packages. Commonly, gzip or bzip2 is used to unpack data and packages, but these are not parallel programs. Fortunately, a parallel version of bzip2 is available: "pbzip2" allows you to specify the level of parallelism in order to speed the compression/decompression process.

    I spent a little time experimenting with the pbzip2 program after repeated interactions that always seemed to come back to "gzip" performance. I decided to do some quick benchmarks with pbzip2 using both the T2000 (8 cores @ 1.4GHz) and the v20z (2 AMD cores @ 2.2GHz).

    pbzip2 benchmark

    The setup used a 135MB text file. This file was the trade_history.txt created using the egen program distributed by the TPC council for the TPC-E benchmark. The file was compressed using the following simple test script:
      #!/bin/ksh
      
      for i in 1 2 4 8 16 32
      do
        print "pbzip2 compress: ${i} threads\n"
        timex pbzip2 -p${i} small.txt
        print "pbzip2 decompress: ${i} threads\n"
        timex pbzip2 -d -p${i} small.txt.bz2
      done
      
      
    [Graphs: T2000 and v20z pbzip2 throughput]


    At lower thread counts, the v20z with two AMD cores does better. This is expected, since the AMD x64 processor is optimized for single-threaded performance. But as you crank up the thread count, the T2000 starts to really shine. This demonstrates my main point: to push massive throughput within a single application, you need lots of threads and parallelism.

      The next entry will explore how concurrency and parallelism can help improve build times.

    Tuesday Jan 08, 2008

    Throughput computing series: Defining Throughput Computing

    This is the first installment in a series of entries that discuss different aspects of throughput computing. This series aims to improve the understanding of how SPARC CMT servers can be utilized to increase business throughput. Let's start with a definition.

    What is throughput computing?

    Oxford American defines "throughput" as:
      "The amount of materials or items passing through a system or process"
    In computer terms, throughput computing is the amount of "work" that can be done in a given period of time. Things like "orders per second", "paychecks per hour", "queries per second", "webpages per minute",... are all metrics of throughput. These measures help define the amount of work a system can complete in a given period of time.

    Misguided throughput metrics

    • Latency or Response time is not a throughput metric.
    • CPU % is not a throughput metric.
    • IO wait% is definitely not a throughput metric... or anything other than a measure of idle time :)
    • The "Load average" of a system is not a measure of throughput. OK... you get the idea.

    Job level parallelism

    Job level parallelism is about taking a single job and breaking it into multiple pieces. Say you have 10,000 letters to put stamps on. If it takes 3 seconds per letter, you would need 30,000 seconds or more than 8 hours to complete the task. Now consider you are a teacher and you bring the letters to class. There are 20 students in the class so each student will place stamps on 500 letters. With only 500 letters to complete per student, the job can be done in only 1500 seconds or 25 minutes.

    In terms of throughput, one person processes one letter every 3 seconds, or 60/3 = 20 letters per minute... and a class of 20 students can process 20 x 20 = 400 letters per minute.

    Concurrency

    Concurrency comes from running multiple jobs or applications together on a system. A job may be single-threaded or may use multiple threads of execution as discussed above. These jobs need not be related or even from the same application. To further increase concurrency, virtualization is often used to run multiple concurrent OS images on the same machine in order to take advantage of modern multi-threaded systems.

    Putting it all together with Chip Multi-Threading

    Denis Sheahan sums up Sun's throughput computing initiative in his paper on CMT as:
      "Sun’s Throughput Computing initiative represents a new paradigm in system design, with a focus toward maximizing the overall throughput of key commercial workloads rather than the speed of a single thread of execution. Chip multi-threading (CMT) processor technology is key to this approach, providing a new thread-rich environment that drives application throughput and processor resource utilization while effectively masking memory access latencies."
    The salient point is that you must have an application with multiple threads of execution in order to take advantage of CMT. Multiple threads of execution could come from a single job that has been parallelized or from multiple jobs of different types running concurrently.


    Monday Jan 07, 2008

    Throughput computing series... getting the most out of your SPARC CMT server.

    I was thinking about developing a CMT throughput benchmark, but it occurred to me that there are many *good* examples of throughput already out there with the benchmarks we publish... just look at the bmseer postings on the recent T2 results and the long line of performance records on the T2000.

    The biggest disconnect with CMT servers is a misunderstanding of throughput and multi-threaded applications. I made a posting last year that touched on some initial impressions, but I thought it would be a good idea to dig in further.

    This entry is to kick off a series of postings that explore different aspects of throughput computing in a CMT environment. The rough outline is as follows:
      Overview
      • Definition of Throughput computing, multi-threading, and concurrency.
      Explore system parallelism
      • Unix commands and parallel options
      • Concurrent builds/compiles.
      • Configuring the system for parallelism
      Configuring applications for parallelism
      • Concurrency vs multi-threading
      • Single-Threaded jobs
      Database parallelism with Oracle
      • Parallel loader and datapump
      • Index build parallelism
      • Concurrent processing in Oracle
      • Configuring Oracle for CMT servers

    Friday Jan 04, 2008

    Organizational stove-pipes complicate database storage configurations.

    IT organizations at large companies are complex entities where people are partitioned by function. There are SAN people, system administrators, database administrators, and developers. While it is good to specialize by function, there seems to be a mismatch when each organization optimizes its internal operations. Let me walk you through a common situation where the SAN administrators and system administrators each try to optimize performance without considering the overall picture.

    The setup

    • The DBA requests storage for a new application, expecting filesystem(s) or RAW luns to be presented for ASM use.
    • The systems administrators request luns from the storage administrators to fulfill the request.
    • The storage administrators supply the luns.

    Systems Administrators

    Their job is to make sure the performance of the supplied luns maps cleanly to the database environment. For years, system administrators have been using SW volume management/RAID to improve performance. So, naturally, they request a large number of luns (say 128) from the storage administrators so they can stripe. Past experimentation has shown that a 32k stripe width was best.

    Storage Administrators

    The storage people take care of large Hitachi or EMC boxes. Their job is to supply luns to applications and make sure their "san-box" performs well. They gladly supply the luns to the systems administrators, but to ensure performance of the SAN box, they must prevent the fiber from "resets". The maximum number of outstanding requests on a fiber is 256. So, no problem: they have the system administrators adjust the "sd_max_throttle" parameter so the OS will queue events and not cause resets. The rule of thumb is to set it to:
    
           sd_max_throttle = 256 / #luns = 256/128 = 2

    Putting it all together

    So, now the system administrator takes these 128 luns and creates four file systems by striping 32 luns together, each with a 32k stripe width, using SVM. Since this is a SAN, there are multiple connections from the host to the SAN - in this case, 4 connections. MPxIO is used to round-robin IO requests across the 4 connections to balance load and allow for fail-over in case of an HBA failure.

    This environment is turned over to the DBA who finds the performance is less than stellar.

    Analysis

    The DBA is running 10 jobs that result in queries which full-scan 10 tables. These queries request 1MB per IO. A stripe width of 32k breaks each 1MB IO into 32 equal pieces... and since there are 10 concurrent jobs, that equates to 32 x 10 = 320 concurrent IO requests. Finally, these 320 requests are routed down one of the four channels, or 320/4 = 80 requests per channel. Are you beginning to see the problem?

    Given the "sd_max_throttle" setting of 2, the OS will allow 2 outstanding requests at a time. If you look at the array, the performance will look great... so it must be an OS problem :)

    The Fix

    This issue was solved in multiple phases.
    • Quick fix: simply increase "sd_max_throttle" to >= 80. This prevents queuing at the driver level; a sketch follows this list.
    • Increased stripe width: use an SVM stripe width of 1MB or greater. This reduces the number of IOs broken down by the SW volume manager.
    • Optimal solution: eliminate SW striping altogether and build larger luns within the SAN box.
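
    As a sketch, the quick fix is a single /etc/system tunable (value illustrative; a reboot is required, and on systems where the fibre channel disks use the ssd driver the parameter is ssd:ssd_max_throttle instead):

      * /etc/system entry - allow up to 128 outstanding requests per lun
      set sd:sd_max_throttle = 128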

    Summary

    Storage issues often involve multiple layers of HW, SW, people, and organizations. To architect a well-thought-out solution, all aspects must be taken into consideration. Get everyone talking and sharing information so that your organizational stove-pipes don't cripple application performance.

    Tuesday Oct 16, 2007

    Sun server reboot best practices

    I had a recent interaction with a customer regarding how often to "reboot" Sun servers. There is no official policy at Sun regarding rebooting - mainly because we sell enterprise class machines. The intent of an enterprise class machine is to stay up at all costs. Our enterprise class servers have the ability to add and remove memory, IO, and CPU without a reboot. I know it is common practice to reboot Windows-based machines on a regular schedule, but this simply does NOT apply to enterprise class Sun servers.

    For the sake of discussion, I will post what I think our server reboot policy should entail.

    Proposed Sun Server Reboot Policy:

    "Only reboot Sun servers when installing SW or HW that requires a reboot. It is not necessary to reboot servers on a regular schedule like Windows servers."

    Friday Sep 28, 2007

    PGstatspack - Getting at Postgres performance data.

    I thought I posted this a while ago... Maybe a blog bug?
    =====
    I have been working with Oracle for the past 18 years, mostly in the performance arena. Last year, I began working with Postgres as well. Being a performance guy, I naturally looked at how to get the performance data necessary to tune the database for maximum performance. To my surprise, little existed in the way of performance tools for Postgres. I was looking for the "Statspack" or "AWR" report for Postgres. I found several one-off tools, but nothing that provided a "Load Profile" like Statspack.

    PG_STAT* tables... V$ tables in disguise

    Postgres has a series of tables that are essentially counters, like the V$ tables. They record counts of things like:
    • committed transactions
    • rolled back transactions
    • tuples accessed
    • tuples inserted
    • blocks read
    • block hits
    • tuples accessed by table and index
    • physical reads by table and index

    Creating a prototype

    I fashioned the prototype after Oracle's Statspack. I created a simple schema where I essentially duplicated the PG_STAT* tables and added a key for the snapshot. There is also a management table, "pgstatspack_snap", which stores the snapid, timestamp, and a short description.
    To keep with the Statspack-like theme, a simple PL/pgSQL procedure was created to take snapshots:
        SELECT pgstatspack_snap('My test run');
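
    As a sketch, the management table described above might look like this (column names assumed from the description, not taken from the actual package):

      -- hypothetical sketch of the snapshot management table
      CREATE TABLE pgstatspack_snap (
        snapid       serial PRIMARY KEY,
        snap_time    timestamp DEFAULT now(),
        description  text
      );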
      

    Creating pgstatspack reports

    Now *all* you have to do is create the reports. I have created a simple report that gets at the heart of what is encapsulated in the "Load Profile" section of Statspack. Additionally, I have profiled some of the table objects in terms of access, IO, etc. The report essentially does a diff of the counters between two snap intervals, and time data is applied to calculate the per-second rates.
    This is meant to be a launch pad for experimentation. Hopefully, you will find it interesting. The prototype package and report can be downloaded here: pgstatspack.tar.gz
    
    $ rpt.sh 1 2 
    
    DATABASE THROUGHPUT 
    ==============================================================
     database  |  tps   | hitrate | lio_ps  |  rd_ps  | rows_ps  | ins_ps | upd_ps | del_ps 
    -----------+--------+---------+---------+---------+----------+--------+--------+--------
     igen      | 169.55 |   94.00 | 3909.70 |  211.15 | 23543.05 |  50.87 |  46.74 |   0.00 
     tpce      |   0.04 |    0.00 | 2310.97 | 2307.90 |     0.65 |   0.01 |   0.00 |   0.00 
     postgres  |   0.03 |   99.00 |    1.86 |    0.00 |     0.44 |   0.00 |   0.00 |   0.00 
     template1 |   0.00 |    0.00 |    0.00 |    0.00 |     0.00 |   0.00 |   0.00 |   0.00 
     template0 |   0.00 |    0.00 |    0.00 |    0.00 |     0.00 |   0.00 |   0.00 |   0.00 
    (5 rows)
    
    MOST ACCESSED TABLES by pct of tuples: igen database
    ==============================================================
        table     | tuples_pct | tab_hitpct | idx_hitpct | tab_read | tab_hit | idx_read | idx_hit 
    --------------+------------+------------+------------+----------+---------+----------+---------
     order_125    |         45 |         91 |         77 |    67566 |  698578 |    58050 |  202950
     product_125  |         42 |         99 |         99 |       82 |  120060 |       30 |  127345
     industry_125 |         10 |         99 |          0 |        1 |   22409 |        0 |       0
     customer_125 |          1 |         94 |         99 |    34978 |  657096 |     6858 | 1032477
    
    
    Note: This prototype is built on top of the 8.3 version of Postgres. Some modification would be required to use it on other versions of Postgres.

    Thursday Aug 16, 2007

    Getting past GO with Sparc CMT

    The SPARC T1/T2 chip is a lean, mean throughput machine. Load on DB users, application servers, JVMs, etc., and this chip begins to hum. While the benchmark proof points are many, there still seem to be misconceptions about the performance of this chip.

    I have run across several performance evaluations lately where the T2000 was not being utilized to its full potential. The story goes like so...

    System Administrators First impressions

    Installing SW and configuring Solaris seems a little slow compared to the V490s we have sitting in the corner. But this is not a show stopper - just an observation. The system admin presses on and preps the machine for the DBA.

    DBAs First Impressions

    After the OS is installed, the machine is turned over to the DBAs to install and configure Oracle. The DBA notices that, compared to the V490, the Oracle installation takes about twice as long. They continue to configure and begin loading an export file from the production machine. Again, this too is slower than on the V490. Thinking something is wrong with the OS/HW, the DBA contacts the system administrator.

    Fanning the fire

    At this point, the DBA and system admin have an "ah-ha" moment and begin to speculate that something is awry. The system admin "times" some simple Unix commands: "tar", "gzip", "sort", etc... all seem slower on the T2000. The DBA does a few simple queries... again slower. What gives? Google fingers are warmed up, blogs are consulted, best practices are applied, and the results are unchanged.

    Throughput requires more than one thing

    The DBA and system admin have fallen into the trap of not testing the *real* application. In the process of setting up the environment, the single-threaded jobs to install and configure the SW and load the database are slower. But that is not the application. The real application is an online store with several hundred users running concurrently. Now we are getting somewhere.

    Throughput, Throughput, Throughput!

    Finally, the real testing begins. Load generators are broken out to simulate the hundreds of users. After loading up the system, it is found that the T2000 DB server can handle about 2x the number of orders/sec of the V490! Wait a minute: "gzip" is slower, but this little chip can process 2x the orders? That's what CMT is all about... throughput, throughput, throughput!

    Friday Jun 29, 2007

    Swingbench Order-Entry doesn't scale-up with equal load

    In my previous post, I pointed out some considerations for deploying the Swingbench Order-Entry benchmark on large systems. The main bottleneck in that case was the database size. When scaling too small a database to huge transaction rates, concurrency issues in the data prevent scaling. Luckily, Swingbench has a way to adjust the number of "Users" and "Orders"... or so it would seem.

    Adjusting Users and Orders

    I used the "oewizard" utility to create the maximum number of customers and orders - 1 million each. This created a database that was about 65GB total. The "oewizard" is a single-threaded process and therefore takes a little time... be patient. After doing my 1st run, I was a little concerned by the difference in performance.

    Scale-up differences

    In the real world, as database size grows, transactions often bloat. This is often noticed by enterprising DBAs and performance analysts, and eventually leads to re-coding of SQL or changes in the transaction logic. So as a real-world database scales up, it goes through a series of bloating and fixing.

    When designing a benchmark to show scale-up and make comparisons of systems at various database sizes, it is desirable to ensure transactions present a similar load. If this is not the case, it should be noted, and comparisons should NOT be made across database sizes. The "Order Products", "New Registration", and "Browse Order" transactions, which are part of the Swingbench Order-Entry test, all experience transaction bloat as the database size is increased.

    The following response time chart shows the effects of "one" user running on databases of 25,000 and 1,000,000 orders.

    The moral: beware of comparing results of differing database sizes using the Swingbench default Order-Entry kit.

    Monday Jun 11, 2007

    Swing and a miss... Benchmarking is not \*that\* easy.

    I applaud tools that aim to make life easier. The cell phone is a wonderful invention that, when combined with my Palm Pilot, was even better. Now Apple has taken it a step further with music, movies, and the internet and birthed the iPhone - nicer still!

    Over the past year, I have been seeing more and more IT shops experiment with benchmark tools. One such tool is a kit developed by Dominic Giles of Oracle called Swingbench. Swingbench is a benchmark toolkit that is easy to install and run. Now the DBA can install the benchmark schema and, with a few clicks... wham, they are benchmarking! Now comes the hard part: what do these results mean?

    After about the 4th call from a customer having performance issues with their application "Swingbench", I was compelled to take a deeper look.

    Luckily, all of the performance problems were easily solved by someone who benchmarks for a living. They were typically misconfiguration issues: filesystem features, lack of IO, lack of memory, too small a dataset, etc... The scary part: these situations all used the supplied "demo" schemas.

    By perusing the Swingbench documentation, I saw that the demo schemas top out at a 100GB database size. This is also alarming. Most IT shops that buy servers or deploy multi-node RAC configurations have more disk than the modern laptop. So you can imagine my surprise when I saw a bake-off on an enterprise class machine that was essentially doing no IO and choking to death on latches... simply the wrong test for the environment.

    Event                                               Waits    Time (s) Ela Time
    -------------------------------------------- ------------ ----------- --------
    latch free                                      4,542,675   1,137,914    79.04
    log file sync                                     242,359     164,671    11.44
    buffer busy waits                                 102,540      61,887     4.30
    enqueue                                            35,142      42,498     2.95
    CPU time                                                       25,310     1.76
    

    Benchmarking is simply not *that* easy. It takes time to scale up a workload that can simulate your environment. No question that Swingbench gives you a nice head start: it allows you to encapsulate your transactions and run simple regression tests, but you have to take the time to customize the kit to include your data and transactions. The demo schemas are simply a starting point.

    Wednesday Nov 15, 2006

    Where do you cache Oracle data?

    Using the filesystem page cache in combination with the Oracle buffer cache for database files was commonplace before 64-bit databases were prevalent - machines had a lot of memory, and databases could not use more than 4GB. Now, after many years of 64-bit databases, a fair number of systems still use buffered IO via the filesystem or cached QIO. While buffered IO used to provide benefit, it can cause substandard performance and impede scaling on modern large-scale systems. Buffered file system issues include:
    • Single-writer lock
    • Memory fragmentation: 8k blocks instead of 4M or 32M ISM.
    • Double buffering doubles memory bandwidth requirements!
    • Segmap thrashing... lots of xcalls!

    2x throughput increase with Oracle Caching vs OS buffered IO

    A quick experiment was conducted on Oracle 10gR2 on a large memory machine (192GB).
    • 1st test: DB cache was set to 1GB and the database was mounted on a buffered file system.
    • 2nd test: DB cache was set to 50GB and the database was mounted direct - NOT buffered.
    A 46GB table was populated, indexed, and then queried by 100 processes, each requesting a range of data. A single row was retrieved at a time to simulate what would happen in an OLTP environment. The data was cached so that no IO occurred during any of the runs. When the dust had settled, the Oracle buffer cache provided a 2x speedup over buffered file systems. There was also a dramatic decrease in getmaps, xcalls, and system CPU time. The table below shows the results.

      Cache   OS    Rows/sec   getmaps/sec   xcalls/sec   usr   sys
      -----   ---   --------   -----------   ----------   ---   ---
      FS      S9     287,114        86,516    2,600,000    71    28
      DB      S9     695,700           296        3,254    94     5
      FS      S10    334,966       106,719        1,003    78    21


    Notice that cross calls for Solaris 10 with FS cache have been nearly eliminated, while the getmaps have increased in proportion to throughput. This is due to the elimination of the xcalls associated with the getmap operation. That said, despite the mild improvement in throughput with S10 on filesystems, it is nothing like the 2x improvement achieved by avoiding buffered IO altogether.

    Recognizing buffered IO impact

    An elevated amount of system CPU time can be observed at a high level. It is not uncommon to see a usr/sys ratio of 1 or less on systems where buffered IO is in use. This is due to the high number of getmap reclaims and cross-calls (xcal). You can observe cross-calls with the mpstat(1) command, as below. Segmap activity is best observed using the segmapstat utility, which is part of the cachekit utilities. The segmapstat utility polls "kstats" to retrieve hit/miss data in an easy-to-read format. If you are using Solaris 10, the impact of cross-calls is less, but segmap activity is still visible.
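
    For example (5-second interval; high values in the xcal column indicate cross-call overhead):

      $ mpstat 5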

    Finally, it would be nice to be able to see the amount of data in the page cache. If you are on Solaris 8, you will need to install the memtool package written by Richard McDougall. If you are on Solaris 9 or greater, you can use mdb(1) with the ::memstat command. Beware: this command can take a long time to run and may affect performance, so it is best to run it when the system is not busy.

    # mdb -k
    Loading modules: [ unix krtld genunix ip usba wrsm random
      ipc nfs ptm cpc ]
    > ::memstat
    Page Summary                Pages                MB  %Tot 
    ------------     ----------------  ----------------  ---
    Kernel                     430030              3359    2% 
    Anon                       805572              6293    3% 
    Exec and libs                9429                73    0% 
    Page cache               14974588            116988   52% 
    Free (cachelist)          2547680             19903    9% 
    Free (freelist)           9853807             76982   34% 
    Total                    28621106            223602 
    

    How do you avoid using buffered IO?

    The easiest way to avoid the OS page cache is simply to use RAW partitions. This is commonly done in combination with SVM or VxVM. More recently, Oracle introduced its own volume manager (ASM), which makes use of async IO and eases the administration of Oracle databases. That said, databases on RAW partitions are not for everyone. Users often prefer to use standard OS tools to view and manipulate database files in filesystems.

    Most filesystems have ways of bypassing the OS page cache for Oracle datafiles. UFS, QFS, and VxFS all support mounting filesystems to bypass the OS page cache - the only exception is ZFS, which doesn't allow for direct or async IO. Below, methods for disabling buffered IO with filesystems are discussed.

    FILESYSTEMIO_OPTIONS=SETALL (Oracle 9i and greater) init.ora parameter

    The first step in avoiding buffered IO is to use the "FILESYSTEMIO_OPTIONS" parameter. When you use the "SETALL" option, this sets all the options for a particular filesystem to enable directio or async IO. Setting FILESYSTEMIO_OPTIONS to anything other than "SETALL" could reduce performance, so it is a best practice to set this option.
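
    A minimal sketch of setting it (FILESYSTEMIO_OPTIONS is a static parameter, so it only takes effect after an instance restart):

      SQL> alter system set filesystemio_options=setall scope=spfile;
      SQL> shutdown immediate
      SQL> startup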

    UFS and directio

    With UFS, the only way to bypass the page cache is with directio. If you are using Oracle 9i or greater, set the FILESYSTEMIO_OPTIONS=SETALL init.ora parameter. This is the preferred way of enabling directio with Oracle: Oracle uses an API to enable directio when it opens database files, and you can still use buffered IO for operations like backup and archiving. If you are using Oracle 8i, the only way to enable directio with UFS is via the forcedirectio mount option, as below.
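
    For the 8i case, the mount-time alternative looks like this (device and mount point illustrative):

      # mount -F ufs -o forcedirectio /dev/dsk/c1t0d0s6 /u01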

    VxFS with QIO

    VxFS has several options for disabling buffered IO. Like UFS, VxFS does support directio, but it is not as efficient as Quick IO (QIO) or Oracle Disk Manager (ODM). With VxFS, async IO is possible with QIO or ODM. Data files for use with QIO must be created with a special utility or converted to the QIO format. With QIO, you have to be careful that the "cached" QIO option is not enabled; with cached QIO, blocks of selected data files will be placed in the OS page cache.

    VxFS with ODM

    Like QIO, ODM uses async IO. ODM uses an API specified by Oracle to open and manipulate data files. ODM lowers overhead on large systems by sharing file descriptors, eliminating the need for each Oracle shadow/server process to open and obtain its own file descriptors.

    Convincing Oracle to cache table data

    Finally, after all this is done, Oracle still may not cache table data as you expect. I have seen more than a few people enable "directio" and increase the SGA, only to have the response times of their critical queries get longer! If a table is too large or the "cache" attribute is not set, Oracle will not attempt to cache tables when scanning. This is done to avoid flooding the Oracle buffer cache with data that will most likely not be used. Luckily, there is an easy way to correct this behavior by setting the "CACHE" storage parameter on a table:
    
      SQL> alter table BIGACTIVETABLE cache;
    

    Finally, you may need to convince some of the Oracle DBAs of the benefit. DBAs look at Oracle performance data from an Oracle-centric point of view. When data such as Oracle's Statspack is analyzed, some pretty awesome response times can be seen. Wait events for IO such as "db file sequential read" and "db file scattered read" can show response times of less than 1ms when reading from the OS page cache. Often, when looking at such data, DBAs are reluctant to give up this response time. This should be viewed as an opportunity to further improve performance by placing the data in the Oracle buffer cache and avoiding the reads altogether.

    Summary and references

    Hopefully this has given you some background on why unbuffered IO is so critical to obtaining optimal performance with Oracle. It is far more efficient to obtain Oracle blocks from the database buffer cache than to go through the OS page cache layers.
    About

    This blog discusses performance topics as seen on Sun servers. The main focus is database performance and architecture, but other topics can and will creep in.
