Monday Oct 19, 2009

Exadata V2... Oracle grid consolidation in a box

I spent some time last week at OOW talking with Oracle customers about the technology in the Exadata V2 database machine. There were certainly a lot of customers excited to use it for their data warehouses - 21GB/sec of disk throughput, 50GB/sec of flash cache bandwidth, and Hybrid Columnar Compression really accelerate this machine past the competition. The viability of Exadata V2 for DW/BI was a given, but what impressed me most was the number of customers looking to consolidate applications in this environment.

Ever since I was first brought onto this project, I thought Exadata V2 would be an excellent platform for consolidation. In my experience working on the largest of Sun's servers, I have seen customers with dozens of instances on a single machine. Using M9000 series machines, you can create domains in order to support multiple environments - this very much mirrors what Exadata V2 can provide. Exadata V2 allows DBAs to deploy multiple instances across a grid of RAC nodes available in the DB machine - and since you are using RAC, availability is a given. Also, the addition of flash allows for up to 1 million IOPS to support your ERP/OLTP environments. Consider the picture below.

With this environment, your production data warehouse can share the same infrastructure as the ERP, test, and development environments. This model gives you the flexibility to add or subtract nodes from a particular database as needed. But operational flexibility is not the biggest benefit of consolidation: the savings in terms of power, space, and cooling are substantial.

Consider for a moment the number of drives necessary to match the 1 million IOPS available in the database machine. Assuming you are using the best 15,000 rpm drive, you would be able to do 250 IOPS per drive. So, to get to 1 million IOPS, you would need 4,000 drives! A highly dense 42U storage rack can house anywhere from 300-400 drives. So, you would need 10 racks just for the storage, and at least one rack for servers.

With Exadata V2, you get more than 10:1 savings in floor space, and all the power and cooling benefits as well. It is no wonder people are excited about Exadata V2 as a platform to consolidate compute and storage resources.

Tuesday Dec 09, 2008

Oracle analysis 101: Beginning analysis techniques

Recently, I was asked to present beginning Oracle analysis techniques to an internal audience of Sun engineers. This presentation was a lot of fun to put together and was well received. After cleaning it up a bit and taking out the boring internal Sun stuff, I thought the presentation might be useful to a larger audience. It focuses on problem statements, environmental data, and basic AWR/Statspack analysis.

If you find this useful or have suggestions, drop me a note.

Saturday Nov 08, 2008

Virtual CPUs' effect on Oracle SGA allocations.

Several years ago, I wrote about how Oracle views multi-threaded processors. At the time, we were just introducing a dual-core processor. The doubling of the number of cores was presented by Solaris as virtual CPUs, and Oracle would automatically size CPU_COUNT accordingly. But what happens when you introduce a 1RU server that has 128 virtual CPUs?

The UltraSPARC T1/T2/T2+ servers have many threads, or virtual CPUs. CPU_COUNT on these systems is sized no differently than before. So, the newly introduced T5440 with 4 x UltraSPARC T2+ processors has 256 threads, and CPU_COUNT would be set to 256.

So, what does CPU_COUNT have to do with memory?

Thanks to my friends in the Oracle Real World Performance group, I was made aware that Oracle uses CPU_COUNT to size the minimum amount of SGA allowed. In one particular case, a DBA was trying to allocate 70 database instances on a T5140 with 64GB of memory and 128 virtual CPUs. Needless to say, SGA_TARGET would have to be set fairly low in order to accomplish this task. SGA_TARGET was set to 256MB, but the following error was encountered.

    ORA-00821: Specified value of sga_target 256M is too small
After experimentation, they were able to start Oracle with a target of 900MB, but with 70 instances this would not fly. Manually lowering CPU_COUNT allowed the DBA to use an SGA_TARGET of 256MB. Obviously, this is an extreme case, and changing CPU_COUNT was reasonable.
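Here is a minimal sketch of that workaround, assuming the instance is started from an spfile (the values are illustrative, not a recommendation):

       SQL> show parameter cpu_count

       -- lower CPU_COUNT so the minimum-SGA computation shrinks;
       -- the new value takes effect at the next startup
       SQL> alter system set cpu_count=8 scope=spfile;
       SQL> alter system set sga_target=256M scope=spfile;
       SQL> shutdown immediate
       SQL> startup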

Core and virtual CPU counts have been on the rise for some years now. Combine rising virtual CPU counts with the current economic climate, and I would suspect that consolidation will be more popular than ever. In general, I would not advocate changing CPU_COUNT manually. If you had one instance on this box, the default would be just fine. CPU_COUNT automatically sizes so many other parameters that you should be very careful before making a change.

Monday Sep 22, 2008

Oracle Open World 2008 - Growing Green Databases with UltraSPARC CMT

The time has come to present at Oracle Open World on UltraSPARC CMT performance. I decided to post the final version here in addition to the OOW site. I hope to see you there!
Session ID: S299785
     Title: Growing Green databases with Oracle on the UltraSPARC CMT processor
      Date: Monday Sept 22nd
      Time: 13:00 - 14:00
     Place: Moscone South Rm 236

Thursday May 29, 2008

Optimizing Oracle DSS operations with CMT based servers

This entry continues the Throughput Computing Series, showing how a typical DSS operation can be optimized with CMT based servers. The "Create as Select" and "Insert into as Select" operations are quite common in DSS environments, and in OLTP environments as well. Unless parallelism is specified, Oracle will single-thread these operations. To achieve optimal throughput, these operations can use parallel query and DML.

Results

I created a 20GB table on a T5240 server to serve as the source for the "Create as Select" (CAS) operations. The parallelism of the CAS operation was increased until the IO subsystem was maxed out. This resulted in a drop from 25 minutes with no parallelism to 2 minutes 40 seconds with 8 threads... that's nearly a 10x speedup by simply using the parallelism built into Oracle!



This server was configured with just two HBAs, one each for the source and destination tables. This limited the throughput of CAS operations to 127MB/sec, or one HBA. With this IO configuration, it took only 8 threads to reach maximum throughput. You should experiment to achieve the maximum throughput of your IO configuration. If you suspect your IO configuration is not performing up to speed, look into doing some IO micro-benchmarking to find the maximum throughput outside of Oracle. A topic for a later discussion :)

SQL syntax

The following shows how to use parallel DML and parallel query.
           ## Create as Select ##
           ##
           SQL> alter session enable parallel dml;

           SQL> create table abc
                parallel (degree 32)
                as
                select /*+ parallel(gtest, 32) */ * from gtest;


           ## Insert as Select ##
           ##
           SQL> alter session enable parallel dml;

           SQL> insert /*+ parallel(abc,32) */
                into abc
                select /*+ parallel(gtest,32) */ * from gtest;

Wednesday May 21, 2008

Optimizing Oracle index create with CMT based servers

One of the most common ways to improve SQL performance is the use of indexes. While Oracle does have a wide variety of indexes available, these tests focus on the most commonly used B-tree index. On large tables, it is important to ensure indexes get created in a timely fashion, which is why Oracle introduced several features to decrease index creation time:
  • "unrecoverable"

    This feature prevents the logging of intermediate steps of the index creation process. There is really no value in logging intermediate steps; index creation should be thought of as an atomic process - if it fails, you can always start over. Note that if you create indexes as "unrecoverable", they won't be recoverable until a backup is performed on the target tablespace.

  • "parallel"

    This simply uses parallel query/dml to speed the creation of indexes.
The following index create statement shows how to use the "parallel" and "unrecoverable" features for index creation.
      create index gtest_c1 on gtest(idname)
      pctfree 30  parallel 64 tablespace glennf_i unrecoverable;
      

Results

The following test created a non-unique index on a varchar(32) column of a 20GB table. Parallelism was increased from 1 to 64 in order to use the available IO bandwidth. With a parallelism of 1, index creation took 34 minutes, while with a parallelism of 64 it took only 3 minutes and 45 seconds!



These tests use the same configuration as previous posts regarding Oracle in the Throughput Computing series.

Wednesday May 14, 2008

Parallelizing Oracle backup with RMAN on CMT based servers

A backup window is important to keep in check to ensure time for batch and on-line work. With Oracle RMAN, there are several ways to keep backups flowing smoothly. This example shows how you can use multiple channels and parallelism to increase the throughput of a backup to the maximum of your IO configuration.

Results

This graph shows scaling in MB/sec based on the number of channels in use. The term "channel" as used by Oracle does not have any relation to actual physical channels. In Oracle RMAN terms, a channel is simply a "connection" to a database through which to back up data. Data files are assigned to connections in a round-robin fashion so as to utilize all connections as evenly as possible.



By configuring a parallelism of 20 with RMAN, I was able to increase throughput from 5 to 80 MB/sec. Single-threaded performance was limited to 5MB/sec, mainly due to the high CPU component that comes with using COMPRESSED backups. The way to maximize IO throughput with compression is simply to add more streams.

RMAN commands to achieve parallelism

I used the following commands to create 20 backup "channels" for RMAN. Notice that they are configured to use the same directory, just with different file formats.

RMAN> configure channel 1 device type disk format
     '/o6s_data/GLENNF/d2/backup_db_c1%d_S_%s_P_%p_T_%t' MAXPIECESIZE 1024 M;
RMAN> configure channel 2 device type disk format
     '/o6s_data/GLENNF/d2/backup_db_c2%d_S_%s_P_%p_T_%t' MAXPIECESIZE 1024 M;
...
...
RMAN> configure channel 20 device type disk format
     '/o6s_data/GLENNF/d2/backup_db_c20%d_S_%s_P_%p_T_%t' MAXPIECESIZE 1024 M;

After creating these channels, you must tell RMAN how to connect to these channels:

RMAN> configure channel 1 DEVICE TYPE DISK CONNECT '/as sysdba';
RMAN> configure channel 2 DEVICE TYPE DISK CONNECT '/as sysdba';
...
...
RMAN> configure channel 20 DEVICE TYPE DISK CONNECT '/as sysdba';

Next, you need to tell RMAN to use disk parallelism of 20:

RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE 
      TO COMPRESSED BACKUPSET PARALLELISM 20;

Finally, let's issue the backup command:

RMAN> BACKUP TABLESPACE GLENNF_RMAN;

Starting backup at 09-MAY-08
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=966 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=952 devtype=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: sid=940 devtype=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: sid=938 devtype=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: sid=939 devtype=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: sid=969 devtype=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: sid=961 devtype=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: sid=963 devtype=DISK
allocated channel: ORA_DISK_9
channel ORA_DISK_9: sid=953 devtype=DISK
allocated channel: ORA_DISK_10
channel ORA_DISK_10: sid=970 devtype=DISK
allocated channel: ORA_DISK_11
channel ORA_DISK_11: sid=920 devtype=DISK
allocated channel: ORA_DISK_12
channel ORA_DISK_12: sid=943 devtype=DISK
allocated channel: ORA_DISK_13
channel ORA_DISK_13: sid=968 devtype=DISK
allocated channel: ORA_DISK_14
channel ORA_DISK_14: sid=929 devtype=DISK
allocated channel: ORA_DISK_15
channel ORA_DISK_15: sid=960 devtype=DISK
allocated channel: ORA_DISK_16
channel ORA_DISK_16: sid=931 devtype=DISK
allocated channel: ORA_DISK_17
channel ORA_DISK_17: sid=927 devtype=DISK
allocated channel: ORA_DISK_18
channel ORA_DISK_18: sid=957 devtype=DISK
allocated channel: ORA_DISK_19
channel ORA_DISK_19: sid=958 devtype=DISK
allocated channel: ORA_DISK_20
channel ORA_DISK_20: sid=964 devtype=DISK
channel ORA_DISK_1: starting compressed full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00068 name=/oracle/O6S/sapraw/glenn1
channel ORA_DISK_1: starting piece 1 at 09-MAY-08
channel ORA_DISK_2: starting compressed full datafile backupset
channel ORA_DISK_2: specifying datafile(s) in backupset
input datafile fno=00069 name=/oracle/O6S/sapraw/glenn2
channel ORA_DISK_2: starting piece 1 at 09-MAY-08
channel ORA_DISK_3: starting compressed full datafile backupset
channel ORA_DISK_3: specifying datafile(s) in backupset
input datafile fno=00070 name=/oracle/O6S/sapraw/glenn3
channel ORA_DISK_3: starting piece 1 at 09-MAY-08
channel ORA_DISK_4: starting compressed full datafile backupset
channel ORA_DISK_4: specifying datafile(s) in backupset
input datafile fno=00071 name=/oracle/O6S/sapraw/glenn4
channel ORA_DISK_4: starting piece 1 at 09-MAY-08
channel ORA_DISK_5: starting compressed full datafile backupset
channel ORA_DISK_5: specifying datafile(s) in backupset
input datafile fno=00072 name=/oracle/O6S/sapraw/glenn5
channel ORA_DISK_5: starting piece 1 at 09-MAY-08
channel ORA_DISK_6: starting compressed full datafile backupset
channel ORA_DISK_6: specifying datafile(s) in backupset
input datafile fno=00073 name=/oracle/O6S/sapraw/glenn6
channel ORA_DISK_6: starting piece 1 at 09-MAY-08
channel ORA_DISK_7: starting compressed full datafile backupset
channel ORA_DISK_7: specifying datafile(s) in backupset
input datafile fno=00074 name=/oracle/O6S/sapraw/glenn7
channel ORA_DISK_7: starting piece 1 at 09-MAY-08
channel ORA_DISK_8: starting compressed full datafile backupset
channel ORA_DISK_8: specifying datafile(s) in backupset
input datafile fno=00075 name=/oracle/O6S/sapraw/glenn8
channel ORA_DISK_8: starting piece 1 at 09-MAY-08
channel ORA_DISK_9: starting compressed full datafile backupset
channel ORA_DISK_9: specifying datafile(s) in backupset
input datafile fno=00076 name=/oracle/O6S/sapraw/glenn9
channel ORA_DISK_9: starting piece 1 at 09-MAY-08
channel ORA_DISK_10: starting compressed full datafile backupset
channel ORA_DISK_10: specifying datafile(s) in backupset
input datafile fno=00077 name=/oracle/O6S/sapraw/glenn10
channel ORA_DISK_10: starting piece 1 at 09-MAY-08
channel ORA_DISK_11: starting compressed full datafile backupset
channel ORA_DISK_11: specifying datafile(s) in backupset
input datafile fno=00078 name=/oracle/O6S/sapraw/glenn11
channel ORA_DISK_11: starting piece 1 at 09-MAY-08
channel ORA_DISK_12: starting compressed full datafile backupset
channel ORA_DISK_12: specifying datafile(s) in backupset
input datafile fno=00079 name=/oracle/O6S/sapraw/glenn12
channel ORA_DISK_12: starting piece 1 at 09-MAY-08
channel ORA_DISK_13: starting compressed full datafile backupset
channel ORA_DISK_13: specifying datafile(s) in backupset
input datafile fno=00080 name=/oracle/O6S/sapraw/glenn13
channel ORA_DISK_13: starting piece 1 at 09-MAY-08
channel ORA_DISK_14: starting compressed full datafile backupset
channel ORA_DISK_14: specifying datafile(s) in backupset
input datafile fno=00081 name=/oracle/O6S/sapraw/glenn14
channel ORA_DISK_14: starting piece 1 at 09-MAY-08
channel ORA_DISK_15: starting compressed full datafile backupset
channel ORA_DISK_15: specifying datafile(s) in backupset
input datafile fno=00082 name=/oracle/O6S/sapraw/glenn15
channel ORA_DISK_15: starting piece 1 at 09-MAY-08
channel ORA_DISK_16: starting compressed full datafile backupset
channel ORA_DISK_16: specifying datafile(s) in backupset
input datafile fno=00083 name=/oracle/O6S/sapraw/glenn16
channel ORA_DISK_16: starting piece 1 at 09-MAY-08
channel ORA_DISK_17: starting compressed full datafile backupset
channel ORA_DISK_17: specifying datafile(s) in backupset
input datafile fno=00084 name=/oracle/O6S/sapraw/glenn17
channel ORA_DISK_17: starting piece 1 at 09-MAY-08
channel ORA_DISK_18: starting compressed full datafile backupset
channel ORA_DISK_18: specifying datafile(s) in backupset
input datafile fno=00085 name=/oracle/O6S/sapraw/glenn18
channel ORA_DISK_18: starting piece 1 at 09-MAY-08
channel ORA_DISK_19: starting compressed full datafile backupset
channel ORA_DISK_19: specifying datafile(s) in backupset
input datafile fno=00086 name=/oracle/O6S/sapraw/glenn19
channel ORA_DISK_19: starting piece 1 at 09-MAY-08
channel ORA_DISK_20: starting compressed full datafile backupset
channel ORA_DISK_20: specifying datafile(s) in backupset
input datafile fno=00087 name=/oracle/O6S/sapraw/glenn20
channel ORA_DISK_20: starting piece 1 at 09-MAY-08
channel ORA_DISK_2: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c2O6S_S_81_P_1_T_654270132 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:58
channel ORA_DISK_3: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c3O6S_S_82_P_1_T_654270132 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:58
channel ORA_DISK_4: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c4O6S_S_83_P_1_T_654270132 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_4: backup set complete, elapsed time: 00:00:58
channel ORA_DISK_9: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c9O6S_S_88_P_1_T_654270133 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_9: backup set complete, elapsed time: 00:00:57
channel ORA_DISK_11: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c11O6S_S_90_P_1_T_654270133 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_11: backup set complete, elapsed time: 00:00:57
channel ORA_DISK_12: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c12O6S_S_91_P_1_T_654270133 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_12: backup set complete, elapsed time: 00:00:57
channel ORA_DISK_13: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c13O6S_S_92_P_1_T_654270133 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_13: backup set complete, elapsed time: 00:00:57
channel ORA_DISK_18: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c18O6S_S_97_P_1_T_654270134 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_18: backup set complete, elapsed time: 00:00:56
channel ORA_DISK_20: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c20O6S_S_99_P_1_T_654270135 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_20: backup set complete, elapsed time: 00:00:55
channel ORA_DISK_10: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c10O6S_S_89_P_1_T_654270133 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_10: backup set complete, elapsed time: 00:00:58
channel ORA_DISK_16: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c16O6S_S_95_P_1_T_654270134 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_16: backup set complete, elapsed time: 00:00:57
channel ORA_DISK_1: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c1O6S_S_80_P_1_T_654270132 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:00
channel ORA_DISK_5: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c5O6S_S_84_P_1_T_654270132 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_5: backup set complete, elapsed time: 00:01:00
channel ORA_DISK_14: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c14O6S_S_93_P_1_T_654270134 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_14: backup set complete, elapsed time: 00:00:58
channel ORA_DISK_7: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c7O6S_S_86_P_1_T_654270132 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_7: backup set complete, elapsed time: 00:01:01
channel ORA_DISK_8: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c8O6S_S_87_P_1_T_654270132 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_8: backup set complete, elapsed time: 00:01:01
channel ORA_DISK_6: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c6O6S_S_85_P_1_T_654270132 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_6: backup set complete, elapsed time: 00:01:04
channel ORA_DISK_15: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c15O6S_S_94_P_1_T_654270134 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_15: backup set complete, elapsed time: 00:01:02
channel ORA_DISK_17: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c17O6S_S_96_P_1_T_654270134 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_17: backup set complete, elapsed time: 00:01:02
channel ORA_DISK_19: finished piece 1 at 09-MAY-08
piece handle=/o6s_data/GLENNF/d2/backup_db_c19O6S_S_98_P_1_T_654270135 tag=TAG20080509T134205 comment=NONE
channel ORA_DISK_19: backup set complete, elapsed time: 00:01:01
Finished backup at 09-MAY-08

Configuration

  • T5240 - "Maramba" CMT based server
    • 2 x UltraSPARC T2 Plus @ 1.4GHz
    • 128GB memory
    • 2 x 1Gb fiber channel HBAs
    • 1 x 6140 storage array with 1 LUN per channel.
  • Software
    • Solaris 10 Update 5
    • Oracle 10.2.0.3
    • CoolTools

Monday May 12, 2008

Optimizing Oracle Schema Analyze with CMT based servers

A common observation regarding Niagara based servers is that system maintenance and database administration tasks can run slower than on previous generations of Sun servers. While single-threaded performance may be less, these maintenance tasks can often be parallelized, especially with a database engine as mature as Oracle. Take, for instance, the task of gathering schema statistics. Oracle offers many options for how to gather schema statistics, but there are a few ways to reduce the overall gather time:
  • Increased Parallelism
  • Reduced Sample Size
  • Concurrency
Oracle has written many articles on Metalink which discuss sample size and its various virtues. There have also been many volumes written on optimizing the Oracle cost based optimizer (CBO). Jonathan Lewis, a member of the famous OakTable network, has written books and multiple white papers on the topic. You can read these for insight into the Oracle CBO. While a reasonable sample size or the use of DBMS_STATS.AUTO_SAMPLE_SIZE can seriously reduce gather statistics times, I will leave it up to you to choose the sample size that produces the best plans.

Results

The following graph shows the total run time, in seconds, of a GATHER_SCHEMA_STATS operation at various levels of parallelism and sample size on a simple schema of 130GB. All tests were run on a Maramba T5240 with a 6140 array and two channels.

GATHER_SCHEMA_STATS parallelism and sample_size


Note that if higher levels of sampling are required, parallelism can help to significantly reduce the overall runtime of the GATHER_SCHEMA_STATS operation. Of course a smaller sample size can be employed as well.

GATHER_SCHEMA_STATS options

SQL> connect / as sysdba

-- Example with 10 percent with parallel degree 32
--
SQL> EXECUTE SYS.DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME=>'GLENNF', 
     ESTIMATE_PERCENT=>10, 
     DEGREE=>32, 
     CASCADE=>TRUE);

-- Example with AUTO_SAMPLE_SIZE and parallel degree 32
--
SQL> EXECUTE SYS.DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME=>'GLENNF', 
     ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE, 
     DEGREE=>32, 
     CASCADE=>TRUE);

Note that you must have parallel_max_servers set to at least the level of parallelism desired for the GATHER_SCHEMA_STATS operation. I typically set it higher to allow other parallel operations to get servers.

        SQL> alter system set parallel_max_servers = 128;

Finally, you can easily run schema collects on multiple schemas concurrently and in parallel by issuing GATHER_SCHEMA_STATS from multiple sessions, ensuring the level of parallelism is set high enough to accommodate them.
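For example, here is a minimal sketch of launching two collections concurrently as background jobs with DBMS_SCHEDULER (the job names are made up, and the schema names are taken from the listing below; DBMS_JOB works similarly on older releases):

       SQL> BEGIN
              -- one job per schema; each gathers with DEGREE=>32,
              -- so both together fit within parallel_max_servers=128
              DBMS_SCHEDULER.CREATE_JOB(
                job_name   => 'GATHER_GLENNF',
                job_type   => 'PLSQL_BLOCK',
                job_action => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(
                                 OWNNAME=>''GLENNF'', DEGREE=>32, CASCADE=>TRUE); END;',
                enabled    => TRUE);
              DBMS_SCHEDULER.CREATE_JOB(
                job_name   => 'GATHER_HAYDEN',
                job_type   => 'PLSQL_BLOCK',
                job_action => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(
                                 OWNNAME=>''HAYDEN'', DEGREE=>32, CASCADE=>TRUE); END;',
                enabled    => TRUE);
            END;
            /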

Configuration

  • T5240 - "Maramba" CMT based server
    • 2 x UltraSPARC T2 Plus @ 1.4GHz
    • 128GB memory
    • 2 x 1Gb fiber channel HBAs
    • 1 x 6140 storage array with 1 LUN per channel.
  • Software
    • Solaris 10 Update 5
    • Oracle 10.2.0.3
    • CoolTools
  • Schema
      OWNER	 TABLE_NAME	NUM_ROWS       MB
      -------- ------------ ---------- --------
      GLENNF	 B2	       239826150    38560
      GLENNF	 B1	       237390000    32110
      GLENNF	 S2		 4706245      750
      GLENNF	 S4		 4700995      750
      GLENNF	 S5		 4699955      750
      GLENNF	 S7		 4698450      750
      GLENNF	 S8		 4706435      750
      GLENNF	 S9		 4707445      750
      GLENNF	 S10		 4700905      750
      GLENNF	 S3		 4706375      750
      GLENNF	 GTEST		 4706170      750
      
      OWNER	 TABLE_NAME	NUM_ROWS       MB
      -------- ------------ ---------- --------
      GLENNF	 S6		 4700980      750
      GLENNF	 S1		 4705905      710
      HAYDEN	 HTEST		 4723031      750
      
      14 rows selected.
      
      OWNER	 INDEX_NAME	NUM_ROWS       MB
      -------- ------------ ---------- --------
      GLENNF	 B1_I2	       244841720    11623
      GLENNF	 B2_I2	       239784800    11451
      GLENNF	 B1_I1	       248169793     8926
      GLENNF	 B2_I1	       241690170     8589
      GLENNF	 S6_I2		 4790380      229
      GLENNF	 S3_I2		 4760090      227
      GLENNF	 S2_I2		 4693120      226
      GLENNF	 S5_I2		 4688230      224
      GLENNF	 S8_I2		 4665695      223
      GLENNF	 S4_I2		 4503180      216
      GLENNF	 S1_I2		 4524730      216
      
      OWNER	 INDEX_NAME	NUM_ROWS       MB
      -------- ------------ ---------- --------
      GLENNF	 S9_I2		 4389080      211
      GLENNF	 S10_I2 	 4364885      209
      GLENNF	 S7_I2		 4357240      208
      GLENNF	 S2_I1		 4972635      177
      GLENNF	 S3_I1		 4849660      174
      GLENNF	 S6_I1		 4830895      174
      GLENNF	 S9_I1		 4775830      171
      GLENNF	 S7_I1		 4772975      169
      GLENNF	 S5_I1		 4648410      168
      GLENNF	 GTEST_C1	 4686790      167
      GLENNF	 S1_I1		 4661605      166
      
      OWNER	 INDEX_NAME	NUM_ROWS       MB
      -------- ------------ ---------- --------
      GLENNF	 S4_I1		 4626965      166
      GLENNF	 S10_I1 	 4605100      164
      GLENNF	 S8_I1		 4590735      163
      
      25 rows selected.
      

Monday Apr 21, 2008

Throughput computing series: Utilizing CMT with Oracle

Since we just recently announced multi-chip CMT servers that provide up to 128 threads in a 1U or 2U box, it seems fitting to pick up this thread on throughput computing.

The key to fully appreciating the CMT architecture with Oracle is to exploit the available threads. As I have spoken about earlier in the throughput computing series, this can be done through concurrency, parallelism, or both. Oracle, being the mature product that it is, can achieve high levels of parallelism as well as concurrency.

Concurrent processing with Oracle

For examples of concurrent processing with Oracle, look at the recent results on the Oracle Ebusiness payroll benchmark. They show that by using concurrent processes to break up the batch, you can increase batch throughput. Going from 4 to 64 processes decreased batch time from 31.53 minutes to 4.63 minutes and increased throughput by 6.8x!

With Oracle's Ebusiness Suite of software, you can increase the number of "concurrent manager" processes to more fully utilize the available threads on the system. Each ISV has different ways of controlling batch distribution and concurrency. You will have to check with your various software vendors to find all the ways to increase concurrency.

Parallelism in Oracle

People often associate parallelism in Oracle with parallel query. In most cases where CMT is involved, I see a lack of understanding of how to achieve parallelism with more basic administrative functions. Oracle excels in providing parallelism for important administrative tasks.

  • Schema analyze
  • Index build/rebuild
  • Parallel loader
  • Parallel export/import with datapump

While parallelism exists for these administrative tasks, some configuration is required. I will examine the various ways to achieve optimal throughput with CMT based systems on these tasks.
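As a preview, here is a minimal sketch of a parallel Data Pump export (the schema, directory object, and degree of parallelism are illustrative):

       ## Parallel export with 16 workers ##
       $ expdp system SCHEMAS=glennf DIRECTORY=dump_dir \
             DUMPFILE=glennf_%U.dmp PARALLEL=16

The %U substitution lets Data Pump create one dump file per worker, which is needed for PARALLEL to actually spread the work.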
Monday Mar 10, 2008

Oracle db_block_checksum performance bug

We recently ran across a pretty strange performance bug in the checksum function of Oracle. This bug (6814520) causes excessive CPU to be used by the checksum routine. The checksum feature of Oracle is enabled by the db_block_checksum=TRUE parameter. As of Oracle 10gR2, TRUE is the default setting. The magnitude of the CPU overhead depends on the type of Solaris SPARC architecture.

    Chip     %overhead
    ----     ---------
    SPARC64       250%
    USIV           45%
    ------------------
    w/patch         8%

Oracle released a patch via Metalink to address this situation. This patch is for 10.2.0.3 installations. The fix will be included in 11.1.0.7, 10.2.0.4, and 10.2.0.5.

If you are unsure whether or not you are hitting this bug, you can easily alter this parameter on the fly:
      SQL> alter system set db_block_checksum=FALSE;
Warning: this will disable the checksum feature, and blocks written while it is set to FALSE will not contain checksum information.
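Either way, you can confirm the current setting from SQL*Plus before and after the experiment:

      SQL> show parameter db_block_checksum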

Regardless of whether you are currently hitting the bug or not, the recommendation is:
      INSTALL "6814520" WHEN USING ORACLE 10.2.0.3 !!!

Thursday Feb 14, 2008

Ensuring directIO with Oracle 10.2.0.3 on Solaris UFS filesystems

I usually dislike blog entries that have nothing to say other than repackaging bug descriptions and offering them up as knowledge, but in this case I have made an exception, since the full impact of the bug is not widely described.

There is a fairly nasty Oracle bug in 10.2.0.3 that prevents the use of directIO with Solaris. The Metalink note 406472.1 describes the failure modes, but fails to mention the performance impact if you use "filesystemio_options=setall" and do not have the mandatory patch 5752399 in place.

This was particularly troubling to me, since we have been recommending the use of "setall" for years to ensure all the proper filesystem options are set for optimal performance. I just finished working on a customer situation where this patch was not installed and their critical batch run-times were nearly 4x as long... not a pretty situation. OK, so bottom line:
      MAKE SURE YOU INSTALL "5752399" WHEN USING ORACLE 10.2.0.3 !!!
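And if you have not already enabled directIO this way, a minimal sketch of setting it (filesystemio_options is a static parameter, so the change takes effect at the next restart):

      SQL> alter system set filesystemio_options=SETALL scope=spfile;
      SQL> shutdown immediate
      SQL> startup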

Monday Jan 07, 2008

Throughput computing series... getting the most out of your SPARC CMT server.

I was thinking about the development of a CMT throughput benchmark, but it occurred to me that there are many *good* examples of throughput already out there with the benchmarks we publish... just look at the bmseer postings on the recent T2 results and the long line of performance records on the T2000.

The biggest disconnect with CMT servers is a misunderstanding of throughput and multi-threaded applications. I made a posting last year which touched on some initial impressions, but I thought it would be a good idea to dig in further.

This entry is to kick off a series of postings that explore different aspects of throughput computing in a CMT environment. The rough outline is as follows:
  Overview
  • Definition of throughput computing, multi-threading, and concurrency
  Explore system parallelism
  • Unix commands and parallel options
  • Concurrent builds/compiles
  • Configuring the system for parallelism
  Configuring applications for parallelism
  • Concurrency vs. multi-threading
  • Single-threaded jobs
  Database parallelism with Oracle
  • Parallel loader and datapump
  • Index build parallelism
  • Concurrent processing in Oracle
  • Configuring Oracle for CMT servers

Friday Jan 04, 2008

Organizational stove-pipes complicate database storage configurations.

IT organizations at large companies are complex entities where people are partitioned by function. There are SAN people, system administrators, database administrators, and developers. While it is good to specialize by function, there seems to be a mismatch when each organization optimizes its own internal operations. Let me walk you through a common situation where the SAN administrators and system administrators each try to optimize performance without considering the overall picture.

The setup

• The DBA requests storage for a new application, expecting filesystem(s) or raw LUNs to be presented for ASM use.
• The systems administrators request LUNs from the storage administrators to fulfill the request.
• The storage administrators supply the LUNs.

Systems Administrators

Their job is to make sure the performance of the supplied LUNs maps cleanly to the database environment. For years, systems administrators have been using SW volume management/RAID to improve performance. So, naturally, they request a large number of LUNs (say 128) from the storage administrators so they can stripe. Past experimentation has shown that a 32k stripe width was best.

Storage Administrators

The storage people take care of large Hitachi or EMC boxes. Their job is to supply LUNs to applications and make sure their SAN box performs well. They gladly supply the LUNs to the systems administrators, but to ensure the performance of the SAN box, they must prevent the fiber from "resets". The maximum number of requests on a fiber is 256. So, no problem: they have the systems administrators adjust the "sd_max_throttle" parameter so the OS will queue events and not cause resets. The rule of thumb is to set it to:
    
           sd_max_throttle = 256/#luns = 256/128 = 2
    
    
    

Putting it all together

So, now the systems administrator takes these 128 LUNs and creates four file systems, each striping 32 LUNs together with a 32k stripe width using SVM. Since this is a SAN, there are multiple connections from the host to the SAN; in this case there are four. MPxIO is used to round-robin IO requests across the 4 connections to balance load and allow for fail-over in case of an HBA failure.

This environment is turned over to the DBA, who finds the performance is less than stellar.

Analysis

The DBA is running 10 jobs that result in queries which full-scan 10 tables. These queries request 1MB per IO. Now, a stripe width of 32k breaks each 1MB IO down into 32 equal pieces... and since there are 10 concurrent jobs, that equates to 32*10 or 320 concurrent requests for IO. Finally, these 320 requests are routed down one of the four channels, so that would be 320/4 or 80 requests per channel. Are you beginning to see the problem?

    Given the "sd_max_throttle" setting of 2, the OS will allow 2 outstanding requests at a time. If you look at the array, the performance will look great... so it must be an OS problem :)

The Fix

This issue was solved in multiple phases.
• Quick fix: simply increase "sd_max_throttle" to >= 80 (see the snippet below). This will prevent queuing at the driver level.
• Increased stripe width: use an SVM stripe width of 1MB or greater. This will reduce the number of IOs broken down by the SW volume manager.
• Optimal solution: eliminate SW striping altogether and build larger LUNs within the SAN box.
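For the quick fix, the tunable lives in /etc/system; a minimal sketch, using the value derived in the analysis above (use ssd:ssd_max_throttle instead if your fibre channel disks attach through the ssd driver):

       set sd:sd_max_throttle=80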

Summary

Storage issues often involve multiple layers of HW, SW, people, and organizations. To architect a well-thought-out solution, all aspects must be taken into consideration. Get everyone talking and sharing information so that your organizational stove-pipes don't cripple application performance.

Friday Sep 28, 2007

PGstatspack - Getting at Postgres performance data.

I thought I posted this a while ago... maybe a blog bug?
=====
I have been working with Oracle for the past 18 years, mostly in the performance arena. Last year, I began working with Postgres as well. Being a performance guy, I naturally looked at how to get at the performance data necessary to tune the database for maximum performance. To my surprise, little existed in the way of performance tools for Postgres. I was looking for the "Statspack" or "AWR" report for Postgres. I found several one-off tools, but nothing that provided a "Load Profile" like Statspack.

PG_STAT* tables... V$ tables in disguise

Postgres has a series of tables that are essentially counters, like the V$ tables. They record the counts of things like (see the query sketch after this list):
• committed transactions
• rolled back transactions
• tuples accessed
• tuples inserted
• blocks read
• block hits
• tuples accessed by table and index
• physical reads by table and index
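For example, the database-wide counters come straight from the stock pg_stat_database view (the columns below exist as of Postgres 8.3):

       SELECT datname, xact_commit, xact_rollback,
              blks_read, blks_hit,
              tup_inserted, tup_updated, tup_deleted
         FROM pg_stat_database;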

Creating a prototype

I fashioned the prototype after Oracle's Statspack. I created a simple schema where I essentially duplicated the PG_STAT* tables and added a key for the snapshot. There is also a management table, "pgstatspack_snap", which stores the snapid, timestamp, and a short description.
To keep with the Statspack-like theme, a simple PL/pgSQL procedure was created to take snapshots:
    SELECT pgstatspack_snap('My test run');
      

Creating pgstatspack reports

Now *all* you have to do is create the reports. I have created a simple report that gets at the heart of what is encapsulated in the "Load Profile" section of Statspack. Additionally, I have profiled some of the table objects in terms of access, IO, etc. The report essentially does a diff of the counters between the two snap intervals. Time data is applied to calculate the per-second rates.
This is meant to be a launch pad for experimentation. Hopefully, you will find it interesting. The prototype package and report can be downloaded here: pgstatspack.tar.gz
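The per-second rates are just deltas between snapshots divided by the elapsed time. A sketch of the idea, using hypothetical table and column names patterned on the schema described above (pgstatspack_database as the snapshot copy of pg_stat_database, snap_time in pgstatspack_snap):

       -- committed transactions per second between snapshots :snap1 and :snap2
       SELECT e.datname,
              (e.xact_commit - b.xact_commit)
                / EXTRACT(EPOCH FROM es.snap_time - bs.snap_time) AS tps
         FROM pgstatspack_database b
         JOIN pgstatspack_database e ON e.datname = b.datname
         JOIN pgstatspack_snap bs ON bs.snapid = b.snapid
         JOIN pgstatspack_snap es ON es.snapid = e.snapid
        WHERE b.snapid = :snap1
          AND e.snapid = :snap2;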
    
    $ rpt.sh 1 2 
    
    DATABASE THROUGHPUT 
    ==============================================================
     database  |  tps   | hitrate | lio_ps  |  rd_ps  | rows_ps  | ins_ps | upd_ps | del_ps 
    -----------+--------+---------+---------+---------+----------+--------+--------+--------
     igen      | 169.55 |   94.00 | 3909.70 |  211.15 | 23543.05 |  50.87 |  46.74 |   0.00 
     tpce      |   0.04 |    0.00 | 2310.97 | 2307.90 |     0.65 |   0.01 |   0.00 |   0.00 
     postgres  |   0.03 |   99.00 |    1.86 |    0.00 |     0.44 |   0.00 |   0.00 |   0.00 
     template1 |   0.00 |    0.00 |    0.00 |    0.00 |     0.00 |   0.00 |   0.00 |   0.00 
     template0 |   0.00 |    0.00 |    0.00 |    0.00 |     0.00 |   0.00 |   0.00 |   0.00 
    (5 rows)
    
    MOST ACCESSED TABLES by pct of tuples: igen database
    ==============================================================
        table     | tuples_pct | tab_hitpct | idx_hitpct | tab_read | tab_hit | idx_read | idx_hit 
    --------------+------------+------------+------------+----------+---------+----------+---------
     order_125    |         45 |         91 |         77 |    67566 |  698578 |    58050 |  202950
     product_125  |         42 |         99 |         99 |       82 |  120060 |       30 |  127345
     industry_125 |         10 |         99 |          0 |        1 |   22409 |        0 |       0
     customer_125 |          1 |         94 |         99 |    34978 |  657096 |     6858 | 1032477
    
    
Note: This prototype is built on top of the 8.3 version of Postgres. Some modification would be required to use it on other versions of Postgres.

Thursday Aug 16, 2007

Getting past GO with SPARC CMT

The SPARC T1/T2 chip is a lean, mean throughput machine. Load on DB users, application servers, JVMs, etc., and this chip begins to hum. While the benchmark proof points are many, there still seem to be misconceptions about the performance of this chip.

I have run across several performance evaluations lately where the T2000 was not being utilized to its full potential. The story goes like so...

System Administrator's First Impressions

Installing SW and configuring Solaris seems a little slow compared to the V490s we have sitting in the corner. But this is not a show stopper - just an observation. The system admin presses on and preps the machine for the DBA.

DBA's First Impressions

After the OS is installed, the machine is turned over to the DBAs to install and configure Oracle. The DBA notices that, compared to the V490, the Oracle installation takes about twice as long. They continue to configure and begin loading an export file from the production machine. Again, this too is slower than on the V490. Thinking something is wrong with the OS/HW, the DBA contacts the system administrator.

Fanning the fire

At this point, the DBA and system admin have an "ah-ha" moment and begin to speculate that something is awry. The system admin times some simple Unix commands: "tar", "gzip", "sort", etc... all seem slower on the T2000. The DBA runs a few simple queries... again, slower. What gives? Google fingers are warmed up, blogs are consulted, best practices are applied, and the results are unchanged.

Throughput requires more than one thing

The DBA and system admin have fallen into the trap of not testing the *real* application. In the process of setting up the environment, the single-threaded jobs that install and configure the SW and load the database are slower. But that is not the application. The real application is an on-line store with several hundred users running concurrently. Now we are getting somewhere.

Throughput, Throughput, Throughput!

Finally, the real testing begins. Load generators are broken out to simulate the hundreds of users. After loading up the system, it is found that the T2000 DB server can handle about 2x the number of Orders/sec of the V490! Wait a minute: "gzip" is slower, but this little chip can process 2x the orders? That's what CMT is all about... throughput, throughput, throughput!

About

This blog discusses performance topics on Sun servers. The main focus is database performance and architecture, but other topics can and will creep in.
