Tuesday Jul 11, 2006

Threshold login triggers for Oracle 10046 event trace collection

There are multiple ways to gather trace data. You can instrument the application, pick an Oracle SID and trace it as sysdba, turn on tracing for all users (ouch), or use a login trigger to narrow down to a specific user. Each of these methods has merit, but recently I needed to gather traces at various user levels.

The problem with most packaged applications is that they all use the *same* userid. For this Oracle 10g environment, I used this fact to filter only connections of the type that I wanted to sample. I wanted to gather 10046 event trace data when the number of connections was 10, 20, or 30. To achieve this, I used a logon trigger and sampled the number of sessions from v$session to come up with the connection count. I have found this little trick to be very useful in automating collection without modifying the application. I hope it can be useful to you as well.

create or replace trigger trace_my_user
  after logon on database

DECLARE
  mycnt  int;

BEGIN

-- count the existing connections for this application userid
SELECT count(*)
 INTO mycnt
 FROM v$session
 WHERE username='GLENNF';

 -- enable a 10046 trace (waits and binds) for the current session
 -- when the connection count hits one of the thresholds of interest
 if (user='GLENNF') and ((mycnt=10) or (mycnt=20) or (mycnt=30)) then
   dbms_monitor.session_trace_enable(null,null,true,true);
 end if;
end;
/

Thursday Feb 16, 2006

Oracle's vision of multi-core processors on Solaris

There seems to be some confusion about how an Oracle instance uses multi-core processors. From a database point of view, it can use all of the CPU resources that Solaris offers. To find out what CPU resources are available, use the "prtdiag" and "psrinfo" commands. The example below shows a single-board USIV domain on a SF25K. There are four dual-core USIV processors running at 1.2GHz, for a total of 8 cores. The prtdiag output shows the four processors with two CPU IDs each; the psrinfo command simply shows all 8 cores.

catscratchb:root> /usr/sbin/prtdiag
System Configuration:  Sun Microsystems  sun4u Sun Fire 15000
System clock frequency: 150 MHz
Memory size: 32768 Megabytes

========================= CPUs =========================

          CPU      Run    E$    CPU      CPU
Slot ID   ID       MHz    MB    Impl.    Mask
--------  -------  ----  ----  -------  ----
/SB01/P0   32, 36  1200  16.0  US-IV    2.2
/SB01/P1   33, 37  1200  16.0  US-IV    2.2
/SB01/P2   34, 38  1200  16.0  US-IV    2.2
/SB01/P3   35, 39  1200  16.0  US-IV    2.2

catscratchb:root> /usr/sbin/psrinfo
32      on-line   since 02/07/2006 18:00:23
33      on-line   since 02/07/2006 18:00:25
34      on-line   since 02/07/2006 18:00:25
35      on-line   since 02/07/2006 18:00:25
36      on-line   since 02/07/2006 18:00:25
37      on-line   since 02/07/2006 18:00:25
38      on-line   since 02/07/2006 18:00:25
39      on-line   since 02/07/2006 18:00:25

Oracle does size internal latching structures based on the number of CPUs available. This sizing does NOT prevent Oracle shadow processes from using CPU resources; it simply makes the system better suited to scale. To find out how many CPUs Oracle thinks are available, run the following SQL command as sysdba. We have found that Oracle sizes its data structures based on the number of CPU IDs reported by psrinfo.

SQL> connect / as sysdba

SQL> select ksppinm name, ksppstvl value, ksppdesc description
       from x$ksppi x, x$ksppcv y
       where (x.indx = y.indx)
         and ksppinm like '%cpu_count%'
       order by name;

NAME            VALUE  DESCRIPTION
--------------- ------ ----------------------------------
cpu_count       8      number of CPUs for this instance

Tuesday Jun 21, 2005

"orasrp" - Free Oracle response time based trace profiler.

Ever since I started reading the "Optimizing Oracle Performance" book by Cary Millsap, I have been salivating over getting a copy of the Hotsos profiler - but alas it is too expensive. Recently I tried to find a copy of their free version "Sparky" but it had been pulled due to support issues.

Finally, I stumbled upon orasrp. This analyzer is written in Python and can be used standalone to produce HTML, or you can browse a directory of trace files via your web browser.

Monday Apr 11, 2005

SUPerG conference - Paper and Presentation

I am going to be presenting at the spring SUPerG next week, 4/21 - 4/23. IMHO, this is the best conference Sun offers - a great forum for techies to meet techies. Since I typically update my presentation a few times before the conference, I thought it best to post LAG copies.

Hope to see you there,
Glenn

Demystifying High-End Sun Fire Behavior
when Scaling Database Applications

by Glenn Fawcett and Marcus Heckel
Paper rev1.02 updated 4/19/05 after blog comments!!
Presentation rev1.03 updated 4/21/05 prior to pres :)

This paper attempts to explain the differences within the Sun Fire server line. After examining these inherent differences, scaling topics are discussed: the effects of large user counts, memory sizes, and CPU counts, along with tips on how to best manage and scale applications.

Monday Aug 30, 2004

Session Wait History in Oracle 10g

In the past you could only sample Oracle Wait Events and often missed interesting information. In 10g Oracle introduced a new view, V$ACTIVE_SESSION_HISTORY, which keeps a historical view of wait events. I ran across a great article in the Database Journal which describes how to use this new feature... Pretty cool!
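As a quick sketch of how one might query it (a minimal example; v$active_session_history samples active sessions roughly once per second and keeps about an hour of history in memory), the top wait events over the sampled window can be summarized like this:

```sql
-- top wait events across the sampled history
SELECT event, count(*) samples
  FROM v$active_session_history
 WHERE session_state = 'WAITING'
 GROUP BY event
 ORDER BY samples DESC;
```

Since each sample represents roughly one second of session activity, the counts approximate seconds spent waiting on each event.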

Oracle performance on RAW or VxFS with QIO/ODM.

The RAW vs. COOKED debate has been going on for some time. You will find little argument about which is more efficient at caching DB buffers: if an Oracle process can get a block with a simple buffer get, this takes far less code than issuing an I/O and retrieving the block from the UFS cache. If this is the case, why have I run into numerous DBAs who tried RAW once but saw little or no performance improvement?

Further investigation usually reveals that the FS cache was providing a caching benefit that could not be realized within Oracle. Without increasing the SGA size, and possibly adjusting storage parameters on active tables and indexes, it is hard to realize any benefit from RAW.

A good place to start is by looking at your largest objects. If your statistics are current, you can use the following SQL. OEM also does a good job of showing table sizes.
SQL> select table_name, CACHE, num_rows, BLOCKS/128 MB 
     from all_tables
     order by MB desc
/
TABLE_NAME                     CACHE   NUM_ROWS         MB
------------------------------ ----- ---------- ----------
BIGACTIVETABLE                     N   20000000 13258.2891
LOOKUP                             Y      50000   150.5703
...
You also need to know the index sizes, so..
SQL> select index_name,
            num_rows,
            leaf_blocks/128 MB ,
            BUFFER_POOL
     from all_indexes
     where owner = 'STSC'
     order by MB desc
/
INDEX_NAME                       NUM_ROWS         MB BUFFER_
------------------------------ ---------- ---------- -------
BIGACTIVETABLE_PK                20000000  9245.6875 DEFAULT
....
Since we now know what the largest tables and indexes are, we should expect to see them in the buffer cache. This can be done by querying the v$bh table.
SQL> SELECT object_name, count(*)/128 MB 
     FROM v$bh, all_objects
     WHERE object_id=objd
     GROUP BY object_name
     ORDER BY MB desc
/

OBJECT_NAME                            MB
------------------------------ ----------
LOOKUP                          120.09375
BIGACTIVETABLE                  100.98438
BIGACTIVETABLE_PK                75.23438
....
Notice that "BIGACTIVETABLE" and its index are not being cached very well. This is because the storage parameter on the table and index is set to "NOCACHE", which causes Oracle not to actively try to cache these blocks. By altering these storage parameters
"alter table BIGACTIVETABLE cache;"
and increasing the SGA, these tables and indexes can use more of the Oracle buffer cache. Note that you can also separate objects into the DEFAULT, KEEP, and RECYCLE buffer pools. Statspack reports calculate hit ratios for the DEFAULT, KEEP, and RECYCLE buffer pools separately. I tend to put all objects I am actively trying to keep into the KEEP pool so I can track my caching efficiency via statspack.
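As a sketch of the KEEP pool approach (the pool size below is purely illustrative, and this assumes an spfile plus the object names from the example above):

```sql
-- carve out a dedicated KEEP pool from available memory
alter system set db_keep_cache_size = 2G scope=both;

-- move the hot table and its index into the KEEP pool
alter table BIGACTIVETABLE storage (buffer_pool keep);
alter index BIGACTIVETABLE_PK storage (buffer_pool keep);
```

With everything you intend to keep placed in the KEEP pool, its statspack hit ratio becomes a direct measure of your caching efficiency.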

I hope this helps inspire you to give this a try - it is not black magic.

Take Care,
Glenn
About

This blog discusses performance topics as they relate to Sun servers. The main focus is database performance and architecture, but other topics can and will creep in.
