High CPU usage on Oracle RAC investigation with statspack - followup

I just saw this here
Note:
- Many DBAs feel that if the data is already contained within the buffer cache, the query should be efficient. This could not be further from the truth. Retrieving more data than needed, even from the buffer cache, still requires CPU cycles and interprocess IO. Generally speaking, physical IO is not 10,000 times more expensive; it is in the neighborhood of 67 times more, and close to zero if the data is stored in the UNIX buffer cache.

One must always be careful when using the UNIX buffer cache with Oracle. The default Unix behavior is to comply with the POSIX standard for reading and writing files:

Read-Write Locks and Attributes

Read-write locks (also known as readers-writer locks) allow a thread to exclusively lock some shared data while updating that data, or allow any number of threads to have simultaneous read-only access to the data.
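
For anyone who has not run into these, here is a minimal sketch of the idea in plain POSIX threads. Nothing in it is Oracle-specific; the reader and writer functions are just illustrative:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
    static int shared_value = 0;

    /* Any number of readers can hold the lock at the same time. */
    void *reader(void *arg)
    {
        pthread_rwlock_rdlock(&lock);
        printf("read %d\n", shared_value);
        pthread_rwlock_unlock(&lock);
        return NULL;
    }

    /* A writer waits until it holds the lock exclusively. */
    void *writer(void *arg)
    {
        pthread_rwlock_wrlock(&lock);
        shared_value++;
        pthread_rwlock_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t r, w;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }

The point is the asymmetry: readers never block each other, but a writer excludes everyone, which is exactly the serialization the filesystem imposes on Oracle's behalf.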


So the default file system behavior is not optimal for Oracle, since Oracle can manage its own file accesses, e.g. it will not read a block while that same block is being written, and the extra layer of protection that POSIX provides is not needed. You can improve your IO concurrency on Solaris by adding the forcedirectio mount option to the database partitions in /etc/vfstab, or by setting the Oracle parameter FILESYSTEMIO_OPTIONS to SETALL.
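
A minimal sketch of both options follows. The device names and mount point are my own placeholders, not taken from any real system, and note that FILESYSTEMIO_OPTIONS is a static parameter, so it only takes effect after an instance restart:

    # /etc/vfstab - hypothetical UFS database partition mounted with forcedirectio
    # device to mount    device to fsck      mount point   FS type  fsck pass  mount at boot  mount options
    /dev/dsk/c0t1d0s6    /dev/rdsk/c0t1d0s6  /u02/oradata  ufs      2          yes            forcedirectio,logging

    -- or let Oracle request direct (and asynchronous) IO itself
    ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;
    -- then restart the instance for the change to take effect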

You may now want to increase the size of the Oracle buffer cache, since Oracle is now bypassing the filesystem buffer cache and some of that memory can be allocated directly to Oracle buffers.
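
A hedged example of checking and then growing the Oracle buffer cache; the 2G figure is purely illustrative, size it from your own statspack or buffer cache advisory data:

    -- current size of the default buffer cache
    SELECT component, current_size/1024/1024 AS size_mb
      FROM v$sga_dynamic_components
     WHERE component = 'DEFAULT buffer cache';

    -- grow it; this is dynamic as long as it fits within SGA_MAX_SIZE
    ALTER SYSTEM SET db_cache_size = 2G SCOPE=BOTH;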
 
