Virtual CPUs' effect on Oracle SGA allocations.

Several years ago, I wrote about how Oracle views multi-threaded processors. At the time, we were just introducing a dual-core processor. This doubling of the number of cores was presented by Solaris as virtual CPUs, and Oracle would automatically set CPU_COUNT accordingly. But what happens when you introduce a 1RU server that has 128 virtual CPUs?

The UltraSPARC T1/T2/T2+ servers have many threads, or virtual CPUs. CPU_COUNT on these systems is set no differently than before. So, the newly introduced T5440 with 4 x UltraSPARC T2+ processors would have 256 threads, and CPU_COUNT would be set to 256.
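
To make this concrete, here is an illustrative check (the output and values are made up for a 256-thread system) comparing the virtual CPU count Solaris presents with what the instance picked up:

    $ psrinfo | wc -l        # psrinfo prints one line per virtual CPU
         256
    $ sqlplus / as sysdba
    SQL> show parameter cpu_count

    NAME        TYPE     VALUE
    ----------  -------  -----
    cpu_count   integer  256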

So, what does CPU_COUNT have to do with memory?

Thanks to my friends in the Oracle Real World Performance group, I was made aware that Oracle uses CPU_COUNT to size the minimum amount of SGA allowed. In one particular case, a DBA was trying to consolidate 70 database instances onto a T5140 with 64GB of memory and 128 virtual CPUs. Needless to say, SGA_TARGET would have to be set fairly low in order to accomplish this. SGA_TARGET was set to 256MB, but the following error was encountered:

    ORA-00821: Specified value of sga_target 256M is too small

After some experimentation, they were able to start Oracle with a target of 900MB, but with 70 instances that would not fly. Manually lowering CPU_COUNT allowed the DBA to use an SGA_TARGET of 256MB. Obviously, this is an extreme case, and changing CPU_COUNT was reasonable.
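
For reference, a minimal sketch of that workaround (the values shown are illustrative; confirm with Oracle Support whether setting CPU_COUNT manually is sanctioned on your release before doing this in production):

    SQL> alter system set cpu_count=4 scope=spfile;
    SQL> alter system set sga_target=256M scope=spfile;
    SQL> shutdown immediate
    SQL> startup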

Core and virtual CPU counts have been on the rise for some years now. Combine rising virtual CPU counts with the current economic climate, and I suspect that consolidation will be more popular than ever. In general, I would not advocate changing CPU_COUNT manually. If you had one instance on this box, the default would be just fine. CPU_COUNT automatically sizes so many other parameters that you should be very careful before making a change.
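
As a quick sanity check before (or instead of) touching CPU_COUNT, you can list a few of the parameters whose defaults are influenced by it; the names below are just a sample, not an exhaustive list:

    SQL> select name, value, isdefault
           from v$parameter
          where name in ('cpu_count', 'parallel_max_servers',
                         'db_writer_processes', 'log_buffer')
          order by name;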

Comments:

Hi Glenn,
I fear you show one of the biggest problems of the T[12](+) family: memory.
CPU power increased faster than the price of memory decreased, so most of the time modern servers have more than enough CPU power while they suffer from memory limitations. (This might not be 100% true in all cases; we are just facing this on a current streamlining/consolidation project.)
Back to your ORA-821: I totally agree NOT to tune CPU_COUNT unless you know what happens. Sadly, Oracle did not provide information about what changes based on CPU_COUNT. A better way would be to present only a reduced number of CPUs to the DB (both at the scan on startup AND during the workload) if there are multiple instances on this host. But doing so is not trivial in Solaris and influences other things as well.
Maybe I can provide some more details and real-life advice in the future - depends on the outcome of our consolidation ...
All in all, a great blog with great information!
br
Martin

Posted by Martin Berger on November 21, 2008 at 05:17 AM PST #

Thanks for the feedback... Indeed, the strength of the T[12]+ line of servers is how efficiently it can use memory via massive multi-threading. I also agree that if we can just present the proper CPU count to Oracle, it would have no choice but to set CPU_COUNT accordingly. This can be easily achieved using Solaris Zones with resource pools. If you create a zone with 4 threads, Oracle will see it as 4 CPUs and set CPU_COUNT=4. This topic probably needs some more entries. Indeed, consolidation is becoming quite popular these days.
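
For anyone who wants to try that, a minimal zonecfg sketch is shown below (the zone name oradb01 is hypothetical and assumed to already exist); the dedicated-cpu resource creates a temporary pool for the zone, and an instance started inside it should then report CPU_COUNT=4:

    # zonecfg -z oradb01
    zonecfg:oradb01> add dedicated-cpu
    zonecfg:oradb01:dedicated-cpu> set ncpus=4
    zonecfg:oradb01:dedicated-cpu> end
    zonecfg:oradb01> verify
    zonecfg:oradb01> commit
    zonecfg:oradb01> exit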

Posted by Glenn Fawcett on November 21, 2008 at 07:00 AM PST #

Hi Glenn,
can you provide some more information about the init.ora parameters your friends used? In my tests, 900MB is far too small. On a T5240 I had to set SGA_TARGET=8768M at minimum (otherwise I got: ORA-00821: Specified value of sga_target 1856M is too small, needs to be at least 8768M), but even with this config I hit ORA-04031: unable to allocate 22472 bytes of shared memory ("shared pool","unknown object","sga heap(3,1)","character set memory") during startup!
So SGA_TARGET=13G was fine in my tests to get all parameters derived from CPU_COUNT up to 128 CPUs.
You can find my findings here: http://berxblog.blogspot.com/2009/01/instance-parameters-derived-from.html
br
Martin

Posted by Martin Berger on January 07, 2009 at 04:34 AM PST #

Thanks, Martin, for testing on 11gR1. The environment I am referring to is 10g, so that is likely the difference. It seems like quite an increase, but still not really an issue since the T5240 comes with a minimum of 32GB of memory.

Posted by Glenn Fawcett on January 07, 2009 at 07:54 AM PST #

