Configuration example of Oracle ASM in Solaris Container.


In this post I will give you a tip on how to set up Oracle ASM in a Solaris Container. The main point of the container's configuration is to set the proper privileges: the container must have proc_priocntl, and should also have proc_lock_memory (highly recommended), in order for ASM to function properly inside it. Use the following as a configuration example when creating the container and adjust it to your needs. Please read the inline comments:


create
# container will be named zone1
# make sure that directory /zones exists and has permissions 700
set zonepath=/zones/zone1
set autoboot=true
set limitpriv=default,proc_priocntl,proc_lock_memory
set scheduling-class="FSS"
set max-shm-memory=16G

# ip-type exclusive is optional; shared IP is also possible
set ip-type=exclusive
add net
set physical=e1000g1
end

add fs
set dir=/usr/local
# make sure /opt/zone1/local exists
set special=/opt/zone1/local
set type=lofs
end

add fs
# mount /distro from the global zone into the container;
# I keep the Oracle distribution files there
set dir=/distro
set special=/distro
set type=lofs
add options [ro]
end

# this device will be used for ASM inside the container
add device
set match=/dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0
end

# dedicate 16 CPUs to the container
add dedicated-cpu
set ncpus=16-16
end

verify
commit


Put this config into zone1.txt and edit the file to adjust it to your environment. Then create, install and boot the container:

# zonecfg -z zone1 -f zone1.txt
# zoneadm -z zone1 install
# zoneadm -z zone1 boot


When you are done, log in to the newly created container and proceed with installing Oracle and configuring ASM.
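Inside the container, a quick sanity check before the Oracle installation is to confirm the extra privileges took effect and that the ASM candidate device is visible, then hand the device over to the Oracle software owner. A minimal sketch, assuming the usual oracle:dba account and the device from the example above:

```shell
# From the global zone, log in to the container
zlogin zone1

# Confirm the zone really has the extra privilege
ppriv -v $$ | grep -i priocntl

# Verify the exported raw device is visible inside the zone
ls -lL /dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0

# Let the Oracle software owner access the ASM candidate disk
chown oracle:dba /dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0
chmod 660 /dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0
```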


Tips:

  1. Since I have a dual-FC-connected 6140 array, I have it configured with the Solaris I/O multipathing feature enabled:

# stmsboot -D fp -e

  2. I really like to use VNC to access my lab remotely.
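To double-check which devices the multipathing driver claimed after that change, the standard listing commands can be used (a sketch; the output will differ per array):

```shell
# Show how device names were remapped after enabling multipathing
stmsboot -L

# List multipathed logical units and inspect their operational paths
mpathadm list lu
```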


Comments:

This will work, but will not work well as a production system. The constant problem: global domain down, all local zones are down; the Oracle local zone takes CPU to process and all other local zones are suffering. So as a production solution this is not working well at all. Live experience.

Posted by Sergey Khochay on March 30, 2009 at 03:41 AM PDT #

"Global domain down all Local zones are down" - There will be always such dependency in virtualization. How to avoid this? Do you have single box example?

"oracle local zone takes CPU to process all other local zones suffering" - this is the question of proper resource assignment.

Posted by Roman Ivanov on March 30, 2009 at 06:49 AM PDT #

Roman,

We have a SFE6900 with 7 Zones configured on Sun Solaris 10.

In one zone we are trying to install Oracle ASM.

Our configuration:

root@e6900i # zonecfg -z datatrafic info
zonename: datatrafic
zonepath: /export/zone/datatrafic
brand: native
autoboot: false
bootargs:
pool: datatrafic-pool
limitpriv: default,proc_priocntl,proc_lock_memory
scheduling-class:
ip-type: shared
[max-lwps: 8192]
fs:
    dir: /oracle/product/datatrafic
    special: /export/zone/datatrafic/ora1
    raw not specified
    type: lofs
    options: []
fs:
    dir: /oracle/product/agent
    special: /export/zone/datatrafic/ora2
    raw not specified
    type: lofs
    options: []
fs:
    dir: /oracle/product/asm
    special: /export/zone/datatrafic/ora3
    raw not specified
    type: lofs
    options: []
fs:
    dir: /oracle/product/admin
    special: /export/zone/datatrafic/ora4
    raw not specified
    type: lofs
    options: []
fs:
    dir: /oradata/datatrafic/archiv1
    special: /export/zone/datatrafic/ora5
    raw not specified
    type: lofs
    options: []
fs:
    dir: /oradata/datatrafic/archiv2
    special: /export/zone/datatrafic/ora6
    raw not specified
    type: lofs
    options: []
fs:
    dir: /oradata/datatrafic/export
    special: /export/zone/datatrafic/ora7
    raw not specified
    type: lofs
    options: []
net:
    address: 161.196.33.19
    physical: ce0
    defrouter not specified
device
    match: /dev/dsk/c8t5006016130603645d3s6
device
    match: /dev/rdsk/c8t5006016130603645d3s6
device
    match: /dev/rdsk/c8t5006016930603645d4s6
device
    match: /dev/dsk/c8t5006016930603645d4s6
device
    match: /dev/dsk/c8t5006016130603645d5s6
device
    match: /dev/rdsk/c8t5006016130603645d5s6
device
    match: /dev/dsk/c8t5006016130603645d10s6
device
    match: /dev/rdsk/c8t5006016130603645d10s6
device
    match: /dev/dsk/c8t5006016930603645d6s6
device
    match: /dev/rdsk/c8t5006016930603645d6s6
device
    match: /dev/dsk/c8t5006016130603645d11s6
device
    match: /dev/rdsk/c8t5006016130603645d11s6
device
    match: /dev/dsk/c8t5006016130603645d7s6
device
    match: /dev/rdsk/c8t5006016130603645d7s6
device
    match: /dev/dsk/c1t5006016830603645d12s6
device
    match: /dev/rdsk/c1t5006016830603645d12s6
device
    match: /dev/dsk/c8t5006016130603645d9s6
device
    match: /dev/rdsk/c8t5006016130603645d9s6
device
    match: /dev/dsk/c8t5006016930603645d8s6
device
    match: /dev/rdsk/c8t5006016930603645d8s6
device
    match: /dev/dsk/c8t5006016930603645d14s6
device
    match: /dev/rdsk/c8t5006016930603645d14s6
device
    match: /dev/dsk/c8t5006016930603645d15s6
device
    match: /dev/rdsk/c8t5006016930603645d15s6
device
    match: /dev/dsk/c8t5006016930603645d20s6
device
    match: /dev/rdsk/c8t5006016930603645d20s6
device
    match: /dev/dsk/c8t5006016130603645d16s6
device
    match: /dev/rdsk/c8t5006016130603645d16s6
device
    match: /dev/dsk/c8t5006016130603645d18s6
device
    match: /dev/rdsk/c8t5006016130603645d18s6
device
    match: /dev/dsk/c8t5006016130603645d17s6
device
    match: /dev/rdsk/c8t5006016130603645d17s6
device
    match: /dev/dsk/c8t5006016130603645d19s6
device
    match: /dev/rdsk/c8t5006016130603645d19s6
device
    match: /dev/rdsk/c8t5006016130603645d13s6
device
    match: /dev/dsk/c8t5006016130603645d13s6
device
    match: /dev/dsk/c8t5006016130603645d21s6
device
    match: /dev/rdsk/c8t5006016130603645d21s6
device
    match: /dev/dsk/c8t5006016930603645d22s6
device
    match: /dev/rdsk/c8t5006016930603645d22s6
device
    match: /dev/dsk/c8t5006016130603645d23s6
device
    match: /dev/rdsk/c8t5006016130603645d23s6
device
    match: /dev/dsk/c8t5006016930603645d24s6
device
    match: /dev/rdsk/c8t5006016930603645d24s6
device
    match: /dev/dsk/c8t5006016930603645d25s6
device
    match: /dev/rdsk/c8t5006016930603645d25s6
capped-memory:
    physical: 16G
    [swap: 16G]
    [locked: 512K]
rctl:
    name: zone.max-lwps
    value: (priv=privileged,limit=8192,action=deny)
rctl:
    name: zone.max-swap
    value: (priv=privileged,limit=17179869184,action=deny)
rctl:
    name: zone.max-locked-memory
    value: (priv=privileged,limit=524288,action=deny)
root@e6900i #

Our /etc/project
# more project
system:0::::
user.root:1::::project.max-shm-memory=(privileged,4294967296,deny)
noproject:2::::
default:3::::
group.staff:10::::
user.oracledt:100:oracledt:::process.max-sem-nsems=(privileged,1024,deny);project.max-sem-ids=(privileged,256,deny);project.max-shm-ids=(privileged,256,deny);project.max-shm-memory=(privileged,4294967296,deny)
#
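As a side note, the limit values in /etc/project and in the zone rctls are raw byte counts, so they are easy to sanity-check with shell arithmetic; 4294967296 is exactly 4 GiB and the zone's max-swap of 17179869184 is 16 GiB:

```shell
# project.max-shm-memory limit: 4 GiB in bytes
echo $((4 * 1024 * 1024 * 1024))    # 4294967296

# zone.max-swap limit: 16 GiB in bytes
echo $((16 * 1024 * 1024 * 1024))   # 17179869184
```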

Could you help me figure out whether my project config is OK?

I have a lot of doubts about whether user.root needs memory assigned, because ASM is activated from the root account by running the localconfig script.

I have 16 GB assigned in total. What is the best memory split between root and oracle?

Thanks in advance, and sorry for my bad English.

AGAD

Posted by Angel Gadea on July 29, 2009 at 06:48 AM PDT #

Your config looks good at first glance. Instead of applying the project settings to user.root, you may set them on the system project, so they will be system-wide (inside the zone). Make sure you know whether ASM is running as root or as oracle. You can check a running process's limits:
# prctl -P -i process <pid>
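Concretely, moving that limit from user.root onto the system project could look like this (a sketch; the 4 GiB value is taken from the /etc/project listing above):

```shell
# Apply the shared-memory cap to the system project (system-wide in the zone)
projmod -s -K "project.max-shm-memory=(privileged,4294967296,deny)" system

# Then verify what a running Oracle/ASM process actually gets
prctl -n project.max-shm-memory -i process $$
```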

Posted by Roman Ivanov on July 31, 2009 at 12:14 AM PDT #

Hello,

I have a Sun Enterprise T5140 SPARC with 2 CPUs = 128 vCPUs and 16 GB RAM.
Please help me: how do I configure this server for Oracle 10g in zones?
I have two non-global zones: oracle and glassfish.
I mostly understand how to partition CPU and memory.

Posted by bieszczaders on January 12, 2010 at 04:50 AM PST #

bieszczaders,

Best Practices for Running Oracle Databases in Solaris Containers:
http://blogs.sun.com/pomah/entry/best_practices_for_running_oracle

Posted by Roman Ivanov on January 12, 2010 at 04:13 PM PST #

About

Roman (pomah) Ivanov, ISV Engineering. Tips how to run Oracle best on Sun. Performance monitoring and tuning for system administrators. OpenSolaris user experience.
