Configuring Oracle ASM in a Solaris Container with Solaris Volume Manager

Make sure that you have read the following posts before continuing with this post:

We have previously learned how to configure Oracle ASM in a Solaris Container by setting the container up with raw device access. While this works, it is not a good idea to expose physical device names inside a container: the administrator of the virtualized OS should not have to know about devices in the global container, and being tied to a specific device gives no flexibility. In this exercise we will migrate from the raw device onto an SVM metadevice.

First we create the metadevice in the global container. It will be a mirror with a single submirror. Create a new device with enough space to hold your database, for example like this:

# metastat d40 
d40: Mirror 
    Submirror 0: d41 
      State: Okay         
    Pass: 1 
    Read option: roundrobin (default) 
    Write option: parallel (default) 
    Size: 2147450880 blocks (1023 GB) 

d41: Submirror of d40 
    State: Okay         
    Size: 2147450880 blocks (1023 GB) 
    Stripe 0: 
        Device                                             Start Block  Dbase        State Reloc Hot Spare 
        /dev/dsk/c4t600A0B80003391700000061F49A65117d0s0          0     No            Okay   Yes 

Device Relocation Information: 
Device                                           Reloc  Device ID 
/dev/dsk/c4t600A0B80003391700000061F49A65117d0   Yes    id1,ssd@n600a0b80003391700000061f49a65117

Read more about Solaris Volume Manager.
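
The commands that created this metadevice are not shown above; a minimal sketch, assuming the same slice that appears in the metastat output, would be:

# metainit d41 1 1 c4t600A0B80003391700000061F49A65117d0s0
# metainit d40 -m d41

The first command builds the d41 concatenation on the slice; the second turns it into the one-way mirror d40, to which a second submirror can be attached later.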

Then we need to allow our container to access this device.

# zonecfg -z zone1
zonecfg:zone1> add device 
zonecfg:zone1:device> set match=/dev/md/rdsk/d40 
zonecfg:zone1:device> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

The container must be rebooted (# zoneadm -z zone1 reboot) for it to see the new device. Make sure that you have stopped all Oracle processes in the container before rebooting. As soon as the container boots you can log in and use the new device for ASM (assuming the ASM instance is running).
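
For example, a minimal shutdown sequence inside the container before the reboot could look like this (the database SID orcl is only an illustration):

$ ORACLE_SID=orcl sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
$ ORACLE_SID=+ASM sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit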

First, assign oracle:dba ownership in the same way as we did before.

# cd /dev/md/rdsk 
# ls -lh 
total 0 
crw-r-----   1 root     sys       85, 40 Feb 27 04:40 d40 
# chown oracle:dba d40 
# ls -lh 
total 0 
crw-r-----   1 oracle   dba       85, 40 Feb 27 04:40 d40 

Then connect to the Oracle ASM instance and add this device to the ASM disk group:

$ ORACLE_SID=+ASM sqlplus / as sysdba
SQL*Plus: Release 11.1.0.7.0 - Production on Sun Mar 1 07:27:02 2009 

Copyright (c) 1982, 2008, Oracle.  All rights reserved. 


Connected to: 
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production 
With the Partitioning, OLAP, Data Mining and Real Application Testing options 

SQL> create diskgroup testdg external redundancy disk '/dev/md/rdsk/d40'; 
create diskgroup testdg external redundancy disk '/dev/md/rdsk/d40' 
* 
ERROR at line 1: 
ORA-15018: diskgroup cannot be created 
ORA-15031: disk specification '/dev/md/rdsk/d40' matches no disks 
ORA-15014: path '/dev/md/rdsk/d40' is not in the discovery set 

By default /dev/md/rdsk is not in the search path for candidate devices, so we need to alter the asm_diskstring initialization parameter:

SQL> show parameter asm_diskstring; 

NAME                                 TYPE        VALUE 
------------------------------------ ----------- ------------------------------ 
asm_diskstring                       string 

SQL> alter system set asm_diskstring = '/dev/rdsk','/dev/md/rdsk'; 

System altered. 

SQL> show parameter asm_diskstring;

NAME                                 TYPE        VALUE 
------------------------------------ ----------- ------------------------------ 
asm_diskstring                       string      /dev/rdsk, /dev/md/rdsk 

Before adding the device to our existing disk group, let's try to create a test disk group testdg with it. If that succeeds, we will drop the test group and add the device to the current disk group DG1.

SQL> create diskgroup testdg external redundancy disk '/dev/md/rdsk/d40'; 

Diskgroup created. 

SQL> SELECT STATE, NAME FROM V$ASM_DISKGROUP; 

STATE       NAME 
----------- ------------------------------ 
MOUNTED     DG1 
MOUNTED     TESTDG 

SQL> DROP DISKGROUP TESTDG; 

Diskgroup dropped. 

SQL> ALTER DISKGROUP DG1 ADD DISK '/dev/md/rdsk/d40'; 

Diskgroup altered. 

SQL> SELECT STATE, NAME FROM V$ASM_DISKGROUP; 

STATE       NAME 
----------- ------------------------------ 
MOUNTED     DG1 

As soon as you add the new disk you can see (here with iostat) that Oracle starts rebalancing data across the disks:

                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device 
    0.2   41.8    0.8 42190.0  0.0  0.5    0.0   11.8   0  49 md/d40 
    0.2   41.8    0.8 42190.0  0.0  0.5    0.0   11.8   0  49 md/d41 
   41.6   43.2 42189.2  172.8  0.0  0.6    0.0    6.5   0  49 c4t600A0B8000562790000005D04998C446d0 
    0.2  330.2    0.8 42190.0  0.0  2.9    0.0    8.9   0  49 c4t600A0B80003391700000061F49A65117d0 
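
The rebalance can also be tracked from the ASM instance itself; a query along these lines (not part of the original output) reports the estimated work remaining:

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes from v$asm_operation;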

Let's verify it:

SQL> set linesize 132; 
SQL> column path format a18; 
SQL> column name format a10; 
SQL> column G# format 99; 
SQL> column D# format 99; 
SQL> select group_number G#, disk_number D#, state, redundancy, name, path, total_mb, free_mb, (total_mb - free_mb) used_mb from v$asm_disk; 
 G#  D# STATE    REDUNDA NAME       PATH                 TOTAL_MB    FREE_MB    USED_MB 
--- --- -------- ------- ---------- ------------------ ---------- ---------- ---------- 
  1   0 NORMAL   UNKNOWN DG1_0000   /dev/rdsk/c4t600A0    1048567     838515     210052 
                                    B8000562790000005D 
                                    04998C446d0s0 

  1   1 NORMAL   UNKNOWN DG1_0001   /dev/md/rdsk/d40      1048560    1039786       8774 

40 minutes later:

SQL> select group_number G#, disk_number D#, state, redundancy, name, path, total_mb, free_mb, (total_mb - free_mb) used_mb from v$asm_disk; 

 G#  D# STATE    REDUNDA NAME       PATH                 TOTAL_MB    FREE_MB    USED_MB 
--- --- -------- ------- ---------- ------------------ ---------- ---------- ---------- 
  1   0 NORMAL   UNKNOWN DG1_0000   /dev/rdsk/c4t600A0    1048567     938606     109961 
                                    B8000562790000005D 
                                    04998C446d0s0 

  1   1 NORMAL   UNKNOWN DG1_0001   /dev/md/rdsk/d40      1048560     939695     108865  
 

And finally we can drop the first disk.

SQL> alter diskgroup dg1 drop disk dg1_0000; 

Diskgroup altered. 

SQL> select group_number G#, disk_number D#, state, redundancy, name, path, total_mb, free_mb, (total_mb - free_mb) used_mb from v$asm_disk; 

 G#  D# STATE    REDUNDA NAME       PATH                 TOTAL_MB    FREE_MB    USED_MB 
--- --- -------- ------- ---------- ------------------ ---------- ---------- ---------- 
  1   0 DROPPING UNKNOWN DG1_0000   /dev/rdsk/c4t600A0    1048567     939145     109422 
                                    B8000562790000005D 
                                    04998C446d0s0 

  1   1 NORMAL   UNKNOWN DG1_0001   /dev/md/rdsk/d40      1048560     939156     109404  
 

Wait for Oracle to finish evacuating the data from the disk being dropped:

                   extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device 
    0.2   81.8    0.8 40515.8  0.0  0.7    0.0    8.8   0  60 md/d40 
    0.2   81.8    0.8 40515.8  0.0  0.7    0.0    8.8   0  60 md/d41 
   39.2    1.0 40141.4    4.0  0.0  0.5    0.0   13.1   0  52 c4t600A0B8000562790000005D04998C446d0 
    0.2  357.6    0.8 40515.8  0.0  2.9    0.0    8.0   0  60 c4t600A0B80003391700000061F49A65117d0 

SQL> select group_number G#, disk_number D#, state, redundancy, name, path, total_mb, free_mb, (total_mb - free_mb) used_mb from v$asm_disk; 

 G#  D# STATE    REDUNDA NAME       PATH                 TOTAL_MB    FREE_MB    USED_MB 
--- --- -------- ------- ---------- ------------------ ---------- ---------- ---------- 
  0   0 NORMAL   UNKNOWN            /dev/rdsk/c4t600A0          0          0          0 
                                    B8000562790000005D 
                                    04998C446d0s0 

  1   1 NORMAL   UNKNOWN DG1_0001   /dev/md/rdsk/d40      1048560     829745     218815  
 

From the global container we now detach the initial raw device, leaving only the SVM-managed metadevice attached to the container.

# zonecfg -z zone1 
zonecfg:zone1> remove device match=/dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0 
zonecfg:zone1> verify 
zonecfg:zone1> commit 
zonecfg:zone1> exit

Reboot the container again, making sure that the Oracle processes have been shut down properly first.

From now on the container has only the metadevice attached. From the global container you can attach a second submirror to this metadevice, or migrate the metadevice between storage arrays, and the container never needs to care about the devices that make up the metadevice.
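
For example, attaching a second submirror on another array and later retiring the original one could look roughly like this (d42 and the slice c5t1d0s0 are hypothetical):

# metainit d42 1 1 c5t1d0s0
# metattach d40 d42
# metastat d40            # wait until the resync of d42 completes
# metadetach d40 d41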

Comments:

A disk becomes unknown after alter diskgroup dg drop diska since there is not enough free space in the dg, and then the dg cannot be mounted. Is it possible to recover this dg or force it to be mounted?

Posted by guest on July 28, 2009 at 02:29 AM PDT #

Have you tried
ALTER DISKGROUP dg UNDROP DISKS;

Posted by Roman Ivanov on July 28, 2009 at 03:11 PM PDT #

Please tell me: if you are allocating a disk to ASM, i.e. I am changing my disk ownership to oracle but I didn't specify any value for the diskstring, will my ASM instance start or not?

Posted by Ramanan on October 04, 2009 at 09:28 PM PDT #
