Friday Aug 19, 2011

Using ACFS to ease the management of an Oracle Active/Passive cluster configuration

After having set up an Active/Passive cluster failover configuration (see this blog entry), it is useful to have a shared location to store diagnostic data, audit trails and the like, in order to keep a consistent, server-independent view of the system.


Actually, any kind of shared file system can do the job. But since we are already using and enjoying the whole Oracle stack of toys, we can rely on Oracle ACFS, which sits on top of Oracle ASM.

In this entry, I will apply the guidelines found in the Oracle ASM User Guide to configure an ACFS volume, mount it, and use it to store the database diagnostic-related files.

As you will see, this is a rather straightforward procedure.

Prerequisites

  • Oracle 11gR2 Grid Infrastructure is installed and running with an existing ASM diskgroup (DG_ASM).
  • If doing this on a Linux box (as I usually do), make sure you do not use the Oracle Unbreakable Enterprise Kernel (UEK): ACFS is (for the moment) not supported on it.
  • Preferably go for the Red Hat compatible kernel of Oracle Linux (2.6.18-238.el5); otherwise you will get:
     ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.32-100.26.2.el5' 
  • Oracle 11gR2 Database is installed and running a database (DB11G is my Oracle instance)
    Even better, Active/Passive cluster failover configuration is achieved.
  • If you are not on a Linux box (ACFS also works on Solaris, AIX, HP-UX ...), the Unix commands may differ, but the idea remains the same.
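A quick way to check the running kernel before starting is a small sketch like the following; the `is_uek_kernel` helper is hypothetical (not an Oracle tool) and simply matches the kernel version strings mentioned above:

```shell
# Hypothetical helper: flag kernel versions on which ACFS/ADVM
# is not supported (UEK kernels, at the time of writing).
is_uek_kernel() {
  case "$1" in
    *uek*|2.6.32-100*) return 0 ;;  # known UEK version patterns
    *)                 return 1 ;;
  esac
}

if is_uek_kernel "$(uname -r)"; then
  echo "ADVM/ACFS is not supported on this kernel: $(uname -r)"
fi
```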
      

Steps

  1. Create a new dedicated disk group, with its associated failure groups.
    An existing disk group can still be used but, to keep things clear, I would advise keeping the ACFS disk group separate from the DATA and RECO disk groups.
    In this example, however, I do use the same disk group (DG_ASM) for ACFS, DATA and RECO.
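    For those who prefer a dedicated disk group, a creation sketch could look like this. The disk group name, disk paths and failure group names are purely illustrative; note that the compatible.advm attribute must be set for the disk group to host ADVM volumes.

```sql
-- Illustrative only: names and disk paths are assumptions.
-- Run as SYSASM against the ASM instance.
CREATE DISKGROUP DG_ACFS NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/oracleasm/disks/ACFS1'
  FAILGROUP fg2 DISK '/dev/oracleasm/disks/ACFS2'
  ATTRIBUTE 'compatible.asm'  = '11.2',
            'compatible.advm' = '11.2';
```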
       
  2. Set the Oracle Env to grid
    . oraenv
    +ASM
  3. Create a volume inside this disk group.
    The following command creates a new volume – named volacfs – in the DG_ASM disk group.
    asmcmd volcreate -G DG_ASM -s 256M volacfs
  4. Determine the device name associated to this volume
    asmcmd volinfo -G DG_ASM volacfs
             Diskgroup Name: DG_ASM
          
                    Volume Name: VOLACFS
                    Volume Device: /dev/asm/volacfs-149
                    State: ENABLED
                    Size (MB): 256
                    Resize Unit (MB): 256
                    Redundancy: UNPROT
                    Stripe Columns: 4
                    Stripe Width (K): 128
                    Usage:
                    Mountpath:
  5. Create a filesystem on top of this volume
     /sbin/mkfs -t acfs /dev/asm/volacfs-149

    Now, volinfo shows the Usage of this volume: ACFS
     
    asmcmd volinfo -G DG_ASM volacfs
             Diskgroup Name: DG_ASM
          
                    Volume Name: VOLACFS
                    Volume Device: /dev/asm/volacfs-149
                    State: ENABLED
                    Size (MB): 256
                    Resize Unit (MB): 256
                    Redundancy: UNPROT
                    Stripe Columns: 4
                    Stripe Width (K): 128
                    Usage: ACFS
                    Mountpath:
                   
  6. Register this filesystem with the clusterware
    /sbin/acfsutil registry -a /dev/asm/volacfs-149 /acfs
           acfsutil registry: mount point /acfs successfully added to Oracle Registry  
    Now, volinfo shows the Mountpath of this volume: /acfs

    asmcmd volinfo -G DG_ASM volacfs
           Diskgroup Name: DG_ASM
           
                  Volume Name: VOLACFS
                  Volume Device: /dev/asm/volacfs-149
                  State: ENABLED
                  Size (MB): 256
                  Resize Unit (MB): 256
                  Redundancy: UNPROT
                  Stripe Columns: 4
                  Stripe Width (K): 128
                  Usage: ACFS
                  Mountpath: /acfs

  7. As root, mount the filesystem
    mkdir /acfs
    mount -t acfs /dev/asm/volacfs-149 /acfs  
    chown -R oracle:dba /acfs
           chown: changing ownership of `/acfs/lost+found': Permission denied
     The warning about lost+found is expected and can safely be ignored.
  8. And now, use it
    mkdir -p /acfs/diag
    mkdir -p /acfs/audit/DB11G/adump
      
    SQL> alter system set diagnostic_dest='/acfs' scope=spfile;
    SQL> alter system set audit_file_dest='/acfs/audit/DB11G/adump' scope=spfile;
    SQL> startup force

From now onward, all the diagnostic files and audit files will be stored on the new acfs volume and will be shared between all the nodes of the cluster.

Consequently, a failover to another node will be seamless, from the diagnostic and audit perspective.
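To confirm the change from any node of the cluster, a quick check might be:

```sql
-- Both parameters should now point to the shared ACFS mount
SQL> show parameter diagnostic_dest
SQL> show parameter audit_file_dest
```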
  
 

Troubleshooting

Although the ACFS drivers (ACFS, OKS and ADVM) are installed and started during the execution of the Grid Infrastructure root.sh, they have to be reloaded manually after a server reboot. Afterward, the volume must be re-enabled and remounted.

I hope this will be fixed in a future release.

In the meantime, things need to be done manually.

As root, run the following command:

$GRID_HOME/bin/acfsload start -s

Then, as grid/oracle user,
Check the driver status

$GRID_HOME/bin/acfsdriverstate loaded
       ACFS-9203: true

Enable the volume

asmcmd volinfo -G DG_ASM volacfs
       Diskgroup Name: DG_ASM

              Volume Name: VOLACFS
              Volume Device: /dev/asm/volacfs-149
              State: DISABLED
              Size (MB): 256
              Resize Unit (MB): 256
              Redundancy: UNPROT
              Stripe Columns: 4
              Stripe Width (K): 128
              Usage: ACFS
              Mountpath: /acfs

asmcmd volenable -G DG_ASM volacfs

asmcmd volinfo -G DG_ASM volacfs
       Diskgroup Name: DG_ASM

              Volume Name: VOLACFS
              Volume Device: /dev/asm/volacfs-149
              State: ENABLED
              Size (MB): 256
              Resize Unit (MB): 256
              Redundancy: UNPROT
              Stripe Columns: 4
              Stripe Width (K): 128
              Usage: ACFS
              Mountpath: /acfs

 

And finally, as root,
Mount the volume (the -o all option mounts every ACFS filesystem registered in the ACFS registry),

/bin/mount -t acfs -o all none none

/bin/mount
       /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
       proc on /proc type proc (rw)
       sysfs on /sys type sysfs (rw)
       devpts on /dev/pts type devpts (rw,gid=5,mode=620)
       /dev/sda1 on /boot type ext3 (rw)
       tmpfs on /dev/shm type tmpfs (rw)
       none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
       sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
       oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
       /dev/asm/volacfs-149 on /acfs type acfs (rw)
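Until this is fixed, the manual sequence above can be scripted at boot time, for instance from /etc/rc.local. This is only a sketch: GRID_HOME, the grid user and the disk group name are assumptions to adapt to your environment.

```shell
#!/bin/sh
# Sketch for /etc/rc.local - bring ACFS back up after a reboot.
GRID_HOME=/u01/app/11.2.0/grid   # assumption: adapt to your install

# 1. Load the ACFS, OKS and ADVM kernel drivers (as root)
$GRID_HOME/bin/acfsload start -s

# 2. Re-enable the volume as the Grid Infrastructure owner
su - grid -c "$GRID_HOME/bin/asmcmd volenable -G DG_ASM volacfs"

# 3. Mount every ACFS filesystem registered in the ACFS registry
/bin/mount -t acfs -o all none none
```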


Conclusion

ACFS brings interesting features that ease the management of cluster-wide files like diagnostic data, audit files and the various log files.

It can also be used as a location for the High Availability scripts of the Active/Passive cluster configuration.

A great enhancement, however, would be to have the ACFS services started automatically at server startup. Having to bring them up manually does not make sense in a High Availability context.

 

References

Relevant information can be found in the following bible:

   Oracle Database Storage Administrator's Guide  11g Release 2 (11.2) 
        Chapter 5  : Introduction to Oracle ACFS
        Chapter 13 : Oracle ACFS  Command-Line Tools

 

 

Gilles Haro 
Technical Expert - Core Technology, Oracle Consulting  
 
