Tuesday Mar 04, 2014

Upgrade to Grid Infrastructure - but OCR and VOTING on RAW?

Just received this question from a colleague the other day:

"The current customer environment is 10.2.0.5 on Linux with a 2 node RAC cluster having OCR and Voting Disks on RAW devices. Customer is concerned about the possibility of upgrading to 11gR2 Grid infrastructure first before they could upgrade to 12c Grid infrastructure."

Now the answer is written down in MOS Note 1572925.1:
How to Upgrade to 12c Grid Infrastructure if OCR or Voting File is on Raw/Block Devices

Basically the MOS Note says:
You will have to relocate your OCR/Voting to a supported device BEFORE you can upgrade to Oracle Grid Infrastructure 12c. No way out. A bit more clarification (thanks to Markus Michalewicz):

The assumption of the note (which you might want to state) is that the customer has pre-12c GI with OCR / Voting Disk on RAW.

In this case, Option A is always an option.

For Option B, however, you need to distinguish. So the note should say: 

  • Option B: Customer is on 11g Rel. 2 still using RAW devices 
    • Then move the OCR and Voting Disks to ASM (see the command sketch below this list)
  • Option C: Customer is on pre-11g Rel. 2 and does not want to introduce a CFS (a bad idea anyway) or use NFS
    • Then upgrade to 11g Rel. 2 first and move the Clusterware files into ASM as mentioned under Option B.
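
For the Option B case the relocation itself is not a big deal. Here's a minimal sketch - assuming an ASM disk group named +DATA already exists and the OCR and Voting Disks currently live on the hypothetical raw device /dev/raw/raw1 - run as root from the 11g Rel. 2 Grid Infrastructure home:

# ocrcheck
# ocrconfig -add +DATA
# ocrconfig -delete /dev/raw/raw1
# crsctl replace votedisk +DATA

ocrcheck shows where the OCR currently resides, ocrconfig -add/-delete move it into ASM, and crsctl replace votedisk relocates the voting files in one step (crsctl query css votedisk verifies the result).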

Unfortunately this is another example to add to my collection of "What happens if you stay too long on older releases?". In this case it simply increases complexity and drives up costs as well. And please no complaints: Oracle Database 10.2 went out of PREMIER SUPPORT in July 2010 - 3.5 years ago!!!

-Mike

Friday Dec 21, 2012

Creating ASM for test purposes in the file system

First of all, I'm back after pausing for a while - sorry for not updating the blog in the past weeks ... and you won't see many updates in the following weeks as it'll be holiday season (and we Germans have sooooo many public holidays) :-)

Anyway, back to tech topics. Today I want to test Oracle Restart upgrades. Oracle Restart is internally called SIHA (Single Instance High Availability), which explains the topic a bit more. Basically it means having your database reside in ASM and letting Oracle Clusterware take care of it, even though you don't have a cluster. Not a bad idea, as this can be very helpful in real world environments. But I did realize that the entire process is not documented in full detail. So I thought I should give this a try.

The first challenge I face: I have just one disk in my machine - so I'll have to tweak ASM a bit to make it work with files on the file system.

Creating two empty strawman files in the file system with dd is not a big deal:
$ dd if=/dev/zero of=/oradata/ASM/dg_DATA bs=8192 count=1000000 oflag=direct
1000000+0 records in
1000000+0 records out
8192000000 bytes (8.2 GB) copied, 336.371 seconds, 24.4 MB/s
[V112] oracle@localhost:/oradata
$ dd if=/dev/zero of=/oradata/ASM/dg_BCK bs=8192 count=500000 oflag=direct
500000+0 records in
500000+0 records out
4096000000 bytes (4.1 GB) copied, 246.021 seconds, 16.6 MB/s

But the next step is to start the cssd (Cluster Synchronization Services Daemon) in my Oracle Database 10.2.0.5 installation from within the $ORACLE_HOME/bin directory:
[root@localhost bin]# ./localconfig add
/etc/oracle does not exist. Creating it now.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized
Adding to inittab
Startup will be queued to init within 30 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        localhost
CSS is active on all nodes.
Oracle CSS service is installed and running under init(1M)

Without CSS running there's no chance for ASM to start up.
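
If you want to double-check that CSS is really up before continuing (a sanity check only, not a required step), crsctl from the same directory should report something like this:

$ ./crsctl check css
CSS appears healthy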

Now my attempts to simply use DBCA (Database Configuration Assistant) to create the ASM instance on these two strawman files did not work, as the DBCA didn't want to recognize the "disks". So back to the good old command line. By the way, there's a MOS Note out there which may be helpful as well (but didn't work in my case):
How To Create ASM Diskgroups using NFS/NAS Files? (Doc ID 731775.1)

  1. Create a password file for the ASM instance in $ORACLE_HOME/dbs (see the sketch after this list)
  2. Create a fresh init.ora for ASM within the same directory having the following parameters set:
    _asm_allow_only_raw_disks='FALSE'
    asm_diskstring='/oradata/ASM/dg*'
    asm_power_limit=4
    instance_type=asm
  3. With these parameters set I could bring the instance into MOUNT state, ready to create the two disk groups after setting ORACLE_SID=+ASM in the environment:
    SYS:+ASM> create diskgroup DATA external redundancy disk '/oradata/ASM/dg_DATA';
    Diskgroup created.
    SYS:+ASM>  create diskgroup BCK  external redundancy disk '/oradata/ASM/dg_BCK';
    Diskgroup created.
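
In case it helps, here is roughly how steps 1 and 3 look on the command line - just a sketch: the password is an example only, and I use STARTUP NOMOUNT as no disk group exists at this point yet:

$ export ORACLE_SID=+ASM
$ orapwd file=$ORACLE_HOME/dbs/orapw+ASM password=oracle
$ sqlplus / as sysdba
SYS:+ASM> startup nomount pfile=$ORACLE_HOME/dbs/init+ASM.ora

The two CREATE DISKGROUP statements from step 3 then follow directly at this prompt.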

After shutting it down first, starting up ASM now worked well - and a check with SELECT path FROM V$ASM_DISK did show me my disks.
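
For reference, the check should list the two strawman files, roughly like this:

SYS:+ASM> select path from v$asm_disk;

PATH
--------------------
/oradata/ASM/dg_DATA
/oradata/ASM/dg_BCK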

Next step - simply - is to create a database with DBCA inside of ASM. So the first part of my test is complete.
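
And if you'd rather script that part as well, DBCA's silent mode might do it, too. An unverified sketch with example names only - please check dbca -help for the exact options available in your release:

$ dbca -silent -createDatabase \
     -templateName General_Purpose.dbc \
     -gdbName TEST102 -sid TEST102 \
     -sysPassword oracle -systemPassword oracle \
     -storageType ASM -asmSysPassword oracle \
     -diskGroupName DATA -recoveryGroupName BCK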

... to be continued soon ...

About

Mike Dietrich - Oracle
Senior Principal Technologist - Database Upgrade Development Group - Oracle Corporation

Based near Munich, Germany, and spending plenty of time in airplanes to run upgrade workshops or work onsite with reference customers. Acting as the interlink between customers and Upgrade Development.

Contact me either via XING or LinkedIn
