Tuesday Sep 11, 2012

Oracle Solaris 8 P2V with Oracle database 10.2 and ASM

Background information

In this document I will demonstrate the following scenario:

Migration of a physical Solaris 8 system running Oracle Database 10.2.0.5, with an ASM file system located on SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain.


In the first example we will preserve the host information.

In the second example we will modify the host name.

Executive summary

In this document I will demonstrate how we leveraged the Solaris 8 P2V tool to migrate a physical Solaris 8 system running an Oracle database with an ASM file system into a Solaris 8 branded zone.

The ASM file system is located on a LUN on SAN storage connected via an FC HBA.

During the migration we used the same LUN on the source and target servers in order to avoid data migration.

The P2V tool successfully migrated the Solaris 8 physical system into the Solaris 8 branded Zone and the Zone was able to access the ASM file system.

Architecture layout



Source system

Hardware details:
Sun Fire V440 server with 4 UltraSPARC IIIi CPUs at 1593 MHz and 8 GB of RAM

Operating system: Solaris 8 2/04 + latest recommended patch set


Target system


Oracle's SPARC T4-1 server with a single 8-core, 2.85 GHz SPARC T4 processor and 32 GB of RAM

Install Solaris 11


Setting up Control Domain

primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# ldm add-vds primary-vds0 primary
primary# ifconfig -a
net0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.162.49.45 netmask ffffff00 broadcast 10.162.49.255
ether 0:14:4f:ab:e3:7a
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary
primary# ldm list-services primary
VCC
NAME LDOM PORT-RANGE
primary-vcc0 primary 5000-5100

VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
primary-vsw0 primary 00:14:4f:fb:44:4d net0 0 switch@0 1 1 1500 on

VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary

primary# ldm set-mau 1 primary
primary# ldm set-vcpu 16 primary
primary# ldm start-reconf primary
primary# ldm set-memory 8G primary
primary# ldm add-config initial
primary# ldm list-config

factory-default
initial [current]
primary-with-clients

primary# shutdown -y -g0 -i6

Enable the virtual console service

primary# svcadm enable vntsd

primary# svcs vntsd

STATE STIME FMRI

online 15:55:10 svc:/ldoms/vntsd:default

Setting Up Guest Domain

primary# ldm add-domain ldg1
primary# ldm add-vcpu 32 ldg1
primary# ldm add-memory 8G ldg1
primary# ldm add-vnet vnet0 primary-vsw0 ldg1

primary# ldm add-vnet vnet1 primary-vsw0 ldg1

primary# ldm add-vdsdev /dev/dsk/c3t1d0s2 vol1@primary-vds0

primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1

primary# ldm set-var auto-boot\?=true ldg1

primary# ldm set-var boot-device=vdisk1 ldg1

primary# ldm bind-domain ldg1

primary# ldm start-domain ldg1
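
Before connecting to the console, the domain state can be confirmed with ldm list (illustrative output; the console port 5000 comes from the primary-vcc0 port range configured earlier):

primary# ldm list ldg1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg1             active     ...     5000    32    8G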

primary# telnet localhost 5000

{0} ok boot net - install

Install Solaris 10 Update 10 (Solaris 10 08/11).


Verify that all the Solaris services on the guest domain are up and running (svcs -xv prints nothing when every service is healthy)


guest # svcs -xv

Oracle Solaris Legacy Containers install

The Oracle Solaris Legacy Containers download includes two versions of the product:


- Oracle Solaris Legacy Containers 1.0.1
  - For Oracle Solaris 10 10/08 or later

- Oracle Solaris Legacy Containers 1.0
  - For Oracle Solaris 10 08/07
  - For Oracle Solaris 10 05/08

Both product versions contain identical features. The 1.0.1 product depends on Solaris packages introduced in Solaris 10 10/08. The 1.0 product delivers these packages to pre-10/08 versions of Solaris.


We will use the Oracle Solaris Legacy Containers 1.0.1, since our Solaris 10 version is 08/11.

To install the Oracle Solaris Legacy Containers 1.0.1 software:

1. Download the Oracle Solaris Legacy Containers software bundle
from http://www.oracle.com.

2. Unarchive and install 1.0.1 software package:
guest # unzip solarislegacycontainers-solaris10-sparc.zip
guest # cd solarislegacycontainers/1.0.1/Product
guest # pkgadd -d `pwd` SUNWs8brandk
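
A quick way to confirm the package installed cleanly (an illustrative check, not from the original walkthrough):

guest # pkginfo -l SUNWs8brandk | grep STATUS
   STATUS:      completely installed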

Starting the migration

On the source system


sol8# su - oracle


Shut down the Oracle database and the ASM instance
sol8$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:19:48 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown immediate
sol8$ export ORACLE_SID=+ASM


sol8$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:21:38 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown
ASM diskgroups dismounted
ASM instance shutdown

Stop the listener


sol8 $ lsnrctl stop

LSNRCTL for Solaris: Version 10.2.0.5.0 - Production on 26-AUG-2012 13:23:49

Copyright (c) 1991, 2010, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully

Creating the archive

sol8 # flarcreate -S -n s8-system /export/home/s8-system.flar
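
The archive metadata can be verified with flar info before moving it (illustrative, output abbreviated):

sol8 # flar info /export/home/s8-system.flar
archive_id=...
content_name=s8-system
...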

Copy the archive to the target guest domain
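
Any transfer method works. For example, over ssh, assuming an scp client is available on the Solaris 8 host (the target hostname "guest" below is a placeholder):

sol8 # scp /export/home/s8-system.flar guest:/export/home/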


On the target system

Move and connect the SAN storage to the target system

On the control domain add the SAN storage LUN into the guest domain

primary # ldm add-vdsdev /dev/dsk/c5t40d0s6 oradata@primary-vds0
primary # ldm add-vdisk oradata oradata@primary-vds0 ldg1

On the guest domain verify that you can access the LUN

guest# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c0d0
/virtual-devices@100/channel-devices@200/disk@0
1. c0d2
/virtual-devices@100/channel-devices@200/disk@2


Set up the Oracle Solaris 8 branded zone on the guest domain


The Oracle Solaris 8 branded zone s8-zone is configured with the zonecfg command.

Here is the output of the zonecfg -z s8-zone info command after configuration is completed:


guest# zonecfg -z s8-zone info

zonename: s8-zone

zonepath: /zones/s8-zone

brand: solaris8

autoboot: true

bootargs:

pool:

limitpriv: default,proc_priocntl,proc_lock_memory

scheduling-class: FSS

ip-type: exclusive

hostid:

net:

        address not specified

        physical: vnet1

        defrouter not specified

device

        match: /dev/rdsk/c0d2s0

attr:

        name: machine

        type: string

        value: sun4u
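
For reference, a zonecfg session along these lines produces the configuration above (a sketch; SUNWsolaris8 is the template delivered by the Legacy Containers package):

guest# zonecfg -z s8-zone
zonecfg:s8-zone> create -t SUNWsolaris8
zonecfg:s8-zone> set zonepath=/zones/s8-zone
zonecfg:s8-zone> set autoboot=true
zonecfg:s8-zone> set ip-type=exclusive
zonecfg:s8-zone> set scheduling-class=FSS
zonecfg:s8-zone> set limitpriv=default,proc_priocntl,proc_lock_memory
zonecfg:s8-zone> add net
zonecfg:s8-zone:net> set physical=vnet1
zonecfg:s8-zone:net> end
zonecfg:s8-zone> add device
zonecfg:s8-zone:device> set match=/dev/rdsk/c0d2s0
zonecfg:s8-zone:device> end
zonecfg:s8-zone> add attr
zonecfg:s8-zone:attr> set name=machine
zonecfg:s8-zone:attr> set type=string
zonecfg:s8-zone:attr> set value=sun4u
zonecfg:s8-zone:attr> end
zonecfg:s8-zone> commit
zonecfg:s8-zone> exit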

Install the Solaris 8 zone

guest# zoneadm -z s8-zone install -p -a /export/home/s8-system.flar

Boot the Solaris 8 zone

guest# zoneadm -z s8-zone boot

guest # zlogin -C s8-zone

sol8_zone# su - oracle

Modify the ASM disk ownership 

sol8_zone# chown oracle:dba /dev/rdsk/c0d2s0
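
Verify the new ownership (illustrative; -L follows the /dev symlink to the underlying device node):

sol8_zone# ls -lL /dev/rdsk/c0d2s0
crw-r-----   1 oracle   dba    ...  /dev/rdsk/c0d2s0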

Start the listener


sol8_zone$ lsnrctl start

Start the ASM instance

sol8_zone$ export ORACLE_SID=+ASM
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:36:44 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 130023424 bytes
Fixed Size 2050360 bytes
Variable Size 102807240 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
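
Before quitting, the mounted diskgroups can also be checked with a query such as this (the diskgroup name DATA below is a placeholder; the original diskgroup name is not shown in this walkthrough):

SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED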

Start the database

sol8_zone$ export ORACLE_SID=ORA10
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:37:13 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size 2052448 bytes
Variable Size 385879712 bytes
Database Buffers 1207959552 bytes
Redo Buffers 14721024 bytes
Database mounted.
Database opened.
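
As a final sanity check, the instance status can be queried (illustrative, not part of the original session):

SQL> select instance_name, status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
ORA10            OPEN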

Second example


In this example we will modify the host name. The -u install option sys-unconfigs the archived image, so the zone can be given a new identity on first boot.

guest # zoneadm -z s8-zone install -u -a /net/server/s8_image.flar

Boot the Zone

guest # zoneadm -z s8-zone boot

Configure the zone with a new IP address and host name; the system identification prompts appear on the zone console during the first boot.

guest # zlogin -C s8-zone

Modify the ASM disk ownership

sol8_zone # chown oracle:dba /dev/rdsk/c0d2s0

sol8_zone # cd $ORACLE_HOME/bin

Reconfigure the Oracle CSS (Cluster Synchronization Services) local configuration that ASM depends on

sol8_zone # ./localconfig delete

Aug 27 05:17:11 s8-zone last message repeated 3 times
Aug 27 05:17:28 s8-zone root: Oracle CSSD being stopped
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
sol8_zone # ./localconfig add

Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'other'..
Operation successful.
Configuration for local CSS has been initialized

sol8_zone # su - oracle

Start the listener + ASM + Oracle database
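
The sequence is the same as in the first example:

sol8_zone $ lsnrctl start
sol8_zone $ export ORACLE_SID=+ASM
sol8_zone $ sqlplus / as sysdba
SQL> startup
SQL> quit
sol8_zone $ export ORACLE_SID=ORA10
sol8_zone $ sqlplus / as sysdba
SQL> startup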

Sunday Aug 02, 2009

Logical Domains Physical-to-Virtual (P2V) Migration


The Logical Domains P2V migration tool automatically converts an existing physical system
to a virtual system that runs in a logical domain on a chip multithreading (CMT) system.
The source system can be any of the following:



  •  Any sun4u SPARC system that runs at least the Solaris 8 Operating System
  •  Any sun4v system that runs the Solaris 10 OS, but does not run in a logical domain


In this entry I will demonstrate how to use the Logical Domains P2V migration tool to migrate Solaris running on a V440 server (physical)
into a guest domain running on a T5220 server (virtual).
Architecture layout:



Before you can run the Logical Domains P2V Migration Tool, ensure that the following are true:



  •  Target system runs at least Logical Domains 1.1 on one of the following:

       -  Solaris 10 10/08 OS

       -  Solaris 10 5/08 OS with the appropriate Logical Domains 1.1 patches

  •  Guest domains run at least the Solaris 10 5/08 OS

  •  Source system runs at least the Solaris 8 OS


In addition to these prerequisites, configure an NFS file system to be shared by both the source
and target systems. This file system should be writable by root. However, if a shared file system
is not available, use a local file system that is large enough to hold a file system dump of the
source system on both the source and target systems.

Limitations

Version 1.0 of the Logical Domains P2V Migration Tool has the following limitations:



  •      Only UFS file systems are supported.

  •      Each guest domain can have only a single virtual switch and virtual disk.

  •      The flash archiving method silently ignores excluded file systems.

The conversion from a physical system to a virtual system is performed in the following phases:


  • Collection phase. Runs on the physical source system. ldmp2v collect creates a file system image of the source system based on the configuration information that it collects about the source system.

  • Preparation phase. Runs on the control domain of the target system. ldmp2v prepare creates the logical domain on the target system based on the configuration information collected in the collection phase.

  • Conversion phase. Runs on the control domain of the target system. In the conversion phase, the created logical domain is converted into a logical domain that runs the Solaris 10 OS by using the standard Solaris upgrade process.
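
In command form, the three phases map to three ldmp2v subcommands; the exact arguments used in this walkthrough appear in the sections below:

source # ldmp2v collect -d <data-dir>
target # ldmp2v prepare -d <data-dir> <domain-name>
target # ldmp2v convert -d <data-dir> <domain-name>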


    Collection phase
    On the target machine T5220

    Prepare the NFS server that will hold the file system dump of the source system; the share must be accessible from both the source and target systems.
    In this use case I will use the target machine (T5220) as the NFS server
    # mkdir /p2v
    # share -F nfs -o root=v440 /p2v

    Verify the NFS share
    # share
    -  /p2v  root=v440  ""
    Install the Logical Domains P2V Migration Tool.
    Go to the Logical Domains download page at http://www.sun.com/servers/coolthreads/ldoms/get.jsp.
    Download the P2V software package, SUNWldmp2v.
    Use the pkgadd command to install the SUNWldmp2v package:
    # pkgadd -d . SUNWldmp2v

    Create the /etc/ldmp2v.conf file; the ldmp2v prepare command will use these defaults (virtual switch, virtual disk service, virtual console concentrator, and disk backend settings) later:
    # cat /etc/ldmp2v.conf

    VSW="primary-vsw0"
    VDS="primary-vds0"
    VCC="primary-vcc0"
    BACKEND_PREFIX="/ldoms/disks/"
    BACKEND_TYPE="file"
    BACKEND_SPARSE="no"
    BOOT_TIMEOUT=10
    On the source machine V440
    Install the Logical Domains P2V Migration Tool:
    # pkgadd -d . SUNWldmp2v
    Mount the NFS share
    # mkdir /p2v
    # mount t5220:/p2v /p2v
    Run the collection command
    # /usr/sbin/ldmp2v collect -d /p2v/v440
    Collecting system configuration ...
    Archiving file systems ...
    DUMP: Date of this level 0 dump: August 2, 2009 4:11:56 PM IDT
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/rdsk/c1t0d0s0 (mdev440-2:/) to /p2v/v440/ufsdump.0.
    The collection phase took 5 minutes and produced a 4.6 GB dump file.
    Preparation phase
    On the target machine T5220
    Run the preparation command.
    We will keep the source machine's (V440) MAC address:
    # /usr/sbin/ldmp2v prepare -d /p2v/v440 -o keep-mac v440
     Creating vdisks ...
    Creating file systems ...
    Populating file systems ...
    The preparation phase took 26 minutes.


    We can see that for each physical CPU on the V440 server the LDom P2V tool creates 4 VCPUs on the guest domain, and that it assigns the same amount of memory as the physical system has:
    # ldm list -l v440


    NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
    v440             inactive   ------           4     8G

    CONTROL
        failure-policy=ignore

    DEPENDENCY
        master=

    NETWORK
        NAME             SERVICE                     DEVICE     MAC               MODE   PVID VID                  MTU
        vnet0            primary-vsw0                           00:03:ba:c4:d2:9d        1

    DISK
        NAME             VOLUME                      TOUT DEVICE  SERVER         MPGROUP
        disk0            v440-vol0@primary-vds0

    Conversion Phase
    Before starting the conversion phase, shut down the source server (V440) in order to avoid an IP address conflict.
    On the V440 server
    # poweroff
    On the jumpstart server

    You can use the Custom JumpStart feature to perform a completely hands-off conversion.
    This feature requires that you create and configure the appropriate sysidcfg and profile files for the client on the JumpStart server.
    The profile should consist of the following lines:

    install_type    upgrade
    root_device     c0d0s0



    The sysidcfg file:

    name_service=NONE
    root_password=uQkoXlMLCsZhI
    system_locale=C
    timeserver=localhost
    timezone=Europe/Amsterdam
    terminal=vt100
    security_policy=NONE
    nfs4_domain=dynamic
    network_interface=PRIMARY {netmask=255.255.255.192 default_route=none protocol_ipv6=no}
    On the target server T5220
    # ldmp2v convert -j -n vnet0 -d /p2v/v440 v440
    Testing original system status ...
    LDom v440 started
    Waiting for Solaris to come up ...
    Using Custom JumpStart
    Trying 0.0.0.0...
    Connected to 0.
    Escape character is '^]'.

    Connecting to console "v440" in group "v440" ....
    Press ~? for control options ..
    SunOS Release 5.10 Version Generic_139555-08 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.



    For information about the P2V migration tool, see the ldmp2v(1M) man page.




