Tuesday Sep 11, 2012

Oracle Solaris 8 P2V with Oracle database 10.2 and ASM

Background information

In this document I will demonstrate the following scenario:

Migration of a physical Solaris 8 system running Oracle Database 10.2.0.5 with an ASM file system located on SAN storage into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain.


In the first example we will preserve the host information.

In the second example we will modify the host name.

Executive summary

In this document I will demonstrate how we leveraged the Solaris 8 P2V tool to migrate a physical Solaris 8 system running an Oracle database with an ASM file system into a Solaris 8 branded zone.

The ASM file system is located on a LUN in SAN storage connected via an FC HBA.

During the migration we used the same LUN on the source and target servers in order to avoid data migration.

The P2V tool successfully migrated the Solaris 8 physical system into the Solaris 8 branded Zone and the Zone was able to access the ASM file system.

Architecture layout



Source system

Hardware details:
Sun Fire V440 server with four 1593 MHz UltraSPARC IIIi CPUs and 8 GB of RAM

Operating system: Solaris 8 2/04 + latest recommended patch set


Target system


Oracle's SPARC T4-1 server with a single 8-core, 2.85 GHz SPARC T4 processor and 32 GB of RAM

Install Solaris 11


Setting up Control Domain

primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# ldm add-vds primary-vds0 primary
primary# ifconfig -a
net0: flags=1000843 mtu 1500 index 2
inet 10.162.49.45 netmask ffffff00 broadcast 10.162.49.255
ether 0:14:4f:ab:e3:7a
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary
primary# ldm list-services primary
VCC
NAME LDOM PORT-RANGE
primary-vcc0 primary 5000-5100

VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
primary-vsw0 primary 00:14:4f:fb:44:4d net0 0 switch@0 1 1 1500 on

VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary

primary# ldm set-mau 1 primary
primary# ldm set-vcpu 16 primary
primary# ldm start-reconf primary
primary# ldm set-memory 8G primary
primary# ldm add-config initial
primary# ldm list-config

factory-default
initial [current]
primary-with-clients

primary# shutdown -y -g0 -i6

Enable the virtual console service

primary# svcadm enable vntsd

primary# svcs vntsd

STATE STIME FMRI

online 15:55:10 svc:/ldoms/vntsd:default

Setting Up Guest Domain

primary# ldm add-domain ldg1
primary# ldm add-vcpu 32 ldg1
primary# ldm add-memory 8G ldg1
primary# ldm add-vnet vnet0 primary-vsw0 ldg1

primary# ldm add-vnet vnet1 primary-vsw0 ldg1

primary# ldm add-vdsdev /dev/dsk/c3t1d0s2 vol1@primary-vds0

primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1

primary# ldm set-var auto-boot\?=true ldg1

primary# ldm set-var boot-device=vdisk1 ldg1

primary# ldm bind-domain ldg1

primary# ldm start-domain ldg1

primary# telnet localhost 5000

{0} ok boot net - install

Install Solaris 10 Update 10 (Solaris 10 08/11)


Verify that all the Solaris services on the guest LDom are up and running


guest # svcs -xv

Oracle Solaris Legacy Containers install

The Oracle Solaris Legacy Containers download includes two versions of the product:

- Oracle Solaris Legacy Containers 1.0.1
  - For Oracle Solaris 10 10/08 or later
- Oracle Solaris Legacy Containers 1.0
  - For Oracle Solaris 10 08/07
  - For Oracle Solaris 10 05/08

Both product versions contain identical features. The 1.0.1 product depends on Solaris packages introduced in Solaris 10 10/08. The 1.0 product delivers these packages to pre-10/08 versions of Solaris.

We will use Oracle Solaris Legacy Containers 1.0.1 since our Solaris 10 version is 08/11.

To install the Oracle Solaris Legacy Containers 1.0.1 software:

1. Download the Oracle Solaris Legacy Containers software bundle
from http://www.oracle.com.

2. Unarchive and install 1.0.1 software package:
guest # unzip solarislegacycontainers-solaris10-sparc.zip
guest # cd solarislegacycontainers/1.0.1/Product
guest # pkgadd -d `pwd` SUNWs8brandk

Starting the migration

On the source system


sol8# su - oracle


Shutdown the Oracle database and ASM instance
sol8$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:19:48 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown immediate
sol8$ export ORACLE_SID=+ASM


sol8$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:21:38 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown
ASM diskgroups dismounted
ASM instance shutdown

Stop the listener


sol8 $ lsnrctl stop

LSNRCTL for Solaris: Version 10.2.0.5.0 - Production on 26-AUG-2012 13:23:49

Copyright (c) 1991, 2010, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully

Creating the archive

The -S option skips the disk-space check and -n sets the name saved in the archive:

sol8 # flarcreate -S -n s8-system /export/home/s8-system.flar

Copy the archive to the target guest domain
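
For example, the archive can be shared from the guest domain over NFS and copied from the source system (a sketch; the share path and mount point are illustrative, and the sol8/guest prompts are the ones used in this document):

guest # share -F nfs -o root=sol8 /export/home
sol8 # mount guest:/export/home /mnt
sol8 # cp /export/home/s8-system.flar /mnt/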


On the target system

Move and connect the SAN storage to the target system

On the control domain, add the SAN storage LUN to the guest domain

primary # ldm add-vdsdev /dev/dsk/c5t40d0s6 oradata@primary-vds0
primary # ldm add-vdisk oradata oradata@primary-vds0 ldg1

On the guest domain verify that you can access the LUN

guest# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c0d0
/virtual-devices@100/channel-devices@200/disk@0
1. c0d2
/virtual-devices@100/channel-devices@200/disk@2


Set up the Oracle Solaris 8 branded zone on the guest domain


The Oracle Solaris 8 branded zone s8-zone is configured with the zonecfg command.

Here is the output of the zonecfg -z s8-zone info command after the configuration is completed:


guest# zonecfg -z s8-zone info

zonename: s8-zone

zonepath: /zones/s8-zone

brand: solaris8

autoboot: true

bootargs:

pool:

limitpriv: default,proc_priocntl,proc_lock_memory

scheduling-class: FSS

ip-type: exclusive

hostid:

net:

        address not specified

        physical: vnet1

        defrouter not specified

device

        match: /dev/rdsk/c0d2s0

attr:

        name: machine

        type: string

        value: sun4u
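
For reference, a zonecfg session along these lines would produce the configuration shown above (a sketch; the SUNWsolaris8 template is the one delivered by the SUNWs8brandk package):

guest# zonecfg -z s8-zone
s8-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:s8-zone> create -t SUNWsolaris8
zonecfg:s8-zone> set zonepath=/zones/s8-zone
zonecfg:s8-zone> set autoboot=true
zonecfg:s8-zone> set limitpriv=default,proc_priocntl,proc_lock_memory
zonecfg:s8-zone> set scheduling-class=FSS
zonecfg:s8-zone> set ip-type=exclusive
zonecfg:s8-zone> add net
zonecfg:s8-zone:net> set physical=vnet1
zonecfg:s8-zone:net> end
zonecfg:s8-zone> add device
zonecfg:s8-zone:device> set match=/dev/rdsk/c0d2s0
zonecfg:s8-zone:device> end
zonecfg:s8-zone> add attr
zonecfg:s8-zone:attr> set name=machine
zonecfg:s8-zone:attr> set type=string
zonecfg:s8-zone:attr> set value=sun4u
zonecfg:s8-zone:attr> end
zonecfg:s8-zone> commit
zonecfg:s8-zone> exit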

Install the Solaris 8 zone

The -p option preserves the system identity (host name and network configuration) of the original system:

guest# zoneadm -z s8-zone install -p -a /export/home/s8-system.flar

Boot the Solaris 8 zone

guest# zoneadm -z s8-zone boot

guest # zlogin -C s8-zone

sol8_zone# su - oracle

Modify the ASM disk ownership 

sol8_zone# chown oracle:dba /dev/rdsk/c0d2s0

Start the listener


sol8_zone$ lsnrctl start

Start the ASM instance

sol8_zone$ export ORACLE_SID=+ASM
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:36:44 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 130023424 bytes
Fixed Size 2050360 bytes
Variable Size 102807240 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Start the database

sol8_zone$ export ORACLE_SID=ORA10
sol8_zone$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:37:13 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size 2052448 bytes
Variable Size 385879712 bytes
Database Buffers 1207959552 bytes
Redo Buffers 14721024 bytes
Database mounted.
Database opened.

Second example


In this example we will modify the host name, so the zone is installed with the -u option, which removes the system identity (sys-unconfig style) instead of preserving it:

guest # zoneadm -z s8-zone install -u -a /net/server/s8_image.flar

Boot the Zone

guest # zoneadm -z s8-zone boot

Configure the Zone with a new IP address and a new host name

guest # zlogin -C s8-zone
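
The console session walks through the standard Solaris 8 system identification questions, where the new host name and IP address are entered. As an alternative (an assumption on my part, not shown in this run), the questions can be pre-answered by placing a sysidcfg file under the zone's root before the first boot; the values below are purely illustrative:

guest # cat > /zones/s8-zone/root/etc/sysidcfg <<EOF
system_locale=C
terminal=vt100
name_service=NONE
security_policy=NONE
timezone=US/Pacific
root_password=encrypted-password-hash
network_interface=PRIMARY {hostname=new-hostname ip_address=10.162.49.50 netmask=255.255.255.0 protocol_ipv6=no}
EOF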

Modify the ASM disk ownership

sol8_zone # chown oracle:dba /dev/rdsk/c0d2s0

sol8_zone # cd $ORACLE_HOME/bin

Reconfigure the Oracle CSS local configuration used by ASM

sol8_zone # ./localconfig delete

Aug 27 05:17:11 s8-zone last message repeated 3 times
Aug 27 05:17:28 s8-zone root: Oracle CSSD being stopped
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
sol8_zone # ./localconfig add

Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'other'..
Operation successful.
Configuration for local CSS has been initialized

sol8_zone # su - oracle

Start the listener, the ASM instance, and the Oracle database.
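
The sequence mirrors the first example:

sol8_zone$ lsnrctl start
sol8_zone$ export ORACLE_SID=+ASM
sol8_zone$ sqlplus / as sysdba
SQL> startup
SQL> quit
sol8_zone$ export ORACLE_SID=ORA10
sol8_zone$ sqlplus / as sysdba
SQL> startup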

Thursday Jul 05, 2012

LDoms with Solaris 11


Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software, see the release notes, reference manual, and admin guide here on the Oracle VM for SPARC page.

Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11.

The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain.


In addition, starting with the Oracle VM Server for SPARC 2.2 release, you can migrate a guest domain even if the source and target machines have different processor types.




You can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on the source and target systems must run Solaris 11, and you also need to change the cpu-arch property value on the source system.
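
For example (a sketch; the guest domain name ldg1 comes from the earlier entry, and the domain must be stopped and inactive before the property is changed):

primary# ldm set-domain cpu-arch=generic ldg1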


For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.



Friday Jun 04, 2010

Increasing Application Availability Using Oracle VM Server for SPARC (LDoms) An Oracle Database Example

This white paper by Orgad Kimchi and Roman Ivanov of Oracle's ISV Engineering Team describes how to use the warm migration feature of Oracle VM Server for SPARC to move a running Oracle database application from one server to another without disruption. It includes explanations of the process and instructions for setting up the source and target servers.


For the full white paper see Increasing Application Availability Using Oracle VM Server for SPARC

Monday May 10, 2010

Oracle VM Server for SPARC (LDoms) Dynamic Resource Management


In this entry, I will demonstrate how to use the new Dynamic Resource Management (a.k.a. DRM) feature of Oracle VM Server for SPARC
(previously called Sun Logical Domains or LDoms) version 1.3
for allocating CPU resources based on workload and predefined policies.
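
As a taste of the syntax (a sketch; the utilization thresholds, vCPU bounds, and the domain name ldg1 are illustrative, not taken from the full article):

# ldm add-policy util-lower=25 util-upper=75 vcpu-min=2 vcpu-max=8 name=med-usage ldg1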


For the full article see  Oracle VM Server for SPARC DRM

Sunday Aug 02, 2009

Logical Domains Physical-to-Virtual (P2V) Migration


The Logical Domains P2V migration tool automatically converts an existing physical system
to a virtual system that runs in a logical domain on a chip multithreading (CMT) system.
The source system can be any of the following:



    *  Any sun4u SPARC system that runs at least the Solaris 8 Operating System
    *  Any sun4v system that runs the Solaris 10 OS, but does not run in a logical domain


In this entry I will demonstrate how to use the Logical Domains P2V migration tool to migrate Solaris running on a V440 server (physical)
into a guest domain running on a T5220 server (virtual).
Architecture layout:



Before you can run the Logical Domains P2V Migration Tool, ensure that the following prerequisites are true:



  •  Target system runs at least Logical Domains 1.1 on one of the following:

       •  Solaris 10 10/08 OS

       •  Solaris 10 5/08 OS with the appropriate Logical Domains 1.1 patches

  •  Guest domains run at least the Solaris 10 5/08 OS

  •  Source system runs at least the Solaris 8 OS


In addition to these prerequisites, configure an NFS file system to be shared by both the source
and target systems. This file system should be writable by root. However, if a shared file system
is not available, use a local file system that is large enough to hold a file system dump of the
source system on both the source and target systems.

Limitations

Version 1.0 of the Logical Domains P2V Migration Tool has the following limitations:



  •      Only UFS file systems are supported.

  •      Each guest domain can have only a single virtual switch and virtual disk.

  •      The flash archiving method silently ignores excluded file systems.

The conversion from a physical system to a virtual system is performed in the following phases:


  • Collection phase. Runs on the physical source system. The collect phase creates a file system image of the source system based on the configuration information that it collects about the source system.

  • Preparation phase. Runs on the control domain of the target system. The prepare phase creates the logical domain on the target system based on the configuration information collected in the collect phase.

  • Conversion phase. Runs on the control domain of the target system. In the convert phase, the created logical domain is converted into a logical domain that runs the Solaris 10 OS by using the standard Solaris upgrade process.
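
    In command form, as run in the rest of this entry, the three phases map to:

    v440  # /usr/sbin/ldmp2v collect -d /p2v/v440
    t5220 # /usr/sbin/ldmp2v prepare -d /p2v/v440 -o keep-mac v440
    t5220 # ldmp2v convert -j -n vnet0 -d /p2v/v440 v440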


    Collection phase
    On the target machine T5220

    Prepare the NFS server to hold the file system dump of the source system, accessible to both the source and target systems.
    In this use case I will use the target machine (T5220) as the NFS server.
    # mkdir /p2v
    # share -F nfs -o root=v440 /p2v

    Verify the NFS share
    # share
    -  /p2v  root=v440  ""
    Install the Logical Domains P2V Migration Tool
    Go to the Logical Domains download page at http://www.sun.com/servers/coolthreads/ldoms/get.jsp.
    Download the P2V software package, SUNWldmp2v
    Use the pkgadd command to install the SUNWldmp2v package
    # pkgadd -d . SUNWldmp2v

    Create the /etc/ldmp2v.conf file; we will use it later.
    # cat /etc/ldmp2v.conf

    VSW="primary-vsw0"
    VDS="primary-vds0"
    VCC="primary-vcc0"
    BACKEND_PREFIX="/ldoms/disks/"
    BACKEND_TYPE="file"
    BACKEND_SPARSE="no"
    BOOT_TIMEOUT=10
    On the source machine V440
    Install the Logical Domains P2V Migration Tool
    # pkgadd -d . SUNWldmp2v
    Mount the NFS share
    # mkdir /p2v
    # mount t5220:/p2v /p2v
    Run the collection command
    # /usr/sbin/ldmp2v collect -d /p2v/v440
    Collecting system configuration ...
    Archiving file systems ...
    DUMP: Date of this level 0 dump: August 2, 2009 4:11:56 PM IDT
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/rdsk/c1t0d0s0 (mdev440-2:/) to /p2v/v440/ufsdump.0.
    The collection phase took 5 minutes for a 4.6 GB dump file.
    Preparation phase
    On the target machine T5220
    Run the preparation command
    We will keep the source machine's (V440) MAC address
    # /usr/sbin/ldmp2v prepare -d /p2v/v440 -o keep-mac v440
     Creating vdisks ...
    Creating file systems ...
    Populating file systems ...
    The preparation phase took 26 minutes.


    We can see that for each physical CPU on the V440 server, the LDoms P2V tool creates 4 vCPUs on the guest domain and assigns the same amount of memory that the physical system has.
    # ldm list -l v440


    NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
    v440                   inactive   ------                                   4     8G

    CONTROL
        failure-policy=ignore

    DEPENDENCY
        master=

    NETWORK
        NAME             SERVICE                     DEVICE     MAC               MODE   PVID VID                  MTU
        vnet0            primary-vsw0                           00:03:ba:c4:d2:9d        1

    DISK
        NAME             VOLUME                      TOUT DEVICE  SERVER         MPGROUP
        disk0            v440-vol0@primary-vds0

    Conversion Phase
    Before starting the conversion phase, shut down the source server (V440) in order to avoid an IP address conflict.
    On the V440 server
    # poweroff
    On the jumpstart server

    You can use the Custom JumpStart feature to perform a completely hands-off conversion.
    This feature requires that you create and configure the appropriate sysidcfg and profile files for the client on the JumpStart server.
    The profile should consist of the following lines:

    install_type upgrade
    root_device c0d0s0



    The sysidcfg file:

    name_service=NONE
    root_password=uQkoXlMLCsZhI
    system_locale=C
    timeserver=localhost
    timezone=Europe/Amsterdam
    terminal=vt100
    security_policy=NONE
    nfs4_domain=dynamic
    network_interface=PRIMARY {netmask=255.255.255.192 default_route=none protocol_ipv6=no}
    On the target server T5220
    # ldmp2v convert -j -n vnet0 -d /p2v/v440 v440
    Testing original system status ...
    LDom v440 started
    Waiting for Solaris to come up ...
    Using Custom JumpStart
    Trying 0.0.0.0...
    Connected to 0.
    Escape character is '^]'.

    Connecting to console "v440" in group "v440" ....
    Press ~? for control options ..
    SunOS Release 5.10 Version Generic_139555-08 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.



    For information about the P2V migration tool, see the ldmp2v(1M) man page.





Sunday Apr 19, 2009

LDoms with ZFS

Logical Domains offers a powerful and consistent methodology for creating virtualized server environments across the entire CoolThreads server range:

   * Create multiple independent virtual machines quickly and easily
     using the hypervisor built into every CoolThreads system.
   * Leverage advanced Solaris technologies such as ZFS cloning and
     snapshots to speed deployment and dramatically reduce disk
     capacity requirements.

In this entry I will demonstrate the integration between LDoms and ZFS.

Architecture layout





Downloading Logical Domains Manager and Solaris Security Toolkit

Download the Software

Download the zip file (LDoms_Manager-1_1.zip) from the Sun Software Download site. You can find the software from this web site:

http://www.sun.com/ldoms

Unzip the zip file.
# unzip LDoms_Manager-1_1.zip
Please read the README file for any prerequisites.
The installation script is part of the SUNWldm package and is in the Install subdirectory.


# cd LDoms_Manager-1_1


Run the install-ldm installation script with no options.
# Install/install-ldm

Select a security profile from this list:

a) Hardened Solaris configuration for LDoms (recommended)
b) Standard Solaris configuration
c) Your custom-defined Solaris security configuration profile

Enter a, b, or c [a]: a


Shut down and reboot your server
# /usr/sbin/shutdown -y -g0 -i6

Use the ldm list command to verify that the Logical Domains Manager is running
# /opt/SUNWldm/bin/ldm list


NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-c-- SP 32 16256M 0.0% 2d 23h 27m

Creating Default Services

You must create the following virtual default services initially to be able to use them later:
vdiskserver – virtual disk server
vswitch – virtual switch service
vconscon – virtual console concentrator service

Create a virtual disk server (vds) to allow importing virtual disks into a logical domain.
# ldm add-vds primary-vds0 primary

Create a virtual console concentrator (vcc) service for use by the virtual network terminal server daemon (vntsd)
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Create a virtual switch service
(vsw) to enable networking between virtual network
(vnet) devices in logical domains
# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

Verify the services have been created by using the list-services subcommand.


# ldm list-services

Set Up the Control Domain

Assign cryptographic resources to the control domain.
# ldm set-mau 1 primary

Assign virtual CPUs to the control domain.
# ldm set-vcpu 4 primary

Assign memory to the control domain.
# ldm set-memory 4G primary

Add a logical domain machine configuration to the system controller (SC).
# ldm add-config initial

Verify that the configuration is ready to be used at the next reboot
# ldm list-config

factory-default
initial [next poweron]

Reboot the server
# shutdown -y -g0 -i6

Enable the virtual network terminal server daemon, vntsd
# svcadm enable vntsd

Create the zpool and a 15 GB ZFS volume (zvol) that will be used as the guest domain's boot disk

# zpool create ldompool c1t2d0 c1t3d0

# zfs create ldompool/goldimage

# zfs create -V 15g ldompool/goldimage/disk_image



Creating and Starting a Guest Domain

Create a logical domain.
# ldm add-domain goldldom

Add CPUs to the guest domain.
# ldm add-vcpu 4 goldldom

Add memory to the guest domain
# ldm add-memory 2G goldldom

Add a virtual network device to the guest domain.
# ldm add-vnet vnet1 primary-vsw0 goldldom

Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain
# ldm add-vdsdev /dev/zvol/dsk/ldompool/goldimage/disk_image vol1@primary-vds0

Add a virtual disk to the guest domain.
# ldm add-vdisk vdisk0 vol1@primary-vds0 goldldom

Set auto-boot and boot-device variables for the guest domain
# ldm set-variable auto-boot\?=false goldldom
# ldm set-var boot-device=vdisk0 goldldom


Bind resources to the guest domain goldldom and then list the domain to verify that it is bound.
# ldm bind-domain goldldom
# ldm list-domain goldldom


NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      4     4G       0.2%  15m
goldldom         bound      ------  5000    4     2G

Start the guest domain
# ldm start-domain goldldom
Connect to the console of a guest domain
# telnet 0 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
Connecting to console "goldldom" in group "goldldom" ....
Press ~? for control options ..

{0} ok

Jump-Start the goldldom

{0} ok boot net - install
We can log in to the new guest and verify that the file system is ZFS

# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  14.9G  1.72G  13.2G    11%  ONLINE  -
Restore the goldldom configuration to an "as-manufactured" state with the sys-unconfig command


# sys-unconfig
This program will unconfigure your system.  It will cause it
to revert to a "blank" system - it will not have a name or know
about other systems or networks.
This program will also halt the system.
Do you want to continue (y/n) y

Press ~. in order to return to the primary domain

Stop the guest domain
# ldm stop goldldom
Unbind the guest domain

# ldm unbind  goldldom
Snapshot the disk image
# zfs snapshot ldompool/goldimage/disk_image@sysunconfig

Create new zfs file system for the new guest
# zfs create ldompool/domain1

Clone the goldldom disk image
# zfs clone ldompool/goldimage/disk_image@sysunconfig ldompool/domain1/disk_image

# zfs list
NAME USED AVAIL REFER MOUNTPOINT
ldompool 17.0G 117G 21K /ldompool
ldompool/domain1 18K 117G 18K /ldompool/domain1
ldompool/domain1/disk_image 0 117G 2.01G -
ldompool/goldimage 17.0G 117G 18K /ldompool/goldimage
ldompool/goldimage/disk_image 17.0G 132G 2.01G -
ldompool/goldimage/disk_image@sysunconfig 0 - 2.01G -

Creating and Starting the Second Domain


# ldm add-domain domain1
# ldm add-vcpu 4 domain1
# ldm add-memory 2G domain1
# ldm add-vnet vnet1 primary-vsw0 domain1
# ldm add-vdsdev /dev/zvol/dsk/ldompool/domain1/disk_image vol2@primary-vds0
# ldm add-vdisk vdisk1 vol2@primary-vds0 domain1
# ldm set-var auto-boot\?=false domain1
# ldm set-var boot-device=vdisk1 domain1

# ldm bind-domain domain1
# ldm list-domain domain1
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
domain1 bound ------ 5001 8 2G

Start the domain
# ldm start-domain domain1

Connect to the console
# telnet 0 5001
{0} ok boot net -s

Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Booting to milestone "milestone/single-user:default".
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
Configured interface vnet0
Requesting System Maintenance Mode
SINGLE USER MODE

The cloned root pool was last in use by goldldom, so force-import it once and export it cleanly before rebooting:
# zpool import -f rpool
# zpool export rpool
# reboot


Answer the configuration questions

Log in to the new domain and verify that we have a ZFS file system

# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
rpool 14.9G 1.72G 13.2G 11% ONLINE -
