Thursday Jul 05, 2012

LDoms with Solaris 11


Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software and find the release notes, reference manual, and admin guide on the Oracle VM Server for SPARC page.

Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11.

The version of the Oracle Solaris OS that runs in a guest domain is independent of the Oracle Solaris OS version that runs in the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain.
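As a minimal sketch of what creating a guest domain looks like (the domain name and resource sizes below are hypothetical, and a virtual disk and virtual network device would still need to be added before installing Solaris):

# ldm add-domain ldg1
# ldm add-vcpu 8 ldg1
# ldm add-memory 4G ldg1
# ldm bind-domain ldg1
# ldm start-domain ldg1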


In addition, starting with the Oracle VM Server for SPARC 2.2 release you can migrate a guest domain even if the source and target machines have different processor types.




You can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on both the source and target systems must run Solaris 11, and you need to change the cpu-arch property value on the source system.
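For example, on the source control domain (a minimal sketch; the domain name ldg1 is hypothetical, and the property must be changed while the domain is not running):

# ldm set-domain cpu-arch=generic ldg1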


For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.



Wednesday Feb 15, 2012

Oracle VM for SPARC (LDoms) Live Migration

This document provides information about how to increase application
availability by using the Oracle VM Server for SPARC software
(previously called Sun Logical Domains).

We tested an Oracle 11gR2 database on a SPARC T4 server while migrating the
guest domain from one SPARC T4 server to another without shutting down the
Oracle database.

In the testing, the Swingbench OrderEntry workload was used to
generate load. OrderEntry is based on the oe schema that ships
with Oracle 11g.

The scenario had the following characteristics:
a 30 GB database with an 18 GB SGA, 50 workload users, and
100 ms between actions taken by each user.


The paper shows the complete configuration process, including the
creation and configuration of guest domains using the ldm command.
It also covers the storage configuration and layout, plus the software requirements,
used to demonstrate live migration between two T4 systems.
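As an illustration only (not the exact commands from the paper; the guest domain name and target host are hypothetical), a live migration is started from the source control domain with a single ldm command, which prompts for the password of the target's control domain:

# ldm migrate-domain ldg1 root@target-t4-host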


For more information see Increasing Application Availability by Using the Oracle VM Server for SPARC Live Migration Feature: An Oracle Database Example


Tuesday Dec 15, 2009

VirtualBox Teleporting


ABSTRACT

In this entry, I will demonstrate how to use the new migration (a.k.a.
teleporting) feature of VirtualBox 3.1 to move a virtual machine over a
network from one VirtualBox host to another while the virtual machine is
running.


Introduction to VirtualBox


VirtualBox is a general-purpose full virtualizer for x86 hardware.
Targeted at server, desktop and embedded use, it is now the only
professional-quality virtualization solution that is also Open Source
Software.

Introduction to Teleporting


Teleporting requires that a machine be currently running on one host,
which is then called the "source". The host to which the virtual
machine will be teleported will then be called the "target". The
machine on the target is then configured to wait for the source to
contact the target. The machine's running state will then be
transferred from the source to the target with minimal downtime.

This works regardless of the host operating system that is running on
the hosts: you can teleport virtual machines between Solaris and Mac
hosts, for example.


Architecture layout:




Prerequisites:

1. The target and source machines should both be running VirtualBox version 3.1 or later.

2. The target machine must be configured with the same amount of memory
(machine and video memory) and other hardware settings as the source
machine. Otherwise teleporting will fail with an error message (see the sketch after this list).

3. The two virtual machines on the source and the target must share the same storage:
they either use the same iSCSI targets, or both hosts have access to the same storage via NFS or SMB/CIFS.

4. The source and target machines cannot have any snapshots.

5. The hosts must have fairly similar CPUs. Teleporting between Intel and AMD CPUs will probably fail with an error message.
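One way to satisfy prerequisite 2 (a sketch; the VM name matches the one used later in this example, and the sizes are only placeholders) is to read the memory settings of the source VM and apply the same values to the target VM before it is started.

On the source host:

VBoxManage showvminfo opensolaris

On the target host:

VBoxManage modifyvm opensolaris --memory 1536 --vram 12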

Preparing the storage environment


For this example, I will use OpenSolaris x86 as a CIFS server to provide
shared storage for the source and target machines, but you can
use any iSCSI, NFS, or CIFS server for this task.

Install the packages from the OpenSolaris.org repository:

# pfexec pkg install SUNWsmbs SUNWsmbskr


Reboot the system to activate the SMB server in the kernel.

# pfexec reboot


Enable  the CIFS service:

# pfexec svcadm enable -r smb/server


If the following warning is issued, you can ignore it:

svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

Verify the service:

# pfexec svcs smb/server


STATE          STIME    FMRI

online          8:38:22 svc:/network/smb/server:default

The Solaris CIFS SMB service uses WORKGROUP as the default group. If
the workgroup needs to be changed, use the following command to change
the workgroup name:

# pfexec smbadm join -w workgroup-name


Next, edit the /etc/pam.conf file to enable encrypted passwords to be used for CIFS.

Add the following line to the end of the file:

other password required pam_smb_passwd.so.1     nowarn

# pfexec  echo "other password required pam_smb_passwd.so.1     nowarn" >> /etc/pam.conf


Each user currently in the /etc/passwd file needs to re-encrypt their password to be able to use the CIFS service:
# pfexec passwd user-name

Note -
After the PAM module is installed, the passwd command
automatically generates CIFS-suitable passwords for new users. You must
also run the passwd command to generate CIFS-style passwords for
existing users.

Create a mixed-case ZFS file system.

# pfexec zfs create -o casesensitivity=mixed  rpool/vboxstorage


Enable SMB sharing for the ZFS file system.

# pfexec  zfs set sharesmb=on rpool/vboxstorage


Verify how the file system is shared.

# pfexec sharemgr show -vp


Now, you can access the share by connecting to \\solaris-hostname\share-name
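For example, on a Windows host you could map the share to a drive letter (the drive letter is arbitrary; share-name is the share name reported by sharemgr above):

net use Z: \\solaris-hostname\share-name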


Create a new virtual machine. For the virtual hard disk, select “Create new hard disk”, then select Next.



Select the Next button.



For the disk location, enter the network drive that you mapped in the previous section, then press the Next button.



Verify the disk settings and then press the Finish button.



Continue with the install process. After the installation finishes, shut down the virtual machine in order to avoid any storage locking.


On the target machine :


Map the same network drive


Configure a new virtual machine, but
instead of selecting “Create new hard drive”, select “Use existing hard drive”.




In the Virtual Media Manager window,
select the Add button and point to the same location as the
source machine's hard drive (the network drive).




Don't start the Virtual machine yet.

To make it wait for a teleport request to arrive when it is started, use the following VBoxManage command:

VBoxManage modifyvm <targetvmname> --teleporter on --teleporterport <port>


where <targetvmname> is the name of the virtual machine on the
target (in this use case, opensolaris), and <port> is a TCP/IP
port number to be used on both the source and the target. In this example, I used port 6000.

C:\Program Files\Sun\VirtualBox>VBoxManage modifyvm opensolaris --teleporter on --teleporterport 6000


Next, start the VM on the target. You will see that instead of actually
running, it shows a progress dialog indicating that it is waiting
for a teleport request to arrive.



You can see that the machine status changed to Teleporting 



On the source machine: 

Start the Virtual machine

When it is running and you want it to be teleported, issue the following command:

VBoxManage controlvm <sourcevmname> teleport --host <targethost> --port <port>

where <sourcevmname> is the name of the virtual machine on the
source (the machine that is currently running), <targethost> is
the hostname or IP address of the target that has the machine waiting
for the teleport request, and <port> must be the same number as
specified in the command on the target (for this example, 6000).

C:\Program Files\Sun\VirtualBox>VBoxManage controlvm opensolaris teleport --host target_machine_ip --port 6000
VirtualBox Command Line Management Interface Version 3.1.0
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
You can see that the machine status changed to Teleported



The teleporting process took ~5 seconds

For more information, see the VirtualBox documentation.

Wednesday Nov 25, 2009

Solaris Zones migration with ZFS

ABSTRACT
In this entry I will demonstrate how to migrate a Solaris Zone running
on a T5220 server to a new T5220 server, using ZFS as the file system for
this zone.
Introduction to Solaris Zones

Solaris Zones provide a new isolation primitive for the Solaris OS
that is secure, flexible, scalable and lightweight. Virtualized OS
services look like different Solaris instances. Together with the
existing Solaris resource management framework, Solaris Zones form the
basis of Solaris Containers.

Introduction to ZFS


ZFS is a new kind of file system that provides simple administration,
transactional semantics, end-to-end data integrity, and immense
scalability.
Architecture layout:





Prerequisites:

The Global Zone on the target system must be running the same Solaris release as the original host.

To ensure that the zone will run properly, the target system must have
the same versions of the following required operating system packages
and patches as those installed on the original host.


Packages that deliver files under an inherit-pkg-dir resource
Packages where SUNW_PKG_ALLZONES=true
Other packages and patches, such as those for third-party products, can be different.

Note for Solaris 10 10/08: If the new host has later versions of the
zone-dependent packages and their associated patches, using zoneadm
attach with the -u option updates those packages within the zone to
match the new host. The update on attach software looks at the zone
that is being migrated and determines which packages must be updated to
match the new host. Only those packages are updated. The rest of the
packages, and their associated patches, can vary from zone to zone.
This option also enables automatic migration between machine classes,
such as from sun4u to sun4v.


Create the ZFS pool for the zone
# zpool create zones c2t5d2
# zpool list

NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zones   298G    94K   298G     0%  ONLINE  -

Create a ZFS file system for the zone
# zfs create zones/zone1
# zfs list

NAME          USED  AVAIL  REFER  MOUNTPOINT
zones         130K   293G    18K  /zones
zones/zone1    18K   293G    18K  /zones/zone1

Change the file system permission
# chmod 700 /zones/zone1

Configure the zone
# zonecfg -z zone1

zone1: No such zone configured
Use 'create' to begin configuring a new zone.

zonecfg:zone1> create -b
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.1.1
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
Install the new Zone
# zoneadm -z zone1 install

Boot the new zone
# zoneadm -z zone1 boot

Login to the zone
# zlogin -C zone1

Answer all the setup questions

How to Validate a Zone Migration Before the Migration Is Performed

Generate the manifest for zone1 on the source host and pipe the output
to a remote command that will immediately validate the target host:
# zoneadm -z zone1 detach -n | ssh targethost zoneadm -z zone1 attach -n -

Start the migration process

Halt the zone to be moved, zone1 in this procedure.
# zoneadm -z zone1 halt

Create a snapshot of this zone in order to save its original state:
# zfs snapshot zones/zone1@snap
# zfs list

NAME               USED  AVAIL  REFER  MOUNTPOINT
zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap      0      -  4.13G  -
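If the migration ever needs to be backed out, this snapshot lets you return the zone's file system to its saved state (a sketch; run it wherever the pool is currently imported):

# zfs rollback zones/zone1@snap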
Detach the zone.
# zoneadm -z zone1 detach

Export the ZFS pool, using the zpool export command.
# zpool export zones


On the target machine
Connect the storage to the machine and then import the ZFS pool:
# zpool import zones
# zpool list

NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zones   298G  4.13G   294G     1%  ONLINE  -
# zfs list

NAME               USED  AVAIL  REFER  MOUNTPOINT
zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap  2.94M      -  4.13G  -

On the new host, configure the zone.
# zonecfg -z zone1

You will see the following system message:

zone1: No such zone configured

Use 'create' to begin configuring a new zone.

To create the zone zone1 on the new host, use the zonecfg command with the -a option and the zonepath on the new host.

zonecfg:zone1> create -a /zones/zone1
Commit the configuration and exit.
zonecfg:zone1> commit
zonecfg:zone1> exit
Attach the zone with a validation check.
# zoneadm -z zone1 attach

The system administrator is notified of required actions to be taken if either or both of the following conditions are present:

Required packages and patches are not present on the new machine.

The software levels are different between machines.

Note for Solaris 10 10/08: Attach the zone with a validation check and
update the zone to match a host running later versions of the dependent
packages or having a different machine class upon attach.
# zoneadm -z zone1 attach -u

Solaris 10 5/09 and later: Also use the -b option to back out specified patches, either official or IDR, during the attach.
# zoneadm -z zone1 attach -u -b IDR246802-01 -b 123456-08

Note that you can use the -b option independently of the -u option.

Boot the zone
# zoneadm -z zone1 boot

Login to the new zone
# zlogin -C zone1

[Connected to zone 'zone1' console]

Hostname: zone1

The whole process took approximately five minutes.

For more information, see the Solaris ZFS and Zones documentation.

Sunday Aug 02, 2009

Logical Domains Physical-to-Virtual (P2V) Migration


The Logical Domains P2V migration tool automatically converts an existing physical system
to a virtual system that runs in a logical domain on a chip multithreading (CMT) system.
The source system can be any of the following:



    *  Any sun4u SPARC system that runs at least the Solaris 8 Operating System
    *  Any sun4v system that runs the Solaris 10 OS, but does not run in a logical domain


In this entry I will demonstrate how to use the Logical Domains P2V migration tool to migrate Solaris running on a V440 server (physical)
into a guest domain running on a T5220 server (virtual).
Architecture layout:



Before you can run the Logical Domains P2V Migration Tool, ensure that the following are true:



  •     Target system runs at least Logical Domains 1.1 on one of the following:

  •     Solaris 10 10/08 OS

  •     Solaris 10 5/08 OS with the appropriate Logical Domains 1.1 patches

  •     Guest domains run at least the Solaris 10 5/08 OS

  •     Source system runs at least the Solaris 8 OS




In addition to these prerequisites, configure an NFS file system to be shared by both the source
and target systems. This file system should be writable by root. However, if a shared file system
is not available, use a local file system that is large enough to hold a file system dump of the
source system on both the source and target systems.

Limitations

Version 1.0 of the Logical Domains P2V Migration Tool has the following limitations:



  •      Only UFS file systems are supported.

  •      Each guest domain can have only a single virtual switch and virtual disk

  •      The flash archiving method silently ignores excluded file systems.

The conversion from a physical system to a virtual system is performed in the following phases:


  • Collection phase. Runs on the physical source system. The collect step creates a file system image of the source system based on the configuration information that it collects about the source system.

  • Preparation phase. Runs on the control domain of the target system. The prepare step creates the logical domain on the target system based on the configuration information collected in the collection phase.

  • Conversion phase. Runs on the control domain of the target system. In the conversion phase, the created logical domain is converted into a logical domain that runs the Solaris 10 OS by using the standard Solaris upgrade process.


    Collection phase
    On the target machine T5220

    Prepare the NFS server in order to hold a file system dump of the source system for both the source and target systems.
    In this use case I will use the target machine (T5220) as the NFS server
    # mkdir /p2v
    # share -F nfs -o root=v440 /p2v

    Verify the NFS share
    # share
    -  /p2v  root=v440  ""
    Install the Logical Domains P2V Migration Tool.
    Go to the Logical Domains download page at http://www.sun.com/servers/coolthreads/ldoms/get.jsp.
    Download the P2V software package, SUNWldmp2v
    Use the pkgadd command to install the SUNWldmp2v package
    # pkgadd -d . SUNWldmp2v

    Create the /etc/ldmp2v.conf file; we will use it later:
    # cat /etc/ldmp2v.conf

    VSW="primary-vsw0"
    VDS="primary-vds0"
    VCC="primary-vcc0"
    BACKEND_PREFIX="/ldoms/disks/"
    BACKEND_TYPE="file"
    BACKEND_SPARSE="no"
    BOOT_TIMEOUT=10
    On the source machine V440
    Install the Logical Domains P2V Migration Tool
    # pkgadd -d . SUNWldmp2v
    Mount the NFS share
    # mkdir /p2v
    # mount t5220:/p2v /p2v
    Run the collection command
    # /usr/sbin/ldmp2v collect -d /p2v/v440
    Collecting system configuration ...
    Archiving file systems ...
    DUMP: Date of this level 0 dump: August 2, 2009 4:11:56 PM IDT
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/rdsk/c1t0d0s0 (mdev440-2:/) to /p2v/v440/ufsdump.0.
    The collection phase took 5 minutes for a 4.6 GB dump file.
    Preparation phase
    On the target machine T5220
    Run the preparation command
    We will keep the source machine's (V440) MAC address.
    # /usr/sbin/ldmp2v prepare -d /p2v/v440 -o keep-mac v440
     Creating vdisks ...
    Creating file systems ...
    Populating file systems ...
    The preparation phase took 26 minutes


    We can see that for each physical CPU on the V440 server the LDoms P2V tool creates 4 vcpus on the guest domain, and it assigns the same amount of memory that the physical system has.
    # ldm list -l v440


    NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
    v440                   inactive   ------                                   4     8G

    CONTROL
        failure-policy=ignore

    DEPENDENCY
        master=

    NETWORK
        NAME             SERVICE                     DEVICE     MAC               MODE   PVID VID                  MTU
        vnet0            primary-vsw0                           00:03:ba:c4:d2:9d        1

    DISK
        NAME             VOLUME                      TOUT DEVICE  SERVER         MPGROUP
        disk0            v440-vol0@primary-vds0

    Conversion Phase
    Before starting the conversion phase, shut down the source server (V440) in order to avoid an IP address conflict.
    On the V440 server
    # poweroff
    On the jumpstart server

    You can use the Custom JumpStart feature to perform a completely hands-off conversion.
    This feature requires that you create and configure the appropriate sysidcfg and profile files for the client on the JumpStart server.
    The profile should consist of the following lines:

    install_type    upgrade
    root_device     c0d0s0



    The sysidcfg file:

    name_service=NONE
    root_password=uQkoXlMLCsZhI
    system_locale=C
    timeserver=localhost
    timezone=Europe/Amsterdam
    terminal=vt100
    security_policy=NONE
    nfs4_domain=dynamic
    network_interface=PRIMARY {netmask=255.255.255.192 default_route=none protocol_ipv6=no}
    On the target server T5220
    # ldmp2v convert -j -n vnet0 -d /p2v/v440 v440
    Testing original system status ...
    LDom v440 started
    Waiting for Solaris to come up ...
    Using Custom JumpStart
    Trying 0.0.0.0...
    Connected to 0.
    Escape character is '^]'.

    Connecting to console "v440" in group "v440" ....
    Press ~? for control options ..
    SunOS Release 5.10 Version Generic_139555-08 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.



    For information about the P2V migration tool, see the ldmp2v(1M) man page.




