Tuesday Dec 15, 2009

VirtualBox Teleporting


ABSTRACT

In this entry, I will demonstrate how to use the live migration (a.k.a.
teleporting) feature introduced in VirtualBox 3.1 to move a virtual
machine over a network from one VirtualBox host to another while the
virtual machine is running.


Introduction to VirtualBox


VirtualBox
is a general-purpose full virtualizer for x86 hardware.
Targeted at server, desktop and embedded use, it is now the only
professional-quality virtualization solution that is also Open Source
Software.

Introduction to Teleporting


Teleporting requires that a machine be currently running on one host,
which is then called the "source". The host to which the virtual
machine will be teleported will then be called the "target". The
machine on the target is then configured to wait for the source to
contact the target. The machine's running state will then be
transferred from the source to the target with minimal downtime.

This works regardless of the host operating system that is running on
the hosts: you can teleport virtual machines between Solaris and Mac
hosts, for example.


Architecture layout:




Prerequisites :

1. The target and source machines must both be running VirtualBox version 3.1 or later (a quick way to check this is shown after this list).

2. The target machine must be configured with the same amount of memory
(machine and video memory) and the same hardware settings as the source
machine; otherwise, teleporting will fail with an error message.

3. The virtual machines on the source and the target must share the same storage: either they use the same iSCSI targets, or both hosts have access to the same storage via NFS or SMB/CIFS.

4. The source and target machines cannot have any snapshots.

5. The hosts must have fairly similar CPUs. Teleporting between Intel and AMD CPUs will probably fail with an error message.
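
A quick way to verify the first two prerequisites is to compare the VirtualBox version and the VM settings on both hosts. This is only a sketch; the VM name opensolaris matches the example used later in this entry:

VBoxManage --version
VBoxManage showvminfo opensolaris

Run both commands on the source and the target, and make sure the reported versions match and that the memory and video memory values are identical.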

Preparing the storage environment


For this example, I will use OpenSolaris x86 as a CIFS server in order to
provide shared storage for the source and target machines, but you can
use any iSCSI, NFS, or CIFS server for this task.

Install the packages from the OpenSolaris.org repository:

# pfexec pkg install SUNWsmbs SUNWsmbskr


Reboot the system to activate the SMB server in the kernel.

# pfexec reboot


Enable the CIFS service:

# pfexec svcadm enable -r smb/server


If the following warning is issued, you can ignore it:

svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

Verify the service:

# pfexec svcs smb/server


STATE          STIME    FMRI

online          8:38:22 svc:/network/smb/server:default

The Solaris CIFS SMB service uses WORKGROUP as the default group. If
the workgroup needs to be changed, use the following command to change
the workgroup name:

# pfexec smbadm join -w workgroup-name


Next, edit the /etc/pam.conf file to enable encrypted passwords to be used for CIFS.

Add the following line to the end of the file:

other password required pam_smb_passwd.so.1     nowarn

# pfexec echo "other password required pam_smb_passwd.so.1     nowarn" >> /etc/pam.conf


Each user currently in the /etc/passwd file needs to re-encrypt their password to be able to use the CIFS service:

# pfexec passwd user-name

Note -
After the PAM module is installed, the passwd command
automatically generates CIFS-suitable passwords for new users. You must
also run the passwd command to generate CIFS-style passwords for
existing users.

Create a mixed-case ZFS file system:

# pfexec zfs create -o casesensitivity=mixed rpool/vboxstorage


Enable SMB sharing for the ZFS file system:

# pfexec zfs set sharesmb=on rpool/vboxstorage
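
By default, the SMB share name is derived from the dataset name. If you want a specific share name, you can set it explicitly; the name vboxstorage below is just an illustration, not part of the original procedure:

# pfexec zfs set sharesmb=name=vboxstorage rpool/vboxstorage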


Verify how the file system is shared.

# pfexec sharemgr show -vp


Now, you can access the share by connecting to \\solaris-hostname\share-name
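
On a Windows host (like the source machine used later in this entry), you can map the share to a drive letter. A minimal sketch; the drive letter Z: and the host and share names are placeholders:

C:\>net use Z: \\solaris-hostname\share-name

On a Solaris or Mac OS X host, mount the same share over SMB/CIFS (or use NFS or iSCSI, per the prerequisites) instead.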


Create a new virtual machine. For the virtual hard disk, select “Create new hard disk”, then click Next.



Click the Next button.



For the disk location, enter the network drive that you mapped in the previous section, then click the Next button.



Verify the disk settings, and then click the Finish button.
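
If you prefer the command line, the wizard steps above can be approximated with VBoxManage. This is only a sketch, assuming the VM name opensolaris and the mapped drive Z: from the previous step (the disk size is in MB); the new disk still needs to be attached to the VM in its storage settings:

C:\Program Files\Sun\VirtualBox>VBoxManage createvm --name opensolaris --register

C:\Program Files\Sun\VirtualBox>VBoxManage createhd --filename Z:\opensolaris.vdi --size 16384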



Continue with the install process. After the installation finishes, shut down the virtual machine in order to avoid any storage locking.


On the target machine:


Map the same network drive


Configure a new virtual machine, but instead of selecting “Create new hard disk”, select “Use existing hard disk”.




In the Virtual Media Manager window, select the Add button, and point to the same location as the source machine's hard disk (the network drive).
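
As a command-line alternative, the existing disk image can be registered on the target with VBoxManage. A sketch only, assuming the same mapped drive and file name as on the source, and that the openmedium subcommand is available in this VirtualBox version:

C:\Program Files\Sun\VirtualBox>VBoxManage openmedium disk Z:\opensolaris.vdi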




Don't start the Virtual machine yet.

To have the machine wait for a teleport request to arrive when it is started, use the following VBoxManage command:

VBoxManage modifyvm <targetvmname> --teleporter on --teleporterport <port>


where <targetvmname> is the name of the virtual machine on the
target (in this use case, opensolaris), and <port> is a TCP/IP
port number to be used on both the source and the target. In this example, I used port 6000.

C:\Program Files\Sun\VirtualBox>VBoxManage modifyvm opensolaris --teleporter on --teleporterport 6000


Next, start the VM on the target. You will see that instead of actually
running, it shows a progress dialog indicating that it is waiting
for a teleport request to arrive.



You can see that the machine status changed to Teleporting 



On the source machine: 

Start the Virtual machine

When it is running and you want it to be teleported, issue the following command:

VBoxManage controlvm <sourcevmname> teleport --host <targethost> --port <port>

where <sourcevmname> is the name of the virtual machine on the source (the machine that is currently running), <targethost> is the host name or IP address of the target that has the machine waiting for the teleport request, and <port> must be the same number as specified in the command on the target (for this example, 6000).

C:\Program Files\Sun\VirtualBox>VBoxManage controlvm opensolaris teleport --host target_machine_ip --port 6000
VirtualBox Command Line Management Interface Version 3.1.0
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
You can see that the machine status changed to Teleported



The teleporting process took ~5 seconds.

For more information about VirtualBox, see http://www.virtualbox.org/.

Thursday Jul 02, 2009

Storage virtualization with COMSTAR and ZFS


COMSTAR is a software framework that enables you to turn any
OpenSolaris host into a SCSI target that can be accessed over the
network by initiator hosts. COMSTAR breaks down the huge task of
handling a SCSI target subsystem into independent functional modules.
These modules are then glued together by the SCSI Target Mode Framework (STMF).

COMSTAR features include:

    * Extensive LUN masking and mapping functions
    * Multipathing across different transport protocols
    * Multiple parallel transfers per SCSI command
    * Scalable design
    * Compatible with generic HBAs


COMSTAR is integrated into the latest OpenSolaris releases.

In this entry, I will demonstrate the integration between COMSTAR and ZFS.


Architecture layout:




You can install all the appropriate COMSTAR packages:

# pkg install storage-server

On a newly installed OpenSolaris system, the STMF service is disabled by default.

You must complete this task to enable the STMF service.

View the existing state of the service:

# svcs stmf
disabled       15:58:17 svc:/system/stmf:default

Enable the STMF service:

# svcadm enable stmf

Verify that the service is active:

# svcs stmf
online         15:59:53 svc:/system/stmf:default

Create a RAID-Z storage pool.
The server has six controllers, each with eight disks, and I have
built the storage pool to spread I/O evenly and to enable me to build 8
RAID-Z stripes of equal length.

# zpool create -f tank \
raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0
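
A quick optional check (not part of the original procedure) to confirm the pool layout after creation:

# zpool status tank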


After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume:

# zfs create -V 50g tank/comstar-vol1

Create a logical unit using the volume:

# sbdadm create-lu /dev/zvol/rdsk/tank/comstar-vol1
Created the following logical unit:

GUID                              DATA SIZE      SOURCE
--------------------------------  -------------  ----------------
600144f07bb2ca0000004a4c5eda0001  53687025664    /dev/zvol/rdsk/tank/comstar-vol1

Verify the creation of the logical unit and obtain the Global Unique Identification (GUID) number for the logical unit:

# sbdadm list-lu
Found 1 LU(s)

GUID                              DATA SIZE      SOURCE
--------------------------------  -------------  ----------------
600144f07bb2ca0000004a4c5eda0001  53687025664    /dev/zvol/rdsk/tank/comstar-vol1

This procedure makes a logical unit available to all initiator hosts on a storage network.
Add a view for the logical unit.

# stmfadm add-view GUID_number
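
For the logical unit created above, the command would use the GUID reported by sbdadm, and you can optionally confirm the view afterwards:

# stmfadm add-view 600144f07bb2ca0000004a4c5eda0001
# stmfadm list-view -l 600144f07bb2ca0000004a4c5eda0001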

Identify the host identifier of the initiator host you want to add to your view.
Follow the instructions for each port provider to identify the initiators associated with each
port provider.


You can see that the port mode is Initiator

# fcinfo hba-port
       HBA Port WWN: 210000e08b91facd
        Port Mode: Initiator
        Port ID: 2
        OS Device Name: /dev/cfg/c16
        Manufacturer: QLogic Corp.
        Model: 375-3294-01
        Firmware Version: 04.04.01
        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
        Serial Number: 0402R00-0634175788
        Driver Name: qlc
        Driver Version: 20080617-2.30
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 200000e08b91facd
        Max NPIV Ports: 63
        NPIV port list:

Before making changes to the HBA ports, first check the existing port
bindings.
View what is currently bound to the port drivers.
In this example, the current binding is pci1077,2422.
# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
neti sctp arp usba qlc fctl  
fcp cpc random crypto stmf nfs lofs logindmux ptm ufs sppp nsctl ipc ]
> ::devbindings -q qlc
ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlc)
ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlc)

Quit mdb.
> $q
Remove the current binding, which in this example is qlc.
In this example, the qlc driver is actively bound to pci1077,2422.
You must remove the existing binding for qlc before you can add that
binding to a new driver.

 Single quotes are required in this syntax.
# update_drv -d -i 'pci1077,2422' qlc
Cannot unload module: qlc
Will be unloaded upon reboot.

This message does not indicate an error.
The configuration files have been updated but the qlc driver remains
bound to the port until reboot.
Establish the new binding to qlt.
Single quotes are required in this syntax.
# update_drv -a -i 'pci1077,2422' qlt
  Warning: Driver (qlt) successfully added to system but failed to
attach

This message does not indicate an error. The qlc driver remains bound
to the port, until reboot.
The qlt driver attaches when the system is rebooted.
Reboot the system to attach the new driver, and then recheck the
bindings.
# reboot
# mdb -k

Loading modules: [ unix genunix specfs dtrace mac cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd sockfs ip hook
neti sctp arp usba fctl stmf lofs fcip cpc random crypto nfs logindmux
ptm ufs sppp nsctl ipc ]
> ::devbindings -q qlt
ffffff04e058f560 pci1077,2422, instance #0 (driver name: qlt)
ffffff04e058f2e0 pci1077,2422, instance #1 (driver name: qlt)

Quit mdb.
 > $q


You can see that the port mode is Target

# fcinfo hba-port

        HBA Port WWN: 210000e08b91facd
        Port Mode: Target
        Port ID: 2
        OS Device Name: /dev/cfg/c16
        Manufacturer: QLogic Corp.
        Model: 375-3294-01
        Firmware Version: 04.04.01
        FCode/BIOS Version: BIOS: 1.4; fcode: 1.11; EFI: 1.0;
        Serial Number: 0402R00-0634175788
        Driver Name: qlc
        Driver Version: 20080617-2.30
        Type: L-port
        State: online
        Supported Speeds: 1Gb 2Gb 4Gb
        Current Speed: 4Gb
        Node WWN: 200000e08b91facd
        Max NPIV Ports: 63
        NPIV port list:


Verify that the target mode framework has access to the HBA ports.
# stmfadm list-target -v

Target: wwn.210100E08BB1FACD
Operational Status: Online
Provider Name : qlt
Alias : qlt1,0
Sessions : 0
Target: wwn.210000E08B91FACD
Operational Status: Online
Provider Name : qlt
Alias : qlt0,0
Sessions : 1
Initiator: wwn.210000E08B89F077
Alias: -
Logged in since: Thu Jul 2 12:02:59 2009


Now for the client setup :


On the client machine verify that you can see the new logical unit
# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t2d0                 disk         connected    configured   unknown
c1::dsk/c1t3d0                 disk         connected    configured   unknown
c2                             fc-private   connected    configured   unknown
c2::210000e08b91facd           disk         connected    configured   unknown
c3                             fc           connected    unconfigured unknown
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok

You might need to rescan the SAN BUS in order to discover the new logical unit
# luxadm -e forcelip /dev/cfg/c2
# format
Searching for disks...
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
1. c1t2d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@2,0
2. c1t3d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,0
3. c2t210000E08B91FACDd0 <SUN-COMSTAR-1.0 cyl 65533 alt 2 hd 16 sec 100>
/pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w210000e08b91facd,0
Specify disk (enter its number):

You can see the SUN-COMSTAR-1.0 label in the disk properties.
Now you can build a storage pool on top of it:
# zpool create comstar-pool c2t210000E08B91FACDd0
Verify the pool creation:
# zpool list
NAME          SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
comstar-pool  49.8G  114K  49.7G  0%   ONLINE  -

After the pool is created, the zfs utility can be used to create a ZFS volume.
#  zfs create -V 48g comstar-pool/comstar-vol1


For more information about COMSTAR, please check the COMSTAR project on OpenSolaris.org.




Tuesday Mar 03, 2009

OpenSolaris on Amazon EC2


Yesterday Sun hosted the Israeli Association of Grid Technology (IGT) event "Amazon AWS Hands-on workshop" at the Sun office in Herzeliya. During the event, Simone Brunozzi, Amazon Web Services Evangelist, demonstrated Amazon's EC2 and S3 using the AWS console. There were 40 attendees from a wide breadth of technology firms.

For more information regarding the use of OpenSolaris on Amazon EC2, see http://www.sun.com/third-party/global/amazon/index.jsp.

