Wednesday Nov 25, 2009

Solaris Zones migration with ZFS

ABSTRACT
In this entry I will demonstrate how to migrate a Solaris Zone running
on a T5220 server to a new T5220 server, using ZFS as the file system for
the zone.
Introduction to Solaris Zones

Solaris Zones provide a new isolation primitive for the Solaris OS that
is secure, flexible, scalable and lightweight. Virtualized OS services
look like separate Solaris instances. Together with the existing Solaris
resource management framework, Solaris Zones form the basis of Solaris
Containers.

Introduction to ZFS


ZFS is a new kind of file system that provides simple administration,
transactional semantics, end-to-end data integrity, and immense
scalability.
Architecture layout:

Prerequisites:

The Global Zone on the target system must be running the same Solaris release as the original host.

To ensure that the zone will run properly, the target system must have
the same versions of the following required operating system packages
and patches as those installed on the original host.


Packages that deliver files under an inherit-pkg-dir resource
Packages where SUNW_PKG_ALLZONES=true
Other packages and patches, such as those for third-party products, can be different.
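
A quick way to compare the two hosts before migrating is to query package parameters and patch lists from the global zone on each machine; SUNWcsu below is only an example package, and the patch listings from both hosts can be diffed:

# pkgparam SUNWcsu SUNW_PKG_ALLZONES
# showrev -p > /tmp/patches-`hostname`.txt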

Note for Solaris 10 10/08: If the new host has later versions of the
zone-dependent packages and their associated patches, using zoneadm
attach with the -u option updates those packages within the zone to
match the new host. The update on attach software looks at the zone
that is being migrated and determines which packages must be updated to
match the new host. Only those packages are updated. The rest of the
packages, and their associated patches, can vary from zone to zone.
This option also enables automatic migration between machine classes,
such as from sun4u to sun4v.


Create the ZFS pool for the zone
# zpool create zones c2t5d2
# zpool list

NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zones   298G    94K   298G     0%  ONLINE  -

Create a ZFS file system for the zone
# zfs create zones/zone1
# zfs list

NAME          USED  AVAIL  REFER  MOUNTPOINT
zones         130K   293G    18K  /zones
zones/zone1    18K   293G    18K  /zones/zone1

Change the file system permissions (zoneadm requires the zonepath directory to be mode 700)
# chmod 700 /zones/zone1

Configure the zone
# zonecfg -z zone1

zone1: No such zone configured
Use 'create' to begin configuring a new zone.

zonecfg:zone1> create -b
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.1.1
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
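
Before installing, the configuration can be reviewed with the info subcommand:
# zonecfg -z zone1 info
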
Install the new Zone
# zoneadm -z zone1 install
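
When the installation finishes, the zone should be listed in the installed state:
# zoneadm list -cv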

Boot the new zone
# zoneadm -z zone1 boot

Login to the zone
# zlogin -C zone1

Answer all the setup questions
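
If you prefer to skip the interactive questions, a sysidcfg file can be placed in the zone before its first boot. The following is only a minimal sketch; the locale, timezone, hostname and encrypted root password are placeholders to replace with your own values:

# cat > /zones/zone1/root/etc/sysidcfg <<EOF
system_locale=C
terminal=xterm
network_interface=primary {
    hostname=zone1
}
name_service=NONE
security_policy=NONE
timezone=US/Eastern
nfs4_domain=dynamic
root_password=PLACEHOLDER_CRYPT_HASH
EOF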

How to Validate a Zone Migration Before the Migration Is Performed

Generate the manifest for zone1 on the source host and pipe the output
to a remote command that will immediately validate the target host:
# zoneadm -z zone1 detach -n | ssh targethost zoneadm -z zone1 attach -n -

Start the migration process

Halt the zone to be moved, zone1 in this procedure.
# zoneadm -z zone1 halt

Create a snapshot of this zone in order to preserve its original state
# zfs snapshot zones/zone1@snap
# zfs list

NAME               USED  AVAIL  REFER  MOUNTPOINT
zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap      0      -  4.13G  -
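
The snapshot gives an easy way back: if the migration has to be abandoned, the file system can be rolled back to its pre-migration state on the source host:
# zfs rollback zones/zone1@snap
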
Detach the zone.
# zoneadm -z zone1 detach

Export the ZFS pool using the zpool export command
# zpool export zones


On the target machine
 Connect the storage to the machine and then import the ZFS pool on the target machine
# zpool import zones
# zpool list

NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zones   298G  4.13G   294G     1%  ONLINE  -
# zfs list

NAME               USED  AVAIL  REFER  MOUNTPOINT
zones             4.13G   289G    19K  /zones
zones/zone1       4.13G   289G  4.13G  /zones/zone1
zones/zone1@snap  2.94M      -  4.13G  -

On the new host, configure the zone.
# zonecfg -z zone1

You will see the following system message:

zone1: No such zone configured

Use 'create' to begin configuring a new zone.

To create the zone zone1 on the new host, use the zonecfg command with the -a option and the zonepath on the new host.

zonecfg:zone1> create -a /zones/zone1
Commit the configuration and exit.
zonecfg:zone1> commit
zonecfg:zone1> exit
Attach the zone with a validation check.
# zoneadm -z zone1 attach

The system administrator is notified of required actions to be taken if either or both of the following conditions are present:

Required packages and patches are not present on the new machine.

The software levels are different between machines.

Note for Solaris 10 10/08: Attach the zone with a validation check and
update the zone to match a host running later versions of the dependent
packages or having a different machine class upon attach.
# zoneadm -z zone1 attach -u

Solaris 10 5/09 and later: Also use the -b option to back out specified patches, either official or IDR, during the attach.
# zoneadm -z zone1 attach -u -b IDR246802-01 -b 123456-08

Note that you can use the -b option independently of the -u option.
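
For example, to back out a single patch during attach without updating the rest of the zone (the patch ID is only an illustration):
# zoneadm -z zone1 attach -b 123456-08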

Boot the zone
# zoneadm -z zone1 boot

Login to the new zone
# zlogin -C zone1

[Connected to zone 'zone1' console]

Hostname: zone1

The whole process took approximately five minutes

For more information, see the Solaris ZFS and Zones documentation.

Monday Jan 12, 2009

Solaris iSCSI Server

This document describes how to build an iSCSI server based on the Solaris platform on a Sun X4500 server.



On the target (server)


The server has six controllers, each with eight disks. The storage pool is laid out to spread I/O evenly across the controllers and to build eight RAID-Z stripes of equal length.


zpool create -f tank \
raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
raidz c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
raidz c0t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
raidz c0t3d0 c1t3d0 c5t3d0 c6t3d0 c7t3d0 \
raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c7t5d0 \
raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 \
raidz c0t7d0 c1t7d0 c4t7d0 c6t7d0 c7t7d0 \
spare c0t1d0 c1t2d0 c4t3d0 c6t5d0 c7t6d0 c5t7d0
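
Once the command returns, the layout and health of the new pool can be verified:

zpool status tank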

After the pool is created, the zfs utility can be used to create a 50 GB ZFS volume.


zfs create -V 50g tank/iscsivol000
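
The new volume should now appear in the dataset list:

zfs list -t volume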

Enable the iSCSI target service


svcadm enable iscsitgt

Verify that the service is enabled.


svcs -a | grep iscsitgt


To view the list of commands, iscsitadm can be run without any options:


iscsitadm

Usage: iscsitadm -?,-V,--help
Usage: iscsitadm create [-?] <OBJECT> [-?] [<OPERAND>]
Usage: iscsitadm list [-?] <OBJECT> [-?] [<OPERAND>]
Usage: iscsitadm modify [-?] <OBJECT> [-?] [<OPERAND>]
Usage: iscsitadm delete [-?] <OBJECT> [-?] [<OPERAND>]
Usage: iscsitadm show [-?] <OBJECT> [-?] [<OPERAND>]

For more information, please see iscsitadm(1M)



To begin using the iSCSI target, a base directory needs to be created.


This directory is used to persistently store the target and initiator configuration that is added through the iscsitadm utility.


iscsitadm modify admin -d /etc/iscsi
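
The new base directory can be confirmed by displaying the admin properties:

iscsitadm show admin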




Once the volumes are created, they need to be exported to an initiator:


iscsitadm create target -b /dev/zvol/rdsk/tank/iscsivol000 target-label


Once the targets are created, iscsitadm's "list" command and "target" subcommand can be used to display the targets and their properties:


iscsitadm list target -v

On the initiator (client)


Install the iSCSI initiator client from http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en
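
If the initiator is a Solaris host rather than a Windows one, the built-in iscsiadm client can be used instead; the discovery address below is a placeholder for the X4500's IP:

iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi
iscsiadm list target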
