Amazingly easy - Solaris 10 as iSCSI target

Three steps were needed in our environment:

  1. mirror the boot disks (optional)
  2. create ZFS Volume(s)
  3. enable iSCSI

A few days ago I needed to set up an X4500 with Solaris 10 and iSCSI, to demonstrate to a customer how VMware ESX Server would work on Sun infrastructure.

A nice solution would have been to just use a STK 2510, which is officially on the VMware certified iSCSI hardware list (page 40). But as I didn't have one, I decided to just use an X4500.

In my opinion, iSCSI is a standard, so it shouldn't make a difference.

So I searched the Internet for the best/easiest way to do it and found a lot of explanations that were good, but there always seemed to be something "wrong" or missing. Probably I just found the wrong ones :-).

So I asked one of my colleagues to help me with it.
Constantin (the colleague I asked :-) ) had an easy idea how to do it.

I had prepared the X4500, freshly installed with Solaris 10 Update 4.

Constantin wasn't happy with my configuration, as the boot disks were not mirrored. Since the machine would be used beyond this demo as an iSCSI target, we decided to mirror the boot devices for better availability.

  1. mirror the boot disk

So the first step was to find out what the boot disk(s) are. On a laptop you would usually expect something like c0t0d0s0, but as the X4500 can hold up to 48 disks, the boot device depends on how the server is populated (see the X4500 Installation Guide).

To determine the correct disk, find the logical disk name for the bootable disks:
a. Open a terminal window by right-clicking your mouse and choosing the option Program > Terminal.
b. Determine the bootable disk in physical slot #0 for installing the operating system by typing:

# cfgadm | grep sata3/0

The system displays the logical disk name for the disk in physical slot #0 that is available for booting, for example:

sata3/0::dsk/c5t0d0 disk connected configured OK

c. (Optional) To determine the bootable disk in physical slot #1, type:

# cfgadm | grep sata3/4
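If you want to script this, the logical disk name can be pulled out of the cfgadm line with sed. A small sketch using the sample output shown above (the parsing is my own, not from the install guide):

```shell
# Extract the logical disk name (e.g. c5t0d0) from a cfgadm line.
# Sample line as shown by "cfgadm | grep sata3/0" above.
line='sata3/0::dsk/c5t0d0 disk connected configured ok'

# Everything between "::dsk/" and the next space is the logical disk name.
disk=$(printf '%s\n' "$line" | sed -n 's|.*::dsk/\([^ ]*\).*|\1|p')
printf '%s\n' "$disk"
```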

To set up the mirror it is easiest to just follow the documentation :-), but as you know, you always need to know where the specific information is, so here is the official doc: Solaris 10 Admin Guide, Chapter 11 (Volume Manager), "How to create a mirror from the boot disk".

Our boot disk was c5t0d0 and the new mirror disk was c5t4d0.

So we ran

# fdisk /dev/rdsk/c5t4d0

and deleted the existing partition and created a "Solaris 100%" partition (probably only necessary because the system had been used for something else before). Then with

# prtvtoc /dev/rdsk/c5t0d0s2 | fmthard -s - /dev/rdsk/c5t4d0s2

we easily "copied" the disk layout from the boot disk to the mirror disk.
Then we copied the boot sector to the mirror disk:

# fdisk -b /usr/lib/fs/ufs/boot /dev/rdsk/c5t4d0p0

Now we installed GRUB to the mirror disk

# /sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0

Then we needed the state database replicas (documented in Chapter 7 of the Solaris 10 Admin Guide):

# metadb -a -f -c 3 c5t0d0s7
# metadb -i
# metadb -a -c 4 /dev/dsk/c5t4d0s7

Then we created the mirror. As we also wanted swap mirrored, but didn't want to reboot right away, we deleted the swap devices:

# swap -d /dev/dsk/c5t0d0s1
# swap -d /dev/dsk/c5t4d0s1

Now the creation/configuration of the mirror

# metainit -f d11 1 1 c5t0d0s0
# metainit -f d12 1 1 c5t4d0s0
# metainit d10 -m d11
# metaroot d10

 A REBOOT IS NOW NECESSARY


# metattach d10 d12

Of course we need swap again:

# metainit -f d21 1 1 c5t0d0s1
# metainit -f d22 1 1 c5t4d0s1
# metainit d20 -m d21
# metattach d20 d22
# swap -a /dev/md/dsk/d20
# vi /etc/vfstab

(correct the swap entry)
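After the swap -a, the swap line in /etc/vfstab should point at the mirror metadevice instead of the physical slice. A sketch of what ours looked like afterwards (standard vfstab field order; device name from this setup):

```
/dev/md/dsk/d20  -  -  swap  -  no  -
```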

Now it is important to modify the GRUB menu so that you can boot into different environments, meaning booting either from the root disk or from the mirror.

So we

# vi /boot/grub/menu.lst

title alternate boot
root (hd1,0,a)
kernel /platform/i86pc/multiboot
module /boot/x86.miniroot-safe


  2. create ZFS Volume(s)

A great suggestion from Constantin was to make yourself an overview of the disks in your system. In our case this was even more important as 1) we could have up to 48 disks and 2) our X4500 was not fully populated.
So here is our disk "table" (rows = controller c, columns = target t; fully populated this would be 48 disks, not in our case :-( ), and we decided to set up ZFS pools of each type possible:

C\T   0    1    2    3    4    5    6    7
 0    M    Z    X    X    Z2   Z2   X    X
 1    M    Z    X    X    Z2   Z2   X    X
 4    M    Z    X    X    Z2   S    X    X
 5    B    Z    X    X    B    S    X    X
 6    M    Z    X    X    Z2   HS   X    X
 7    Z    Z2   X    X    Z2   HS   X    X

(e.g. row 5, column 0 is the boot disk c5t0d0, and row 5, column 4 is its mirror c5t4d0)

X = not available
B = boot disk
M = mirror
Z = RAID-Z (roughly RAID-5, with single parity)
Z2 = RAID-Z2 (roughly RAID-6, with double parity)
S = stripe
HS = hot spare
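The disk names behind the table can be enumerated with a small shell loop (controller numbers 0, 1, 4, 5, 6, 7 as in the table rows, eight targets each; just a helper sketch, not part of the original setup):

```shell
# Print every c<controller>t<target>d0 slot name from the table above.
count=0
for c in 0 1 4 5 6 7; do
  for t in 0 1 2 3 4 5 6 7; do
    printf 'c%st%sd0\n' "$c" "$t"
    count=$((count + 1))
  done
done
# A fully populated X4500 would have all 48 of these slots filled.
echo "total slots: $count"
```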

So, having finished thinking about how many disks we had and how best to use them, we started creating the zpools (where <disk#> is e.g. c0t0d0):

# zpool create mpool mirror <disk1> <disk2> mirror <disk3> <disk4>
# zpool create rzpool raidz <disk5> <disk6> ... <disk10>
# zpool create rz2pool raidz2 <disk11> ... <disk18>
# zpool create spool <disk19> <disk20>
# zpool add <poolname> spare <disk21>

Now creating the volumes:

# zfs create mpool/volumes

(this just creates the ZFS hierarchy)

# zfs create -sV 2047g mpool/volumes/vol1

creates a sparse 2 TB volume, regardless of whether you currently have enough disk space, as you can grow the pool later just by adding new disks.
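Why 2047g and not a round 2 TB? My assumption (the choice isn't explained above): several iSCSI initiators, including the VMFS layer in ESX 3.x, cap a single LUN at 2 TB, and 2047 GiB stays just below that limit. A quick arithmetic check:

```shell
# 2047 GiB volume size vs. a 2 TiB LUN limit, both in bytes.
volsize=$((2047 * 1024 * 1024 * 1024))
limit=$((2 * 1024 * 1024 * 1024 * 1024))

# The volume stays 1 GiB (1024 MiB) under the limit.
headroom_mib=$(( (limit - volsize) / 1024 / 1024 ))
echo "headroom: ${headroom_mib} MiB"
```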


  3. enable iSCSI

Well, this is the amazingly easy step:

# zfs set shareiscsi=on mpool/volumes/vol1

BTW, if you set shareiscsi=on on mpool/volumes, then all volumes beneath it will be available as iSCSI targets.

  4. some additional hints

# zfs get all mpool/volumes/vol1

gives you all information about that specific volume. Of course you can also use the command directly on the zpool, which then gives you information about all volumes.
It is also useful because it shows you things like:

  • usage
  • compression (which you can switch on to increase your capacity without adding disks :-) )
  • iscsi status
  • volume size (which you should always monitor against the ...)
  • available size (as this can be greater than the volume size)

# iscsitadm list target

lists all iSCSI targets configured on the system. It also gives you the IQN, which you need on the iSCSI initiator (client) side to map the correct iSCSI volume.
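To pick out just the IQN for the initiator side you can filter the output. Note the sample line below is only illustrative; the exact iscsitadm output format may differ on your system:

```shell
# Illustrative "iSCSI Name" line as iscsitadm might print it
# (assumed format; check the real output on your target).
sample='    iSCSI Name: iqn.1986-03.com.sun:02:abcd1234-vol1'

# Split on ": " and keep the value part.
iqn=$(printf '%s\n' "$sample" | awk -F': ' '/iSCSI Name/ {print $2}')
echo "$iqn"
```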

Another nice feature is that ZFS can snapshot filesystems, which gives you a point you can always roll back to, without needing additional physical storage up front. After the snapshot command, all deltas are simply written to disk as if nothing had happened.

# zfs snapshot mpool/volumes/vol1@<snapshotname>

If you want to work with that snapshot you have to make a clone, which just means that after the clone command all deltas written to the clone go to separate disk space. If this very short explanation isn't clear, there is a very good presentation at www.opensolaris.org.

# zfs clone mpool/volumes/vol1@<snapshotname> mpool/volumes/<clonename>

So that's it. I hope it helps you, at least it will be a good mnemonic for me :-).
Or at least it gives you an impression of what you can do with Solaris.

Thanks to Constantin, who saved me from searching for the right documentation and helped me set all of the above up, with explanations, within 2.5 hours.

BTW, are you interested in what we did with those iSCSI volumes?

We attached two VMware ESX 3.5 servers running on X6250 blades in a Sun Blade 6000.

Just a hint here: I had to invest some time to make this iSCSI volume a shared volume between both X6250s. I'm still not sure what suddenly made the difference, but if you attach the same iSCSI target to two different ESX servers as shared storage, do HBA rescans again and again, and perhaps check the info on Scott's blog, which was shown to me by the author (Joerg Moellenkamp) of www.c0t0d0s0.org.
