
Alan Hargreaves's Weblog

Quick and Dirty iSCSI between Solaris 11.1 targets and a Solaris 10 Initiator

Alan Hargreaves
Senior Principal Technical Support Engineer
I recently found myself with a support request that required some research into the results of removing vdevs from a pool, in a recoverable way, while operations were running on the pool.

My initial thought was to make the disk devices available to a guest LDOM from the control LDOM, but I found that Solaris and LDOMs coupled things too tightly for me to do something that had the potential to cause damage.

After a bit of thought, I realised that I also had two Solaris machines already configured in our UK-based dynamic lab that I could use to create some iSCSI targets to make available to the guest domain that I'd already built. I needed two hosts to provide the targets because, for reasons I really don't need to go into, I wanted an easy way to make the targets progressively unavailable in such a way that I could make them available again. Using two hosts meant that I could do this with a simple shutdown and boot of each target host, as sketched below.
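The idea, roughly (this is a sketch of the approach rather than a transcript from the lab; adjust the shutdown flags to taste):

target1# shutdown -y -g0 -i0   # halt target1; its LUN becomes unavailable to the initiator
# ... perform whatever pool operations need the device missing ...
# then boot target1 again (e.g. 'boot' at the ok prompt) and the LUN comes back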

The tricky part was that the LDOM I wanted to test on was running Solaris 10, while the two target machines were running Solaris 11.1.

I needed to reference the iSCSI administration documentation for both Solaris releases.

The boxes

Name        Address          Location    Solaris Release
target1     10.163.249.27    UK          Solaris 11.1
target2     10.163.246.122   UK          Solaris 11.1
initiator   10.187.56.220    Australia   Solaris 10

Setting up target1


Install the iSCSI packages
target1# pkg install group/feature/storage-server
target1# svcadm enable stmf

 
Create a small pool. Use a file as the backing store, since we don't have any extra disk attached to the machine and we really don't need much space, then make a small volume.
target1# mkfile 4g /var/tmp/iscsi
target1# zpool create iscsi /var/tmp/iscsi
target1# zfs create -V 1g iscsi/vol0

 
Make it available as an iSCSI target. Take note of the target name; we'll need it later.
target1# stmfadm create-lu /dev/zvol/rdsk/iscsi/vol0 
Logical unit created: 600144F000144FF8C1F0556D55660001
target1# stmfadm list-lu
LU Name: 600144F000144FF8C1F0556D55660001
target1# stmfadm add-view 600144F000144FF8C1F0556D55660001
target1# stmfadm list-view -l 600144F000144FF8C1F0556D55660001
target1# svcadm enable -r svc:/network/iscsi/target:default
target1# svcs -l iscsi/target
fmri svc:/network/iscsi/target:default
name iscsi target
enabled true
state online
next_state none
state_time Tue Jun 02 08:06:29 2015
logfile /var/svc/log/network-iscsi-target:default.log
restarter svc:/system/svc/restarter:default
manifest /lib/svc/manifest/network/iscsi/iscsi-target.xml
dependency require_any/error svc:/milestone/network (online)
dependency require_all/none svc:/system/stmf:default (online)
target1# itadm create-target
Target iqn.1986-03.com.sun:02:e9d04086-3bd7-e8a7-e5b6-ac91ba0d4394 successfully created
target1# itadm list-target -v
TARGET NAME STATE SESSIONS
iqn.1986-03.com.sun:02:e9d04086-3bd7-e8a7-e5b6-ac91ba0d4394 online 0
alias: -
auth: none (defaults)
targetchapuser: -
targetchapsecret: unset
tpg-tags: default

Setting up target2


Pretty much the same as what we just did on target1.
Install the iSCSI packages
target2# pkg install group/feature/storage-server
target2# svcadm enable stmf

 
Create a small pool. Use a file as the backing store, since we don't have any extra disk attached to the machine and we really don't need much space, then make a small volume.
target2# mkfile 4g /var/tmp/iscsi
target2# zpool create iscsi /var/tmp/iscsi
target2# zfs create -V 1g iscsi/vol0

 
Make it available as an iSCSI target. Take note of the target name; we'll need it later.
target2# stmfadm create-lu /dev/zvol/rdsk/iscsi/vol0
Logical unit created: 600144F000144FFB7899556D5B750001
target2# stmfadm add-view 600144F000144FFB7899556D5B750001
target2# stmfadm list-view -l 600144F000144FFB7899556D5B750001
View Entry: 0
Host group : All
Target Group : All
LUN : Auto
target2# svcadm enable -r svc:/network/iscsi/target:default
target2# svcs -l iscsi/target
fmri svc:/network/iscsi/target:default
name iscsi target
enabled true
state online
next_state none
state_time Tue Jun 02 08:31:01 2015
logfile /var/svc/log/network-iscsi-target:default.log
restarter svc:/system/svc/restarter:default
manifest /lib/svc/manifest/network/iscsi/iscsi-target.xml
dependency require_any/error svc:/milestone/network (online)
dependency require_all/none svc:/system/stmf:default (online)
target2# itadm create-target
Target iqn.1986-03.com.sun:02:6cc0044c-3d29-6acd-a873-cfc80b91e52d successfully created
target2# itadm list-target -v
TARGET NAME STATE SESSIONS
iqn.1986-03.com.sun:02:6cc0044c-3d29-6acd-a873-cfc80b91e52d online 0
alias: -
auth: none (defaults)
targetchapuser: -
targetchapsecret: unset
tpg-tags: default

Setting up initiator


Now make the targets statically available on the initiator. Note that we use the target names we got from the itadm create-target step on each target host. We also need to provide the IP address of the machine hosting each target, as we are attaching them statically for simplicity.
initiator# iscsiadm add static-config iqn.1986-03.com.sun:02:e9d04086-3bd7-e8a7-e5b6-ac91ba0d4394,10.163.249.27
initiator# iscsiadm add static-config iqn.1986-03.com.sun:02:6cc0044c-3d29-6acd-a873-cfc80b91e52d,10.163.246.122
initiator# iscsiadm modify discovery --static enable
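
At this point it's worth checking that the static entries took; something like the following on the Solaris 10 initiator should list both configured targets (output omitted here):
initiator# iscsiadm list static-config
initiator# iscsiadm list target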

 
Now we need to get the device nodes created.
initiator# devfsadm -i iscsi
initiator# format < /dev/null
Searching for disks...done
c1t600144F000144FF8C1F0556D55660001d0: configured with capacity of 1023.75MB
c1t600144F000144FFB7899556D5B750001d0: configured with capacity of 1023.75MB
AVAILABLE DISK SELECTIONS:
       0. c0d0
          /virtual-devices@100/channel-devices@200/disk@0
       1. c0d1
          /virtual-devices@100/channel-devices@200/disk@1
       2. c0d2
          /virtual-devices@100/channel-devices@200/disk@2
       3. c1t600144F000144FF8C1F0556D55660001d0
          /scsi_vhci/ssd@g600144f000144ff8c1f0556d55660001
       4. c1t600144F000144FFB7899556D5B750001d0
          /scsi_vhci/ssd@g600144f000144ffb7899556d5b750001
Specify disk (enter its number):

 
Great, we've found them. Let's make a mirrored pool.
initiator# zpool create tpool mirror c1t600144F000144FF8C1F0556D55660001d0 c1t600144F000144FFB7899556D5B750001d0
initiator# zpool status -v tpool
  pool: tpool
 state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        tpool                                      ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            c1t600144F000144FF8C1F0556D55660001d0  ONLINE       0     0     0
            c1t600144F000144FFB7899556D5B750001d0  ONLINE       0     0     0

errors: No known data errors


I was then in a position to go and do the testing that I needed to do.
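
For the curious, here is a sketch of the sort of exercise this setup makes possible (illustrative only, not the actual support testing): shut down target1 from its console, watch tpool degrade while remaining writable, then boot target1 and bring its half of the mirror back.
initiator# zpool status tpool     # one side of mirror-0 reports as unavailable
initiator# zpool online tpool c1t600144F000144FF8C1F0556D55660001d0
initiator# zpool status tpool     # resilver runs and the pool returns to ONLINE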
