iSCSI SAN - Part 1: The Server

If you have OpenSolaris as part of your home network, you can put that box to work as a storage server accessible by the other machines on your home network, regardless of their operating system (Linux, MacOS, Windows).

There are generally two approaches you can take to create a storage server: Network-attached Storage (NAS) or Storage Area Network (SAN).

With NAS, both the storage and the file system are provided by the server and it is clear to the client that the storage is remote. I wrote up an example of this approach earlier in the year: Accessing OpenSolaris Shares from Windows. This is probably the more common approach for home use.

With SAN, only a block level device is provided by the storage server and it's up to the client to provide the file system. The storage appears to the client as a locally attached device. It's equivalent to plugging a USB drive into your computer, however the device happens to be remote and exposed over the network.

Another key difference between the two approaches is shared access to the data. The NAS approach allows multiple computers to access the share simultaneously. With SAN, since each client maintains its own file system (remember, the device appears as if it were locally attached), any attempt by another client to write to that block device would simply corrupt the data. So SAN is really ideal for extending a client's storage without having to add additional disks to the machine (instead, the disks are added to some other machine and exposed over the network via iSCSI).

In this blog post I'm going to walk you through setting up a SAN using the COMSTAR framework provided by OpenSolaris. I'm going to create the SAN and expose 3 virtual hard drives (or LUNs): one each for OpenSolaris, Linux and Windows clients.

Creating the SAN

Step 1: Install the Storage Server software (COMSTAR)

The storage-server package (32 MB) installs almost everything you need to get started. I'm also installing the iSCSI COMSTAR Port Provider, SUNWiscsit (which is now a proper dependency of the storage-server package in the development builds):

bleonard@os200906:~$ pfexec pkg install storage-server SUNWiscsit
DOWNLOAD                                    PKGS       FILES     XFER (MB)
Completed                                  23/23     957/957   32.99/32.99 

PHASE                                        ACTIONS
Install Phase                              1974/1974 
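
If you want to double-check that both packages landed before rebooting, you can list them with pkg (a quick verification I'm adding here, not part of the original steps; both should show up as installed):

bleonard@os200906:~$ pkg list storage-server SUNWiscsit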

A reboot is necessary because of this defect (which has also been fixed in the development builds):

bleonard@os200906:~$ pfexec reboot

Step 2: Create the Storage Pool

You could conceivably use an existing storage pool, but for this example I'm going to create one dedicated to the SAN. I have two 1 GB disks available to my machine, represented as c9t0d0 and c9t1d0. I'm going to create a 1 GB mirrored pool so that my data would survive if one of the disks were to fail:

bleonard@os200906:~$ pfexec zpool create san_pool mirror c9t0d0 c9t1d0

bleonard@os200906:~$ zpool list san_pool
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
san_pool  1008M   313K  1008M     0%  ONLINE  -

bleonard@os200906:~$ zpool status san_pool
  pool: san_pool
 state: ONLINE
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	san_pool    ONLINE       0     0     0
	  mirror    ONLINE       0     0     0
	    c9t0d0  ONLINE       0     0     0
	    c9t1d0  ONLINE       0     0     0

errors: No known data errors
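
If you don't have a couple of spare disks to dedicate to this, you can experiment with file-backed devices instead; ZFS accepts plain files as vdevs. This is just a sketch for testing (the file paths here are made up), not something I'd use for real data:

bleonard@os200906:~$ pfexec mkfile 1g /var/tmp/disk1 /var/tmp/disk2
bleonard@os200906:~$ pfexec zpool create san_pool mirror /var/tmp/disk1 /var/tmp/disk2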

Step 3: Create the Disk Volumes

Here we create the disk volumes that will be made available to the clients. Since we plan to have 3 clients, we will create one disk volume for each:

bleonard@os200906:~$ pfexec zfs create -V 300M san_pool/vol_osol
bleonard@os200906:~$ pfexec zfs create -V 300M san_pool/vol_lx
bleonard@os200906:~$ pfexec zfs create -V 300M san_pool/vol_win

bleonard@os200906:~$ zfs list -t volume
NAME                USED  AVAIL  REFER  MOUNTPOINT
rpool/dump          511M  26.1G   511M  -
rpool/swap          512M  26.5G   137M  -
san_pool/vol_lx     300M   376M    16K  -
san_pool/vol_osol   300M   376M    16K  -
san_pool/vol_win    300M   376M    16K  -
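
Note that each 300 MB volume reserves its full size in the pool up front. If you'd rather thin-provision and let space be allocated only as the clients actually write data, ZFS can create sparse volumes with the -s flag, for example:

bleonard@os200906:~$ pfexec zfs create -s -V 300M san_pool/vol_osol

Just keep in mind that with sparse volumes the pool can run out of space underneath the clients, so use them with care.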

Step 4: Create SCSI Logical Unit Numbers (LUNs) for the Disk Volumes

A LUN (logical unit) is the SCSI storage protocol entity responsible for performing the read and write operations on the disk volume. We create the LUN using the SCSI Block Disk administration utility, sbdadm. Here we're creating a LUN for each of our 3 disk volumes:

bleonard@os200906:~$ pfexec sbdadm create-lu /dev/zvol/rdsk/san_pool/vol_osol 

Created the following LU:

	      GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f01ea8050000004b17fc0a0001      314507264        /dev/zvol/rdsk/san_pool/vol_osol

bleonard@os200906:~$ pfexec sbdadm create-lu /dev/zvol/rdsk/san_pool/vol_lx 

Created the following LU:

	      GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f01ea8050000004b17fc0f0002      314507264        /dev/zvol/rdsk/san_pool/vol_lx

bleonard@os200906:~$ pfexec sbdadm create-lu /dev/zvol/rdsk/san_pool/vol_win 

Created the following LU:

	      GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f01ea8050000004b17fc120003      314507264        /dev/zvol/rdsk/san_pool/vol_win

To see the created LUNs:

bleonard@os200906:~$ pfexec sbdadm list-lu

Found 3 LU(s)

	      GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f01ea8050000004b17fc120003      314507264        /dev/zvol/rdsk/san_pool/vol_win
600144f01ea8050000004b17fc0f0002      314507264        /dev/zvol/rdsk/san_pool/vol_lx
600144f01ea8050000004b17fc0a0001      314507264        /dev/zvol/rdsk/san_pool/vol_osol
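
As an aside, if you later outgrow one of these volumes, I believe you can grow both the zvol and its logical unit in place; roughly like this, using the vol_win GUID from above (treat this as a sketch rather than something exercised in this walkthrough):

bleonard@os200906:~$ pfexec zfs set volsize=600M san_pool/vol_win
bleonard@os200906:~$ pfexec sbdadm modify-lu -s 600M 600144f01ea8050000004b17fc120003

And sbdadm delete-lu <GUID> will remove a logical unit you no longer need.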

Step 5: Add Views To the LUNs

A view specifies which clients can see the LUN. You'll typically want to restrict your LUN to only the clients that plan to use it. To keep this example simple I'm going to expose the LUN to all clients. For details on how to make the view more restricted, check out How to Make a Logical Unit Available to Selected Hosts.
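
For the curious, the restricted setup boils down to creating a host group containing the client's initiator name and then adding the view against that group. Something along these lines (the group name and initiator IQN below are made-up examples):

bleonard@os200906:~$ pfexec stmfadm create-hg linux-hosts
bleonard@os200906:~$ pfexec stmfadm add-hg-member -g linux-hosts iqn.2005-03.org.open-iscsi:client-example
bleonard@os200906:~$ pfexec stmfadm add-view -h linux-hosts 600144F01EA8050000004B17FC0F0002

Again, for this walkthrough I'm skipping that and exposing the LUNs to everyone.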

In the SCSI world, the storage provided by the server is referred to as the 'target'. Therefore, for this next step, we will use the SCSI Target Mode Framework administration utility, stmfadm, to create the views. Before we can proceed we'll need the GUIDs for our LUNs. These were listed above by the sbdadm list-lu command. The same information is also available via the stmfadm command as follows:

bleonard@os200906:~$ pfexec stmfadm list-lu -v
LU Name: 600144F01EA8050000004B17FC0A0001
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/san_pool/vol_osol
    View Entry Count  : 0
LU Name: 600144F01EA8050000004B17FC0F0002
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/san_pool/vol_lx
    View Entry Count  : 0
LU Name: 600144F01EA8050000004B17FC120003
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/san_pool/vol_win
    View Entry Count  : 0

Note that in the output above, the GUID is used as the LU Name. Whichever you call it, we need this value to add the views. Here we will create one view for each of our LUNs:

bleonard@os200906:~$ pfexec stmfadm add-view 600144F01EA8050000004B17FC0A0001
bleonard@os200906:~$ pfexec stmfadm add-view 600144F01EA8050000004B17FC0F0002
bleonard@os200906:~$ pfexec stmfadm add-view 600144F01EA8050000004B17FC120003 

And you can list the views as follows. Note that all hosts will be able to see them:

bleonard@os200906:~$ pfexec stmfadm list-view -l 600144F01EA8050000004B17FC0A0001
View Entry: 0
    Host group   : All
    Target group : All
    LUN          : 0
bleonard@os200906:~$ pfexec stmfadm list-view -l 600144F01EA8050000004B17FC0F0002
View Entry: 0
    Host group   : All
    Target group : All
    LUN          : 1
bleonard@os200906:~$ pfexec stmfadm list-view -l 600144F01EA8050000004B17FC120003
View Entry: 0
    Host group   : All
    Target group : All
    LUN          : 2

Step 6: Create the Target

In this final step we identify the server machine as a target device. Client machines (also known as initiators) will then be able to connect to the target and see the LUNs that have been exposed via the views. This involves two steps: starting the iSCSI target service and then creating the target:

A) Start the iSCSI target service and verify it started successfully:

bleonard@os200906:~$ pfexec svcadm enable -r target

bleonard@os200906:~$ svcs -l target
fmri         svc:/network/iscsi/target:default
name         iscsi target
enabled      true
state        online
next_state   none
state_time   Thu Dec 03 14:09:48 2009
logfile      /var/svc/log/network-iscsi-target:default.log
restarter    svc:/system/svc/restarter:default
dependency   require_any/error svc:/milestone/network (online)
dependency   require_all/none svc:/system/stmf:default (online)
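
As a quick sanity check (not strictly necessary), you can also confirm that the underlying STMF framework itself is up before creating the target:

bleonard@os200906:~$ pfexec stmfadm list-state

It should report the operational status as online.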

B) Create the target. To do so we'll use the iSCSI Target Administration utility, itadm:

bleonard@os200906:~$ pfexec itadm create-target
Target iqn.1986-03.com.sun:02:7de26418-fe5e-c386-a457-e0e06a16d723 successfully created

You can list the target as follows:

bleonard@os200906:~$ pfexec itadm list-target
TARGET NAME                                                  STATE    SESSIONS 
iqn.1986-03.com.sun:02:7de26418-fe5e-c386-a457-e0e06a16d723  online   0     
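
If you want one last confirmation that the server is actually listening for initiators, iSCSI uses TCP port 3260 by default, so something like this should show a listener (a quick check I'm adding, not part of the original steps):

bleonard@os200906:~$ netstat -an | grep 3260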

And that should be it.

For what it's worth, here's a rough graphic of how I picture this in my mind. The only difference is that the graphic shows the views restricting each client to a single LUN; based on what I configured above, the views do not restrict which LUNs the clients can see:


In Part 2 I'll address accessing this SAN from various clients (initiators).
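
And in case you ever want to tear the whole thing down, the steps essentially run in reverse. A rough sketch, using the names and GUIDs from above (repeat the remove-view/delete-lu pair for each LUN):

bleonard@os200906:~$ pfexec itadm delete-target -f iqn.1986-03.com.sun:02:7de26418-fe5e-c386-a457-e0e06a16d723
bleonard@os200906:~$ pfexec stmfadm remove-view -l 600144F01EA8050000004B17FC0A0001 -a
bleonard@os200906:~$ pfexec sbdadm delete-lu 600144F01EA8050000004B17FC0A0001
bleonard@os200906:~$ pfexec zpool destroy san_pool

Destroying the pool takes the volumes with it.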

Comments:

Cool,

this is exactly what I am looking. Looking forward for part2

Posted by Prudhvi Krishna Surapaneni on December 18, 2009 at 04:48 AM GMT #

ISCSI SANs work on TCP which is byte stream protocol,while data storage transfers are typically associated with fixed length data transfers for standard data transfers. When transferring large amount of data, this byte stream feature of ISCSI results in inefficient data transfer resulting in additional buffer overheads in receiving nodes.

Posted by zubehör on December 18, 2009 at 05:07 AM GMT #

Thank you so much for this article! I can hardly wait for part two. May I ask what software you used to create the graphic and where you got the icons for the PCs and hard drives?

Posted by John on December 18, 2009 at 08:41 AM GMT #

Hi John,

I hope to have part 2 published by the end of the day today. I used OpenOffice.org Draw to create the graphic. If it's useful, I'd love it if a real graphic artist were to polish it up - it's close but not exactly how I envision it in my mind. I found the graphics simply by doing a Google search for "computer" and "hard drive".

/Brian

Posted by Brian Leonard on December 18, 2009 at 09:11 AM GMT #

What happened to "zfs set shareiscsi=on zvol" ?

Posted by Mike on December 28, 2009 at 02:19 PM GMT #

I am interested in this type of scenario to store virtual machine disks for xenserver or vmware. Can you explain how the views might work in this scenario.

Posted by David on January 12, 2010 at 05:54 PM GMT #

nice work there on pointing out the difference between nas and san

Posted by nas server review on August 01, 2010 at 10:48 AM GMT #
