
An Oracle blog about Exadata

March 4, 2009

Need shared storage fast? Use the Linux Target Framework

Rene Kundersma
Software Engineer



For all of us who need shared iSCSI storage for test or education purposes and don't want to install, for example, OpenFiler (which is still a great solution), there is now the Linux Target Framework (tgt).

In short, tgt consists of a daemon and utilities that allow you to quickly set up (shared) storage.

tgt can do more than this, but my example focuses purely on setting up shared iSCSI storage.

First, install the tgt software, which is available in Oracle Enterprise Linux 5.


[root@gridnode05 tmp]# rpm -i scsi-target-utils-0.0-0.20070620snap.el5.i386.rpm



Then, start the tgtd daemon.


[root@gridnode05 tmp]# service tgtd start

Starting SCSI target daemon: [ OK ]
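
To confirm the daemon is up and listening, a quick sanity check is to ask it to show its (still empty) target configuration:


[root@gridnode05 tmp]# tgtadm --lld iscsi --op show --mode target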



Export a new iSCSI target:


[root@gridnode05 tmp]# tgtadm --lld iscsi --op new \

--mode target --tid 2 -T 192.168.200.173:rkvol
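
As an aside: the target name used here is free-form, but iSCSI target names conventionally follow the IQN format (iqn.yyyy-mm.<reversed domain>:<identifier>), and some initiators are picky about it. A variant of the command above with a made-up example domain:


[root@gridnode05 tmp]# tgtadm --lld iscsi --op new \
--mode target --tid 2 -T iqn.2009-03.com.example:rkvol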



Create the storage to export; let's make it 100MB in size. This will be the actual storage that the initiator sees. In a normal situation you would use a real block device or an LVM volume (a minimal sketch follows after the dd output below); for a quick test, a file works fine:


[root@gridnode05 tmp]# dd if=/dev/zero of=/scratch/rk.vol bs=1M count=100

100+0 records in

100+0 records out

104857600 bytes (105 MB) copied, 0.367602 seconds, 285 MB/s
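
As noted above, a flat file is fine for a quick test, but a real block device behaves better. A minimal LVM sketch, assuming a volume group named vg0 already exists (vg0 is hypothetical here):


[root@gridnode05 tmp]# lvcreate -L 100M -n rkvol vg0


The resulting /dev/vg0/rkvol could then be passed to -b in the next step instead of /scratch/rk.vol.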



Add the "storage volume" to the target:


[root@gridnode05 tmp]# tgtadm --lld iscsi --op new --mode logicalunit \

--tid 2 --lun 1 -b /scratch/rk.vol



Allow all initiator clients to use the target:


[root@gridnode05 tmp]# tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL
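
The target, its LUN and the ACL can now be verified in one go; tid 2 should show LUN 1 backed by /scratch/rk.vol with an ACL of ALL:


[root@gridnode05 tmp]# tgtadm --lld iscsi --op show --mode target


If you would rather not open the target to the world, you can bind a specific initiator address instead of ALL (192.168.200.171 is a hypothetical client address):


[root@gridnode05 tmp]# tgtadm --lld iscsi --op bind --mode target --tid 2 -I 192.168.200.171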




On the client, install the iSCSI initiator:


[root@gridnode03 ~]# rpm -i /tmp/iscsi-initiator-utils-6.2.0.868-0.7.el5.i386.rpm



After installation, enable and start the iscsi service:



[root@gridnode03 ~]# chkconfig iscsi on


[root@gridnode03 ~]# service iscsi start

iscsid is stopped

Turning off network shutdown. Starting iSCSI daemon: [ OK ]

[ OK ]

Setting up iSCSI targets: iscsiadm: No records found!

[ OK ]



Discover the iSCSI target:


[root@gridnode03 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.200.175

192.168.200.175:3260,1 192.168.200.173:rkvol
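
Instead of restarting the whole service, you could also log in to the discovered target directly:


[root@gridnode03 ~]# iscsiadm -m node -T 192.168.200.173:rkvol \
-p 192.168.200.175 --login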



Restart the iscsi service and watch it log in to the target:


[root@gridnode03 ~]# service iscsi restart

Stopping iSCSI daemon: /etc/init.d/iscsi: line 33: 29176 Killed /etc/init.d/iscsid stop

iscsid dead but pid file exists

Turning off network shutdown. Starting iSCSI daemon: [ OK ]

[ OK ]

Setting up iSCSI targets: Logging in to [iface: default,

target: 192.168.200.173:rkvol, portal: 192.168.200.175,3260]

Login to [iface: default, target: 192.168.200.173:rkvol,

portal: 192.168.200.175,3260]: successful

[ OK ]
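
The active session can be confirmed with the session mode of iscsiadm:


[root@gridnode03 ~]# iscsiadm -m session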




Watch the block device appear in the messages file:


[root@gridnode03 ~]# tail -f /var/log/messages

Mar 4 13:24:19 gridnode03 last message repeated 2 times

Mar 4 13:24:19 gridnode03 iscsid: connection1:0 is operational now

Mar 4 13:24:19 gridnode03 kernel: SCSI device sda: 204800 512-byte hdwr sectors (105 MB)

Mar 4 13:24:19 gridnode03 kernel: sda: Write Protect is off

Mar 4 13:24:19 gridnode03 kernel: SCSI device sda: drive cache: write back

Mar 4 13:24:19 gridnode03 kernel: SCSI device sda: 204800 512-byte hdwr sectors (105 MB)

Mar 4 13:24:19 gridnode03 kernel: sda: Write Protect is off

Mar 4 13:24:19 gridnode03 kernel: SCSI device sda: drive cache: write back

Mar 4 13:24:19 gridnode03 kernel: sda: unknown partition table

Mar 4 13:24:19 gridnode03 kernel: sd 0:0:0:1: Attached scsi disk sda
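
Note that the kernel name (sda here) is not guaranteed to be stable across reboots or as more LUNs appear. udev also creates persistent links that encode the portal and target name, which are safer to reference in configuration:


[root@gridnode03 ~]# ls -l /dev/disk/by-path/ | grep iscsi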



Verify the size of the block device:



[root@gridnode03 ~]# fdisk -l /dev/sda

Disk /dev/sda: 104 MB, 104857600 bytes

4 heads, 50 sectors/track, 1024 cylinders

Units = cylinders of 200 * 512 = 102400 bytes

Disk /dev/sda doesn't contain a valid partition table

[root@gridnode03 ~]#
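
From here the device can be used like any local disk. A minimal single-client sketch: create one primary partition with fdisk (n, p, 1, accept the defaults, w), then put a filesystem on it. Keep in mind that ext3 is only safe when mounted on one node at a time; truly concurrent shared access needs a cluster filesystem such as OCFS2, or Oracle ASM.


[root@gridnode03 ~]# mkfs.ext3 /dev/sda1
[root@gridnode03 ~]# mkdir /mnt/rkvol
[root@gridnode03 ~]# mount /dev/sda1 /mnt/rkvol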

This seems like a very neat utility for obtaining shared storage for education or testing purposes.

Rene Kundersma

Oracle Expert Services, The Netherlands


Comments (4)
  • Marcelo Ochoa Thursday, March 5, 2009
    Hi Rene:
    We went one step further with this kind of sharing solution.
    I mean, our University doesn't have a big budget to buy hardware iSCSI storage :(
    But we replaced it using an iSCSI target/initiator to implement shared storage with OCFS; this shared storage holds Xen virtual disk images that can be accessed by a cluster of Xen servers.
    Using OCFS we can move (migrate) Xen virtual machines across the cluster nodes on the fly.
    Using 100Mb network cards we get 7Mb/s data transfer on the guest OS, compared to 10Mb/s using a local virtual disk. This is not a big difference, and we plan to build a local backbone for iSCSI using 1Gb cards/switches.
    Best regards, Marcelo.
  • guest Thursday, January 7, 2010
    I just submitted your site to Digg, it's that good. Thanks for the great content.
  • Burt Pramu Friday, January 8, 2010
    Pretty good post. I just stumbled upon your blog and wanted to say that I have really enjoyed reading your blog posts. Anyway, I'll be subscribing to your feed and I hope you post again soon.
  • Grant McWilliams Monday, January 25, 2010
    Did you do any performance testing to see how much of a hit you're getting?
    I'm curious about iSCSI, ATAoE, DRBD, CLVM etc. I think in this environment, where you only have one file accessed by one process (DomU) at a time, sharing out a RAID disk via NFS might actually be fine too. Only if something goes wrong during migration would you get the FS screwed up. Also, have you tried GFS2 or OCFS, or have you any need? I'm going to be implementing an iSCSI server exporting a RAID 10 to a Dom0, which in turn will slice it up into LVs for use in 42 DomUs. I'm hoping someone has already found the pitfalls before I get to them.