Leveraging ZFS on Solaris for WebSphere Applications Deployment

As you may already know, ZFS has been available on Solaris 10 since Update 2 (6/06), and it keeps getting better with each release; we are now at Update 4 (8/07). So I thought I'd put this brief introduction to ZFS out there for IBM WebSphere administrators.

If you're not familiar with ZFS, I recommend you take a quick look at the Sun or OpenSolaris websites.  In short, "ZFS presents a pooled storage model that completely eliminates the concept of volumes and the associated problems of partitions, provisioning, wasted bandwidth and stranded storage".  There is a great demo at the OpenSolaris community site.  [Note: ZFS is open source, just like other OpenSolaris features, and a community discussion is available there.  Did you know that Apple has included read-only ZFS support in the recently released OS X 10.5 Leopard?]

Let's just take a quick look at a Solaris system's available disks using the traditional "format" command. [The following examples were performed on a Sun x4200 AMD Opteron server.]

-bash-3.00# format       
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <DEFAULT cyl 8894 alt 2 hd 255 sec 63>
          /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@0,0
       1. c0t1d0 <SEAGATE-ST973401LSUN72G-0556-68.37GB>
          /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@1,0
       2. c0t2d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
          /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@2,0
       3. c0t3d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
          /pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@3,0
Specify disk (enter its number): ^C

To create a file system on Solaris, you'd traditionally go through the command above to select a disk, lay out partitions/slices, run "newfs"/"mkfs", and mount the result.  ZFS eliminates all of that and simplifies the process to the zpool and zfs commands.  The example below creates a ZFS storage pool named pool1 on the 68GB device c0t1d0; ZFS automatically creates and mounts a file system of the same name.

-bash-3.00# zpool list
no pools available
-bash-3.00# zpool create pool1 c0t1d0
-bash-3.00# zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool1     68G    88K  68.0G   0%  ONLINE  -
-bash-3.00# zpool status
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0

errors: No known data errors
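
For comparison, here is a rough sketch of that traditional route; the slice and mount point names are only illustrative, and you would also add an /etc/vfstab entry to make the mount persistent across reboots:

-bash-3.00# format                          # interactively select the disk and lay out slices
-bash-3.00# newfs /dev/rdsk/c0t1d0s0        # build a UFS file system on slice 0
-bash-3.00# mkdir /mnt1                     # create a mount point (name is just an example)
-bash-3.00# mount /dev/dsk/c0t1d0s0 /mnt1   # mount it (and edit /etc/vfstab for boot time)

With ZFS, the single "zpool create" above replaces all of those steps, and the pool is mounted at /pool1 automatically.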
 

If I want to make this pool bigger, I can simply add one or more devices to pool1, as shown below.  You'll see that the pool size grows from 68GB to 136GB after another 68GB device (c0t2d0) is added.

-bash-3.00# zpool add pool1 c0t2d0
-bash-3.00# zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool1    136G    91K   136G   0%  ONLINE  -
-bash-3.00# zpool status
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0
          c0t2d0    ONLINE       0     0     0

errors: No known data errors

Now, let me destroy this pool and create another one, called mpool1, as a two-way mirror of c0t1d0 and c0t2d0.  [Note: mpool1 uses two 68GB disks, but because the data is mirrored the usable pool size is 68GB, not 136GB.]

-bash-3.00# zpool destroy pool1
-bash-3.00# zpool list
no pools available
-bash-3.00# zpool create mpool1 mirror c0t1d0 c0t2d0
-bash-3.00# zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
mpool1    68G    89K  68.0G   0%  ONLINE  -
-bash-3.00# zpool status
  pool: mpool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mpool1      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0

errors: No known data errors

Now we can turn mpool1 from a two-way mirror into a three-way mirror by attaching a third device, as shown below.

-bash-3.00# zpool attach mpool1 c0t1d0 c0t3d0  
-bash-3.00# zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
mpool1    68G    90K  68.0G   0%  ONLINE  -
-bash-3.00# zpool status
  pool: mpool1
 state: ONLINE
 scrub: resilver completed with 0 errors on Mon Oct 29 16:39:13 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mpool1      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0

errors: No known data errors

And we can turn mpool1 back into a two-way mirror of c0t1d0 and c0t3d0 by removing c0t2d0 from the pool with the "detach" subcommand.

-bash-3.00# zpool detach mpool1 c0t2d0       
-bash-3.00# zpool status
  pool: mpool1
 state: ONLINE
 scrub: resilver completed with 0 errors on Mon Oct 29 16:43:13 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mpool1      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0

errors: No known data errors
 

Furthermore, we can create ZFS file systems within the pool we just created.

-bash-3.00# zfs create mpool1/IBMSW1
-bash-3.00# zfs create mpool1/IBMSW2
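
Each of these file systems can also get its own properties.  As a minimal sketch (the property values below are just examples, not recommendations), you could cap the space one WebSphere file system may consume and enable compression on the other:

-bash-3.00# zfs set quota=20G mpool1/IBMSW1         # limit IBMSW1 to 20GB of the pool
-bash-3.00# zfs set compression=on mpool1/IBMSW2    # compress data written to IBMSW2
-bash-3.00# zfs get quota,compression mpool1/IBMSW1 mpool1/IBMSW2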

You can check the file systems with standard Solaris commands such as df.

-bash-3.00# df
/ (/dev/dsk/c0t0d0s0 ):129268622 blocks 8041716 files
/devices (/devices ): 0 blocks 0 files
/system/contract (ctfs ): 0 blocks 2147483614 files
/proc (proc ): 0 blocks 16301 files
/etc/mnttab (mnttab ): 0 blocks 0 files
/etc/svc/volatile (swap ):17003904 blocks 1166846 files
/system/object (objfs ): 0 blocks 2147483494 files
/lib/libc.so.1 (/usr/lib/libc/libc_hwcap2.so.1):129268622 blocks 8041716 files
/dev/fd (fd ): 0 blocks 0 files
/tmp (swap ):17003904 blocks 1166846 files
/var/run (swap ):17003904 blocks 1166846 files
/mpool1 (mpool1 ):140377815 blocks 140377815 files
/mpool1/IBMSW1 (mpool1/IBMSW1 ):140377815 blocks 140377815 files
/mpool1/IBMSW2 (mpool1/IBMSW2 ):140377815 blocks 140377815 files

We may also change the mount point to something we like, such as /opt2.  Because mpool1/IBMSW1 and mpool1/IBMSW2 inherit their mount point from mpool1, they move along with it.

-bash-3.00# zfs set mountpoint=/opt2 mpool1       
-bash-3.00# df
/ (/dev/dsk/c0t0d0s0 ):129268624 blocks 8041717 files
/devices (/devices ): 0 blocks 0 files
/system/contract (ctfs ): 0 blocks 2147483614 files
/proc (proc ): 0 blocks 16301 files
/etc/mnttab (mnttab ): 0 blocks 0 files
/etc/svc/volatile (swap ):17003648 blocks 1166846 files
/system/object (objfs ): 0 blocks 2147483494 files
/lib/libc.so.1 (/usr/lib/libc/libc_hwcap2.so.1):129268624 blocks 8041717 files
/dev/fd (fd ): 0 blocks 0 files
/tmp (swap ):17003648 blocks 1166846 files
/var/run (swap ):17003648 blocks 1166846 files
/opt2 (mpool1 ):140377801 blocks 140377801 files
/opt2/IBMSW1 (mpool1/IBMSW1 ):140377801 blocks 140377801 files
/opt2/IBMSW2 (mpool1/IBMSW2 ):140377801 blocks 140377801 files
-bash-3.00# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
mpool1           156K  66.9G  27.5K  /opt2
mpool1/IBMSW1   24.5K  66.9G  24.5K  /opt2/IBMSW1
mpool1/IBMSW2   24.5K  66.9G  24.5K  /opt2/IBMSW2

Now, this filesystem is ready for IBM WebSphere deployment.  You can monitor the performance of ZFS as follows:

-bash-3.00# zpool iostat -v 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mpool1       158K  68.0G      0      1    104  2.47K
  mirror     158K  68.0G      0      1    104  2.47K
    c0t1d0      -      -      0      0    417  4.31K
    c0t3d0      -      -      0      0    106  3.83K
----------  -----  -----  -----  -----  -----  -----
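
Besides iostat, you can also have ZFS verify the data in the pool against its checksums with a scrub; this is the same "scrub:" line you saw in the zpool status output above (so far showing "none requested" or the resilver results).  A quick sketch:

-bash-3.00# zpool scrub mpool1     # runs in the background; safe to do on a live pool
-bash-3.00# zpool status mpool1    # the scrub: line reports progress and any errors found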
 

Give ZFS a try.  I recommend you use the latest Solaris 10 8/07 release.   You can use this with Solaris Zones, too.  
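
For example, here is a minimal sketch of delegating one of the file systems above to a zone with zonecfg; the zone name waszone is just a placeholder for whatever zone runs your WebSphere environment:

-bash-3.00# zonecfg -z waszone
zonecfg:waszone> add dataset
zonecfg:waszone:dataset> set name=mpool1/IBMSW1
zonecfg:waszone:dataset> end
zonecfg:waszone> commit
zonecfg:waszone> exit

After the zone is rebooted, mpool1/IBMSW1 is visible inside the zone and can be administered there with the zfs command.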
