Adventures of a home NAS server with Solaris 11

I hadn't considered Solaris for a home application, but an engineer here, Keith Hargrove, told me about ZFS, which could make a decent RAID5-like system without costing more than my car. With the new low-cost terabyte hard drives out, it seemed everything had come together to make a nice homebrew NAS box.

Nothing is without its problems. Of course, I probably wouldn't have written about it if it had really been that easy. While I've used Solaris in the past, I've never had the role of administering, installing, or troubleshooting a Solaris system. I found that Solaris always has more than one way to do something, and it has gone through significant changes over time.

A home system is used by a small number of people who aren't going to come and go on a regular basis. There is a firewall but no managed switches, load balancers, domain servers, or other enterprise gear. It has to be quiet, unobtrusive, low power, and able to survive in a dusty environment. I could tolerate longer downtime and didn't need hot-swap drives or redundant power supplies.

We have four desktops and two laptops that run Windows XP or Linux with 80GB to 300GB of disk space each. I figured the NAS box would need 3TB to 4TB to cover the backups and general NAS duties. The build cost we were shooting for was around $1200.

How not to set up a system

I wanted to use a cheap desktop motherboard. I liked the Intel D975XBX2 "Bad Axe 2" as it has 8 SATA ports and support for ECC memory and dual core processors. There was a detail I missed: the front side bus (FSB) of the Bad Axe 2 is 1066MHz and the latest Intel processors are 1333MHz. It wouldn't POST with the newer dual core, and the hacks were less than successful, so the board went back. Lesson learned: check the compatibility guide carefully, and make sure it is the latest one.

It turned out the newer desktop motherboards were going with fewer SATA ports and no ECC support. Of all of Intel's latest desktop chipsets, only the X38 supports ECC, and many of the X38 motherboards are rumored to drop ECC capability. AMD desktop processors do support ECC and would probably have been a smoother ride, but I recall not being able to find an AMD motherboard with the number of SATA ports I wanted, so I went with the Intel ICH9R chipset.

The Server

Supermicro and Intel both make "entry level servers". The Intel S3200SHV uses the 3200 chipset (ICH9R) with 6 SATA ports, 1G Ethernet, and an LGA775 socket supporting Xeon and high-end multicore processors. The closest I was getting to a Xeon without the cost was the E8400 "Wolfdale" dual core 3GHz 45nm processor with 6MB L2 cache. I knew that Solaris and ZFS want a lot of memory, so I used four 2GB sticks of Crucial ECC memory; at $2 more per stick than non-ECC, ECC was cheap insurance. The hard drives I picked were 1TB Western Digital Caviar SATA green drives, and I used a 400GB Hitachi pulled from another system as my boot drive.

The Antec "300" steel midtower case is now one of my favorites: a basic case with good room for power cable routing and a nearly full-length column of 3.5" drive bays. To make the airflow work, I covered the side fan hole with a piece of acrylic plastic and replaced the back fan with a Scythe Kama PWM fan.

The plastic hold-downs on the stock Intel CPU fan were junk, so I used a Coolermaster 92mm PWM fan/heatsink in its place. The drives sit about 3/4" apart with great airflow. I replaced the stock SATA cables with 6" and 8" cables.

This unit ran remarkably cool, with most of the components only slightly warmer than ambient (24C). The only exceptions were the Hitachi boot disk at 28C and the BMC processor at 35C. ipmitool reports the ambient temp at 29C.

Here is the cost break down for my server:

S3200SHV motherboard $189
E8400 3GHz dual core processor $164
8GB Crucial DDR2 ECC SDRAM (CT2KIT25672AA667) $120
4x1TB Western Digital Caviar $520
400GB Hitachi Deskstar $65
PC Power and Cooling power supply $95
Antec "300" Steel Case $45
PWM CPU Fan $29
5xSATA cables $25
Scythe Kama 120mm PWM Fan $12
--------------------------------------------
Build cost $1264

Setting up the system

The BIOS was minimal compared to other desktop BIOSes, probably to make room for the "EFI shell", Intel's Extensible Firmware Interface. The EFI shell can be booted into and used to upload firmware for the BMC, BIOS, and FRU/SDR, or to remotely set any of the BIOS settings. The BMC, or Baseboard Management Controller, is Intel's lights-out management solution; it contains a Matrox VGA controller, sensor and fan control, and an ARM926 processor. I upgraded the firmware using the EFI shell with the firmware files on a USB stick, and I highly recommend upgrading all the firmware for this board. I couldn't find a BIOS option to turn off the display or LAN controller; my guess is it's designed that way since they're part of the BMC and have to stay up, which also meant I couldn't swap them out with different hardware to isolate problems. I loaded up Ubuntu server and ran that for a few days; it did fine and didn't reveal any hardware problems. Intel's Deployment Assistant didn't work for me (Intel tech support said it needed a SATA CDROM, not a USB drive), but since I was able to update the firmware through the EFI shell, I didn't pursue using the DA.

The compatibility guide had a "patch" to the /usr/bin/X11/Xserver file to set the depth and framebuffer bits per pixel. If this isn't done, Solaris will start X Windows on bootup and hang shortly after; the LiveCD autoconfigures X Windows and dies the same way on this board. This is unique to this particular board, and the patch does correct the problem.

Installing Solaris Express

I used the Solaris Express Community Edition release 103 on DVD and installed from my external USB DVD drive. The BIOS was left at defaults except that the SATA drives use AHCI, the boot order is set to your boot hard drive, and "boot now" points at the USB CDROM. It will load GRUB with "Solaris" or "Solaris with TTY" options; pick the default "Solaris".

You will then be asked:
 1. Solaris Interactive (default)
 2. Custom JumpStart
 3. Solaris Interactive Text (Desktop session)
 4. Solaris Interactive Text (Console session)
 5. Apply driver updates
 6. Single user shell

Do not go with the default here; it will not install ZFS for your boot filesystem. You can choose 3 or 4. I went with 4, as this board was having trouble with graphics. It will still eventually boot into X Windows.

The F2 function key is used for confirming selections in this installer. Most of it is self-explanatory, but I'll go through some of the issues I had with the install.

The DHCP client on Solaris behaves differently than on desktop systems. Usually DNS server addresses are pulled from the DHCP server, but Solaris requires this info to be entered manually. On the other hand, if you use DHCP, it takes the hostname from the DHCP server, in my case DHCPPC253. Adding the hostname to /etc/nodename, /etc/inet/hosts and /etc/hostname.e1000g0 should override this, but there were cases where it didn't (using SSL certificates, for one). I went back and just said no to DHCP. Provide the static IP address, hostname, subnet mask (255.255.255.0 for a 192.168.1.* network), IPv6 (no), and default router ("detect one" worked for me). If your name service is DNS like mine, then provide the IP addresses of the DNS servers. It will ask for the domain name (I put a period ".") and search domains (left blank). It will ask again for name services; if you are done, select none.

It also asks for the time, geographic location (many times), root password, boot file system (ZFS), pool name (tinkler), and boot drive. Your boot drive should be the default if the boot order was set up properly in the BIOS. It will ask about auto-ejecting the CD and auto-rebooting; because I needed to turn off X Windows first, I set these to manual. I installed everything and it took about 30 minutes.

Fixing the motherboard

X Windows can be disabled by stopping the graphical-login services. The svcadm command replaces the /etc/init.d scripts on Solaris. I can find these services by searching all services for graphical-login:

svcs -a | grep graphical-login

shows:

svc:/application/graphical-login/gdm:default
svc:/application/graphical-login/cde-login:default

These are the ones we'll disable when booting for the first time. I logged in as root quickly and shut off the X Windows services with:

svcadm disable gdm
svcadm disable cde-login

The svcadm changes are persistent across reboots, so X Windows stays off and performance has been rock solid.

svcs -lv <service name> will give the properties of <service name>. A service running successfully is "online", one that is disabled shows "disabled", and one with a problem is in "maintenance". If a service lands in maintenance, the first place to check is its logfile property:

tail `svcs -lv <service name> | sed -n '/^logfile / s/logfile//p'`
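Once the underlying cause is fixed, the service has to be told to leave the maintenance state. A quick sketch (gdm here is only an example service name):

# explain which services are not running and why
svcs -xv
# after fixing the cause, clear the maintenance state
svcadm clear gdm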

I wanted to administer the box remotely using PuTTY, and Solaris is wisely set up not to allow remote root login. This can be changed by editing /etc/ssh/sshd_config, changing the line "PermitRootLogin no" to yes, and then rebooting or running svcadm restart ssh. I changed this back once I had my users added.
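A small sketch of the check and restart (edit sshd_config with your editor of choice first):

# confirm the setting took
grep PermitRootLogin /etc/ssh/sshd_config
# restart sshd to pick up the change
svcadm restart ssh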

Solaris comes with a fairly complete complement of software, but the initial PATH doesn't expose all of it. I added /usr/sfw/bin and /usr/sfw/sbin to the PATH variable in /etc/default/login and /etc/default/su. This can also be done locally with your favorite shell login script, such as .bash_profile. This added emacs for me, and I could put away my vi cheat sheet. If you cannot find the program or service you need, you can go to www.blastwave.org for Solaris packages. Blastwave uses a program called pkg_get as its package installer, and you should add /opt/csw/bin to your PATH if you install from Blastwave. I would also suggest doing a

find / -name <command> -print

to see if a command is installed without a path to it.
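For a single user, the equivalent PATH change in ~/.bash_profile might look like this (a sketch assuming bash, and including the Blastwave directory mentioned above):

# pick up the bundled GNU tools and any Blastwave packages
PATH=$PATH:/usr/sfw/bin:/usr/sfw/sbin:/opt/csw/bin
export PATH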

The file system

I needed to set up my file systems and users next. ZFS uses the zpool and zfs commands: zpool handles the physical hard drive side of things, and zfs handles the file systems.

zpool needs the drives you're going to use, given in cXtXdXsX notation for the drive device address. We can get the drive names using the format command:

format < /dev/null

will dump out something like:
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 48638 alt 2 hd 255 sec 63>
/pci@0,0/pci8086,34d0@1f,2/disk@0,0
       1. c1t1d0 <ATA-WDC WD10EACS-00D-1A01-931.51GB>
/pci@0,0/pci8086,34d0@1f,2/disk@1,0
       2. c1t2d0 <ATA-WDC WD10EACS-00D-1A01-931.51GB>
/pci@0,0/pci8086,34d0@1f,2/disk@2,0
       3. c1t3d0 <ATA-WDC WD10EACS-00D-1A01-931.51GB>
/pci@0,0/pci8086,34d0@1f,2/disk@3,0
       4. c1t4d0 <ATA-WDC WD10EACS-00D-1A01-931.51GB>
/pci@0,0/pci8086,34d0@1f,2/disk@4,0
Specify disk (enter its number):
#

Disks 1 through 4 are the 1TB disks we want to use to create our pool. I want single-drive redundancy, so we'll tell zpool the virtual device (vdev) is raidz1.

To create a pool with the clever name of "tank", do the following command:
zpool create tank raidz1 c1t1d0 c1t2d0 c1t3d0 c1t4d0
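If you want to sanity check the layout first, zpool can show what it would build without actually creating anything:

# dry run: print the pool configuration but don't create it
zpool create -n tank raidz1 c1t1d0 c1t2d0 c1t3d0 c1t4d0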

You can see the status of the drives with:
zpool status

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0

errors: No known data errors
#

You can list the pools and their usage with

zpool list

NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   372G  7.49G   365G     2%  ONLINE  -
tank   3.62T   134G  3.49T     3%  ONLINE  -
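Since raidz1 only protects you if the data on the surviving disks is good, it's worth scrubbing the pool occasionally. A sketch of what I'd run periodically (from cron, say):

# verify every block against its checksum, repairing from parity as needed
zpool scrub tank
# check scrub progress and results
zpool status tank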

Creating Users

All our users will have their home directories under /export/home. We also want every user's files compressed; we can set a property on /export/home, which will be inherited by everything beneath it, to compress files automatically:
zfs set compression=on rpool/export/home
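You can confirm the property took, and later see how well compression is doing, with zfs get:

zfs get compression,compressratio rpool/export/home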

Since our client computers run Windows, we need our NAS to serve SMB shares. We could run old-style Samba with swat, smbpasswd, smb.conf and friends, but Solaris has a native CIFS service integrated with ZFS, and it is quite slick.

Before creating any users, add the following line to the end of the /etc/pam.conf file:
other password required pam_smb_passwd.so.1 nowarn

Instead of maintaining Unix and SMB passwords separately, this lets passwd set both.

Now enable the SMB service and join it to your workgroup. See the smbadm man page if you want to join a domain instead of a workgroup.
svcadm enable -r smb/server

Replace <workgroup-name> with the name of the workgroup your PCs use.
smbadm join -w <workgroup-name>
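As a quick sanity check that the service actually came up:

svcs smb/server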

I created a group, so that I could fine-tune permissions.
groupadd parents

Create your user:
useradd -d /export/home/bob -s /bin/bash -g parents -c "Bob" bob
passwd bob
<enter password here>
<twice>

Now create a zfs samba share for your user:
zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on tank/bob

tank/bob is the fileshare, and it needs to be told to mount at /export/home/bob:
zfs set mountpoint=/export/home/bob tank/bob

You can assign a name to your SMB share:
zfs set sharesmb=name=herbert tank/bob

If the sharesmb=name=<name> property is not set, the share will show up from Windows as tank_bob rather than herbert.

Set the permissions and ownership of the filesystem from the Solaris side of things:
chown -R bob:parents /export/home/bob
chmod -R 700 /export/home/bob

Rinse and repeat for the rest of your users.
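A rough sketch of wrapping those steps in a loop for the remaining users (run under bash; the user names here are made up):

for u in alice carol; do
    useradd -d /export/home/$u -s /bin/bash -g parents -c "$u" $u
    passwd $u
    zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on tank/$u
    zfs set mountpoint=/export/home/$u tank/$u
    chown -R $u:parents /export/home/$u
    chmod -R 700 /export/home/$u
done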

You can then see the file shares with:
sharemgr show -vp

For more information on this sharing mechanism, see http://www.genunix.org/wiki/index.php/Getting_Started_With_the_Solaris_CIFS_Service
If your permissions look odd from the Solaris point of view when you do ls -l, try ls -V to see the ACL properties.

You should then be able to go to a Windows box and add a network place with \\<hostname>\<username> to get to the share.
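From a Windows command prompt you can also map the share to a drive letter, something like this (using the herbert share name from above):

net use Z: \\<hostname>\herbert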

Whizzy ZFS features

You can take snapshots of the bob filesystem with:
zfs snapshot tank/bob@<snapshotname>

I use the date as the snapshot name.
zfs snapshot tank/bob@`date "+%Y%m%d"`

Contrary to popular documentation, zfs list doesn't show the snapshots by default. Try:
zfs list -t snapshot,filesystem
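Getting data back out of a snapshot is just as easy. A sketch, where the snapshot name and report.doc are made-up examples:

# each snapshot is visible read-only under the hidden .zfs directory
cp /export/home/bob/.zfs/snapshot/20081201/report.doc /export/home/bob/
# or revert the filesystem to its most recent snapshot (add -r to discard newer snapshots)
zfs rollback tank/bob@20081201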

Setting up a simple subversion repository (and webserver)

I also wanted to set up a Subversion repository. I imagine web services and Subversion will eventually be pulled into zfs properties too, but for now you can use the svnserve process or apache2 with the appropriate module.

I opted for apache2, which also gives me a web server if I need it.

I created a ZFS filesystem at /export/home/svn and turned it into a repository with:
zfs create tank/svn
zfs set mountpoint=/export/home/svn tank/svn
svnadmin create /export/home/svn
chown -R webservd:webservd /export/home/svn

Include the DAV svn module in apache2 by editing the file /etc/apache2/2.2/conf.d/modules-32.load. After the line:
LoadModule dav_module libexec/mod_dav.so
add this line:
LoadModule dav_svn_module libexec/mod_dav_svn.so

Next, edit the /etc/apache2/2.2/httpd.conf file. Put the correct email address in:
ServerAdmin <email address>

and static IP address in:
ServerName <static IP address>

Add the following stanza towards the end:
<Location /svn>
    DAV svn
    SVNPath /export/home/svn
    AuthType Basic
    AuthName "Subversion Repository"
    AuthUserFile /etc/apache2/2.2/svn.pw
    Require valid-user
</Location>
Save the httpd.conf file.

Create user passwords with
htpasswd -c /etc/apache2/2.2/svn.pw <user1>
htpasswd /etc/apache2/2.2/svn.pw <user2>

and then start apache2 with
svcadm enable apache22

You should be able to browse to the server's IP address and see the Apache welcome page (if nothing else was changed in the httpd.conf file).
You should also see the created repository at http://<ip address>/svn.
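From a client with Subversion installed, you should then be able to check out the (empty) repository; a sketch, where working-copy is just a directory name:

svn checkout http://<ip address>/svn working-copy --username <user1>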

IPMI

Since this motherboard has an Intelligent Platform Management Interface (IPMI) controller, you can use ipmitool to read the sensor data:

ipmitool sdr list

BB +1.8V SM      | 1.77 Volts        | ok
BB +3.3V         | 3.37 Volts        | ok
BB +3.3V STBY    | 3.22 Volts        | ok
BB +5.0V         | 5.01 Volts        | ok
Processor Vcc    | 1.12 Volts        | ok
BB Ambient Temp  | 28 degrees C      | ok
Chassis Fan 1    | 2240 RPM          | ok
Chassis Fan 4    | 1190 RPM          | ok
P1 Therm Margin  | -90 degrees C     | ok
Power Unit       | 0x00              | ok
IPMI Watchdog    | 0x00              | ok
Processor Status | 0x00              | ok
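ipmitool can also read the BMC's event log and the chassis power state, which is handy on a headless box:

# dump the system event log
ipmitool sel list
# show power state and fault indicators
ipmitool chassis status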

So far, this has worked out well as a storage server. I've gotten favorable comments like "wow, this is really fast" and haven't heard things like "wow, that thing sounds like a jet taking off".

I haven't tried zones yet, and those sound cool. Thanks to Keith and Johnson Earls for advice and help on getting this to work.

Comments:

Forget about the old packaging system on Blastwave; just set up Blastwave as an IPS package repository:

http://blogs.sun.com/observatory/entry/blastwave

http://blastwave.network.com:10000/

Posted by Wes W. on December 15, 2008 at 12:10 AM PST #

Thank you for a great post. I am trying to choose hardware for my own home, low-power, Solaris/ZFS file server. I wanted to go with Intel because it has better power management support than AMD. At the same time, it is hard to find a motherboard that supports ECC memory. Finally I came across the D975XBX2, but it looks like Solaris does not have a SATA driver for the onboard controller (Marvell 88SE6145).
S3200SHV looks promising (based on your post :) ). Can you clarify if it is possible to turn on the graphics system?
Were there any issues with driver support?
How is your system doing in terms of power consumption?
Thank you,
-Ilya

Posted by Ilya on February 16, 2009 at 08:09 AM PST #

I'm trying to make an OpenSolaris fileserver using the S3200SHV, and it's not going so well. I wanted to go with OpenSolaris 2008.11, and there's no text install. I ended up booting into text mode, enabling SSH, SSHing in with -X for X11 forwarding, and running gui-install to do the installation.

So I got it installed. Rebooting, I tried going into text mode because I knew the graphics wouldn't work. Of course, "text mode" loads the GUI just the same as regular. Great.

The system hangs about a minute after the GUI loads. I was able to quickly edit /usr/bin/X11/Xserver and apply the patch specified in the compatibility guide, but it does not stop the machine from hanging. It does sharpen the image, though.

This is very frustrating for a newbie like myself. Any help is greatly appreciated.

Posted by Craig Younkins on March 15, 2009 at 11:59 AM PDT #

I had similar hangs with the GUI. The graphics is a Matrox controller that is part of the LOM (lights-out management), and I don't think it can be turned off. Putting in a different video card didn't help either. I didn't plan to use X and just shut it off; this solved the same crashing problems that Craig was having.
To date, the machine has been running 24/7 since I wrote the article and has been very stable. The video controller just doesn't want to play well.

BTW, I had similar problems with Intel's GUI configuration software when it booted, and resorted to using the text mode/USB stick.

I haven't measured the power consumption yet, but I think it is quite reasonable.

Bob.

Posted by Bob Alkire on April 23, 2009 at 12:31 PM PDT #

Thanks for this. Used it for the subversion.
Great.

Posted by Andrew Watkins on February 21, 2011 at 08:22 PM PST #
