Attention with Lustre 1.6.7 and new configuration whitepaper
By Gunter Roeth on Apr 22, 2009
Lustre 1.6.7 had some serious bugs in the MDT server, and it was withdrawn from the download list. These bugs have been fixed, and a new version, 1.6.7.1, has been placed in the download area. If you downloaded or are using Lustre 1.6.7, please upgrade. Personally, I had a problem mounting the Lustre filesystem (for both the OSS and the MDT). After creating the volumes and installing the patches, here on a CentOS 5.2 machine, I could run
# mkfs.lustre --fsname lustre --mdt --mgs /dev/vg00/mdt
but then I could not mount it
# mount -t lustre /dev/vg00/mdt /mdt
mount.lustre: mount /dev/vg00/mdt at /mdt failed: No such device
Are the lustre modules loaded?
Check /etc/modprobe.conf and /proc/filesystems
Note 'alias lustre llite' should be removed from modprobe.conf
and /proc/filesystems contained only ldiskfs, with no lustre entry.
With Lustre 1.6.6, everything works fine.
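The hints that mount.lustre prints can be checked by hand. Below is a minimal diagnostic sketch, assuming a Linux host; the file paths are the standard ones mentioned in the error output, not anything specific to my setup:

```shell
#!/bin/sh
# Sketch of the checks mount.lustre suggests.
# On a healthy node, /proc/filesystems lists 'lustre' once the modules
# are loaded; on my broken 1.6.7 install, only 'ldiskfs' showed up.

if grep -qw lustre /proc/modules; then
    echo "lustre module is loaded"
else
    echo "lustre module missing; try: modprobe lustre"
fi

if grep -qw lustre /proc/filesystems; then
    echo "kernel knows the lustre filesystem type; the mount should work"
else
    echo "no 'lustre' entry in /proc/filesystems"
fi

# Per the mount.lustre hint, a stale 'alias lustre llite' line in
# /etc/modprobe.conf should be removed; this just reports it if present.
grep -n 'alias lustre llite' /etc/modprobe.conf 2>/dev/null || true
```

If the module is missing, a `modprobe lustre` (and a look at `dmesg` for load errors) is the next step before retrying the mount.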
Furthermore, there is an excellent whitepaper explaining the configuration and benchmarks of two different hardware setups. The first uses a Sun Fire X4250 OSS server connected to a Sun Storage J4200 array with twelve 300 GB SAS drives; the second uses a single Sun Fire X4540 server (Thor) with 48 internal 1 TB 7200 rpm SATA drives. The first setup configures its disks as RAID 0, while the second uses RAID 6. All configuration descriptions include an HA (high availability) variant. You can download the paper from here.