
Recent Posts

Backing up MySQL using ZFS Snapshots and Clones

This blog post serves as a follow-up to Deploying MySQL over NFS using the Oracle ZFS Storage Appliance and Deploying MySQL over Fibre Channel / iSCSI using the Oracle ZFS Storage Appliance. The snapshot and clone features of the Oracle ZFS Storage Appliance provide a convenient mechanism to back up, restore, and fork a MySQL database. A ZFS snapshot is a read-only copy of a filesystem that is created instantaneously while initially occupying zero additional space. As the data of a filesystem changes, the differences are tracked inside the snapshot. A ZFS clone is a writable snapshot that can be used to branch off an existing filesystem without modifying the original contents.

Creating a MySQL Snapshot

1. From the MySQL console, disable autocommit:

mysql> set autocommit=0;

2. From the MySQL console, close and lock all database tables with a read lock. This helps to create a consistent snapshot.

mysql> flush tables with read lock;

3. From the ZFS console, create a project-level snapshot of the MySQL project:

zfs:> shares select mysql snapshots snapshot test_snapshot

4. From the MySQL console, unlock all tables to restore the databases to normal operation:

mysql> unlock tables;

Restoring a MySQL Snapshot

1. From the MySQL console, close and lock all database tables with a read lock. This helps to create a consistent restore point.

mysql> flush tables with read lock;

2. From the ZFS console, roll back the previously created snapshot for each InnoDB share:

zfs:> confirm shares select mysql select innodb-data snapshots select test_snapshot rollback
zfs:> confirm shares select mysql select innodb-log snapshots select test_snapshot rollback

3. From the MySQL console, unlock all tables to restore the databases to normal operation:

mysql> unlock tables;

Cloning a MySQL Snapshot

1. From the ZFS console, create a clone of both InnoDB snapshots:

zfs:> shares select mysql select innodb-data snapshots select test_snapshot clone innodb-data-clone
zfs:shares mysql/innodb-data-clone (uncommitted clone)> commit
zfs:> shares select mysql select innodb-log snapshots select test_snapshot clone innodb-log-clone
zfs:shares mysql/innodb-log-clone (uncommitted clone)> commit

2. Mount the newly created shares depending on which protocol you intend to deploy. Refer to the two previous blog posts on this subject.

3. Update /etc/my.cnf with the new InnoDB locations:

innodb_data_home_dir = /path/to/innodb-data-clone
innodb_log_group_home_dir = /path/to/innodb-log-clone

4. Start a new instance of MySQL with the updated my.cnf entries. Any further updates will have no effect on the original database.

Sample REST script for creating a MySQL snapshot

The following Python script shows how to automate this process. It leverages the REST API of the Oracle ZFS Storage Appliance.

#!/usr/bin/python

import MySQLdb
import datetime
import json
import sys
import urllib2

def zfs_snapshot():
    user = "zfssa_username"
    password = "zfssa_password"
    host = "zfssa_ip_address"
    path = "/api/storage/v1/pools/poolname/projects/mysql/snapshots"
    url = "https://" + host + ":215" + path
    properties = {"name": "MySQL-snapshot"}
    post_data = json.dumps(properties)
    request = urllib2.Request(url, post_data)
    request.add_header("Content-type", "application/json")
    request.add_header("X-Auth-User", user)
    request.add_header("X-Auth-Key", password)
    response = urllib2.urlopen(request)

def main():
    mysql_server = "localhost"
    mysql_user = "root"
    mysql_pass = ""
    try:
        connection = MySQLdb.connect(host=mysql_server, user=mysql_user, passwd=mysql_pass)
    except MySQLdb.OperationalError:
        print "Could not connect to the MySQL server"
        sys.exit(-5)
    print "Connected to the MySQL server"
    start_time = datetime.datetime.now()
    backup = connection.cursor()
    backup.execute("set autocommit=0;")
    backup.execute("flush tables with read lock;")
    print "Creating project snapshot \"MySQL-snapshot\""
    zfs_snapshot()
    backup.execute("unlock tables;")
    finish_time = datetime.datetime.now()
    total_time = finish_time - start_time
    print "Completed in ", total_time

if __name__ == "__main__":
    main()


Deploying MySQL over Fibre Channel / iSCSI using the Oracle ZFS Storage Appliance

This blog post serves as a follow-up to Deploying MySQL over NFS using the Oracle ZFS Storage Appliance. The benefits remain the same, so this discussion will focus entirely on configuring your Oracle ZFS Storage Appliance and database server.

Configuring the Oracle ZFS Storage Appliance

Depending on which block protocol you plan on using, set up target and initiator groups per the following documentation: Fibre Channel, iSCSI. These instructions assume you are using a target group called 'mysql-tgt' and an initiator group called 'mysql-init'.

1. From the ZFS controller's CLI, create a project called 'mysql':

zfs:> shares project mysql

2. Set logbias to latency to leverage write flash capabilities:

zfs:shares mysql (uncommitted)> set logbias=latency
                      logbias = latency (uncommitted)

3. Commit the settings:

zfs:shares mysql (uncommitted)> commit

4. Select the 'mysql' project:

zfs:> shares select mysql

5. Create a LUN called 'innodb-data' to hold data files:

zfs:shares mysql> lun innodb-data

6. Set the volume block record size to 16K to match InnoDB's standard page size:

zfs:shares mysql/innodb-data (uncommitted)> set volblocksize=16K
                 volblocksize = 16K (uncommitted)

7. Set the volume size to a value large enough to accommodate your database:

zfs:shares mysql/innodb-data (uncommitted)> set volsize=1T
                      volsize = 1T (uncommitted)

8. Set the initiator and target groups, then commit:

zfs:shares mysql/innodb-data (uncommitted)> set initiatorgroup=mysql-init
               initiatorgroup = mysql-init (uncommitted)
zfs:shares mysql/innodb-data (uncommitted)> set targetgroup=mysql-tgt
                  targetgroup = mysql-tgt (uncommitted)
zfs:shares mysql/innodb-data (uncommitted)> commit

9. Create a LUN called 'innodb-log' to hold the redo logs:

zfs:shares mysql> lun innodb-log

10. Set the volume block record size to 128K:

zfs:shares mysql/innodb-log (uncommitted)> set volblocksize=128K
                 volblocksize = 128K (uncommitted)

11. Set the volume size to a value large enough to accommodate your redo logs, then commit:

zfs:shares mysql/innodb-log (uncommitted)> set volsize=256G
                      volsize = 256G (uncommitted)
zfs:shares mysql/innodb-log (uncommitted)> commit

Configuring your database server

1. Create a directory structure to contain the MySQL database:

# mkdir -p /mysql/san/innodb-data
# mkdir -p /mysql/san/innodb-log
# chown -R mysql:mysql /mysql/san

2. If using iSCSI, log in to the ZFS Storage Appliance:

# iscsiadm -m discovery -t sendtargets -p zfssa-ip-address
# iscsiadm -m node -p zfssa-ip-address --login

3. The multipath configuration held in /etc/multipath.conf should contain the following:

defaults {
    find_multipaths      yes
    user_friendly_names  yes
}
devices {
    device {
        vendor                "SUN"
        product               "ZFS Storage 7430"
        getuid_callout        "/lib/udev/scsi_id --page=0x83 --whitelisted --device=/dev/%n"
        prio                  alua
        hardware_handler      "1 alua"
        path_grouping_policy  group_by_prio
        failback              immediate
        no_path_retry         600
        rr_min_io             100
        path_checker          tur
        rr_weight             uniform
        features              "0"
    }
}

4. Discover the LUNs with multipath. This may require a restart of the multipath service to pick up the new configuration changes.

# multipath -ll
mpatha (3600144f0fa2f948b0000537cdb970008) dm-2 SUN,ZFS Storage 7430
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 0:0:0:0 sda 8:0  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 0:0:1:0 sdc 8:32 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 1:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 1:0:1:0 sdg 8:96 active ready running
mpathb (3600144f0fa2f948b0000537cdbc10009) dm-3 SUN,ZFS Storage 7430
size=256G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 0:0:0:1 sdb 8:16 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 0:0:1:1 sdd 8:48 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 1:0:0:1 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 1:0:1:1 sdh 8:112 active ready running

5. Align each LUN on a 32-sector boundary, specifying multiples of 256 for each partition. This is documented more extensively in Aligning Partitions to Maximize Performance, page 31.

# fdisk -u -S 32 /dev/dm-2
# fdisk -u -S 32 /dev/dm-3

6. Build an XFS filesystem* on top of each LUN, using the full device path of the first partition:

# mkfs.xfs /dev/mapper/mpathap1
# mkfs.xfs /dev/mapper/mpathbp1

* XFS is preferred on Linux. ZFS is preferred on Solaris.

7. Mount each LUN either from a shell or automatically from /etc/fstab:

# mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k /dev/mapper/mpathap1 /mysql/san/innodb-data
# mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k /dev/mapper/mpathbp1 /mysql/san/innodb-log

That's it. Refer to the previous blog post to understand how to set up the MySQL database once these filesystems are mounted.
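For the /etc/fstab route mentioned in step 7, a pair of entries matching the mount commands above might look like the following sketch (the mpathap1/mpathbp1 device names are whatever your multipath layout produced):

```shell
# /etc/fstab — sketch only; device names depend on your multipath configuration
/dev/mapper/mpathap1  /mysql/san/innodb-data  xfs  noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k  0 0
/dev/mapper/mpathbp1  /mysql/san/innodb-log   xfs  noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k  0 0
```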
In a future blog post, I'll cover how to backup your MySQL database using snapshots and cloning.


Deploying MySQL over NFS using the Oracle ZFS Storage Appliance

The Oracle ZFS Storage Appliance supports NFS, iSCSI, and Fibre Channel access to a MySQL database. By consolidating the storage of MySQL databases onto the Oracle ZFS Storage Appliance, the following goals can be achieved:

1. Expand and contract storage space easily in a pooled environment.
2. Focus high-end caching on the Oracle ZFS Storage Appliance to simplify provisioning.
3. Eliminate network overhead by leveraging InfiniBand and Fibre Channel connectivity.

This blog post will focus specifically on NFS. A follow-up post will discuss iSCSI and Fibre Channel.

Configuring the Oracle ZFS Storage Appliance

Each database should be contained in its own project.

1. From the ZFS controller's CLI, create a project called 'mysql':

zfs:> shares project mysql

2. Set logbias to latency to leverage write flash capabilities:

zfs:shares mysql (uncommitted)> set logbias=latency
                      logbias = latency (uncommitted)

3. Set the default user and default group to mysql:

zfs:shares mysql (uncommitted)> set default_user=mysql
                 default_user = mysql (uncommitted)
zfs:shares mysql (uncommitted)> set default_group=mysql
                default_group = mysql (uncommitted)

Note: If a name service such as LDAP or NIS is not being used, change these to the actual UID and GID found in /etc/passwd and /etc/group on the host.

4. Disable 'Update access time on read':

zfs:shares mysql> set atime=false
                      atime = false (uncommitted)

5. Commit the changes:

zfs:shares mysql> commit

6. Create a filesystem called 'innodb-data' to hold data files:

zfs:shares mysql> filesystem innodb-data

7. Set the database record size to 16K to match InnoDB's standard page size:

zfs:shares mysql/innodb-data (uncommitted)> set recordsize=16K
                   recordsize = 16K (uncommitted)
zfs:shares mysql/innodb-data (uncommitted)> commit

8. Create a filesystem called 'innodb-log' to hold redo logs:

zfs:shares mysql> filesystem innodb-log

9. Set the database record size to 128K and commit:

zfs:shares mysql/innodb-log (uncommitted)> set recordsize=128K
                   recordsize = 128K (uncommitted)
zfs:shares mysql/innodb-log (uncommitted)> commit

Configuring the server

This example assumes a Linux server will be running the MySQL database. The commands are roughly the same for a Solaris machine.

1. Create a directory structure to contain the MySQL database:

# mkdir -p /mysql/nas/innodb-data
# mkdir -p /mysql/nas/innodb-log
# chown -R mysql:mysql /mysql/nas

2. Each filesystem provisioned on the Oracle ZFS Storage Appliance should be mounted with the following options:

rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock

3. These should be supplied in /etc/fstab in order to be mounted automatically at boot, or they can be run manually from a shell like so:

# mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock zfs:/export/innodb-data /mysql/nas/innodb-data
# mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock zfs:/export/innodb-log /mysql/nas/innodb-log

Configuring the MySQL database

The option file my.cnf should be modified to offload the database onto the Oracle ZFS Storage Appliance and to make additional tunings for optimal performance. MySQL should be stopped before changing this file and started again afterward:

# service mysql stop
# service mysql start

Important my.cnf changes

1. innodb_doublewrite = 0

A doublewrite buffer is necessary in the event of partial page writes. However, the transactional nature of ZFS guarantees that partial writes will never occur, so this can be safely disabled.

2. innodb_flush_method = O_DIRECT

Ensures that InnoDB calls directio() instead of fcntl() for the data files. This allows the data to be accessed without OS-level buffering and read-ahead.

3. innodb_data_home_dir = /path/to/innodb-data

The data filesystem for InnoDB should be located on its own share or LUN on the Oracle ZFS Storage Appliance.

4. innodb_log_group_home_dir = /path/to/innodb-log

The log filesystem for InnoDB should be located on its own share or LUN on the Oracle ZFS Storage Appliance.

5. innodb_data_file_path = ibdatafile:1G:autoextend

This configures a single large tablespace for InnoDB. The ZFS controller is then responsible for managing the growth of new data, which eliminates the complexity needed for controlling multiple tablespaces.

You can also download the following example my.cnf file to get started.

Testing with Sysbench

Sysbench is a handy benchmark tool for generating a database workload. To fill a test database, run the following command:

# sysbench \
  --test=oltp \
  --oltp-table-size=1000000 \
  --mysql-db=test \
  --mysql-user=root \
  --mysql-password= \
  prepare

Once filled, create an OLTP workload with the following command and parameters:

# sysbench \
  --test=oltp \
  --oltp-table-size=1000000 \
  --oltp-test-mode=complex \
  --oltp-read-only=off \
  --num-threads=128 \
  --max-time=3600 \
  --max-requests=0 \
  --mysql-db=test \
  --mysql-user=root \
  --mysql-password= \
  --mysql-table-engine=innodb \
  run

Analytics

The Analytics feature of the Oracle ZFS Storage Appliance offers an unprecedented level of observability into your database workload. It can assist in identifying performance bottlenecks based on the utilization of your network, hardware, and storage protocols. Its drill-down functionality can also narrow the focus of a MySQL instance to a workload's operation type (read/write), I/O pattern (sequential/random), response latency, and I/O size for both the data and log files. At any point in time, a DBA can track a database instance at an incredibly granular level.

Once you have a single database installed, you can try creating more instances to analyze your I/O patterns.
Run separate sysbench processes for each database and then use Analytics to monitor the differences between workloads.
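To recap the five my.cnf changes in one place, a minimal sketch of the relevant fragment follows. The paths match the NFS mount points used in this post; everything else here is illustrative, not the downloadable example file itself:

```ini
[mysqld]
# InnoDB offloaded onto the two ZFS shares created above
innodb_data_home_dir      = /mysql/nas/innodb-data
innodb_log_group_home_dir = /mysql/nas/innodb-log
innodb_data_file_path     = ibdatafile:1G:autoextend
innodb_doublewrite        = 0        # safe: ZFS never performs partial writes
innodb_flush_method       = O_DIRECT # skip OS-level buffering and read-ahead
```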


Configuration Applications / OS's for 1MB ZFS Block Sizes

The latest release of the ZFS Storage Appliance software, 2013.1.1.1, introduces 1MB block sizes for shares. This is a deferred update that can only be enabled inside of Maintenance → System. You can then edit individual filesystems or LUNs within 'Shares' to enable the 1MB support (database record size).

This new feature may need additional tweaking on all connected servers to fully realize significant performance gains. Most operating systems currently do not support a 1MB transfer size by default. This can be very easily spotted within Analytics by breaking down your expected protocol by I/O size. As an example, let's look at a Fibre Channel workload being generated by an Oracle Linux 6.5 server.

Example

The I/O size is sitting at 501K, a very strange number that's eerily close to 512K. Why is this a problem? Well, take a look at our backend disks: our disk I/O size (block size) is heavily fragmented! This causes our overall throughput to nosedive. 2GB/s is okay, but we can do better if our buffer size is 1MB on the host side.

Fixing the problem (Fibre Channel)

Solaris:

# echo 'set maxphys=1048576' > /etc/system

Oracle Linux 6.5 uek3 kernel (previous releases do not support 1MB sizes for multipath):

# echo 1024 > /sys/block/dm*/queue/max_sectors_kb

or create a permanent udev rule:

# vi /etc/udev/rules.d/99-zfssa.rules
ACTION=="add", SYSFS{vendor}=="SUN", SYSFS{model}=="*ZFS*", ENV{ID_FS_USAGE}!="filesystem", ENV{ID_PATH}=="*-fc-*", RUN+="sh -c 'echo 1024 > /sys$DEVPATH/queue/max_sectors_kb'"

Windows:

QLogic [qlfc]:
C:\> qlfcx64.exe -tsize /fc /set 1024

Emulex [HBAnyware]:
set ExtTransferSize = 1

Please see MOS Note 1640013.1 for configuration for iSCSI and NFS.

Results

After re-running the same FC workload with the correctly set 1MB transfer size, I can see the I/O size is now where it should be. This has a drastic impact on the block sizes being allocated on the backend disks, and an even more drastic impact on the overall throughput. A very small tweak resulted in a 5X performance gain (2.1GB/s to 10.9GB/s)! Until 1MB is the default for all physical I/O requests, expect to make some configuration changes on your underlying OSes.

System Configuration

Storage:
1 x Oracle ZS3-4 controller (2013.1.1.1 firmware, 1TB DRAM)
4 x 16G Fibre Channel HBAs
4 x SAS2 HBAs
4 x disk trays (24 4TB 7200RPM disks each)

Servers:
4 x Oracle x4170 M2 servers
Oracle Linux 6.5 (3.8.x kernel)
16G DRAM
1 x 16G Fibre Channel HBA

Workload

Each Oracle Linux server ran the following vdbench profile against 4 LUNs:

sd=sd1,lun=/dev/mapper/mpatha,size=1g,openflags=o_direct,threads=128
sd=sd2,lun=/dev/mapper/mpathb,size=1g,openflags=o_direct,threads=128
sd=sd3,lun=/dev/mapper/mpathc,size=1g,openflags=o_direct,threads=128
sd=sd4,lun=/dev/mapper/mpathd,size=1g,openflags=o_direct,threads=128
wd=wd1,sd=sd*,xfersize=1m,readpct=70,seekpct=0
rd=run1,wd=wd1,iorate=max,elapsed=999h,interval=1

This is a 70% read / 30% write sequential workload.
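A side note on the udev rule above: later udev releases dropped the SYSFS{} match keys in favor of ATTRS{}. An equivalent rule for newer distributions might look like the following untested sketch; verify the key names against your udev version before relying on it:

```shell
# /etc/udev/rules.d/99-zfssa.rules — hypothetical modern-udev variant of the rule above
ACTION=="add", ATTRS{vendor}=="SUN", ATTRS{model}=="*ZFS*", ENV{ID_FS_USAGE}!="filesystem", ENV{ID_PATH}=="*-fc-*", RUN+="/bin/sh -c 'echo 1024 > /sys$DEVPATH/queue/max_sectors_kb'"
```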


Solaris 11 IPoIB + IPMP

I recently needed to create a two-port active/standby IPMP group served over InfiniBand on Solaris 11. Wow, that's a mouthful of terminology! Here's how I did it.

List the available IB links:

[root@adrenaline ~]# dladm show-ib
LINK   HCAGUID          PORTGUID         PORT STATE PKEYS
net5   21280001CF4C96   21280001CF4C97   1    up    FFFF
net6   21280001CF4C96   21280001CF4C98   2    up    FFFF

Partition the IB links. My pkey will be 8001:

[root@adrenaline ~]# dladm create-part -l net5 -P 0x8001 p8001.net5
[root@adrenaline ~]# dladm create-part -l net6 -P 0x8001 p8001.net6
[root@adrenaline ~]# dladm show-part
LINK         PKEY   OVER   STATE     FLAGS
p8001.net5   8001   net5   unknown   ----
p8001.net6   8001   net6   unknown   ----

Create test addresses for the newly created datalinks:

[root@adrenaline ~]# ipadm create-ip p8001.net5
[root@adrenaline ~]# ipadm create-addr -T static -a 192.168.1.101 p8001.net5/ipv4
[root@adrenaline ~]# ipadm create-ip p8001.net6
[root@adrenaline ~]# ipadm create-addr -T static -a 192.168.1.102 p8001.net6/ipv4
[root@adrenaline ~]# ipadm show-addr
ADDROBJ           TYPE     STATE   ADDR
p8001.net5/ipv4   static   ok      192.168.1.101/24
p8001.net6/ipv4   static   ok      192.168.1.102/24

Create an IPMP group and add the IB datalinks:

[root@adrenaline ~]# ipadm create-ipmp ipmp0
[root@adrenaline ~]# ipadm add-ipmp -i p8001.net5 -i p8001.net6 ipmp0

Set one IB datalink to standby:

[root@adrenaline ~]# ipadm set-ifprop -p standby=on -m ip p8001.net6

Assign an IP address to the IPMP group:

[root@adrenaline ~]# ipadm create-addr -T static -a 192.168.1.100/24 ipmp0/v4

That's it! Final checks:

[root@adrenaline ~]# ipadm
NAME               CLASS/TYPE   STATE   UNDER   ADDR
ipmp0              ipmp         ok      --      --
  ipmp0/v4         static       ok      --      192.168.1.100/24
p8001.net5         ip           ok      ipmp0   --
  p8001.net5/ipv4  static       ok      --      192.168.1.101/24
p8001.net6         ip           ok      ipmp0   --
  p8001.net6/ipv4  static       ok      --      192.168.1.102/24
[root@adrenaline ~]# ping 192.168.1.100
192.168.1.100 is alive


Configuring a Basic DNS Server + Client in Solaris 11

Configuring the Server

The default install of Solaris 11 does not come with a DNS server, but one can be added easily through IPS like so:

[paulie@griff ~]$ sudo pkg install service/network/dns/bind

Before enabling this service, the named.conf file needs to be modified to support the DNS structure. Here's what mine looks like:

[paulie@griff ~]$ cat /etc/named.conf
options {
        directory       "/etc/namedb/working";
        pid-file        "/var/run/named/pid";
        dump-file       "/var/dump/named_dump.db";
        statistics-file "/var/stats/named.stats";
        forwarders { 208.67.222.222; 208.67.220.220; };
};

zone "hillvalley" {
        type master;
        file "/etc/namedb/master/hillvalley.db";
};

zone "1.168.192.in-addr.arpa" {
        type master;
        file "/etc/namedb/master/1.168.192.db";
};

My forwarders use the OpenDNS servers, so any request that the local DNS server can't process goes through there. I've also set up two zones: hillvalley.db for my forward zone and 1.168.192.db for my reverse zone. We need both for a proper configuration. We also need to create some directories to support this file:

[paulie@griff ~]$ sudo mkdir /var/dump
[paulie@griff ~]$ sudo mkdir /var/stats
[paulie@griff ~]$ sudo mkdir -p /var/run/namedb
[paulie@griff ~]$ sudo mkdir -p /etc/namedb/master
[paulie@griff ~]$ sudo mkdir -p /etc/namedb/working

Now, let's populate the DNS server with a forward and reverse file.

Forward file

[paulie@griff ~]$ cat /etc/namedb/master/hillvalley.db
$TTL 3h
@ IN SOA griff.hillvalley. paulie.griff.hillvalley. (
        2013022744 ;serial (change after every update)
        3600       ;refresh (1 hour)
        3600       ;retry (1 hour)
        604800     ;expire (1 week)
        38400 )    ;minimum (1 day)

hillvalley. IN NS griff.hillvalley.

delorean  IN A 192.168.1.1   ; Router
biff      IN A 192.168.1.101 ; NFS Server
griff     IN A 192.168.1.102 ; DNS Server
buford    IN A 192.168.1.103 ; LDAP Server
marty     IN A 192.168.1.104 ; Workstation
doc       IN A 192.168.1.105 ; Laptop
jennifer  IN A 192.168.1.106 ; Boxee
lorraine  IN A 192.168.1.107 ; Boxee

Reverse file

[paulie@griff ~]$ cat /etc/namedb/master/1.168.192.db
$TTL 3h
@ IN SOA griff.hillvalley. paulie.griff.hillvalley. (
        2013022744 ;serial (change after every update)
        3600       ;refresh (1 hour)
        3600       ;retry (1 hour)
        604800     ;expire (1 week)
        38400 )    ;minimum (1 day)

    IN NS griff.hillvalley.

1   IN PTR delorean.hillvalley. ; Router
101 IN PTR biff.hillvalley.     ; NFS Server
102 IN PTR griff.hillvalley.    ; DNS Server
103 IN PTR buford.hillvalley.   ; LDAP Server
104 IN PTR marty.hillvalley.    ; Workstation
105 IN PTR doc.hillvalley.      ; Laptop
106 IN PTR jennifer.hillvalley. ; Boxee
107 IN PTR lorraine.hillvalley. ; Boxee

For reference on how these files work:

paulie is the admin user account name
griff is the hostname of the DNS server
hillvalley is the domain name of the network
I love BTTF

Feel free to tweak this example to match your own network.
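As both SOA records note, the serial must change after every update. A small hypothetical helper (not part of the original setup) can automate the YYYYMMDDnn bump; it keys off the ";serial" comment used in the files above:

```shell
#!/bin/sh
# bump_serial — hypothetical helper: set a zone file's serial to today's date
# in the YYYYMMDDnn scheme, incrementing nn if it was already bumped today.
bump_serial() {
    zone="$1"
    old=$(awk '$2 == ";serial" {print $1}' "$zone")   # e.g. 2013022744
    today=$(date +%Y%m%d)
    case "$old" in
        "$today"*) new=$(printf '%s%02d' "$today" $(( ${old#"$today"} + 1 ))) ;;
        *)         new="${today}00" ;;
    esac
    sed "s/$old ;serial/$new ;serial/" "$zone" > "$zone.tmp" && mv "$zone.tmp" "$zone"
    echo "$new"
}

# Demo on a throwaway copy of the serial line:
printf '        2013022744 ;serial (change after every update)\n' > /tmp/demo.db
bump_serial /tmp/demo.db   # first bump today  -> YYYYMMDD00
bump_serial /tmp/demo.db   # second bump today -> YYYYMMDD01
```

Remember to reload the zone (svcadm restart dns/server) after bumping.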
Finally, enable the DNS service and check that it's online:

[paulie@griff ~]$ sudo svcadm enable dns/server
[paulie@griff ~]$ sudo svcs | grep dns/server
online 22:32:20 svc:/network/dns/server:default

Configuring the Client

We will need the IP address (192.168.1.102), hostname (griff), and domain name (hillvalley) to configure DNS with these commands:

[paulie@buford ~]$ sudo svccfg -s network/dns/client setprop config/nameserver = net_address: 192.168.1.102
[paulie@buford ~]$ sudo svccfg -s network/dns/client setprop config/domain = astring: hillvalley
[paulie@buford ~]$ sudo svccfg -s network/dns/client setprop config/search = astring: hillvalley
[paulie@buford ~]$ sudo svccfg -s name-service/switch setprop config/ipnodes = astring: '"files dns"'
[paulie@buford ~]$ sudo svccfg -s name-service/switch setprop config/host = astring: '"files dns"'

Verify the configuration is correct:

[paulie@buford ~]$ svccfg -s network/dns/client listprop config
config                       application
config/value_authorization   astring       solaris.smf.value.name-service.dns.client
config/nameserver            net_address   192.168.1.102
config/domain                astring       hillvalley
config/search                astring       hillvalley

And enable:

[paulie@buford ~]$ sudo svcadm enable dns/client

Now we need to test that the DNS server is working using both forward and reverse DNS lookups:

[paulie@buford ~]$ nslookup lorraine
Server:  192.168.1.102
Address: 192.168.1.102#53

Name:    lorraine.hillvalley
Address: 192.168.1.107

[paulie@buford ~]$ nslookup 192.168.1.1
Server:  192.168.1.102
Address: 192.168.1.102#53

1.1.168.192.in-addr.arpa name = delorean.hillvalley.


Configuring a Basic LDAP Server + Client in Solaris 11

Configuring the Server

Solaris 11 ships with OpenLDAP to use as an LDAP server. To configure it, you're going to need a simple slapd.conf file and an LDIF schema file to populate the database. First, let's look at the slapd.conf configuration:

# cat /etc/openldap/slapd.conf
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema

pidfile  /var/openldap/run/slapd.pid
argsfile /var/openldap/run/slapd.args

database  bdb
suffix    "dc=buford,dc=hillvalley"
rootdn    "cn=admin,dc=buford,dc=hillvalley"
rootpw    secret
directory /var/openldap/openldap-data
index     objectClass eq

You may want to change the suffix and rootdn lines to better represent your network naming schema. My LDAP server's hostname is buford and its domain name is hillvalley. You will need to add additional domain components (dc=) if the name is longer. This schema assumes the LDAP manager will be called admin. Its password is 'secret'. This is in clear text just as an example, but you can generate a new one using slappasswd:

[paulie@buford ~]$ slappasswd
New password:
Re-enter new password:
{SSHA}MlyFaZxG6YIQ0d/Vw6fIGhAXZiaogk0G

Replace 'secret' with the entire hash, {SSHA}MlyFaZxG6YIQ0d/Vw6fIGhAXZiaogk0G, on the rootpw line.

Now, let's create a basic schema for my network.

# cat /etc/openldap/schema/hillvalley.ldif
dn: dc=buford,dc=hillvalley
objectClass: dcObject
objectClass: organization
o: buford.hillvalley
dc: buford

dn: ou=groups,dc=buford,dc=hillvalley
objectClass: top
objectClass: organizationalUnit
ou: groups

dn: ou=users,dc=buford,dc=hillvalley
objectClass: top
objectClass: organizationalUnit
ou: users

dn: cn=world,ou=groups,dc=buford,dc=hillvalley
objectClass: top
objectClass: posixGroup
cn: world
gidNumber: 1001

dn: uid=paulie,ou=users,dc=buford,dc=hillvalley
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: Paul Johnson
uid: paulie
uidNumber: 1001
gidNumber: 1001
homeDirectory: /paulie
loginShell: /usr/bin/bash
userPassword: secret

I've created a single group, world, and a single user, paulie. Both share the UID and GID of 1001. LDAP supports lots of additional attributes for configuring a user and group account, but I've kept it basic in this example. Once again, be sure to change the domain components to match your network. Feel free to also change the user and group details. I've left the userPassword field in clear text as 'secret'. The same slappasswd method above applies here as well.

It's time to turn on the server, but first, let's change some ownership permissions:

[paulie@buford ~]$ sudo chown -R openldap:openldap /var/openldap/

... and now ...

[paulie@buford ~]$ sudo svcadm enable ldap/server

Check that it worked:

[paulie@buford ~]$ svcs | grep ldap
online 12:13:49 svc:/network/ldap/server:openldap_24

Neat. Now let's add our schema file to the database:

[paulie@buford ~]$ ldapadd -D "cn=admin,dc=buford,dc=hillvalley" -f /etc/openldap/schema/hillvalley.ldif
Enter bind password:
adding new entry dc=buford,dc=hillvalley
adding new entry ou=groups,dc=buford,dc=hillvalley
adding new entry ou=users,dc=buford,dc=hillvalley
adding new entry cn=world,ou=groups,dc=buford,dc=hillvalley
adding new entry uid=paulie,ou=users,dc=buford,dc=hillvalley

That's it! Our LDAP server is up, populated, and ready to authenticate against.

Configuring the Client

I'm going to turn my example server, buford.hillvalley, into an LDAP client as well. To do this, we need to run the `ldapclient` command to map our new user and group data:

[paulie@buford ~]$ ldapclient manual \
-a credentialLevel=proxy \
-a authenticationMethod=simple \
-a defaultSearchBase=dc=buford,dc=hillvalley \
-a domainName=buford.hillvalley \
-a defaultServerList=192.168.1.103 \
-a proxyDN=cn=admin,dc=buford,dc=hillvalley \
-a proxyPassword=secret \
-a attributeMap=group:gidnumber=gidNumber \
-a attributeMap=passwd:gidnumber=gidNumber \
-a attributeMap=passwd:uidnumber=uidNumber \
-a attributeMap=passwd:homedirectory=homeDirectory \
-a attributeMap=passwd:loginshell=loginShell \
-a attributeMap=shadow:userpassword=userPassword \
-a objectClassMap=group:posixGroup=posixgroup \
-a objectClassMap=passwd:posixAccount=posixaccount \
-a objectClassMap=shadow:shadowAccount=posixaccount \
-a serviceSearchDescriptor=passwd:ou=users,dc=buford,dc=hillvalley \
-a serviceSearchDescriptor=group:ou=groups,dc=buford,dc=hillvalley \
-a serviceSearchDescriptor=shadow:ou=users,dc=buford,dc=hillvalley

As usual, change the host and domain names as well as the IP address held in defaultServerList and the proxyPassword. The command should report that the system was configured properly; however, additional changes will need to be made if you use DNS for hostname lookups (most people use DNS, so run these commands):

svccfg -s name-service/switch setprop config/host = astring: \"files dns ldap\"
svccfg -s name-service/switch:default refresh
svcadm restart name-service/cache

Now we need to change how users log in so that the client knows there is an extra LDAP server to authenticate against. This should not lock out local logins.

Examine the two files /etc/pam.d/login and /etc/pam.d/other. Change any instance of

auth required pam_unix_auth.so.1

to

auth binding pam_unix_auth.so.1 server_policy

After this line, add the following new line:

auth required pam_ldap.so.1

That's it! Finally, reboot your system and see if you can log in with your newly created user.

Update: Glenn Faden wrote an excellent guide to configuring OpenLDAP using the native Solaris user/group/role management system.


CIFS Sharing on Solaris 11

Things have changed since Solaris 10 (and Solaris 11 Express, too!) in how to properly set up a CIFS server on your Solaris 11 machine so that Windows clients can access files. There's some documentation on the changes here, but let me share the full instructions from beginning to end.

hostname: adrenaline
username: paulie
poolname: pool
mountpnt: /pool
share:    mysharename

Install the SMB server package:

[paulie@adrenaline ~]$ sudo pkg install service/file-system/smb

Create the name of the share:

[paulie@adrenaline ~]$ sudo zfs set share=name=mysharename,path=/pool,prot=smb pool

Turn on sharing using zfs:

[paulie@adrenaline ~]$ sudo zfs set sharesmb=on pool

Turn on your SMB server:

[paulie@adrenaline ~]$ sudo svcadm enable -r smb/server

Check that the share is active:

[paulie@adrenaline ~]$ sudo smbadm show-shares adrenaline
Enter password:
c$              Default Share
IPC$            Remote IPC
mysharename
3 shares (total=3, read=3)

Enable an existing UNIX user for CIFS sharing (you may have to reset the password again, e.g. `passwd paulie`):

[paulie@adrenaline ~]$ sudo smbadm enable-user paulie

Edit PAM to allow for SMB authentication (add the line to the end of the file).

Solaris 11 GA only:

[paulie@adrenaline ~]$ vi /etc/pam.conf
other password required pam_smb_passwd.so.1 nowarn

Solaris 11 U1 or later:

[paulie@adrenaline ~]$ vi /etc/pam.d/other
password required pam_smb_passwd.so.1 nowarn

Try to mount the share on your Windows machine:

\\adrenaline\mysharename


Compiling Alpine on Solaris 11

I use alpine as my primary e-mail client. In order to get it compiled for Solaris 11 (snv_166 and later), you will need to make a few changes to the source.

[paulie@adrenaline ~]$ uname -orv
5.11 snv_166 Solaris
[paulie@adrenaline ~]$ ./configure --with-ssl-include-dir=/usr/include/openssl
[paulie@adrenaline ~]$ gmake

We run into a problem ...

In file included from osdep.c:66:
scandir.c: In function `Scandir':
scandir.c:45: error: structure has no member named `dd_fd'

Let's investigate:

[paulie@adrenaline ~]$ vi /usr/include/dirent.h
#if defined(__USE_LEGACY_PROTOTYPES__)
/* traditional SVR4 definition */
typedef struct {
        int     dd_fd;          /* file descriptor */
        int     dd_loc;         /* offset in block */
        int     dd_size;        /* amount of valid data */
        char    *dd_buf;        /* directory block */
} DIR;                          /* stream data from opendir() */
#else
/* default definition (POSIX conformant) */
typedef struct {
        int     d_fd;           /* file descriptor */
        int     d_loc;          /* offset in block */
        int     d_size;         /* amount of valid data */
        char    *d_buf;         /* directory block */
} DIR;                          /* stream data from opendir() */
#endif /* __USE_LEGACY_PROTOTYPES__ */

Interesting, so alpine *should* be using POSIX instead of the older UNIX SVR4 definitions. Let's make a change to the scandir.c file, which is located at alpine-2.00/imap/c-client/scandir.c. On line 45 I see the following use of dd_fd:

  if ((!dirp) || (fstat (dirp->dd_fd,&stb) < 0)) return -1;

Let's change that dd_fd to d_fd:

  if ((!dirp) || (fstat (dirp->d_fd,&stb) < 0)) return -1;

After recompiling, everything works as expected. I'm sure there is a better way of fixing this problem, but considering how trivial this issue is, a small edit is sufficient.
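If you'd rather not open an editor, the same one-line fix can be applied with sed. A sketch (the block creates a stand-in scandir.c when the alpine-2.00 tree isn't present, purely so it can be run standalone):

```shell
SRC=alpine-2.00/imap/c-client/scandir.c

# For demonstration only: fabricate the offending line if the real
# alpine source tree is not on this machine.
[ -f "$SRC" ] || { mkdir -p "$(dirname "$SRC")"; \
  printf 'if ((!dirp) || (fstat (dirp->dd_fd,&stb) < 0)) return -1;\n' > "$SRC"; }

# Replace the legacy SVR4 member name with the POSIX one, keeping a backup:
sed -i.bak 's/dirp->dd_fd/dirp->d_fd/' "$SRC"

# Confirm the change took effect:
grep 'dirp->d_fd' "$SRC"
```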


ZFS Encryption for USB sticks on Solaris 11 Express

USB memory sticks are easily lost, so to keep your data safe, it's best to use the new encryption feature of ZFS available since snv_149 (ZFS version 30). Here's how to take advantage of it.

[paulie@adrenaline ~]$ uname -a
SunOS adrenaline 5.11 snv_155 i86pc i386 i86pc Solaris

Get the device id for the USB stick using rmformat.

[paulie@adrenaline ~]$ rmformat
Looking for devices...
  1. Logical Node: /dev/rdsk/c11t0d0p0
     Physical Node: /pci@0,0/pci108e,534a@2/hub@4/storage@1/disk@0,0
     Connected Device: SanDisk U3 Cruzer Micro 8.02
     Device Type: Removable
     Bus: USB
     Size: 1.9 GB
     Label:
     Access permissions: Medium is not write protected.

The device id is c11t0d0p0. Using this id, we can make a pool on the device called 'secret'. You can call yours whatever you want.

[paulie@adrenaline ~]# zpool create -O encryption=on secret c11t0d0p0
Enter passphrase for 'secret':
Enter again:

Let's create a random 128MB file in the new pool called file.enc.

[paulie@adrenaline ~]# cd /secret; mkfile 128m file.enc

Now, let's make sure it works by exporting and importing the secret pool and hope it asks for a password.

[paulie@adrenaline ~]# zpool export secret
[paulie@adrenaline ~]# zpool import secret
Enter passphrase for 'secret':

It works as expected. Let's check for the created file.

[paulie@adrenaline ~]# ls /secret
file.enc

We can also check the encryption of any zfs filesystem by using the following command:

[paulie@adrenaline ~]# zfs get encryption secret
NAME    PROPERTY    VALUE  SOURCE
secret  encryption  on     local

For more information visit: http://docs.sun.com/app/docs/doc/821-1448/gkkih


Retrieving MAC Address in Solaris using C as a non-root user

I needed to find a way to get the physical (MAC) address using C. From what I could gather from searching opensolaris.org, there are two methods for retrieving it: libdlpi and arp. libdlpi is the more elegant solution, as it requires a simple call to dlpi_get_physaddr(). This is how ifconfig prints your network interface's MAC address. Unfortunately, libdlpi calls are only permitted as root.

As explained by James Carlson:

The reason it was like this was historical: getting the MAC address in ifconfig meant opening up the DLPI node and talking to the driver. As the drivers didn't have discrete privileges for each operation, and you had to be almighty root to touch them, 'ifconfig' didn't show the MAC address when not privileged.

The second solution is to use arp. In Solaris you can determine the physical address by looking at the arp tables directly (`arp -a | grep <INTERFACE>` or `netstat -p | grep <INTERFACE>`). With C, this can be done by using the if sockets and arp libraries.

I wrote up a solution called "getmac" using both methods. You can grab it here.

Directions

$ wget http://www.pauliesworld.org/project/getmac.c
$ gcc getmac.c -o getmac -lsocket -ldlpi
$ ./getmac <interface_name>
arp:
ffffffffffff
dlpi:
dlpi failure, are you root?
$ pfexec ./getmac <interface_name>
arp:
ffffffffffff
dlpi:
ffffffffffff

Remember to use pfexec for the libdlpi method.
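The same arp-table trick works from the shell with no C at all: the MAC is the last column of the interface's own entry in `arp -a` output. A sketch using awk (the interface name and address below are made-up sample values; on a live system you would pipe the real `arp -a` instead of the sample line):

```shell
# Sample line in the shape of Solaris 'arp -a' output:
#   Device  IP Address    Mask             Flags  Phys Addr
sample='e1000g0 192.168.1.10 255.255.255.255 SPLA 00:21:9b:aa:bb:cc'

# Print the last field (the physical address) for the matching device:
echo "$sample" | awk '$1 == "e1000g0" { print $NF }'   # -> 00:21:9b:aa:bb:cc
```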


OpenSolaris + Fit-PC2 + Mediasonic Pro Box 4 Bay Enclosure

After the failure of the SATA and USB ports on my Intel D945GCL Atom board, I decided to build out a new file server. Sticking to the Atom theme, I decided to go small and get the CompuLab FIT-PC2. This little toy uses the Z530 1.6GHz CPU that apparently uses only 6 watts of power. I'm assuming that means *without* a hard drive installed. Measuring in at around 115 x 101 x 27mm (~4.5" x 4.0" x 1.0"), it is only big enough to hold one laptop-sized 2.5" SATA drive.

The drive I installed only has 80GB of space. That would run out real quick with my needs, so I decided to get a MediaSonic USB disk enclosure to link up with my server. It can hold up to 4 SATA drives. The PC sits on top of the enclosure on my bookshelf, taking up 8.5" x 5.0" x 6.5" of space. This is not only power efficient, but space efficient, since I am using 4 x 1TB drives: 4TB total (theoretical), ~2.6TB in a ZFS raidz. If I had purchased the 2TB drives, it would be even better.

Doug's blog on the FIT-PC2 gives a good overview of the features of the device and what works. There is no wifi driver and Xorg doesn't work, so you may want to install OpenSolaris on another machine before installing the internal HDD. My server is headless and uses the built-in gigabit ethernet, so I don't care about those issues.

Links and prices
CompuLab FIT-PC2 Diskless Nettop PC: $273
Mediasonic (HF2-SU2S2) Pro Box 4 Bay Enclosure: $135
Samsung 1TB 7200 RPM: 4 x $85 = $340
Total = $748
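Where does the ~2.6TB usable figure come from? In a single-parity raidz, one drive's worth of space goes to parity, so four 1TB drives leave 3TB of data capacity in the drive makers' decimal terabytes, which shrinks once expressed in the binary units zfs reports (a rough sketch, ignoring ZFS metadata and reservation overhead):

```shell
# 4 x 1 TB in raidz: one disk's worth is parity, leaving 3 decimal TB.
# Convert 3 * 10^12 bytes into binary TiB (2^40 bytes each):
awk 'BEGIN { printf "%.2f TiB\n", 3 * 10^12 / 2^40 }'   # -> 2.73 TiB
```

Knock off a little for metadata and you land near the ~2.6TB observed.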


Capture Live Webcam Images and Create Time-Lapse Videos with Linksys WVC54GCA

On occasion I get an e-mail asking me about my live webcam. What model is it? Does it support FTP/SSH? How does it automatically upload to your site? And so on... Some answers:

Model: Linksys WVC54GCA Wireless G Home Monitoring Camera
FTP/SSH support: Nope

Capturing Live Images

So how do I automate taking pictures with the thing? Well, a hidden feature of the camera is its static snapshot URL. For my example, I use DNS to map my camera hostname garfunkel to its IP address. You may want to enter whatever IP your camera uses instead of garfunkel.

http://garfunkel/img/snapshot.cgi?size=3&quality=1

To grab a picture at any given time, use wget.

wget http://garfunkel/img/snapshot.cgi?size=3\&quality=1 -O webcam.jpg

If you have a UNIX-based or UNIX-like OS, you can schedule a cron job to grab this image as often as you like. I prefer capturing once every half hour, so I use this template:

*/30 * * * * /path/to/wget/script

Now, if you have your own web page, you can use ftp/sftp/scp to upload the picture to your server using one script.

Creating Time-Lapse Videos

If you can capture a picture once every minute, you can create a pretty neat time-lapse video every day. This can be accomplished using wget and ffmpeg. For my camera, I point it outside, so it's only worth capturing between certain hours, say 6am to 6pm. After 6pm, I will have created over 700 pictures. I can then use ffmpeg to stitch them together to form a short video of the day's weather. It's a little complicated to discuss every detail behind this, so I'll just post the Bourne script I use.
#!/bin/sh
hour=`date +%H`
captime=`date +%H%M`
dirpath="/location/of/your/timelapse/directory"
img=`ls ${dirpath}*jpg | wc -l`
expr=`expr 1 + $img`

timelapse()
{
    # File format will be Month.Day.Year.flv (flv for flash)
    date=`date +%m.%d.%y.flv`
    # ffmpeg reads in each image and incrementally makes a flash video at
    # 16 fps
    cd ${dirpath}
    ffmpeg -i %04d.jpg -r 16 ${dirpath}${date}
    # Cleanup, upload time-lapse to server and remove all jpg files
    # scp user@host:location
    # rm ${dirpath}*jpg
}

capture()
{
    # ffmpeg expects pictures in the format 0001.jpg, 0002.jpg, ... so
    # we need to add a fluff of zeros to make each pic 4 digits long
    if [ $expr -lt 10 ]
    then
        expr="000${expr}"
    elif [ $expr -lt 100 ]
    then
        expr="00${expr}"
    elif [ $expr -lt 1000 ]
    then
        expr="0${expr}"
    fi
    wget http://garfunkel/img/snapshot.cgi?size=3\&quality=1 --output-document=${dirpath}${expr}.jpg
}

case "$hour" in
# Eliminate the hours of the day that are too dark to capture
00|01|02|03|04|05|19|20|21|22|23)
    ;;
# If it is 6:00pm (18), time to make a video
18)
    if [ $captime -eq 1800 ]
    then
        timelapse
    fi
    ;;
# Every other hour is assumed to have light, so take a pic
*)
    capture
    ;;
esac

You might want to change the dirpath variable to wherever you want to store the pictures. Finally, add a cron entry to run every minute:

* * * * * /path/to/timelapse/script

After 6pm, you will have an flv file. This is a flash file that can be played by various flash players for viewing on the web. You can change the file format to avi or mpg instead of flv if you just want to view it on your computer.

Sample video from my camera
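As a side note, the zero-padding if/elif chain in capture() could be collapsed into a single printf call, which any POSIX shell provides:

```shell
# printf pads the counter to 4 digits in one step, replacing the if/elif chain:
img=7
padded=$(printf '%04d' "$img")
echo "${padded}.jpg"   # -> 0007.jpg
```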


Latex for OpenSolaris

Ever wanted to experiment with latex to write a paper or redesign your resume, but unsure how to install a latex package or compose a latex document? I'll try to explain it simply using OpenSolaris.

Install Latex and its dependencies from our friends at sunfreeware.com:

cd /tmp
wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/libgcc-3.3-sol10-intel-local.gz
wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/libiconv-1.11-sol10-x86-local.gz
wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/ncurses-5.6-sol10-x86-local.gz
wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/tetex-3.0-sol10-x86-local.gz
gzip -d *.gz

Install the packages:

pfexec pkgadd -d libgcc-3.3-sol10-intel-local
pfexec pkgadd -d ncurses-5.6-sol10-x86-local
pfexec pkgadd -d libiconv-1.11-sol10-x86-local
pfexec pkgadd -d tetex-3.0-sol10-x86-local

Play with Latex

For my simple uses of latex, I use two binaries to compose my documents: latex and dvipdf. Latex requires a tex file to generate a document. For this example, I will use my favorite resume template created by David Grant.

cd /tmp
wget http://www.davidgrant.ca/sites/www.davidgrant.ca/files/resume.tex.txt
wget http://www.davidgrant.ca/sites/www.davidgrant.ca/files/shading.sty.txt
mv resume.tex.txt resume.tex
mv shading.sty.txt shading.sty

Now, let's use the binaries I mentioned earlier to create a pdf file.

/usr/local/teTeX/bin/i386-pc-solaris2.10/latex resume.tex
/usr/local/teTeX/bin/i386-pc-solaris2.10/dvipdft resume.dvi

You should find a pdf file in /tmp called resume.pdf. View it with acroread or evince to get an idea of how awesome latex is. I won't go into too much detail on how to create the resume.tex file, but viewing and editing it will give you a good understanding of its syntax. This is David's resume that is generated: http://www.davidgrant.ca/sites/www.davidgrant.ca/files/resume.pdf.
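If you'd rather start smaller than a resume template, a bare-bones document is enough to verify the teTeX install works. A sketch (the filename minimal.tex is my own choice):

```shell
# Write a minimal LaTeX document ...
cat > minimal.tex <<'EOF'
\documentclass{article}
\begin{document}
Hello, \LaTeX!
\end{document}
EOF

# ... then compile it the same way as the resume, using the teTeX binaries:
#   /usr/local/teTeX/bin/i386-pc-solaris2.10/latex minimal.tex
#   /usr/local/teTeX/bin/i386-pc-solaris2.10/dvipdft minimal.dvi
```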


Testing Network Performance in Solaris

So you think your wireless link is slow, or your gigabit ethernet card is underperforming. How can you tell? Here are two tools I use to test network throughput.

netio

Created by Kai Uwe Rommel, it tests by sending and receiving packets of varying sizes and reports throughput in kilobytes per second.

Installation

x86:
# wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/netio-1.26-sol10-x86-local.gz
# gzip -d netio-1.26-sol10-x86-local.gz
# pkgadd -d netio-1.26-sol10-x86-local

SPARC:
# wget ftp://ftp.sunfreeware.com/pub/freeware/sparc/10/netio-1.26-sol10-sparc-local.gz
# gzip -d netio-1.26-sol10-sparc-local.gz
# pkgadd -d netio-1.26-sol10-sparc-local

Usage

Server-side:
# netio -u -s
Client-side:
# netio -u SERVER_IP_ADDRESS

Here are results on a 100Mbps link:

UDP connection established.
Packet size  1k bytes: 11913 KByte/s (0%) Tx, 11468 KByte/s (0%) Rx.
Packet size  2k bytes: 11954 KByte/s (0%) Tx, 11509 KByte/s (0%) Rx.
Packet size  4k bytes: 12274 KByte/s (0%) Tx, 11687 KByte/s (0%) Rx.
Packet size  8k bytes: 12284 KByte/s (0%) Tx, 11697 KByte/s (0%) Rx.
Packet size 16k bytes: 12292 KByte/s (0%) Tx, 11702 KByte/s (0%) Rx.
Packet size 32k bytes: 12348 KByte/s (0%) Tx, 11714 KByte/s (0%) Rx.
Done.

Sending and receiving hovered around 11-12MB/s, which is on par with 100Mbps.

netperf

Created by Rick Jones, it discovers the maximum throughput of a link, reporting in megabits per second.

Installation

# wget ftp://ftp.netperf.org/netperf/netperf-2.4.4.tar.gz
# tar zxvf netperf-2.4.4.tar.gz
# cd netperf-2.4.4
# export CFLAGS="-lsocket -lnsl -lkstat"
# ./configure
# make
# make install

Usage

Server-side:
# netserver
Client-side:
# netperf -H SERVER_IP_ADDRESS

Here are results on a 100Mbps link:

Recv   Send   Send
Socket Socket Message Elapsed
Size   Size   Size    Time    Throughput
bytes  bytes  bytes   secs.   10^6 bits/sec

49152  49152  49152   10.00   94.88

94.88 Mbps is the final result, not bad.
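The two tools report in different units, but their numbers agree. Converting netio's best receive figure (11714 KByte/s, taking 1 KByte as 1024 bytes) into the 10^6 bits/sec that netperf uses:

```shell
# 11714 KByte/s * 1024 bytes/KByte * 8 bits/byte, in units of 10^6 bits/sec:
kbytes_per_sec=11714
mbps=$(( kbytes_per_sec * 1024 * 8 / 1000000 ))
echo "${mbps} Mbps"   # -> 95 Mbps, right next to netperf's 94.88
```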


APC UPS on OpenSolaris (apcupsd)

One of the best consumer UPS manufacturers on the market is APC. They make great products that can interface nicely with any UNIX-based system, whether it be Linux, BSD, or Solaris. Using their daemon (apcupsd), you can download the source and create a nice link-up with your UPS so you can be notified anytime your power goes out (or if you accidentally unplug it).

My home model is the APC ES550, which is USB-based and has a weird USB->RJ45 cable to link a computer with the UPS. Let me explain how to get this running with OpenSolaris, at least with this model. First you will need the latest apcupsd tarball, located on SourceForge. Then, follow some standard compiling directions:

$ tar zxvf apcupsd-3.14.4.tar.gz && cd apcupsd-3.14.4
$ ./configure --enable-usb
$ make
# make install

If your model is USB-based, you need the enable-usb flag with the configure script. Now when you make, however, it will most likely fail with the following error:

libusb.h:9:34: /usr/sfw/include/usb.h: No such file or directory
... lots of errors ...

'make' fails on OpenSolaris because by default it is missing a usb.h header file located in /usr/sfw/include. This can be retrieved by running this command inside that directory as root:

# wget http://src.opensolaris.org/source/raw/sfw/usr/src/lib/libusb/inc/usb.h

Binaries are stored inside /etc/opt/apcupsd/sbin. The file apcupsd.conf is needed to configure this USB device. Two lines that say the following must be included in this file:

UPSCABLE usb
UPSTYPE usb

Delete any other UPSCABLE/UPSTYPE lines. The apcupsd.conf file documents other types of APC UPS devices as well, usually those that rely on serial as opposed to usb.

Now, you should be able to run the daemon, and you can verify that it will notify you by simply pulling the plug on the UPS. If you modify your root alias in /etc/mail/aliases, the UPS will send you an e-mail when the power goes out.

This sounds handy if I'm on campus and I lose the electricity in my apartment, so I'll have plenty of time to power off my machine remotely.


ZFS NAS with the Intel Atom

I've been looking to build a network attached storage (NAS) device for some time to store all my music, photos, films, etc. in one global location. I had a few specific requirements:

Supports two disks (2TB RAID mirroring preferable)
Supports UPnP server (media sharing)
Low power (always on)
Bonus points: ZFS, SSH, Samba, etc.

There are a few ready-to-go NAS solutions available on the market that are compatible with my demands. The Linksys NAS200, at $130, supports two disks with Twonky Media server for UPnP and is low power. The downside is that reviews indicate it being very slow, and it comes with no gigabit ethernet. On the high-end side, the QNAP TS-209 Pro also comes with two disk slots and UPnP support, is fast, and adds a bunch of extra goodies like Samba and an iTunes server. The price tag at $400 makes it a bit too much for a diskless system, so I decided on a different solution... why not build it myself?

I had a Thermaltake LANBOX Lite Mini-ITX/ATX case lying around along with a 200W power supply, so all I needed to do was find some cheap, low power hardware to support it. Intel recently released a new type of processor called the Atom, which is aimed at bringing x86 into the embedded market. I'm not so sure how successful they will be at this venture, but it fits my needs perfectly. According to their specs, it draws 2W TDP for the 1.6GHz version, which is pretty amazing. The power output turns out to be a bit of marketing hype, but considering it is now being used in the ASUS Eee desktop and laptops, it should prove to be a viable candidate. To encourage the hobbyist market, Intel created a combo of motherboard + Atom (BOXD945GCLF) that is available on Newegg for $75. After getting it along with a pair of 1TB drives at $170, I had a workable system shipped to me for under $500. My first reaction when I got all the parts is that the Intel board is TINY. I can fit my hand around the entire thing.
Even in the media center case that I use, it has quite a bit of extra room. The LANBOX Lite is fully modular, which makes installation much easier. If you have ever built a computer from scratch, then you probably understand how tedious it can be to screw in motherboards, install drives, etc. This case allows you to pull every section out to make installation a breeze. It also comes with a nice silo to install both the terabyte drives.

The next step is to turn this into a fully functional NAS. After installing FreeBSD 7.0, I wanted to set up both my disks to mirror. Since the BSD family has such a friendly license, ZFS is included in the distribution. And ZFS mirroring makes things incredibly simple to set up.

[root@bojangles ~]# zpool attach tank ad4s1d ad6s1d

And that's it! Now to ensure that things went as expected.

[root@bojangles ~]# zpool list
NAME   SIZE   USED    AVAIL   CAP   HEALTH   ALTROOT
tank   928G   72.9G   855G    7%    ONLINE   -
[root@bojangles ~]# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE   READ WRITE CKSUM
        tank        ONLINE     0     0     0
          mirror    ONLINE     0     0     0
            ad4s1d  ONLINE     0     0     0
            ad6s1d  ONLINE     0     0     0

errors: No known data errors

Now with ports, I can get Samba and uShare up and running in no time.

[root@bojangles ~]# cd /usr/ports/net/samba3 && make install clean
[root@bojangles ~]# cd /usr/ports/net/ushare && make install clean

The final product sits in my closet in a makeshift cabinet.
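For an always-on box, the Atom's quoted 2W TDP translates into a trivial yearly energy figure. A back-of-the-envelope sketch (CPU only; the full system with power supply and spinning drives will draw considerably more, so treat this as a lower bound):

```shell
# kWh per year for a constant draw: watts * hours-per-year / 1000.
watts=2
hours_per_year=$(( 24 * 365 ))
echo "$(( watts * hours_per_year / 1000 )) kWh/year"   # -> 17 kWh/year
```

Swap in a measured wall-socket wattage for the real number.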

