Saturday Jun 27, 2009

Follow fork for dtrace pid provider?

There is an ongoing request to have follow-fork functionality for the dtrace pid provider, but so far no one has stepped up to the plate for that RFE. In the meantime my best workaround is this:

cjg@brompton:~/lang/d$ cat followfork.d
proc:::start
/ppid == $target/
{
	stop();
	printf("fork %d\\n", pid);
	system("dtrace -qs child.d -p %d", pid);
}
cjg@brompton:~/lang/d$ cat child.d
pid$target::malloc:entry
{
	printf("%d %s:%s %d\\n", pid, probefunc, probename, ustackdepth)
}
cjg@brompton:~/lang/d$ pfexec /usr/sbin/dtrace -qws followfork.d -s child.d -p 26758
26758 malloc:entry 22
26758 malloc:entry 15
26758 malloc:entry 18
26758 malloc:entry 18
26758 malloc:entry 18
fork 27548
27548 malloc:entry 7
27548 malloc:entry 7
27548 malloc:entry 18
27548 malloc:entry 16
27548 malloc:entry 18

Clearly you can have the child script do whatever you wish.
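
As a sketch, a child.d that aggregates rather than prints, for example quantizing the requested allocation size per user stack (the aggregation name is just for illustration), could look like this:

pid$target::malloc:entry
{
	/* quantize the requested size, keyed by the user stack */
	@sizes[ustack()] = quantize(arg0);
}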

Better solutions are welcome!

Thursday Jun 18, 2009

Diskomizer Open Sourced

I'm pleased to announce the Diskomizer test suite has been open sourced. Diskomizer started life in the dark days before ZFS when we lived in a world full1 of bit flips, phantom writes, phantom reads, misplaced writes and misplaced reads.

With a storage architecture that does not use end-to-end data verification, the best you can hope for is that your application spots errors quickly and allows you to diagnose the broken part or bug quickly. Diskomizer was written to be a “simple” application that could verify that all the data paths work correctly, and work correctly under extreme load. It has been, and is, used by support, development and test groups for system verification.

For more details of what Diskomizer is and how to build and install it, read these pages:

http://www.opensolaris.org/os/community/storage/tests/Diskomizer/

You can download the source and precompiled binaries from:

http://dlc.sun.com/osol/test/downloads/current/

and you can browse the source here:

http://src.opensolaris.org/source/xref/test/stcnv/usr/src/tools/diskomizer

Using Diskomizer

First, remember that in most cases Diskomizer will destroy all the data on any target you point it at. So extreme care is advised.

I will say that again.

Diskomizer will destroy all the data on any target that you point it at.

For the purposes of this explanation I am going to use ZFS volumes so that I can create and destroy them with confidence that I will not be destroying someone's data.

First let's create some volumes.

# i=0
# while (( i < 10 ))
do
zfs create -V 10G storage/chris/testvol$i
let i=i+1
done
#

Now write the names of the devices you wish to test into a file after the key “DEVICE=”:

# echo DEVICE= /dev/zvol/rdsk/storage/chris/testvol* > test_opts

Now start the test. When you installed Diskomizer it put the standard option files on the system, and it has a search path so that it can find them. I'm using the options file “background”, which makes the test go into the background, redirecting the output into a file called “stdout” and any errors into a file called “stderr”:


# /opt/SUNWstc-diskomizer/bin/diskomizer -f test_opts -f background
# 

If Diskomizer has any problems with the configuration it will report them and exit. This is to minimize the risk to your data from a typo. Also, the default is to open devices and files exclusively, again to reduce the danger to your data (and to reduce false positives where it detects data corruption).

Once up and running it will report its progress for each process in the output file:

# tail -5 stdout
PID 1152: INFO /dev/zvol/rdsk/storage/chris/testvol7 (zvol0:a)2 write times (0.000,0.049,6.068) 100%
PID 1152: INFO /dev/zvol/rdsk/storage/chris/testvol1 (zvol0:a) write times (0.000,0.027,6.240) 100%
PID 1152: INFO /dev/zvol/rdsk/storage/chris/testvol7 (zvol0:a) read times (0.000,1.593,6.918) 100%
PID 1154: INFO /dev/zvol/rdsk/storage/chris/testvol9 (zvol0:a) write times (0.000,0.070,6.158)  79%
PID 1151: INFO /dev/zvol/rdsk/storage/chris/testvol0 (zvol0:a) read times (0.000,0.976,7.523) 100%
# 

Meanwhile all the usual tools can be used to view the IO:

# zpool iostat 5 5                                                  
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage      460G  15.9T    832  4.28K  6.49M  31.2M
storage      460G  15.9T  3.22K  9.86K  25.8M  77.2M
storage      460G  15.9T  3.77K  6.04K  30.1M  46.8M
storage      460G  15.9T  2.90K  11.7K  23.2M  91.4M
storage      460G  15.9T  3.63K  5.86K  29.1M  45.7M
# 


1Full may be an exaggeration, but we will never know thanks to the fact that the data loss was silent. There were enough reported cases where there was reason to doubt whether the data was good to keep me busy.

2The fact that all the zvols have the same name (zvol0:a) is bug 6851545 found with diskomizer.

Friday Jun 05, 2009

Possibly the best shell programming mistake ever

A colleague, let's call him Lewis, just popped over with the most bizarrely behaving shell script I have seen.

The problem was that the script would hang while the automounter timed out an attempt to NFS mount a file system on the customer's system.

I narrowed it down to something in a shell function that looked like this:

# Make a copy even if the destination already exists.
safe_copy()
{
 	typeset src="$1"
	typeset dst="$2"

	/* Nothing to copy */
	if [ ! -f $src ] ; then
		return
	fi

        if [ ! -h $src -a ! -h $dst -a ! -d $dst ] ; then
		cp -p $src $dst || exit 1
	fi
}

safe_copy was called with a file as $1 and a file as $2.

I laughed when I saw the problem. Funny how you can read something and miss such an obvious mistake!

Thankfully the script has quietly been fixed.

Tuesday May 26, 2009

Why everyone should be using ZFS

It is at times like these that I'm glad I use ZFS at home.


  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c20t0d0s7  ONLINE       6     0     4
            c21t0d0s7  ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c21t1d0    ONLINE       0     0     0
            c20t1d0    ONLINE       0     0     0

errors: No known data errors
: pearson FSS 14 $; 

The drive with the errors was also throwing up errors that iostat could report and, judging from its performance, was trying heroically to give me back data. However it had failed. Its performance was terrible and it failed to give the right data on 4 occasions. Any other file system would, if that was user data, just have delivered it to the user without warning. That bad data could then propagate from there on, probably into my backups. There is certainly no good that could come from that. However ZFS detected and corrected the errors.


Now that I have offlined the disk the performance of the system is better, but I have no redundancy until the new disk I have just ordered arrives. Now time to check out Seagate's warranty return system.
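
For reference the commands involved are one-liners; a sketch, with the replacement disk name made up since the real one has not arrived yet:

# zpool offline tank c20t0d0s7
# zpool replace tank c20t0d0s7 c22t0d0s7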

Sunday May 10, 2009

Another update to Sun Ray access hours script

I have made a change to my access hours script for my Sun Rays. Now the access file can also contain a comma-separated list of Sun Ray DTUs so that the control is only applied to those DTUs:

: pearson FSS 3 $; cat /etc/opt/local/access_hours 
user1:2000:2300:P8.00144f7dc383
user2:2000:2300:P8.00144f57a46f
user3:0630:2300
user4:0630:2300
: pearson FSS 4 $; 

The practical reason for this is that it allows control of the DTUs that are in bedrooms, while if the computer is really needed another DTU can be used for homework.
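
For illustration only, a minimal ksh93 sketch of reading such a line (the variable names are mine; the real script is linked below):

while IFS=: read user start end dtus
do
	print "$user: ${start}-${end} on ${dtus:-all DTUs}"
done < /etc/opt/local/access_hours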

Now that bug 6791062 is fixed the script is safe to use in Nevada.

The script is where it always was.

Friday May 08, 2009

Stopping find searching remote directories.

Grizzled UNIX users look away now.

The find command is a wonderful thing, but there are some uses of it that seem to cause enough confusion that it is worth documenting them for Google. Today's is:

How can I stop find(1) searching remote file systems?

On reading the documentation, the “-local” option should be just what you want, and it is, but not on its own. If you just do:

$ find . -local -print

It will indeed only report files that are on local file systems below the current directory. However it will search the entire directory tree for those local files, even if parts of the tree are on NFS.

To get find to stop searching when it finds a remote file system you need:


$ find . \( ! -local -prune \) -o -print
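
and if you want to combine it with other predicates, say only printing log files, a sketch would be:

$ find . \( ! -local -prune \) -o -name '*.log' -print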

Simple.

Sunday Apr 19, 2009

User and group quotas for ZFS!

This push will be very popular among those who are managing servers with thousands of users:

Repository: /export/onnv-gate
Total changesets: 1

Changeset: f41cf682d0d3

Comments:
PSARC/2009/204 ZFS user/group quotas & space accounting
6501037 want user/group quotas on ZFS
6830813 zfs list -t all fails assertion
6827260 assertion failed in arc_read(): hdr == pbuf->b_hdr
6815592 panic: No such hold X on refcount Y from zfs_znode_move
6759986 zfs list shows temporary %clone when doing online zfs recv

User quotas for ZFS have been the feature I have been asked about most when talking to customers. This probably reflects that most customers are simply blown away by the other features of ZFS and, if you have a large user base, the only missing feature was user quotas.
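
For anyone who has not read the case, a quick sketch of the new properties in use, assuming a file system tank/home and a user called fred:

# zfs set userquota@fred=5G tank/home
# zfs get userused@fred tank/home
# zfs userspace tank/home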

Tuesday Apr 14, 2009

zfs list -d

I've just pushed the changes for zfs list that give it a -d option to limit the depth to which recursive listings will go. This is of most use when you wish to list the snapshots of a given data set and only the snapshots of that data set.

PSARC 2009/171 zfs list -d and zfs get -d
6762432 zfs list --depth

Before this you could achieve the same result using a short pipeline which, while it produced the correct results, was horribly inefficient and very slow for datasets that had lots of descendants.

: v4u-1000c-gmp03.eu TS 6 $; zfs list -t snapshot rpool | grep '^rpool@'
rpool@spam                         0      -    64K  -
rpool@two                          0      -    64K  -
: v4u-1000c-gmp03.eu TS 7 $; zfs list -d 1 -t snapshot              
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool@spam      0      -    64K  -
rpool@two       0      -    64K  -
: v4u-1000c-gmp03.eu TS 8 $; 

It will allow the zfs-snapshot service to be much more efficient when it needs to list snapshots. The change will be in build 113.
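
The same case adds -d to zfs get as well, so, as a sketch, you can do the equivalent for properties:

$ zfs get -d 1 used rpool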

Friday Apr 10, 2009

Native solaris printing working

The solution to my HP printer not working on Solaris is to use CUPS. Since I have a full install and Solaris now has the packages installed, all I had to do was switch the service over:

$ pfexec /usr/sbin/print-service -s cups

then configure the printer using http://localhost:631 in the same way as I did with Ubuntu. Now I don't need the virtual machine running, which is a bonus. I think CUPS may be a better fit for me at home since I don't have a name service running, so the benefits of the lp system are very limited.


The bug is 6826086: printer drivers from package SUNWhpijs are not updated with HPLIP

Sunday Apr 05, 2009

Recovering our Windows PC

I had reason to discover whether my solution for backing up the Windows PC worked. Apparently the PC had not been working properly for a while but no one had mentioned that to me. The symptoms were:

  1. No menu bar at the bottom of the screen. It was almost like the screen was the wrong size but how it was changed is/was a mystery.

  2. It was claiming it needed to revalidate itself as the hardware had changed, which it categorically had not, and I had 2 days to sort it out. Apparently this message had been around for a few days (weeks?) but was ignored.

Now I'm sure I could have had endless fun reading forums to find out how to fix these things, but it was Saturday night and I was going cycling in the morning. So time to boot Solaris and restore the backup. First I took a backup of what was on the disk, just in case I get a desire to relive the issue. Then I just needed one script to restore it over ssh. The script is:

: pearson FSS 14 $; cat /usr/local/sbin/xp_restore 
#!/bin/ksh 

exec dd of=/dev/rdsk/c0d0p1 bs=1k
: pearson FSS 15 $; 

and the command was:

$ ssh pc pfexec /usr/local/sbin/xp_restore < backup.dd

having chosen the desired snapshot. Obviously the command was added to /etc/security/exec_attr. Then just leave that running overnight. In the morning the system booted up just fine; it complained about the virus definitions being out of date and various things needing updates, but all was working. Alas doing this before I went cycling made me late enough to miss the peloton, if it was there.
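
For completeness the backup is just the mirror image of the restore; a sketch of what I mean, with the script name made up:

#!/bin/ksh
# xp_backup - read the Windows partition and write it to stdout
exec dd if=/dev/rdsk/c0d0p1 bs=1k

and it would be run as:

$ ssh pc pfexec /usr/local/sbin/xp_backup > backup.dd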

Saturday Mar 28, 2009

snapshot on unlink?

This thread on OpenSolaris made me wonder how hard it would be to take a snapshot before any file is deleted. It turns out that using dtrace it is not hard at all: use dtrace to monitor the unlink and unlinkat calls, and a short script to take the snapshots:

#!/bin/ksh93

function snapshot
{
	eval $(print x=$2)

	until [[ "$x" == "/" || -d "$x/.zfs/snapshot" ]]
	do
		x="${x%/\*}"
	done
	if [[ "$x" == "/" || "$x" == "/tmp" ]]
	then
		return
	fi
	if [[ -d "$x/.zfs/snapshot" ]]
	then
		print mkdir "$x/.zfs/snapshot/unlink_$1"
		pfexec mkdir "$x/.zfs/snapshot/unlink_$1"
	fi
}
function parse
{
	eval $(print x=$4)
	
	if [[ "${x%%/\*}" == "" ]]
	then
		snapshot $1 "$2$4"
	else
		snapshot $1 "$2$3/$4"
	fi
}
pfexec dtrace -wqn 'syscall::fsat:entry /pid != '$$' && uid > 100 && arg0 == 5/ {
	printf("%d %d \\"%s\\" \\"%s\\" \\"%s\\"\\n",
	pid, walltimestamp, root, cwd, copyinstr(arg2)); stop()
}
syscall::unlink:entry /pid != '$$' && uid > 100 / {
	printf("%d %d \\"%s\\" \\"%s\\" \\"%s\\"\\n",
	pid, walltimestamp, root, cwd, copyinstr(arg0)); stop()
}' | while read pid timestamp root cwd file
do
	print prun $pid
	parse $timestamp $root $cwd $file
	pfexec prun $pid
done

Now this is just a Saturday night proof of concept, and it should be noted that it has a significant performance impact and single-threads all calls to unlink.

Also you end up with lots of snapshots:

cjg@brompton:~$ zfs list -t snapshot -o name,used | grep unlink

rpool/export/home/cjg@unlink_1238270760978466613                           11.9M

rpool/export/home/cjg@unlink_1238275070771981963                             59K

rpool/export/home/cjg@unlink_1238275074501904526                             59K

rpool/export/home/cjg@unlink_1238275145860458143                             34K

rpool/export/home/cjg@unlink_1238275168440000379                            197K

rpool/export/home/cjg@unlink_1238275233978665556                            197K

rpool/export/home/cjg@unlink_1238275295387410635                            197K

rpool/export/home/cjg@unlink_1238275362536035217                            197K

rpool/export/home/cjg@unlink_1238275429554657197                            136K

rpool/export/home/cjg@unlink_1238275446884300017                            350K

rpool/export/home/cjg@unlink_1238275491543380576                            197K

rpool/export/home/cjg@unlink_1238275553842097361                            197K

rpool/export/home/cjg@unlink_1238275643490236001                             63K

rpool/export/home/cjg@unlink_1238275644670212158                             63K

rpool/export/home/cjg@unlink_1238275646030183268                               0

rpool/export/home/cjg@unlink_1238275647010165407                               0

rpool/export/home/cjg@unlink_1238275648040143427                             54K

rpool/export/home/cjg@unlink_1238275649030124929                             54K

rpool/export/home/cjg@unlink_1238275675679613928                            197K

rpool/export/home/cjg@unlink_1238275738608457151                            198K

rpool/export/home/cjg@unlink_1238275800827304353                           57.5K

rpool/export/home/cjg@unlink_1238275853116324001                           32.5K

rpool/export/home/cjg@unlink_1238275854186304490                           53.5K

rpool/export/home/cjg@unlink_1238275862146153573                            196K

rpool/export/home/cjg@unlink_1238275923255007891                           55.5K

rpool/export/home/cjg@unlink_1238275962114286151                           35.5K

rpool/export/home/cjg@unlink_1238275962994267852                           56.5K

rpool/export/home/cjg@unlink_1238275984723865944                           55.5K

rpool/export/home/cjg@unlink_1238275986483834569                             29K

rpool/export/home/cjg@unlink_1238276004103500867                             49K

rpool/export/home/cjg@unlink_1238276005213479906                             49K

rpool/export/home/cjg@unlink_1238276024853115037                           50.5K

rpool/export/home/cjg@unlink_1238276026423085669                           52.5K

rpool/export/home/cjg@unlink_1238276041792798946                           50.5K

rpool/export/home/cjg@unlink_1238276046332707732                           55.5K

rpool/export/home/cjg@unlink_1238276098621721894                             66K

rpool/export/home/cjg@unlink_1238276108811528303                           69.5K

rpool/export/home/cjg@unlink_1238276132861080236                             56K

rpool/export/home/cjg@unlink_1238276166070438484                             49K

rpool/export/home/cjg@unlink_1238276167190417567                             49K

rpool/export/home/cjg@unlink_1238276170930350786                             57K

rpool/export/home/cjg@unlink_1238276206569700134                           30.5K

rpool/export/home/cjg@unlink_1238276208519665843                           58.5K

rpool/export/home/cjg@unlink_1238276476484690821                             54K

rpool/export/home/cjg@unlink_1238276477974663478                             54K

rpool/export/home/cjg@unlink_1238276511584038137                           60.5K

rpool/export/home/cjg@unlink_1238276519053902818                             71K

rpool/export/home/cjg@unlink_1238276528213727766                             62K

rpool/export/home/cjg@unlink_1238276529883699491                             47K

rpool/export/home/cjg@unlink_1238276531683666535                           3.33M

rpool/export/home/cjg@unlink_1238276558063169299                           35.5K

rpool/export/home/cjg@unlink_1238276559223149116                           62.5K

rpool/export/home/cjg@unlink_1238276573552877191                           35.5K

rpool/export/home/cjg@unlink_1238276584602668975                           35.5K

rpool/export/home/cjg@unlink_1238276586002642752                             53K

rpool/export/home/cjg@unlink_1238276586522633206                             51K

rpool/export/home/cjg@unlink_1238276808718681998                            216K

rpool/export/home/cjg@unlink_1238276820958471430                           77.5K

rpool/export/home/cjg@unlink_1238276826718371992                             51K

rpool/export/home/cjg@unlink_1238276827908352138                             51K

rpool/export/home/cjg@unlink_1238276883227391747                            198K

rpool/export/home/cjg@unlink_1238276945366305295                           58.5K

rpool/export/home/cjg@unlink_1238276954766149887                           32.5K

rpool/export/home/cjg@unlink_1238276955946126421                           54.5K

rpool/export/home/cjg@unlink_1238276968985903108                           52.5K

rpool/export/home/cjg@unlink_1238276988865560952                             31K

rpool/export/home/cjg@unlink_1238277006915250722                           57.5K

rpool/export/home/cjg@unlink_1238277029624856958                             51K

rpool/export/home/cjg@unlink_1238277030754835625                             51K

rpool/export/home/cjg@unlink_1238277042004634457                           51.5K

rpool/export/home/cjg@unlink_1238277043934600972                             52K

rpool/export/home/cjg@unlink_1238277045124580763                             51K

rpool/export/home/cjg@unlink_1238277056554381122                             51K

rpool/export/home/cjg@unlink_1238277058274350998                             51K

rpool/export/home/cjg@unlink_1238277068944163541                             59K

rpool/export/home/cjg@unlink_1238277121423241127                           32.5K

rpool/export/home/cjg@unlink_1238277123353210283                           53.5K

rpool/export/home/cjg@unlink_1238277136532970668                           52.5K

rpool/export/home/cjg@unlink_1238277152942678490                               0

rpool/export/home/cjg@unlink_1238277173482320586                               0

rpool/export/home/cjg@unlink_1238277187222067194                             49K

rpool/export/home/cjg@unlink_1238277188902043005                             49K

rpool/export/home/cjg@unlink_1238277190362010483                             56K

rpool/export/home/cjg@unlink_1238277228691306147                           30.5K

rpool/export/home/cjg@unlink_1238277230021281988                           51.5K

rpool/export/home/cjg@unlink_1238277251960874811                             57K

rpool/export/home/cjg@unlink_1238277300159980679                           30.5K

rpool/export/home/cjg@unlink_1238277301769961639                             50K

rpool/export/home/cjg@unlink_1238277302279948212                             49K

rpool/export/home/cjg@unlink_1238277310639840621                             28K

rpool/export/home/cjg@unlink_1238277314109790784                           55.5K

rpool/export/home/cjg@unlink_1238277324429653135                             49K

rpool/export/home/cjg@unlink_1238277325639636996                             49K

rpool/export/home/cjg@unlink_1238277360029166691                            356K

rpool/export/home/cjg@unlink_1238277375738948709                           55.5K

rpool/export/home/cjg@unlink_1238277376798933629                             29K

rpool/export/home/cjg@unlink_1238277378458911557                             50K

rpool/export/home/cjg@unlink_1238277380098888676                             49K

rpool/export/home/cjg@unlink_1238277397738633771                             48K

rpool/export/home/cjg@unlink_1238277415098386055                             49K

rpool/export/home/cjg@unlink_1238277416258362893                             49K

rpool/export/home/cjg@unlink_1238277438388037804                             57K

rpool/export/home/cjg@unlink_1238277443337969269                           30.5K

rpool/export/home/cjg@unlink_1238277445587936426                           51.5K

rpool/export/home/cjg@unlink_1238277454527801430                           50.5K

rpool/export/home/cjg@unlink_1238277500967098623                            196K

rpool/export/home/cjg@unlink_1238277562866135282                           55.5K

rpool/export/home/cjg@unlink_1238277607205456578                             49K

rpool/export/home/cjg@unlink_1238277608135443640                             49K

rpool/export/home/cjg@unlink_1238277624875209357                             57K

rpool/export/home/cjg@unlink_1238277682774484369                           30.5K

rpool/export/home/cjg@unlink_1238277684324464523                             50K

rpool/export/home/cjg@unlink_1238277685634444004                             49K

rpool/export/home/cjg@unlink_1238277686834429223                           75.5K

rpool/export/home/cjg@unlink_1238277700074256500                             48K

rpool/export/home/cjg@unlink_1238277701924235244                             48K

rpool/export/home/cjg@unlink_1238277736473759068                           49.5K

rpool/export/home/cjg@unlink_1238277748313594650                           55.5K

rpool/export/home/cjg@unlink_1238277748413593612                             28K

rpool/export/home/cjg@unlink_1238277750343571890                             48K

rpool/export/home/cjg@unlink_1238277767513347930                           49.5K

rpool/export/home/cjg@unlink_1238277769183322087                             50K

rpool/export/home/cjg@unlink_1238277770343306935                             48K

rpool/export/home/cjg@unlink_1238277786193093885                             48K

rpool/export/home/cjg@unlink_1238277787293079433                             48K

rpool/export/home/cjg@unlink_1238277805362825259                           49.5K

rpool/export/home/cjg@unlink_1238277810602750426                            195K

rpool/export/home/cjg@unlink_1238277872911814531                            195K

rpool/export/home/cjg@unlink_1238277934680920214                            195K

rpool/export/home/cjg@unlink_1238277997220016825                            195K

rpool/export/home/cjg@unlink_1238278063868871589                           54.5K

rpool/export/home/cjg@unlink_1238278094728323253                             61K

rpool/export/home/cjg@unlink_1238278096268295499                             63K

rpool/export/home/cjg@unlink_1238278098518260168                             52K

rpool/export/home/cjg@unlink_1238278099658242516                             56K

rpool/export/home/cjg@unlink_1238278103948159937                             57K

rpool/export/home/cjg@unlink_1238278107688091854                             54K

rpool/export/home/cjg@unlink_1238278113907980286                             62K

rpool/export/home/cjg@unlink_1238278116267937390                             64K

rpool/export/home/cjg@unlink_1238278125757769238                            196K

rpool/export/home/cjg@unlink_1238278155387248061                            136K

rpool/export/home/cjg@unlink_1238278160547156524                            229K

rpool/export/home/cjg@unlink_1238278165047079863                            351K

rpool/export/home/cjg@unlink_1238278166797050407                            197K

rpool/export/home/cjg@unlink_1238278168907009714                             55K

rpool/export/home/cjg@unlink_1238278170666980686                            341K

rpool/export/home/cjg@unlink_1238278171616960684                           54.5K

rpool/export/home/cjg@unlink_1238278190336630319                            777K

rpool/export/home/cjg@unlink_1238278253245490904                            329K

rpool/export/home/cjg@unlink_1238278262235340449                            362K

rpool/export/home/cjg@unlink_1238278262915331213                            362K

rpool/export/home/cjg@unlink_1238278264915299508                            285K

rpool/export/home/cjg@unlink_1238278310694590970                             87K

rpool/export/home/cjg@unlink_1238278313294552482                             66K

rpool/export/home/cjg@unlink_1238278315014520386                             31K

rpool/export/home/cjg@unlink_1238278371773568934                            258K

rpool/export/home/cjg@unlink_1238278375673503109                            198K

rpool/export/home/cjg@unlink_1238278440802320314                            138K

rpool/export/home/cjg@unlink_1238278442492291542                           55.5K

rpool/export/home/cjg@unlink_1238278445312240229                           2.38M

rpool/export/home/cjg@unlink_1238278453582077088                            198K

rpool/export/home/cjg@unlink_1238278502461070222                            256K

rpool/export/home/cjg@unlink_1238278564359805760                            256K

rpool/export/home/cjg@unlink_1238278625738732194                           63.5K

rpool/export/home/cjg@unlink_1238278633428599541                           61.5K

rpool/export/home/cjg@unlink_1238278634568579678                            137K

rpool/export/home/cjg@unlink_1238278657838186760                            288K

rpool/export/home/cjg@unlink_1238278659768151784                            223K

rpool/export/home/cjg@unlink_1238278661518121640                            159K

rpool/export/home/cjg@unlink_1238278664378073421                            136K

rpool/export/home/cjg@unlink_1238278665908048641                            138K

rpool/export/home/cjg@unlink_1238278666968033048                            136K

rpool/export/home/cjg@unlink_1238278668887996115                            281K

rpool/export/home/cjg@unlink_1238278670307970765                            227K

rpool/export/home/cjg@unlink_1238278671897943665                            162K

rpool/export/home/cjg@unlink_1238278673197921775                            164K

rpool/export/home/cjg@unlink_1238278674027906895                            164K

rpool/export/home/cjg@unlink_1238278674657900961                            165K

rpool/export/home/cjg@unlink_1238278675657885128                            165K

rpool/export/home/cjg@unlink_1238278676647871187                            241K

rpool/export/home/cjg@unlink_1238278678347837775                            136K

rpool/export/home/cjg@unlink_1238278679597811093                            199K

rpool/export/home/cjg@unlink_1238278687297679327                            197K

rpool/export/home/cjg@unlink_1238278749616679679                            197K

rpool/export/home/cjg@unlink_1238278811875554411                           56.5K

cjg@brompton:~$ 

Good job that snapshots are cheap. I'm not going to be doing this all the time but it makes you think about what could be done.
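
Cleaning up is a one-liner too; a sketch, and one to run with care:

$ zfs list -H -o name -t snapshot | grep '@unlink_' | xargs -n 1 pfexec zfs destroy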

Friday Mar 27, 2009

zfs list webrev

I've just posted the webrev for review for an RFE to “zfs list”:

PSARC 2009/171 zfs list -d and zfs get -d
6762432 zfs list --depth

This will allow you to limit the depth to which a recursive listing of zfs file systems will go. This is particularly useful if you only want to list the snapshots of the current file system.

The webrev is here:

http://cr.opensolaris.org/~cjg/zfs_list/zfs_list-d-2/

Comments welcome.

Friday Mar 20, 2009

Limiting scsi.d output to IOs that take a long time.

I have added support to scsi.d for only reporting on scsi packets that take more than a certain amount of time to complete. This is particularly useful when combined with the PERF_REPORT option, to see just the IO requests that took over a time threshold while still collecting statistics on all the other IO requests.

To use this you have to specify “-D REPORT_OVERTIME=X” where X is the time you wish to report on in nanoseconds. Here is an example that will only print the details of IO requests that took more than 250ms:

# scsi.d -D REPORT_OVERTIME=$((250*1000*1000)) -D PERF_REPORT
Only reporting IOs longer than 250000000ns
Hit Control C to interrupt
00005.388800363 glm0:-> 0x2a WRITE(10) address 00:00, lba 0x00270e67, len 0x000008, control 0x00 timeout 60 CDBP 300002df438 1 sched(0) cdb(10) 2a0000270e6700000800
00005.649494475 glm0:<- 0x2a WRITE(10) address 00:00, lba 0x00270e67, len 0x000008, control 0x00 timeout 60 CDBP 300002df438, reason 0x0 (COMPLETED) pkt_state 0x1f state 0x0 Success Time 260833us
00005.384612799 glm0:-> 0x0a  WRITE(6) address 00:00, lba 0x048564, len 0x000001, control 0x00 timeout 60 CDBP 30002541ac0 1 sched(0) cdb(6) 0a0485640100
00005.716416658 glm0:<- 0x0a  WRITE(6) address 00:00, lba 0x048564, len 0x000001, control 0x00 timeout 60 CDBP 30002541ac0, reason 0x0 (COMPLETED) pkt_state 0x1f state 0x0 Success Time 331939us
00005.385907691 glm0:-> 0x0a  WRITE(6) address 00:00, lba 0x0605b4, len 0x000001, control 0x00 timeout 60 CDBP 300035637a0 1 sched(0) cdb(6) 0a0605b40100
00005.773925990 glm0:<- 0x0a  WRITE(6) address 00:00, lba 0x0605b4, len 0x000001, control 0x00 timeout 60 CDBP 300035637a0, reason 0x0 (COMPLETED) pkt_state 0x1f state 0x0 Success Time 388153us
00005.389078533 glm0:-> 0x2a WRITE(10) address 00:00, lba 0x004b19d3, len 0x000003, control 0x00 timeout 60 CDBP 300002df0f8 1 sched(0) cdb(10) 2a00004b19d300000300
00005.824242527 glm0:<- 0x2a WRITE(10) address 00:00, lba 0x004b19d3, len 0x000003, control 0x00 timeout 60 CDBP 300002df0f8, reason 0x0 (COMPLETED) pkt_state 0x1f state 0x0 Success Time 435303us
^C

  glm                                                       0
           value  ------------- Distribution ------------- count    
         4194304 |                                         0        
         8388608 |                                         3        
        16777216 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@        678      
        33554432 |@@@@@                                    98       
        67108864 |@                                        27       
       134217728 |                                         8        
       268435456 |                                         3        
       536870912 |                                         0        

# 

The implementation of this uses dtrace speculations and as such may require some tuning of the various settings. Specifically, I have set the number of speculation buffers to 1000, which should be enough for all but the busiest systems, but if you do reach that limit you can increase them using the following options:


-D NSPEC=N

Set the number of speculations to this value.

-D SPECSIZE=N

Set the size of the speculation buffer. This should be 200 times the size of NSPEC.

-D CLEANRATE=N

Specify the clean rate.
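
So, as a sketch, on a busy system the invocation might end up looking like this, keeping SPECSIZE at 200 times NSPEC (the numbers are only illustrative):

# scsi.d -D REPORT_OVERTIME=$((250*1000*1000)) -D PERF_REPORT -D NSPEC=2000 -D SPECSIZE=400000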

As usual the script is available here. It is version 1.18.

Sunday Mar 15, 2009

Converting flac encoded audio to mp3.

Solaris has the flac command, which will happily decode flac encoded files, and metaflac to read the flac metadata, but you need to download lame, either precompiled or from sourceforge, and build it. Then it is a simple matter to convert your flac encoded files:

$ flac -c -d file.flac | lame - file.mp3

You can add various flags to change the data rates and put tags into the mp3. I feel sure someone must have written a script that would convert an entire library, but I could not find one, so here is the one I wrote to do this:

#!/bin/ksh93
umask 22
SRC="$1"
DST="$2"
BITRATE=128

export PATH=/opt/lame/bin:/usr/sbin:/usr/sfw/bin:/usr/bin

function flactags2lametags
{
        typeset IFS="$IFS ="
        typeset key value
        metaflac --export-tags /dev/fd/1 "$1" | while read key value
        do
                print typeset "${key}=\"$(print $value| sed 's/["'\'']/\\&/g')\";"
        done
}

function cleanup
{
        if [[ "$FILE" != "" ]]
        then
                rm "$FILE"
        fi
}

function convert_file
{
        typeset -i ret=0
        typeset f="${1%.flac}"
        if [[ "$f" != "${1}" ]]
        then
                typeset out="${DST}/${f}.mp3"
                if ! [[ -f "${out}" ]] || [[ "$1" -nt "$out" ]]
                then
                        print $out

                        eval $(flactags2lametags "$1") 
                        FILE="$out"
                        flac --silent -c -d "$1" | \
                                lame --quiet -b ${BITRATE} -h \
                                ${ARTIST:+--ta }"${ARTIST}" \
                                ${ALBUM:+--tl }"${ALBUM}" \
                                ${TRACKNUMBER:+--tn }"${TRACKNUMBER}" \
                                ${DATE:+--ty }"${DATE%%-*}" \
                                - "${out}"
                        ret=$?
                        FILE=""
                fi
        elif [[ $1 != "${1%.jpg}" ]]
        then
                [[ -f "${DST}/${1}" ]] || cp "$1" "${DST}"
        fi
        return ${ret}
}

function do_dir
{
        typeset i
        for i in "$1"/\*
        do
                if [[ -f "$i" ]]
                then
                        convert_file "$i" || return 1
                elif [[ -d "$i" ]]
                then
                        if test -d "${DST}/$i" || mkdir "${DST}/$i"
                        then
                                do_dir "$i" || return 1
                        fi
                fi
        done
        return 0
}

function usage
{
        print USAGE: $1 src_dir dest_dir >&2
        exit 1
}

trap cleanup EXIT

(( $# == 2 )) || usage $0
if test -d "${DST}/$1" || mkdir "${DST}/$1"
then
        do_dir "$1"
fi
exit $?
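
Since the script builds the destination path by joining the two arguments, the source is best given as a relative path; a sketch of how it would be run, with the script name and destination made up:

$ cd ~/music
$ flac2mp3 flac /tank/music-mp3

It recreates the source directory tree under the destination, converting each .flac to an .mp3 and copying across any .jpg cover art it finds.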

Sunday Mar 08, 2009

AV codecs for Solaris

One of the big complaints about using OpenSolaris and Solaris is multimedia support. The lack of codecs to allow you to play "common", if not "open", formats is irritating.

I've been using the Fluendo mp3 codecs for a while as they have a compelling price point, being free. Fluendo also have a bundle of other codecs that they supply for OpenSolaris, containing codecs for a number of other popular formats. I decided the €28 would be worth it and have now installed the codecs on my home server. I now have gstreamer support for:

: pearson FSS 1 $; gst-inspect | grep -i flue
fluasf:  flurtspwms: Fluendo WMS RTSP Extension
fluasf:  fluasfcmdparse: Fluendo ASF Command Parser
fluasf:  fluasfdemux: Fluendo ASF Demuxer
fluwmvdec:  fluwmvdec: Fluendo WMV Decoder
fluaacdec:  fluaacdec: Fluendo AAC Decoder
flumms:  flummssrc: Fluendo MMS source
fludivx3dec:  fludivx3dec: Fluendo DivX 3.11 Decoder
flulpcm:  flulpcmdec: Fluendo LPCM decoder
fluwma:  fluwmsdec: Fluendo WMS Decoder
fluwma:  fluwmadec: Fluendo WMA Decoder (STD + PRO + LSL + WMS)
fluh264dec:  fluh264dec: Fluendo H264 Decoder
flumpeg4vdec:  flumpeg4vdec: Fluendo MPEG-4 ASP Video Decoder
flumpeg2vdec:  flumpeg2vdec: Fluendo MPEG-2 Video Decoder
flump3dec:  flump3dec: Fluendo MP3 Decoder (C build)
: pearson FSS 2 $; 

Alas on the Sun Rays the video performance is still less than great, but on the console it is faultless and I am seeing that Totem is not quite as useless as I had previously thought.

It would be really cool if you could get the OpenSolaris codecs via an IPS repository.

The next thing I want is to be able to edit video.

Tuesday Mar 03, 2009

SqueezeBox!!!!

I have a real SqueezeBox at last. A wonderful birthday present from a wonderful person.

Getting the SqueezeCentre software working on Solaris is hard, or at least getting perl and its required modules is really hard. Luckily for me a colleague had already done this and helped me out. In due course we hope to get an IPS package, but since I need Nevada images that is no good for me at the moment.

The Squeezebox itself is fantastic, although wireless turned out to be too unreliable, with the box not being able to get beyond 25% into its buffer, and tonight with the bad weather that dropped to 1% and no music. The final resting place for the Squeezebox has a CAT5 connection, but until it can move there it is sharing one of the Ethernet-over-power bricks with the 40" Sun Ray.

I now want one for each room.....

Saturday Feb 21, 2009

[Open]Solaris logfiles and ZFS root

This week I had reason to want to see how often the script that controls the access hours of Sun Ray users actually did work, so I went off to look in the messages files, only to discover that there were only four and they only went back to January 11.

: pearson FSS 22 $; ls -l mess*
-rw-r--r--   1 root     root       12396 Feb  8 23:58 messages
-rw-r--r--   1 root     root      134777 Feb  8 02:59 messages.0
-rw-r--r--   1 root     root       53690 Feb  1 02:06 messages.1
-rw-r--r--   1 root     root      163116 Jan 25 02:01 messages.2
-rw-r--r--   1 root     root       83470 Jan 18 00:21 messages.3
: pearson FSS 23 $; head -1 messages.3
Jan 11 05:29:14 pearson pcplusmp: [ID 444295 kern.info] pcplusmp: ide (ata) instance #1 vector 0xf ioapic 0x2 intin 0xf is bound to cpu 1
: pearson FSS 24 $; 

I am certain that the choice of only four log files was not a conscious decision I have made, but it did make me ponder whether logfile management should be revisited in the light of ZFS root. Clearly, if you have snapshots firing, logs could go back a lot further:

: pearson FSS 40 $; head -1 $(ls -t /.zfs/snapshot/*/var/adm/message*| tail -1)
Dec 14 03:15:14 pearson time-slider-cleanup: [ID 702911 daemon.notice] No more daily snapshots left
: pearson FSS 41 $; 

It did not take long for this shell function to burst into life:

function search_log
{
typeset path
if [[ ${2#/} == $2 ]]
then
        path=${PWD}/$2
else
        path=$2
fi
cat $path /.zfs/snapshot/*$path | egrep $1 | sort -M | uniq
}

Not a generalized solution, but one that works when your root file system contains all your logs and, if you remember to escape any globbing on the command line, will search all the log files:

: pearson FSS 46 $; search_log block /var/adm/messages\* | wc
      51     688    4759
: pearson FSS 47 $; 

There are two ways to view this. Either it is great that the logs are kept and so I have all this historical data, or it is a pain as getting rid of log files becomes more of a chore. Indeed this is encouraging me to move all the logfiles into their own file systems so that the management of those logfiles is more granular.

At the very least it seems to me that OpenSolaris should sort out where its log files are going, stop the messages going into /var/adm and move them to /var/log, which should then be its own file system.
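
If I do go down that route, creating the file system is trivial; a sketch, with a made up dataset name, and noting that anything already in /var/log would need copying across first:

# zfs create -o compression=on -o mountpoint=/var/log rpool/varlog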

Tuesday Feb 17, 2009

VirtualBox pause and resume

I have a VirtualBox running Ubuntu that is acting as a print server for my HP Officejet J6410, and although that works well it chews a significant amount of CPU resource which, given that the printer is not in use most of the time, seems a bit of a waste, especially when the Sun Ray server can have 4 active sessions running on it. So now I am running this simple script:

#!/bin/ksh93 -p 

logger -p daemon.notice -t ${0##*/}[$$] |&

exec >&p 2>&1

while :
do
        echo resuming vbox
        su - vbox -c "VBoxManage -nologo controlvm ubunto resume"

        ssh print-server sudo ntpdate 3.uk.pool.ntp.org

        while ping printer 2 > /dev/null 2>&1
        do
                sleep 60
        done
        echo pausing vbox
        su - vbox -c "VBoxManage -nologo controlvm ubunto pause"
        until ping printer 2 > /dev/null 2>&1
        do
                sleep 1
        done
done

After the usual magic to redirect its output to syslog, this just loops forever pinging the printer. If the printer is alive the VirtualBox is resumed, and if the printer is not alive the VirtualBox is paused. So as soon as the printer is turned on the VirtualBox is ready, and then within 60 seconds of the printer being turned off the VirtualBox is paused.


So that the clock on the VirtualBox is kept correct (the guest additions are, I am told, supposed to do this for free, but in practice they do not for me), after resuming there is the ssh onto the virtual machine to set its clock*. So I have a workaround in place for VirtualBox, which is itself a workaround for a defect in Solaris, and that workaround is also working around another issue with Solaris.


Life on the bleeding edge is always fun!



*) I would normally use the address of my server to sync with, but thanks to the issues with build 108 I currently don't have a working ntp server on the system.

Sunday Feb 08, 2009

New Home printer VirtualBox & Ubuntu to the rescue

I've got a new printer for home, an HP OfficeJet J6410, but the bad news is that on Solaris the hpijs server for ghostscript fails, and while I have made some progress debugging this it will clearly take some time to get to the root cause. As part of the debugging I ran up an Ubuntu Linux in VirtualBox to see if the problem was a generic ghostscript/CUPS problem.

Since Ubuntu had no problem printing, for the short term I have given the Ubuntu system a virtual network interface so that I can configure the Sun Ray server to print via the Ubuntu print-server which, if I use the cups-lpd server, will do the formatting for the remote hosts. The remote hosts therefore just have to send PostScript to the print-server, and only it needs to be able to run the hpijs server.

I'm running the VirtualBox headless with gdm turned off, in fact with everything except CUPS, xinetd and sshd turned off. I've not written an SMF service to start the VirtualBox yet, in the hope that this is in fact a very short-lived situation as the Solaris print service will be up and running soon. Anyway, I now have working printing on Solaris.
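
For anyone wondering, starting the VirtualBox headless is just something like this (assuming the virtual machine is called ubunto, as mine is):

$ VBoxHeadless --startvm ubunto &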

Now back to what I know about ghostscript and the hpijs service. If you know more (which would not be hard) then let me know.

The problem is that ghostscript called from foomatic-gswrapper fails with a range check error:

: pearson FSS 51 $; truss -faeo /tmp/tr.$$ /usr/lib/lp/bin/foomatic-gswrapper>
truss -faeo /tmp/tr.$$  /usr/lib/lp/bin/foomatic-gswrapper  -D -dBATCH -dPARANOIDSAFER -dQUIET -dNOPAUSE -sDEVICE=ijs -sIjsServer=hpijs -dIjsUseOutputFD -sOutputFile=-
foomatic-gswrapper: gs '-D' '-dBATCH' '-dPARANOIDSAFER' '-dQUIET' '-dNOPAUSE' '-sDEVICE=ijs' '-sIjsServer=hpijs' '-dIjsUseOutputFD' '-sOutputFile=/dev/fd/3' 3>&1 1>&2
unable to set Quality=0, ColorMode=2, MediaType=0, err=4
following will be used Quality=0, ColorMode=2, MediaType=0
unable to set paper size=17, err=4
Unrecoverable error: rangecheck in setscreen
Operand stack:
    -0.0  0  --nostringval--
: pearson FSS 52 $; 

The errors “unable to set Quality” and “unable to set paper size” appear to come from the hpijs server, which returns NO_PRINTER_SELECTED (4) if these routines are called before the printer context is set up.

Quite how and why the printer context is not getting set up is the question. I've now rebuilt both ghostscript and the hpijs server from source and worked out how to get foomatic to use the new ones, and I still have the issue, but at least I should now be able to do the diagnosis in my own time while the printer is usable by everyone on the local network.

Building ghostscript was simple:

$ ./configure --prefix=/opt/cjgsw

Then make and make install.

Building hpijs was less simple and I am not entirely sure I have it built correctly; in fact I strongly suspect it is not built correctly. At least I have my binaries behaving the same way as the Solaris binaries, i.e. the failure mode is the same. My configure line was:

 $ ./configure LIBS="-lsocket -lnsl" --prefix=/opt/cjgsw --disable-pp-build --enable-static --disable-shared

All the --enable-static and --disable-shared options are required or the linker gets upset during the build. I think the next step would be an update to gcc....

Wednesday Feb 04, 2009

40" Sun Ray Display

I managed to buy 2 Sun Ray 2s off eBay and one of them is now in place in the living room driving our 40” TV.


Combine this with a KeySonic wireless mini keyboard and the DTU does not only act as a photo frame. The Sun Ray unit is attached to the underside of the shelf, as the top unit in the pile is a Virgin cable TV recorder which does not like having anything on top blocking the air flow. Thanks to the Sun Ray 2 being so light, 5 strips of sticky-back velcro do the trick, so well that it really is going nowhere, to the point that I could not remove it to plug the USB keyboard adapter directly into the back of the unit. The keyboard adapter has a button you have to press once it is plugged in to pair it with the keyboard. Alas with the Sun Ray in this configuration the button faces upwards, so there is a short USB cable hidden back there.

Networking is provided via Ethernet over mains.

The keyboard has impressive range and a really nice touch pad that pretends to have a scroll wheel down one side. However I've not yet got the keyboard map for it right but it only arrived an hour ago so there is time.

About

This is the old blog of Chris Gerhard. It has mostly moved to http://chrisgerhard.wordpress.com
