Friday Apr 10, 2009

Native solaris printing working

The solution to my HP printer not working on Solaris is to use CUPS. Since I have a full install and Solaris now has the packages installed, all I had to do was switch the service over:

$ pfexec /usr/sbin/print-service -s cups

then configure the printer using http://localhost:631 in the same way as I did with Ubuntu. Now I don't need the virtual machine running, which is a bonus. I think CUPS may be a better fit for me at home: since I don't have a name service running, the benefits of the lp system are very limited.
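As a quick sanity check after switching (the -q flag to print-service and the lpstat options here are from my reading of the man pages, so treat this as a sketch rather than gospel):

$ /usr/sbin/print-service -q       # should now report cups as the active print service
$ lpstat -p                        # printers known to CUPS
$ lpstat -d                        # the default destination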


The bug is 6826086: printer drivers from package SUNWhpijs are not updated with HPLIP

Sunday Apr 05, 2009

Recovering our Windows PC

I had reason to discover whether my solution for backing up the Windows PC worked. Apparently the PC had not been working properly for a while, but no one had mentioned that to me. The symptoms were:

  1. No menu bar at the bottom of the screen. It was almost as if the screen was the wrong size, but how it got changed is a mystery.

  2. It was claiming it needed to revalidate itself as the hardware had changed, which it categorically had not, and I had 2 days to sort it out. Apparently this message had been around for a few days (weeks?) but had been ignored.

Now I'm sure I could have had endless fun reading forums to find out how to fix these things, but it was Saturday night and I was going cycling in the morning. So it was time to boot Solaris and restore the backup. First I took a backup of what was on the disk, just in case I ever get a desire to relive the issue. Then I just needed one script to restore the saved image over ssh. The script is:

: pearson FSS 14 $; cat /usr/local/sbin/xp_restore 
#!/bin/ksh 

exec dd of=/dev/rdsk/c0d0p1 bs=1k
: pearson FSS 15 $; 

and the command was:

$ ssh pc pfexec /usr/local/sbin/xp_restore < backup.dd

having chosen the desired snapshot. Obviously the command had to be added to /etc/security/exec_attr so that pfexec would run it. Then I just left that running overnight. In the morning the system booted up just fine; it complained about the virus definitions being out of date and various things needing updates, but everything worked. Alas, doing this before I went cycling made me late enough to miss the peloton, if it was there.
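For what it's worth, the exec_attr entry looks something like the line below; the profile name "XP Restore" is just one I've made up for illustration, and the matching user_attr line grants it to whoever runs the restore:

XP Restore:solaris:cmd:::/usr/local/sbin/xp_restore:uid=0

and in /etc/user_attr:

cjg::::profiles=XP Restore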

Saturday Mar 28, 2009

snapshot on unlink?

This thread on OpenSolaris made me wonder how hard it would be to take a snapshot before any file is deleted. It turns out that using dtrace it is not hard at all: monitor the unlink and unlinkat system calls and have a short script take a snapshot before each one completes:

#!/bin/ksh93

# Take a snapshot, named unlink_<timestamp>, of the file system containing $2.
function snapshot
{
	eval $(print x=$2)	# strip the quotes dtrace put around the path

	# walk up the path until we find a ZFS file system root (or give up at /)
	until [[ "$x" == "/" || -d "$x/.zfs/snapshot" ]]
	do
		x="${x%/*}"
	done
	if [[ "$x" == "/" || "$x" == "/tmp" ]]
	then
		return
	fi
	if [[ -d "$x/.zfs/snapshot" ]]
	then
		print mkdir "$x/.zfs/snapshot/unlink_$1"
		pfexec mkdir "$x/.zfs/snapshot/unlink_$1"
	fi
}
# Build an absolute path from the root, cwd and file name reported by dtrace,
# then take the snapshot.
function parse
{
	eval $(print x=$4)	# strip the quotes dtrace put around the path

	if [[ "${x%%/*}" == "" ]]
	then
		snapshot $1 "$2$4"
	else
		snapshot $1 "$2$3/$4"
	fi
}
pfexec dtrace -wqn 'syscall::fsat:entry /pid != '$$' && uid > 100 && arg0 == 5/ {
	printf("%d %d \"%s\" \"%s\" \"%s\"\n",
	pid, walltimestamp, root, cwd, copyinstr(arg2)); stop()
}
syscall::unlink:entry /pid != '$$' && uid > 100 / {
	printf("%d %d \"%s\" \"%s\" \"%s\"\n",
	pid, walltimestamp, root, cwd, copyinstr(arg0)); stop()
}' | while read pid timestamp root cwd file
do
	print prun $pid
	parse $timestamp $root $cwd $file
	pfexec prun $pid
done

Now this is just a Saturday-night proof of concept. It should be noted that it has a significant performance impact and effectively single-threads all calls to unlink.

Also you end up with lots of snapshots:

cjg@brompton:~$ zfs list -t snapshot -o name,used | grep unlink

rpool/export/home/cjg@unlink_1238270760978466613                           11.9M

rpool/export/home/cjg@unlink_1238275070771981963                             59K

rpool/export/home/cjg@unlink_1238275074501904526                             59K

rpool/export/home/cjg@unlink_1238275145860458143                             34K

rpool/export/home/cjg@unlink_1238275168440000379                            197K

rpool/export/home/cjg@unlink_1238275233978665556                            197K

rpool/export/home/cjg@unlink_1238275295387410635                            197K

rpool/export/home/cjg@unlink_1238275362536035217                            197K

rpool/export/home/cjg@unlink_1238275429554657197                            136K

rpool/export/home/cjg@unlink_1238275446884300017                            350K

rpool/export/home/cjg@unlink_1238275491543380576                            197K

rpool/export/home/cjg@unlink_1238275553842097361                            197K

rpool/export/home/cjg@unlink_1238275643490236001                             63K

rpool/export/home/cjg@unlink_1238275644670212158                             63K

rpool/export/home/cjg@unlink_1238275646030183268                               0

rpool/export/home/cjg@unlink_1238275647010165407                               0

rpool/export/home/cjg@unlink_1238275648040143427                             54K

rpool/export/home/cjg@unlink_1238275649030124929                             54K

rpool/export/home/cjg@unlink_1238275675679613928                            197K

rpool/export/home/cjg@unlink_1238275738608457151                            198K

rpool/export/home/cjg@unlink_1238275800827304353                           57.5K

rpool/export/home/cjg@unlink_1238275853116324001                           32.5K

rpool/export/home/cjg@unlink_1238275854186304490                           53.5K

rpool/export/home/cjg@unlink_1238275862146153573                            196K

rpool/export/home/cjg@unlink_1238275923255007891                           55.5K

rpool/export/home/cjg@unlink_1238275962114286151                           35.5K

rpool/export/home/cjg@unlink_1238275962994267852                           56.5K

rpool/export/home/cjg@unlink_1238275984723865944                           55.5K

rpool/export/home/cjg@unlink_1238275986483834569                             29K

rpool/export/home/cjg@unlink_1238276004103500867                             49K

rpool/export/home/cjg@unlink_1238276005213479906                             49K

rpool/export/home/cjg@unlink_1238276024853115037                           50.5K

rpool/export/home/cjg@unlink_1238276026423085669                           52.5K

rpool/export/home/cjg@unlink_1238276041792798946                           50.5K

rpool/export/home/cjg@unlink_1238276046332707732                           55.5K

rpool/export/home/cjg@unlink_1238276098621721894                             66K

rpool/export/home/cjg@unlink_1238276108811528303                           69.5K

rpool/export/home/cjg@unlink_1238276132861080236                             56K

rpool/export/home/cjg@unlink_1238276166070438484                             49K

rpool/export/home/cjg@unlink_1238276167190417567                             49K

rpool/export/home/cjg@unlink_1238276170930350786                             57K

rpool/export/home/cjg@unlink_1238276206569700134                           30.5K

rpool/export/home/cjg@unlink_1238276208519665843                           58.5K

rpool/export/home/cjg@unlink_1238276476484690821                             54K

rpool/export/home/cjg@unlink_1238276477974663478                             54K

rpool/export/home/cjg@unlink_1238276511584038137                           60.5K

rpool/export/home/cjg@unlink_1238276519053902818                             71K

rpool/export/home/cjg@unlink_1238276528213727766                             62K

rpool/export/home/cjg@unlink_1238276529883699491                             47K

rpool/export/home/cjg@unlink_1238276531683666535                           3.33M

rpool/export/home/cjg@unlink_1238276558063169299                           35.5K

rpool/export/home/cjg@unlink_1238276559223149116                           62.5K

rpool/export/home/cjg@unlink_1238276573552877191                           35.5K

rpool/export/home/cjg@unlink_1238276584602668975                           35.5K

rpool/export/home/cjg@unlink_1238276586002642752                             53K

rpool/export/home/cjg@unlink_1238276586522633206                             51K

rpool/export/home/cjg@unlink_1238276808718681998                            216K

rpool/export/home/cjg@unlink_1238276820958471430                           77.5K

rpool/export/home/cjg@unlink_1238276826718371992                             51K

rpool/export/home/cjg@unlink_1238276827908352138                             51K

rpool/export/home/cjg@unlink_1238276883227391747                            198K

rpool/export/home/cjg@unlink_1238276945366305295                           58.5K

rpool/export/home/cjg@unlink_1238276954766149887                           32.5K

rpool/export/home/cjg@unlink_1238276955946126421                           54.5K

rpool/export/home/cjg@unlink_1238276968985903108                           52.5K

rpool/export/home/cjg@unlink_1238276988865560952                             31K

rpool/export/home/cjg@unlink_1238277006915250722                           57.5K

rpool/export/home/cjg@unlink_1238277029624856958                             51K

rpool/export/home/cjg@unlink_1238277030754835625                             51K

rpool/export/home/cjg@unlink_1238277042004634457                           51.5K

rpool/export/home/cjg@unlink_1238277043934600972                             52K

rpool/export/home/cjg@unlink_1238277045124580763                             51K

rpool/export/home/cjg@unlink_1238277056554381122                             51K

rpool/export/home/cjg@unlink_1238277058274350998                             51K

rpool/export/home/cjg@unlink_1238277068944163541                             59K

rpool/export/home/cjg@unlink_1238277121423241127                           32.5K

rpool/export/home/cjg@unlink_1238277123353210283                           53.5K

rpool/export/home/cjg@unlink_1238277136532970668                           52.5K

rpool/export/home/cjg@unlink_1238277152942678490                               0

rpool/export/home/cjg@unlink_1238277173482320586                               0

rpool/export/home/cjg@unlink_1238277187222067194                             49K

rpool/export/home/cjg@unlink_1238277188902043005                             49K

rpool/export/home/cjg@unlink_1238277190362010483                             56K

rpool/export/home/cjg@unlink_1238277228691306147                           30.5K

rpool/export/home/cjg@unlink_1238277230021281988                           51.5K

rpool/export/home/cjg@unlink_1238277251960874811                             57K

rpool/export/home/cjg@unlink_1238277300159980679                           30.5K

rpool/export/home/cjg@unlink_1238277301769961639                             50K

rpool/export/home/cjg@unlink_1238277302279948212                             49K

rpool/export/home/cjg@unlink_1238277310639840621                             28K

rpool/export/home/cjg@unlink_1238277314109790784                           55.5K

rpool/export/home/cjg@unlink_1238277324429653135                             49K

rpool/export/home/cjg@unlink_1238277325639636996                             49K

rpool/export/home/cjg@unlink_1238277360029166691                            356K

rpool/export/home/cjg@unlink_1238277375738948709                           55.5K

rpool/export/home/cjg@unlink_1238277376798933629                             29K

rpool/export/home/cjg@unlink_1238277378458911557                             50K

rpool/export/home/cjg@unlink_1238277380098888676                             49K

rpool/export/home/cjg@unlink_1238277397738633771                             48K

rpool/export/home/cjg@unlink_1238277415098386055                             49K

rpool/export/home/cjg@unlink_1238277416258362893                             49K

rpool/export/home/cjg@unlink_1238277438388037804                             57K

rpool/export/home/cjg@unlink_1238277443337969269                           30.5K

rpool/export/home/cjg@unlink_1238277445587936426                           51.5K

rpool/export/home/cjg@unlink_1238277454527801430                           50.5K

rpool/export/home/cjg@unlink_1238277500967098623                            196K

rpool/export/home/cjg@unlink_1238277562866135282                           55.5K

rpool/export/home/cjg@unlink_1238277607205456578                             49K

rpool/export/home/cjg@unlink_1238277608135443640                             49K

rpool/export/home/cjg@unlink_1238277624875209357                             57K

rpool/export/home/cjg@unlink_1238277682774484369                           30.5K

rpool/export/home/cjg@unlink_1238277684324464523                             50K

rpool/export/home/cjg@unlink_1238277685634444004                             49K

rpool/export/home/cjg@unlink_1238277686834429223                           75.5K

rpool/export/home/cjg@unlink_1238277700074256500                             48K

rpool/export/home/cjg@unlink_1238277701924235244                             48K

rpool/export/home/cjg@unlink_1238277736473759068                           49.5K

rpool/export/home/cjg@unlink_1238277748313594650                           55.5K

rpool/export/home/cjg@unlink_1238277748413593612                             28K

rpool/export/home/cjg@unlink_1238277750343571890                             48K

rpool/export/home/cjg@unlink_1238277767513347930                           49.5K

rpool/export/home/cjg@unlink_1238277769183322087                             50K

rpool/export/home/cjg@unlink_1238277770343306935                             48K

rpool/export/home/cjg@unlink_1238277786193093885                             48K

rpool/export/home/cjg@unlink_1238277787293079433                             48K

rpool/export/home/cjg@unlink_1238277805362825259                           49.5K

rpool/export/home/cjg@unlink_1238277810602750426                            195K

rpool/export/home/cjg@unlink_1238277872911814531                            195K

rpool/export/home/cjg@unlink_1238277934680920214                            195K

rpool/export/home/cjg@unlink_1238277997220016825                            195K

rpool/export/home/cjg@unlink_1238278063868871589                           54.5K

rpool/export/home/cjg@unlink_1238278094728323253                             61K

rpool/export/home/cjg@unlink_1238278096268295499                             63K

rpool/export/home/cjg@unlink_1238278098518260168                             52K

rpool/export/home/cjg@unlink_1238278099658242516                             56K

rpool/export/home/cjg@unlink_1238278103948159937                             57K

rpool/export/home/cjg@unlink_1238278107688091854                             54K

rpool/export/home/cjg@unlink_1238278113907980286                             62K

rpool/export/home/cjg@unlink_1238278116267937390                             64K

rpool/export/home/cjg@unlink_1238278125757769238                            196K

rpool/export/home/cjg@unlink_1238278155387248061                            136K

rpool/export/home/cjg@unlink_1238278160547156524                            229K

rpool/export/home/cjg@unlink_1238278165047079863                            351K

rpool/export/home/cjg@unlink_1238278166797050407                            197K

rpool/export/home/cjg@unlink_1238278168907009714                             55K

rpool/export/home/cjg@unlink_1238278170666980686                            341K

rpool/export/home/cjg@unlink_1238278171616960684                           54.5K

rpool/export/home/cjg@unlink_1238278190336630319                            777K

rpool/export/home/cjg@unlink_1238278253245490904                            329K

rpool/export/home/cjg@unlink_1238278262235340449                            362K

rpool/export/home/cjg@unlink_1238278262915331213                            362K

rpool/export/home/cjg@unlink_1238278264915299508                            285K

rpool/export/home/cjg@unlink_1238278310694590970                             87K

rpool/export/home/cjg@unlink_1238278313294552482                             66K

rpool/export/home/cjg@unlink_1238278315014520386                             31K

rpool/export/home/cjg@unlink_1238278371773568934                            258K

rpool/export/home/cjg@unlink_1238278375673503109                            198K

rpool/export/home/cjg@unlink_1238278440802320314                            138K

rpool/export/home/cjg@unlink_1238278442492291542                           55.5K

rpool/export/home/cjg@unlink_1238278445312240229                           2.38M

rpool/export/home/cjg@unlink_1238278453582077088                            198K

rpool/export/home/cjg@unlink_1238278502461070222                            256K

rpool/export/home/cjg@unlink_1238278564359805760                            256K

rpool/export/home/cjg@unlink_1238278625738732194                           63.5K

rpool/export/home/cjg@unlink_1238278633428599541                           61.5K

rpool/export/home/cjg@unlink_1238278634568579678                            137K

rpool/export/home/cjg@unlink_1238278657838186760                            288K

rpool/export/home/cjg@unlink_1238278659768151784                            223K

rpool/export/home/cjg@unlink_1238278661518121640                            159K

rpool/export/home/cjg@unlink_1238278664378073421                            136K

rpool/export/home/cjg@unlink_1238278665908048641                            138K

rpool/export/home/cjg@unlink_1238278666968033048                            136K

rpool/export/home/cjg@unlink_1238278668887996115                            281K

rpool/export/home/cjg@unlink_1238278670307970765                            227K

rpool/export/home/cjg@unlink_1238278671897943665                            162K

rpool/export/home/cjg@unlink_1238278673197921775                            164K

rpool/export/home/cjg@unlink_1238278674027906895                            164K

rpool/export/home/cjg@unlink_1238278674657900961                            165K

rpool/export/home/cjg@unlink_1238278675657885128                            165K

rpool/export/home/cjg@unlink_1238278676647871187                            241K

rpool/export/home/cjg@unlink_1238278678347837775                            136K

rpool/export/home/cjg@unlink_1238278679597811093                            199K

rpool/export/home/cjg@unlink_1238278687297679327                            197K

rpool/export/home/cjg@unlink_1238278749616679679                            197K

rpool/export/home/cjg@unlink_1238278811875554411                           56.5K

cjg@brompton:~$ 

It's a good job that snapshots are cheap. I'm not going to be doing this all the time, but it makes you think about what could be done.
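When the time comes to get rid of them, a little sketch along these lines should do it; it assumes that anything with "@unlink_" in its name is one of these proof-of-concept snapshots and not something you care about:

#!/bin/ksh93
# destroy all the unlink_* snapshots created by the proof of concept
zfs list -H -t snapshot -o name | grep '@unlink_' | while read snap
do
	print pfexec zfs destroy "$snap"
	pfexec zfs destroy "$snap"
done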

Friday Mar 27, 2009

zfs list webrev

I've just posted the webrev for review of an RFE to “zfs list”:

PSARC 2009/171 zfs list -d and zfs get -d
6762432 zfs list --depth

This will allow you to limit the depth to which a recursive listing of zfs file systems will go. This is particularly useful if you only want to list the snapshots of the current file system.
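For example, to see just the snapshots of my home directory's file system and not those of every descendant, the proposed option would be used something like this (syntax as in the webrev, so it may change before integration):

$ zfs list -t snapshot -d 1 rpool/export/home/cjg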

The webrev is here:

http://cr.opensolaris.org/~cjg/zfs_list/zfs_list-d-2/

Comments welcome.

Friday Mar 20, 2009

Limiting scsi.d output to IOs that take a long time.

I have added support to scsi.d for reporting only on SCSI packets that take more than a certain amount of time to complete. This is particularly useful when combined with the PERF_REPORT option: you see just the IO requests that took over the time threshold while still collecting statistics on all the other IO requests.

To use this you have to specify “-D REPORT_OVERTIME=X”, where X is the threshold in nanoseconds. Here is an example that will only print the details of IO requests that took more than 250ms:

# scsi.d -D REPORT_OVERTIME=$((250*1000*1000)) -D PERF_REPORT 
Only reporting IOs longer than 250000000ns
Hit Control C to interrupt
00005.388800363 glm0:-> 0x2a WRITE(10) address 00:00, lba 0x00270e67, len 0x000008, control 0x00 timeout 60 CDBP 300002df438 1 sched(0) cdb(10) 2a0000270e6700000800
00005.649494475 glm0:<- 0x2a WRITE(10) address 00:00, lba 0x00270e67, len 0x000008, control 0x00 timeout 60 CDBP 300002df438, reason 0x0 (COMPLETED) pkt_state 0x1f state 0x0 Success Time 260833us
00005.384612799 glm0:-> 0x0a  WRITE(6) address 00:00, lba 0x048564, len 0x000001, control 0x00 timeout 60 CDBP 30002541ac0 1 sched(0) cdb(6) 0a0485640100
00005.716416658 glm0:<- 0x0a  WRITE(6) address 00:00, lba 0x048564, len 0x000001, control 0x00 timeout 60 CDBP 30002541ac0, reason 0x0 (COMPLETED) pkt_state 0x1f state 0x0 Success Time 331939us
00005.385907691 glm0:-> 0x0a  WRITE(6) address 00:00, lba 0x0605b4, len 0x000001, control 0x00 timeout 60 CDBP 300035637a0 1 sched(0) cdb(6) 0a0605b40100
00005.773925990 glm0:<- 0x0a  WRITE(6) address 00:00, lba 0x0605b4, len 0x000001, control 0x00 timeout 60 CDBP 300035637a0, reason 0x0 (COMPLETED) pkt_state 0x1f state 0x0 Success Time 388153us
00005.389078533 glm0:-> 0x2a WRITE(10) address 00:00, lba 0x004b19d3, len 0x000003, control 0x00 timeout 60 CDBP 300002df0f8 1 sched(0) cdb(10) 2a00004b19d300000300
00005.824242527 glm0:<- 0x2a WRITE(10) address 00:00, lba 0x004b19d3, len 0x000003, control 0x00 timeout 60 CDBP 300002df0f8, reason 0x0 (COMPLETED) pkt_state 0x1f state 0x0 Success Time 435303us
^C

  glm                                                       0
           value  ------------- Distribution ------------- count    
         4194304 |                                         0        
         8388608 |                                         3        
        16777216 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@        678      
        33554432 |@@@@@                                    98       
        67108864 |@                                        27       
       134217728 |                                         8        
       268435456 |                                         3        
       536870912 |                                         0        

# 

The implementation of this uses dtrace speculations and as such may require some tuning of the various settings. Specifically, I have set the number of speculation buffers to 1000, which should be enough for all but the busiest systems; if you do reach that limit you can increase it using the following options (an illustrative invocation follows the list):


-D NSPEC=N

Set the number of speculations to this value.

-D SPECSIZE=N

Set the size of the speculation buffer. This should be 200 times the value of NSPEC.

-D CLEANRATE=N

Specify the clean rate.
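By way of illustration only, a busy system might bump the limits along these lines (the numbers here are plucked from the air, not recommendations):

# scsi.d -D REPORT_OVERTIME=$((250*1000*1000)) -D PERF_REPORT -D NSPEC=5000 -D SPECSIZE=$((5000*200))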

As usual the script is available here. It is version 1.18.

Sunday Mar 15, 2009

Converting flac encoded audio to mp3.

Solaris has the flac command, which will happily decode flac-encoded files, and metaflac to read the flac metadata, but you need to download lame, either precompiled or as source from SourceForge to build yourself. Then it is a simple matter to convert your flac-encoded files:

$ flac -c -d file.flac | lame - file.mp3

You can add various flags to change the bit rate and put tags into the mp3. I feel sure someone must have written a script to convert an entire library, but I could not find one, so here is the one I wrote:

#!/bin/ksh93
umask 22
SRC="$1"
DST="$2"
BITRATE=128

export PATH=/opt/lame/bin:/usr/sbin:/usr/sfw/bin:/usr/bin

function flactags2lametags
{
        typeset IFS="$IFS ="
        typeset key value
        metaflac --export-tags /dev/fd/1 "$1" | while read key value
        do
                print typeset "${key}=\"$(print $value| sed 's/["'\'']/\\&/g')\";"
        done
}

function cleanup
{
        if [[ "$FILE" != "" ]]
        then
                rm "$FILE"
        fi
}

function convert_file
{
        typeset -i ret=0
        typeset f="${1%.flac}"
        if [[ "$f" != "${1}" ]]
        then
                typeset out="${DST}/${f}.mp3"
                if ! [[ -f "${out}" ]] || [[ "$1" -nt "$out" ]]
                then
                        print $out

                        eval $(flactags2lametags "$1") 
                        FILE="$out"
                        flac --silent -c -d "$1" | \
                                lame --quiet -b ${BITRATE} -h \
                                ${ARTIST:+--ta }"${ARTIST}" \
                                ${ALBUM:+--tl }"${ALBUM}" \
                                ${TRACKNUMBER:+--tn }"${TRACKNUMBER}" \
                                ${DATE:+--ty }"${DATE%%-*}" \
                                - "${out}"
                        ret=$?
                        FILE=""
                fi
        elif [[ $1 != "${1%.jpg}" ]]
        then
                [[ -f "${DST}/${1}" ]] || cp "$1" "${DST}"
        fi
        return ${ret}
}

function do_dir
{
        typeset i
        for i in "$1"/*
        do
                if [[ -f "$i" ]]
                then
                        convert_file "$i" || return 1
                elif [[ -d "$i" ]]
                then
                        if test -d "${DST}/$i" || mkdir "${DST}/$i"
                        then
                                do_dir "$i" || return 1
                        fi
                fi
        done
        return 0
}

function usage
{
        print USAGE: $1 src_dir dest_dir >&2
        exit 1
}

trap cleanup EXIT

(( $# == 2 )) || usage $0
if test -d "${DST}/$1" || mkdir "${DST}/$1"
then
        do_dir "$1"
fi
exit $?
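If the script is saved as, say, flac2mp3 (the name and the music paths below are just for illustration), it is run from the directory containing the source tree, since the source directory name is mirrored under the destination:

$ cd /tank/music
$ flac2mp3 flac /tank/music/mp3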

Tuesday Mar 10, 2009

Update to scsi.d

I've just released a new version of scsi.d that fixes a strange bug that only affected Solaris 10 but would result in scsi.d not starting:

# dtrace -qCs scsi.d-1.16  
dtrace: failed to compile script scsi.d-1.16: "/usr/include/sys/scsi/scsi_pkt.h", line 33: incomplete struct/union/enum struct scsi_address: pkt_address
# 

The solution was simple, even if I still don't really understand why the header compiled fine on OpenSolaris but not on Solaris 10.

Sunday Mar 08, 2009

AV codecs for Solaris

One of the big complaints about using OpenSolaris and Solaris is multimedia support. The lack of codecs to play "common", if not "open", formats is irritating.

I've been using the Fluendo mp3 codecs for a while as they have a compelling price point, being free. Fluendo also have a bundle of other codecs that they supply for OpenSolaris, covering a number of other popular formats. I decided the €28 would be worth it and have now installed the codecs on my home server. I now have gstreamer support for:

: pearson FSS 1 $; gst-inspect | grep -i flue
fluasf:  flurtspwms: Fluendo WMS RTSP Extension
fluasf:  fluasfcmdparse: Fluendo ASF Command Parser
fluasf:  fluasfdemux: Fluendo ASF Demuxer
fluwmvdec:  fluwmvdec: Fluendo WMV Decoder
fluaacdec:  fluaacdec: Fluendo AAC Decoder
flumms:  flummssrc: Fluendo MMS source
fludivx3dec:  fludivx3dec: Fluendo DivX 3.11 Decoder
flulpcm:  flulpcmdec: Fluendo LPCM decoder
fluwma:  fluwmsdec: Fluendo WMS Decoder
fluwma:  fluwmadec: Fluendo WMA Decoder (STD + PRO + LSL + WMS)
fluh264dec:  fluh264dec: Fluendo H264 Decoder
flumpeg4vdec:  flumpeg4vdec: Fluendo MPEG-4 ASP Video Decoder
flumpeg2vdec:  flumpeg2vdec: Fluendo MPEG-2 Video Decoder
flump3dec:  flump3dec: Fluendo MP3 Decoder (C build)
: pearson FSS 2 $; 

Alas, on the Sun Rays the video performance is still less than great, but on the console it is faultless, and I am finding that Totem is not quite as useless as I had previously thought.

It would be really cool if you could get OpenSolaris codecs via an IPS repository.

The next thing I want is to be able to edit video.

Tuesday Mar 03, 2009

SqueezeBox!!!!

I have a real SqueezeBox at last. A wonderful birthday present from a wonderful person.

Getting the SqueezeCentre software working on Solaris is hard, or at least getting perl and its required modules is really hard. Luckily for me a colleague had already done this and helped me out. In due course we hope to get an IPS package but, since I am running Nevada, that is no good for me at the moment.

The SqueezeBox itself is fantastic, although wireless turned out to be too unreliable, with the box not being able to get beyond 25% into its buffer, and tonight with the bad weather that dropped to 1% and no music. The final resting place for the SqueezeBox has a CAT5 connection, but until it can move there it is sharing one of the Ethernet-over-power bricks with the 40" Sun Ray.

I now want one for each room.....

Saturday Feb 21, 2009

[Open]Solaris logfiles and ZFS root

This week I had reason to want to see how often the script that controls the access hours of Sun Ray users actually did any work, so I went off to look in the messages files, only to discover that there were only four and they went back only to January 11.

: pearson FSS 22 $; ls -l mess*
-rw-r--r--   1 root     root       12396 Feb  8 23:58 messages
-rw-r--r--   1 root     root      134777 Feb  8 02:59 messages.0
-rw-r--r--   1 root     root       53690 Feb  1 02:06 messages.1
-rw-r--r--   1 root     root      163116 Jan 25 02:01 messages.2
-rw-r--r--   1 root     root       83470 Jan 18 00:21 messages.3
: pearson FSS 23 $; head -1 messages.3
Jan 11 05:29:14 pearson pcplusmp: [ID 444295 kern.info] pcplusmp: ide (ata) instance #1 vector 0xf ioapic 0x2 intin 0xf is bound to cpu 1
: pearson FSS 24 $; 

I am certain that the choice of only four log files was not a conscious decision I made, but it did make me ponder whether logfile management should be revisited in the light of ZFS root, since clearly if you have snapshots firing the logs could go back a lot further:

: pearson FSS 40 $; head -1 $(ls -t /.zfs/snapshot/*/var/adm/message*| tail -1)
Dec 14 03:15:14 pearson time-slider-cleanup: [ID 702911 daemon.notice] No more daily snapshots left
: pearson FSS 41 $; 

It did not take long for this shell function to burst into life:

function search_log
{
typeset path
if [[ ${2#/} == $2 ]]
then
        path=${PWD}/$2
else
        path=$2
fi
cat $path /.zfs/snapshot/*$path | egrep $1 | sort -M | uniq
}

Not a generalized solution, but one that works when your root file system contains all your logs and, provided you remember to escape any globbing on the command line, will search all the log files:

: pearson FSS 46 $; search_log block /var/adm/messages\* | wc
      51     688    4759
: pearson FSS 47 $; 

There are two ways to view this. Either it is great that the logs are kept and so I have all this historical data, or it is a pain as getting rid of log files becomes more of a chore. Indeed, this is encouraging me to move all the logfiles into their own file systems so that their management is more granular.

At the very least it seems to me that OpenSolaris should sort out where its log files are going, stop putting messages in /var/adm and move them to /var/log, which should then be its own file system.
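Something along these lines is roughly what I have in mind, though the dataset name is made up and the existing contents of /var/log would have to be migrated across before flipping the mountpoint, so treat it as a sketch only:

$ pfexec zfs create -o mountpoint=/var/newlog rpool/varlog
$ pfexec rsync -a /var/log/ /var/newlog/        # or cpio/tar of your choice
$ pfexec zfs set mountpoint=/var/log rpool/varlog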

Tuesday Feb 17, 2009

VirtualBox pause and resume

I have a VirtualBox running Ubuntu that is acting as a print server for my HP OfficeJet J6410. Although that works well, it chews a significant amount of CPU, which, given that the printer is not in use most of the time, seems a bit of a waste, especially when the Sun Ray server can have 4 active sessions running on it. So now I am running this simple script:

#!/bin/ksh93 -p 

logger -p daemon.notice -t ${0##*/}[$$] |&

exec >&p 2>&1

while :
do
        echo resuming vbox
        su - vbox -c "VBoxManage -nologo controlvm ubunto resume"

        ssh print-server sudo ntpdate 3.uk.pool.ntp.org

        while ping printer 2 > /dev/null 2>&1
        do
                sleep 60
        done
        echo pausing vbox
        su - vbox -c "VBoxManage -nologo controlvm ubunto pause"
        until ping printer 2 > /dev/null 2>&1
        do
                sleep 1
        done
done

After the usual magic to redirect its output to syslog, this just loops forever pinging the printer. If the printer is alive the VirtualBox is resumed, and if the printer is not alive the VirtualBox is paused. So as soon as the printer is turned on the VirtualBox is ready, and within 60 seconds of the printer being turned off the VirtualBox is paused.


So that the clock on the VirtualBox guest is kept correct (the guest additions are, I am told, supposed to do this for free, but in practice they do not for me), after resuming there is the ssh onto the virtual machine to set its clock*. So I have a workaround in place for VirtualBox, which is itself a workaround for a defect in Solaris, and that workaround is also working around another issue with Solaris.


Life on the bleeding edge is always fun!



*) I would normally use the address of my server to sync with, but thanks to the issues with build 108 I currently don't have a working ntp server on the system.

Sunday Feb 15, 2009

Build 108

I've managed to upgrade my home server to build 108, which is an important milestone for me as it has the fix for:

6763600: nautilus becomes unusable on a system with 39000 snapshots.

This was rendering nautilus close to unusable for any users who moved out of their lofs-automounted home directories. In particular, any attempt to use it to manage the photo directories was painful.

However all was not smooth as again I hit this bug:

6795892: Sun Ray X servers (both Xsun and Xorg) suffer network problems in snv_106

but since I was expecting this I tried the workaround from bug 6799655 which is the same as the one for 6763600:


In /etc/sock2path change the following lines:

    2   2   0   tcp
    2   2   6   tcp

    26  2   0   tcp
    26  2   6   tcp

    2   1   0   udp
    2   1   17  udp

    26  1   0   udp
    26  1   17  udp

to:

    2   2   0   /dev/tcp
    2   2   6   /dev/tcp

    26  2   0   /dev/tcp6
    26  2   6   /dev/tcp6

    2   1   0   /dev/udp
    2   1   17  /dev/udp

    26   1  0   /dev/udp6
    26   1  17  /dev/udp6

While this got the Sun Rays up it also stopped named from working, spewing errors like this:

Feb 15 15:10:39 pearson named[15558]: [ID 873579 daemon.error] 71/Protocol error
Feb 15 15:10:39 pearson named[15558]: [ID 873579 daemon.error] socket.c:4315: unexpected error:

So I have had to resort to some internal-to-Sun binaries that work around this while the underlying bug is fixed. It is slightly worrying, as I'm left wondering what other victims are out there. One I have already found is ntp, which is a known bug:

6796948: NTP completely broken by I_SETSIG semantics change in snv_106

I suspect that the system will have to revert to build 105 soon.

Friday Feb 13, 2009

Mobile internet on OpenSolaris

I have a new mobile phone which is less steam-driven than the one it replaces and, hopefully, will not power itself off and reset to factory defaults every few days. Being modern, it can act as a USB modem and therefore lets my OpenSolaris laptop work with it.

Following the instructions from James Legg's blog, and some from an email I had been sent (James' instructions were the same as the email), it all worked just fine except for one thing. The phone is a Nokia E71, and by default the USB setting (menu->connectivity->usb) had "USB mode" set to mass storage and "Ask on connection" set to Yes. It seems that was enough for OpenSolaris to see it as a disk drive and then not plumb up the serial devices. Changing the "USB mode" to "PC Suite" resolved this.

The only addition I have made was to add these scripts into /etc/ppp to disable nwam and fix resolv.conf when the link comes up and reverse that when it is taken down.

cjg@brompton:~$ cat /etc/ppp/ip-up
#!/bin/ksh -p

/usr/sbin/svcadm disable -t svc:/network/physical:nwam

mv /etc/resolv.conf /etc/resolv.conf.preppp

cp /etc/ppp/resolv.conf /etc/resolv.conf
chmod 644 /etc/resolv.conf
cjg@brompton:~$ 

and

cjg@brompton:~$ cat /etc/ppp/ip-down
#!/bin/ksh -p

test -f /etc/resolv.conf.preppp && mv /etc/resolv.conf.preppp /etc/resolv.conf

/usr/sbin/svcadm enable nwam
cjg@brompton:~$ 

Not rocket science, and when nwam knows about ppp they will surely go, but until then quite satisfactory.


Finally, create a launcher that will start this at the click of a button. I have cheated here and simply have it start in a terminal so I can stop it by quitting the terminal:

cjg@brompton:~/Desktop$ cat 3G\ network.desktop

[Desktop Entry]
Encoding=UTF-8
Version=1.0
Type=Application
Terminal=true
Icon[en_GB]=/usr/share/icons/gnome/24x24/actions/call-start.png
Name[en_GB]=3G network
Exec=pfexec pppd call vodafone
Name=3G network
Icon=/usr/share/icons/gnome/24x24/actions/call-start.png
cjg@brompton:~/Desktop$ 

Sunday Feb 08, 2009

New Home printer VirtualBox & Ubuntu to the rescue

I've got a new printer for home, an HP OfficeJet J6410, but the bad news is that on Solaris the hpijs server for ghostscript fails, and while I have made some progress debugging this it will clearly take some time to get to the root cause. As part of the debugging I ran up an Ubuntu Linux in VirtualBox to see if the problem was a generic ghostscript/CUPS problem.

Since Ubuntu had no problem printing, for the short term I have given the Ubuntu system a virtual network interface so that I can configure the Sun Ray server to print via the Ubuntu print-server which, if I use the cups-lpd server, will do the formatting for the remote hosts. The remote hosts therefore just have to send PostScript to the print-server, and only it needs to be able to run the hpijs server.
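On the Solaris side the remote queue can then be pointed at the Ubuntu box with the lp tools, something like this (the names "print-server" and "officejet" are just the ones used in this post, and the exact lpadmin incantation may need tweaking for your setup):

$ pfexec lpadmin -p officejet -s print-server
$ lpstat -p officejet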

I'm running the VirtualBox headless with gdm turned off; in fact everything except CUPS, xinetd and sshd is turned off. I've not written an SMF service to start the VirtualBox yet, in the hope that this is in fact a very short-lived situation and the Solaris print service will be up and running soon. Anyway, I now have working printing on Solaris.

Now back to what I know about ghostscript and the hpijs service. If you know more (which would not be hard) then let me know.

The problem is that ghostscript called from foomatic-gswrapper fails with a range check error:

: pearson FSS 51 $; truss -faeo /tmp/tr.$$ /usr/lib/lp/bin/foomatic-gswrapper>
truss -faeo /tmp/tr.$$  /usr/lib/lp/bin/foomatic-gswrapper  -D -dBATCH -dPARANOIDSAFER -dQUIET -dNOPAUSE -sDEVICE=ijs -sIjsServer=hpijs -dIjsUseOutputFD -sOutputFile=-
foomatic-gswrapper: gs '-D' '-dBATCH' '-dPARANOIDSAFER' '-dQUIET' '-dNOPAUSE' '-sDEVICE=ijs' '-sIjsServer=hpijs' '-dIjsUseOutputFD' '-sOutputFile=/dev/fd/3' 3>&1 1>&2
unable to set Quality=0, ColorMode=2, MediaType=0, err=4
following will be used Quality=0, ColorMode=2, MediaType=0
unable to set paper size=17, err=4
Unrecoverable error: rangecheck in setscreen
Operand stack:
    -0.0  0  --nostringval--
: pearson FSS 52 $; 

The errors, "unable to set Quality" and "unable to set paper size", appear to come from the hpijs server, which returns NO_PRINTER_SELECTED (4) if these routines are called before the printer context is set up.

Quite how and why the printer context is not getting set up is the question. I've now rebuilt both ghostscript and the hpijs service from source and worked out how to get foomatic to use the new ones, and I still have the issue, but at least I should now be able to do the diagnosis in my own time while the printer remains usable by everyone on the local network.

Building ghostscript was simple:

$ ./configure --prefix=/opt/cjgsw

Then make and make install.

Building hpijs was less simple and I am not entirely sure I have it built correctly in fact I stronly suspect it is not built correctly. At least I have my binaries behaving the same way as the Solaris binaries, ie the failure mode is the same. My configure line was:

$ ./configure LIBS="-lsocket -lnsl" --prefix=/opt/cjgsw --disable-pp-build --enable-static --disable-shared

The --enable-static and --disable-shared options are required or the linker gets upset during the build. I think the next step would be an update to gcc....

Wednesday Feb 04, 2009

40" Sun Ray Display

I managed to buy two Sun Ray 2s off eBay and one of them is now in place in the living room driving our 40” TV.


Combine this with a KeySonic wireless mini keyboard and the DTU no longer acts only as a photo frame. The Sun Ray unit is attached to the underside of the shelf, as the top unit in the pile is a Virgin cable TV recorder which does not like having anything on top blocking the airflow. Thanks to the Sun Ray 2 being so light, 5 strips of sticky-back velcro do the trick so well that it really is going nowhere, to the point that I could not remove it to plug the USB keyboard adapter directly into the back of the unit. The keyboard adapter has a button you have to press once it is plugged in to pair it with the keyboard; alas, with the Sun Ray in this configuration the button faces upwards, so there is a short USB cable hidden back there.

Networking is provided via Ethernet over mains.

The keyboard has impressive range and a really nice touch pad that pretends to have a scroll wheel down one side. I've not yet got the keyboard map for it right, but it only arrived an hour ago so there is time.

Saturday Jan 31, 2009

Suspend and resume really does just work

The whole rebooting thing last night and the surprise that my laptop had been "up" for 18 days left me wondering how many times I had suspended the system. It turns out this is easy to check in /var/adm/messages, which, thanks to the laptop being hibernated most of the night, fails to get cleaned out by cron, using a short script:

cjg@brompton:~$ nawk '/SunOS/ { if (count > max) { max=count} ; count=0 } /resume/ { count++ } END { print "current",count; print "max",max }' /var/adm/messages 
current 3
max 109
cjg@brompton:~$ 

109 successful suspend and resume cycles is not bad. I can't find a kernel statistic that counts them directly, which I think is a pity. Also I need to keep an eye on the messages files; however, with ZFS root and snapshots going on, the whole log file rotation and clean-up could do with rethinking, even in the non-laptop case.

Friday Jan 30, 2009

OpenSolaris updates VirtualBox

After a short delay, VirtualBox for OpenSolaris has been upgraded. Update manager popped up to tell me, asked if I wanted the upgrade, and created a new boot environment with the package installed. Very cool. Alas, now I have to reboot to activate it, so my laptop won't reach three weeks of uptime:

cjg@brompton:~$ uptime
 10:12pm  up 18 days  0:05,  2 users,  load average: 0.16, 0.27, 0.84
cjg@brompton:~$ beadm list
BE            Active Mountpoint Space  Policy Created          
--            ------ ---------- -----  ------ -------          
opensolaris   -      -          89.37M static 2008-11-27 12:14 
opensolaris-1 -      -          8.39M  static 2008-11-30 13:14 
opensolaris-2 -      -          13.36M static 2008-12-02 11:27 
opensolaris-3 N      /          12.70M static 2008-12-06 20:02 
opensolaris-4 R      -          9.44G  static 2009-01-30 21:42 
cjg@brompton:~$ 

Thursday Jan 29, 2009

lwp_park and lwp_unpark

When you have a program that uses the locking primitives mutex_lock() and mutex_unlock() or their POSIX equivalents, if you truss the process you will often see calls to lwp_park() and lwp_unpark() appearing:

/1:	 1.3658	lwp_park(0x00000000, 0)				= 0
/2:	 1.3659	lwp_unpark(1)					= 0
/1:	 1.3659	lwp_park(0x00000000, 0)				= 0
/2:	 1.3660	lwp_unpark(1)					= 0
/1:	 1.3660	lwp_park(0x00000000, 0)				= 0
/1:	 1.3661	lwp_unpark(2)					= 0
/2:	 1.3661	lwp_park(0x00000000, 0)				= 0

These system calls are, as their names imply, the calls that cause the current LWP to stop (park) and that allow another parked LWP to run (unpark). If we consider this noddy example code:


#include <stdlib.h>
#include <unistd.h>
#include <thread.h>
#include <synch.h>

mutex_t global_lock;
int global_count;

static void *
locker(void *arg) 
{
	while (1) {
		mutex_lock(&global_lock);
		global_count++;
		mutex_unlock(&global_lock);
	}
}

int
main(int argc, char **argv)
{
	int count;

	if (argc == 2) {
		count = strtol(argv[1], NULL, 0);
	}  else {
		count = 2;
	}
	while (--count > 0) {
		thr_create(0, 0, locker, NULL, NULL, NULL);
	}
	locker((void *)time);
}
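If you want to play along, something like the following builds and trusses it; the file name mutex.c is simply what I am assuming the example above is saved as, and the program loops forever so you will need to interrupt it:

$ cc -mt -o mutex mutex.c
$ truss -t lwp_park,lwp_unpark ./mutex 4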

The mutex global_lock is going to be battled over by all the threads that are created. If one of those threads needs to sleep, because it can't get the mutex, then it has to make a system call so that it can stop using the CPU. That system call is lwp_park(). When the other thread, the one that has the mutex, releases the mutex it signals the blocked thread using lwp_unpark(), passing the LWP id of the thread to start. This can be seen in the truss output:


/1:	 1.3658	lwp_park(0x00000000, 0)				= 0
/2:	 1.3659	lwp_unpark(1)					= 0

/1:	 1.3659	lwp_park(0x00000000, 0)				= 0

/2:	 1.3660	lwp_unpark(1)					= 0
/1:	 1.3660	lwp_park(0x00000000, 0)				= 0
/1:	 1.3661	lwp_unpark(2)					= 0
/2:	 1.3661	lwp_park(0x00000000, 0)				= 0

However, the truss output can be a bit misleading. You have to remember that truss only reports on system calls when the system call returns (unless they block long enough to be reported as sleeping), so a call like lwp_park() will sleep until there is a corresponding lwp_unpark() call from another thread. You can see this in the truss output above. LWP 2, on the first line, calls lwp_unpark(1) to unpark LWP 1; at this point LWP 1 returns from what was the blocked lwp_park() call and continues on its merry way. Alas, as can be seen, it does not get very far before it once again finds itself blocked, but that is what happens if your code is just fighting for a lock. If this were a real application then there are many D scripts that could be used to help track down your issue, not least one like this:

pfexec /usr/sbin/dtrace -n 'syscall::lwp_park:entry /execname == "mutex"/
    { @[ustack(5)] = count() }'
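Along similar lines, and again assuming the binary is called "mutex" as in the one-liner above, you could get a feel for how long threads spend parked with something like:

pfexec /usr/sbin/dtrace -n '
syscall::lwp_park:entry /execname == "mutex"/ { self->t = timestamp; }
syscall::lwp_park:return /self->t/ {
	@["ns parked"] = quantize(timestamp - self->t); self->t = 0;
}'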

Wednesday Jan 28, 2009

Cardiff System Administration Mash up

A big thank you to Clive for arranging, and to Cardiff University for hosting, the System Admin Mash up today. I was particularly pleased to have 100% of the Cardiff OpenSolaris User Group present and to hear of people exploring COMSTAR on Thumpers to produce high-performance OpenStorage at a very affordable price.

Lewis gave an excellent overview of the VSCAN & CIFS server with some brave demonstrations using VirtualBox to host servers and clients. If you can persuade him to repeat it then do so.

Good though that was, for me it was not the highlight of the day. That goes to the Gwent Police Crime Forensics Unit, who gave an excellent and informative presentation about the challenges and successes of investigating issues around computer forensics. As storage devices get bigger, the problems can only increase.

Tuesday Jan 27, 2009

New version of scsi.d required for build 106

This version supports some more filters. Specifically you can now specify these new options:

  • MIN_BLOCK only report on IO to blocks less than or equal to this value.

  • MAX_BLOCK only report on IO to blocks greater or equal to this value.

This is most useful for limiting your trace to particular block ranges, be they a file system or, as in the case that caused me to add this, the disk label, so you can see who is trampling on it.

In this contrived example it was format:

pfexec /usr/sbin/dtrace -o /tmp/dt.$$ -Cs  scsi.d -D MAX_BLOCK=3 
<SNIP>
00058.529467684 glm0:-> 0x0a  WRITE(6) address 01:00, lba 0x000000, len 0x000001
, control 0x00 timeout 60 CDBP 60012635560 1 format(3985) cdb(6) 0a0000000100
00058.542945891 glm0:<- 0x0a  WRITE(6) address 01:00, lba 0x000000, len 0x000001
, control 0x00 timeout 60 CDBP 60012635560, reason 0x0 (COMPLETED) pkt_state 0x1
f state 0x0 Success Time 13604us

While this answered my question, there are neater ways of answering it just by using the IO provider:

: s4u-nv-gmp03.eu TS 68 $; pfexec /usr/sbin/dtrace -n 'io:::start / args[0]->b_blkno < 3 && args[0]->b_flags & B_WRITE / { printf("%s %s %d %d", execname, args[1]->dev_statname, args[0]->b_blkno, args[0]->b_bcount) }'
dtrace: description 'io:::start ' matched 6 probes
CPU     ID                    FUNCTION:NAME
  0    629             default_physio:start format sd0 0 512
  0    629             default_physio:start format sd0 0 512
  0    629             default_physio:start format sd0 0 512
  0    629             default_physio:start format sd0 0 512
  0    629             default_physio:start format sd0 0 512
  0    629             default_physio:start format sd0 0 512

Also, build 106 of Nevada has changed the structure definition for scsi_address, and in doing so breaks scsi.d, which has intimate knowledge of scsi_address structures. I have a solution that you can download, but in writing it I also filed this bug:

679803 dtrace suffers from macro recursion when including scsi_address.h

which scsi.d has to work around. When that bug is resolved the work around may have to be revisited.

All versions of scsi.d are available here and this specific version, 1.16, here.

Thank you to Artem Kachitchkine for bringing the changes to scsi_address.h and their effects on scsi.d to my attention.

About

This is the old blog of Chris Gerhard. It has mostly moved to http://chrisgerhard.wordpress.com
