Tuesday Nov 24, 2009

Clear up those temporary files

One of my (many) pet peeves is shell scripts that fail to delete the temporary files they use. Included in this pet peeve are shell scripts that create more temporary files than they absolutely need; in most cases the right number is zero, but there are a few cases where you really do need a temporary file. If it is temporary, make sure you always delete it.

The trick here is to use the EXIT trap handler to delete the file. That way, if your script is killed (unless it is killed with SIGKILL) it will still clean up. Since you will be using mktemp(1) to create your temporary file, and you want to minimize the window in which the file could be left around, you need to do this (korn shell):

trap '${TMPFILE:+rm ${TMPFILE}}' EXIT

TMPFILE=$(mktemp /tmp/${0##*/}.temp.XXXXXX)

If further down the script you delete or rename the file, all you have to do is unset TMPFILE, e.g.:

mv $TMPFILE /etc/foo && unset TMPFILE
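Putting the pattern together, here is a complete runnable sketch; the subshell is just my illustration so the cleanup can be observed, and the file name is arbitrary:

```shell
# Set the EXIT trap *before* creating the file, so that a signal
# arriving between the two steps cannot leak the temporary file.
tmpname=$(
        trap '${TMPFILE:+rm -f "$TMPFILE"}' EXIT
        TMPFILE=$(mktemp /tmp/cleanup-demo.XXXXXX) || exit 1
        echo "$TMPFILE"        # report the name to the parent shell
        # ... real work with "$TMPFILE" would go here ...
)
# The subshell has exited, so its EXIT trap has already run:
ls "$tmpname" 2>/dev/null || echo "temporary file cleaned up"
```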

Tuesday Aug 04, 2009

Making a simple script faster

Many databases get backed up by simply stopping the database, copying all the data files, and then restarting the database. This is fine for things that don't require 24-hour access. However, if you are concerned about the time the backup takes, then don't do this:

stop_database
cp /data/file1.db .
gzip file1.db
cp /data/file2.db .
gzip file2.db
start_database

Now there are many ways to improve this, with ZFS snapshots being one of the best, but if you don't want to go there then at the very least stop doing the “cp”. It is completely pointless. The above should just be:

stop_database
gzip < /data/file1.db > file1.db.gz
gzip < /data/file2.db > file2.db.gz
start_database

You can make it faster still by backgrounding those gzips, if the system has spare capacity while the backup is running, but that is another point. Just dropping those extra copies will make life faster, as they are completely unnecessary.
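If you do background them, the shape of it is below; stop_database and start_database stay as placeholders, and the sample files here just stand in for the real data files:

```shell
# Illustrative stand-ins for the real database files:
mkdir -p /tmp/dbdemo
echo 'table one' > /tmp/dbdemo/file1.db
echo 'table two' > /tmp/dbdemo/file2.db

# stop_database would go here
gzip < /tmp/dbdemo/file1.db > /tmp/dbdemo/file1.db.gz &
gzip < /tmp/dbdemo/file2.db > /tmp/dbdemo/file2.db.gz &
wait    # do not restart the database until both compressions finish
# start_database would go here
```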

Friday Jun 05, 2009

Possibly the best shell programming mistake ever

A colleague, let's call him Lewis, just popped over with the most bizarrely behaving shell script I have seen.

The problem was that the script would hang while the automounter timed out an attempt to NFS mount a file system on the customer's system.

I narrowed it down to something in a shell function that looked like this:

# Make a copy even if the destination already exists.
safe_copy()
{
	typeset src="$1"
	typeset dst="$2"

	/* Nothing to copy */
	if [ ! -f $src ] ; then
		return
	fi

	if [ ! -h $src -a ! -h $dst -a ! -d $dst ] ; then
		cp -p $src $dst || exit 1
	fi
}

safe_copy was called with a file as $1 and a file as $2.

I laughed when I saw the problem. Funny how you can read something and miss such an obvious mistake!

Thankfully the script has quietly been fixed.

Tuesday Nov 25, 2008

Redirecting output to syslog

People are always asking this, and often when they are not, they should be. How do you redirect all the output from a script to syslog?

The obvious is:

my_script 2>&1 | logger -p local6.debug -t my_script

but how can you do that from within the script? Simple: put this at the top of your script:


#!/bin/ksh

logger -p daemon.notice -t ${0##*/}[$$] |&

exec >&p 2>&1


Clearly this is korn shell specific, but then who still writes bourne shell scripts? If your script was called redirect you get messages logged thus:

Nov 25 17:40:41 enoexec redirect[17449]: [ID 702911 daemon.notice] bar

Tuesday Mar 11, 2008

zone copy, aka zcp.

After messing around with zones for a few minutes it became clear that it would be really useful if there were a zcp command that worked just like scp(1) but used zlogin as the transport rather than ssh: for those cases when you are root and don't want to mess with ssh authorizations, since you know you can zlogin without a password anyway.

Specifically I wanted to be able to do:

# zcp  /etc/resolv.conf bookable-129-156-208-37.uk:/etc

Well, it turns out that this is really easy to do. The trick is to let scp(1) do the heavy lifting for you and use zlogin(1) as your transport. So I knocked together this script. You need to install it on your path as “zcp” and then make a hard link to it in the same directory called “zsh”. For example:

# /usr/sfw/bin/wget --quiet http://blogs.sun.com/chrisg/resource/zcp.sh
# cp zcp.sh /usr/local/bin/zcp 
# ln /usr/local/bin/zcp /usr/local/bin/zsh
# chmod 755  /usr/local/bin/zsh

Now the glorious simplicity of zcp; I'll even throw in recursive copy for free:

# zcp -r /etc/inet bookable-129-156-208-37.uk:/tmp
ipqosconf.1.sample   100% |*****************************|  2503       00:00
config.sample        100% |*****************************|  3204       00:00
wanboot.conf.sample  100% |*****************************|  3312       00:00
hosts                100% |*****************************|   286       00:00
ipnodes              100% |*****************************|   286       00:00
netmasks             100% |*****************************|   384       00:00
networks             100% |*****************************|   372       00:00
inetd.conf           100% |*****************************|  1519       00:00
sock2path            100% |*****************************|   566       00:00
protocols            100% |*****************************|  1901       00:00
services             100% |*****************************|  4201       00:00
mipagent.conf-sample 100% |*****************************|  6274       00:00
mipagent.conf.fa-sam 100% |*****************************|  6232       00:00
mipagent.conf.ha-sam 100% |*****************************|  5378       00:00
ntp.client           100% |*****************************|   291       00:02
ntp.server           100% |*****************************|  2809       00:00
slp.conf.example     100% |*****************************|  5750       00:00
ntp.conf             100% |*****************************|   155       00:00
ntp.keys             100% |*****************************|   253       00:00
inetd.conf.orig      100% |*****************************|  6961       00:00
ntp.drift            100% |*****************************|     6       00:00
ipsecalgs            100% |*****************************|   920       00:00
ike.preshared        100% |*****************************|   308       00:00
ipseckeys.sample     100% |*****************************|   510       00:00
datemsk.ndpd         100% |*****************************|    22       00:00
ipsecinit.sample     100% |*****************************|  2380       00:00
ipaddrsel.conf       100% |*****************************|   545       00:00
inetd.conf.preupgrad 100% |*****************************|  6563       00:00
hosts.premerge       100% |*****************************|   112       00:00
ipnodes.premerge     100% |*****************************|    61       00:00
hosts.postmerge      100% |*****************************|   286       00:00
ipqosconf.2.sample   100% |*****************************|  3115       00:00
ipqosconf.3.sample   100% |*****************************|  1097       00:00
# 

I'll file an RFE for this to go into Solaris and update this entry when I have the number.

Update: The Bug ID is 6673792. The script now also supports zsync and zdist, although neither of those has been tested yet.

Friday Sep 07, 2007

PATH manipulation...

I just read this blog entry about reducing your path and realised there were two reasons I would not be doing that:

  1. Calling perl from a dot file would be one step too far.

  2. I have some korn shell functions for manipulating PATH variables that I have been using for years that amongst other things prevent duplicate entries in your path.

I realise I should share these with the world; however, since the SCCS history is dated 1996, when you all tell me that the scripts are not nice I will use the defence that they work and are very useful.

apath [ -p PATH_VARIABLE ] [-d|-f|-n] path ...
        Append the paths to the end of the PATH_VARIABLE. If a path is
        already in the PATH_VARIABLE then it is moved to the end of the
        variable.

ipath [ -p PATH_VARIABLE ] [-d|-f|-n] path ...
        Insert the paths at the beginning of the PATH_VARIABLE. If a
        path is already in the PATH_VARIABLE then it is moved to the
        beginning of the variable.

rpath [ -p PATH_VARIABLE ] [-d|-f|-n] path ...
        Delete the paths from the PATH_VARIABLE.

tpath [ -p PATH_VARIABLE ] path ...
        Test whether the path is in the PATH_VARIABLE.



All the scripts except tpath take the same arguments. -p lets you select the variable you are working on. -d, which is the default, says that this is a path of directories; -f says it is a path of files; and -n says it is a path of objects that don't exist in the file system (useful for NIS_PATH back when I was supporting NIS+).

To use them you simply save the script in a directory that is part of your FPATH. Then make hard links to it for all the names:


$ for i in rpath tpath ipath
> do
> ln apath $i
> done

and now you can use them. Here is a typical use from my .profile:


ipath /opt/SUNWspro/bin
ipath -p MANPATH /opt/SUNWspro/man


This adds /opt/SUNWspro/bin to my PATH and /opt/SUNWspro/man to my MANPATH. (Actually this is a fabrication, as I have a shell function like this to cover the common case:

function addpath
{
     typeset i
     for i in "$@"
     do
           if apath "$i"
           then
                 apath -p MANPATH "${i%/*}/man"
           fi
    done
}

so my MANPATH remains close to my PATH, but let's keep things simple.)


Not rocket science but really useful, even more so for interactive sessions when you want to manipulate your path.
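To give the flavour of it, here is a sketch of what an apath-style function might look like; this is my illustration rather than the 1996 script, it only handles -p, and the -d/-f/-n checks are omitted:

```shell
# Append entries to a PATH-style variable, moving any duplicate to the
# end rather than repeating it. Default variable is PATH; -p selects
# another one by name.
apath()
{
        typeset var=PATH entry cur new
        if [ "$1" = "-p" ]; then
                var=$2
                shift 2
        fi
        for entry in "$@"; do
                eval "cur=\$$var"
                # drop any existing occurrence of the entry ...
                new=$(printf '%s' ":$cur:" | sed "s|:$entry:|:|g")
                new=${new#:}; new=${new%:}
                # ... then append it at the end
                eval "$var=\${new:+\$new:}\$entry"
        done
}

MYPATH=/usr/bin:/usr/sbin
apath -p MYPATH /opt/SUNWspro/bin   # append a new entry
apath -p MYPATH /usr/bin            # move an existing entry to the end
echo "$MYPATH"
```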

Friday Jun 08, 2007

korn shell programming advice from David Korn

There has been some great stuff about shell scripts on the shell-discuss OpenSolaris list I'm lurking on (my cron changes were discussed there so I got added and have not removed myself).

Starting with this previously internal advice. I particularly like the c-shell advice, even if the words “for any scripts” seem superfluous.

Then last night David Korn posted some advice for shell scripts.

I did not even know you could do

$ > foo date

Now I do and also know it is to be avoided.

Thursday Jun 07, 2007

zfs_versions of a file

This post reminds me that I should have posted my zfs_versions script a while back. I did not, as it had a theoretical bug: if a new file was moved into place with the same age as the old file, it would not be seen as a new version. I've fixed that now, at the expense of some performance, and the script is called zfs_versions.

The script lists all the different versions of a file that live in the snapshots.

Here is the output:

: pearson FSS 124 $; /home/cjg/bin/sh/zfs_versions  ~/.profile 
/tank/fs/users/cjg/.zfs/snapshot/month_09/.profile
/tank/fs/users/cjg/.zfs/snapshot/smb2007-03-27-15:54/.profile
/tank/fs/users/cjg/.zfs/snapshot/minute_2007-06-06-17:30/.profile
: pearson FSS 125 $; 

Compare this to the number of snapshots:


: pearson FSS 128 $; ls -1 ~/.zfs/snapshot/*/.profile | wc -l
     705
: pearson FSS 129 $; 

So I have 705 snapshots that contain my .profile file but actually only three of them contain different versions.
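The real script is linked above; to give the flavour of the idea, here is a sketch with cksum standing in for the script's actual comparison, and with plain directories standing in for the snapshots:

```shell
# Walk the snapshots in order and print only those whose copy of the
# file differs from the last version seen.
versions_of()
{
        file=$1; shift
        prev=
        for snap in "$@"; do
                [ -f "$snap/$file" ] || continue
                sum=$(cksum < "$snap/$file")
                if [ "$sum" != "$prev" ]; then
                        echo "$snap/$file"
                        prev=$sum
                fi
        done
}

# demo: three "snapshots", two of which hold the same content
mkdir -p /tmp/snapdemo/s1 /tmp/snapdemo/s2 /tmp/snapdemo/s3
echo one > /tmp/snapdemo/s1/.profile
echo one > /tmp/snapdemo/s2/.profile
echo two > /tmp/snapdemo/s3/.profile
versions_of .profile /tmp/snapdemo/s1 /tmp/snapdemo/s2 /tmp/snapdemo/s3
```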

Update

The addition of the check to fix the theoretical bug slows down the script enough that the programmer in me could not let it lie. Hence I now have the same thing in TCL.

Update 2

See http://blogs.sun.com/chrisg/entry/new_wish_and_tickle

Wednesday Jun 07, 2006

Update to icheck.sh

I have updated my icheck.sh script so that it now finds fragments. If you ask it to look for a fragment it will find all the inodes that use the block containing that fragment.

# ~cg13442/lang/sh/icheck -d /dev/rdsk/c0t0d0s0  dd6d7 2> /tmp/err
dd6d7 is a fragment address. Rounding to block 16#dd6d0
inode 1ade4: file block: 16#0 device block: 16#dd6d7
inode 1adc8: file block: 16#0 device block: 16#dd6d0
# find / -xdev \( -inum $((16#1ade4)) -o -inum $(( 16#1adc8 )) \) -print
/usr/apache/htdocs/manual/mod/mod_speling.html.ja.jis
/etc/passwd
#

The script is here.


Monday Jun 05, 2006

Mapping disk blocks to UFS file blocks.

Ever since Solaris 2.0, people have been asking for a way to map from a block on a disk back to the file which contains that block. SunOS 4 and earlier had icheck(8), but that was never available in Solaris 2.0.

The answer that is usually given is short: “Use fsdb”. However, fsdb is slightly less than friendly, and in fact doing an exhaustive search of a file system for a particular block would be close to impossible by hand.

I was left thinking this must be scriptable. Since my particular issue was on Solaris 8, I had the added constraint that it had to be a shell script using one of the shells shipped with Solaris 8.

As an example of things you can do I have written a script that will drive fsdb and can be used to:

  1. Copy files out of unmounted file systems (with the caveat that they get padded to a whole number of blocks). I used this to test the script, comparing the source file and the target. I have left it in for amusement.

  2. Find which inode and offset contains a particular disk block (blocks get specified in hex):

    # icheck  -d  /dev/rdsk/c0d0s6  007e1590 007dd2a0 008bb6c0
    inode 5c94: file block: 16#80b device block: 16#007e1590
    inode 5c94: file block: 16#1ffff device block: 16#007dd2a0
    inode 5c94: file block: 16#7ffffff device block: 16#008bb6c0
    #

    This search can be directly limited to a single inode using the -i option and an inode number (in hex).

  3. Print the extents of a file. Again this is just mildly amusing but shows how well or badly UFS is doing laying out your files.

    # icheck -d /dev/rdsk/c0t0d0s0 -x -i 186e
    file off 0 dev off 6684 len 1279
    file off 1279 dev off 6683 len 1
    #


The user interface could do with being tidied up, but my original goal has been satisfied.


The script itself is not for those with a weak stomach as it works around some bugs and features in fsdb. The script is here if you wish to see the full horror.



Thursday Mar 02, 2006

Logging commands in korn shell

Yet another blast from the past, but I was asked for this again today.


How can you log every command typed into a korn shell session? Here is the cheap and dirty but surprisingly useful way that logs them all into syslog.


Type this into your shell and you can capture the command, its return code and the current working directory.

function dlog
{
        typeset -i stat=$?
        typeset x
        x=$(fc -ln -0)
        logger -p daemon.notice -t "ksh $LOGNAME $$" Status $stat PWD $PWD \'${x#	}\'
}
trap dlog DEBUG

(note that there is a tab after the # in “${x# }”)


You might want to use a different logging facility but that one gets it into /var/adm/messages:


Mar  2 14:44:15 estale ksh cg13442 497922: [ID 702911 daemon.notice] Status 0 PWD /home/cg13442 'ls'
Mar  2 14:44:18 estale ksh cg13442 497922: [ID 702911 daemon.notice] Status 1 PWD /home/cg13442 'false'
Mar  2 14:45:09 estale ksh cg13442 497922: [ID 702911 daemon.notice] Status 0 PWD /home/cg13442 'ls -la'

I had run ls, false and “ls -la”, which are dutifully logged.



Tuesday Jan 24, 2006

X11 forwarding 101

I got asked this today:


After I su to root how can I forward an X session over ssh?

This actually hits a huge bugbear of mine: people using the xhost command to open up the X server. That is bad, but if those same people also have root access, well, that is just the end. You don't need to open up all of X to get this to work. Here is the shell function I use to achieve this:


function xroot
{
        xauth extract ${1:-${TMPDIR:-/tmp}/.Xauthority} :${DISPLAY#*:} && \
        echo export DISPLAY=:${DISPLAY#*:}  && \
        echo export XAUTHORITY=${1:-${TMPDIR:-/tmp}/.Xauthority}
}

This assumes you are using MIT-MAGIC-COOKIE-1 authentication; I dabbled with SUN-RPC authentication but that requires a fully integrated name space. All the shell function does is use the xauth command to copy the record for the current display from my .Xauthority file into /tmp, and then echo the DISPLAY and XAUTHORITY variables so that they can easily be cut and pasted. It does this as typically my .Xauthority file is in an NFS-mounted home directory that root cannot access.


So here it is in action:

Sun Microsystems Inc.   SunOS 5.11      snv_30  October 2007
: estale.eu FSS 1 $; xroot
export DISPLAY=:30.0
export XAUTHORITY=/tmp/cg13442/636397/.Xauthority
: estale.eu FSS 2 $; su - kroot
Password:
Sun Microsystems Inc.   SunOS 5.11      snv_30  October 2007
estale <kroot> # export DISPLAY=:30.0
estale <kroot> # export XAUTHORITY=/tmp/cg13442/636397/.Xauthority
estale <kroot> # set -o vi
estale <kroot> # xterm -e sleep 10
estale <kroot> #


There is more that the shell function could do to verify that the file it chooses for the .Xauthority is safe, but I don't need that as I have TMPDIR set to a directory that no one else has access to.



Thursday Dec 15, 2005

Letting users create ZFS file systems

Darren has just posted his fast bringover script, which solves some of my desire to have a file system per workspace. I'm not commenting on the script, since it manages to trip one of my shell script peeves: calling a program and then calling exit $?. What is wrong with exec? I'll keep taking the tablets.

However it does not solve my wanting to let users create their own ZFS file systems below a file system that they own.

Like I said in the email, this can mostly be done via an RBAC script. Well, here it is:

#!/bin/ksh -p

PATH=/usr/bin:/usr/sbin

if [ "$_" != "/usr/bin/pfexec" -a -x /usr/bin/pfexec ]; then
        exec /usr/bin/pfexec $0 "$@"
fi

function get_owner
{
	ls -dln "${1:-${PARENT}}" | nawk '{ print $3 }'
}

function create_file_system
{
	typeset mpt name quota

	zfs list -H -t filesystem -o mountpoint,name,quota | \
		 while read mpt name quota
	do
		if [[ $mpt == $PARENT ]]
		then
			zfs create ${DIR#/} && chown $uid $DIR && \
				zfs set quota=${quota} ${DIR#/}
			exit $?
		fi
	done
	echo no zfs file system $PARENT >&2
	exit 1
}

function check_quota
{
	typeset -i count
	typeset mpt name
	count=0

	zfs list -H -t filesystem -o mountpoint,name | while read mpt name
	do
		if [[ $(get_owner $mpt) == $uid ]]
		then
			let count=count+1
		fi
	done
	echo $count
}

MAX_FILE_SYSTEMS_PER_USER=10

test -f /etc/default/zfs_user_create && . /etc/default/zfs_user_create

if [[ $# -ne 1 ]]
then
	echo "Usage: $0 filesystem" >&2
	exit 1
fi

DIR=$1
PARENT=${1%/*}

if ! [[ -d $PARENT ]]
then
	echo "$0: Failed to make directory \"$1\"; No such file or directory" >&2
	exit 1
fi

uid=$(id | sed -e s/uid=// -e 's/(.*//')
owner=$(get_owner $1)

if [[ $uid != $owner ]]
then
	echo "$0: $1 not owner" >&2
	exit 1
fi

if [[ $(check_quota) -ge ${MAX_FILE_SYSTEMS_PER_USER} ]]
then
	echo "too many file systems"
	exit 1
fi

create_file_system

It has a hack in it to limit the number of file systems a user can create, just to stop them being silly. Then you just need this line in /etc/security/exec_attr:


All:suser:cmd:::/usr/local/share/sh/zfs_create:euid=0

Now any user can create a file system under a file system they already own. The file systems don't share a single quota, which would be nice, but for my purposes this will do.


Next trick to let them destroy them and take snapshots of them. The snapshots being the real reason I want all of this.


Friday Nov 18, 2005

ZFS snapshot on boot

ZFS hits my Toshiba laptop and so I have moved the home directories onto a zfs file system.

The big advantage for me is that I can now have the system snapshot the filesystem each time it boots. First this script does the work:

#!/bin/ksh -p
date=$(date '+%F-%T')

for fs in $(zfs list -H -o name -t filesystem)
do
        zfs snapshot ${fs}@${date}
done

And this manifest gets it to be run:

<?xml version="1.0"?>

<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">

<service_bundle type='manifest' name='snapshot'>

<service
        name='system/filesystem/snapshot'
        type='service'
        version='1'>

        <create_default_instance enabled='true' />

        <dependency
                name='filesystem'
                grouping='require_all'
                restart_on='none'
                type='service'>
                <service_fmri value='svc:/system/filesystem/local' />
        </dependency>

        <exec_method
                type='method'
                name='start'
                exec='/mypool/root/snapshot'
                timeout_seconds='10' />

        <exec_method
                type='method'
                name='stop'
                exec=':true'
                timeout_seconds='3' />

        <property_group name='startd' type='framework'>
                <propval name='duration' type='astring' value='transient'/>
        </property_group>

</service>
</service_bundle>

Now import the manifest into smf:

 # svccfg import  snapshot.xml

Now I get a snapshot of the home directories every time I reboot. These are mounted under .zfs/snapshot, so users can pull back anything they need when they want to. Just what you want from a file system.


Thursday Sep 01, 2005

Sad shell programming problem

While I was away an email ended up in my mail box like this:

I have a question on scripting ...

I want a "timedwait" for a background process started from a shellscript, i.e. do something like:

while [ 1 ]; do
        start_bg_job &
        "timedwait $! 30" || kill -9 $?
        cleanup
done

i.e. wait for the background job termination - but no more than 30 secs, if it didn't finish then kill it hard.

There were a couple of replies, but none of them was a “pure” shell script; still, it was weeks ago so who cares. Well, it nagged at me last night as I cycled home. There has to be a way to do this from the korn shell, and as usual the answer came to me just outside Fairoaks Airport (that is, the answer came to me, as usual, while cycling home, rather than outside the Airport. If all the answers came to me outside the Airport I would be there a lot).


The trick (hack may be more appropriate) is to use a co-process, and to start both the command you wish to run and a sleep command in the background in a sub-shell that is the co-process. When each of the commands returns, it echoes information about what to do back to the parent process, which reads from the co-process and takes the appropriate action.


So I ended up with this shell function:


#!/bin/ksh -p

# run_n_wait “command” [timeout]
function run_n_wait
{
        typeset com command time pid arg

        command="$1"
        time=${2:-60}

        ( ( ( $command ) > /dev/null 2>&1 & \
                echo pid $! ; wait $! ;\
                echo Done $? ) & \
         (sleep $time ; echo kill ) & ) |&

         while read -p com arg
         do
                case $com in
                kill)  if [[ "${pid}" != "" ]]
                        then
                                kill ${pid} > /dev/null 2>&1
                                wait ${pid}
                        fi
                        return -1 ;;
                pid) pid=$arg ; arg="" ;;
                Done) return $arg ;;
                esac
        done
}


x=$SECONDS
run_n_wait "/bin/false" 3
echo Slept for $(($SECONDS - $x)) seconds ret $?
x=$SECONDS
run_n_wait "sleep 5 " 
echo Slept for $(($SECONDS - $x)) seconds ret $?
x=$SECONDS
run_n_wait "sleep 60" 3
echo Slept for $(($SECONDS - $x)) seconds ret $?
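For comparison, the same timed wait can be sketched without a co-process by using a separate watchdog process. This is my simplified stand-in rather than the function above, and it skips the careful cleanup:

```shell
# Run the command and a watchdog in the background; whichever finishes
# first decides the outcome.
run_with_timeout()
{
        timeout=$1; shift
        "$@" &
        cmd=$!
        ( sleep "$timeout"; kill "$cmd" 2>/dev/null ) &
        dog=$!
        wait "$cmd"
        status=$?
        kill "$dog" 2>/dev/null   # command finished; stop the watchdog
        return $status
}

run_with_timeout 5 true; echo "true returned $?"
run_with_timeout 1 sleep 30; echo "sleep 30 returned $?"
```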

Yes, there are lots of shells that could do this out of the box, but that was not the question. If you have a better way (korn shell or bourne shell only) let me know.



Saturday Jul 02, 2005

A new root shell.

Fed up with the bourne shell for root? All the power of root but with a proper shell; not csh, a proper shell! You can add a role with the korn shell, or any other shell, and then assign that role to the users you wish to have access to it. They still have to type the password of the role, but they get a sensible shell when they get it right; plus others don't even get the option.

Here is how, for a korn shell “root” account:

# roleadd -d /root -P "Primary Administrator" -s /usr/bin/pfksh kroot
# usermod -R root,kroot me
# passwd kroot
New Password:
Re-enter new Password:
passwd: password successfully changed for kroot
#

Now I have a role, kroot, to which only I can su(1M), and it has a decent shell. I can still use the root role if I want pain, and I have not changed root's shell, which is probably a good thing. Make sure /root already exists; it did for me, as it is root's home directory already.



Thursday May 05, 2005

Base Conversion

Someone in a chat room asked about converting from hex to decimal. The usual answers came up: use mdb or adb or dtcalc; oddly the vastly superior gnome-calculator did not get a mention. Anyway, my solution was declared “cool” by one person, so that is enough to get on the blog.

The solution is the korn shell. To convert from hex to decimal, do this:

echo $(( 16#4000 ))

which converts hex 4000 to decimal (16384).

Converting to hex is slightly harder:

typeset -i16 x
x=4000
echo $x

Which cries out for a shell function:


function convert_base
{
        typeset -i${2:-16} x
        x=$1
        echo $x
}

Put that in a file called convert_base in a directory on your FPATH, thus allowing:


: enoexec.eu FSS 12 $; convert_base 4000
16#fa0
: enoexec.eu FSS 13 $; convert_base 4000 2
2#111110100000
: enoexec.eu FSS 14 $; convert_base 16#4000 2
2#100000000000000
: enoexec.eu FSS 15 $; convert_base 16#4000 10
16384
: enoexec.eu FSS 16 $;
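As an aside, the typeset -iN output base is a ksh feature; where it is not available, printf(1) covers the two common directions, mirroring the examples above:

```shell
printf '%x\n' 4000      # decimal 4000 to hex: fa0
printf '%d\n' 0x4000    # hex 4000 to decimal: 16384
```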

Well I like it.

Tags: ksh

Friday Apr 01, 2005

grep piped into awk

This has been one of my bugbears for years and it comes up in email all the time. I blame it on the VAX I used years ago, which was so slow that this really mattered at the time, but now it is mostly just wrong. This is what I mean, grep piped into awk:

grep foo | awk '{ print $1 }'



Why? Because awk can do it all for you, saving a pipe and a fork and exec:

nawk '/foo/ { print $1 }' 



is exactly the same. I use nawk as it is just better. It gets worse when I see:

grep foo | grep -v bar | awk '{ print $1 }'

which should be:

nawk '/foo/ && ! /bar/ { print $1 }'
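You can convince yourself the two pipelines are equivalent with a few lines of sample input (awk here rather than nawk, since only Solaris ships nawk):

```shell
printf 'foo alpha\nbar beta\nfoo bar gamma\n' |
        grep foo | grep -v bar | awk '{ print $1 }'
printf 'foo alpha\nbar beta\nfoo bar gamma\n' |
        awk '/foo/ && ! /bar/ { print $1 }'
# both print just: foo
```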

Mostly these just come about when typed on the command line, but when I see them in scripts it just causes me to roll my eyes. They led me to come up with these short rules for pipes:

If your pipe line contains          Use
grep and awk                        nawk
grep and sed                        nawk
awk and sed                         nawk
awk, sed and grep                   nawk



Like the pirates' code, these are not so much rules as guidelines.

About

This is the old blog of Chris Gerhard. It has mostly moved to http://chrisgerhard.wordpress.com
