Saturday Feb 21, 2009

[Open]Solaris logfiles and ZFS root

This week I had reason to want to see how often the script that controls the access hours of Sun Ray users actually did any work, so I went off to look in the messages files, only to discover that there were only four of them and they only went back to January 11.

: pearson FSS 22 $; ls -l mess*
-rw-r--r--   1 root     root       12396 Feb  8 23:58 messages
-rw-r--r--   1 root     root      134777 Feb  8 02:59 messages.0
-rw-r--r--   1 root     root       53690 Feb  1 02:06 messages.1
-rw-r--r--   1 root     root      163116 Jan 25 02:01 messages.2
-rw-r--r--   1 root     root       83470 Jan 18 00:21 messages.3
: pearson FSS 23 $; head -1 messages.3
Jan 11 05:29:14 pearson pcplusmp: [ID 444295 kern.info] pcplusmp: ide (ata) instance #1 vector 0xf ioapic 0x2 intin 0xf is bound to cpu 1
: pearson FSS 24 $; 

I am certain that the choice of only four log files was not a conscious decision on my part, but it did make me ponder whether logfile management should be revisited in the light of ZFS root, since clearly, if you have snapshots firing, logs could go back a lot further:

: pearson FSS 40 $; head -1 $(ls -t /.zfs/snapshot/*/var/adm/message* | tail -1)
Dec 14 03:15:14 pearson time-slider-cleanup: [ID 702911 daemon.notice] No more daily snapshots left
: pearson FSS 41 $; 

It did not take long for this shell function to burst into life:

function search_log
{
        typeset path
        # turn a relative path into an absolute one
        if [[ ${2#/} == $2 ]]
        then
                path=${PWD}/$2
        else
                path=$2
        fi
        # $path is left unquoted so any escaped glob passed in expands here,
        # picking up the live file and every copy of it in the snapshots
        cat $path /.zfs/snapshot/*$path | egrep $1 | sort -M | uniq
}

Not a generalized solution, but one that works when your root file system contains all your logs and, if you remember to escape any globbing on the command line, will search all the log files:

: pearson FSS 46 $; search_log block /var/adm/messages\* | wc
      51     688    4759
: pearson FSS 47 $; 

There are two ways to view this. Either it is great that the logs are kept, so I have all this historical data, or it is a pain, as getting rid of log files becomes more of a chore. Indeed this is encouraging me to move all the logfiles into their own file systems so that the management of those logfiles is more granular.

At the very least it seems to me that OpenSolaris should sort out where its log files are going: stop the messages going in /var/adm and move them to /var/log, which should then be its own file system.
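
Were I to do that by hand, a rough sketch might look like this; the pool name rpool is an assumption, the old contents only get removed once the copy is verified, and the whole thing wants doing while nothing is writing to the logs:

# zfs create -o compression=on -o mountpoint=/var/log.new rpool/varlog
# cd /var/log && find . -xdev -depth -print | cpio -pvdm /var/log.new
# rm -rf /var/log/*
# zfs set mountpoint=/var/log rpool/varlog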

Tuesday Apr 18, 2006

ZFS root file system

As Tabriz has pointed out, you can now do “boot and switch” to get yourself a ZFS root file system, so to give this a bit of a workout I flipped our build system over to use it. It now has a compressed root file system.

: pod5.eu FSS 1 $; df -h /
Filesystem             size   used  avail capacity  Mounted on
tank/rootfs             19G   2.9G    16G    16%    /
: pod5.eu FSS 2 $;

The instructions Tabriz gives are slightly different if you are using live upgrade to keep those old UFS boot environments in sync. For a start, if the build 37 BE you use as the source of your new ZFS root file system is not the one you are currently booted from, then you don't have to do all the steps creating mount points and /devices.


So steps 6, 7 and 9 distil to:


  • lumount a build 37 archive on /a:

    lumount -n b37 /a

  • Copy that archive into /zfsroot:

    # cd /a
    # find . -xdev -depth -print | cpio -pvdm /zfsroot


You do have to take greater care when updating the boot archive, as it may not live on the currently booted boot environment, but apart from that it was a breeze. The system has been up for almost a week and I have a clone of a snapshot that is also bootable, just in case I mess up the original. Doing that was as simple as taking the clone and editing /etc/vfstab and /etc/system in it to reflect its new name, then building its boot archive.
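
Something along these lines would do it; the snapshot picked and the temporary mount point are illustrative, while the dataset names match the zfs list below. The edits to /etc/vfstab and /etc/system in the clone just swap tank/rootfs for tank/rootfs2:

# zfs clone tank/rootfs@dayfour tank/rootfs2
# zfs set mountpoint=legacy tank/rootfs2
# mount -F zfs tank/rootfs2 /mnt
# vi /mnt/etc/vfstab /mnt/etc/system
# bootadm update-archive -R /mnt
# umount /mnt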


# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
tank                  2.90G  15.8G     9K  /tank
tank/rootfs           2.90G  15.8G  2.89G  legacy
tank/rootfs@works     2.38M      -  2.77G  -
tank/rootfs@daytwo    1.71M      -  2.88G  -
tank/rootfs@daythree  1.89M      -  2.88G  -
tank/rootfs@dayfour    576K      -  2.89G  -
tank/rootfs2            51K  15.8G  2.88G  legacy
tank/scratch          98.5K  15.8G  98.5K  /tank/scratch
tank/scratch@x            0      -  98.5K  -
#

Whilst many of the features that a ZFS root file system enables are easy to predict, the beauty of it in action is something else.



Saturday Jul 02, 2005

A new root shell.

Fed up with the Bourne shell for root? All the power of root but with a proper shell, not csh, a proper shell! You can add a role with the Korn shell or any other shell and then assign that role to the users you wish to be able to access it. They still have to type the password of the role, but they get a sensible shell when they get it right, and others don't even get the option.

Here is how, for a Korn shell “root” account:

# roleadd -d /root -P "Primary Administrator" -s /usr/bin/pfksh kroot
# usermod -R root,kroot me
# passwd kroot
New Password:
Re-enter new Password:
passwd: password successfully changed for kroot
#

Now I have a role, kroot, to which only I can su(1M), and it has a decent shell. I can still use the root role if I want pain, and I have not changed root's shell, which is probably a good thing. Make sure /root already exists; it did for me, as it is already root's home directory.
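
By way of illustration, using it looks something like this; the prompts and output here are a sketch rather than a captured session:

$ roles
root,kroot
$ su kroot
Password:
$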



About

This is the old blog of Chris Gerhard. It has mostly moved to http://chrisgerhard.wordpress.com
