Monday Sep 29, 2008

Finding that missing box

Reinstalling with the graphical install option appears to have configured X for me; I'm now able to log in on the headed console. I'll fix that later.

Now to reinstall everything.

Note that I kept the second zpool, so it was easy to import it and quicker to reinstall my stuff.

And sweet, the Sun Ray software just installs and runs! The crowds go wild!

Hmm, the fonts for the icons on the screen look sharp. Those in my terminal window look like they are from the 80s. Yuck!


Originally posted on Kool Aid Served Daily
Copyright (C) 2008, Kool Aid Served Daily

One of our boxes is missing

So my w2100z is fully installed, but not fully functional. I learned some valuable lessons along the way, and I'm not yet done. Things brought back to the surface:

  • Don't forget to edit your menu.lst if your system is headless. I did it for the install media; I should have done it for the installed system (see the sketch after this list).
  • If you break the root mirror, that can leave your menu.lst pointing to the wrong side of the pool. I.e., I expect there is a copy of the boot targets for each side of the mirror. Combine these two lessons and you have a system that looks dead.
  • Just because you think the Sun Ray Server install code rocks and it has always been easy, don't forget to back up your settings.
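For reference, the sort of grub entry I mean looks roughly like this (a sketch; the findroot value depends on the install, so yours will differ):

title Solaris (serial console)
findroot pool_rpool
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B console=ttya
module$ /platform/i86pc/$ISADIR/boot_archive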

And the big one: sometimes it is better to go to bed than to futz with things just a bit longer.

I've tried both Sun Ray Software 4 Update 3 Beta for Solaris 10 5/08 X86 and Sun Ray Software 4 09/07. In both cases I've got the 26D error. I know that the firmware on the DTU is being changed between releases and the server software knows about the DTU.

I've tried the following things to get this working:

And I left it there with the last one. I'm about to restart. I think that perhaps fixing my menu.lst has caused this issue or I have to face the fact that I upgraded to too modern a build. But since I never had this problem before and I don't have my hands on the prior configurations, I'll try to get it working.

Strike part of that: I do have /etc/dt/config archived off, and it shows I never did the Xservers thing:

[th199096@warlock config]> ls -la
total 26
drwxr-xr-x   2 root     other          6 Sep 25 12:41 .
drwxr-xr-x   3 root     other          3 Mar 29  2008 ..
-r--r--r--   1 root     sys         1577 Aug  1  2007 README.SUNWut
lrwxrwxrwx   1 root     root          34 Sep 29 02:58 Xconfig -> /tmp/SUNWut/config/xconfig/Xconfig
-r--r--r--   1 root     root        5868 Mar 29  2008 Xconfig.SUNWut.prototype
lrwxrwxrwx   1 root     root          35 Sep 29 02:58 Xservers -> /tmp/SUNWut/config/xconfig/Xservers

Okay, I fixed that back out and I told grub not to boot to the console. But I didn't tell eeprom(1M):

[root@warlock ~]> eeprom
ata-dma-enabled=1
atapi-cd-dma-enabled=1
ttyb-rts-dtr-off=false
ttyb-ignore-cd=true
ttya-rts-dtr-off=false
ttya-ignore-cd=true
ttyb-mode=9600,8,n,1,-
ttya-mode=9600,8,n,1,-
lba-access-ok=1
prealloc-chunk-size=0x2000
keyboard-layout=US-English
console=ttya
boot-file=bootadm: kernel command on line 64 not recognized.
boot-args=bootadm: kernel command on line 64 not recognized.
[root@warlock ~]> eeprom console=text
[root@warlock ~]>

By the way, if this is horked, so am I. :->

Okay, I'm horked. I have to come up in failsafe mode. Now how do I fix my eeprom? Luckily (is it luck?), I've had to do this in the past - eeprom hosed on an x86 - and that has the sed command I will need, because I refuse to learn how to configure my terminal! And I've saved above what the real value should be!

# pwd
/a/boot/solaris
# sed 's/text/ttya/' bootenv.rc > xxx
# diff bootenv.rc xxx
39c39
< setprop console 'text'
---
> setprop console 'ttya'
# cp xxx bootenv.rc
# reboot
Creating boot_archive for /a
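Had the box still been bootable, I believe the same fix is a one-liner, since eeprom(1M) on x86 just edits bootenv.rc behind the scenes:

[root@warlock ~]> eeprom console=ttya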

So I'm not getting X on the headed headless server (i.e., I've attached a monitor). I get output there until the OS takes over.

What is in my /var/dt/Xerrors?

Fatal server error:
could not open default font 'fixed'
XIO:  fatal IO error 146 (Connection refused) on X server ":2.0"\^M
      after 0 requests (0 known processed) with 0 events remaining.\^M
failed to set default font path '/usr/openwin/lib/X11/fonts/Type1/,/usr/openwin/lib/X11/fonts/Type1/sun/,/usr/openwin/lib/X11/fonts/F3bitmaps/,/usr/openwin/lib/X11/fonts/Speedo/,/usr/openwin/lib/X11/fonts/misc/,/usr/openwin/lib/X11/fonts/75dpi/,/usr/openwin/lib/X11/fonts/100dpi/,/usr/openwin/lib/X11/fonts/TrueType'
One of the directories in the list above does not exist
or it does not contain a valid 'fonts.dir' file

Okay, let's take care of that! All of them existed and none had a valid 'fonts.dir' file.
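Regenerating them is quick; roughly this (a sketch, not necessarily the exact commands I ran):

# cd /usr/openwin/lib/X11/fonts
# for d in */; do (cd $d && /usr/openwin/bin/mkfontdir); done

And now: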

Fatal server error:
could not open default font 'fixed'
XIO:  fatal IO error 146 (Connection refused) on X server ":2.0"\^M
      after 0 requests (0 known processed) with 0 events remaining.\^M

I'm really coming to suspect X is the thing horked on this system.

Notes

It looks like something, perhaps eeprom, touched my menu.lst and added a new default entry for me:

title Diagnostic Partition
        rootnoverify (hd0,0)
        chainloader +1
#---------- ADDED BY BOOTADM - DO NOT EDIT ----------
title Solaris bootenv rc
findroot pool_rpool
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B console=ttya bootadm: kernel command on line 64 not recognized.
-B bootadm: kernel command on line 64 not recognized.
module$ /platform/i86pc/$ISADIR/boot_archive
#---------------------END BOOTADM--------------------
#BOOTADM RC SAVED DEFAULT: 0

Which yields:

krtld: Unused kernel arguments: `bootadm: kernel command on line 64 not recognized.'.
SunOS Release 5.11 Version snv_99 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
NOTICE: mount: not a UFS magic number (0x0)
Cannot mount root on /ramdisk:a fstype ufs

panic[cpu0]/thread=fffffffffbc293a0: vfs_mountroot: cannot mount root

fffffffffbc48dc0 genunix:vfs_mountroot+356 ()
fffffffffbc48df0 genunix:main+e6 ()
fffffffffbc48e00 unix:_locore_start+92 ()

skipping system dump - no dump device configured
SunOS Release 5.11 Version snv_99 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: warlock
Reading ZFS config: done.
Mounting ZFS filesystems: (8/8)

I've got the head on it, so I'm going to reinstall and see if I can at least get X working on it.


Originally posted on Kool Aid Served Daily
Copyright (C) 2008, Kool Aid Served Daily

Sunday Sep 28, 2008

First reboot after install of w2100z

Okay, so I got this configuration:

# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool    68G  7.18G  60.8G    10%  ONLINE  -
# zpool iostat -v
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
rpool         7.18G  60.8G     31     14   814K   528K
  mirror      7.18G  60.8G     31     14   814K   528K
    c1t0d0s0      -      -     12      8   509K   530K
    c1t1d0s0      -      -     13      8   510K   530K
------------  -----  -----  -----  -----  -----  -----

But I don't want a mirror, I want space!

This should work, but it doesn't:

# zpool detach rpool c1t1d0s0
# zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       7.18G  60.8G      8      6   367K   383K
  c1t0d0s0  7.18G  60.8G      8      6   367K   383K
----------  -----  -----  -----  -----  -----  -----

# zpool add rpool c1t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t1d0s0 overlaps with /dev/dsk/c1t1d0s2
# zpool add -f rpool c1t1d0s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs

Ahh, I should have done some light reading, from ZFS Troubleshooting Guide:

You cannot use a RAID-Z configuration for a root pool. Only single-disk pools or pools with mirrored disks are supported.

I was thinking of reinstalling, but no, I'll go with two different pools. By the way, I understand the need for redundancy, but I'd prefer more spindles here.

# zpool create tank c1t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t1d0s0 overlaps with /dev/dsk/c1t1d0s2
# zpool create -f tank c1t1d0s0
# zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       7.18G  60.8G      5      4   246K   255K
  c1t0d0s0  7.18G  60.8G      5      4   246K   255K
----------  -----  -----  -----  -----  -----  -----
tank        73.5K  68.0G      0      9  18.3K   165K
  c1t1d0s0  73.5K  68.0G      0      9  18.3K   165K
----------  -----  -----  -----  -----  -----  -----

Originally posted on Kool Aid Served Daily
Copyright (C) 2008, Kool Aid Served Daily

Time to update my w2100z

When I last configured my w2100z, it wasn't possible to have a ZFS root. And I did some funky stuff playing around with it. My current configuration (I have 2 drives, which I think should be 72G):

       0. c1t0d0 
          /pci@5,0/pci1022,7450@4/pci108e,534d@4,1/sd@0,0
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm     524 - 3134       20.00GB    (2611/0/0)  41945715
  1       swap    wu       1 -  523        4.01GB    (523/0/0)    8401995
  2     backup    wm       0 - 8913       68.28GB    (8914/0/0) 143203410
  3 unassigned    wm    3135 - 5745       20.00GB    (2611/0/0)  41945715
  4 unassigned    wm    5746 - 8356       20.00GB    (2611/0/0)  41945715
  5 unassigned    wm       0               0         (0/0/0)            0
  6 unassigned    wm       0               0         (0/0/0)            0
  7       home    wm    8357 - 8913        4.27GB    (557/0/0)    8948205
  8       boot    wu       0 -    0        7.84MB    (1/0/0)        16065
  9 unassigned    wm       0               0         (0/0/0)            0
       1. c1t1d0 
          /pci@5,0/pci1022,7450@4/pci108e,534d@4,1/sd@1,0
Part      Tag    Flag     Cylinders        Size            Blocks
  0      stand    wm       1 - 4466       34.21GB    (4466/0/0)  71746290
  1      stand    wm    4467 - 8932       34.21GB    (4466/0/0)  71746290
  2     backup    wu       0 - 8932       68.43GB    (8933/0/0) 143508645
  3 unassigned    wm       0               0         (0/0/0)            0
  4 unassigned    wm       0               0         (0/0/0)            0
  5 unassigned    wm       0               0         (0/0/0)            0
  6 unassigned    wm       0               0         (0/0/0)            0
  7 unassigned    wm       0               0         (0/0/0)            0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)        16065
  9 unassigned    wm       0               0         (0/0/0)            0

I've shamelessly munged together output from different format commands. Anyway, the first drive has several partitions available for Live Upgrade or for grabbing in case of need. The second drive has two partitions used for ZFS.

This configuration is very flexible for doing updates. I can have several boot partitions on the root drive and I never have to worry about the data on my ZFS pool:

[root@warlock snv99]> zpool list zoo
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zoo     68G  51.5G  16.5G    75%  ONLINE  -
[root@warlock snv99]> zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zoo         51.5G  16.5G      0      1  21.9K  60.8K
  c1t1d0s0  33.6G   381M      0      0  7.31K  12.0K
  c1t1d0s1  17.9G  16.1G      0      1  14.6K  48.8K
----------  -----  -----  -----  -----  -----  -----
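The Live Upgrade side of that flexibility is roughly this (a sketch; the boot environment name is made up and slice 3 is one of the spare 20GB slices in the layout above):

[root@warlock snv99]> lucreate -n newBE -m /:/dev/dsk/c1t0d0s3:ufs
[root@warlock snv99]> luactivate newBE
[root@warlock snv99]> init 6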

But I think I want to live more on the edge. I'm looking to get a more modern build on warlock:

[root@warlock snv99]> uname -a
SunOS warlock 5.11 snv_85 i86pc i386 i86pc

So, I'm going to back everything up onto an attached USB drive, and nuke the entire system.

Back in a bit

Since warlock is headless, the first task is to build an install DVD which has a modified menu.lst for grub - see Getting a Solaris bootable DVD for headless x86es.

While I'm doing that, I'm going to back up my system. I need the contents of /etc, my punchin configuration (a Sun VPN tool), my Sun Ray server configuration, and my homedirs. The rest I either couldn't care less about or already have saved off.
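The mechanics are nothing fancy; something along these lines (a sketch - /backup stands in for wherever the USB drive mounts, and utpreserve is the Sun Ray tool that archives the server configuration):

[root@warlock ~]> cd /; tar cf /backup/etc.tar etc
[root@warlock ~]> /opt/SUNWut/sbin/utpreserve

The homedirs and the punchin config get the same tar treatment.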

Also, I'm pretty ruthless: once I decide I don't need something, I delete it. That gives me a better idea of how much I still have to back up. And no, I'm not talking about system stuff. Take for example here, where I delete some ISO images:

[th199096@warlock isos]> df -h .
Filesystem             size   used  avail capacity  Mounted on
zoo/isos                67G    29G    16G    65%    /zoo/isos
[th199096@warlock x86]> rm -rf snv7* snv8* snv90/ snv97
[th199096@warlock x86]> df -h .
Filesystem             size   used  avail capacity  Mounted on
zoo/isos                67G    12G    33G    27%    /zoo/isos

You may not be comfortable with this approach, but once you reinstall it is gone anyway.

Everything is cleaned out and the ISO is booting in a VirtualBox on my WinXP desktop, so I'm signing off here....


Originally posted on Kool Aid Served Daily
Copyright (C) 2008, Kool Aid Served Daily

Friday Sep 26, 2008

Moving a ZFS filesystem

In a past life, I did the code for moving files/directories across a qtree in WAFL. The mv used to degrade to a copy because of the effective change in the fsid. The algorithm I used to combat this was to copy directories and move files. I had to copy the directories because we needed a source and destination target for the renames. And this would not leave us in an inconsistent state.

So I was all prepared for pain and hoops when I wanted to rename a ZFS filesystem on a test box. And I am happy to be dismayed that it did not happen:

[root@jhereg ~]> zfs create pool/builds
[root@jhereg ~]> zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
pool                          59.9G  74.0G    22K  /builds
pool/builds                     18K  74.0G    18K  /builds/builds
pool/jasmith                  22.5G  74.0G   866K  /builds/jasmith
pool/jasmith/nfs41-instp       773M  74.0G   731M  /builds/jasmith/nfs41-instp
pool/jasmith/nfs41-instp-bld  11.4G  74.0G  11.7G  /builds/jasmith/nfs41-instp-bld
pool/jasmith/nfs41-open        767M  74.0G   712M  /builds/jasmith/nfs41-open
pool/jasmith/nfs41-open-bld   9.62G  74.0G  10.3G  /builds/jasmith/nfs41-open-bld
pool/webaker                  37.4G  74.0G  17.5G  /builds/webaker
pool/webaker/vbox0            7.46G  74.0G  7.46G  /builds/webaker/vbox0
pool/webaker/vbox_ds1         1.48G  74.0G  7.82G  /builds/webaker/vbox_ds1
pool/webaker/vbox_ds2           17K  74.0G    18K  /builds/webaker/vbox_ds2
pool/webaker/vbox_ds3          118M  74.0G  7.77G  /builds/webaker/vbox_ds3
pool/webaker/vbox_master      7.77G  74.0G  7.77G  /builds/webaker/vbox_master
pool/webaker/vbox_mds         3.07G  74.0G  8.99G  /builds/webaker/vbox_mds
[root@jhereg ~]> zfs rename pool/jasmith pool/builds/jasmith
[root@jhereg ~]> zfs list
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
pool                                 59.9G  74.0G    23K  /builds
pool/builds                          22.5G  74.0G    18K  /builds/builds
pool/builds/jasmith                  22.5G  74.0G   866K  /builds/builds/jasmith
pool/builds/jasmith/nfs41-instp       773M  74.0G   731M  /builds/builds/jasmith/nfs41-instp
pool/builds/jasmith/nfs41-instp-bld  11.4G  74.0G  11.7G  /builds/builds/jasmith/nfs41-instp-bld
pool/builds/jasmith/nfs41-open        767M  74.0G   712M  /builds/builds/jasmith/nfs41-open
pool/builds/jasmith/nfs41-open-bld   9.62G  74.0G  10.3G  /builds/builds/jasmith/nfs41-open-bld
pool/webaker                         37.4G  74.0G  17.5G  /builds/webaker
pool/webaker/vbox0                   7.46G  74.0G  7.46G  /builds/webaker/vbox0
pool/webaker/vbox_ds1                1.48G  74.0G  7.82G  /builds/webaker/vbox_ds1
pool/webaker/vbox_ds2                  17K  74.0G    18K  /builds/webaker/vbox_ds2
pool/webaker/vbox_ds3                 118M  74.0G  7.77G  /builds/webaker/vbox_ds3
pool/webaker/vbox_master             7.77G  74.0G  7.77G  /builds/webaker/vbox_master
pool/webaker/vbox_mds                3.07G  74.0G  8.99G  /builds/webaker/vbox_mds

In all fairness to WAFL, I expect that with their virtual volumes, this type of operation is just as painless. The FSID of the filesystem is not really changing - just the point at which it is mounted in the name space. I.e., the operation is on the filesystem and not the individual files. We don't have to recurse to change the inodes for each file.

Back to the great ZFS discussion. What I wanted to do was push the name space down for the existing sub-filesystems and yet keep the existing name space. I.e., I don't want pool mounted on /builds, but I still want to see /builds/jasmith. I can finish this off thusly:

[root@jhereg ~]> zfs rename pool/webaker pool/builds/webaker
[root@jhereg ~]> zfs list pool
NAME   USED  AVAIL  REFER  MOUNTPOINT
pool  59.9G  74.0G    23K  /builds
[root@jhereg ~]> zfs set mountpoint=/pool pool
[root@jhereg ~]> zfs set mountpoint=/builds pool/builds
[root@jhereg ~]> zfs list
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
pool                                 59.9G  74.0G    21K  /pool
pool/builds                          59.9G  74.0G    21K  /builds
pool/builds/jasmith                  22.5G  74.0G   866K  /builds/jasmith
pool/builds/jasmith/nfs41-instp       773M  74.0G   731M  /builds/jasmith/nfs41-instp
pool/builds/jasmith/nfs41-instp-bld  11.4G  74.0G  11.7G  /builds/jasmith/nfs41-instp-bld
pool/builds/jasmith/nfs41-open        767M  74.0G   712M  /builds/jasmith/nfs41-open
pool/builds/jasmith/nfs41-open-bld   9.62G  74.0G  10.3G  /builds/jasmith/nfs41-open-bld
pool/builds/webaker                  37.4G  74.0G  17.5G  /builds/webaker
pool/builds/webaker/vbox0            7.46G  74.0G  7.46G  /builds/webaker/vbox0
pool/builds/webaker/vbox_ds1         1.48G  74.0G  7.82G  /builds/webaker/vbox_ds1
pool/builds/webaker/vbox_ds2           17K  74.0G    18K  /builds/webaker/vbox_ds2
pool/builds/webaker/vbox_ds3          118M  74.0G  7.77G  /builds/webaker/vbox_ds3
pool/builds/webaker/vbox_master      7.77G  74.0G  7.77G  /builds/webaker/vbox_master
pool/builds/webaker/vbox_mds         3.07G  74.0G  8.99G  /builds/webaker/vbox_mds

And now I can construct other high level filesystems here:

[root@jhereg ~]> zfs create pool/home
[root@jhereg ~]> zfs set mountpoint=/export/home pool/home
[root@jhereg ~]> zfs set sharenfs=rw pool/home
[root@jhereg ~]> zfs create pool/home/tdh
[root@jhereg ~]> share | grep tdh
-@pool/home     /export/home/tdh   rw   ""  

Note that I set up the properties I want on pool/home and count on inheritance to make sure new filesystems under it have the properties I care about.
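A quick sanity check on the inheritance (the SOURCE column should say the property was inherited from pool/home rather than set locally):

[root@jhereg ~]> zfs get sharenfs pool/home/tdh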


Originally posted on Kool Aid Served Daily
Copyright (C) 2008, Kool Aid Served Daily

Thursday Dec 06, 2007

ZFS disks, VMWare, and WinXP disk management

In VMWare and 4 physical hard drives I talked about how I couldn't get VMWare Server to use the 4 physical drives I had my ZFS pool on. I ended up archiving the contents (which was mainly blowing away duplicate copies or previous archives to get to the core of what I needed) and recreating the pool on 2 drives.

I wanted to clone my build 77 Solaris image, but I didn't have enough space. Since I had a couple of extra disks sitting there in my system, I thought I would go ahead and move my virtual machines to all of that free space. And there the problems started.

If we look at the Disk Management snapin under WinXP, we can first see that VMWare must be capping the physical drives at 128G:

Not shown

Okay, I can live with that for right now, but why can't I right-click on either of the two free drives and do anything? Anything at all!

If you look carefully, you will see that the disks are labeled as 'Healthy (GPT Protective Partition)'. And that is what is keeping me from doing anything with them. Microsoft has this to say about it in their Windows and GPT FAQ. And Wikipedia defines GPT as GUID Partition Table. But I found both of these much later. And I only just now found this Microsoft TechNet note: Change a GUID partition table disk into a master boot record disk.

In short, the Disk Management snapin is not going to be able to do anything with these GPT disks.

I tried to use QTParted off of a Knoppix live disk to fix the problem, but Knoppix refused to see the disks. So I booted into a nevada b77 dvd and selected a single user shell. I then used fdisk to blow away the partitions on the two disks. Finally, I rebooted and the Disk Management snapin could manipulate the disks:

Not shown
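For the record, the fdisk step from the single user shell was something like this (a sketch from memory; the device names are examples, and -B writes a default whole-disk Solaris partition table over whatever was there):

# fdisk -B /dev/rdsk/c2t0d0p0
# fdisk -B /dev/rdsk/c2t1d0p0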

I then started off a copy of the VMWare simulators to the new disk area.


Originally posted on Kool Aid Served Daily
Copyright (C) 2007, Kool Aid Served Daily

Saturday Dec 16, 2006

Recovering a zpool from a prior installation

I did a complete reinstall of Nevada on my x86 box today. I had just added another hard drive and made a mirrored zfs system on it. I had a backup of my homedir on it, but I also had that stashed elsewhere. So I didn't mind if I lost it.

The install went okay; I zapped the first disk. I assumed I could easily find a way to recover the pool, and I was correct. It turns out I should have exported the pool first, but I was still able to get my data.

# zpool import
  pool: zoo
    id: 7078700821919819682
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
config:

        zoo           ONLINE
          mirror      ONLINE
            c1t1d0s0  ONLINE
            c1t1d0s1  ONLINE
# zpool import zoo
cannot import 'zoo': pool may be in use from other system
use '-f' to import anyway
# zpool import -f zoo

If I had done a zpool export zoo, I wouldn't have needed to use the -f flag. Does it have my data?

# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
zoo                      34G   6.04G   28.0G    17%  ONLINE     -
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
zoo                6.04G  27.4G  24.5K  /zoo
zoo/home           6.04G  27.4G  30.5K  /export/zfs
zoo/home/kanigix   24.5K  27.4G  24.5K  /export/zfs/kanigix
zoo/home/loghyr    24.5K  27.4G  24.5K  /export/zfs/loghyr
zoo/home/mrx       24.5K  27.4G  24.5K  /export/zfs/mrx
zoo/home/tdh       24.5K  27.4G  24.5K  /export/zfs/tdh

Sure looks like it.


Originally posted on Kool Aid Served Daily
Copyright (C) 2006, Kool Aid Served Daily