Tuesday Sep 25, 2007

Solaris Bootdisk Layout discussion

There's been a vibrant discussion on the sysadmin-discuss(at)opensolaris.org alias about boot disk layouts, and it was suggested that we try to come to some form of consensus and document it.

Sun has a bunch of installation standards called EIS (Enterprise Installation Standards), and one of those standards is the EIS Bootdisk Standard. As one of the contributors to that document, I found it quite interesting to see that the sysadmin discussion was reaching conclusions about how a boot disk should be laid out that were very similar to the ones we had come to internally.

In the interests of sharing and getting to a common standard, I've now posted the relevant bits of the EIS standard on the BigAdmin wiki, and encouraged the members of the sysadmin-discuss alias to comment and contribute.

If you're interested, you should have a look at it, and join the alias to participate in the discussion.


Monday Sep 03, 2007

CEC 2007 Here we come (again)

(I've reposted this so that Technorati picks up the tag we'll be using for this event.)

(Hint: tag using suncec2007 and we'll be able to aggregate all content using said tag.)


It's that time of year again. CEC2007 will be in Las Vegas this year. I've never been, and I'm half looking forward to it, but as I think I have an addictive personality, I'm a little scared of Vegas.

At the minimum, I'm going to leave my credit cards in the hotel room when I go out, so that I don't end up losing more than just my shirt at the poker tables. 

More importantly, I think the conference is going to be great. I've seen the list of some of the topics that will be presented.

The theme this year is "Shift Our Universe, Our World, Your Move", with the sub-themes being: Shift to red (all the groovy bleeding-edge stuff that GregP talks about), Shift to green (all the nice eco-friendly stuff we've been doing), and Shift to grow (which focuses on all the things we need to do to enable our growth, and our customers' as well).


We're also going to be giving out a bunch of prizes for participants and attendees of the conference. They'll be based mainly on the level and quality of participation in the sessions. Stay tuned for more details.

As per last year, I'll be blogging my perspectives of the conference again. 


Friday Aug 10, 2007

Coolstack: Not just for Coolthreads

As per a previous article, I upgraded my home server to Nevada build 69, because I was about to make some changes and wanted to be fully up to date before I did so.

I wanted to play with Confluence, which is the wiki that Sun has chosen to run its wiki site. Confluence can run in a number of ways and connect to a number of different databases. Since I hadn't yet played with Postgres, I decided I would use that, and I also decided to run it under Tomcat instead of as a standalone app.

Given that I'm a bit of a neophyte when it comes to talking native SQL, I decided that I needed to run a tool of some sort to configure Postgres. I've used phpMyAdmin for MySQL for a while now, and figured there MUST be an equivalent for Postgres. Using my amazing powers of deduction, I googled phppgadmin, and lo and behold there it was: phpPgAdmin. All I needed was a PHP-capable web server. Unfortunately the Apache 2 supplied with Nevada isn't PHP-ready, so I then had to decide: do I compile it myself, or do I find something easier? I vaguely remembered hearing something about CoolStack, which is "a collection of some of the most commonly used open source applications optimized for the Sun Solaris OS platform."

CoolStack 1.1 is available for both SPARC and x64, and contains pretty much everything I needed, including Tomcat. So a few downloads and pkgadds later, I've got Apache running, which means I can get phpPgAdmin running, which means I can perform my initial Postgres configuration, which means I can finally install Confluence under Tomcat, and everything's perfect. (Well, it wasn't perfectly perfect, because I needed to read a couple of READMEs and a bit of on-line documentation along the way, but it was fine.)
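The installs themselves were just ordinary pkgadds. Roughly like this, although the package file names below are from memory, so treat them as placeholders rather than gospel:

# bunzip2 CSKamp_1.1_x86.pkg.bz2
# bunzip2 CSKtomcat_1.1_x86.pkg.bz2
# pkgadd -d ./CSKamp_1.1_x86.pkg      (Apache 2, PHP and friends)
# pkgadd -d ./CSKtomcat_1.1_x86.pkg   (Tomcat)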

So, I've got Tomcat listening on port 8081 as per normal, and Confluence sitting inside it as a web application talking happily to Postgres, and I decide that I actually want to do the Apache proxy thing so that my Confluence wiki appears to sit on my normal web server port. This is quite easy to configure; you just need a few lines like these:

LoadModule proxy_module libexec/mod_proxy.so
LoadModule proxy_connect_module libexec/mod_proxy_connect.so
LoadModule proxy_ftp_module libexec/mod_proxy_ftp.so
LoadModule proxy_http_module libexec/mod_proxy_http.so
LoadModule proxy_ajp_module libexec/mod_proxy_ajp.so
LoadModule proxy_balancer_module libexec/mod_proxy_balancer.so
ProxyPass               /wiki           http://localhost:8081/wiki
ProxyPassReverse        /wiki           http://localhost:8081/wiki

(I don't actually know how many of the LoadModules I really need, by the way; I suspect I don't need the last two.)

Here comes the bad news. The apache2 that comes with CoolStack doesn't include the mod_proxy.so modules, which means I can't use it for proxying. I tried copying the original mod_proxy.so modules from the Nevada Apache into the CoolStack one and loading them up, but when I tried to access the wiki, my system pretty much froze and the disks churned like mad. I had to kill the web server to get things back to normal.

How did I fix it? I reverted to the original Nevada-supplied web server, and configured the CoolStack-based one to run on a different port, which I can use when I want phpPgAdmin. At some point I'm going to have to solve the problem permanently, which probably means some further investigation to figure out exactly what's going wrong. But for now it's doing what I need; I just hope I don't want to run a PHP-based app anytime soon.
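For the record, the split ends up looking roughly like this. The CoolStack paths and the alternate port are from memory and examples respectively, so adjust them to wherever your install actually lives:

# svcadm enable svc:/network/http:apache2       (the Nevada-supplied Apache doing the proxying on port 80)
# vi /opt/coolstack/apache2/conf/httpd.conf     (change "Listen 80" to something like "Listen 8082")
# /opt/coolstack/apache2/bin/apachectl start    (the CoolStack Apache, serving phpPgAdmin on its own port)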


CEC2007: Here we come

It's that time of year again. CEC2007 will be in Las Vegas this year. I've never been, and I'm half looking forward to it, but as I think I have an addictive personality, I'm a little scared of Vegas.

At the minimum, I'm going to leave my credit cards in the hotel room when I go out, so that I don't end up losing more than just my shirt at the poker tables. 

More importantly, I think the conference is going to be great. I've seen the list of some of the topics that will be presented.

The theme this year is "Shift Our Universe, Our World, Your Move", with the sub-themes being: Shift to red (all the groovy bleeding-edge stuff that GregP talks about), Shift to green (all the nice eco-friendly stuff we've been doing), and Shift to grow (which focuses on all the things we need to do to enable our growth, and our customers' as well).

As per last year, I'll be blogging my perspectives of the conference again. 


New JET Resource

Sun has just launched http://wikis.sun.com, which is great because it gives me somewhere to put all the JET info rather than here.

I have created a space there called JET, which will become the primary Sun location for all information relating to the Jumpstart Enterprise Toolkit.

Have a look: Jet wiki



Wednesday Aug 01, 2007

Home Server tweaking

I've been working on my home server again (remember... the Ultra40 I won at CEC last year). I've been lazy for the past few months and have been stuck at Nevada build 63 for a while. While I've been live-upgrading my laptop with every Nevada release, I kept on finding really good excuses NOT to upgrade my home server.

The main issue is that it actually does real work. It's my home DHCP, SMTP, Samba, web, Jumpstart (and JET development) and print server. The household tends to notice when it's down. (Especially since all the home directories are mounted off it, everything stops when it's down. We get real tears in our house when people can't do their homework!)

The fact that I use Live Upgrade means that downtime is usually just a reboot. Chris Gerhard, who has a similar home set-up, upgraded before me and found a Samba problem, which caused me to delay upgrading until I knew it had been fixed. Then we had a minor problem with gnome-terminal character corruption, and finally gaim was replaced by pidgin, and in the process support for one of the IM protocols was accidentally broken.
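For anyone who hasn't tried it, the whole Live Upgrade cycle is only a handful of commands. Something like the following, where the BE names, slice and image path are just examples from my setup rather than anything you should copy verbatim:

# lucreate -n nv_69 -m /:/dev/dsk/c1d0s3:ufs            (create the alternate BE on a spare slice)
# luupgrade -u -n nv_69 -s /net/jumpstart/export/nv_69  (upgrade the alternate BE from an install image)
# luactivate nv_69                                      (make it the BE to boot next time)
# init 6                                                (the only actual downtime: one reboot)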

I don't have a problem being on the bleeding edge on my laptop, cos I can work around most of the issues, especially since I do most of my work on my home server.

Anyway, the long and short of it was that I was just about to do some other work on my Ultra40, namely try to get Apache with PHP working (among other things), and decided it would be prudent to upgrade to Nevada build 69 before I did anything... just to be sure.

Happy to report that it all went swimmingly (as usual), and downtime was in fact reduced to a single reboot, which them indoors didn't even notice.

One interesting point: on rebooting and starting GNOME, it automatically detected my USB-attached HP printer and gave me a pretty little "Add Printer Queue" window which allowed me to add my printer. Unfortunately it didn't detect that I had already added it manually many months previously. I suppose one can expect that to happen as new features begin to overlap manual workarounds. I'm pretty sure that printing should just work straight out of the box on a fresh build 69 install without any manual intervention... but I'm not going to bother to try.

I then proceeded to get my AMP working... topic for my next article.


Thursday May 31, 2007

Do you Tiddlywikle?

Bruce Porter just sent me a link to something called TiddlyWiki.

Its author, Jeremy Ruston, describes it as "a reusable non-linear personal web notebook".

In short, it's a wiki in a single HTML page. You don't need any server-side logic, just a web browser.

And the way it works is pretty cool too! 

Monday Jan 29, 2007

Printing on Nevada b56

I got asked in a comment in my previous article to describe EXACTLY how I got printing working.

I've got an HP5150 USB printer. I plugged it into a convenient USB port on my Solaris server, and then used the printmgr GUI to set it up. I selected Use PPD, and in the Printer-->New Attached Printer form, filled in the following:

Printer Name: unix
Printer Port: /dev/printers/0
Printer Make: HP
Printer Model: HP DeskJet 5150
Printer Driver: Foomatic/hpijs (recommended)

This allowed standard Unix printing to work. (In short, it pretends to be a standard postscript printer, and you can send pretty much anything to it)

Solaris - Solaris

I also turned on the ipp-listener service using svcadm (there's a one-liner for that below the output). On a remote Solaris client (my laptop, also running Solaris Nevada b56), I used printmgr again, and did Printer-->Add Access to Printer. This uses the IPP service by default if available. lpstat -v shows:

# lpstat -v
system for unix: zaphod (as ipp://zaphod/printers/unix)


The printer name is "unix" and my server is called "zaphod". With that, I could print from my laptop to my printer attached to my home server.
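As promised, enabling the IPP listener on the server is just one svcadm command. I believe the FMRI is the one below, but check with svcs if yours differs:

# svcadm enable svc:/application/print/ipp-listener:default
# svcs ipp-listener      (should now show it online)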

Samba Printing

The Samba printing was a little more troublesome. In short, all the Windows machines (and any machine, for that matter) trying to print natively to my "unix" queue had difficulties, because the queue was expecting either PostScript or plain text. I could have set the printer up on all the clients as a generic PostScript printer, but decided I'd rather print natively. To do this I could probably have played with the filters so that everything passed through, but that seemed too much like hard work. Instead I set up a different queue (to the same printer) which would act as a "raw" queue.

#!/bin/sh
lpadmin -p hp5150 -v /dev/printers/0 -T unknown -I any
accept hp5150
enable hp5150
lpadmin -p hp5150 -o nobanner

I think my smb.conf was already pretty much properly configured for printing, but I'll repeat the relevant bits just in case:

# If you want to automatically load your printer list rather
# than setting them up individually then you'll need this
   load printers = yes

# NOTE: If you have a BSD-style print system there is no need to 
# specifically define each individual printer
[printers]
   comment = All Printers
   path = /var/spool/samba
   browseable = no
# Set public = yes to allow user 'guest account' to print
   guest ok = no
   writable = no
   printable = yes

To set up the printer on the Windows side, I had to manually install the HP5150 drivers, and then add the printer using the Add Printer wizard in Windows. The key to getting this to work properly is to set it up as a "Local Printer", then select a Local Port, and specify the Local Port (in my case) as: \\zaphod\hp5150

Summary

I actually configured all of this when both my home server and my laptop were running Nevada b54, so it is possible that it is no longer necessary to set up two queues, but I've Live Upgraded since and the settings were retained, so I've had no need to change anything. A lot of things seem to have changed in the Solaris printing model in Nevada, but I've found it is now a lot easier than it ever was. (It's the expectation that it is going to be really difficult that makes you do more than you need to.) If you trust that it just works, and use the printmgr tool, I think it is difficult to go wrong.


Friday Jan 26, 2007

It just works!: Zones + ZFS + BrandZ + LiveUpgrade

I've successfully LiveUpgraded my home server twice so far (I've been slightly lazy and decided to do alternate builds, so I've gone from Nevada b52 to b54 to b56). Apart from the minor zfs issues, which require me to:

  1. Create an exclude file for the lucreate so that it doesn't try to copy my zfs partitions.
  2. Manually empty mountpoints where some of my zfs filesystems mount.

it all works really well.

I'd shied away from creating zones because LiveUpgrade didn't support zones in the earlier builds, but I've now been assured that it works fine.

Zones + ZFS. WOW

It just works. When you create a zone, if it detects that your zonepath is on a zfs filesystem, it'll automatically create a new zfs dataset for the zone you are creating. You don't need to do anything special; it just does the "Right Thing(tm)".

Furthermore, if you decide to clone a zone and it's on a zfs filesystem, it simply takes a zfs snapshot of the source zone's dataset and clones it for you. You don't need to do anything special; it just does the "Right Thing(tm)".

This means that once you've created your first zone, all your additional zones can be created and booted in a matter of seconds. Talk about rapid provisioning!
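Just to show how little typing is involved, cloning a zone goes something like this. The zone names are made up, and the sed trick is just one quick way of rewriting the zone name and zonepath in the exported config:

# zonecfg -z zone1 export | sed 's/zone1/zone2/g' | zonecfg -z zone2
# zoneadm -z zone2 clone zone1    (snapshots and clones zone1's dataset)
# zoneadm -z zone2 boot           (up and running in seconds)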

The icing on the cake: BrandZ

I've simply followed the instructions here and I've now got CentOS running in a zone, running on top of Solaris. Got Skype working!!

I love it when a plan comes together. We've now got some really clever integration of some individual cool Solaris technologies. The combined benefit is compelling.

Just waiting for ZFS boot to become available (well, more available than it is), and my home server aspirations will be complete. 

 


Wednesday Nov 22, 2006

Addendum: Building/Installing/Configuring....

I've been checking my spamassassin scores, and wasn't particularly happy with its ability to rate SPAM. Given its reputation, I was pretty sure that this was due to a configuration error on my part.

I had a sneaking suspicion that not all the "tests" I was running were actually happening. I ran spamd in debug mode, and started seeing a bunch of interesting errors. In short, I was missing Net::DNS, so anything that required any sort of DNS lookup was failing.
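If you want to see the same sort of output, debug mode is enough. Something along these lines (the grep is just to cut down the noise):

# spamassassin -D --lint 2>&1 | egrep -i 'dns|failed'   (lint the config and look for modules that fail to load)
# spamd -D                                              (or run the daemon in the foreground with full debug output)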

Simply did the perl -MCPAN -e 'install Net::DNS' thing and that got DNS lookups working. 

Then I started to debug the other errors in the log. Apparently the bayes plugin was having problems talking to my bayes files because I didn't have DB_File. I tried the CPAN trick again, which failed because it couldn't find db.h. It turns out I didn't have Berkeley DB installed. A short trip to sunfreeware.com later:

  •  db-4.2.52.NC-sol10-intel-local

It installs in /usr/local/BerkeleyDB.4.2. Given that CPAN was looking in /usr/local/BerkeleyDB, I created the link, and hey presto, perl -MCPAN -e 'install DB_File' worked fine.
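For completeness, that amounted to:

# ln -s /usr/local/BerkeleyDB.4.2 /usr/local/BerkeleyDB
# perl -MCPAN -e 'install DB_File'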

At that point I ran sa-learn against my Junk folder to generate some content, and checked the permissions of the bayes files. Note: bayes_path in the spamassassin local.cf needs to contain the prefix of the bayes files, e.g.:

use_bayes 1
bayes_path /export/home/spamd/bayes/bayes
bayes_file_mode 0666

# ls /export/home/spamd/bayes/
bayes_journal bayes_seen bayes_toks bayes.mutex

 

Now that it's all sorted, I'm getting much higher SPAM scores, and it seems much more accurate.

Some would say I should have just used Blastwave, but this was more fun, and a good learning experience. 


Tuesday Nov 21, 2006

BTConnect and Dynamic IP SMTP server problems.

So, I've got Exim working really nicely... I think.

I've got a "dynamic" IP address from my BTConnect business broadband connection. This means that I have trouble connecting directly to SMTP servers from my server because their reverse lookup of my name doesn't match... i.e. it resolves to a dynamic BT address, and some SMTP servers are rightfully suspicious of that and punt me away.

So I've configured Exim to relay to mail.btconnect.com. This works because I've registered my 2 domain names with them, and their mail relay will accept stuff coming from those domains.

However, I also forward e-mail for my sister, and have a couple of little mailing lists configured, and these get bounced because I retain the name of the original sender on my outgoing forwards.

Hmmm. This should be solvable by authenticating myself on BT's mail server.

A trawl of the BT broadband site tells me that:

1: BTConnect (the business one) doesn't allow SMTP authentication:

# telnet mail.btconnect.com 25
Trying 194.73.73.217...
Connected to mail.btconnect.com.
Escape character is '^]'.
220 C2bthomr06.btconnect.com ESMTP Mirapoint 3.7.4b-GA; Tue, 21 Nov 2006 17:50:20 GMT
ehlo <mydomain>
250-C2bthomr06.btconnect.com Hello host<nnn>.btcentralplus.com [xx.yy.zz.aa], pleased to meet you
250-8BITMIME
250-SIZE 52428800
250-DSN
250-ETRN
250 HELP

2: BTInternet (the consumer one) does:

# telnet mail.btinternet.com 25
Trying 217.146.188.192...
Connected to pop-smtp1-f.bt.mail.vip.ird.yahoo.com.
Escape character is '^]'.
220 smtp808.mail.ird.yahoo.com ESMTP
ehlo <mydomain>
250-smtp808.mail.ird.yahoo.com
250-AUTH LOGIN PLAIN XYMCOOKIE
250-PIPELINING
250 8BITMIME

Phone call to the BTConnect helpline. They understood my problem and were slightly sympathetic, but confirmed that the only way to send non-BT-domain e-mail via their BTConnect server was by pre-registering EVERY domain with them. Clearly impractical. (Heck, I don't know who's going to send my sister an e-mail.)

Phone call to the BTInternet helpline. I used to have an account. It was suspended due to lack of use. I got it reinstated, but they said I had to dial up at least once every six months to keep it going. I don't have a dialer at home, I'm bound to forget, and I'm not crazy about plaintext authentication. But I reconfigured Exim to use mail.btinternet.com as my smart relay, and started authenticating. Thankfully, my sister's e-mail stopped bouncing at that point.

I wasn't particularly happy with the solution. I had a couple of long term fixes in mind:

  1. Convert to a static IP from BT, which means I could register my name in the reverse DNS (in-addr.arpa) tables. Costs money. (Though they should really do it for free; my "dynamic" IP address has not changed in about 3 months.)
  2. Sign up for BTInternet's premium e-mail solution @ 1.50 a month, which means I could continue to use the mail.btinternet.com mailserver. Costs money, and I'm not happy with plaintext auth.
  3. Something Else.

I ended up doing something else. I had a rant about this to one of my friends, who happens to be my secondary MX (for when I break my machine), and he chuckled and reminded me that when he set the MX up, including my account on his server, he also enabled encrypted SMTP authentication for me. All I needed to do was point at his server in the first place (which I am now doing).

The only other thing I want to do is set Exim up so that outgoing mail from my domain is sent to mail.btconnect.com, and everything else uses my friend's mailserver. I know it's possible; I just need to write the rules. Here they are:

# Use this one for mail originating from my domain so I don't overload smart_route2
smart_route1:
  driver = manualroute
  domains = !+local_domains
  senders = *@<mydomain>
  transport = remote_smtp
  route_list = * mail.btconnect.com

# Use this one for mail NOT originating from my domain (i.e. mailing list/alias expansion)
smart_route2:
  driver = manualroute
  domains = !+local_domains
  transport = remote_smtp
  route_list = * <my friend's smtp server>

(obviously <mydomain> is my actual domain)

Exim is COOL.


Friday Nov 17, 2006

VNCviewer trick

I've been using "VNC Viewer Free Edition 4.1.1 for X" so that I can remote control other machines in the house, but if the machine I try to connect to is down/unavailable/switched off etc, the vncviewer command kind of hangs and I can't do anything on my desktop (except wiggle my mouse) until it times out after a few minutes. Needless to say, this was annoying me. Wrote a simple script called vncsafe:

#!/bin/sh
# vncsafe: only launch vncviewer if the target host answers a ping first.
if ping "$1" >/dev/null 2>&1; then
    vncviewer "$1" &
else
    zenity --error --text="$1 is not responding."
fi

Works a treat



Thursday Nov 16, 2006

I ate all the SPAM

I was chatting to Chris about my spamassassin setup, and he was saying that he's configured his with .forward files for all his users (family), and explained the reason for doing so.

Basically, he wanted to set a low SPAM threshold, and have all the SPAM delivered to himself so he could check it. Given that I have 4 daughters under the age of 12, I thought that would be a GOOD THING(tm) as well.

I also wanted to configure it so that I could vary the level of SPAM detection based on the user.

I created a .forward file in each user's account that looks like this:

# Exim filter

if
  $header_X-Spam-Score: does not begin -
then
  if $header_X-Spam-Score: is above 10
  then
    deliver <to me>
  endif
endif

I had to change the Exim configuration so that the X-Spam-Score header added by the spamassassin check in the acl_check_data ACL is written from $spam_score_int, because the user filter stuff in Exim doesn't like decimals.

Finally, I wrote a filter in Thunderbird which puts all mail in my inbox that's NOT addressed to me into a folder called SPAM. That means I can check for false positives, forward the e-mails to my offspring if required, and add a rule to the whitelist to let them through in future.

The .forward file allows me to specify varying levels of SPAM checking for different people, e.g. the seven-year-old twins have a setting of 5 (i.e. 0.5), while my wife has a setting of 1.

I need to let this run for a while so I can tweak the thresholds up or down as required.


Building/Installing/Configuring apps on Solaris Home server

I've diverged slightly from Chris Gerhard's home server implementation by deciding NOT to use the Blastwave packages, and instead either use the sunfreeware stuff or compile things myself.

I've got a few reasons for doing it this way:

  1. Many of these packages have compile-time options which allow me to include/exclude things. I kind of like being able to do that, rather than having to stick with the "defaults" compiled by someone else.
  2. Using Blastwave (which is really great) means that you need to install a bunch of "dependent" libraries. It just kind of irks me that a lot of the library/application dependencies that I install are already sitting in /usr/sfw. I then start getting really confused, and run into complications with LD_LIBRARY_PATH and making sure things are seeing the right library, etc. I think it's difficult to use Blastwave without sort of subscribing to it wholesale. It's a bit like buying a "kit car" fully assembled.
  3. The sunfreeware stuff tries to make use of existing Solaris libraries as much as possible, so it should reduce the number of additional packages I have to download.

Additional Software Installed on my home server:

From Blastwave:

  • xineui (and whatever it needed to get that working.. about 81MB)

From Sunfreeware:

  • imap-2004g-sol10-x86-local    (I had to manually update /etc/inetd.conf and run inetconv, then update /etc/services; see the sketch after this list. I also needed to generate a certificate, as this is the SSL version, so it wouldn't let me do plaintext authentication.)
  • openssl-0.9.8d-sol10-x86-local
  • libiconv-1.9.2-sol10-x86-local
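The inetd/SMF part of the imap install went roughly like this. The imapd path and the exact service entries are from memory, so double-check them against the package's README:

# echo "imaps   993/tcp" >> /etc/services
# echo "imaps stream tcp nowait root /usr/local/sbin/imapd imapd" >> /etc/inetd.conf
# inetconv                    (converts the legacy inetd.conf entries into SMF services)
# svcs -a | grep imap         (check that the new service shows up and is online)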

From CPAN:

  • spamassassin

    (# perl -MCPAN -e 'install Mail::SpamAssassin', and download the listed dependencies as well.)

From Source:

  • Exim 4.63
  • Clamav-0.88.6

Building Exim

Building Exim was pretty straightforward. Grab the tar.bz2, unzip, untar, copy src/EDITME to Local/Makefile, then edit the Makefile. It is well commented and lets you turn options on and off, and specify where you want the spool, bin directory and config files to go. I created zfs filesystems for /usr/exim and /var/spool/exim. I also elected to make sure I had all the content scanning and TLS/SSL options turned ON. I've got Sun Studio 11 installed, and it used this compiler when I ran the make. I then ran make install to stick everything in the right place.
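In command form, the build boiled down to something like this (the exact Makefile settings you touch will depend on your layout):

# bzcat exim-4.63.tar.bz2 | tar xf -
# cd exim-4.63
# cp src/EDITME Local/Makefile
# vi Local/Makefile    (set BIN_DIRECTORY, SPOOL_DIRECTORY and CONFIGURE_FILE, and turn on content scanning and TLS)
# make
# make install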

Before actually running exim, I had to make sure that clamav and spamassassin were all up and running, so I got those to the point where the clamd and spamd daemons were running, then simply edited the /usr/exim/configure file to set things up.

Building Clamav

Clamav is cool because it has the "configure" script. So I ran it, it found everything it wanted, and then I ran make and it just compiled, followed by "make install" to put it all in /usr/local (which is also a zfs filesystem).
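In other words, the classic configure/make/make install three-step (tarball name approximate; if configure complains about a missing clamav user, create one first):

# bzcat clamav-0.88.6.tar.bz2 | tar xf -
# cd clamav-0.88.6
# ./configure
# make
# make install      (defaults to /usr/local)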

Plugging it all in

I kind of cheated and hacked the /lib/svc/method/smtp-sendmail file. I did this because there are dependencies on sendmail which I need to maintain. I cheated a little bit more by having this same file start clamd, freshclam and spamd. It gives me less granularity of control, but the reality is that I usually manage all of these as a group anyway.
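The hack itself is nothing clever; I just had the start case of the method script kick off the extra daemons, along the lines of the fragment below. The daemon paths depend on where you installed things, so treat this as a sketch rather than exactly what I typed:

# Start the mail-scanning helpers alongside the MTA.
/usr/local/sbin/clamd
/usr/local/bin/freshclam -d
/usr/local/bin/spamd -d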

The other tiny problem I had was that spamd didn't honour the LD_LIBRARY_PATH variable because bits of it are setuid. (It's a Solaris security thing.) Anyway, this meant that I had to use crle to make sure that all the apps had /usr/sfw/lib and /usr/local/lib in their paths.
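The crle incantation was along these lines. Run crle with no arguments first so you know what the default search path is before you update it:

# crle                                          (show the current runtime linking configuration)
# crle -u -l /usr/sfw/lib -l /usr/local/lib     (append these directories to the default search path)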

In my next blog

I'll cover in some more detail the settings I used for exim, and the fun I had with outgoing SMTP e-mail.


More home server stuff

Almost there...

My "services" configurations are almost done now. I'm currently learning about the interactions among: Live Upgrade (which is REALLY cool), SVM mirrored root disks, GRUB menus, and ZFS. (oh, it'll be so much easier when zfs boot is available, but until then.....). The problem kind of starts with the fact that the boot failsafe doesn't recognise /dev/md devices, which means I need to break my mirrors if I need to fix a booting problem.... unless I've got Live Upgrade working which means I'll ALWAYS have a bootable environment.

 

The man page of lucreate suggests that I use lucreate to do my mirroring.

So, I've currently got the following:

2 boot disks partitioned as follows:

  • slice0=7GB
  • slice1=2GB
  • slice3=7GB
  • slice4=free
  • slice7=32MB 

Solaris nv build 52 installed on c1d0s0, with an intent to mirror onto c2d0s0. My ABE (alternate boot environment) for LU is going to be slice3, mirrored.

The lucreate man page says I simply need to type:

# lucreate -c slice0 -m /:/dev/md/dsk/d30:ufs,mirror \
-m /:/dev/dsk/c1d0s3,d31:attach \
-m /:/dev/dsk/c2d0s3,d32:attach -n slice3

which doesn't work due to a couple of "issues" with the interaction between LU and SVM. They are almost all fixed now, and the syntax of the command required is slightly different. I adopted the quick and easy workaround... simply create the metadevices manually first.

# metadb -a -f -c 3 c1d0s7 c2d0s7
# metainit d31 1 1 /dev/dsk/c1d0s3
# metainit d32 1 1 /dev/dsk/c2d0s3
# metainit d30 -m d31
# metattach d30 d32
# lucreate -c slice0 -m /:/dev/md/dsk/d30:ufs -n slice3

This fails after about 30 mins. My filesystem is full. No room to write the GRUB config stuff. Hmm, my slice0 is only 54% full. Slice3 is 100%. It's gone and copied all my data that's mounted via zfs. Referring to the manual again, I learn how lucreate decides what to put into the BE or not. It looks in vfstab. Zfs filesystems aren't listed there, so it figures that my /opt/SUNWspro (which is a zfs filesystem) is on the / partition, and dutifully copies it. Hmm. I need to exclude all my zfs filesystems. (There are a couple of bugs logged against this as well, as one would think that lu would be clever enough to do this by itself... and it does, kind of.)

# zfs list | awk '{print $5}' | grep -v \- > /luexcludefile
(then edit the file so that it only lists the top-level mountpoints)


I try the lucreate command again with -f /luexcludefile. This fails because of package dependencies (i.e. I'm excluding filesystems that are referenced in the package database under /var/sadm), so I need to use the -I flag. I also need to clean up the configuration before I start again: ludelete doesn't work properly because the process failed, so I manually delete the partially created BE by editing the /etc/lutab file. (ludelete is preferred; only do this if you think you know what you are doing.) My final, successful lucreate command was:

# lucreate -I -l /data/downloads/luerror -m /:/dev/md/dsk/d30:ufs -n slice3 -f /data/downloads/luexclude

So, I luactivate that and reboot. df -k shows I've forgotten to mirror the swap slice. Oops.

# mkfile 1024m /var/tmp/swapfile
# swap -a /var/tmp/swapfile
# swap -d /dev/dsk/c1d0s1
# metainit d21 1 1 /dev/dsk/c1d0s1
# metainit d22 1 1 /dev/dsk/c2d0s1
# metainit d20 -m d21
# metattach d20 d22
# swap -a /dev/md/dsk/d20
# swap -d /var/tmp/swapfile
# rm /var/tmp/swapfile
# (vi /etc/vfstab and change swap location to the md device.)

I'm now running off my mirrored BE, slice3. I ludelete the single-slice slice0 BE, and repeat the process I've just done, rebuilding slice0 as a mirrored BE.
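The second pass is just the same recipe with another set of metadevice names (d10/d11/d12 and the second log file name here are arbitrary choices of mine):

# ludelete slice0
# metainit d11 1 1 /dev/dsk/c1d0s0
# metainit d12 1 1 /dev/dsk/c2d0s0
# metainit d10 -m d11
# metattach d10 d12
# lucreate -I -l /data/downloads/luerror2 -m /:/dev/md/dsk/d10:ufs -n slice0 -f /data/downloads/luexclude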

I feel a lot safer now. I've got 2 mirrored bootable partitions on my system. I'd need to do a LOT of stuff wrong to get into an unbootable position (again).

Did I mention that the motivation for getting this all to work was that I managed to break my boot environment? (Don't ask, I'm too embarrassed to say how). And that the only way for me to recover was to re-jumpstart the system from my laptop? The whole system ended up being down for about 3 hours, as it took that long to get all the services reconfigured and ready to go again.


