Friday Mar 15, 2013

New OVM Server for SPARC WP

I've spent most of the past year working on Engineered Systems, more specifically the SPARC SuperCluster, and it occurred to me that the way we use layered virtualisation within the SSC is not very heavily publicised. Although you could derive these configurations from the OVM Server for SPARC documentation, it wasn't clear that this was considered a valid way of configuring a T-Series based server.

One of the ways the SSC achieves such high levels of performance and efficiency is that all the domains are built as root domains, which means there is zero virtualisation overhead within those domains. Implementing Root Domains with Oracle VM Server for SPARC, co-written by me, Mikel Manitius and Jeff Savit, explains how this is achieved, and how to use the same techniques on your own T4 based servers to get the same kind of benefits.

 Happy Reading!

Thursday Dec 17, 2009

Jumpstarting over multiple subnets

The question of Jumpstart and subnets keeps coming up. I've finally documented the issue and the ways of "solving" it on the JET wiki site. As with any problem, there is no single easy answer, but there are a number of different solutions that can be combined in various ways.


Tuesday Jan 06, 2009

FOLLOWUP: Solaris Cluster on a laptop using VirtualBox, iSCSI and a quorum server

A few months ago I wrote  Solaris Cluster on a laptop using VirtualBox, iSCSI and a quorum server, which detailed some of the hoops that I needed to jump through to get it to work. The good news is that the new features in VirtualBox 2.0 (and now 2.1) have gotten rid of 2 of those hoops:

  1. I no longer need to play with vnics and that crazy script, as Host Interface networking simply works out of the box.
  2. I don't need to install the 32-bit, internal-only SC package, as VirtualBox now supports 64-bit guests.

I'm in the process of rebuilding my laptop cluster making use of the above new features.

 Hurrah!!!


Monday Jan 05, 2009

JET 4.6 externally available

As a follow-on to my previous post, JET 4.6 is now available externally. Check the JET wiki which has a link to the download.

This is mainly a bug-fix release, and has been given a major version-number increase because it's the JET version that will be in the next release of xVM OpsCenter (and we like to ship OpsCenter releases against major JET version numbers).

In terms of changes, I've fixed a couple of bugs around turning NTP on and around handling multiple disks in ZFS pools. This version also has tweaks in place to work around some DHCP issues when installing S10 U6 and Solaris Nevada.


Tuesday Nov 04, 2008

JET 4.4.7 available externally

As a follow-on to my previous post, JET 4.4.7 is now available externally. Check the JET wiki which has a link to the download.

Also have a look at the JET User Guide, which has now been posted on the wiki. I expect it to grow over time; for now it's a conversion of the original user guide with updates for correctness.


Friday Aug 10, 2007

New JET Resource

Sun has just launched http://wikis.sun.com, which is great because it gives me somewhere to put all the JET information rather than keeping it here.

I have created a space there called JET, which will become the primary Sun location for all information relating to the Jumpstart Enterprise Toolkit.

Have a look: JET wiki



Wednesday Aug 01, 2007

Home Server tweaking

I've been working on my home server again (remember: the Ultra40 I won at CEC last year). I've been lazy for the past few months and have been stuck at Nevada Build 63 for a while. While I've been live-upgrading my laptop with every Nevada release, I kept finding really good excuses NOT to upgrade my home server.

The main issue is that it actually does real work. It's my home DHCP, SMTP, Samba, web, Jumpstart (and JET development) and print server. The household tends to notice when it's down, especially since all the home directories are mounted off it, which means that everything stops. We get real tears in our house when people can't do their homework!

The fact that I use Live Upgrade means that downtime is usually just a reboot. Chris Gerhard, who has a similar home set-up, upgraded before me and found a Samba problem, which caused me to delay upgrading until I knew it had been fixed. Then we had a minor problem with gnome-terminal character corruption. Finally, gaim was replaced by pidgin, and in doing so support for one of the IM protocols was accidentally broken.

I don't have a problem being on the bleeding edge on my laptop, cos I can work around most of the issues, especially since I do most of my work on my home server.

Anyway, the long and short of it was that I was just about to do some other work on my Ultra40, namely try and get apache with php working (among other things), and decided it would be prudent to upgrade to Nevada Build 69 before I did anything.... just to be sure.

Happy to report that it all went swimmingly (as usual), and downtime was in fact reduced to a single reboot, which them indoors didn't even notice.

One interesting point: on rebooting and starting GNOME, it automatically detected my USB-attached HP printer, and gave me a pretty little "Add Printer Queue" window which allowed me to add it. Unfortunately it didn't detect that I had already added it manually many months previously. I suppose one can expect that to happen as new features begin to overlap manual workarounds. I'm pretty sure that printing should just work straight out of the box on a fresh Build 69 install without any manual intervention... but I'm not going to bother to try.

I then proceeded to get my AMP working... topic for my next article.


Friday Jan 26, 2007

It just works!: Zones + ZFS + BrandZ + LiveUpgrade

I've successfully LiveUpgraded my home server twice so far (I've been slightly lazy and decided to do alternate builds, so I've gone from Nevada b52 to b54 to b56). Apart from the minor ZFS issues, which require me to:

  1. Create an exclude file for the lucreate so that it doesn't try to copy my zfs partitions.
  2. Manually empty mountpoints where some of my zfs filesystems mount.

it all works really well.
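In case it's useful, an upgrade cycle on this box boils down to something like the sketch below. The BE name, target device and image path here are illustrative rather than my exact ones, and the exclude-file dance is walked through in gory detail in the November post further down this page.

# zfs list | awk '{print $5}' | grep -v \- > /var/tmp/luexclude   (then trim it to the top-level mountpoints)
# lucreate -I -n nv56 -m /:/dev/md/dsk/d30:ufs -f /var/tmp/luexclude
# luupgrade -u -n nv56 -s /path/to/the/nevada/image
# luactivate nv56
# init 6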

I'd shied away from creating zones because LiveUpgrade didn't support them in the earlier builds, but I've been assured that it works fine now.

Zones + ZFS. WOW

It just works. When you create a zone, if it detects that your zonepath is on a ZFS filesystem, it'll automatically create a new ZFS dataset for the zone you are creating. You don't need to do anything special, it just does the "Right Thing(tm)".

Furthermore, if you decide to clone a zone and it's on a ZFS filesystem, it simply snapshots and clones the source zone's dataset for you. You don't need to do anything special, it just does the "Right Thing(tm)".

This means that once you've created your first zone, all your additional zones can be created and booted in a matter of seconds. Talk about rapid provisioning!
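To give a feel for it, the whole thing looks roughly like this (the zone names and zonepath are made up for illustration, and the source zone needs to be halted before you clone it):

# zonecfg -z web "create; set zonepath=/zones/web; commit"
# zoneadm -z web install
# zoneadm -z web boot
# zonecfg -z web2 "create; set zonepath=/zones/web2; commit"
# zoneadm -z web2 clone web      (with /zones on ZFS this is a snapshot and clone, so it takes seconds)
# zoneadm -z web2 boot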

The icing on the cake: BrandZ

I've simply followed the instructions here and I've now got CentOS running in a zone, running on top of Solaris. Got Skype working!!

I love it when a plan comes together. We've now got some really clever integration of some individual cool Solaris technologies. The combined benefit is compelling.

Just waiting for ZFS boot to become available (well, more available than it is), and my home server aspirations will be complete. 

 


Tuesday Nov 21, 2006

BTConnect and Dynamic IP SMTP server problems.

So, I've got Exim working really nicely... I think.

I've got a "dynamic" IP address from my BTConnect business broadband connection. This means that I have trouble connecting directly to SMTP servers from my server because their reverse lookup of my name doesn't match... i.e. it resolves to a dynamic BT address, and some SMTP servers are rightfully suspicious of that and punt me away.

So I've configured Exim to relay to mail.btconnect.com. This works because I've registered my 2 domain names with them, and their mail relay will accept stuff coming from those domains.

However, I also forward e-mail for my sister, and have a couple of little mailing lists configured, and these get bounced because I retain the name of the original sender on my outgoing forwards.

Hmmm. This should be solvable by authenticating myself on BT's mail server.

Trawl of the BT Broadband site tells me that:

1: BTConnect (the business one) doesn't allow SMTP authentication:

 # telnet mail.btconnect.com 25
Trying 194.73.73.217...
Connected to mail.btconnect.com.
Escape character is '^]'.
220 C2bthomr06.btconnect.com ESMTP Mirapoint 3.7.4b-GA; Tue, 21 Nov 2006 17:50:20 GMT
ehlo <mydomain>
250-C2bthomr06.btconnect.com Hello host<nnn>.btcentralplus.com [xx.yy.zz.aa], pleased to meet you
250-8BITMIME
250-SIZE 52428800
250-DSN
250-ETRN
250 HELP

2: BTInternet (the consumer one) does:

# telnet mail.btinternet.com 25
Trying 217.146.188.192...
Connected to pop-smtp1-f.bt.mail.vip.ird.yahoo.com.
Escape character is '^]'.
220 smtp808.mail.ird.yahoo.com ESMTP
ehlo <mydomain>
250-smtp808.mail.ird.yahoo.com
250-AUTH LOGIN PLAIN XYMCOOKIE
250-PIPELINING
250 8BITMIME

Phone call to the BTConnect helpline. They understood my problem and were slightly sympathetic, but confirmed that the only way to send non-BT-domain e-mail via their BTConnect server was by pre-registering EVERY domain with them. Clearly impractical. (Heck, I don't know who's going to send my sister an e-mail.)

Phone call to the BTInternet helpline. I used to have an account. It was suspended due to lack of use. Got it re-instated. But they said I had to dial up at least once every 6 months to keep it going. I don't have a dialer at home, I'm bound to forget, and I'm not crazy about plain text authentication. But I reconfigured Exim to use mail.btinternet.com as my smart relay, and started authenticating. Thankfully, my sister's e-mail stopped bouncing at that point.
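For anyone in the same boat, the interim configuration amounted to something like this. It's a sketch from memory: the router and authenticator names and the credentials are placeholders, and the remote_smtp transport also needs hosts_try_auth = mail.btinternet.com set on it.

btinternet_smarthost:
  driver = manualroute
  domains = !+local_domains
  transport = remote_smtp
  route_list = * mail.btinternet.com

# and in the authenticators section:
btinternet_login:
  driver = plaintext
  public_name = LOGIN
  client_send = : <my btinternet username> : <my password>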

I wasn't particularly happy with the solution. I had a couple of long term fixes in mind:

  1. Convert to a static IP from BT, which means I could register my name in the reverse DNS (in-addr.arpa) tables. Costs money. (Though they should really do it for free; my "dynamic" IP address has not changed in about 3 months.)
  2. Sign up for BTInternet's premium e-mail solution @ 1.50 a month, which means I could continue to use the mail.btinternet.com mailserver. Costs money, and not happy with plaintext auth.
  3. Something Else.

I ended up doing something else. I had a rant about this to one of my friends, who happens to be my secondary MX (for when I break my machine), and he chuckled and reminded me that when he set the MX up, including my account on his server, he also enabled encrypted SMTP authentication for me. All I needed to do was point at his server in the first place (which I am now doing).

The only other thing I want to do is set Exim up so that outgoing mail from my domain is sent to mail.btconnect.com, and everything else goes via my friend's mailserver. I know it's possible; I just need to write the rules. Here they are:

# Use this one for mail originating from my domain so I don't overload smart_route2
smart_route1:
  driver = manualroute
  domains = !+local_domains
  senders = *@<mydomain>
  transport = remote_smtp
  route_list = * mail.btconnect.com

# Use this one for mail NOT originating from my domain (i.e. mailing list/alias expansion)
smart_route2:
  driver = manualroute
  domains = !+local_domains
  transport = remote_smtp
  route_list = * <my friend's smtp server>

(obviously <mydomain> is my actual domain)

Exim is COOL.


Friday Nov 17, 2006

VNCviewer trick

I've been using "VNC Viewer Free Edition 4.1.1 for X" so that I can remote control other machines in the house, but if the machine I try to connect to is down/unavailable/switched off etc, the vncviewer command kind of hangs and I can't do anything on my desktop (except wiggle my mouse) until it times out after a few minutes. Needless to say, this was annoying me. Wrote a simple script called vncsafe:

#!/bin/sh
# vncsafe: only start vncviewer if the target host answers a ping,
# so a dead/switched-off machine doesn't leave the desktop hung.
# Solaris ping takes an optional timeout (in seconds) as its second argument.
if ping "$1" 5 >/dev/null 2>&1; then
    vncviewer "$1" &
else
    zenity --error --text="$1 is not responding."
fi

Works a treat



Thursday Nov 16, 2006

I ate all the SPAM

I was chatting to Chris about my spamassassin setup, and he was saying that he's configured his with .forward files for all his users (family), and explained the reason for doing so.

Basically, he wanted to set a low SPAM threshold, and have all the SPAM delivered to himself so he could check it. Given that I have 4 daughters under the age of 12, I thought that would be a GOOD THING(tm) as well.

I also wanted to configure it so that I could vary the level of SPAM detection based on the user.

I created a .forward file in each user's account that looks like this:

# Exim filter

if
  $header_X-Spam-Score: does not begin -
then
  if $header_X-Spam-Score: is above 10
  then
    deliver <to me>
  endif
endif

I had to change the Exim configuration so that the X-Spam-Score header added in the acl_check_data ACL is written as $spam_score_int rather than the decimal score calculated by spamassassin, because the user filter support in Exim doesn't like decimals.
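The relevant acl_check_data fragment ended up looking roughly like this (from memory, using the warn/message style of this Exim vintage). Note that $spam_score_int is the SpamAssassin score multiplied by ten, which is why the thresholds in the .forward files look inflated:

warn  spam    = nobody:true
      message = X-Spam-Score: $spam_score_int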

Finally, I wrote a filter in Thunderbird which puts all mail in my inbox that's NOT addressed to me into a folder called SPAM. That means I can check for false positives, forward the e-mails to my offspring if required, and add a rule to the whitelist to allow them in future.

The .forward file allows me to specify different levels of SPAM checking for different people, e.g. the 7-year-old twins have a setting of 5 (i.e. 0.5), while my wife has a setting of 1.

Need to let this run for a while so I can tweak between too much or too little as required. 


Building/Installing/Configuring apps on Solaris Home server

I've diverted slightly from Chris Gerhard's home server implementation by deciding to NOT use the blastwave packages, and instead either use the sunfreeware stuff, or compile it myself.

I've got a few reasons for doing it this way:

  1. Many of these packages have compile-time options which allow me to include/exclude things. I kind of like being able to do that, rather than having to stick with the "defaults" compiled by someone else.
  2. Using blastwave (which is really great) means that you need to install a bunch of "dependent" libraries. It just kind of irks me that a lot of the library/application dependencies I install are already sitting in /usr/sfw. I then start getting really confused, and run into complications with LD_LIBRARY_PATHs and making sure things are seeing the right library etc. I think it's difficult to use blastwave without subscribing to it wholesale. It's a bit like buying a "kit car" fully assembled.
  3. The sunfreeware stuff tries to make use of existing Solaris libraries as much as possible, so it should reduce the number of additional packages I have to download.

Additional Software Installed on my home server:

From Blastwave:

  • xineui (and whatever it needed to get that working.. about 81MB)

From Sunfreeware:

  • imap-2004g-sol10-x86-local    (had to manually update /etc/inetd.conf and run inetconv, then update /etc/services. Also needed to generate a certificate, as this is the SSL version and wouldn't let me do plain text authentication.)
  • openssl-0.9.8d-sol10-x86-local
  • libiconv-1.9.2-sol10-x86-local

From CPAN:

  • SpamAssassin

    (# perl -MCPAN -e 'install Mail::SpamAssassin', plus the dependencies it lists.)

From Source:

  • Exim 4.63
  • Clamav-0.88.6

Building Exim

Building Exim was pretty straightforward. Grab the tar.bz2, unzip, untar, and copy src/EDITME to Local/Makefile. Then edit the Makefile; it is well commented and lets you turn options on and off and specify where you want the spool directory, bin directory and config file to go. I created ZFS filesystems for /usr/exim and /var/spool/exim, and elected to make sure I had all the content scanning and TLS/SSL options turned ON. I've got Sun Studio 11 installed, and make picked that compiler up automatically. Then I ran make install to put everything in the right place.
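For the record, the whole thing boils down to something like this; the Local/Makefile settings are from memory, so treat them as pointers rather than a recipe:

# bzip2 -dc exim-4.63.tar.bz2 | tar xf -
# cd exim-4.63
# cp src/EDITME Local/Makefile
  (edit Local/Makefile: set SPOOL_DIRECTORY, BIN_DIRECTORY and CONFIGURE_FILE to taste,
   and uncomment WITH_CONTENT_SCAN and SUPPORT_TLS; the latter also needs TLS_LIBS=-lssl -lcrypto)
# make
# make install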

Before actually running Exim, I had to make sure that ClamAV and SpamAssassin were up and running, so I got those to the point where the clamd and spamd daemons were running, then simply edited the /usr/exim/configure file to set things up.

Building Clamav

Clamav is cool because it has the "configure" script. So I ran it, it found everything it wanted, and then I ran make and it just compiled. Followed by "make install" to put it all in /usr/local (which is also a zfs filesystem).
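The build really is the classic three-step; the only wrinkle worth noting is that ClamAV's configure expects a clamav user and group to exist (or to be told otherwise with --with-user/--with-group):

# cd clamav-0.88.6
# ./configure            (defaults to --prefix=/usr/local)
# make
# make install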

Plugging it all in

I kind of cheated and hacked the /lib/svc/method/smtp-sendmail file. I did this because there are dependencies on sendmail which I need to maintain. I cheated a little bit more by having this same file start clamd, freshclam and spamd. It gives me less granularity of control, but the reality is that I usually manage all of these as a group anyway.
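The additions to the start method amount to little more than the lines below. The paths are a guess at where my builds put things (spamd came from CPAN, so yours may well live elsewhere), and the tidy way would obviously be a separate SMF manifest for each daemon:

/usr/local/sbin/clamd
/usr/local/bin/freshclam -d
/usr/local/bin/spamd -d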

The other tiny problem I had was that spamd didn't honour the LD_LIBRARY_PATH variable, because bits of it are setuid (it's a Solaris security thing). Anyway, this meant that I had to use crle to make sure that all the apps had /usr/sfw/lib and /usr/local/lib in their library search paths.
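Running crle with no arguments shows the current configuration; adding the extra directories to the default search path is a one-liner (with a -64 variant if any of the 64-bit apps need it):

# crle -u -l /usr/sfw/lib -l /usr/local/lib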

In my next blog

I'll cover in some more detail the settings I used for exim, and the fun I had with outgoing SMTP e-mail.


More home server stuff

Almost there...

My "services" configurations are almost done now. I'm currently learning about the interactions among: Live Upgrade (which is REALLY cool), SVM mirrored root disks, GRUB menus, and ZFS. (oh, it'll be so much easier when zfs boot is available, but until then.....). The problem kind of starts with the fact that the boot failsafe doesn't recognise /dev/md devices, which means I need to break my mirrors if I need to fix a booting problem.... unless I've got Live Upgrade working which means I'll ALWAYS have a bootable environment.

 

The man page of lucreate suggests that I use lucreate to do my mirroring.

So, I've currently got the following:

2 boot disks partitioned as follows:

  • slice0=7GB
  • slice1=2GB
  • slice3=7GB
  • slice4=free
  • slice7=32MB 

Solaris Nevada Build 52 is installed on c1d0s0, with the intent to mirror it onto c2d0s0. My alternate boot environment (ABE) for Live Upgrade is going to be slice3, mirrored.

The lucreate man page says I simply need to type:

# lucreate -c slice0 -m /:/dev/md/dsk/d30:ufs,mirror \
-m /:/dev/dsk/c1d0s3,d31:attach \
-m /:/dev/dsk/c2d0s3,d32:attach -n slice3

which doesn't work due to a couple of "issues" with the interaction between LU and SVM. They are almost all fixed now, and the syntax of the command required is slightly different. I adopted the quick and easy workaround... simply create the metadevices manually first.

# metadb -a -f -c 3 c1d0s7 c2d0s7
# metainit d31 1 1 /dev/dsk/c1d0s3
# metainit d32 1 1 /dev/dsk/c2d0s3
# metainit d30 -m d31
# metattach d30 d32
# lucreate -c slice0 -m /:/dev/md/dsk/d30:ufs -n slice3

This fails after about 30 mins: the filesystem is full, so there's no room to write the GRUB config. Hmm, my slice0 is only 54% full, but slice3 is 100% full. It's gone and copied all of the data that's mounted via ZFS. Referring to the manual again, I learn how lucreate decides what to put into the BE: it looks in vfstab. ZFS filesystems aren't listed there, so it figures that my /opt/SUNWspro (which is a ZFS filesystem) is on the / partition, and dutifully copies it. Hmm. I need to exclude all my ZFS filesystems. (There are a couple of bugs logged against this as well, as one would think that LU would be clever enough to do this by itself... and it does, kind of.)

# zfs list | awk '{print $5}' | grep -v \- > /luexcludefile (edit it so that I only list top level directories)


Try the lucreate command again with -f /luexcludefile. This fails because of package dependencies (i.e. I'm excluding filesystems that are referenced in the sadm/package files), so I need to use the -I flag. I also need to clean up the configuration before I start again; ludelete doesn't work properly because the process failed, so I manually delete the partially created BE by editing the /etc/lutab file. (ludelete is preferred; only do this if you think you know what you are doing.) My final, successful lucreate command was:

# lucreate -I -l /data/downloads/luerror -m /:/dev/md/dsk/d30:ufs -n slice3 -f /data/downloads/luexclude

So I luactivate that and reboot. df -k shows I've forgotten to mirror the swap slice. Oops.

# mkfile 1024m /var/tmp/swapfile
# swap -a /var/tmp/swapfile
# swap -d /dev/dsk/c1d0s1
# metainit d21 1 1 /dev/dsk/c1d0s1
# metainit d22 1 1 /dev/dsk/c2d0s1
# metainit d20 -m d21
# metattach d20 d22
# swap -a /dev/md/dsk/d20
# swap -d /var/tmp/swapfile
# rm /var/tmp/swapfile
# (vi /etc/vfstab and change swap location to the md device.)

I'm now running off my mirrored BE: slice3. I ludelete the single-sliced slice0 and repeat the process I've just done, rebuilding the slice0 BE as a mirrored slice (sketched below).
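That second pass is essentially the same dance with different names; d10, d11 and d12 are just the metadevice names I'm using here for illustration:

# ludelete slice0
# metainit d11 1 1 /dev/dsk/c1d0s0
# metainit d12 1 1 /dev/dsk/c2d0s0
# metainit d10 -m d11
# metattach d10 d12
# lucreate -I -l /data/downloads/luerror -m /:/dev/md/dsk/d10:ufs -n slice0 -f /data/downloads/luexclude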

I feel a lot safer now. I've got 2 mirrored bootable partitions on my system. I'd need to do a LOT of stuff wrong to get into an unbootable position (again).

Did I mention that the motivation for getting this all to work was that I managed to break my boot environment? (Don't ask, I'm too embarrassed to say how). And that the only way for me to recover was to re-jumpstart the system from my laptop? The whole system ended up being down for about 3 hours, as it took that long to get all the services reconfigured and ready to go again.



Thursday Nov 09, 2006

Following the dot in .....

Solaris x64 Home Server AND Desktop

For those of you who have been following Chris Gerhard's blog about his adventure in creating a home server running Solaris, none of this is going to be particularly new, because I'm shamelessly going to copy as much of what he's done as will work for me.

 

We both work on the same campus in the UK, and we've both been running our home networks on Sun's Cobalt Qubes for the past few years, with relatively good success (not intentionally: I found that out AFTER I met him; in fact our first conversation was probably via the Qube, when he rescued me after I turned mine into a brick, pun intended). However, the Qube has become a little long in the tooth and, frankly, isn't Web 2.0 compatible (or it's not worth the effort to make it so).

 

I was initially waiting for him to get it all sorted, and then duplicate everything he'd done (hardware and all).

 

However, I managed to win an Ultra40 with a 24-inch LCD monitor at CEC2006 for the best blog coverage of the event, and have decided to use that as my "new" home server. (It came with 1GB RAM and an 80GB HDD :-( , so I've beefed it up to 4GB, added another 80GB drive, and added a pair of 320GB drives which will be dedicated solely to ZFS, which is good because ZFS can then turn on the disk write cache. I've also created 2 partitions on the mirrored root disk for Live Upgrade purposes, and handed the rest of that disk over to ZFS; I'm going to use that bit for disk-based backups of the important stuff on the 320GB disks.)

 

Given that it would be a crime to attach the 24-inch LCD to a Windows box, I've decided to also make it my primary workstation at home, so I'm going to be playing with JDS4 and making it all work well as a desktop too.

 

Progress so far:

  • Solaris Nevada Build 51 installed

  • JDS4 beta (which will be in build 53 anyway; I couldn't wait, as it has a heap of really groovy features such as the Rhythmbox music player, a CD burner and a host of user-friendly applications).

  • Set up Samba so the wife and kids can get file and print access. (They were suitably unimpressed and didn't even notice that they were no longer running on a Qube with a 40GB drive and a mirror broken by a failed disk. I guess that's a good thing.)

  • Connected my HP5150 Deskjet via USB. Worked a charm, created a device called /dev/printer/0 and then used printmgr to configure it. (printmgr IS cool, but still slightly cryptic)

  • Discovered that the printmgr-created queue worked fine for local printing, but couldn't handle the already-processed jobs that Windows sends it, so I had to create an additional "raw" printer queue for printing from there. Couldn't figure out how to do that via printmgr, so did it the real man's way with lpadmin, accept and enable (see the sketch after this list). Word of warning: "enable <printername>" doesn't work in the bash shell, because bash has a builtin called enable, so use the full path. (I'm sure I could fix it by messing with the filter, but this is easier. I hardly ever print from home when anyone else is printing anyway.)

  • Got xine working for playing movies (simply grabbed it from blastwave). (Changed nautilus default preferences by editing some files in /usr/share/application)
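For completeness, the raw queue boiled down to something like this; the queue name is made up, and the device is the one Solaris created for the USB printer:

# /usr/sbin/lpadmin -p hp5150-raw -v /dev/printer/0 -I any
# /usr/sbin/accept hp5150-raw
# /usr/bin/enable hp5150-raw      (the full path matters, because bash has its own "enable" builtin)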

 

My BIGGEST problem is that, since I won the Ultra40 in the US, it was sent to me with a US UNIX keyboard. Which is kind of OK, given that I spent my formative years in the Caribbean, which surprisingly also had US keyboards. However, I'm struggling with having to context-switch between the two (yes, they are different).

 

Things to do:

  • nameservices: DNS and DHCP: I'd like to do static DHCP for some of the machines in the household, and I'd like internal nameservice resolution to work. Hence the need to own DNS and DHCP myself rather than leave them to my braindead ADSL router.

  • smtp/imap: need spam filtering and virus checking. Chris has used Exim/SpamAssassin/ClamAV, and it looks really good, but there are LOTS of configuration options. Might have to steal his config wholesale and tweak it.

  • Start doing my snapshotting and backing up onto the alternate disk for resilience. (I don't NEED to do that yet, as all my data is still sitting on the original machines.)

All in all, the Ultra40 is a stonking machine. Solaris with the latest JDS is easily a great desktop machine and it's all been relatively smooth so far.

