Wednesday Jun 13, 2007

Xalan gotcha with OpenSSO on Tomcat on Ubuntu Feisty

I've been meaning to blog about this for a while, but haven't been able to scrape together a few minutes. Anyway, if you've been reading Superpatterns you'll know that I use Tomcat on Ubuntu to run OpenSSO. I wrote a little while ago about some problems with Tomcat in Ubuntu 7.04 'Feisty Fawn' - Tomcat hanging at startup due to issues with catalina.out, and the security policy needing to be updated due to a change in where Tomcat keeps web applications on disk.

Another issue I've seen is the following stack trace when parsing XML:

javax.xml.transform.TransformerFactoryConfigurationError: Provider org.apache.xalan.processor.TransformerFactoryImpl not found
	javax.xml.transform.TransformerFactory.newInstance(TransformerFactory.java:119)
	com.sun.identity.shared.xml.XMLUtils.print(XMLUtils.java:674)
	com.sun.identity.saml.assertion.Assertion.parseAssertionElement(Assertion.java:191)
	com.sun.identity.saml.assertion.Assertion.<init>(Assertion.java:147)
	com.sun.identity.wsfederation.profile.RequestSecurityTokenResponse.<init>(RequestSecurityTokenResponse.java:131)
	com.sun.identity.wsfederation.profile.RequestSecurityTokenResponse.parseXML(RequestSecurityTokenResponse.java:62)
	com.sun.identity.wsfederation.servlet.RPSigninResponse.process(RPSigninResponse.java:93)
	com.sun.identity.wsfederation.servlet.WSFederationServlet.doPost(WSFederationServlet.java:143)
	javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
	[...]

A quick Google turns up this blog entry from Andrew Beacock in the UK. The issue is that Xalan was bundled with JDK 1.4 but, since then, the Apache community has adopted XSLTC as its default processor (and the basis for its XSLT 2.0 work), and the JAXP 1.3 implementation in JDK 1.5 followed suit - so org.apache.xalan.processor.TransformerFactoryImpl is no longer included. I'm running Tomcat on Sun's 1.5.0_11-b03 JVM, hence the missing TransformerFactoryImpl. The bottom line is this: grab Xalan for yourself and put it in your web app's WEB-INF/lib directory.

If you're working with OpenSSO, you can just copy xalan.jar, xercesImpl.jar, xml-apis.jar and serializer.jar from Xalan to opensso/products/extlib, rebuild the OpenSSO WAR and you should be good to go.
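
In concrete terms, that's something like the following - the Xalan directory and OpenSSO workspace paths are just examples from my setup, so adjust them to match yours:

cp ~/xalan-j_2_7_0/xalan.jar \
   ~/xalan-j_2_7_0/xercesImpl.jar \
   ~/xalan-j_2_7_0/xml-apis.jar \
   ~/xalan-j_2_7_0/serializer.jar \
   ~/opensso/products/extlib/

Then rebuild the OpenSSO WAR as usual and redeploy it to Tomcat.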

And, again, before anyone asks "Why aren't you using Glassfish?" - I am; I'm just using Tomcat as well, since a lot of the OpenSSO contributors use it. Their pain is my pain.

Thursday May 03, 2007

Tomcat on Ubuntu Feisty

A while ago, I blogged about running OpenSSO on Tomcat in Ubuntu. I recently upgraded Ubuntu to 7.04 'Feisty Fawn' and, while most things work great, the upgrade seems to have caused some issues with Tomcat...

The first is this bug - when you start Tomcat, it just hangs. Apparently it's to do with /var/lib/tomcat5.5/logs/catalina.out being a named pipe. The workaround that works for me is to add the following line (the cat ... > /dev/null & line) to the start block in /etc/init.d/tomcat5.5:

                $DAEMON -user "$TOMCAT5_USER" -cp "$JSVC_CLASSPATH" \
                    -outfile "$LOGFILE"  -errfile '&1' \
                    -pidfile "$CATALINA_PID" $JAVA_OPTS "$BOOTSTRAP_CLASS"
                cat /var/log/tomcat5.5/catalina.out > /dev/null &
        else
                log_progress_msg "(already running)"
        fi

The second issue is that the Ubuntu Tomcat package seems to have changed where it puts web applications. They were in /usr/share/tomcat5.5/webapps; they are now in /var/lib/tomcat5.5/webapps. This breaks the security policy I blogged about last time - you now need to add the following to /etc/tomcat5.5/policy.d/50user.policy:

grant codeBase "file:${catalina.base}/webapps/openfm/-" {
  permission java.security.AllPermission;
};

(i.e. switch from ${catalina.home} to ${catalina.base})

And before anyone asks "Why aren't you using Glassfish?" - I am; I'm just using Tomcat as well, since a lot of the OpenSSO contributors use it. Their pain is my pain.

Friday Nov 10, 2006

OpenSSO on Tomcat in Ubuntu

The 'single WAR' deployment of OpenSSO allows you to simply deploy a WAR file into a web container such as Glassfish or Tomcat. The first time you hit the OpenSSO URL, a configurator runs, collecting some basic parameters, saving them to configuration files and setting up OpenSSO for use. You can save this configuration anywhere in the file system; the configurator records that location in a file in the home directory of the user that the web container runs as.

Numerous folks are deploying OpenSSO on Tomcat. In a typical 'developer' installation, where you run Tomcat from the command line, all works well - you get a file named something like AMConfig_localhost_opensso_ in your home directory. AMConfig is a constant prefix and _localhost_opensso_ is OpenSSO's deployment location (/localhost/opensso/) with the slashes replaced by underscores. Ubuntu installs Tomcat on 'localhost', and I deployed the OpenSSO WAR file into /opensso, so I get a file called AMConfig_localhost_opensso_ whose content is simply the path to the main configuration data. Your mileage will vary!
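
Just to illustrate the naming rule (assuming it really is a straight slashes-to-underscores substitution - the actual OpenSSO code may do a little more than this):

DEPLOY_URI="/localhost/opensso/"               # host + deployment path
BOOTSTRAP="AMConfig$(echo "$DEPLOY_URI" | tr '/' '_')"
echo "$BOOTSTRAP"                              # AMConfig_localhost_opensso_
cat "$HOME/$BOOTSTRAP"                         # prints the path to the configuration data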

Now - I'm running Ubuntu on my laptop, with the default Ubuntu distribution of Tomcat 5.5. The first time I tried to deploy OpenSSO, it failed - looking at Tomcat's logs, I could see:

localhost_2006-11-03.log:java.security.AccessControlException: access denied (java.util.PropertyPermission user.home read)

Tomcat is running with the Security Manager and is denying access to the user.home property. From previous experience, the quickest way round this (short of completely disabling the security manager) is to grant your web application all rights. I added the following to /etc/tomcat5.5/policy.d/99examples.policy:

grant codeBase "file:${catalina.home}/webapps/opensso/-" {
  permission java.security.AllPermission;
};

You could, of course, specify much more granular permissions, but this gets you going with the minimum fuss.

So - try again. This time, OpenSSO gets a little further, but fails again with:

java.io.FileNotFoundException: /usr/share/tomcat5.5/AMConfig_localhost_opensso_ (Permission denied)

Although OpenSSO can now locate the user's home directory, it can't actually write a file there, since, in this configuration, Tomcat is running as the tomcat5 user, whose home directory (/usr/share/tomcat5.5) is owned by root and is not writable by tomcat5. One solution is to temporarily make that directory writable by all (sudo chmod 777 /usr/share/tomcat5.5), flipping it back after OpenSSO configures itself successfully (sudo chmod 755 /usr/share/tomcat5.5). A more elegant approach, and one which doesn't require you to go back and tidy up, is to do:

sudo touch /usr/share/tomcat5.5/AMConfig_localhost_opensso_
sudo chown tomcat5 /usr/share/tomcat5.5/AMConfig_localhost_opensso_

Now, you just need to ensure that you give the configurator a directory that is writable by tomcat5 and all is well - a working OpenSSO and an interesting excursion through the mechanisms that Tomcat and Ubuntu use to prevent web applications from running arbitrary code.
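
For the record, giving the configurator a writable location can be as simple as this - the directory name is just an example; anything writable by the tomcat5 user will do:

sudo mkdir -p /var/opensso-config
sudo chown tomcat5 /var/opensso-config

Then enter that path when the configurator asks for a configuration directory.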

Thursday Jul 13, 2006

Getting the VPN to work on NetworkManager/Dapper Drake

UPDATE - September 15 - thanks to 'Ed', I have it all working. Please see the comments for the answer...

UPDATE - August 14 - the VPN package linked below doesn't seem to update resolv.conf, so I stopped using it and went back to vpnc from the command line. Please do leave a comment if you're able to get all this working properly!

As I've previously mentioned, I'm running Ubuntu Dapper Drake on my laptop. Everything has been working just dandy since I recovered from my hard disk crash, except for one minor annoyance: the version of NetworkManager in Dapper Drake doesn't do VPN.

I've been using the command-line vpnc to connect, which works OK, except that, when the DHCP lease expires, NetworkManager overwrites the VPN's version of /etc/resolv.conf, so I have to either keep a backup /etc/resolv.conf to copy back, or just restart the VPN.
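
The backup dance is nothing fancy - something like this, with the .vpn suffix being purely my own convention:

sudo cp /etc/resolv.conf /etc/resolv.conf.vpn    # right after vpnc connects
# ...later, when NetworkManager has overwritten it...
sudo cp /etc/resolv.conf.vpn /etc/resolv.conf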

I finally got round to googling for an answer tonight and (on this page) found a VPN package for NetworkManager on Dapper. It seems to work fine. The one niggle was that, after configuring the VPN connection in nm-applet, I had to restart NetworkManager, but that's a one-time thing.

Roll on Edgy Eft!

Monday Jun 05, 2006

Hard Drive Recovery, Ubuntu-Style

CAUTION!!! Hard disk recovery is inherently fraught with peril. Every case is different. The actions I took may not work for you. Read all of the below before trying any individual action.

Thursday lunchtime I was just getting ready for an afternoon hacking on my latest pet project when I noticed that my laptop was running a bit slow. Make that really slow. The hard disk light was constantly on. Hmm - kill a few processes that might have run amok, reading or writing the disk... No improvement.

OK - let's take a look in /var/log - the messages file often has a clue. Whah? This is what I see (x about 1000):

Jun  1 12:14:54 localhost kernel: [4351775.355000] hda: task_in_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
Jun  1 12:14:54 localhost kernel: [4351775.355000] hda: task_in_intr: error=0x01 { AddrMarkNotFound }, LBAsect=54067515, sector=54067515
Jun  1 12:14:54 localhost kernel: [4351775.355000] ide: failed opcode was: unknown

Uh oh. This doesn't look so good. Google tells me that my hard disk probably has bad blocks. Luckily, you install Ubuntu (my choice of desktop OS) from a live CD... I boot from the live CD, run badblocks on /dev/hda and, sure enough, after a while, I have a list of > 100 bad blocks. I won't type the words I was muttering under my breath... This is a family blog.
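
For reference, the scan was nothing more exotic than this (badblocks runs a read-only test by default; the output goes to a file on the healthy external drive, mounted here at /media/usbdisk):

sudo badblocks -v /dev/hda > /media/usbdisk/hda-badblocks.txt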

So - back to Google. I manage to find information on dd_rescue. Briefly, dd_rescue is like plain old *nix dd, except that it acts intelligently when it finds bad sectors. It reads the source with a big block size (fast!) while everything is OK and, when it hits an error, falls back to a small block size (slow) to try to get everything except for the bad block. Sounds like just what I need.

Adding the universe repository to /etc/apt/sources.list lets me apt-get install ddrescue and I'm away, thrilled that the Ubuntu live CD acts pretty much like a real installation. I have (had) three partitions on this 60GB disk - a 10GB NTFS primary partition, pre-installed and used about thrice in the past year, my 10GB Ubuntu root and my 37GB (go figure, the formatting gods stole 3GB from me) Ubuntu home. I run dd_rescue, creating images of each partition on my external USB drive which, fortunately, has ample space. dd_rescue retrieved the first two partitions quickly and without errors. Cool - I have my Ubuntu root - I don't have to redo the past couple of weeks of installation and configuration.
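
The imaging commands looked roughly like this - the logical partition numbers and the USB mount point are guesses at my layout rather than gospel:

sudo dd_rescue /dev/hda1 /media/usbdisk/ntfs.img    # 10GB NTFS
sudo dd_rescue /dev/hda5 /media/usbdisk/root.img    # 10GB Ubuntu root
sudo dd_rescue /dev/hda6 /media/usbdisk/home.img    # 37GB Ubuntu home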

A quick trip to Fry's and I have a replacement HD. 80 GB instead of 60GB. It's useful to have a little wiggle room, and the extra 20GB was only $10+tax. Bargain.

I put the new drive in a spare USB caddy I had lying around, fire up Gnome Partition Editor and partition it to match the source drive (but with a bigger third partition for my Ubuntu home) - a primary 10GB NTFS partition and two logical partitions in the extended partition, 10GB and the rest (some 56GB or so). Google tells me that dd if=/dev/hda of=/dev/sdb bs=446 count=1 transfers the master boot record (MBR) from drive to drive, so my new drive will boot just like the old one. NOTE - this didn't actually work out - see below.

I quickly dd the NTFS and Ubuntu root partitions to the new drive and mount them. The NTFS drive looks fine. But - uh-oh - reiserfs complains about the Ubuntu root. More googling tells me that sometimes, the rounding that happens when you ask for a 10GB partition can vary depending on the drive geometry (or something, don't quote me on that). The safe thing to do is to create a target partition a bit bigger than the source. Now I'm glad I splurged the $10 for that 20GB wiggle room.

I leave NTFS as-is, delete the two other partitions and create two new partitions, this time 15GB and (approx) 50GB. dd the partition image, mount, and... cool! Ubuntu root is clean. Quick reiserfsck to double check. Yahoo - everything looks good. I can even use resize_reiserfs to expand my Ubuntu home to use all of its new partition.
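
Roughly, that sequence was (partition numbers and image names are again just examples; resize_reiserfs with no size argument grows the filesystem to fill its partition):

sudo dd if=/media/usbdisk/root.img of=/dev/sdb5 bs=1M
sudo reiserfsck --check /dev/sdb5
sudo resize_reiserfs /dev/sdb5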

Now for the biggie - my home partition. Luckily I took a full backup on 5/19, about 10 days ago, just before I moved from Suse 10.0 to Ubuntu Dapper, so it's not a total catastrophe if I get zilch back.

I run dd_rescue again and, after a couple of hours, it all but grinds to a halt - still working, but plodding through the disk very slowly indeed. More googling tells me that there is a script called dd_rhelp that wraps around dd_rescue to skip past errors and read more good stuff. The theory is that you want to get the good stuff off the disk asap. Sounds good to me. Off I go with dd_rhelp.

Well, after several more hours, dd_rhelp seems to be getting slower and slower as it figures out what to do next from its log of bad disk chunks. At this point, dd_rhelp had recovered most of the partition, but there were 233 'chunks' totalling 271MB of unrecovered data. Not necessarily all bad, just as-yet-untried. Yet more googling (what did we do before Google?) and, thanks to this lucky find, I find the supreme ultimate, the t'ai chi of disk rescuers, GNU ddrescue (no underscore this time).

GNU ddrescue takes a similar approach to dd_rhelp except that, being a native C++ program rather than a shell script, it is much faster. I was a little apprehensive, I must admit, at the prospect of downloading and compiling a utility from source on a live CD system, but it worked! I had to apt-get install g++ and make, but everything worked just great (apart from installing ddrescue's info page - not to sound churlish, but this is one of my pet peeves with GNU: why weren't man pages good enough - why did they have to go and re-invent that wheel?).

ddrescue's first pass through my drive took half the time that dd_rhelp had taken before I killed it and left 1091 'bad chunks' containing just 37MB of data. Pretty good - about 1/7 as much left to do, each chunk averaging about 34kB, compared to the 1165kB per chunk of dd_rhelp. Man - I wish I'd found that page earlier. Thank you, John Gilmore (John was Sun employee #5, according to his home page).

A second pass of ddrescue 'splits' the previously found bad chunks, reclaiming any readable data and leaving bad sectors sized 4k. Another neat feature of ddrescue is that it keeps a log file, so you can stop it (ctrl-c) at any point, restart it later, and it will pick up where it left off. I decided to stop ddrescue after it had been working for about 24 hours and swap my disks around, so I could boot off my Ubuntu root in the new drive.
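
The invocation is along these lines - device and file names are examples from my setup, and newer versions of GNU ddrescue call the third argument a mapfile rather than a logfile:

sudo ddrescue /dev/hda6 /media/usbdisk/home.img /media/usbdisk/home.log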

So, new drive in the laptop, old drive in the USB caddy - let's boot up. Uh-oh - GRUB seems to be having problems. All I see is the word GRUB scrolling up my screen. Back to the live CD, Google, and I find that the dd bs=446 ... trick I mentioned above is great for backing up your MBR, but useless for transferring an MBR from one drive to another, unless they are otherwise identical. The right command is grub-install --root-directory=/mntpoint /dev/sda, where /mntpoint is wherever you have the root partition currently mounted. I also had to comment out the home partition mount entry in /etc/fstab and create an empty home directory so I would be able to log in to Ubuntu. Don't forget to change ownership on the home directory - otherwise you won't be able to get a regular desktop login.
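
The stand-in home directory is nothing more than this (the username is a placeholder - use your own login):

sudo mkdir -p /home/pat
sudo chown pat:pat /home/pat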

So - reboot and... we're in. I have my Ubuntu root up and running with an empty home directory. That's fine. I restart the ddrescue job and it picks up where it left off, but it's really slow now. I'm not sure whether this is due to moving the HD to a USB caddy or the drive really is just dying, but even after 12 more hours, it's only processed a few kB. There are approx 20MB left to process at this point. I decide that I'll take my chances... All I have to lose is a few documents written over the last few days, and most of them I can get from my IMAP Sent folder.

Again I use dd to copy the rescued image to its new home and run reiserfsck --rebuild-tree to fix up the file system. It complains like crazy, but gives me a consistent reiserfs that I can mount. Surprisingly, things look pretty good, but I know that there are 0x00's all over the data. I edit /etc/fstab to mount it as /home and reboot. I log in successfully, and everything looks cool. Now I delete anything that looks like transient data - it can be rebuilt easily - this includes Firefox's and Beagle's caches. I also copy my Documents and download directories from my ~10 day-old backup, so I know that anything older than 5/19 is clean. Right now, I'm working on my laptop, pretty much where I was on Thursday lunchtime, but with 20GB of additional disk space.

When I'm confident that I have everything worth getting from the old drive, I'll erase it using:

#!/bin/bash
# Overwrite the old drive (/dev/sdb here) with random garbage, seven passes over
for n in `seq 7`
do
    dd if=/dev/urandom of=/dev/sdb bs=1024k conv=notrunc,noerror
done

This little script writes random garbage to the disk seven times over. After that, the disk will be ready to be thrown in the trash, purged of all usable data.

06/07/2006 - UPDATE: More advice from John Gilmore:

When you want to erase your old drive, use GNU shred. It's part of the GNU coreutils package, so it's probably already installed on your LiveCD and your system. It does all the right data security erasure stuff. You can use it on files (on ext2 or ext3 -- probably not on a log-structured data system like reiserfs) or you can use it on an entire partition or an entire disk drive. The script that you gave, sucking gigabytes of data out of /dev/urandom, will be very slow.

So do yourself a favour and use GNU shred instead of /dev/urandom, m'kay?
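
In other words, something like this - /dev/sdb being wherever the old drive shows up, and -n 7 just matching the seven passes of my script above:

sudo shred -v -n 7 /dev/sdb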

Monday May 22, 2006

I see your Breezy Badger and raise you a Dapper Drake

Inspired by Mark Shuttleworth on stage last week at JavaOne, and Eric's recent blog entry (partly) on installing Ubuntu, I spent a few hours last weekend installing Dapper Drake, the latest, greatest (currently beta) Ubuntu release.

There are more tips and tricks on installing Ubuntu on the Ubuntu Wiki than I could ever hope to cover here, so I'll just give my first impressions...

  • Installing from the Live CD - pretty painless. It's at this point that I start to Google for Ubuntu's lack of a root password. Cool - pretty much matches the way I work - never log in as root, but keep root capability at arm's length via sudo.
  • Login and... what's this? An icon in the menu bar up top advising me of updated packages. 370-odd updated packages. OK - I'll grab those. After all, it's the weekend... Plenty of other stuff I can do for a while.
  • Now we have an up-to-date system. Time to install a few apps. Firefox and GAIM are already there, so I just copy in .firefox and .gaim from my backed up Suse home directory. So far so good - I see all my bookmarks and log into all my IM accounts.
  • I prefer Thunderbird to Ubuntu's default Evolution, so I use Applications - Add/Remove to grab it. Cool - it just works. Copy over .thunderbird and... I can't see my accounts or email folders. Hmmm. ls -latr. Oh - .mozilla-thunderbird. OK - I can live with that, a quick mv and I can see my email.
  • Ooh - NetworkManager - grab that, for definite. And... it just works. Cool! No VPN support in the Ubuntu version, though. Building NetworkManager from CVS could be pretty hairy - I know that Ubuntu tweaks the standard build. I'll live with vpnc for now.
  • I love Synergy - the open source keyboard/mouse switcher. At the time I installed Dapper, the Synaptic Package Manager was listing some 1.2.x version, rather than the current 1.3.1, so I built Synergy from source. A quick Google, apt-get install build-essential and a few other bits and pieces, and I'm away - sharing my mouse and keyboard between my home system and my laptop. I just checked, and Synaptic is now reporting 1.3.1, so let's make uninstall and grab the official package... Done.
  • Skype - grab the .deb from the Linux download page, copy in .Skype and we're done.
  • UPDATE: For VMware, Google tells me I need the correct headers for my kernel... sudo apt-get install linux-headers-`uname -r`, then sudo ./vmware-install.pl (see the sketch below). I found the IP addresses I was using for vmnet1 and vmnet8 in my old (backed up) /etc/vmware/ directory. Copy over .vmware for my favourites and the license file and... it works!
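
For completeness, the VMware steps from that last bullet look roughly like this - vmware-distrib is where the VMware tarball typically unpacks, which may differ for your version:

sudo apt-get install linux-headers-$(uname -r)   # headers matching the running kernel
cd vmware-distrib                                # wherever the VMware installer was unpacked
sudo ./vmware-install.pl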

More as I discover interesting stuff... See you later
