Wednesday Sep 26, 2007

Setting up Subversion over SSH on Solaris

Before I forget how I did it, I figured I should probably document some steps on how I set up a Subversion repository on Solaris so that it can be accessed over SSH, using svn+ssh:// URLs.

The Tunneling over SSH section in the Version Control with Subversion book actually does a pretty good write-up of the basics. The main tweak I made for Solaris was to beef up security slightly by creating a Solaris Role on the repository server which can only execute a limited number of Subversion commands. Here's what I did.

First off, I created an RBAC profile called "Source Code Shell" which can run only the svnserve command, and runs it as the source code management user I had previously created, called scm.

cat >> /etc/security/exec_attr
Source Code Shell:suser:cmd:::/opt/svn-1.4.0/bin/svnserve:uid=scm
Source Code Shell:suser:cmd:::/opt/svn/bin/svnserve:uid=scm

cat >> /etc/security/prof_attr
Source Code Shell:::Access Subversion Only:

Ok, that was easy enough. There are probably more appropriate commands to use than appending to these files directly; I should probably look into that.

Next, I need to create a role to assign this new profile to. The user name will be src.

useradd -d /export/home/src -m -c "Source Code User" -s /usr/bin/pfsh -g 100 -u 242 src
passwd -N src

And assign the RBAC Profile to the src user.

usermod -P "Source Code Shell" src

Now, the rest of this is pretty much all described in the "Tunneling over SSH" section mentioned earlier. Essentially, you use the public-key authentication mechanism in SSH to identify the incoming user and automatically start the svnserve command in tunnel mode.

This is accomplished by adding lines to the src user's authorized_keys file for each user who will be accessing the repository. The one thing you need from each user is their public-key file(s), typically id_dsa.pub or id_rsa.pub. The format of the lines in authorized_keys is:

command="/path/to/svnserve -t OPTIONS" KEY-TYPE KEY KEY-COMMENT

The -t option puts svnserve in tunnel mode. There are a bunch of options you can pass to svnserve, but the most common ones are --tunnel-user, to specify the username of the remote user, and -r, to specify a virtual root for the repository. The former allows Subversion to recognize each user by their real username instead of the src user, so things like permissions work. The latter allows you to hide the real path to the repository, which can help shorten URLs and provide some abstraction from the physical location in case you ever want to move it.

In my case, lines in authorized_keys look like:

command="/opt/svn/bin/svnserve -t -r /app/repos/svn --tunnel-user=mock" ssh-dss KEY mock@watt

Not very painful at all. Actually, the more painful part is trying to get NetBeans on Windows to access Subversion over SSH. I'll write that up soon.

Thursday Sep 13, 2007

Tagsoup is Super!

I've never been much of a fan of screen-scraping but I seem to be doing a fair amount in my spare time recently.

For example, I put together an application to run a private NASCAR "fantasy" pool with some friends and family. One of the features I have is a live update mechanism to see where everyone stands during the race.

Only problem, I couldn't find a simple, easy data feed to get the current race information. Ok, so, the next best thing, look at one of the sports sites like the Yahoo NASCAR update and try to extract information out of the HTML.

Well, the data looks relatively well formatted, but as we all know, browsers are pretty lenient about what they accept as HTML - missing close tags and so on. I wanted to use an XML parser, and even XSLT, to extract the data, so I needed a way to fix the HTML before passing it to a parser.

Enter Tagsoup, a SAX-compliant HTML parser that spits out well-formed XML. Ah, that sounds like just the ticket. And even better, the maintainers include a modified version of Saxon - TSaxon - to process XSLT.
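
As an aside, you can run Tagsoup standalone to see what it makes of a page; a minimal sketch, assuming the jar name from the Tagsoup distribution:

java -jar tagsoup.jar page.html > page.xml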

So with TSaxon in hand, it was easy work to convert something like an ESPN Qualifying Grid into SQL that I can load into Derby, with an XSL like:

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:html="http://www.w3.org/1999/xhtml"
  version="1.0">
  
  <xsl:param name="race"/>
  <xsl:param name="season"/>
  <xsl:param name="table"/>
  <xsl:param name="type"/>
  
  <xsl:template match="/">
    <xsl:apply-templates select="//html:table[html:tr[html:td[@colspan='5']]]"/>
  </xsl:template>
  
  <xsl:template match="html:table">
    <xsl:text>delete from </xsl:text>
    <xsl:value-of select="$table"/> 
    <xsl:text> where season = </xsl:text>
    <xsl:value-of select="$season"/>
    <xsl:text> and race = </xsl:text>
    <xsl:value-of select="$race"/>
    <xsl:text>;&#10;</xsl:text>

    <xsl:apply-templates select="html:tr[@class = 'evenrow' or @class='oddrow']"/>

    <xsl:text>commit;&#10;</xsl:text>
  </xsl:template>

  <xsl:template match="html:tr">
    <xsl:variable name="pos" select="html:td[1]"/>
    <xsl:variable name="car" select="normalize-space(html:td[3])"/>
    <xsl:variable name="spd" select="html:td[5]"/>

    <xsl:text>insert into </xsl:text>
    <xsl:value-of select="$table"/>
    <xsl:text> values (</xsl:text>
    <xsl:value-of select="$season"/>
    <xsl:text>, </xsl:text>
    <xsl:value-of select="$race"/>
    <xsl:text>, </xsl:text>
    <xsl:value-of select="$pos"/>
    <xsl:text>, '</xsl:text>
    <xsl:value-of select="$car"/>
    <xsl:text>', '</xsl:text>
    <xsl:value-of select="$spd"/>
    <xsl:text>', </xsl:text>
    <xsl:value-of select="$type"/>
    <xsl:text>);&#10;</xsl:text>
  </xsl:template>

</xsl:stylesheet>

You invoke it by passing it to TSaxon like:

java -jar lib/saxon.jar -H "$1" xsl/grid-to-sql.xsl race=$2 season=$3 table=grid type=$4

Ok, so that's pretty cool by itself. But now with the NFL season just beginning, and another private "fantasy" pool among relatives, I found that I wanted to do a similar "live tracker" to see everyone's points in the pool. This time I wanted to do it a little bit differently.

I wanted to programmatically call Tagsoup to parse a page and pass it through an XSLT. So why not use all the JAXP facilities in Java? It turns out to be pretty easy. This easy:

  // Imports needed for the snippet below (JDOM 1.x, JAXP):
  import java.net.URL;
  import javax.xml.transform.Transformer;
  import javax.xml.transform.TransformerFactory;
  import javax.xml.transform.stream.StreamSource;
  import org.jdom.Document;
  import org.jdom.input.SAXBuilder;
  import org.jdom.output.Format;
  import org.jdom.output.XMLOutputter;
  import org.jdom.transform.JDOMResult;
  import org.jdom.transform.JDOMSource;

  // Create an instance of Tagsoup
  SAXBuilder builder = new SAXBuilder("org.ccil.cowan.tagsoup.Parser");

  // Parse my (HTML) URL into a well-formed document
  Document doc = builder.build(new URL(poolurl));

  JDOMResult result = new JDOMResult();
  JDOMSource source = new JDOMSource(doc);

  // Get a JAXP Factory
  TransformerFactory factory = TransformerFactory.newInstance();

  // Get a Transformer with the XSL loaded.
  StreamSource sheet = new StreamSource(sheetpath);
  Transformer transformer = factory.newTransformer(sheet);

  // Transform the page
  transformer.transform(source, result);

  // Spit out the result.
  XMLOutputter outputter = new XMLOutputter(Format.getPrettyFormat());
  outputter.output(result.getDocument(), out);

The middle section is really the part where JAXP comes into play. But as you can see, it's quite simple.

Oh, and say you want to programmatically select nodes with XPath. That's pretty easy too. Here's an example that gets the title of the page:

  // JDOMXPath comes from the Jaxen library (org.jaxen.jdom.JDOMXPath)
  JDOMXPath titlePath = new JDOMXPath("/h:html/h:head/h:title");
  titlePath.addNamespace("h", "http://www.w3.org/1999/xhtml");
  String title = ((Element) titlePath.selectSingleNode(doc)).getText();
  out.println("Title is " + title);

Voila.

Tuesday Aug 28, 2007

Updated JVM Options List

It's been quite a while, but I finally spent some time updating my List of JVM Options to include everything in Java 6. This is just the first pass at Java 6, so most of the new Java 6 options have no description yet. I still need to go out and search for references for any/all of them.

Hope people find this useful.

Monday Apr 16, 2007

sun.com performance

A couple of months ago, we went through a tuning exercise for our primary web site, www.sun.com. A few execs noticed some strange pauses during page rendering (primarily of the home page), so a few of us started to look closer at the problem.

Before I get into it a bit, I just took a look at the daily reports again for today, and I'm happy to report that performance is still looking pretty good.

Although I'm not currently on the engineering team for the site, I was one of the primary "architects" of the framework that serves the site, and I still have a soft spot for it. I take offense when someone challenges the performance of the site. So I decided to get involved, mainly to help gather some statistics on how the site was performing.

So what did performance look like when we looked into it? Here are a couple of charts showing what it looked like. The first chart shows the average response time to retrieve all the home page components. The second shows each component in detail.


[Charts: Averages and Details]

The charts don't look all that great: very erratic, and pretty poor average response times. Now, the home page has about 70 components totaling about 396K. Making a very rudimentary guess at the theoretical minimum download time of all the components over, say, a 1.5Mbit DSL line, we're looking at about 2 seconds [ ((396 * 8 / 1024) / 1.5) = 2 seconds ]. Now let's compare that against what we see on the average duration chart. Assuming a sequential download of components, even a 100 millisecond response time would take 7 seconds to download all the components [ 100 * 70 / 1000 = 7 ]. Which wasn't far from the truth.
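
If you want to double-check that back-of-the-envelope math, bc reproduces it quickly:

echo 'scale=2; (396 * 8 / 1024) / 1.5' | bc    # ~2.06 seconds of raw transfer
echo '100 * 70 / 1000' | bc                    # 7 seconds at 100ms per component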

One thing to note from our analysis: there is actually some parallelism in retrieving the page components. There is a great Firefox extension called Firebug that will show this visually. So that helps reduce some overall download time.

Eventually we found the culprit. Having narrowed the problem down to something with Sun Java System Web Server, Chris Elving of Web Server Engineering helped us resolve it. It ended up being Reverse Proxy Plugin Bug #6435723, which, fortunately, had a fix available.

So, how are we looking today? Well, here are the charts as of Friday. Not much fluctuation in response times throughout the day.


[Charts: Averages and Details]

I'm sure you notice the occasional blips of response times in the over-1.5-second range. This is due to occasional cache flushes and page re-rendering when content changes. Our rendering framework makes extensive use of XML and J2EE technologies to dynamically render the web site. In light of that, I'd say the platform is doing an incredible job at serving content.

For comparison... here are similar charts for one of our competitors, also from Friday. Their performance hasn't changed much over the last two months. I'm not gonna tell them.


[Charts: Averages and Details]

Thursday Mar 22, 2007

And now Chicken of the VNC tunneled through SSH on OS X

After I posted yesterday about the VNC Over SSH Startup Script I use with TightVNC on Solaris, I started thinking about how I could do the same thing with my MacBook Pro and Chicken of the VNC.

And I have just completed the first version. It's amazing what you can fumble through with an internet search engine. So here we go...

Password Please -- One thing you don't get with CotVNC is the vncpasswd command. So we need a copy. I just did it the quick and dirty way by grabbing the TightVNC source and compiling only the vncpasswd command:

wget http://umn.dl.sourceforge.net/sourceforge/vnc-tight/tightvnc-1.2.9_unixsrc.tar.bz2
bzip2 -cd tightvnc-1.2.9_unixsrc.tar.bz2 | tar xvf -
cd vnc_unixsrc
gcc -I include -I libvncauth -o vncpasswd/vncpasswd libvncauth/*.c vncpasswd/*.c

Now squirrel away the vncpasswd/vncpasswd command in your favorite bin/ directory.
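
In other words, something like this - ~/bin is just my choice; anywhere in your PATH will do:

mkdir -p ~/bin
cp vncpasswd/vncpasswd ~/bin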

Passphrase Please -- If you look back at my previous post, I make use of ssh-agent and ssh-add to allow tunneling without entering a password. The only problem on OS X is that there is no gnome-ssh-askpass command available, so I had to hack one up myself. It's an odd combination of Bourne shell and AppleScript, but it appears to work. Save a copy as macos-askpass.

#! /bin/sh

#
# An SSH_ASKPASS command for MacOS X
#
# Author: Joseph Mocker, Sun Microsystems

# 
# To use this script:
#     setenv SSH_ASKPASS "macos-askpass"
#     setenv DISPLAY ":0"
#

TITLE=${MACOS_ASKPASS_TITLE:-"SSH"}

DIALOG="display dialog \\"$@\\" default answer \\"\\" with title \\"$TITLE\\""
DIALOG="$DIALOG with icon caution with hidden answer"

result=`osascript -e 'tell application "Finder"' -e "activate"  \\
 -e "$DIALOG" -e 'end tell'`

if [ "$result" = "" ]; then
  exit 1
else
  echo "$result" | sed -e 's/\^text returned://' -e 's/, button returned:.\*$//'
  exit 0
fi

Now let's put it all together -- I'm skipping over a bunch of stuff, assuming you have read my previous entry. But here's the final script, which manages asking for all the information via dialogs, creating the SSH tunnel, and starting up CotVNC with the appropriate arguments.

One thing to note here: make sure the macos-askpass and vncpasswd commands are in the PATH. I would suggest you modify the PATH assignment in the following script to ensure this.

Oh, also, change COTVNC_HOME to the location of your Chicken of the VNC application.

#! /bin/sh

#
# Script to put a GUI front end around Chicken of the VNC over SSH for MacOS X
#
# Author: Joseph Mocker, Sun Microsystems.

#
# This script works only with Chicken of the VNC but needs the 
# "vncpasswd" command from TightVNC.
#

COTVNC_HOME="/Applications/DarwinPorts/Chicken of the VNC.app"
COTVNC_CMD="$COTVNC_HOME/Contents/MacOS/Chicken of the VNC"

PATH=$HOME/bin:$PATH
export PATH

# Define a general Prompt routine

prompt () {
  TITLE="$PROMPT_TITLE"

  DIALOG="display dialog \\"$@\\" default answer \\"\\" with title \\"$TITLE\\""
  DIALOG="$DIALOG with icon caution"
  
  result=`osascript -e 'tell application "Finder"' -e "activate"  \\
   -e "$DIALOG" -e 'end tell'`
  
  if [ "$result" = "" ]; then
    return 1
  else
    echo "$result" | sed -e 's/\^text returned://' -e 's/, button returned:.\*$//'
    return 0
  fi
}

# Prompt for VNC Server 

PROMPT_TITLE="VNC Connection"
export PROMPT_TITLE
server=`prompt "Connect to server (host:port)"`

if [ "x$server" = "x" ]; then
   exit 0
fi

# Prompt for VNC Password

MACOS_ASKPASS_TITLE="VNC Password"
export MACOS_ASKPASS_TITLE
passwd=`macos-askpass "Password for server ${server}:"`

host=`echo $server | cut -d: -f1`
port=`echo $server | cut -d: -f2`
port=`expr $port + 5900`

touch /tmp/tmpvnc.$$
chmod 600 /tmp/tmpvnc.$$
echo $passwd | vncpasswd -f > /tmp/tmpvnc.$$

# Start up a "private" SSH agent

eval `ssh-agent -s`

# Register SSH Identities with the agent

DISPLAY=:0
export DISPLAY
SSH_ASKPASS=macos-askpass
export SSH_ASKPASS
unset MACOS_ASKPASS_TITLE

ssh-add < /dev/null

# Now lets find an open port for tunnelling

tunnel=41235
while [ "`telnet localhost $tunnel < /dev/null 2>&1 |grep refused`" == "" ]; do
  tunnel=`expr $tunnel + 1`
done

# Start up an SSH with a Local tunnel.

ssh -L $tunnel:$host:$port $host "sleep 60" &

# Wait for the tunnel to activate

while [ "`telnet localhost $tunnel < /dev/null 2>&1 |grep refused`" != "" ]; do
  sleep 1
done

"$COTVNC_CMD" localhost:$tunnel --PasswordFile /tmp/tmpvnc.$$ &

# Give it a little bit to make the connection

sleep 10

# Kill the "private" SSH Agent

eval `ssh-agent -k`

# Cleanup

rm /tmp/tmpvnc.$$

So you can now run the script in a Terminal window and have it do all the wonderful magic to burrow VNC through SSH. But that's not really in the point and click nature of MacOS so...

One tiny invoker -- What would really be the icing is a little AppleScript to invoke this whole thing from, say, the Scripts menu - you do have the Scripts menu enabled, don't you? Well then, here you go. I call mine VNC Over SSH and shoved it into ~/Library/Scripts:

do shell script "$HOME/bin/macos-vncv > /tmp/vncv.$$ 2>&1 &"

Ahh... I don't know about you, but I feel a lot better about using VNC on the Mac now.

Wednesday Mar 21, 2007

A startup GUI for running TightVNC over SSH

Here's a little utility I put together to make it a little easier to start up TightVNC over SSH. Although TightVNC supports SSH itself, I wanted to be able to start it up without always needing a terminal window. So I got to thinking about how I could use zenity with some scripts to do this, and here's what I came up with.

Essentially, what this utility does is use the ssh-agent utility to register your SSH identities - using ssh-add - so that you can start up an SSH session without having to enter a password. The key piece to making this whole thing work as a GUI is the gnome-ssh-askpass utility included with OpenSSH.

First off, set some passphrases -- The first thing I would suggest is that you set a passphrase on all your SSH identities; otherwise someone could grab them and gain entry to your account in various ways. The easiest way to do this is with ssh-keygen:

ssh-keygen -p -f ~/.ssh/id_dsa
ssh-keygen -p -f ~/.ssh/id_rsa
ssh-keygen -p -f ~/.ssh/identity

Whew. I feel more secure already.

Up next, configure some authorized keys -- Ok, so now that you have some passphrases set, you need to tell SSH that these keys are authorized for connections. The key here is that you want to do this on the system you are running the VNC server on. Grab the id_dsa.pub and id_rsa.pub files and add them to the authorized_keys file on the target:

source% scp ~/.ssh/id_rsa.pub target:.
source% scp ~/.ssh/id_dsa.pub target:.

target% cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
target% cat ~/id_dsa.pub >> ~/.ssh/authorized_keys

This should be enough to enable you to ssh to the target machine without a password; it will, however, require you to enter a passphrase instead. This is good.

Configuring a password popup -- One nice little feature of ssh-add is that it can defer to an optional (GUI) utility to ask for passphrases. In fact, if you look in the contrib/ folder of the OpenSSH source, you will see a couple of utilities for GNOME, the most relevant one nowadays being gnome-ssh-askpass2.c.

You can get ssh-add to invoke this by setting the environment variable SSH_ASKPASS to the path of the utility and invoking "ssh-add < /dev/null".
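
For example, something like the following - the path to the askpass program is an assumption, so use wherever yours landed. Note that ssh-add only defers to SSH_ASKPASS when it has no terminal to ask from, hence the redirect from /dev/null (and DISPLAY must be set):

SSH_ASKPASS=/usr/lib/ssh/gnome-ssh-askpass
export SSH_ASKPASS
ssh-add < /dev/null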

Now, this is all well and good, but I wanted to generalize it a little so I can use it to ask for passwords in other contexts - first and foremost, TightVNC requires a password. Also, the gnome-ssh-askpass utility spits out some verbiage specific to OpenSSH, which I wanted to remove. So after a quick hack at the source, gnome-ssh-askpass became gnome-askpass. Here's the diff, which you can use with patch to apply my mods.

33,35c33,35
<  * "GNOME_SSH_ASKPASS_GRAB_SERVER=true" then gnome-ssh-askpass will grab
<  * the X server. If you set "GNOME_SSH_ASKPASS_GRAB_POINTER=true", then the
<  * pointer will be grabbed too. These may have some benefit to security if
---
>  * "GNOME_ASKPASS_GRAB_SERVER=true" then gnome-ssh-askpass will grab
>  * the X server. If you set "GNOME_ASKPASS_GRAB_POINTER=true", then the
>  * pointer will be grabbed too. These may have some benefit to security if
88a89
>         char *title;
95,96c96,97
<       grab_server = (getenv("GNOME_SSH_ASKPASS_GRAB_SERVER") != NULL);
<       grab_pointer = (getenv("GNOME_SSH_ASKPASS_GRAB_POINTER") != NULL);
---
>       grab_server = (getenv("GNOME_ASKPASS_GRAB_SERVER") != NULL);
>       grab_pointer = (getenv("GNOME_ASKPASS_GRAB_POINTER") != NULL);
111,112c112,115
<
<       gtk_window_set_title(GTK_WINDOW(dialog), "OpenSSH");
---
>
>       title = getenv("GNOME_ASKPASS_TITLE");
>       if (title == NULL) title = "OpenSSH";
>       gtk_window_set_title(GTK_WINDOW(dialog), title);

With these mods, you can set a GNOME_ASKPASS_TITLE environment variable to change the title.

Finally, putting it all together -- Are you still with me? Here is the final script that I use to fire up TightVNC. When you run it, it prompts for three things:

  1. First, it asks which VNC server (host:port) to connect to
  2. Next, it asks for the VNC server password
  3. Finally, it prompts you to enter your SSH identity passphrase(s)

And now, without further ado...

#! /bin/sh

#
# Script to put a GUI front end around TightVNC over SSH
#
# Author: Joseph Mocker, Sun Microsystems.

#
# This script works only with TightVNC
#


# Prompt for VNC Server 

server=`zenity --entry --title="VNC Server" --text="Connect to server (host:port):"`

if [ "x$server" = "x" ]; then
   exit 0
fi

# Prompt for VNC Password

GNOME_ASKPASS_TITLE="VNC Server"
export GNOME_ASKPASS_TITLE

passwd=`gnome-askpass "Password for server ${server}:"`

host=`echo $server | cut -d: -f1`
port=`echo $server | cut -d: -f2`

echo $passwd | vncpasswd -f > /tmp/tmpvnc.$$

# Start up a "private" SSH agent

eval `ssh-agent -s`

# Register SSH Identities with the agent

SSH_ASKPASS=gnome-askpass
export SSH_ASKPASS
unset GNOME_ASKPASS_TITLE

ssh-add < /dev/null

vncviewer -passwd /tmp/tmpvnc.$$ -encodings "hextile zlib raw" -via $host localhost:$port &

# Give it a little bit to make the connection

sleep 10

# Kill the "private" SSH Agent

eval `ssh-agent -k`

# Cleanup

rm /tmp/tmpvnc.$$

Wednesday Mar 14, 2007

Compiling UW imapd with SSL on Solaris 10

I've been running IMAP over SSL for a while on Solaris, but until recently I used STunnel to provide the SSL support in front of a plain IMAP daemon. I've known for a while that you could compile SSL into imapd, but never really looked into it until Rama figured out the magic certificate generation piece.

But what Rama did was just install the Sunfreeware version of imapd. I have a love/hate relationship with those types of distributions, so I decided to look at compiling it myself. Heck, Solaris includes OpenSSL, so it should be easy.

Well, actually, I couldn't get it to work with the version of OpenSSL that ships with Solaris. Looking at syslog, I'd see messages like:

Mar 14 10:23:24 watt imapd[5834]: [ID 853321 mail.error] SSL error status: error:140D308A:SSL routines:TLS1_SETUP_KEY_BLOCK:cipher or hash unavailable

And looking at the imapd binary, I saw a missing libcrypto_extra. Searching the net, I found a bunch of people talking about it. It appears this is no longer needed with Solaris 10, but others say you need to install the SUNWcry package. Well, I must be a loser, because I could not find enough info to make it work.

So I decided to just compile up a fresh copy of OpenSSL to use to compile imapd. So here's what I did.

Compiling OpenSSL -- It's pretty trivial to do in this day and age; however, my first attempt compiled it 64-bit, and imapd had issues with that. There are a few extra configuration parameters to force it to 32-bit. Here's the Configure line:

./Configure --prefix=/opt/openssl-0.9.8e 386 shared solaris-x86-gcc

After that, compile and install:

gmake
gmake install

Compiling imapd -- The instructions in docs/SSLBUILD cover the basics, but there were a few additional changes I needed to make. The main change was to make sure imapd was built with my OpenSSL instead of the Solaris version. All these changes were in src/osdep/unix/Makefile.

First, I set the SSLDIR and SSLCERTS variables to where I wanted them:

SSLDIR=/opt/openssl
SSLCERTS=/etc/sfw/openssl/certs

Next, I forced it to use the static version of libcrypto.a by changing SSLCRYPTO:

SSLCRYPTO=$(SSLLIB)/libcrypto.a

Finally, I needed to force it to use my static version of libssl.a:

SSLLDFLAGS= -L$(SSLLIB) $(SSLLIB)/libssl.a $(SSLCRYPTO) $(SSLRSA)

After that, simply compile it up and install it wherever you want:

gmake gso
mkdir /opt/bin
cp imapd/imapd /opt/bin

Configuring the imapd certificate -- Thanks to Rama for the magic OpenSSL command. All you really do is create a PEM certificate called imapd.pem in the OpenSSL certs folder:

cd /etc/sfw/openssl/certs
openssl req -new -x509 -nodes -out imapd.pem -keyout imapd.pem -days 3650

Starting imapd from inetd -- Ok, with Solaris 10 this is done through SMF, but inetd has a conversion utility for this. I put the following line in /etc/inetd.conf:

imaps stream    tcp     nowait root     /opt/bin/imapd  imapd

Then I added a line to /etc/services:

imaps           1143/tcp        imap2           # Internet Mail Access Protocol v2

Then just run inetconv per the instructions in inetd.conf, and Bob's your uncle.
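
For reference, the conversion amounts to something like this - the exact service name inetconv generates is my guess, so check the svcs output on your system:

inetconv
svcs | grep imap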

Tuesday Mar 13, 2007

One of my favorite Firefox extensions

I use several computers on a daily basis. I've done this for quite a while. At work I have my workstation, at home I have a PC and lots of times I just use my laptop. The problem is I like to try to keep my desktop environment in sync across all those machines as much as possible.

With Firefox, for example, I want all my bookmarks up to date everywhere. Fortunately for Firefox, I found an extension quite some time ago that helps do this.

It's currently called Bookmark Sync and Sort, but previously it was called Bookmark Synchronizer.

What this extension allows you to do is store your bookmarks on a server somewhere and load them into multiple Firefox instances. It does some cool things, like allowing you to merge the bookmarks on your server with any changes you've made locally. And it supports multiple protocols for saving and loading the bookmarks, including FTP and WebDAV.

What I did was set up Sun Java System Web Server 7.0 on a server I can access from anywhere, and configured a WebDAV collection for Bookmark Sync and Sort. What about security? A couple of things: my SJSWS instance is configured for SSL with a self-signed certificate, and I have restricted the WebDAV collection to authenticated users. Bookmark Sync and Sort handles all this great.

So how does one configure WebDAV with SJSWS 7.0? It's actually not that difficult. A good reference is Meena Vyas' SJSWS WebDAV entry.

Specifically, here's what I did. First, enable WebDAV with wadm:

wadm> enable-webdav --config=<server-instance>

Then create a WebDAV collection:

wadm> create-dav-collection --config=<server-instance> --vs=<virtual-server> --uri=/davx/marks --source-uri=/marks

This creates a WebDAV collection whose content lives at /marks but which you access via DAV at /davx/marks. That's a little opposite of the usual WebDAV setup, but we want to make sure the web server doesn't mess with the bookmarks file when it serves it.
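
A quick way to sanity-check the collection is a WebDAV PUT with curl; a sketch, with a hypothetical host and user, and -k to accept the self-signed certificate:

curl -k -u mock -T bookmarks.html https://server.example.com/davx/marks/bookmarks.html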

Now, that is enough to set up an open WebDAV collection, but it would be good to control access to it. So, from the Admin Console, set up access control: go to the Access Control tab of Configurations -> server-instance.

Then select the Users tab and create or edit the users and passwords.

Next, go to the Access Control Lists (ACL) tab. Edit the default ACL and add an entry with the settings:

  • Access: Allow
  • Users & Groups: All in the authentication database
  • Rights: All Access Rights

Next, edit the dav-src ACL and add a similar entry with the settings:

  • Access: Allow
  • Users & Groups: All in the authentication database
  • Rights: All Access Rights

Now, deploy the configuration and you are done.

Thursday Nov 23, 2006

Force SSL Web Server Hack

I've finally gotten around to upgrading my personal website from Sun Java System Web Server 6.1 to 7.0, as well as refreshing JSPWiki.

As I was doing the migration, one of the things I ran across was a method I have for forcing URLs to the SSL/HTTPS port of the web server. For example, I have a WebMail application installed that I occasionally use. The web server is configured to serve both plain HTTP and SSL HTTP requests, but in this case I don't want the WebMail application to be accessible over plain HTTP. How do I force SJSWS to redirect any plain HTTP requests to SSL? It's really pretty easy.

This works for both 6.1 and 7.0. All you really do is edit obj.conf (or, with 7.0, it might be node-obj.conf) and add a Client tag to the beginning of the "default" Object, of the form:

<Client match="all" security="false">
NameTrans fn="redirect" from="/webmail" url-prefix="https://server.org/webmail"
NameTrans fn="redirect" from="/webmail/\*" url-prefix="https://server.org/webmail/"
</Client>

What this is saying is: for any request that comes in over plain HTTP (security="false"), evaluate the included NameTrans directives. In this case, any request starting with /webmail will be redirected to the SSL port.
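
You can verify the behavior with a quick request against the plain HTTP port; a sketch, using the hypothetical server.org host from above:

curl -i http://server.org/webmail
# expect a 3xx status with Location: https://server.org/webmail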

The Client tag can perform much more complex evaluations. For example, one handy thing is to limit the evaluation of the inner directives to a specific virtual host. To do that, simply add a urlhost attribute:

<Client match="all" security="false" urlhost="server.org">
NameTrans fn="redirect" from="/webmail" url-prefix="https://server.org/webmail"
NameTrans fn="redirect" from="/webmail/\*" url-prefix="https://server.org/webmail/"
</Client>

Monday Sep 25, 2006

Deploying JSPWiki on Sun Java System Web Server 7

Sriram and Marina's article on Deploying Wikis to Sun Java System Web Server 7.0, Part 1: JSPWiki reminded me that I wanted to write a few things about my experiences with deploying JSPWiki on SJSWS 7.0

The article does a good job at explaining the basics of installation and deployment. SJSWS 7 really makes it pretty easy to deploy web applications.

The tougher part is configuring the JAAS components in JSPWiki to work with SJSWS. Actually, once you get your bearings with SJSWS, it's not that difficult at all. There are two JAAS pieces you need to deal with: the JAAS login configuration and the Java 2 security policy. Here is how to do it.

JAAS login configuration

SJSWS comes with a login configuration already installed. The trick is to find the configuration file and add the JSPWiki configuration to it. The file is called login.conf and is located in the config/ folder of the web server instance. The contents look something like:

fileRealm {
	com.iplanet.ias.security.auth.login.FileLoginModule required;
};

ldapRealm {
	com.iplanet.ias.security.auth.login.LDAPLoginModule required;
};

solarisRealm {
	com.iplanet.ias.security.auth.login.SolarisLoginModule required;
};

nativeRealm {
	com.iplanet.ias.security.auth.login.NativeLoginModule required;
};

All that needs to be done is to add the JSPWiki configuration to the end of the file. The JSPWiki configuration typically looks like:

JSPWiki-container {
  com.ecyrd.jspwiki.auth.login.WebContainerLoginModule    SUFFICIENT;
  com.ecyrd.jspwiki.auth.login.CookieAssertionLoginModule SUFFICIENT;
  com.ecyrd.jspwiki.auth.login.AnonymousLoginModule       SUFFICIENT;
};

JSPWiki-custom {
  com.ecyrd.jspwiki.auth.login.UserDatabaseLoginModule    REQUIRED;
};

Just add those lines to the end of login.conf and you are done.

Java 2 security policy

JSPWiki uses a standard Java 2 security policy to control access to one or more JSPWiki instances within a servlet container. The policy file that comes with JSPWiki is WEB-INF/jspwiki.policy.

Now, in my observation, JSPWiki will find this file without you doing anything, so this part is somewhat optional. However, without any explicit configuration you will generally see these error messages:

WARN com.ecyrd.jspwiki.auth.PolicyLoader - You have set your 'java.security.policy' to point at '/opt/sun/SUNwbsvr7/https-node/config/file:/opt/sun/SUNwbsvr7/https-node/web-app/node/wiki/WEB-INF/jspwiki.policy', but that file does not seem to exist. I'll continue anyway, since this may be something specific to your servlet container. Just consider yourself warned.
WARN com.ecyrd.jspwiki.auth.PolicyLoader - I could not locate the JSPWiki keystore ('jspwiki.jks') in the same directory as your jspwiki.policy file. On many servlet containers, such as Tomcat, this needs to be done. If you keep having access right permissions, please try copying your WEB-INF/jspwiki.jks to /opt/sun/SUNwbsvr7/https-node/config/file:/opt/sun/SUNwbsvr7/https-node/web-app/node/wiki/WEB-INF

The bit about "consider yourself warned" is a little dubious. And the fix is pretty easy, so why not just do it.

What needs to be done is to explicitly set the java.security.policy property for the container to the location of the jspwiki.policy file. This is easily done in the administration console: simply log in, drill down into Configurations -> node -> Java, then click on JVM Settings.

The first section you will see on this page is a JVM Options table, which includes, among other things, a reference to the login.conf file we tweaked earlier. Simply click New to add another option, and enter a value like:

-Djava.security.policy=/opt/sun/SUNwbsvr7/https-node/web-app/node/wiki/WEB-INF/jspwiki.policy

Then click Save, deploy your configuration as described in the Sun Developer Network article, and you are golden.

Thursday Jul 13, 2006

Upgrading Bit Torrent

The week before the break, Alon R. of Azureus Inc. came in to help us redo our Bit Torrent servers. It all went pretty well; working together, it took us only a couple of days to implement a more simplified seeder and tracker network at Sun.

The main goal we were trying to accomplish was to simplify the authoring/management of torrents. I described the original implementation a while ago. The authoring piece was a mess, because it required remotely displaying Azureus from a remote data center. All in all, the process was perhaps 15 steps - not something that could be handed off to a support team very easily.

The other major requirements of the environment included:

  • High Availability - both the seeders and more importantly the tracker needed to run on multiple machines and fail over automatically.
  • Geographical Location Screening - (aka rDNS) we needed to make sure that people from embargoed countries were denied access.
  • Simple remote administration - preferably via the web

All in all, it went together pretty quickly. I threw a couple of curves at Alon - most notably, being able to fail the tracker over to a secondary box - but he was able to get patches from the developers overnight to allow for it.

We did encounter a strange problem when we attempted to fire up Azureus on the first server: we received a ton of exceptions with stack traces of the form:

java.io.IOException: Invalid argument
        at sun.nio.ch.DevPollArrayWrapper.poll0(Native Method)
        at sun.nio.ch.DevPollArrayWrapper.poll(DevPollArrayWrapper.java:158)
        at sun.nio.ch.DevPollSelectorImpl.doSelect(DevPollSelectorImpl.java:68)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)

We were both perplexed. I knew I had run Azureus on Solaris 10 before and not seen the problem, and it was a new one to Alon as well. For lack of anything else to try, I decided to look on bugs.sun.com. Lo and behold, I found the problem: bug 6322825, Selector fails with invalid argument on Solaris 10. And the bug had a workaround: just set the maximum number of file descriptors for the process over 8K. Once I did that, it fired up like a charm.
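
The workaround amounts to raising the descriptor limit in the shell that launches Azureus; a sketch - the exact limit and startup command are placeholders:

ulimit -n 9000
./azureus &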

After that, the only other major task was to hook into our rDNS provider. From my description, Alon thought we could write a plugin, and again talked to the developers, who provided us a shell of a plugin. All I needed to do was add the 10-20 lines of code that actually did the screening query with the provider.

In the end, the network consists of three servers, a Master Tracker and two Seeders.

The Master Tracker is configured to watch a particular folder for new and removed files. When it finds changes, it automatically creates torrents for the files and starts seeding them. It also copies the .torrent files to a separate folder within the web servers docroot for customers to access.

The two Seeders watch an RSS feed provided by the Master Tracker; the feed contains a list of all the torrents the Master is seeding. When the Seeders detect a change to the RSS feed, they act appropriately: if it's a new torrent, they download it from the Master and start seeding it; if a torrent has been removed, they remove it from themselves as well.

Now, authoring a torrent is as easy as rsync'ing, scp'ing or sftp'ing a file over to the Master Tracker, and voila.

Slightly painful migration to ZFS

Fresh back from the break, I decided to give an upgrade to Solaris 10 U2 a go and migrate my data to ZFS (now included with U2).

The ZFS migration was slightly painful, so I figured I'd post my experience in case anyone else might want to attempt it.

Actually migrating the data itself to a ZFS partition was simple; however, I wanted to mirror the data as I had been doing with UFS/SVM, and that's what caused the problem. It is apparently logged as a known bug:

6355416 zpool scrubbing consumes all memory, system hung

Even though the system was unresponsive, I decided to let it do its thing overnight, and it did eventually finish resyncing/mirroring. Now everything is fine.

So, for what it's worth, here it is. The system involved was a Sun Blade 2000 with 3G of RAM.

The goal was to combine three separate UFS partitions (/app, /work, /extra) into a single ZFS pool, which would then host all those filesystems again. The filesystems would use portions of the pool as need be, precluding the need to set arbitrary sizes as has been necessary with UFS.

Also, the partitions were mirrored, and I wanted to continue mirroring with ZFS.

The three partitions are

/dev/md/dsk/d4       90030867 30328334 58802225    35%    /extra
/dev/md/dsk/d5       24795240 23208057  1339231    95%    /app
/dev/md/dsk/d6       20646121 16648727  3790933    82%    /work

(Notice that I was bumping up against partition limits on a couple of the filesystems. The primary reason I wanted to convert to ZFS was to have one big pool of space that could be used by whatever needed it.)

With the following SVM meta partitions assigned

d4 had submirrors d40 (c1t1d0s4), d41 (c1t2d0s4)
d5 had submirrors d50 (c1t1d0s5), d51 (c1t2d0s5)
d6 had submirrors d60 (c1t1d0s6), d61 (c1t2d0s6)

The method I went through was

  • break the mirrors on one disk
  • combine the partitions
  • create a ZFS pool from the combined partition
  • migrate the UFS/SVM data to the ZFS pool
  • delete the remaining UFS/SVM partitions and combine them
  • attach the combined partition as the second side of the mirror

Here's the procedure.

Break the mirrors on one disk

metadetach  d4 d41
metaclear d41
metadetach  d5 d51
metaclear d51
metadetach d6 d61
metaclear d61

Remove the meta db from the disk

metadb -d /dev/dsk/c1t2d0s7

Combine the disk partitions

I used the partition commands in the format utility to change the c1t2d0 disk.

Create a ZFS pool called storage and filesystems within it

zpool create storage c1t2d0s4
zfs create storage/extra
zfs create storage/work
zfs create storage/app

Migrate the UFS/SVM data to the ZFS filesystems

cd /work
find . -depth -print | cpio -pdmv /storage/work
cd /app
find . -depth -print | cpio -pdmv /storage/app
cd /extra
find . -depth -print | cpio -pdmv /storage/extra

Unmount the UFS/SVM partitions

unshareall
umount /app
umount /work
umount /extra

Remount the ZFS partitions where I expect them

zfs set mountpoint=/work storage/work
zfs set mountpoint=/extra storage/extra
zfs set mountpoint=/app storage/app
shareall

Remove the remaining SVM meta partitions

metaclear d4 d40
metaclear d5 d50
metaclear d6 d60

Remove the meta db from the disk

metadb -d /dev/dsk/c1t1d0s7

Combine the disk partitions

I used the partition commands in the format utility to change the c1t1d0 disk.

Finally, add the combined partition as a ZFS mirror

This was the painful step. ZFS decided to take all available CPU and memory in the system, and the system became unresponsive. Apparently ZFS is a little too aggressive with its resyncing (they call it scrubbing/resilvering).

I found bug 6355416 which appears to describe the issue.

Before issuing this last command, I would recommend booting the system into single-user mode and killing any processes you don't need.

zpool attach storage c1t2d0s4 c1t1d0s4

The system will probably become unresponsive within a few minutes. If so, just walk away and let it do its thing. You can use the zpool status command to check the progress of the resilver.
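
For reference, checking on the resilver looks like this, using the pool created in the steps above:

zpool status storage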

Friday May 26, 2006

Remote Monitoring hack for JMX

Quite some time ago, when a team of us worked on rewriting the code that ran the Java Developer Connection, a co-worker came up with the idea of a simple instrumentation interface. It was almost a precursor to JMX. I'm sure other groups inside and outside of Sun had done the same thing. The nice thing about the framework we developed was that it was extremely simple - 'course, this was around the time of JDK 1.2, so keeping the code simple was a good thing.

The instrumentation framework was invaluable for us as we worked through code iterations and daily support on the JDC. It provided a multi-level view of all the components of the system, and helped us monitor things such as how many sessions and users were logged into the JDC, as well as resolve problems such as the culprit of a JDBC connection pool leak.

Since then, I have always been keen on pushing fellow engineers to instrument their applications. Of course, nowadays I've been pushing JMX as the preferred instrumentation solution. It provides many of the same features that our home-grown solution provided. And what's better is that because it's the blessed monitoring and management standard for Java, more and more applications are being instrumented with JMX - even the newer JDKs themselves.

The only problem that had remained unresolved for us was remote monitoring of JMX-enabled applications. When I first looked into it a year or two ago, the picture was sort of dim. Although JMX does have remote monitoring capabilities, they are based on RMI, which was unfortunate.

It's unfortunate because RMI is somewhat of a dynamic protocol, similar to SunRPC, where the ports on which services reside are dynamically allocated at runtime by the RMI Registry. This poses a problem when trying to connect to a remote data center with umpteen firewalls to get through. Security didn't appear to be of much concern to the original RMI developers.

So I decided to abandon the use of RMI back then, and settled on a plain old HTTP-based console embedded as part of the application. That sounds like a lot; however, looking around at JMX implementations, I found that MX4J had this built into its framework. In fact, the HTTP interface was quite reasonable. And that's what we went with.

Recently, I took another look at the whole problem of remote management. After some poking around, I found a method of using RMI with JMX in a more "secure" way: configuring RMI and JMX to use well-defined ports for communication, which makes me happy, and makes my Operations people (who manage the firewalls) happy as well.

So here's the deal. There are really only two ports involved: the RMI Registry, which is already on a well-known port, typically 1099, and the JMX RMI Connector, which is the problem.

The solution lies in the fact that when creating a JMXServiceURL to define the connection parameters, you can, in fact, specify a port number for the service. This is not at all clear from the documentation. Here's an example:

String svc = 
  "service:jmx:rmi://apphost:9994/jndi/rmi://apphost:1099/connector";

JMXServiceURL url = new JMXServiceURL(svc);
RMIConnectorServer rmiServer = new RMIConnectorServer(url, null, mbeanServer);
rmiServer.start();

The key is really the first two lines. What it says is: the RMI Registry is on "apphost:1099", and we register a service called "connector" which is located at "apphost:9994". Now, all I really have to do is tell my Operations folks to open the firewall to ports 1099 and 9994. After that's done, remotely monitoring the application is as simple as running the jconsole command:

jconsole service:jmx:rmi://apphost:9994/jndi/rmi://apphost:1099/connector

How cool is that? And all of a sudden, this becomes just the tip of the iceberg. If I don't want to use the standalone RMI Registry, or I want to run an embedded registry within the application, then all I have to do is instantiate one with the desired port number:

rmiRegistry = LocateRegistry.createRegistry(4099);

One slightly more complete final example to wrap it all up. The nice thing about JMX is that you can start out simple - for example, at the very least, just enabling JMX to monitor what the JVM is doing. That can be accomplished with just a couple more lines of code. The key is getting a handle to the platform MBeanServer:

import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import javax.management.MBeanServer;
import javax.management.remote.JMXServiceURL;
import javax.management.remote.rmi.RMIConnectorServer;

Registry rmiRegistry = LocateRegistry.createRegistry(4099);
MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
String svc =
  "service:jmx:rmi://apphost:9994/jndi/rmi://apphost:4099/connector";

JMXServiceURL url = new JMXServiceURL(svc);
RMIConnectorServer rmiServer = new RMIConnectorServer(url, null, mbeanServer);
rmiServer.start();

It doesn't get much easier than this.

Wednesday Apr 26, 2006

Bit Torrent file distribution at Sun

A while ago, I put together a Bit Torrent solution for the OpenSolaris folks. They wanted as many download options as possible for the OpenSolaris bits. As is typical, we only had a couple of weeks to select technologies and deploy a solution. Not a problem, I thought; I had some familiarity with Bit Torrent, so it seemed like it would be pretty straightforward.

What we eventually decided on was a solution using a tracker called BNBT and the Azureus bit torrent client. It all went together pretty easily; however, there were a few twists we had to accommodate, the biggest of which was being able to restrict access to the torrents to users in non-embargoed countries.

For that, we decided on a product called NetAcuity. The product has multiple language bindings, including C and Java (which is great, because BNBT is C++ and Azureus is written in Java). The software itself is fast and stable. The key was finding the right place in both BNBT and Azureus to interpose an IP address check and permit or deny access.

The deployment environment presented a bit of a challenge as well. Anyone who's used Azureus knows that the main interface is a GUI, but GUIs aren't permitted in our hosting environment. It's a pretty secure environment; Solaris is trimmed down considerably, including the removal of all of the X/Openwin/GNOME software.

However, after some poking around on the Azureus Wiki, I found that an alternate TTY & Telnet interface was being worked on. And I already knew about the more established Swing WebUI. The TTY interface was fairly rudimentary at the time, but it was enough to allow Azureus to be started and stopped without the need of an X server somewhere.

So that only left a model for torrent authoring. We devised a method that, although somewhat twisted, worked for low-volume authoring. The method involved starting up a master or uber-seeder torrent client, Azureus, and remotely displaying it back to a desktop using the port forwarding capabilities of SSH. Within this master, the torrent file was created and the master started seeding the torrent. The torrent file would then get loaded by hand, using the Swing WebUI, into the seeders that we set up - there are four of those. After the torrent was downloaded and being seeded by all of our seeders, the master/uber-seeder could be shut down until the next new torrent.

At the end of the day, it has all been working pretty well, at least for a first phase. But recently we started to look at how we could improve and streamline the framework, particularly around torrent authoring.

So last week, we got together with a few folks from Aelitis, the main developer of Azureus. We described our environment and asked how it could be improved. They came back with a wealth of information on how to automate the process much better. There are some new features and UIs available that would help us greatly, including both an HTML WebUI and RSS feeds. The method they suggested was to still create a master seeder, but enable Azureus' shared folder feature, which automatically picks up new files in the shared folder and creates torrents out of them. From there, you point all the regular seeders at the RSS feed on the master seeder. They periodically read the RSS feed, identify new torrents, and automatically download and start seeding them.

Great stuff that will go a long way toward simplifying our torrent offering and providing a more scalable solution.

Sunday Jan 29, 2006

+1 for Synergy

Ok, so I know it's been blogged about at least a couple of times on blogs.sun.com. Both Yong Sun and Rich Burridge have commented about it, but this is cool enough that I have to repeat it once again.

I finally got around to trying Synergy after hearing about it from co-worker Kenny by way of Rama, and the virtual keyboard/mouse software is pretty dang cool. I've got an Asus desktop running Windows XP connected to an Acer Ferrari laptop running Solaris Nevada build 29, and I'm able to control the laptop from the PC.

Incredibly cool. Now I don't have to hunch over anymore while I work on the laptop.
