Saturday Sep 15, 2007

Iraq war was about oil: Greenspan

So all the rhetoric about freedom and democracy seems to be just window dressing according to Greenspan's book...where does the W go now? Greenspan on Oil and Iraq.

Wednesday Aug 15, 2007

Changes in weather

In Melbourne, Australia, the city I come from, quite dramatic changes in the outside temperature are common. A 10C drop in as many minutes is not unusual in summer as a change sweeps through. An example of how dramatic this can be comes from the 15th of November, 2003, 2 weeks before the official start of summer:

The nice part about this graph is that it was lovely and warm during the day (bordering on hot), so perfect weather for the beach, but by the time peak hour had arrived and everyone wanted to go home, it was rapidly cooling off. Doesn't always work that way though.

Of course it can get hot in summer, quite rapidly, too:

Both of the images in this blog entry are brought to you courtesy of Andrew Watkins' web page with real time weather observations of Melbourne -

Footnote: there may well be other days with better illustrations of how rapidly our weather changes in Melbourne, these were just a couple I found quite quickly.

Monday Jul 30, 2007

Cookies and GMail

For a long while, I've been using GMail but have been keeping the privacy implications at bay by restricting which cookies I accept to (or

Now a login to GMail requires me to accept cookies from . That's not quite so comfortable a thought!

By moving GMail services under, it is now no longer as easy to distinguish between the cookies you want (ie those required for GMail) and those you don't want (those Google wants you to have while you search.)

Monday Jun 11, 2007

Taking Holidays in the USA

I'll preface this by saying that prior to 2006 I was an employee in Australia, where 4 weeks holiday leave per year is standard, with some places allowing it to accrue to be "some number of months worth." Sick leave is also generally available on top of that.

As a starting employee at Sun here in the USA, I'm entitled to a meagre 10 days of holiday each year for the first n years, after which it increases to 3 weeks, and after more time again it rises to 4, where it remains. As a company, Sun is struggling to find a way to post a profit, and reducing the holiday time for which it is liable is one of the accounting tricks that can help get it there.

But with so few days of holiday available, there is absolutely no incentive to take them - they just don't amount to enough to build a holiday around - so when we're all told to "take a day or two off" to bring down the liability, there is no motivation for me to do so. Until relatively recently, Sun also liked to tell people in the USA when they would take two weeks of holiday: July 4 week and between Christmas and New Year. This is no longer the case and for that we owe Jonathan a big thank you!

How could things improve? Clearly Sun wants to limit its liability, so a cap on the amount of time off a person can accumulate is quite desirable. Giving people n days of holiday to use up at the start of some "year", rather than having them accrue, would change the nature of the accounting game: the liability would start high at the beginning of each year and reduce over time. It would also enable people to plan a holiday spanning the boundary of that "year" that was longer than their total allotment for a single calendar year (good for employees!) Raising the starting holiday allowance at Sun from 2 weeks to 3 weeks would also be very welcome, although if staff turn over quickly, the company supposedly doesn't lose as much - though a casual observation is that it is the people who have been around Sun longer who seem more inclined to leave, not those who have been around for a short period. The general meme here is that while you have few days, you're less inclined to use any until you have too many.

What should Sun do? Personally, I'd like to see everyone entitled to at least 4 weeks holiday per year; whether they're allowed to accrue it or not, I don't mind so much. Is it in the best interest of Sun? Well, I'm not sure that any change could make things any worse (except to regress to the previous situation where they told us when we could take holidays.)

End note: It's not clear to me if this (so few days of holiday per year) is normal behaviour for companies in the USA - if it is, I'm shocked that American employees put up with it and don't demand more holiday time off from their employers.

p.s. For all the good that is said about Google, I'm curious about what their policy is, from an academic viewpoint, and if it is skewed to make the employee happy as are many of their other perks.

Monday May 21, 2007

Modern pollution in aviation

On the weekend I was lucky enough to be able to spend some time in the afternoon watching American aircraft from World War II flying in and out of Moffett Field airbase. Whilst I was there, two other, more modern, aircraft used the airfield.

The planes observed on Sunday afternoon were:

One very stark difference between the P-3/Vomit-Comet and the WWII planes was the level of pollution that could be observed coming from their engines. Quite surprisingly, it was the WWII planes that produced no observable pollution, while the two more modern ones made the blue sky brown as they passed. I don't know much about the engineering that goes into aircraft engines, but I, just like everyone else present, could discern the difference between the engines that appeared to burn cleanly and those that burned dirty. And it wasn't just a little bit of pollution that those two planes left in the air as they went on their way.

Maybe the FAA needs to come up with a "smog check" equivalent for planes...

Friday May 11, 2007

Data dissemination vs spam and phishing

Recently a slashdot article pointed out that a video had been published of a talk presented by Van Jacobson (VJ header compression for modems, TCP congestion, etc) on where the future of the Internet lies and how it will be about data dissemination.

The talk was quite interesting to listen to, and it lays out an interesting future for networking. Indeed I can see it heading in the direction he alluded to, using said technologies, except for one thing: nothing in what he presented will do away with spam or wipe out phishing attacks as he suggested.


Simply being able to associate spam with a specific sender will not solve the problem. Spam is possible because it makes use of the design of email that allows anyone to send email to anyone else. Let me digress for a bit.

In essence, spam is just email from "someone else" that you don't want, be it about drugs, stocks or something else. In other countries around the world where access to the mailbox isn't restricted to the Government-backed post office organisation, spam existed in another form before email - as junk mail. Letterboxes would be filled with pamphlets from Safeway, etc, telling us about their weekly specials. The real difference between that and spam via email is that printing material and having it delivered has a very definite cost associated with it.

So back to it. In VJ's talk, he explores what things might look like if we move to a data dissemination model, where I declare what I'm interested in and someone replies with data I can authenticate. He seems to prefer a model of authentication similar to PGP's web of trust rather than something such as X.509 certificates. That's all well and good, but it still doesn't address the problem that spam exploits: email makes it possible for people unknown to you to contact you easily, and if a web of trust replaced that, it's highly likely that at some point the web of trust would come to include those sending spam. Now maybe for a lot of people, such as your mother or grandmother, this works - they're only really interested in receiving email from people they know (ie those inside their web of trust) - but for those of us in the open source world, where random people email us, it doesn't.

Another topic VJ mentioned this would defeat is phishing. I don't see how. Phishing exploits human naivety by presenting something to us that isn't what it claims to be. Again, if you can limit who sends you email to only those within your web of trust, fine, maybe you stand to benefit, but if you can't then there is no benefit. Why?

If I digitally sign an email using a public key that links back to Verisign and put in it content telling the user to go to some random page that looks like their bank, what would fail an automatic verification? And if you trust me to send you that, then what? What makes phishing possible is that it exploits the human brain's willingness to believe something is what it isn't - an illusion if you like. No amount of digital signing or verification of electronic content is going to override the brain's decision that something is benign when it actually isn't. That most phishing attacks are delivered via spam does imply that if you can stop spam then you can reduce phishing attacks.

Today I could eliminate spam/phishing style emails by simply saying: only put email in my inbox if I've said ahead of time that the sender's address is ok, or if I have previously sent the sender an email. This still leaves me open to forged email headers, but it would cut down on the influx significantly and, more importantly, it doesn't require a new paradigm for networking.
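As a rough illustration of that whitelist idea, here is a minimal shell sketch. The file names, paths and sample message are invented for this example - a real setup would hook into procmail or the MTA's local delivery agent rather than loose files in /tmp:

```shell
# Sketch only: paths and the sample message are made up for illustration.
mkdir -p /tmp/maildemo
printf 'friend@example.com\n' > /tmp/maildemo/whitelist    # approved ahead of time
printf 'From: Spammer <junk@example.net>\nSubject: stocks\n\nbuy now\n' \
    > /tmp/maildemo/message

# pull the address out of the From: header
from=$(sed -n 's/^From:.*<\(.*\)>.*/\1/p' /tmp/maildemo/message)

# accept only senders approved ahead of time (addresses I've previously
# written to would simply be a second file to search here)
if grep -qixF "$from" /tmp/maildemo/whitelist; then
    dest=inbox
else
    dest=quarantine
fi
echo "delivering to $dest"
```

The point of the sketch is only the decision: an exact, case-insensitive match against a pre-approved list, with everything else quarantined rather than rejected outright (to allow for forged headers and false positives).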

Tuesday Apr 10, 2007

IPFilter 5.0.2

Continuing to add new things to IPFilter, some of the features I've recently been writing code for are:

  • selective flushing - to flush just things matching port 80:
    # ipnat -m 'tcp.port=80' -F
    # ipf -m 'tcp.port=80' -Fs

    A list can be given - "tcp.port=25,80". The full list of currently allowed words is:

    ip.addr, ip.p, ip.src, ip.dst, tcp.port, tcp.dport, udp.port, udp.dport
  • the matching from flushing also applies to listing active entries:
    # ipnat -m 'port=80' -l
    # ipfstat -m 'port=80' -sl

    will respectively show only NAT or state matching port 80.

  • the above syntax can be used in ipf rules like this
    pass in exp { 'tcp.port=25,80' } keep state

    (this is really experimental - how many fields are required for it to be attractive or is it just a waste of time?)

  • Active NAT/state entries can now be printed out in columns:
    # ipnat -O all -l | head -1
    # ipfstat -O all -sl | head -1

    will print out the names of columns. A list can be given:

    # ipnat -O oldsrcip,newsrcip,olddstip,newdstip -l

    And you can change the column names printed at the top:

    # ipfstat -O src=saddr,dst=addr -sl

    or just not print out the heading line at all:

    # ipnat -O all= -l

Comments/thoughts/criticisms welcome.

Tuesday Mar 27, 2007

How to get a 200ml bottle of liquid onto a plane

With all of these restrictions in size on containers of liquids and boarding a plane, people are having a hard time transporting things like bottles of wine, etc.

So far my observations have been that if you board a plane at a suitable airport, the way to do this is to say "No" when asked if you're carrying any liquids in your carry on luggage. The only problem I've had has been toothpaste, so my approach is now to buy a small tube of toothpaste at my destination and throw the unused contents out.

Airports where this approach has failed are:

  • Hong Kong (last USA bound transit, September 2006)
  • Los Angeles (last transit, November 2006)
  • Melbourne (last USA transit, October 2006)

Some where it will work are:

  • San Francisco
  • San Diego

And how many planes have dropped out of the sky because of bombs being made from liquids carried onto a plane? 0.

Of course I don't blame the ground staff at all for their approach to the situation - this is a most absurd restriction on travellers and does nothing except make it take longer for people to get through security checkpoints at airports.

Oh, as for this being terrorist advice, I'm pretty sure that if anyone was actually planning a terrorist activity that they'd work out which airports were "favourable" for getting someone past the security checkpoint with whatever it is they're not supposed to have when they're on a plane - without needing to read this.

Thursday Mar 22, 2007

Integrating an Ultra20M2

I decided to try integrating a Sun Ultra20M2 into my home network. My experiences have been somewhat varied over the course of this week, including:

  • CPU fan wanted to run at 4500RPM, all the time. This appeared to be solved by reloading the BIOS config from the "boot device" screen. Why, I don't know.
  • Doing "Save changes and reset" from the BIOS often seems to not reset the system properly. Doing a
    while running Solaris seems equally effective (the screen goes black but the system is never properly reset.) Bug ID# 6539609
  • With only 512MB of RAM installed, you cannot install Solaris 10 Update 3 from a DVD using the GUI installer, regardless of what the published recommendations say. Hopefully the situation will be better for Update 4.
  • With only USB connectors for keyboard and mouse, connecting a PS/2 keyboard using a PS/2 to USB frob does not appear to work. Bug ID# 6539611. This has been closed - the defect was with the PS/2 to USB frob: not all of them announce themselves properly as a USB device, and probes return a vendor and product ID of 0.
  • The opening for the DVD-ROM tray is the correct size for the drives that Sun ships - the front plate on other drives is larger, requiring some shaving of plastic to get them to fit correctly.

On the good side:

  • I can reuse hard drive sleds from years ago to mount a second SATA drive and just "slide it in".
  • With the fan working properly, it is much quieter than my old PC (not to mention faster), if only the keyboard worked...

And without a support contract (bought it for my own use), it's not clear that I've got any real recourse to have any of the above fixed...

Sunday Feb 18, 2007

How to save yourself $400

NOTE: This is being written with a perspective of NOT being a Sun employee...

With the upcoming change in timezones in America, people are all of a sudden being forced to think about updating their timezone definitions. For Solaris, if you're running anything older than Solaris 8 then it would appear you're running a vintage operating system. The timezone update for these operating systems (if you don't have a support contract) is $400/server.

Of course one answer here is to just upgrade your operating system to Solaris 10, but is there any real need for this?

In Australia where we've had quite a few timezone changes in the last 10 to 15 years, we're much more accustomed to knowing what to do - for Unix.

The first important thing to understand is that nearly all Unix platforms use the same text format for describing time zone data.

The next thing you need to know is that there is a program, called zic (zone information compiler), that compiles the timezone text descriptions into the binary data files.

So how do you save yourself $400? By doing this:

man zic

at your vintage Solaris shell prompt.

On Solaris, the timezone data files all live in /usr/share/lib/zoneinfo. Both the text rules and binary data files are here. The rules for Australia (for example) are found in a file called "australasia" and the binary data for each zone found in the directory "Australia".
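To make the zic step concrete, here is a sketch that compiles a zone description into a scratch directory, leaving the system files untouched. The zone name "Demo/Fixed" and the /tmp paths are invented for the example:

```shell
# zic often lives in /usr/sbin rather than on the default PATH
PATH="$PATH:/usr/sbin:/sbin"
mkdir -p /tmp/tzdemo

# a minimal zone in the standard text format:
# name, UTC offset, rules (none), abbreviation
cat > /tmp/tzdemo.rules <<'EOF'
Zone Demo/Fixed 10:00 - AEST
EOF

# compile the text description into a binary data file
zic -d /tmp/tzdemo /tmp/tzdemo.rules
ls -l /tmp/tzdemo/Demo/Fixed
```

The same invocation, pointed at the updated "australasia" (or "northamerica") rules file with -d /usr/share/lib/zoneinfo, is what regenerates the binary zone files in place on a vintage Solaris box.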

America on the nose

Americans take heed, the shine is wearing off the name of your country. As seen in this survey, Global backlash against America, people around the world no longer like what they see of your country the way they used to.

Friday Dec 08, 2006

Requiring SSL authentication for IMAP

For a while now, I've been looking at getting away from password authentication to my IMAP server and wanting it to accept SSL certificate authentication only (without going to the extreme of deploying Kerberos.)

Finally I've managed to lick it, after picking it up and putting it down a couple of times. What made it so difficult? Having to pilot openssl. There are so many options with this program, it is very easy to make a wrong step, forget something, and have to start over. To make my life a tad easier, I disabled basic constraints on all of the certificates I generated below.
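For what "disabled basic constraints" can look like in practice, here is a sketch of the relevant openssl.cnf lines commented out. The section names are the ones used in the stock sample config file; your file may name them differently:

```
# in the section applied to issued certificates (stock name: usr_cert)
[ usr_cert ]
# basicConstraints = CA:FALSE

# in the section used when self-signing the CA (stock name: v3_ca)
[ v3_ca ]
# basicConstraints = CA:TRUE
```

Commenting the lines out omits the basicConstraints extension from the generated certificates entirely, which is one way to keep the CA, server and email certificates from tripping over each other while experimenting.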

Building my own CA and certificates to use happened in 3 stages:

  • generating a certificate authority (CA) for my own use
  • generating a server certificate for the imap server and signing that with the CA cert
  • generating an email certificate for me to use, also signed by my CA

An easy mistake to make here is to use any email address more than once. So for the three certificates above, I have a different email address in each - root-AT-domain, root@imap-DOT-domain and me@domain. Why is this necessary? The email address is one of the key identifiers and needs to be unique (or at least it is according to my own openssl.cnf file.)

$ openssl req -x509 -newkey rsa -out cacert.pem -outform PEM -days 10000
AU [AU]:
Victoria [Victoria]:
Melbourne [Melbourne]:
domainname [domainname]:
Organizational Unit Name (eg, section) [OU]:
Common Name (eg, YOUR name) [Deus Ex Machine]:Deus Ex Machina
Email Address [root@domainname]:root@domainname

$ mkdir private
$ mv privkey.pem private/cakey.pem

$ openssl req -newkey rsa -keyout imapd.pem -out imapd.pem
[CN] = imap.domainname

$ mkdir newcerts
$ touch index.txt
$ echo 01 > serial

$ openssl ca -in imapd.pem -notext -out imapd-cert.pem

$ openssl req -newkey rsa -keyout user.pem -out user.pem
[CN] = My Name

$ openssl ca -in user.pem -notext -out user-cert.pem

$ openssl rsa -in imapd.pem -out imapd-key.pem
Enter pass phrase for imapd.pem:
writing RSA key
$ cp imapd-key.pem /etc/openssl/private/imapd.pem

$ cp cacert.pem /etc/openssl/`openssl x509 -hash -in cacert.pem -noout`.0

$ cat user.pem user-cert.pem > ~/user-cert.pem
$ openssl pkcs12 -in ~/user-cert.pem -export -out ~/user.pkcs12

I should add that, for sending email, I wanted to continue using the same mail server software I'd been using, without risking changes to any config files. Enter stunnel. The config file below is very simple (yay!), although there is one non-obvious aspect to it: if you provide both CApath and CAfile in the config file, it will produce an error about a certificate already being loaded. Solution? Only specify CApath. In retrospect, maybe I could have taken this approach with imap too...but a quick test reveals it doesn't work quite that easily.

debug = 5
cert = /etc/openssl/private/imapd.pem
CApath = /etc/openssl
verify = 3
        accept = 465
        connect =
        local =

The other part of the changes required was modifications to the UW IMAP server software (patches below.) Although the software package as it stands accepts SSL connections on port 993, it doesn't have any option for requiring all SSL clients to present a valid certificate and dropping the connection if they don't. My hacks below are a tad extreme: they force all clients to have a valid certificate. But then this is my desire :) Why? Because I don't want to expose my IMAP server to password guessing via IMAP SSL connections. This way an attacker first needs to obtain a valid certificate :) Maybe not impossible but it changes the game slightly...

*** 223,231 ****
      return "SSL context failed";
    SSL_CTX_set_options (stream->context,0);
                                /* disable certificate validation? */
!   if (flags & NET_NOVALIDATECERT)
      SSL_CTX_set_verify (stream->context,SSL_VERIFY_NONE,NIL);
!   else SSL_CTX_set_verify (stream->context,SSL_VERIFY_PEER,ssl_open_verify);
                                /* set default paths to CAs */
    SSL_CTX_set_default_verify_paths (stream->context);
                                /* want to send client certificate? */
--- 363,371 ----
      return "SSL context failed";
    SSL_CTX_set_options (stream->context,0);
                                /* disable certificate validation? */
!   /*if (flags & NET_NOVALIDATECERT)
      SSL_CTX_set_verify (stream->context,SSL_VERIFY_NONE,NIL);
!   else*/SSL_CTX_set_verify (stream->context,SSL_VERIFY_PEER,ssl_open_verify);
                                /* set default paths to CAs */
    SSL_CTX_set_default_verify_paths (stream->context);
                                /* want to send client certificate? */
*** 261,267 ****
    if (SSL_write (stream->con,"",0) < 0)
      return ssl_last_error ? ssl_last_error : "SSL negotiation failed";
                                /* need to validate host names? */
!   if (!(flags & NET_NOVALIDATECERT) &&
        (err = ssl_validate_cert (cert = SSL_get_peer_certificate (stream->con),
                                host))) {
                                /* application callback */
--- 401,407 ----
    if (SSL_write (stream->con,"",0) < 0)
      return ssl_last_error ? ssl_last_error : "SSL negotiation failed";
                                /* need to validate host names? */
!   if (/*!(flags & NET_NOVALIDATECERT) &&*/
        (err = ssl_validate_cert (cert = SSL_get_peer_certificate (stream->con),
                                host))) {
                                /* application callback */
*** 697,702 ****
--- 837,848 ----
        if (SSL_CTX_need_tmp_RSA (stream->context))
        SSL_CTX_set_tmp_rsa_callback (stream->context,ssl_genkey);
                                /* create new SSL connection */
+       SSL_CTX_load_verify_locations(stream->context, NULL, "/etc/openssl");
+       SSL_CTX_set_verify(stream->context, SSL_VERIFY_PEER, NULL);
+       SSL_CTX_set_verify_depth(stream->context, 1);
        if (!(stream->con = SSL_new (stream->context)))
        syslog (LOG_ALERT,"Unable to create SSL connection, host=%.80s",
                tcp_clienthost ());
*** 707,712 ****
--- 853,870 ----
          syslog (LOG_INFO,"Unable to accept SSL connection, host=%.80s",
                  tcp_clienthost ());
        else {                  /* server set up */
+         if (SSL_get_peer_certificate(stream->con) != NULL) {
+           if (SSL_get_verify_result(stream->con) == X509_V_OK)
+             syslog(LOG_ERR, "SSL client verification succeeded");
+           else {
+             syslog(LOG_ERR, "SSL client verification failed");
+               goto badclient;
+             }
+         } else {
+             syslog(LOG_ERR, "the peer certificate was not presented");
+               goto badclient;
+         }
          sslstdio = (SSLSTDIOSTREAM *)
            memset (fs_get (sizeof(SSLSTDIOSTREAM)),0,sizeof (SSLSTDIOSTREAM));
          sslstdio->sslstream = stream;
*** 725,730 ****
--- 883,889 ----
+ badclient:
    while (i = ERR_get_error ())        /* SSL failure */
      syslog (LOG_ERR,"SSL error status: %.80s",ERR_error_string (i,NIL));
    ssl_close (stream);         /* punt stream */

Tuesday Nov 07, 2006

Packet Filtering Hooks integrated into Solaris Nevada

On the 20th of October, during the 1st week of build 52 of Solaris Nevada, the Packet Filtering Hooks project was finally putback into the main Solaris gate. The project took around 16 months to complete and whilst it started out with 5 developers working on it, in the end it finished up with 2: two people on the project changed careers within Sun from development to management and another was lost to a different project. Of the 2 who were with it at the end, only I was there from the start - the other project member joined early this year to cover someone who left the company to live with his wife in another city.

The aim of this project was to take a first pass at introducing an API to the Solaris kernel that allows IP packets to be intercepted without the need for a STREAMS module. Relying on a STREAMS module brought with it two major problems for Solaris 10:

  • Inability to access packets flowing between zones
  • Loss of performance improvements from features such as hardware checksum, etc

In internal testing, the performance improvement with IPFilter using Packet Filtering Hooks vs the STREAMS module is between 20% and 30% in some tests - a very worthwhile gain!

There is one glaringly obvious limitation with the current implementation - it is limited to a single callback being registered for hook events that allow data to be modified. This limitation came about in discussion with PSARC, where the requirement for making this capability available included solving the problem completely - i.e. in a more complete manner than simply assigning different modules a "priority". Layer 2 (or MAC layer) filtering is amongst the projects we are currently looking at, and with any luck, solving this problem can be wrapped up in it!

For people who are familiar with Netfilter in Linux, the table below compares the interception points available in Solaris and Linux today. The hooks defined in Solaris exist because that is where we currently have someone interested in using them - we have nobody asking to use the interception points below that are unimplemented.

If you are developing a product on Solaris or porting a Linux product to Solaris and would benefit from having a hook that is currently unimplemented (such as Local In/Out), please get in contact with me so we can discuss your needs and what can be done to achieve them. Maybe there's scope for you to become involved in OpenSolaris as a developer and do the initial work to support them.

Solaris Packet Filtering Hooks    Linux Netfilter
Loopback inbound                  -
Loopback outbound                 -
Physical In                       Pre-Routing
Physical Out                      Post-Routing
-                                 Local In
-                                 Local Out

Sunday Oct 15, 2006

Furniture shopping with a laptop

Shopping for furniture to go in your home isn't an easy task, whether it is an apartment, house, mansion or anywhere inbetween. Will it fit through the door is one problem; how much space is left around it is another. Colours...well, that's another source of debate for many. The spatial problem is quite a difficult one to solve. One solution might be to build a prototype couch from cardboard and see how that fits, if you have a lot of cardboard left over. Another is to build a CAD model of the room(s) - this was recommended to me by a friend's wife who has studied architecture and is the path I decided to take.

The first step in going the CAD route (assuming you have the right software) is to build a model in your computer of the room(s) you will be decorating. During the week, I spent a couple of nights with a tape measure walking around and measuring various parts of some rooms. The hard part of this is my tape measure was only 2 meters long - many walls are often much longer. This was resolved with measuring in steps - not perfect but good enough. If you've got a good CAD program, you might also want to build a 3D picture of the spaces you want to fill. An important step here is to make sure you include things like air vents, air conditioners, etc, on walls so that when you place items around the room, you're not blocking important features.

Next comes filling in the room with furniture. At this point, I found it sufficient to just represent furniture by solid boxes. For round tables, a cylinder that matches the table's widest radius would be the go.

If you are comfortable shopping for furniture from home, on the Internet, it would be sufficient to just add bits to your model using the dimensions from the web sites of manufacturers.

If you're intending to do some leg work then you should be using CAD software on a laptop. The idea here being that when you walk into a showroom with a laptop under your arm, you can sit down and add in a shape to represent the piece of furniture you're interested in and test out how it fits in with everything else.

Using CAD software should also allow you to measure the distance between objects in the model, so that you can see if you put the sofa there and the TV over there then the distance will be X. This can be more important when you're making sure that the spaces left between things are wide enough for people to walk through.

How well did this work for me? Very well. I was able to instantly see how a table would fit in, including drawing out a circle to represent the extra space needed for people sitting down. It also let me compare the sizes of couches/sofas with respect to the space they occupied - I could use different coloured boxes for different ones and see what the different dimensions in length, width and height meant by comparison.

Monday Oct 09, 2006

Insecure pin codes for access to planes at LAX

Whilst sitting around waiting for my flight up from LAX to SFO today, I happened to be sitting where I could easily see the number pad for opening the doors out to the plane.

After watching both a pilot and a hostess enter their access codes, I began to wonder if there was any security at all in the PIN they use. I'm not going to repeat what buttons I saw them push, but the sequence was quite easily observed...and easily remembered. I've never taken any real notice of this before, but I'm sure that if I put some thought into it, recording the sequence would be close to trivial with the video camera in a mobile phone, whilst maintaining some degree of deniability.

The first problem here is observability. There is no privacy for the touch pad itself from casual observers. The security of the key entry relies on the body of the person entering in the PIN.

The second problem is that I could not see any other access control mechanism in use or required - no swipe card of any sort.

So while TSA is making sure you can't carry on any lipstick or toothpaste, there seems to be little to stop a random person from walking down the air bridge, if they're observant enough.



