Tuesday Nov 21, 2006

BTConnect and Dynamic IP SMTP server problems.

So, I've got Exim working really nicely... I think.

I've got a "dynamic" IP address from my BTConnect business broadband connection. This means that I have trouble connecting directly to SMTP servers from my server because their reverse lookup of my name doesn't match... i.e. it resolves to a dynamic BT address, and some SMTP servers are rightfully suspicious of that and punt me away.

So I've configured Exim to relay to mail.btconnect.com. This works because I've registered my 2 domain names with them, and their mail relay will accept stuff coming from those domains.

However, I also forward e-mail for my sister, and have a couple of little mailing lists configured, and these get bounced because I retain the name of the original sender on my outgoing forwards.

Hmmm. This should be solvable by authenticating myself on BT's mail server.

Trawl of the BT Broadband site tells me that:

1: BTConnect (the business one) doesn't allow SMTP authentication:

# telnet mail.btconnect.com 25
Connected to mail.btconnect.com.
Escape character is '^]'.
220 C2bthomr06.btconnect.com ESMTP Mirapoint 3.7.4b-GA; Tue, 21 Nov 2006 17:50:20 GMT
ehlo <mydomain>
250-C2bthomr06.btconnect.com Hello host<nnn>.btcentralplus.com [xx.yy.zz.aa], pleased to meet you
250-SIZE 52428800
250 HELP

2: BTInternet (the consumer one) does:

# telnet mail.btinternet.com 25
Connected to pop-smtp1-f.bt.mail.vip.ird.yahoo.com.
Escape character is '^]'.
220 smtp808.mail.ird.yahoo.com ESMTP
ehlo <mydomain>

Phone call to the BTConnect helpline. They understood my problem and were slightly sympathetic, but confirmed that the only way to send non-BT domain e-mail via their BTConnect server was by pre-registering EVERY domain with them. Clearly impractical. (Heck, I don't know who's going to send my sister an e-mail.)

Phone call to the BTInternet helpline. I used to have an account. It was suspended due to lack of use. Got it re-instated. But they said I had to dial up at least once every 6 months to keep it going. I don't have a dialer at home, I'm bound to forget, and I'm not crazy about plain text authentication. But I reconfigured Exim to use mail.btinternet.com as my smart relay, and started authenticating. Thankfully, my sister's e-mail stopped bouncing at that point.
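For reference, the smarthost arrangement looks something like this sketch (Exim 4.6x syntax; the router and authenticator names and the credential placeholders are mine, not the actual config):

```
# Router: send all non-local mail to BT's relay
smarthost:
  driver = manualroute
  domains = ! +local_domains
  transport = remote_smtp
  route_list = * mail.btinternet.com

# Client-side authenticator: the plain-text LOGIN auth I'm not crazy about
btinternet_login:
  driver = plaintext
  public_name = LOGIN
  client_send = : <btinternet username> : <btinternet password>
```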

I wasn't particularly happy with the solution. I had a couple of long term fixes in mind:

  1. Convert to a static IP from BT, which means I could register my name in the in-addr.arpa (reverse DNS) tables. Costs money. (Though they should really do it for free; my "dynamic" IP address hasn't changed in about 3 months.)
  2. Sign up for BTInternet's premium e-mail solution @ 1.50 a month, which means I could continue to use the mail.btinternet.com mailserver. Costs money, and not happy with plaintext auth.
  3. Something Else.

I ended up doing something else. I had a rant about this to one of my friends who happens to be my secondary MX (for when I break my machine), and he chuckled and reminded me that when he set up the MX and created my account on his server, he also enabled encrypted SMTP authentication for me. All I needed to do was point at his server in the first place. (Which I am now doing.)

The only other thing I want to do is set Exim up so that outgoing mail from my domain is sent to mail.btconnect.com, and everything else goes via my friend's mailserver. I know it's possible, I just need to write the rules. Here they are:

# Use this one for mail originating from my domain so I don't overload smart_route2
smart_route1:
  driver = manualroute
  domains = ! +local_domains
  senders = *@<mydomain>
  transport = remote_smtp
  route_list = * mail.btconnect.com

# Use this one for mail NOT originating from my domain (i.e. mailing list/alias expansion)
smart_route2:
  driver = manualroute
  domains = ! +local_domains
  transport = remote_smtp
  route_list = * <my friend's smtp server>

(obviously <mydomain> is my actual domain)

Exim is COOL.


Friday Nov 17, 2006

VNCviewer trick

I've been using "VNC Viewer Free Edition 4.1.1 for X" so that I can remote control other machines in the house, but if the machine I try to connect to is down/unavailable/switched off etc, the vncviewer command kind of hangs and I can't do anything on my desktop (except wiggle my mouse) until it times out after a few minutes. Needless to say, this was annoying me. Wrote a simple script called vncsafe:

#!/bin/sh
# Only launch vncviewer if the host responds to a ping first
ping $1 >/dev/null 2>&1
if [ $? = 0 ]; then
    vncviewer $1 &
else
    zenity --error --text="$1 is not responding."
fi
Works a treat


Thursday Nov 16, 2006

I ate all the SPAM

I was chatting to Chris about my spamassassin setup, and he was saying that he's configured his with .forward files for all his users (family), and explained the reason for doing so.

Basically, he wanted to set a low SPAM threshold, and have all the SPAM delivered to himself so he could check it. Given that I have 4 daughters under the age of 12, I thought that would be a GOOD THING(tm) as well.

I also wanted to configure it so that I could vary the level of SPAM detection based on the user.

I created a .forward file in each user's account that looks like this:

# Exim filter

if $header_X-Spam-Score: does not begin "-" and
   $header_X-Spam-Score: is above 10
then
    deliver <to me>
endif

I had to change the Exim configuration to write the X-Spam-Score header calculated by spamassassin in the acl_check_data ACL using $spam_score_int, because the user filter stuff in Exim doesn't like decimals.
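The ACL side of that amounts to something like this sketch (the exact spam condition options here are illustrative of the 4.6x exiscan interface, not my literal config):

```
acl_check_data:
  # Scan with spamassassin as user "nobody"; always add the header.
  # $spam_score_int is the score multiplied by ten, so a user-filter
  # threshold of 10 really means a SpamAssassin score of 1.0.
  warn  spam    = nobody:true
        message = X-Spam-Score: $spam_score_int
  accept
```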

Finally, I wrote a filter in Thunderbird which puts all mail that's in my inbox that's NOT addressed to me in a folder called SPAM. That means I can check for false positives, forward the e-mails to my offspring if required, and add a rule to the whitelist to allow in the future.

The .forward file allows me to specify varying limits of SPAM checking for different people, e.g. the 7 year old twins have a setting of 5 (i.e. 0.5), while my wife has a setting of 1.
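The per-user logic boils down to a simple comparison, which can be sketched in Python (the user names and the default threshold here are hypothetical; remember the values are in $spam_score_int units, i.e. the SpamAssassin score multiplied by ten):

```python
# Hypothetical per-user thresholds, in Exim's $spam_score_int units
# (SpamAssassin score * 10): the twins get 5 (i.e. 0.5), my wife 1 (i.e. 0.1).
THRESHOLDS = {"twin1": 5, "twin2": 5, "wife": 1}
DEFAULT = 10  # hypothetical default threshold for everyone else (i.e. 1.0)

def redirect_to_me(user: str, spam_score_int: int) -> bool:
    """True when the message scores above the user's threshold and
    should be delivered to me for checking instead of to the user."""
    return spam_score_int > THRESHOLDS.get(user, DEFAULT)

print(redirect_to_me("wife", 8))   # 0.8 is above wife's 0.1 threshold
print(redirect_to_me("twin1", 4))  # 0.4 is below the twins' 0.5 threshold
```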

Need to let this run for a while so I can tweak the thresholds between too much and too little as required.


Building/Installing/Configuring apps on Solaris Home server

I've diverted slightly from Chris Gerhard's home server implementation by deciding to NOT use the blastwave packages, and instead either use the sunfreeware stuff, or compile it myself.

I've got a few reasons for doing it this way:

  1. Many of these packages have compile time options which allow me to include/exclude things. I kind of like the ability to be able to do that, rather than having to stick with the "defaults" compiled by someone else.
  2. Using blastwave (which is really great) means that you need to install a bunch of "dependent" libraries. It just kind of irks me that a lot of the libraries/applications dependencies that I install are already sitting in /usr/sfw. I then start getting really confused, and run into complications about LD_LIBRARY_PATHs and making sure things are seeing the right library etc. I think it's difficult to use blastwave without sort of subscribing to it wholesale. It's a bit like buying a "kit car" fully assembled.
  3. The sunfreeware stuff tries to make use of existing Solaris libraries as much as possible, so should reduce the number of additional packages I had to download.

Additional Software Installed on my home server:

From Blastwave:

  • xineui (and whatever it needed to get that working.. about 81MB)

From Sunfreeware:

  • imap-2004g-sol10-x86-local    (had to manually update /etc/inetd.conf and run inetconv, then update /etc/services. Also needed to generate a certificate, as this is the SSL version, so it wouldn't let me do plain text authentication.)
  • openssl-0.9.8d-sol10-x86-local
  • libiconv-1.9.2-sol10-x86-local

From CPAN:

  • spamassassin

    (# perl -MCPAN -e 'install Mail::SpamAssassin' , then download the listed dependencies as well.)

From Source:

  • Exim 4.63
  • Clamav-0.88.6

Building Exim

Building Exim was pretty straightforward. Grab the tar.bz2, unzip, untar, copy src/EDITME to Local/Makefile. Edit the Makefile: it is well commented and lets you turn options on/off and specify where you want the spool, bin directory and config files to go. I created zfs filesystems for /usr/exim and /var/spool/exim. Also elected to make sure I had all the content scanning and TLS/SSL options turned ON. I've got Sun Studio 11 installed, and make picked up that compiler. Then ran make install to stick everything in the right place.

Before actually running exim, I had to make sure that clamav and spamassassin were up and running, so I got those to the point where the clamd and spamd daemons were running, then simply edited the /usr/exim/configure file to set things up.

Building Clamav

Clamav is cool because it has the "configure" script. So I ran it, it found everything it wanted, and then I ran make and it just compiled. Followed by "make install" to put it all in /usr/local (which is also a zfs filesystem).

Plugging it all in

I kind of cheated and hacked the /lib/svc/method/smtp-sendmail file. I did this because there are dependencies on sendmail which I need to maintain. I cheated a little bit more by having this same file start clamd, freshclam and spamd. It gives me less granularity of control, but the reality is that I usually manage all of these as a group anyway.

The other tiny problem I had was that spamd didn't honour the LD_LIBRARY_PATH variable because bits of it are setuid. (It's a Solaris security thing.) Anyway, this meant that I had to use crle to make sure that all the apps had /usr/sfw/lib and /usr/local/lib in their paths.
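The crle step was roughly this (a sketch; run crle with no arguments first to record the current defaults, since -u updates the live runtime linker configuration):

```
# crle -u -l /usr/sfw/lib -l /usr/local/lib
# crle
```

The second, bare crle just prints the resulting configuration so you can verify the new directories are on the default search path.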

In my next blog

I'll cover in some more detail the settings I used for exim, and the fun I had with outgoing SMTP e-mail.


More home server stuff

Almost there...

My "services" configurations are almost done now. I'm currently learning about the interactions among: Live Upgrade (which is REALLY cool), SVM mirrored root disks, GRUB menus, and ZFS. (oh, it'll be so much easier when zfs boot is available, but until then.....). The problem kind of starts with the fact that the boot failsafe doesn't recognise /dev/md devices, which means I need to break my mirrors if I need to fix a booting problem.... unless I've got Live Upgrade working which means I'll ALWAYS have a bootable environment.


The man page of lucreate suggests that I use lucreate to do my mirroring.

So, I've currently got the following:

2 boot disks partitioned as follows:

  • slice0=7GB
  • slice1=2GB
  • slice3=7GB
  • slice4=free
  • slice7=32MB 

Solaris nv Build 52 installed on c1d0s0, with an intent to mirror onto c2d0s0. My ABE for lu is going to be slice3 mirrored.

The lucreate man page says I simply need to type:

# lucreate -c slice0 -m /:/dev/md/dsk/d30:ufs,mirror \
-m /:/dev/dsk/c1d0s3,d31:attach \
-m /:/dev/dsk/c2d0s3,d32:attach -n slice3

which doesn't work due to a couple of "issues" with the interaction between LU and SVM. They are almost all fixed now, and the syntax of the command required is slightly different. I adopted the quick and easy workaround... simply create the metadevices manually first.

# metadb -a -f -c 3 c1d0s7 c2d0s7
# metainit d31 1 1 /dev/dsk/c1d0s3
# metainit d32 1 1 /dev/dsk/c2d0s3
# metainit d30 -m d31
# metattach d30 d32
# lucreate -c slice0 -m /:/dev/md/dsk/d30:ufs -n slice3

This fails after about 30 mins: my filesystem is full, with no room to write the GRUB config stuff. Hmm, my slice0 is only 54% full; slice3 is 100%. It's gone and copied all my data that's mounted via zfs. Referring to the manual again, I learn how lucreate decides what to put into the BE. It looks in vfstab. Zfs filesystems aren't listed there, so it figures that my /opt/SUNWspro (which is a zfs filesystem) is on the / partition, and dutifully copies it. Hmm. Need to exclude all my zfs filesystems. (There are a couple of bugs logged against this as well, as one would think that lu would be clever enough to do this by itself... and it does, kind of.)

# zfs list | awk '{print $5}' | grep -v \- > /luexcludefile   (edit it so that I only list top level directories)

Try the lucreate command again with -f /luexcludefile. This fails because of package dependencies (i.e. I'm excluding filesystems that are in the sadm/package files), so I need to use the -I flag. I also need to clean up the configuration before I start again; ludelete doesn't work properly because the process failed, so I manually delete the partially created BE by editing the /etc/lutab file. (ludelete is preferred; only do this if you think you know what you are doing.) My final, successful lucreate command was:

# lucreate -I -l /data/downloads/luerror -m /:/dev/md/dsk/d30:ufs -n slice3 -f /data/downloads/luexclude

So, I luactivate that and reboot. df -k shows I've forgotten to mirror the swap slice. Oops.

# mkfile 1024m /var/tmp/swapfile
# swap -a /var/tmp/swapfile
# swap -d /dev/dsk/c1d0s1
# metainit d21 1 1 /dev/dsk/c1d0s1
# metainit d22 1 1 /dev/dsk/c2d0s1
# metainit d20 -m d21
# metattach d20 d22
# swap -a /dev/md/dsk/d20
# swap -d /var/tmp/swapfile
# rm /var/tmp/swapfile
# (vi /etc/vfstab and change swap location to the md device.)

I'm now running off my mirrored BE: slice3. I ludelete the single-slice slice0, and repeat the process I've just done, rebuilding the slice0 BE as a mirrored slice.

I feel a lot safer now. I've got 2 mirrored bootable partitions on my system. I'd need to do a LOT of stuff wrong to get into an unbootable position (again).

Did I mention that the motivation for getting this all to work was that I managed to break my boot environment? (Don't ask, I'm too embarrassed to say how). And that the only way for me to recover was to re-jumpstart the system from my laptop? The whole system ended up being down for about 3 hours, as it took that long to get all the services reconfigured and ready to go again.


Thursday Nov 09, 2006

Following the dot in .....

Solaris x64 Home Server AND Desktop

For those of you who have been following Chris Gerhard's blog about his adventure in creating a home server running Solaris, none of this is going to be particularly new to you, because I'm shamelessly going to copy as much as he's done to work for me as well.


We both work on the same campus in the UK, and we've both been running our home networks using Sun's Cobalt Qubes for the past few years (not intentionally; I found that out AFTER I met him... actually our first conversation was probably via the Qube, when he rescued me after I turned my Qube into a brick (pun intended)), with relatively good success. However, the Qube has become a little long in the tooth and, frankly, not Web 2.0 compatible (or it's not worth the effort to make it so).


I was initially waiting for him to get it all sorted, and then duplicate everything he'd done (hardware and all).


However, I managed to win an Ultra40 with a 24inch LCD monitor at CEC2006 for the best blog coverage of the event, and have decided to use that as my "new" home server. (It came with 1GB RAM and an 80GB HDD :-( , so I've beefed it up to 4GB, and added another 80GB drive and a pair of 320GB drives which will be dedicated solely to zfs, which is good because then it can turn on the disk write cache. I've also created 2 partitions on the mirrored root disk for live upgrade purposes, and handed over the rest of the disk to zfs. I'm going to use that bit for disk-based backups of the important stuff on the 320GB disks.)


Given that it would be a crime to attach the 24 inch LCD to a windows box, I've decided to also make it my primary workstation at home, therefore I'm also going to be playing with JDS4 and making it all work great as a desktop as well.


Progress so far:

  • Solaris Nevada Build 51 installed

  • JDS4 beta (which will be in build 53 anyway; couldn't wait, as it has a heap of really groovy features such as the Rhythmbox music player and CD burner, and a host of friendly user applications)

  • Set up samba so wife and kids can get file and print access. (they were suitably unimpressed and didn't even notice that they were no longer running on a Qube with a 40GB drive and a broken mirror due to a failed disk. I guess that's a good thing)

  • Connected my HP5150 Deskjet via USB. Worked a charm, created a device called /dev/printer/0 and then used printmgr to configure it. (printmgr IS cool, but still slightly cryptic)

  • Discovered that the printmgr stuff worked fine for local printing, but couldn't handle the already-processed stuff that Windows sends it, so had to create an additional "raw" printer queue for printing from there. Couldn't figure out how to do it via printmgr, so did it the real man's way with lpadmin, accept and enable. Word of warning: "enable <printername>" doesn't work in the bash shell, because enable is a bash builtin; use /usr/bin/enable instead. (I'm sure I could fix it by messing with the filter, but this is easier. I hardly ever print from home when anyone else is printing anyway.)

  • Got xine working for playing movies (simply grabbed it from blastwave). (Changed nautilus default preferences by editing some files in /usr/share/application)
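The raw queue mentioned a couple of bullets up was created with something along these lines (the queue name is hypothetical, and the exact lpadmin flags needed to bypass filtering can vary by Solaris release):

```
# lpadmin -p hp5150-raw -v /dev/printer/0
# accept hp5150-raw
# /usr/bin/enable hp5150-raw
```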


My BIGGEST problem is that given I won the Ultra40 in the US, it was sent to me with a US UNIX keyboard. Which is kind of OK, given that I spent my formative years in the Caribbean which surprisingly also had US keyboards. However, I'm struggling having to context switch between the 2. (yes, they are different)


Things to do:

  • nameservices: dns and dhcp: I'd like to do static dhcp for some of the machines in the household, and I'd like internal nameservice resolution to work. Hence the need to own dns and dhcp, and not leave them to my braindead ADSL router.

  • smtp/imap: need spam filtering and virus checking. Chris has used exim/spamassassin/clamav, and it looks really good, but LOTS of configuration options. Might have to steal his config wholesale and tweak it.

  • Start doing my snapshotting and backing up onto an alternate disk for resilience. (I don't NEED to do that yet, as all my data is still sitting on the original machines.)

All in all, the Ultra40 is a stonking machine. Solaris with the latest JDS is easily a great desktop machine and it's all been relatively smooth so far.


Wednesday Oct 18, 2006

Project Blackbox

Five words:

Pimp my ride for GEEKS


Read about it here: Blackbox

Wednesday Oct 04, 2006

CEC2006 Closing

You could be the most depressed and hungover person in the world, but sit and listen to a talk by Jonathan Schwartz and you'll be rejuvenated, able to face the world with a spring in your step and a twinkle in your eye. Unless of course you happen to be one of Sun's competitors. (Read his blog.)

Couple of things worth repeating: Sun is HUGELY committed to Solaris on ALL platforms, i.e. Solaris x86/x64, the prodigal port, is home to stay. Sun is also massively committed to continuing STK's traditional Mainframe connect business. In other words, if you are an ex-STK customer, don't worry. We are not disinvesting in anything. (And we need the money.)

Jonathan also espoused a much more tactful approach to our competitors. Yes, we should still tease and bear-bait them ('cos that's what makes it fun), but also understand that in many different areas they are our partners as well. Play nice.

You know, Sun is really betting on the fact that "We get it, and they don't", unfortunately sometimes "they" are customers, and analysts. More are beginning to get it though. Bottom line is that we (me and Jonathan :-) ) are convinced that we are going in the right direction, and the longer you take to get it, the more you're gonna have to catch up.

The big news is that Jonathan has promised to cut his ponytail off when Sun stock hits £15.00. The scissors, as it were, are in your hands!


The Geeks are back in Town

Had a really good article sitting in my head, just needing to be typed up.

Jim and Dan did the last session of Tuesday. Jim finally explained what an Eigenvector was, and Dan explained what was being put in place to formalise Technical structure so that the geeks can actually start to begin to plan their careers.

To be honest, both put me in such a good mood that I celebrated too much at the CEC2006 party, and the article I was going to write kind of disappeared.

It might come back to me when I've recovered more completely, but until then, you're gonna have to just put up with this.

I'm sitting outside the general session waiting for it to start. This is the closing address of the conference, and Jonathan is going to be talking to us. Prepare to be enthused.


Tuesday Oct 03, 2006

Andy Bechtolsheim's CEC2006 Talk

He was GREAT!

But I can't say anymore.

One of the funny comments on the interactive messaging thing we were doing was a comment from someone along the lines of: Andy's processor never stalls, he never stops talking, even to take a breath. Andy replied that he once went to a speech therapist to help him become more understandable. He thought it was all about his accent. Basically, after 2 days she just said: t-a-l-k s-l-o-w-e-r....


Peter Weber and Ian White's CEC2006 Talk

Next up after David was the Services talk.

When I first came to CEC, or STS as it was then known, the conference was actually dominated by Support Services content, keynotes, and messaging. Over the years the balance has slowly shifted to the point where it's now dominated by pre-sales messaging (although the breakout sessions and demos are still a pretty good mix).

I've been told that the Services business at Sun accounts for about $950 million in revenue, which is a LOT of money and therefore should be really important to us.

Peter discussed how he is trying to simplify the business by consolidating the number of services we offer and categorising them properly. So basically, his division has done a huge clean-up campaign, EOLing part numbers that are no longer in use or irrelevant.

He has also repositioned all the services so that they fit into 4 clear categories which map clearly to the "Architect, Implement, Manage" phases of customer engagement (AIM).

So we end up with:

  • Professional Services - Architect

  • Support Services - Implement

  • Managed Services - Manage

  • Learning Services - ALL

I've grossly oversimplified and there is probably a little overlap between the PS and SS bits, but the big picture is about right.

He also stressed that we'd try and deliver these services by "integrating as a feature", i.e. including required services as part of the overall sale, and by using the network to allow as many of these services as possible to be delivered remotely (with the associated cost savings for the customer). This is particularly relevant in the Managed and Learning Services spaces (web-based training, remote management).

Ian White then spoke specifically about the Support Services organisation. His areas of focus were:

  • Engineering Alignment
    • Automated Services

  • Innovation and Technical Competence
    • Investing in Skills

    • Re-investing in Customer Engineering

    • Re-Skilling the Engineering Team

In short, in alignment with the Services that Peter Weber is developing, Ian committed to ensuring that the Customer Engineering people would be given all the tools and support required to ensure that they can and continue to be able to deliver the Services to customers. (Well, that's what I took away from it).


David Yen's CEC2006 Talk

First up on Tuesday morning was David Yen, who is now responsible for our Storage Product Portfolio.

I was particularly interested in hearing what he had to say, because this will be the first full year where he is responsible for Storage, and secondly, it's the first full year with StorageTek fully under our belt.

One of the things that a lot of people don't get is that StorageTek's main IP lived/lives in the Tape business. And it's a HUGE tape business. One of the metrics given out is that 65% of all data lives on Sun Storage. (and that's mainly due to STKs archival dominance in the Mainframe world).

The other good thing is that because David has been part of the Server Product group in the past, there's a good chance that we are gonna see much more connectedness between the products coming out of both the Server and Storage groups.

His first couple of slides covered what Sun's Storage strategy would be going forward:

  • Maintain Leadership in long term archive.

  • Partnering for Best of Breed Disk.

  • Innovate in new Storage Paradigm.

StorageTek has some really groovy Virtual Tape systems, and whole ILM management frameworks. We are going to continue to invest in these areas and refine the ILM model to take a more consultative approach which analyses the customer's complete Data Management requirements, and designs solutions based on those more complete requirements.

On the second point, I think David is acknowledging the fact that we probably can't really do disk in-house. What I mean by that is that we don't actually make the physical disks anyway, and there are other companies who are investing in building the best and most efficient HBAs, Disk controllers, RAID devices, and that we should work closely with those companies to create Sun branded Storage solutions based on our requirements. We've been doing this for our low end and to a certain extent mid range storage very successfully for a while. If it ain't broke. Don't fix it.

The opportunity seems to be adding value at the level above that: providing solutions that deal with metadata rather than disk blocks, and later on providing application data rather than metadata. Other than Thumper, which is a general purpose box with a huge storage capacity that can be made to act as much more than a standard storage array or NAS box (it's running Solaris 10, you know), he was a little coy about what products we'd be launching in that space. I'm pretty sure he's got something interesting up his sleeve.


John Fowler's CEC2006 Talk

Monday's keynote session climaxed with John Fowler who reminded us that even with our navel gazing about internal structure, and star gazing trying to understand Greg's Vision about Sun's direction as a technology innovator, that in the end, right now, we are a company that invents and builds some really really really cool technology. A bunch of it is shipping today, and there's a bunch more in the pipeline.

If you don't already know, we've got an industry leading range of AMD Opteron based servers that run Solaris, Linux and mumble mumble Windows in the form of the X2x00 and X4x00 servers, otherwise loosely known as Galaxy. We've also got a brand new blade story, and on the SPARC front we've got the CMT based servers T1000 and T2000, which are the most environmentally friendly servers on the planet. Oh, and I forgot to mention Thumper. Is it a server or a storage box? You decide.

I get really confused about what's been announced and what hasn't so I won't mention anything more except to say its all pretty cool. We're doing some really smart things with chassis design which means that the UltraSPARC and Opteron range of boxes will be sharing an amazing number of components.

Feel good factor without having to think too hard!!


Wond'ring Around

Hung around on Monday afternoon. Went to 2 talks.... no comment.

Other than the talks, there are a few cool things to see at the demo areas.

The Storage Pavilion is obvious. If you've never seen a big tape library before, here's your chance. In the demo area there's also a display of some almost-ready-to-release server products; you should go and have a look so you'll at least have touched/seen/stroked one before your customer does.

If you want to see/touch what Web 2.0 is about, go to 303 where the IC crowd are. Have a chat with Peter Reiser who can show you what's next in collaboration and tagging and participative computing at Sun.

If you don't already have Solaris on your laptop, go to the Installfest, and one of the guys there will stick the latest and greatest on for you.

Saw a "real" fire for the first time in my life on Monday: there was a building about a block away, visible from the 3rd floor area, billowing black smoke with a healthy amount of flames emanating from the roof. Saw firefighters climbing in and out of windows and dousing the flames as best they could. Looks like they got everyone out, so hopefully no-one got hurt.

I've only mentioned a few of the things, but there's easily a couple hours worth of browsing throughout all the demo areas.


Greg's CEC2006 talk.

The next big keynote was Greg Papadopoulos, who is the EVP for Research and Development and CTO. (His role in itself is a big thing, as it gives our chief techie jurisdiction over where we spend our research dollars... I'm pretty comfortable with that.)

He was the second person to use the Eigen word, but the context surrounding its usage gave the audience a much better idea of what it actually meant.

My interpretation of Greg's talk (grossly oversimplified... you had to be there) was that if you take Moore's Law into account, the projected increase in expected performance easily outgrows traditional enterprise requirements, which means that we don't need to do anything special to sustain the traditional enterprise business. (This doesn't apply to the standard consumer PC environment, where Microsoft bloatware expands in pace with everything that Intel and Moore's Law throw at it, which is why my 9 year old PC running Windows 98 seems to run faster than my 1 year old PC running Windows XP.)

However, the increase of bandwidth to the home has increased the demand for a completely new breed of applications and services, and that these new application and services are characterised by the need to "fill the pipes", i.e. the ability to stream large amounts of data, and the ability to store large amounts of data to be streamed.

The epiphany was that the R&D efforts of 5(ish) years ago completely fulfill the needs of this new breed of applications, in the form of our CMT (Chip Multi Threading) processors and our breakthrough storage appliance: Thumper/X4500.

That's pretty cool.

Greg then put his CTOs (Mike Splain, Jeff Bonwick, Tim Marsland and Bob Brewin) on the spot to do the post-speech Q&A. I might have zoned out at this point, because I can't remember any of the specific questions asked, but it went on for about 20 minutes, and no-one embarrassed themselves.




