Tuesday Apr 21, 2009

Video: Top 5 Cool Features of the Sun Storage 7000 Unified Storage Systems

A couple of weeks ago, Marc (our producer from the HELDENFunk Podcast) and I sat down and put together a video about the top 5 reasons why the new Sun Storage 7000 systems are so cool. We even "invited" Brendan Gregg to show us his latest trick:

For the next video, I'll try to learn more phrases by heart and look less at the prompter screen for a more natural feel. I apologize for my German accent (some people say it adds credibility :) ). Still, people seem to like the video, at least it has been viewed about 200 times already.

There's a lot of discussion around the Sun Storage 7000, and most of it is very positive. In Germany, we like to complain a lot, so of course we also hear a lot of constructive criticism. Most of the comments I hear fall into one of two categories:

  1. The Storage 7000 systems are cool, but I know ZFS/OpenSolaris can do "X" and I really want this to be in the Storage 7000 GUI as well!
    Yes, we know that there are still many features we'd like to see in the Storage 7000 and we're working on making them available. Make sure your Sun contact knows about your wishlist, so she can forward it to our engineers. Please remember that the Storage 7000 systems are meant to be easy-to-use appliances: Taking your "X" feature from ZFS/OpenSolaris and building a GUI around it is a hard thing to do, especially if you want it to work reliably and if you want it to be self-explanatory and self-serviceable. Please be patient, we're most probably working on your favourite features already.

  2. The Storage 7000 systems are cool, but I want more control. I want to change the hardware/hack them/take them apart/add more functionality/get them to do exactly what I want, etc.
    Sure, that feature is called "OpenSolaris". Please go to OpenSolaris.org, download the CD, install it on your favourite hardware and off you go!
    But, can I have the GUI, too, maybe as an SDK of some sort?
    No. The Storage 7000 systems are not "just a GUI". They are full-blown appliances which means that they're more than just the hardware and a GUI. A big part of the ease-of-use, stability, performance and predictability of these products is in the way configuration options are selected, tested and yes, limited, as well as a careful consideration of which features to implement at what time and which not. Only then comes the GUI on top, which is tailored to the overall product as a whole. In other words: You wouldn't go to BMW and ask them to give you their dashboard, radio and the lights so you can bolt them onto a Volkswagen, would you?

You see, either you build your own storage machine out of the building blocks you have, and get all the functionality and flexibility you want at the expense of some configuration effort,
or you buy the car as a whole, nice, round, sweet package, so you don't worry about configuration, implementation details, complexity, etc. Asking for anything in between will get you into trouble: Either you'll spend more effort than you want, or you won't get the kind of control you want.

If you understand German, there's some discussion of this topic as well as a great overview of the MySQL future plus a primer on SSDs in the latest episode of the HELDENFunk podcast.

And if you like the Sun Storage 7000 Unified Storage Systems as much as I do, here are the slides in StarOffice format, as well as in PDF format, so you can tell your colleagues and friends as well.

Thursday Jan 08, 2009

Making 3D work over VNC

Dave recently played around with VNC on his computer and an iPod touch. While it worked surprisingly well, the Achilles' heel of many remote access solutions kicks in when you try doing some 3D stuff, such as a game, Second Life or maybe a scientific application.

This reminds me of one of the best kept secrets at Sun: We fixed the 3D-over-VNC problem.

[Image: International Supercomputing Conference 2008, LRZ booth showing 3D remote visualization]

Just check out the Sun Shared Visualization Software: it is free, based on open source packages, and it works like a charm. For example, here is a picture from the ISC 2008 conference in Dresden, showing a molecular visualization program in 3D stereo at the LRZ booth. The program is actually running in Garching near Munich.

That's right: the server runs in Munich, the client is in Dresden, there are more than 400 km between them as the crow flies (probably close to double that in terms of network line), and we saw close to 30 frames per second of intricate molecular modeling madness that we could manipulate interactively as if the server were around the corner. In this case, the "server" was a supercomputer that fills the halls of the LRZ compute center, so it wouldn't quite fit on the show floor. Instead, they used Sun Shared Visualization to deliver the images, not the whole supercomputer, to Dresden.

And this is an increasingly common theme in HPC: As data amounts get bigger and bigger (Terabytes are for sissies, it's Petabytes where the fun starts) and compute clusters get bigger and bigger (think rows after rows of racks), your actual simulation becomes harder to transport (a truck is still the cheapest, fastest and easiest way to transmit PB-class data across the nation). The key is: You don't need to transport your data/your simulation/your research. You just need to show the result, and that is just pictures.

Even if it's 3D models at 30 frames per second (= interactive speed) with 1920x1080 pixels (= HDTV) per frame, that's only about 180 MB per second uncompressed. And after some efficient compression, it boils down to just a fraction of that.
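As a quick sanity check on that number (assuming 24-bit color, i.e. 3 bytes per pixel):

```shell
# Uncompressed bandwidth for HDTV-sized frames at interactive rates
bytes_per_frame=$((1920 * 1080 * 3))        # 6,220,800 bytes per frame
bytes_per_second=$((bytes_per_frame * 30))  # 30 frames per second
echo "$((bytes_per_second / 1024 / 1024)) MB/s uncompressed"
# prints: 177 MB/s uncompressed
```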

This means that you can transmit HDTV at interactive speeds in real time across a GbE line without any noticeable degradation of image quality. If you're restricted to 100 Mbit/s or less, you can still choose between interactive speeds (with some degradation of picture quality), high-quality images (with some sacrifice in speed), or a mixture (less quality while spinning the model, hold the mouse still to get the nicer picture). And this is completely independent of the complexity of the model being computed on the back-end server.

The Sun Shared Visualization Software is based on VirtualGL and TurboVNC, which are two open source projects that Sun is involved in. It also provides integration with the Sun Grid Engine, so you can allocate multiple graphics cards and handle reservations like "I need 3 cards on Monday, 3-5 PM for my presentation" automatically.

So, if you use a 3D application running on Linux or Solaris and you want to have access to it from everywhere, check out the Sun Shared Visualization Software for free and let me know what you've done with it. Also, make sure to check out Linda's blog, she runs the developer team and would love to get some feedback on what people are using it for.

P.S.: There's some subtle irony in the LRZ case. If you check their homepage, their supercomputer has been built by SGI. But their remote visualization system has been built by Sun. Oh, and we now have some good supercomputer hardware, too.

Tuesday Dec 16, 2008

New OpenSolaris Munich User Group

[Image: The Munich OpenSolaris User Group (MUCOSUG) logo]

Munich is one of the IT centers of Germany. Some would say, the IT center in Germany. Most popular IT and media companies are based here, including Sun Germany, and of course Bavaria has the reputation of being an important technology powerhouse for Germany, between Laptops and Lederhosen.

It was about time someone created a Munich OpenSolaris User Group, and that's just what Wolfgang and I did.

So, if you love OpenSolaris and happen to be near Munich, welcome to the Munich OpenSolaris User Group (MUCOSUG). Feel free to visit our project page, subscribe to the mailing list, watch our announcements or participate in our events.

As you can see above, we already have a logo. It shows a silhouette of the Frauenkirche church, which is a signature landmark of downtown Munich, with the Olympiaturm tower in the background. This is meant to symbolize the old and new features of Solaris, but let's not get too sentimental here... Let us know if you like it, or provide your own proposal for a better logo, this is not set in stone yet.

Our first meeting will be on January 12th, 2009, 7-11 PM (19:00-23:00) at the Sun office near Munich, Germany. Check out some more information about this event. We're looking forward to meeting you!



Sunday Oct 12, 2008

BarCamp Munich 2008 - Enterprise 2.0, Open Source and the Future of Technology

I'm astonished to see that I haven't blogged in so long. Sorry to my readers; it's been a very busy time lately, and I hope I can write more in the coming weeks. I also owe an apology to the people who pointed out a bug with my ZFS replicator script and cron(1M); I'll look into it and make it my next post.

Barcamp at Sun in Munich

Yesterday, I attended Barcamp Munich 2008 which was sponsored by Sun (among other cool sponsors) and so it took place in the Sun Munich offices.

I was surprised to see that both sessions I proposed were accepted, plus one about Open Source Software at Sun that my colleague Stefan proposed with some support by me.

You can find a list of sessions for Saturday and Sunday on the web and it pays off to check back regularly, as the wiki is filling up with more and more collateral information around each track.

So, here's a roundup of session descriptions, slides and other links and materials for those of you who attended my sessions or could not attend, in chronological order.

Enterprise 2.0 - From Co-Workers to Co-Creators

[Image: Central slide about Enterprise 2.0]

This session was similar to the talk I did at Webkongress Erlangen a few months ago.

We had about 20 people in the room and quite a fruitful discussion on how to motivate employees to use new tools, how to guide employee behaviour and the challenges of opening up a company and making it more transparent.

Feel free to glance through my Enterprise 2.0 slides or read an earlier blog entry on a related subject. Also, check out Peter Reiser's blog, he has a number of great articles from behind the scenes of our SunSpace collaboration project.

Open Source Software at Sun

Stefan Schneider proposed a session about great software products that are available from Sun for free, as open source. We went through his list from least well-known to most popular.

Obviously, MySQL, StarOffice and OpenSolaris were at the end, but the more interesting software products were those that made the attendees go "Oh, I didn't know that!". One example of this category was Lightning, a calendar extension for Thunderbird.

Stefan recently posted his slides into the Sun Startups blog, thanks, Stefan!

The Future of Technology in 10, 20, 30 Years and More

This was a spontaneous talk that I offered after having seen the Barcamp Munich wishlist where people asked for a session on future technology developments, their effects on society and how one can cope with it.

I took some slides from a couple of earlier talks I did a while ago on similar topics and updated it for the occasion. The updated "Future of Technology" slidedeck is in German, but if enough people are interested, I can provide a translated version as well.

We started by looking at Moore's Law as an indicator of technology development. In "The Age of Spiritual Machines", Ray Kurzweil, a well-known futurist, pointed out that this law also holds for technology prior to integrated circuits, all the way down to Charles Babbage's difference engine of the 19th century.

With that in mind, we can confidently extend Moore's Law into the future, knowing that even if traditional chip technology ceases to deliver on Moore's Law, other technologies will pick up and help us achieve even higher amounts of computing power per amount of money/space/energy. Again, Kurzweil points out that if we compare the amount of computational power that one can purchase for $1000 for a given year with the complexity of all neurons of a brain and their connections to neighbouring neurons at their typical firing frequency, then the 2020s will be an interesting decade.

Key technologies of the future will be: Genetics and Biotechnology, Robotics and Nanotechnology.

[Image: DNA being replicated]

We watched a fascinating video about Molecular Visualizations of DNA (here's a longer, more complete version) that made us witnesses of DNA being replicated, right before our eyes, at a molecular level. It's amazing to see how mechanical this process looks, almost like industrial robots grinding away on ribbons of DNA, cutting pieces, replicating them, then splicing them back in. In the near future, we will see personalized medicine, based on our own DNA, and optimized for our individual needs as well as novel applications of biotechnology for clean energy, new materials and the assembly of early molecular machines.

Robotics is another fascinating area of technology, and we're seeing more and more robots enter our day-to-day life. Industrial and military robots may be an "old hat", but did you know that today, millions of households are already using robots to vacuum their floors, mow their lawns or perform other routine work? And we will see many more robots in the future, I'm sure. Meanwhile, I'm happy to say that my Roomba robot indeed saves a lot of precious time while fulfilling my natural geeky desire for cool gadgetry.

Finally, Nanotechnology will open up a new category of advanced technology that will affect all aspects of human life, the environment and the world. We watched a vision of a future nanofactory that fits onto a common desk and is capable of manufacturing an advanced laptop with 100 hours of battery life and a billion CPUs. But nanotechnology can do much more: highly efficient solar cells, clean water, lightweight spacecraft, nanobots that clean up your bloodstream, more advanced versions of your organs, brain implants and extensions, virtual reality that is indistinguishable from real reality, and much more.

Check out the Foresight Institute's introduction to nanotechnology for more information about this fascinating topic, including a free PDF download of K. Eric Drexler's book "Engines of Creation". Real engineers will probably want to take a look at his textbook "Nanosystems: Molecular Machinery, Manufacturing, and Computation".

One controversial topic when discussing the future is the Technological Singularity. This is the point in time where artificial intelligence becomes powerful enough to create new technology on its own, thereby accelerating the advancement of technology without human intervention. A discussion of this topic can be found in Kurzweil's latest book, "The Singularity Is Near".

Another great way to think about the future is to read Stefan Pernar's sci-fi thriller "Jame5 - A Tale of Good and Evil". This book starts in the best Michael Crichton style and then becomes a deep and thoughtful discussion around the philosophy of the future, when mankind confronts the powers of strong AI. You can buy the book or just download the PDF for free. Highly recommended.

One of my favourite quotations is said to be an old Chinese curse: "May you live in interesting times."

Many thanks to all the people that I met during, or attended my sessions at, Barcamp Munich 2008, it was a most interesting event.

Edit (Oct., 13th): Meanwhile, a few blog reactions are rolling in: Dirk wrote a nice summary on the Enterprise 2.0 session (in German) while Ralph summarized the Future technology session (German as well). I found them through Markus' Barcamp Munich 2008 session meta entry. Thanks to all! Also, Stefan has posted his slides from the open source talk, see above.

Edit (Oct. 14th): Here are some more notes from Stefan Freimark (in German). Thank you!

Wednesday Aug 13, 2008

ZFS Replicator Script, New Edition

[Image: Many crates on a bicycle, a metaphor for ZFS snapshot replication]

About a year ago, I blogged about a useful script that handles recursive replication of ZFS snapshots across pools. It helped me migrate my pool from a messy configuration into the clean two-mirrored-pairs configuration I have now.

Meanwhile, the fine folks on the ZFS developer team introduced recursive send/receive into the ZFS command, which turns most of what the script does into a simple -R flag to zfs send(1M).
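On builds that have the new feature, a whole recursive migration collapses to something like this (hypothetical pool names, sketched from the docs rather than tested here):

```
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs receive -F -d newpool
```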

Unfortunately, this new version of the ZFS command has not (yet?) been ported back to Solaris 10, so my ZFS snapshot replication script is still useful for Solaris 10 users, such as Mike Hallock from the School of Chemical Sciences at the University of Illinois at Urbana-Champaign (UIUC). He wrote:

Your script came very close to exactly what I needed, so I took it upon myself to make changes, and thought in the spirit of it all, to share those changes with you.

The first change he introduced was the ability to supply a pattern (via -p) that selects which of the potentially many snapshots to replicate. Like me, he's a user of Tim Foster's excellent automatic ZFS snapshot service, and he wanted to base his migration solely on the daily snapshots, not any other ones.

Then, Mike wanted to migrate across two different hosts on a network, so he introduced the -r option that allows the user to specify a target host. This option simply pipes the replication data stream through ssh at the right places, making ZFS filesystem migration across any distance very easy.
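I haven't dug through Mike's version line by line, but conceptually the -r option just inserts ssh into the send/receive pipeline. A minimal sketch (hypothetical dataset and host names; the helper only builds the command string instead of running it):

```shell
# Build the zfs-send-over-ssh pipeline that remote replication
# would execute (dry run: we only construct the command string).
replicate_cmd() {
  src_snap="$1"; dst_fs="$2"; remote="$3"
  echo "zfs send $src_snap | ssh $remote zfs receive -F $dst_fs"
}
replicate_cmd tank/home@daily backup/home target.example.com
```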

The updated version including both of the new features is available as zfs-replicate_v0.7.tar.bz2. I didn't test this new version but the changes look very good to me. Still: Use at your own risk.

Thanks a lot, Mike! 

Thursday Mar 20, 2008

How to compile/run MediaTomb on Solaris for PS3 and other streaming clients

[Image: MediaTomb showing photos on a PS3]

Before visiting CeBIT, I went to see my friend Ingo who works at the Clausthal University's computing center (where I grew up, IT-wise). This is a nice pre-CeBIT tradition we keep over the years when we get to watch movies in Ingo's home cinema and play computer games all day for a weekend or so :).

To my surprise, Ingo got himself a new PlayStation 3 (40GB). The new version is a lot cheaper (EUR 370 or so), less noisy (new chip process, no PS2 compatibility), and since HD-DVD is now officially dead, it's arguably the best value for money in Blu-Ray players right now (regular firmware upgrades, good picture quality, digital audio and enough horsepower for smooth Java BD content). All very rational and objective arguments to justify buying a new game console :).

The PS3 is not just a Blu-Ray player, it is also a game console (I recommend "Ratchet & Clank: Tools of Destruction" and the immensely cute "LocoRoco: Cocoreccho!", which is a steal at only EUR 3) and can act as a media renderer for DLNA compliant media servers: Watch videos, photos and listen to music in HD on the PS3 from your home server.

After checking out a number of DLNA server software packages, it seemed to me that MediaTomb is the most advanced open source one (TwonkyVision seems to be nicer, but sorry, it isn't open source...). So here is a step-by-step guide on how to compile and run it on a Solaris machine.

Basic assumptions

This guide assumes that you're using a recent version of Solaris. This should be at least Solaris 10 (it's free!), a current Solaris Express Developer Edition (it's free too, but more advanced) is recommended. My home server runs Solaris Express build 62, I'm waiting for a production-ready build of Project Indiana to upgrade to.

I'm also assuming that you are familiar with basic compilation and installation of open source products.

Whenever I compile and install a new software package from scratch, I use /opt/local as my base directory. Others may want to use /usr/local or some other directory (perhaps in their $HOME). Just make sure you use the right path in the --prefix=/your/favourite/install/path part of the ./configure command.

I'm also trying to be a good citizen and use the Sun Studio Compiler here where I can. It generally produces much faster code on both SPARC and x86 architectures vs. the ubiquitous gcc, so give it a try! Alas, sometimes the code was really stubborn and it wouldn't let me use Sun Studio so I had to use gcc. This was the path of least resistance, but with some tinkering, everything can be made to compile on Sun Studio. You can also try gcc4ss which combines a gcc frontend with the Sun Studio backend to get the best of both worlds.

Now, let's get started!

MediaTomb Prerequisites

Before compiling/installing the actual MediaTomb application, we need to install a few prerequisite packages. Don't worry, most of them are already present in Solaris, and the rest can be easily installed as pre-built binaries or easily compiled on your own. Check out the MediaTomb requirements documentation. Here is what MediaTomb wants:

  • sqlite3, libiconv and curl are available on BlastWave. BlastWave is a software repository for Solaris packages that has almost everything you need in terms of pre-built open source packages (but not MediaTomb...). Setting up BlastWave on your system is easy, just follow their guide. After that, installing the three packages above is as easy as:
    # /opt/csw/bin/pkg-get -i sqlite3
    # /opt/csw/bin/pkg-get -i libiconv
    # /opt/csw/bin/pkg-get -i curl
  • MediaTomb uses a library called libmagic to identify file types. It took a little research until I found out that it is part of the file package that is shipped as part of many Linux distributions. Here I'm using file-4.23.tar.gz, which seems to be a reasonably new version. Fortunately, this is easy to compile and install:

    $ wget ftp://ftp.astron.com/pub/file/file-4.23.tar.gz
    $ gzip -dc file-4.23.tar.gz | tar xvf -
    $ cd file-4.23
    $ CC=/opt/SUNWspro/bin/cc ./configure --prefix=/opt/local
    $ gmake
    $ su
    # PATH=$PATH:/usr/ccs/bin:/usr/sfw/bin; export PATH; gmake install

    Notice that the last step is performed as root for installation purposes while compilation should generally be performed as a regular user.

  • For tag extraction of MP3 files, MediaTomb uses taglib:
    $ wget http://developer.kde.org/~wheeler/files/src/taglib-1.5.tar.gz
    $ gzip -dc taglib-1.5.tar.gz | tar xvf -
    $ cd taglib-1.5
    $ CC=/usr/sfw/bin/gcc CXX=/usr/sfw/bin/g++ ./configure --prefix=/opt/local
    $ gmake
    $ su
    # PATH=$PATH:/usr/ccs/bin:/usr/sfw/bin; export PATH; gmake install
  • MediaTomb also uses SpiderMonkey, which is the Mozilla JavaScript Engine. Initially, I had some fear about having to compile all that Mozilla code from scratch, but then it dawned on me that we can just use the JavaScript libraries that are part of the Solaris Firefox standard installation, even the headers are there as well!

That was it. Now we can start building the real thing...

Compiling and installing MediaTomb

Now that we have all prerequisites, we can move on to downloading, compiling and installing the MediaTomb package:

  • Download the MediaTomb source from http://downloads.sourceforge.net/mediatomb/mediatomb-0.11.0.tar.gz
  • Somehow, the MediaTomb developers want to enforce some funny LD_PRELOAD games, which is unnecessary (at least on recent Solaris versions...). So let's throw that part of the code out: Edit src/main.cc and comment out lines 128-141 by adding /* before line 128 and */ at the end of line 141.
  • Now we can configure the source to our needs. This is where all the prerequisite packages from above are configured in:
    --prefix=/opt/local --enable-iconv-lib --with-iconv-h=/opt/csw/include
    --with-iconv-libs=/opt/csw/lib --enable-libjs
    --with-js-h=/usr/include/firefox/js --with-js-libs=/usr/lib/firefox
    --enable-libmagic --with-magic-h=/opt/local/include
    --with-magic-libs=/opt/local/lib --with-sqlite3-h=/opt/csw/include

    Check out the MediaTomb compile docs for details. One hurdle here was to use an extra iconv library because the MediaTomb source didn't work with the gcc built-in iconv library. Also, there were some issues with the Sun Studio compiler, so I admit I was lazy and just used gcc instead. 

  • After these preparations, compiling and installing should work as expected:
    PATH=$PATH:/usr/ccs/bin:/usr/sfw/bin; export PATH; gmake install

Configuring MediaTomb

Ok, now we have successfully compiled and installed MediaTomb, but we're not done yet. The next step is to create a configuration file that works well. An initial config will be created automatically during the very first startup of MediaTomb. Since we compiled in some libraries from different places, we either need to set LD_LIBRARY_PATH during startup (i.e. in a wrapper script) or update the linker's path using crle(1).

In my case, I went for the first option. So, starting MediaTomb works like this:

/opt/local/bin/mediatomb --interface bge0 --port 49194 --daemon \
    --pidfile /tmp/mediatomb.pid

Of course you should substitute your own interface. The port number is completely arbitrary, it should just be above 49152. Read the command line option docs to learn how they work.
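If you go the wrapper-script route, something along these lines should do. Note that the exact library paths are assumptions based on the install locations used above; adjust them to your system:

```
#!/bin/sh
# Extend the runtime linker path with the locations we compiled
# against, then start the daemon.
LD_LIBRARY_PATH=/opt/csw/lib:/opt/local/lib:/usr/lib/firefox
export LD_LIBRARY_PATH
exec /opt/local/bin/mediatomb --interface bge0 --port 49194 --daemon \
    --pidfile /tmp/mediatomb.pid
```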

You can now connect to MediaTomb's web interface and try out some stuff, but the important thing here is that we now have a basic config file in $HOME/.mediatomb/config.xml to work with. The MediaTomb config file docs should help you with this.

Here is what I added to my own config and why:

  • Set up an account for the web user interface with your own user id and password. It's not the most secure server, but better than nothing. Use something like this in the <ui> section:
    <accounts enabled="yes" session-timeout="30">
      <account user="me" password="secret"/>
    </accounts>
  • Uncomment the <protocolInfo> tag because according to the docs, this is needed for better PS3 compatibility.
  • I saw a number of iconv errors, so I added the following to the config file in the import section. Apparently, MediaTomb can better handle exotic characters in file names (very common with music files) with the following tag:
  • The libmagic library won't find its magic information because it's now in a nonstandard place. But we can add it with the following tag, again in the import section:
  • A few mime types should be added for completeness:

    <map from="mpg" to="video/mpeg"/>
    <map from="JPG" to="image/jpeg"/>
    <map from="m4a" to="audio/mpeg"/>

    Actually, it should "just work" through libmagic, but it didn't for me, so adding these MIME types was the easiest option. It also improves performance by saving libmagic calls. Most digital cameras use the uppercase "JPG" extension and MediaTomb seems to be case-sensitive, so adding the uppercase variant was necessary. It's also apparent that MediaTomb doesn't have much support for AAC (.m4a), even though it is the official successor to MP3 (more than 95% of my music is in AAC format, so this is quite annoying).

  • You can now either add <directory> tags to the <autoscan> tags for your media data in the config file, or add them through the web interface.
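To illustrate why the uppercase "JPG" mapping above is needed: a case-sensitive matcher only sees the extension verbatim. A quick shell demo (ext_of is just a throwaway helper for the illustration, not part of MediaTomb):

```shell
# Show a file's extension exactly as a case-sensitive matcher sees it
ext_of() { echo "${1##*.}"; }
ext_of IMG_0001.JPG   # prints: JPG (a lowercase "jpg" rule won't match)
ext_of photo.jpg      # prints: jpg
```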

[Image: MediaTomb browser on a PS3]

This is it. The pictures show MediaTomb running in my basement and showing some photos through the PS3 on the TV set. I hope that you can now work from here and find a configuration that works well for you. Check out the MediaTomb scripting guide for some powerful ways to create virtual directory structures of your media files.

MediaTomb is OK for showing movies, pictures and the occasional song on the PS3, but it's not perfect yet. It lacks support for AAC (tags, cover art, etc.) and it could use some extra scripts for more comfortable browsing structures. But that's the point of open source: Now we can start adding more features to MediaTomb ourselves and bring it a few steps closer to usefulness.

Monday Feb 25, 2008

Meet Me Next Week at CeBIT 2008


CeBIT is the world's largest IT trade show. Whenever we mention this to our colleagues in the US, they say "sure". Only when they actually come over to our booth and experience the CeBIT feeling do they realize how big it really is. Most US trade shows use one really big exhibition hall. CeBIT has 21 (twenty-one) of them. Plus the space in between. Bring some good shoes.

CeBIT 2008 will take place next week, March 4-9 in Hannover, Germany. If you go there, visit the Sun booth. We'll have systems, storage, software and service exhibits, a Blackbox, even an installation of Project Wonderland.

I'll be at the Solaris part of the booth, talking to customers about Niagara 2 and other CPU and system technologies, Solaris, OpenSolaris and ZFS, HPC and Grid Computing, Web 2.0 and what not. If you read this blog, stop by and say hi. Let me know what you like and what you don't like about this blog, about Sun or whatever else goes through your mind. I'll bring my voice recorder and a camera, and we can talk about your own cool projects in a podcast interview that we can then publish through the HELDENFunk podcast. Join the System Heroes (or the German Systemhelden) and get a T-shirt, or I'll try to organize one of those champagne VIP passes for you. Just ask for me at the info counter.

See you at CeBIT!

Tuesday Feb 19, 2008

VirtualBox and ZFS: The Perfect Team

I've never installed Windows in my whole life. My computer history includes systems like the Dragon 32, the Commodore 128, then the Amiga, Apple PowerBook (68k and PPC), etc., plus the occasional Sun system at work. Even the laptop my company provided me with only runs Solaris Nevada, nothing else. Today, this has changed.

A while ago, Sun announced the acquisition of Innotek, the makers of the open-source virtualization software VirtualBox. After having played with it for a while, I'm convinced that this is one of the coolest innovations I've seen in a long time. And I'm proud to see another innovative German company join the Sun family. Welcome, Innotek!

Here's why this is so cool.

Windows XP running on VirtualBox on Solaris Nevada

After having upgraded my laptop to Nevada build 82, I had VirtualBox up and running in a matter of minutes. OpenSolaris Developer Preview 2 (Project Indiana) runs fine on VirtualBox, so does any recent Linux (I tried Ubuntu). But Windows just makes for a much cooler VirtualBox demo, so I did it:

After 36 years of Windows freedom, I ended up installing it on my laptop, albeit on top of VirtualBox. Safer XP, if you will. Above, you see my VirtualBox running Windows XP in all its Tele-Tubby-ish glory.

As you can see, this is a plain vanilla install, I just took the liberty of installing a virus scanner on top. Well, you never know...

So far, so good. Now let's do something others can't. First of all, this virtual machine uses a .vdi disk image to provide hard disk space to Windows XP. On my system, the disk image sits on top of a ZFS filesystem:

# zfs list -r poolchen/export/vm/winxp
NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
poolchen/export/vm/winxp                                     1.22G  37.0G    20K  /export/vm/winxp
poolchen/export/vm/winxp/winxp0                              1.22G  37.0G  1.05G  /export/vm/winxp/winxp0
poolchen/export/vm/winxp/winxp0@200802190836_WinXPInstalled   173M      -   909M  -
poolchen/export/vm/winxp/winxp0@200802192038_VirusFree           0      -  1.05G  -

Cool thing #1: You can do snapshots. In fact I have two snapshots here. The first is from this morning, right after the Windows XP installer went through, the second has been created just now, after installing the virus scanner. Yes, there has been some time between the two snapshots, with lots of testing, day job and the occasional rollback. But hey, that's why snapshots exist in the first place.
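The snapshots themselves are one-liners; the two listed above were created with commands along these lines (and zfs rollback takes you back to a snapshot just as easily):

```
# zfs snapshot poolchen/export/vm/winxp/winxp0@200802190836_WinXPInstalled
# zfs snapshot poolchen/export/vm/winxp/winxp0@200802192038_VirusFree
```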

Cool thing #2: This is a compressed filesystem:

# zfs get all poolchen/export/vm/winxp/winxp0
NAME                             PROPERTY         VALUE                    SOURCE
poolchen/export/vm/winxp/winxp0  type             filesystem               -
poolchen/export/vm/winxp/winxp0  creation         Mon Feb 18 21:31 2008    -
poolchen/export/vm/winxp/winxp0  used             1.22G                    -
poolchen/export/vm/winxp/winxp0  available        37.0G                    -
poolchen/export/vm/winxp/winxp0  referenced       1.05G                    -
poolchen/export/vm/winxp/winxp0  compressratio    1.53x                    -
poolchen/export/vm/winxp/winxp0  compression      on                       inherited from poolchen

ZFS has already saved me more than half a gigabyte of precious storage capacity!
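For reference, compression is just a per-filesystem property; setting it on a parent lets children inherit it, as the SOURCE column above shows. A sketch with the pool name from above (note that only data written after the change gets compressed; existing blocks stay as they are):

```shell
# Turn on compression for the pool; child filesystems inherit it:
zfs set compression=on poolchen
# Check the effect on a particular filesystem:
zfs get compression,compressratio poolchen/export/vm/winxp/winxp0
```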

Next, we'll try out Cool thing #3: Clones. Let's clone the virus free snapshot and try to create a second instance of Win XP from it:

# zfs clone poolchen/export/vm/winxp/winxp0@200802192038_VirusFree poolchen/export/vm/winxp/winxp1
# ls -al /export/vm/winxp
total 12
drwxr-xr-x   5 constant staff          4 Feb 19 20:42 .
drwxr-xr-x   6 constant staff          5 Feb 19 08:44 ..
drwxr-xr-x   3 constant staff          3 Feb 19 18:47 winxp0
drwxr-xr-x   3 constant staff          3 Feb 19 18:47 winxp1
dr-xr-xr-x   3 root     root           3 Feb 19 08:39 .zfs
# mv /export/vm/winxp/winxp1/WindowsXP_0.vdi /export/vm/winxp/winxp1/WindowsXP_1.vdi

The clone has inherited its mountpoint from the parent ZFS filesystem (the winxp one), so we have everything set up for VirtualBox to create a second Win XP instance. I just renamed the new container file for clarity. But hey, what's this?

VirtualBox Error Message 

Damn! VirtualBox didn't fall for my sneaky little clone trick. Hmm, where is this UUID stored in the first place?

# od -A d -x WindowsXP_1.vdi | more
0000000 3c3c 203c 6e69 6f6e 6574 206b 6956 7472
0000016 6175 426c 786f 4420 7369 206b 6d49 6761
0000032 2065 3e3e 0a3e 0000 0000 0000 0000 0000
0000048 0000 0000 0000 0000 0000 0000 0000 0000
0000064 107f beda 0001 0001 0190 0000 0001 0000
0000080 0000 0000 0000 0000 0000 0000 0000 0000
0000336 0000 0000 0200 0000 f200 0000 0000 0000
0000352 0000 0000 0000 0000 0200 0000 0000 0000
0000368 0000 c000 0003 0000 0000 0010 0000 0000
0000384 3c00 0000 0628 0000 06c5 fa07 0248 4eb6
0000400 b2d3 5c84 0e3a 8d1c 8225 aae4 76b5 44f5
0000416 aa8f 6796 283f db93 0000 0000 0000 0000
0000432 0000 0000 0000 0000 0000 0000 0000 0000
0000448 0000 0000 0000 0000 0400 0000 00ff 0000
0000464 003f 0000 0200 0000 0000 0000 0000 0000
0000480 0000 0000 0000 0000 0000 0000 0000 0000
0000512 0000 0000 ffff ffff ffff ffff ffff ffff
0000528 ffff ffff ffff ffff ffff ffff ffff ffff
0012544 0001 0000 0002 0000 0003 0000 0004 0000

Ahh, it seems to be stored at byte 392, with varying degrees of byte- and word-swapping. Some further research reveals that you'd better leave the first part of the UUID alone (I'll spare you the details...). Instead, the last 6 bytes, 845c3a0e1c8d, sitting at bytes 402-407, look like a great candidate for an arbitrary serial number. Let's try changing them (this is a hack for demo purposes only, please don't do this in production):

# dd if=/dev/random of=WindowsXP_1.vdi bs=1 count=6 seek=402 conv=notrunc
6+0 records in
6+0 records out
# od -A d -x WindowsXP_1.vdi | more
0000000 3c3c 203c 6e69 6f6e 6574 206b 6956 7472
0000016 6175 426c 786f 4420 7369 206b 6d49 6761
0000032 2065 3e3e 0a3e 0000 0000 0000 0000 0000
0000048 0000 0000 0000 0000 0000 0000 0000 0000
0000064 107f beda 0001 0001 0190 0000 0001 0000
0000080 0000 0000 0000 0000 0000 0000 0000 0000
0000336 0000 0000 0200 0000 f200 0000 0000 0000
0000352 0000 0000 0000 0000 0200 0000 0000 0000
0000368 0000 c000 0003 0000 0000 0010 0000 0000
0000384 3c00 0000 0628 0000 06c5 fa07 0248 4eb6
0000400 b2d3 2666 6fbb c1ca 8225 aae4 76b5 44f5
0000416 aa8f 6796 283f db93 0000 0000 0000 0000
0000432 0000 0000 0000 0000 0000 0000 0000 0000
0000448 0000 0000 0000 0000 0400 0000 00ff 0000
0000464 003f 0000 0200 0000 0000 0000 0000 0000
0000480 0000 0000 0000 0000 0000 0000 0000 0000
0000512 0000 0000 ffff ffff ffff ffff ffff ffff
0000528 ffff ffff ffff ffff ffff ffff ffff ffff
0012544 0001 0000 0002 0000 0003 0000 0004 0000

Who needs a hex editor when you have good old friends od and dd on board? The trick is in the "conv=notrunc" part: it tells dd to leave the rest of the file as-is and not truncate it after doing its patching job. Let's see if it works:
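Wrapped into variables, the patch step looks like this (again, a demo hack only: the offset assumes the VDI header layout seen in the dump above, and I'm using /dev/urandom here so the command never blocks waiting for entropy):

```shell
VDI=WindowsXP_1.vdi   # the cloned image to patch
OFFSET=402            # start of the last 6 UUID bytes, per the od dump
# Overwrite 6 bytes in place; conv=notrunc keeps the rest of the file intact.
dd if=/dev/urandom of="$VDI" bs=1 count=6 seek="$OFFSET" conv=notrunc
# Show the patched region to verify the bytes really changed:
od -A d -x "$VDI" | sed -n '/^0000400/p'
```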

VirtualBox with two Windows VMs, one ZFS-cloned from the other.

Eureka, it works! Notice that the second instance is running with the freshly patched hard disk image, as shown in the window above.

Windows XP booted without any problem from the ZFS-cloned disk image. There was just the occasional popup message from Windows saying that it had found a new hard disk (well observed, buddy!).

Thanks to ZFS clones we can now create new virtual machine clones in just seconds without having to wait a long time for disk images to be copied. Great stuff. Now let's do what everybody should be doing to Windows once a virus scanner is installed: Install Firefox:

Clones WinXP instance, running FireFox

I must say that the performance of VirtualBox is stunning. It sure feels like the real thing; just make sure your real computer has enough memory to support both OSes at once, otherwise you'll run into swapping hell...

BTW: You can also use ZFS volumes (called ZVOLs) to provide storage space to virtual machines. You can snapshot and clone them just like regular file systems, plus you can export them as iSCSI devices, giving you the flexibility of a SAN for all your virtualized storage needs. The reason I chose files over ZVOLs was just so I can swap pre-installed disk images with colleagues. On second thought, you can dump/restore ZVOL snapshots with zfs send/receive just as easily...
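A minimal zvol sketch (volume name invented; shareiscsi was the OpenSolaris-era shorthand for exporting a volume as an iSCSI target and may differ or be absent in other releases):

```shell
# Create a 10 GB ZFS volume instead of a .vdi file on a filesystem:
zfs create -V 10g poolchen/export/vm/winxp-vol
# Snapshots and clones work the same way as for filesystems:
zfs snapshot poolchen/export/vm/winxp-vol@pristine
# Export the volume as an iSCSI target (OpenSolaris-era property):
zfs set shareiscsi=on poolchen/export/vm/winxp-vol
```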

Anyway, let's see how we're doing storage-wise:

# zfs list -rt filesystem poolchen/export/vm/winxp
NAME                              USED  AVAIL  REFER  MOUNTPOINT
poolchen/export/vm/winxp         1.36G  36.9G    21K  /export/vm/winxp
poolchen/export/vm/winxp/winxp0  1.22G  36.9G  1.05G  /export/vm/winxp/winxp0
poolchen/export/vm/winxp/winxp1   138M  36.9G  1.06G  /export/vm/winxp/winxp1

Watch the "USED" column for the winxp1 clone. That's right: Our second instance of Windows XP only cost us a meager 138 MB on top of the first instance's 1.22 GB! Both filesystems (and their .vdi containers with Windows XP installed) represent roughly a Gigabyte of storage each (the REFER column), but the actual physical space our clone consumes is just 138MB.

Cool thing #4: ZFS clones save even more space, big time!

How does this work? Well, when ZFS creates a snapshot, it only creates a new reference to the existing on-disk tree-like block structure, indicating where the entry point for the snapshot is. If the live filesystem changes, only the changed blocks need to be written to disk, the unchanged ones remain the same and are used for both the live filesystem and the snapshot.

A clone is a snapshot that has been marked writable. Again, only the changed (or new) blocks consume additional disk space (in this case Firefox and some WinXP temporary data), everything that is unchanged (in this case nearly all of the WinXP installation) is shared between the clone and the original filesystem. This is de-duplication done right: Don't create redundant data in the first place!
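One consequence worth knowing: a clone keeps its origin snapshot (and hence the original filesystem) alive. If the clone is ever to become the main copy, zfs promote reverses the dependency (a sketch using the names from above):

```shell
# Make winxp1 the origin and turn winxp0 into the dependent clone:
zfs promote poolchen/export/vm/winxp/winxp1
# Now the old original can be destroyed without touching the clone:
zfs destroy -r poolchen/export/vm/winxp/winxp0
```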

That was only one example of the tremendous benefits Solaris can bring to the virtualization game. Imagine the power of ZFS, FMA, DTrace, Crossbow and whatnot providing the best infrastructure possible to your virtualized guest operating systems, be they Windows, Linux, or Solaris. It works in the SPARC world (through LDOMs) and in the x86/x64 world through xVM server (based on the work of the Xen community), now joined by VirtualBox. Oh, and it's free and open source, too.

So with all that: Happy virtualizing, everyone. Especially to everybody near Stuttgart.

Tuesday Nov 27, 2007

Shrink big presentations with ooshrink

I work in an environment where people use presentations a lot. Of course, we like to use StarOffice, which is based on OpenOffice for all of our office needs.

Presentation files can be big. Very big. Never-send-through-email big. Especially when they come from marketing departments and contain lots of pretty pictures. I just tried to send a Sun systems overview presentation (which I created myself, so less marketing fluff), and it was still over 22 MB!

So here comes the beauty of open source, and in this case, open formats: OpenOffice and StarOffice documents are actually ZIP files that contain XML for the document itself, plus all associated image files in a simple directory structure. A few years ago I wrote a script that takes an OpenOffice document, unzips it, looks at all the images in the document's structure and optimizes their compression algorithm, size and other settings based on some simple rules. That script was very popular with my colleagues; it got lost for a while, and thanks to Andreas it was found again. Still, colleagues keep asking me about "that script, you know, the one that used to shrink those StarOffice presentations."
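You can verify this yourself with nothing but unzip (deck.odp stands for any OpenOffice presentation file):

```shell
unzip -l deck.odp               # lists content.xml, styles.xml, Pictures/...
unzip -q deck.odp -d deck_tmp   # unpack into a scratch directory
ls deck_tmp/Pictures            # the embedded images, one file per picture
```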

Today, I brushed it up a little and taught it to accept the newer od[ptdc] extensions, and it still works remarkably well. Here are some examples:

  • The Sun homepage has a small demo presentation with a few vacation photos. Let's see what happens:
    bash-3.00$ ls -al Presentation_Example.odp
    -rw-r--r--   1 constant sun       392382 Mar 10  2006 Presentation_Example.odp
    bash-3.00$ ooshrink -s Presentation_Example.odp
    bash-3.00$ ls -al Presentation_Example.\*
    -rw-r--r--   1 constant sun       337383 Nov 27 11:36 Presentation_Example.new.odp
    -rw-r--r--   1 constant sun       392382 Mar 10  2006 Presentation_Example.odp

    Well, that was a 15% reduction in file size. Not earth-shattering, but we're getting there. BTW: The -s flag is for "silence", we're just after results (for now).

  • On BigAdmin, I found a presentation with some M-Series config diagrams:

    bash-3.00$ ls -al Mseries.odp
    -rw-r--r-- 1 constant sun 1323337 Aug 23 17:23 Mseries.odp
    bash-3.00$ ooshrink -s Mseries.odp
    bash-3.00$ ls -al Mseries.\*
    -rw-r--r-- 1 constant sun 379549 Nov 27 11:39 Mseries.new.odp
    -rw-r--r-- 1 constant sun 1323337 Aug 23 17:23 Mseries.odp

    Now we're getting somewhere: This is a reduction by 71%!

  • Now for a real-world example. My next victim is a presentation by Teera about JRuby. I just used Google to search for "site:sun.com presentation odp", so Teera is completely innocent. This time, let's take a look behind the scenes with the -v flag (verbose):
    bash-3.00$ ooshrink -v jruby_ruby112_presentation.odp
    Required tools "convert, identify" found.
    ooshrink 1.2
    Check out "ooshrink -h" for help information, warnings and disclaimers.

    Creating working directory jruby_ruby112_presentation.36316.work...
    Unpacking jruby_ruby112_presentation.odp...
    Optimizing Pictures/1000020100000307000000665F60F829.png.
    - This is a 775 pixels wide and 102 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 947, New: 39919. We better keep the original.
    Optimizing Pictures/100000000000005500000055DD878D9F.jpg.
    - This is a 85 pixels wide and 85 pixels high JPEG file.
    - We will try re-encoding this image with JPEG quality setting of 75%.
    - Failure: Old: 2054, New: 2089. We better keep the original.
    Optimizing Pictures/1000020100000419000003C07084C0EF.png.
    - This is a 1049 pixels wide and 960 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 99671, New: 539114. We better keep the original.
    Optimizing Pictures/10000201000001A00000025EFBC8CCCC.png.
    - This is a 416 pixels wide and 606 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 286677, New: 349860. We better keep the original.
    Optimizing Pictures/10000000000000FB000001A6E936A60F.jpg.
    - This is a 251 pixels wide and 422 pixels high JPEG file.
    - We will try re-encoding this image with JPEG quality setting of 75%.
    - Success: Old: 52200, New: 46599 (-11%). We'll use the new picture.
    Optimizing Pictures/100000000000055500000044C171E62B.gif.
    - This is a 1365 pixels wide and 68 pixels high GIF file.
    - This image is too large, we'll resize it to 1280x1024.
    - We will convert this image to PNG, which is probably more efficient.
    - Failure: Old: 2199, New: 39219. We better keep the original.
    Optimizing Pictures/100000000000019A000002D273F8C990.png.
    - This is a 410 pixels wide and 722 pixels high PNG file.
    - This picture has 50343 colors, so JPEG is a better choice.
    - Success: Old: 276207, New: 32428 (-89%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/1000000000000094000000E97E2C5D52.png.
    - This is a 148 pixels wide and 233 pixels high PNG file.
    - This picture has 4486 colors, so JPEG is a better choice.
    - Success: Old: 29880, New: 5642 (-82%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/10000201000003E3000003E4CFFA65E3.png.
    - This is a 995 pixels wide and 996 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 196597, New: 624633. We better keep the original.
    Optimizing Pictures/100002010000013C0000021EDE4EFBD7.png.
    - This is a 316 pixels wide and 542 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 159495, New: 224216. We better keep the original.
    Optimizing Pictures/10000200000002120000014A19C2D0EB.gif.
    - This is a 530 pixels wide and 330 pixels high GIF file.
    - This image is transparent. Can't convert to JPEG.
    - We will convert this image to PNG, which is probably more efficient.
    - Failure: Old: 39821, New: 56736. We better keep the original.
    Optimizing Pictures/100000000000020D0000025EB55F72E3.png.
    - This is a 525 pixels wide and 606 pixels high PNG file.
    - This picture has 17123 colors, so JPEG is a better choice.
    - Success: Old: 146544, New: 16210 (-89%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/10000000000000200000002000309F1C.png.
    - This is a 32 pixels wide and 32 pixels high PNG file.
    - This picture has 256 colors, so JPEG is a better choice.
    - Success: Old: 859, New: 289 (-67%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/10000201000001BB0000006B7305D02E.png.
    - This is a 443 pixels wide and 107 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 730, New: 24071. We better keep the original.
    All images optimized.
    Success: The new file is only 67% as big as the original!
    Cleaning up...

    Neat. We just shaved a third off of a 1.3MB presentation file and it still looks as good as the original!

    As you can see, the script goes through each image one by one and tries to come up with better ways of encoding images. The basic rules are:

    • If an image is a PNG or GIF and it has more than 128 colors, it's probably better to convert it to JPEG (provided it doesn't use transparency). The script also tries recompressing GIFs and other legacy formats as PNGs when JPEG is not an option.
    • Images bigger than 1280x1024 don't make a lot of sense in a presentation, so they're resized to be at most that size.
    • JPEG allows setting a quality level. 75% is "good enough" for presentation purposes, so we'll try that and see how much it buys us.
    The hard part is patching the XML files with the new image names. They don't contain any newlines, so basic line-oriented Unix scripting tools may hiccup; the script therefore takes a more conservative approach to patching, but it works.
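The per-image rules above boil down to a few ImageMagick calls. Here's a simplified sketch (ooshrink itself does more bookkeeping, compares old and new sizes, and checks for transparency, which I've omitted for brevity; the 128-color threshold and 75% quality are the values described above):

```shell
# Walk the unpacked Pictures/ directory and re-encode where it pays off.
for img in Pictures/*; do
    colors=$(identify -format '%k' "$img")   # number of unique colors
    case "$img" in
        *.png|*.gif)
            if [ "$colors" -gt 128 ]; then
                # Photo-like image: JPEG usually wins over PNG/GIF.
                convert "$img" -quality 75 "${img%.*}.jpg"
            fi
            ;;
        *.jpg|*.jpeg)
            # Try re-encoding at quality 75; keep whichever file is smaller.
            convert "$img" -quality 75 "${img%.*}.new.jpg"
            ;;
    esac
done
```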


Before I give you the script, here's the obvious
Disclaimer: Use this script at your own risk. Always check the shrunk presentation for any errors the script may have introduced. It only works 9 times out of 10 (sometimes there's some funkiness in how OpenOffice uses images that I still don't understand...), so you have to check that it didn't damage your file.

The script works on Solaris (of course), but it should work on any Linux or other Unix just fine. It relies on ImageMagick to do the image heavy lifting, so make sure you have identify(1) and convert(1) in your path.

My 22 MB Systems Overview presentation was successfully shrunk into a 13MB one, so I'm happy to report that after so many years, this little script is still very useful. I hope it helps you too, let me know how you use it and what shrink-ratios you have experienced!

Tuesday Nov 20, 2007

Foresight Vision Weekend 2007


About two weeks ago, two colleagues and I had the inspiring pleasure of attending the Foresight Vision Weekend 2007. This was the weekend before our annual TS Ambassador Conference at Sun, so we happened to be in the Bay Area where this unconference was held.

Ever since the year 2000, after I heard a talk from Eric Drexler on Nanotechnology during another Sun event, I've been fascinated by this topic and so I loosely followed the activities of the Foresight Institute. This event was a great way of catching up with recent developments - and an opportunity for me to have a reality check on how real all of this is, and can be.

Limited by our flight schedule, we only attended the second day of the conference. It started with a few motivational speakers as an introduction to the second half of the day which was held in the now popular unconference format.

A Systematic View on Anti-Aging 

The first talk, about anti-aging, was given by Chris Heward, President of the Kronos Science Laboratory. He explained their very systematic approach to analyzing the effects of aging and which factors play what role in the process. The great thing about this talk was that there was no esoterica, no magic, no BS; just plain, number-driven science full of hard facts about what aging actually is (a decrease of bio-functional abilities due to decaying body functions over time), a fresh view on the subject (we're already becoming "unnaturally" old, so why not figure this out once and for all?) and some reality-checks on popular health myths (if fats are so bad, why is the US population becoming fatter and fatter despite all that non-fat food?).

So the systematic approach is quite simple, but effective: Figure out the primary causes of death (heart disease, skeletal dysfunction, cancer) and find ways to prevent them from happening as early as possible. The "as early as possible" part is the most important one: The earlier one starts to work on preventing these factors, the longer the life expectancy. 

My takeaways:

  • Drink lots of water,
  • a BMI of 22-26 is a good place to be (I'm at 23),
  • avoid eating empty calories (all the "white" stuff that is not meat),
  • eat colorful veggies,
  • some supplements are actually really good (he especially mentioned Vitamin E and Omega-3 fatty acids),
  • exercise regularly. Actually, this is the biggest factor, capable of even compensating for a fat or a smoking lifestyle! I really need to start jogging again...

One interesting but not well understood factor in aging is hormones. There's a strong correlation between dropping levels of male and female sex hormones and their negative symptoms in aging (obvious, isn't it?), but it is not yet understood if and how taking hormone supplements really helps you overcome aging symptoms. Plus, taking hormones as pills is likely to produce other problems (as in liver overload...).

Anyway, this was a fascinating talk and I now need to understand more on this subject, although separating the wheat from the chaff is difficult if you're not a doctor or a biochemist...

Productive Nanosystems Roadmap 

This conference covered a great variety of topics, so the next talk by Pearl Chin was on a completely different topic: The Productive Nanosystems Roadmap. What's a productive nanosystem you might ask? It's a machine that operates at the molecular level to create things in an atomically precise way. Watch this short movie to see one in action.

The Productive Nanosystems Roadmap is all about the "How do we get there?" aspects of Molecular Nanotechnology. Similar to, but more challenging than the semiconductor business, this involves a huge amount of interdisciplinary work by physicists, chemists, biotechnologists, computer scientists, mechanical engineers, process technologists and many more. By synchronizing and bringing together different fields of research and development, the Nanotechnology Roadmap facilitates the creation of Productive Nanosystems.

Can't wait to have one of these replicators in my home...

Open Source Security

Yet another interesting and completely different subject: Open Source Security, by Christine Peterson, a founder of the Foresight Institute. The current physical security mechanisms, as implemented by major governments, are hugely centralized (as in DoD-centralized), not transparent (who really knows what happens inside the NSA, or behind the doors of your friendly airport security operations?) and they have a huge impact on privacy (did you know that "they" know what you read on an airplane?).

The idea of this talk is: Centralized security has its flaws (what happens if someone takes out the central parts of a nation's security system?), obscure security measures are prone to becoming a security threat by themselves (In Germany there's a current debate about the police monitoring license plates on a big scale vs. privacy rights) and of course, there's no fun in living in a 100% controlled and watched Orwellian society. So why not try to create a security system that is transparent, distributed and still protects privacy?

This "Open Source Security" system could be everywhere (like a neighborhood watch), it would be open to anyone (so nobody could manipulate the system) and it would work without invading people's privacy (a neighborhood watch keeps the neighbors secure, but doesn't know a thing about, say, the next city's neighborhoods).

Interesting concept and hopefully one that is going to be developed further. Sounds much, much better than what current governments would like to implement...

Mapping the Technology Landscape

I can't remember the exact title of this session, but this sounds like a good fit. The first of the afternoon sessions I visited (there were several in parallel and we couldn't visit all of them) was about finding the right way to categorize new technologies as they emerge and create headlines. It was run by Phil Bowermaster, who has an excellent blog called "The Speculist" and an accompanying podcast called "Fast Forward Radio".

After blogging for a while, Phil came up with a 2-dimensional coordinate system for charting technologies, based on the axes "Impact on Society" and "Impact on Technology". While this seemed to work for charting "spot resistant nano-pants" (low impacts on both society and technology, placing it into the "fake" corner) vs., say, a desktop molecular nanofactory (now we're getting serious...), it didn't feel like the real thing for charting new technology.

So Phil showed us his improved coordinate system, this time based on the axes "transformation" and "disruption". It intuitively makes more sense, as it better models the impact of technology on the world as we know it. But every model is only good until the next one comes around, so Phil welcomes your suggestions, too. See his article on "Disruption and Transformation".

Self-Improving A.I.

No futuristic conference without at least one A.I.-related topic. Artificial Intelligence may have had a difficult history, but the truth is that people tend to dismiss any advance in A.I. as being "nice, but not the real thing", be it speech recognition, route planning or beating Kasparov at chess. What's going to be the next milestone that people choose to treat as "not real A.I."?

Ray Kurzweil observed that the development of technology happens at an accelerating pace. In fact, Moore's law only deals with advances in semiconductor technology, but its pattern of modeling the increasing amount of available calculations per $1000 can be observed all the way back to early mechanical calculators. Looking into the future, semiconductor experts are confident that Moore's law will hold for at least the next 15-20 years, and there are more exciting technologies waiting to be used for computation once semiconductor chips become uninteresting. If the current rate of technological progress continues, we will see a $1000 PC with the power of a human brain by 2025. Not a long time from now.

Steve Omohundro's session on self-improving A.I. dealt with the questions such as: What will drive self-improving A.I.s? What are the benefits and risks of self-improving A.I.s? What should we try to do right before they arrive? Read more about this topic at the Self-aware Systems website.

And for the lighter side of it, here's a hilarious comic on a very similar subject :).

Nanotech Literacy

Perhaps the most important aspect of nanotechnology right now is its acceptance. As soon as you learn about the great powers of nanotechnology, you can't help but imagine the great peril it might bring. Bill Joy's famous article "Why the future doesn't need us" is only one example.

But is denying or opposing change a solution? Certainly not. If we refuse to learn about the next wave of technology, others will. So we'd better learn how to do it right from the start. One major focus of the Foresight Institute is to advance beneficial nanotechnology, partly by educating people about its potential benefits to humanity.

Miguel Aznar's session on Nanotech Literacy focused on how to make Nanotechnology more accessible and understandable to children and students in schools. I think this is a great way of spreading the word, as it instantly will touch their parents as well. I used to teach my parents how to program our VCR, and I'm looking forward to my daughter teaching me how to operate our first family molecular nanotech factory :)

Read more in Miguel's blog.


This really was a most inspiring event. My goal was to understand more about the reality behind Nanotech and other future technologies, and I got much more out of this day than I expected. I'm very proud to see that Sun is a corporate member of the Foresight Institute and I'm going to sign up with them as a senior associate soon. I'm convinced that every dollar spent in advancing beneficial Nanotechnology is going to save us more trees and more species, reduce the levels of CO2 more aggressively, provide more clean energy, cure more cancers and advance humankind more thoroughly in the long term than any other investment.

If you want to learn more about the subject of Nanotechnology, I recommend looking at one of these articles.

