Thursday Jan 08, 2009

Making 3D work over VNC

Dave recently played around with VNC on his computer and an iPod touch. While it worked surprisingly well, the Achilles' heel of many remote access solutions kicks in when you try doing some 3D stuff, such as a game, Second Life or maybe a scientific application.

This reminds me of one of the best kept secrets at Sun: We fixed the 3D-over-VNC problem.

[Image: International Supercomputing Conference 2008, LRZ booth showing 3D remote visualization]

Just check out the Sun Shared Visualization Software: it is free, it is based on open source packages, and it works like a charm. For example, here is a picture from the ISC 2008 conference in Dresden, where you see a molecular visualization program in 3D stereo at the LRZ booth. The application is actually running in Garching near Munich.

That's right: the server runs near Munich, the client is in Dresden, there are more than 400 km between them as the crow flies (probably close to double that in terms of network cabling), and we saw close to 30 frames per second of intricate molecular modeling madness that we could manipulate interactively, as if the server were around the corner. In this case, the "server" was a supercomputer that fills the halls of the LRZ compute center, so it wouldn't quite fit on the show floor; instead, they used Sun Shared Visualization to deliver the images, not the whole supercomputer, to Dresden.

And this is an increasingly common theme in HPC: as data sets get bigger and bigger (terabytes are for sissies, it's petabytes where the fun starts) and compute clusters get bigger and bigger (think rows after rows of racks), your actual simulation becomes harder to transport (a truck is still the cheapest, fastest and easiest way to move PB-class data across the nation). The key insight: you don't need to transport your data, your simulation, or your research. You just need to show the result, and that is just pictures.
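A quick back-of-the-envelope calculation (with illustrative numbers, ignoring all protocol overhead) shows why the truck wins: even a fully saturated gigabit link needs months for a petabyte.

```shell
# 1 PB over a saturated 1 Gbit/s link:
bytes=1000000000000000            # 1 PB
rate=125000000                    # 1 Gbit/s = 125 MB/s
echo "$((bytes / rate / 86400)) days"   # whole days of continuous transfer
```

That's roughly three months of transfer for a single petabyte, which is why shipping disks is still a serious option.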

Even if it's 3D models at 30 frames per second (= interactive speed) with 1920x1080 pixels (= HDTV) per frame, that's only about 180 MB per second uncompressed. And after some efficient compression, it boils down to a fraction of that.
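The arithmetic behind that figure is simple (assuming 24-bit color, i.e. 3 bytes per pixel):

```shell
# 1920x1080 pixels x 3 bytes/pixel x 30 frames/s:
echo "$((1920 * 1080 * 3 * 30)) bytes per second"   # just under 187 MB/s uncompressed
```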

This means that you can transmit HDTV-sized images at interactive speeds in real time across a gigabit Ethernet line without any noticeable degradation of image quality. If you're restricted to 100 Mbit/s or less, you can still choose between interactive speed (at some cost in picture quality), high-quality images (at some cost in speed), or a mixture (lower quality while spinning the model, full quality when you hold the mouse still). And this is completely independent of the complexity of the model being computed on the back-end server.

The Sun Shared Visualization Software is based on VirtualGL and TurboVNC, which are two open source projects that Sun is involved in. It also provides integration with the Sun Grid Engine, so you can allocate multiple graphics cards and handle reservations like "I need 3 cards on Monday, 3-5 PM for my presentation" automatically.

So, if you use a 3D application running on Linux or Solaris and you want to have access to it from everywhere, check out the Sun Shared Visualization Software for free and let me know what you've done with it. Also, make sure to check out Linda's blog, she runs the developer team and would love to get some feedback on what people are using it for.

P.S.: There's some subtle irony in the LRZ case. If you check their homepage, their supercomputer has been built by SGI. But their remote visualization system has been built by Sun. Oh, and we now have some good supercomputer hardware, too.

Tuesday Dec 16, 2008

New OpenSolaris Munich User Group

[Image: The Munich OpenSolaris User Group (MUCOSUG) logo]

Munich is one of the IT centers of Germany. Some would say, the IT center of Germany. Many popular IT and media companies are based here, including Sun Germany, and of course Bavaria has the reputation of being an important technology powerhouse for Germany, somewhere between laptops and Lederhosen.

It was about time that a Munich OpenSolaris User Group was created, so Wolfgang and I just did that.

So, if you love OpenSolaris and happen to be near Munich, welcome to the Munich OpenSolaris User Group (MUCOSUG). Feel free to visit our project page, subscribe to the mailing list, watch our announcements or participate in our events.

As you can see above, we already have a logo. It shows a silhouette of the Frauenkirche church, a signature landmark of downtown Munich, with the Olympiaturm tower in the background. This is meant to symbolize the old and the new in Solaris, but let's not get too sentimental here... Let us know if you like it, or propose a better logo; nothing is set in stone yet.

Our first meeting will be on January 12th, 2009, 7-11 PM (19:00-23:00) at the Sun Munich office near Munich, Germany. Check out some more information about this event, we're looking forward to meeting you!

Sunday Oct 12, 2008

BarCamp Munich 2008 - Enterprise 2.0, Open Source and the Future of Technology

I'm astonished to see that I haven't blogged for so long. Sorry to my readers: times have been very busy lately, and I hope I can write more in the coming weeks. I also owe an apology to the people who pointed out a bug with my ZFS replicator script and cron(1M); I'll look into it and make it my next post.

[Image: BarCamp at Sun in Munich]

Yesterday, I attended BarCamp Munich 2008, which was sponsored by Sun (among other cool sponsors), so it took place in the Sun Munich offices.

I was surprised to see that both sessions I proposed were accepted, plus one about Open Source Software at Sun that my colleague Stefan proposed with some support by me.

You can find a list of sessions for Saturday and Sunday on the web and it pays off to check back regularly, as the wiki is filling up with more and more collateral information around each track.

So, here's a roundup of session descriptions, slides and other links and materials for those of you who attended my sessions or could not attend, in chronological order.

Enterprise 2.0 - From Co-Workers to Co-Creators

[Image: Central slide about Enterprise 2.0]

This session was similar to the talk I gave at Webkongress Erlangen a few months ago.

We had about 20 people in the room and quite a fruitful discussion on how to motivate employees to use new tools, how to guide employee behaviour and the challenges of opening up a company and making it more transparent.

Feel free to glance through my Enterprise 2.0 slides or read an earlier blog entry on a related subject. Also, check out Peter Reiser's blog, he has a number of great articles from behind the scenes of our SunSpace collaboration project.

Open Source Software at Sun

Stefan Schneider proposed a session about great software products that are available from Sun for free, as open source. We went through his list from least well-known to most popular.

Obviously, MySQL, StarOffice and OpenSolaris were at the end, but the more interesting software products were those that made the attendees go "Oh, I didn't know that!". One example of this category was Lightning, a rich calendar client.

Stefan recently posted his slides into the Sun Startups blog, thanks, Stefan!

The Future of Technology in 10, 20, 30 Years and More

This was a spontaneous talk that I offered after seeing the BarCamp Munich wishlist, where people had asked for a session on future technology developments, their effects on society, and how one can cope with them.

I took some slides from a couple of earlier talks I did a while ago on similar topics and updated them for the occasion. The updated "Future of Technology" slide deck is in German, but if enough people are interested, I can provide a translated version as well.

We started by looking at Moore's Law as an indicator of technology development. In "The Age of Spiritual Machines", Ray Kurzweil, a well-known futurist, pointed out that this law also holds for technology prior to integrated circuits, all the way down to Charles Babbage's difference engine of the 19th century.

With that in mind, we can confidently extend Moore's Law into the future, knowing that even if traditional chip technology ceases to deliver on Moore's Law, other technologies will pick up and help us achieve even higher amounts of computing power per amount of money/space/energy. Again, Kurzweil points out that if we compare the amount of computational power that one can purchase for $1000 for a given year with the complexity of all neurons of a brain and their connections to neighbouring neurons at their typical firing frequency, then the 2020s will be an interesting decade.

Key technologies of the future will be genetics and biotechnology, robotics, and nanotechnology.

[Image: DNA being replicated]

We watched a fascinating video about Molecular Visualizations of DNA (here's a longer, more complete version) that made us witnesses of DNA being replicated, right before our eyes, at a molecular level. It's amazing how mechanical this process looks, almost like industrial robots grinding away on ribbons of DNA, cutting pieces, replicating them, then splicing them back in. In the near future, we will see personalized medicine, based on our own DNA and optimized for our individual needs, as well as novel applications of biotechnology for clean energy, new materials and the assembly of early molecular machines.

Robotics is another fascinating area of technology, and we're seeing more and more robots enter our day-to-day lives. Industrial and military robots may be old hat, but did you know that millions of households already use robots to vacuum their floors, mow their lawns or perform other routine work? And we will see many more robots in the future, I'm sure. Meanwhile, I'm happy to say that my Roomba robot indeed saves a lot of precious time while fulfilling my natural geeky desire for cool gadgetry.

Finally, nanotechnology will open up a new category of advanced technology that will affect all aspects of human life, the environment and the world. We watched a vision of a future nanofactory that fits onto a common desk and is capable of manufacturing an advanced laptop with 100 hours of battery life and a billion CPUs. But nanotechnology can do much more: highly efficient solar cells, clean water, lightweight spacecraft, nanobots that clean up your bloodstream, improved versions of your organs, brain implants and extensions, virtual reality that is indistinguishable from real reality, and much more.

Check out the Foresight Institute's introduction to nanotechnology for more information about this fascinating topic, including a free PDF download of K. Eric Drexler's book "Engines of Creation". Real engineers will probably want to take a look at his textbook "Nanosystems: Molecular Machinery, Manufacturing, and Computation".

One controversial topic when discussing the future is the Technological Singularity. This is the point in time where artificial intelligence becomes powerful enough to create new technology on its own, accelerating the advancement of technology without human intervention. A discussion of this topic can be found in Kurzweil's latest book, "The Singularity Is Near".

Another great way to think about the future is to read Stefan Pernar's sci-fi thriller "Jame5 - A Tale of Good and Evil". The book starts in the best Michael Crichton style, then turns into a deep and thoughtful discussion of the philosophy of the future, as mankind confronts the powers of strong AI. You can buy the book or just download the PDF for free. Highly recommended.

One of my favourite quotations is said to be an old Chinese curse: "May you live in interesting times."

Many thanks to all the people that I met during, or attended my sessions at, Barcamp Munich 2008, it was a most interesting event.

Edit (Oct. 13th): Meanwhile, a few blog reactions are rolling in: Dirk wrote a nice summary of the Enterprise 2.0 session (in German), while Ralph summarized the future technology session (also in German). I found them through Markus' BarCamp Munich 2008 session meta entry. Thanks to all! Also, Stefan has posted his slides from the open source talk; see above.

Edit (Oct. 14th): Here are some more notes from Stefan Freimark (in German). Thank you!


Wednesday Aug 13, 2008

ZFS Replicator Script, New Edition

[Image: Many crates on a bicycle; a metaphor for ZFS snapshot replication]

About a year ago, I blogged about a useful script that handles recursive replication of ZFS snapshots across pools. It helped me migrate my pool from a messy configuration into the clean two-mirrored-pairs configuration I have now.

Meanwhile, the fine folks on the ZFS developer team introduced recursive send/receive into the zfs(1M) command, which makes most of what the script does as simple as a -R flag to zfs send.
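As a sketch of the built-in approach (the pool and snapshot names here are made up for illustration):

```shell
# Snapshot the entire dataset hierarchy recursively, then replicate it;
# "tank" and "backup" are hypothetical pool names.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F backup
```

The -R flag on send preserves the whole snapshot hierarchy and dataset properties; -F on receive rolls the target back to match the incoming stream.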

Unfortunately, this new version of the ZFS command has not (yet?) been ported back to Solaris 10, so my ZFS snapshot replication script is still useful for Solaris 10 users, such as Mike Hallock from the School of Chemical Sciences at the University of Illinois at Urbana-Champaign (UIUC). He wrote:

Your script came very close to exactly what I needed, so I took it upon myself to make changes, and thought in the spirit of it all, to share those changes with you.

The first change he introduced was the ability to supply a pattern (via -p) that selects which of the potentially many snapshots should be replicated. Like me, he uses Tim Foster's excellent automatic ZFS snapshot service, and he wanted to base his migration solely on the daily snapshots, not any other ones.

Then, Mike wanted to migrate across two different hosts on a network, so he introduced the -r option that allows the user to specify a target host. This option simply pipes the replication data stream through ssh at the right places, making ZFS filesystem migration across any distance very easy.
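Conceptually, the remote variant just splices ssh into the pipeline, roughly like this (the host and dataset names are hypothetical):

```shell
# Stream a snapshot to another machine over ssh:
zfs send tank/data@today | ssh remotehost zfs receive -F tank/data
```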

The updated version including both of the new features is available as zfs-replicate_v0.7.tar.bz2. I didn't test this new version but the changes look very good to me. Still: Use at your own risk.

Thanks a lot, Mike! 

Thursday Mar 20, 2008

How to compile/run MediaTomb on Solaris for PS3 and other streaming clients

[Image: MediaTomb showing photos on a PS3]

Before visiting CeBIT, I went to see my friend Ingo, who works at the Clausthal University computing center (where I grew up, IT-wise). This is a nice pre-CeBIT tradition we keep over the years: we get to watch movies in Ingo's home cinema and play computer games all day for a weekend or so :).

To my surprise, Ingo got himself a new PlayStation 3 (40GB). The new version is a lot cheaper (EUR 370 or so), less noisy (new chip process, no PS2 compatibility), and since HD-DVD is now officially dead, it's arguably the best value for money among Blu-ray players right now (regular firmware upgrades, good picture quality, digital audio and enough horsepower for smooth BD-Java content). All very rational and objective arguments to justify buying a new game console :).

The PS3 is not just a Blu-ray player. It is also a game console (I recommend "Ratchet & Clank: Tools of Destruction" and the immensely cute "LocoRoco: Cocoreccho!", which is a steal at only EUR 3) and can act as a media renderer for DLNA-compliant media servers: watch videos and photos and listen to music in HD on the PS3 from your home server.

After checking out a number of DLNA server software packages, it seemed to me that MediaTomb is the most advanced open source one (TwonkyVision seems to be nicer, but sorry, it isn't open source...). So here is a step-by-step guide on how to compile and run it on a Solaris machine.

Basic assumptions

This guide assumes that you're using a recent version of Solaris: at least Solaris 10 (it's free!), though a current Solaris Express Developer Edition (also free, and more advanced) is recommended. My home server runs Solaris Express build 62; I'm waiting for a production-ready build of Project Indiana to upgrade to.

I'm also assuming that you are familiar with basic compilation and installation of open source products.

Whenever I compile and install a new software package from scratch, I use /opt/local as my base directory. Others may want to use /usr/local or some other directory (perhaps in their $HOME). Just make sure you use the right path in the --prefix=/your/favourite/install/path part of the ./configure command.

I'm also trying to be a good citizen and use the Sun Studio Compiler here where I can. It generally produces much faster code on both SPARC and x86 architectures vs. the ubiquitous gcc, so give it a try! Alas, sometimes the code was really stubborn and it wouldn't let me use Sun Studio so I had to use gcc. This was the path of least resistance, but with some tinkering, everything can be made to compile on Sun Studio. You can also try gcc4ss which combines a gcc frontend with the Sun Studio backend to get the best of both worlds.

Now, let's get started!

MediaTomb Prerequisites

Before compiling/installing the actual MediaTomb application, we need to install a few prerequisite packages. Don't worry, most of them are already present in Solaris, and the rest can be easily installed as pre-built binaries or easily compiled on your own. Check out the MediaTomb requirements documentation. Here is what MediaTomb wants:

  • sqlite3, libiconv and curl are available on BlastWave. BlastWave is a software repository for Solaris packages that has almost everything you need in terms of pre-built open source packages (but not MediaTomb...). Setting up BlastWave on your system is easy, just follow their guide. After that, installing the three packages above is as easy as:
    # /opt/csw/bin/pkg-get -i sqlite3
    # /opt/csw/bin/pkg-get -i libiconv
    # /opt/csw/bin/pkg-get -i curl
  • MediaTomb uses a library called libmagic to identify file types. It took a little research to find out that it is part of the file package shipped with many Linux distributions. Here I'm using file-4.23.tar.gz, which seems to be a reasonably new version. Fortunately, this is easy to compile and install:

    $ wget ftp://ftp.astron.com/pub/file/file-4.23.tar.gz
    $ gzip -dc file-4.23.tar.gz | tar xvf -
    $ cd file-4.23
    $ CC=/opt/SUNWspro/bin/cc ./configure --prefix=/opt/local
    $ gmake
    $ su
    # PATH=$PATH:/usr/ccs/bin:/usr/sfw/bin; export PATH; gmake install

    Notice that the last step is performed as root for installation purposes while compilation should generally be performed as a regular user.

  • For tag extraction of MP3 files, MediaTomb uses taglib:
    $ wget http://developer.kde.org/~wheeler/files/src/taglib-1.5.tar.gz
    $ gzip -dc taglib-1.5.tar.gz | tar xvf -
    $ cd taglib-1.5
    $ CC=/usr/sfw/bin/gcc CXX=/usr/sfw/bin/g++ ./configure --prefix=/opt/local
    $ gmake
    $ su
    # PATH=$PATH:/usr/ccs/bin:/usr/sfw/bin; export PATH; gmake install
  • MediaTomb also uses SpiderMonkey, the Mozilla JavaScript engine. Initially, I feared having to compile all that Mozilla code from scratch, but then it dawned on me that we can just use the JavaScript libraries that are part of the standard Solaris Firefox installation; even the headers are there!

That was it. Now we can start building the real thing...

Compiling and installing MediaTomb

Now that we have all prerequisites, we can move on to downloading, compiling and installing the MediaTomb package:

  • Download the MediaTomb source from http://downloads.sourceforge.net/mediatomb/mediatomb-0.11.0.tar.gz
  • Somehow, the MediaTomb developers enforce some funny LD_PRELOAD games, which is unnecessary (at least on recent Solaris versions...). So let's throw that part of the code out: edit src/main.cc and comment out lines 128-141 by adding /* before line 128 and */ at the end of line 141.
  • Now we can configure the source to our needs. This is where all the prerequisite packages from above are configured in:
    ./configure \
        --prefix=/opt/local --enable-iconv-lib --with-iconv-h=/opt/csw/include \
        --with-iconv-libs=/opt/csw/lib --enable-libjs \
        --with-js-h=/usr/include/firefox/js --with-js-libs=/usr/lib/firefox \
        --enable-libmagic --with-magic-h=/opt/local/include \
        --with-magic-libs=/opt/local/lib --with-sqlite3-h=/opt/csw/include \
        --with-sqlite3-libs=/opt/csw/lib \
        --with-taglib-cfg=/opt/local/bin/taglib-config \
        --with-curl-cfg=/opt/csw/bin/curl-config

    Check out the MediaTomb compile docs for details. One hurdle was having to use an external iconv library, because the MediaTomb source didn't work with the iconv built into gcc. Also, there were some issues with the Sun Studio compiler, so I admit I was lazy and just used gcc instead.

  • After these preparations, compiling and installing should work as expected:
    gmake
    PATH=$PATH:/usr/ccs/bin:/usr/sfw/bin; export PATH; gmake install

Configuring MediaTomb

OK, now we have successfully compiled and installed MediaTomb, but we're not done yet. The next step is to create a configuration file that works well. An initial config will be created automatically during the very first startup of MediaTomb. Since we compiled in libraries from various places, we either need to set LD_LIBRARY_PATH at startup (e.g. in a wrapper script) or update the runtime linker's search path using crle(1).

In my case, I went for the first option. So, starting MediaTomb works like this:

LD_LIBRARY_PATH=/opt/csw/lib:/opt/local/lib:/usr/lib/firefox \
/opt/local/bin/mediatomb --interface bge0 --port 49194 --daemon \
    --pidfile /tmp/mediatomb.pid \
    --logfile=/tmp/mediatomb.log

Of course you should substitute your own network interface. The port number is arbitrary; it just needs to be above 49152, the start of the dynamic/private port range. Read the command line option docs to learn how the options work.

You can now connect to MediaTomb's web interface and try out some stuff, but the important thing here is that we now have a basic config file in $HOME/.mediatomb/config.xml to work with. The MediaTomb config file docs should help you with this.

Here is what I added to my own config and why:

  • Set up an account for the web user interface with your own user id and password. It's not the most secure server, but better than nothing. Use something like this in the <ui> section (note that accounts must be enabled to take effect):
    <accounts enabled="yes" session-timeout="30">
      <account user="me" password="secret"/>
    </accounts>
  • Uncomment the <protocolInfo> tag because according to the docs, this is needed for better PS3 compatibility.
  • I saw a number of iconv errors, so I added the following to the import section of the config file. Apparently, MediaTomb handles exotic characters in file names (very common with music files) better with this tag:
    <filesystem-charset>ISO-8859-1</filesystem-charset> 
  • The libmagic library won't find its magic database because it's now in a nonstandard place. But we can point to it with the following tag, again in the import section:
    <magic-file>/opt/local/share/file/magic</magic-file>
  • A few mime types should be added for completeness:

    <map from="mpg" to="video/mpeg"/>
    <map from="JPG" to="image/jpeg"/>
    <map from="m4a" to="audio/mpeg"/>

    Actually, this should "just work" through libmagic, but it didn't for me, so adding these MIME types was the easiest option. It also improves performance by saving libmagic calls. Most digital cameras use the uppercase "JPG" extension, and MediaTomb seems to be case-sensitive, so adding the uppercase variant was necessary. It's also apparent that MediaTomb doesn't have much support for AAC (.m4a), even though it is the official successor to MP3 (more than 95% of my music is in AAC format, so this is quite annoying).

  • You can now either add <directory> tags to the <autoscan> tags for your media data in the config file, or add them through the web interface.

[Image: The MediaTomb browser on a PS3]

This is it. The pictures show MediaTomb running in my basement, displaying photos on the TV set through the PS3. I hope that you can work from here and find a configuration that works well for you. Check out the MediaTomb scripting guide for some powerful ways to create virtual directory structures for your media files.

MediaTomb is OK for showing movies, pictures and the occasional song on the PS3, but it's not perfect yet. It lacks support for AAC (tags, cover art, etc.), and it could use some extra scripts for more comfortable browsing structures. But that's the point of open source: now we can start adding features to MediaTomb ourselves and bring it a few steps closer to usefulness.

Tuesday Nov 27, 2007

Shrink big presentations with ooshrink

I work in an environment where people use presentations a lot. Of course, we like to use StarOffice, which is based on OpenOffice for all of our office needs.

Presentation files can be big. Very big. Never-send-through-email big. Especially when they come from marketing departments and contain lots of pretty pictures. I just tried to send a Sun systems overview presentation (which I created myself, so less marketing fluff), and it was still over 22 MB!

So here comes the beauty of open source, and in this case, open formats. It turns out that OpenOffice and StarOffice documents are actually ZIP files containing XML for the actual document, plus all the associated image files in a simple directory structure. A few years ago I wrote a script that takes an OpenOffice document, unzips it, looks at all the images in the document's structure, and optimizes their compression algorithm, size and other settings based on some simple rules. That script was very popular with my colleagues; it got lost for a while, and thanks to Andreas it was found again. Still, colleagues keep asking me about "that script, you know, the one that used to shrink those StarOffice presentations" once in a while.

Today, I brushed it up a little and taught it to accept the newer od[ptdc] extensions, and it still works remarkably well. Here are some examples:

  • The Sun homepage has a small demo presentation with a few vacation photos. Let's see what happens:
    bash-3.00$ ls -al Presentation_Example.odp
    -rw-r--r--   1 constant sun       392382 Mar 10  2006 Presentation_Example.odp
    bash-3.00$ ooshrink -s Presentation_Example.odp
    bash-3.00$ ls -al Presentation_Example.\*
    -rw-r--r--   1 constant sun       337383 Nov 27 11:36 Presentation_Example.new.odp
    -rw-r--r--   1 constant sun       392382 Mar 10  2006 Presentation_Example.odp

    Well, that was a 15% reduction in file size. Not earth-shattering, but we're getting there. BTW: The -s flag is for "silence", we're just after results (for now).

  • On BigAdmin, I found a presentation with some M-Series config diagrams:

    bash-3.00$ ls -al Mseries.odp
    -rw-r--r-- 1 constant sun 1323337 Aug 23 17:23 Mseries.odp
    bash-3.00$ ooshrink -s Mseries.odp
    bash-3.00$ ls -al Mseries.\*
    -rw-r--r-- 1 constant sun 379549 Nov 27 11:39 Mseries.new.odp
    -rw-r--r-- 1 constant sun 1323337 Aug 23 17:23 Mseries.odp

    Now we're getting somewhere: This is a reduction by 71%!

  • Now for a real-world example. My next victim is a presentation by Teera about JRuby. I just used Google to search for "site:sun.com presentation odp", so Teera is completely innocent. This time, let's take a look behind the scenes with the -v flag (verbose):
    bash-3.00$ ooshrink -v jruby_ruby112_presentation.odp
    Required tools "convert, identify" found.
    ooshrink 1.2
    Check out "ooshrink -h" for help information, warnings and disclaimers.

    Creating working directory jruby_ruby112_presentation.36316.work...
    Unpacking jruby_ruby112_presentation.odp...
    Optimizing Pictures/1000020100000307000000665F60F829.png.
    - This is a 775 pixels wide and 102 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 947, New: 39919. We better keep the original.
    Optimizing Pictures/100000000000005500000055DD878D9F.jpg.
    - This is a 85 pixels wide and 85 pixels high JPEG file.
    - We will try re-encoding this image with JPEG quality setting of 75%.
    - Failure: Old: 2054, New: 2089. We better keep the original.
    Optimizing Pictures/1000020100000419000003C07084C0EF.png.
    - This is a 1049 pixels wide and 960 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 99671, New: 539114. We better keep the original.
    Optimizing Pictures/10000201000001A00000025EFBC8CCCC.png.
    - This is a 416 pixels wide and 606 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 286677, New: 349860. We better keep the original.
    Optimizing Pictures/10000000000000FB000001A6E936A60F.jpg.
    - This is a 251 pixels wide and 422 pixels high JPEG file.
    - We will try re-encoding this image with JPEG quality setting of 75%.
    - Success: Old: 52200, New: 46599 (-11%). We'll use the new picture.
    Optimizing Pictures/100000000000055500000044C171E62B.gif.
    - This is a 1365 pixels wide and 68 pixels high GIF file.
    - This image is too large, we'll resize it to 1280x1024.
    - We will convert this image to PNG, which is probably more efficient.
    - Failure: Old: 2199, New: 39219. We better keep the original.
    Optimizing Pictures/100000000000019A000002D273F8C990.png.
    - This is a 410 pixels wide and 722 pixels high PNG file.
    - This picture has 50343 colors, so JPEG is a better choice.
    - Success: Old: 276207, New: 32428 (-89%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/1000000000000094000000E97E2C5D52.png.
    - This is a 148 pixels wide and 233 pixels high PNG file.
    - This picture has 4486 colors, so JPEG is a better choice.
    - Success: Old: 29880, New: 5642 (-82%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/10000201000003E3000003E4CFFA65E3.png.
    - This is a 995 pixels wide and 996 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 196597, New: 624633. We better keep the original.
    Optimizing Pictures/100002010000013C0000021EDE4EFBD7.png.
    - This is a 316 pixels wide and 542 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 159495, New: 224216. We better keep the original.
    Optimizing Pictures/10000200000002120000014A19C2D0EB.gif.
    - This is a 530 pixels wide and 330 pixels high GIF file.
    - This image is transparent. Can't convert to JPEG.
    - We will convert this image to PNG, which is probably more efficient.
    - Failure: Old: 39821, New: 56736. We better keep the original.
    Optimizing Pictures/100000000000020D0000025EB55F72E3.png.
    - This is a 525 pixels wide and 606 pixels high PNG file.
    - This picture has 17123 colors, so JPEG is a better choice.
    - Success: Old: 146544, New: 16210 (-89%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/10000000000000200000002000309F1C.png.
    - This is a 32 pixels wide and 32 pixels high PNG file.
    - This picture has 256 colors, so JPEG is a better choice.
    - Success: Old: 859, New: 289 (-67%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/10000201000001BB0000006B7305D02E.png.
    - This is a 443 pixels wide and 107 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 730, New: 24071. We better keep the original.
    All images optimized.
    Re-packing...
    Success: The new file is only 67% as big as the original!
    Cleaning up...
    Done.

    Neat. We just shaved a third off of a 1.3MB presentation file and it still looks as good as the original!

    As you can see, the script goes through each image one by one and tries to come up with better ways of encoding images. The basic rules are:

    • If an image is PNG or GIF and has more than 128 colors, it's probably better to convert it to JPEG (provided it doesn't use transparency). The script also tries recompressing GIFs and other legacy formats as PNGs if JPEG is not an option.
    • Images bigger than 1280x1024 don't make much sense in a presentation, so they're resized to be at most that size.
    • JPEG lets you set a quality level. 75% is "good enough" for presentation purposes, so we try that and see how much it buys us.
    The hard part is patching the XML files with the new image names. They don't contain any newlines, so line-based Unix scripting tools may hiccup; the script therefore takes a more conservative approach to patching, but it works.
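The per-image decision rules boil down to a few lines of ImageMagick. Here's a simplified sketch, not the actual script: the output of identify's %A alpha escape varies between ImageMagick versions, and the real script also compares old and new file sizes before keeping a result.

```shell
#!/bin/sh
# Decide how to re-encode a single image, following the rules above.
# Requires ImageMagick's identify(1) and convert(1) in the PATH.
img="$1"

colors=$(identify -format '%k' "$img")   # number of unique colors
alpha=$(identify -format '%A' "$img")    # alpha channel present?

if [ "$alpha" != "True" ] && [ "$colors" -gt 128 ]; then
    # Opaque, color-rich image: JPEG at 75% quality is "good enough",
    # and the '1280x1024>' geometry shrinks only if it's larger than that.
    convert "$img" -resize '1280x1024>' -quality 75 "${img%.*}.jpg"
else
    # Transparent or low-color image: try PNG at maximum compression.
    convert "$img" -resize '1280x1024>' \
        -define png:compression-level=9 "${img%.*}-new.png"
fi
```

As the log above shows, the real script then keeps whichever file is smaller and only patches the XML on a "Success".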

 

Before I give you the script, here's the obvious
Disclaimer: Use this script at your own risk. Always check the shrunk presentation for any errors the script may have introduced. It only works about 9 times out of 10 (sometimes there's some funkiness in how OpenOffice uses images that I still don't understand...), so you have to verify that it didn't damage your file.

The script works on Solaris (of course), but it should work just as well on any Linux or other Unix. It relies on ImageMagick for the image heavy lifting, so make sure you have identify(1) and convert(1) in your path.

My 22MB Systems Overview presentation was successfully shrunk to 13MB, so I'm happy to report that after so many years, this little script is still very useful. I hope it helps you too; let me know how you use it and what shrink ratios you have experienced!

Tuesday Nov 20, 2007

Foresight Vision Weekend 2007

 

About two weeks ago, two colleagues and I had the inspiring pleasure of attending the Foresight Vision Weekend 2007. This was the weekend before our annual TS Ambassador Conference at Sun, so we happened to be in the Bay Area where this unconference was held.

Ever since the year 2000, after I heard a talk from Eric Drexler on Nanotechnology during another Sun event, I've been fascinated by this topic and so I loosely followed the activities of the Foresight Institute. This event was a great way of catching up with recent developments - and an opportunity for me to have a reality check on how real all of this is, and can be.

Limited by our flight schedule, we only attended the second day of the conference. It started with a few motivational speakers as an introduction to the second half of the day which was held in the now popular unconference format.

A Systematic View on Anti-Aging 

The first talk, about anti-aging, was given by Chris Heward, President of the Kronos Science Laboratory. He explained their very systematic approach to analyzing the effects of aging and which factors play what role in the process. The great thing about this talk was that there was no esoterica, no magic, no BS, just plain, number-driven science full of hard facts about what aging actually is (a decrease of bio-functional abilities due to decaying body functions over time), a fresh view on the subject (we're already becoming "unnaturally" old, so why not figure this out once and for all?) and some reality checks on popular health myths (if fats are so bad, why is the US population becoming fatter and fatter despite all that non-fat food?).

So the systematic approach is quite simple, but effective: Figure out the primary causes of death (heart disease, skeletal dysfunction, cancer) and find ways to prevent them from happening as early as possible. The "as early as possible" part is the most important one: The earlier one starts to work on preventing these factors, the longer the life expectancy. 

My takeaways:

  • Drink lots of water,
  • a BMI of 22-26 is a good place to be (I'm at 23),
  • avoid eating empty calories (all the "white" stuff that is not meat),
  • eat colorful veggies,
  • some supplements are actually really good (he especially mentioned Vitamin E and Omega-3 fatty acids),
  • exercise regularly. Actually, this is the biggest factor, capable of even compensating for a fat or a smoking lifestyle! I really need to start jogging again...

One interesting but not well understood factor in aging is hormones. There's a strong correlation between dropping levels of male and female sex hormones and their negative symptoms in aging (obvious, isn't it?), but it is not yet understood if and how taking hormone supplements really helps you overcome aging symptoms. Plus, taking hormones as pills is likely to produce other problems (as in liver overload...).

Anyway, this was a fascinating talk and I now need to understand more on this subject, although separating the wheat from the chaff is difficult if you're not a doctor or a biochemist...

Productive Nanosystems Roadmap 

This conference covered a great variety of topics, so the next talk by Pearl Chin was on a completely different topic: The Productive Nanosystems Roadmap. What's a productive nanosystem you might ask? It's a machine that operates at the molecular level to create things in an atomically precise way. Watch this short movie to see one in action.

The Productive Nanosystems Roadmap is all about the "How do we get there?" aspects of Molecular Nanotechnology. Similar to, but more challenging than the semiconductor business, this involves a huge amount of interdisciplinary work by physicists, chemists, biotechnologists, computer scientists, mechanical engineers, process technologists and many more. By synchronizing and bringing together different fields of research and development, the Nanotechnology Roadmap facilitates the creation of Productive Nanosystems.

Can't wait to have one of these replicators in my home...

Open Source Security

Yet another interesting and completely different subject: Open Source Security, by Christine Peterson, a founder of the Foresight Institute. The current physical security mechanisms, as implemented by major governments, are hugely centralized (as in DoD-centralized), not transparent (who really knows what happens inside the NSA, or behind the doors of your friendly airport security operations?) and they have a huge impact on privacy (did you know that "they" know what you read on an airplane?).

The idea of this talk is: Centralized security has its flaws (what happens if someone takes out the central parts of a nation's security system?), obscure security measures are prone to becoming a security threat by themselves (In Germany there's a current debate about the police monitoring license plates on a big scale vs. privacy rights) and of course, there's no fun in living in a 100% controlled and watched Orwellian society. So why not try to create a security system that is transparent, distributed and still protects privacy?

This "Open Source Security" system could be everywhere (like a neighborhood watch), it would be open to anyone (so nobody can manipulate the system) and it would work without invading people's privacy (a neighborhood watch keeps the neighbors secure, but doesn't know a thing about, say, the next city's neighborhoods).

Interesting concept and hopefully one that is going to be developed further. Sounds much, much better than what current governments would like to implement...

Mapping the Technology Landscape

I can't remember the exact title of this session, but this sounds like a good fit. The first of the afternoon sessions I visited (there were several in parallel and we couldn't visit all of them) was about finding the right way to categorize new technologies as they emerge and create headlines. It was run by Phil Bowermaster, who has an excellent blog called "The Speculist" and an accompanying podcast called "Fast Forward Radio".

After blogging for a while, Phil came up with a 2-dimensional coordinate system for charting technologies, based on the axes "Impact on Society" and "Impact on Technology". While this seemed to work for charting "spot resistant nano-pants" (low impacts on both society and technology, placing it into the "fake" corner) vs., say, a desktop molecular nanofactory (now we're getting serious...), it didn't feel like the real thing for charting new technology.

So Phil showed us his improved coordinate system, this time based on the axes "transformation" and "disruption". It intuitively makes more sense, as it better models the impact of technology on the world as we know it. But every model is only good until the next one comes around, so Phil welcomes your suggestions, too. See his article on "Disruption and Transformation".

Self-Improving A.I.

No futuristic conference without at least one A.I.-related topic. Artificial Intelligence may have had a difficult history, but the truth is that people tend to dismiss any advance in A.I. as being "nice, but not the real thing", be it speech recognition, route planning or beating Kasparov at chess. What's going to be the next milestone that people will choose to treat as "not real A.I."?

Ray Kurzweil observed that the development of technology happens at an accelerating pace. In fact, Moore's law only deals with advances in semiconductor technology, but its pattern of modeling the increasing amount of available calculations per $1000 can be observed all the way back to early mechanical calculators. Looking into the future, semiconductor experts are confident that Moore's law will hold for at least the next 15-20 years - and there are some more exciting technologies waiting to be used for computation once semiconductor chips become uninteresting. If the current rate of technological progress continues, then we will see a $1000 PC with the power of a human brain by 2025. Not a long time from now.

Steve Omohundro's session on self-improving A.I. dealt with the questions such as: What will drive self-improving A.I.s? What are the benefits and risks of self-improving A.I.s? What should we try to do right before they arrive? Read more about this topic at the Self-aware Systems website.

And for the lighter side of it, here's a hilarious comic on a very similar subject :).

Nanotech Literacy

Perhaps the most important aspect of nanotechnology right now is its acceptance. As soon as you learn about the great powers of nanotechnology, you can't help but imagine the great peril it might bring. Bill Joy's famous article "Why the future doesn't need us" is only one example.

But is denying or opposing change a solution? Certainly not. If we refuse to learn about the next wave of technology, others will. So we'd better learn how to do it right from the start. One major focus of the Foresight Institute is to advance beneficial nanotechnology, partly by educating people about its potential benefits to humanity.

Miguel Aznar's session on Nanotech Literacy focused on how to make Nanotechnology more accessible and understandable to children and students in schools. I think this is a great way of spreading the word, as it instantly will touch their parents as well. I used to teach my parents how to program our VCR, and I'm looking forward to my daughter teaching me how to operate our first family molecular nanotech factory :)

Read more in Miguel's blog.

Conclusion

This really was a most inspiring event. My goal was to understand more about the reality behind Nanotech and other future technologies, and I got much more out of this day than I expected. I'm very proud to see that Sun is a corporate member of the Foresight Institute and I'm going to sign up with them as a senior associate soon. I'm convinced that every dollar spent in advancing beneficial Nanotechnology is going to save us more trees and more species, reduce the levels of CO2 more aggressively, provide more clean energy, cure more cancers and advance humankind more thoroughly in the long term than any other investment.

If you want to learn more about the subject of Nanotechnology, I recommend looking at one of these articles.

Sunday Oct 21, 2007

How to burn high resolution DVD-Audio DVDs on Solaris and Linux (And it's legal!)


This weekend I've burned my first DVD-Audio DVD with high resolution music at 96 kHz/24 Bit.

It all started with this email I got from Linn Records, advertising the release of their Super Audio Surround Collection Vol 3 Sampler (Yes, targeted advertising works, but only if customers choose to receive it), which is offered in studio master quality FLAC format files, as a download. Gerald and I applauded Linn Records a few months ago for offering high quality music as lossless quality downloads, so I decided to try out their high resolution studio master quality offerings.

The music comes as 96kHz/24 Bit FLAC encoded files. These can be played back quite easily on a computer with a high resolution capable sound card, but computers don't really look good in living rooms, despite all the home theater PC and other efforts. The better alternative is to burn your own DVD-Audio and then use a DVD-A capable DVD player connected to your HiFi-amplifier to play back the music.

There's a common misconception that "DVD-Audio" means "DVD-Video" without the picture, which is wrong. DVD-Video is one standard, aimed at reproducing movies, that uses PCM, AC-3, DTS or MP2 (mostly lossy) for encoding audio, while DVD-Audio sacrifices moving pictures (allowing only still ones for illustration) so it can use the extra bandwidth for high resolution audio, encoded as lossless PCM or lossless MLP bitstreams. Also note that regular DVD players do not generally accept DVD-Audio discs; a player must explicitly state that it can handle the format, otherwise you're out of luck. Some if not most DVD-Audio discs are hybrids in that they additionally offer the content as DVD-Video streams with one of the lossy DVD-Video audio codecs, so they can be played on both DVD-Video and DVD-Audio players.

 

Now, after having downloaded a bunch of high-res FLAC audio files, how can you create a DVD-Audio disc? Here's a small open source program called dvda-author that does just that: Give it a bunch of FLAC or WAV files and a directory, and it'll create the correct DVD-A UDF file structure for you. It compiles very easily on Solaris so I was able to use my Solaris fileserver in the basement where I downloaded the songs to. Then you give the dvda-author output directory along with a special sort file (supplied by dvda-author) to mkisofs (which is included in Solaris in the /usr/sfw directory) and it'll create a DVD ISO image that you can burn onto any regular DVD raw media. It's all described nicely on the dvda-author How-To page. Linn Records also supplies a PNG image to download along with the music that you can print and use as your DVD-Audio cover.
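For the record, the whole pipeline boils down to three commands. Treat this as a sketch only: dvda-author's option names and the name of its generated sort file vary between versions, and the track names and volume label here are placeholders, so check the dvda-author How-To page for the details.

```shell
# 1. Build the DVD-Audio (AUDIO_TS) structure from the FLAC files
#    (option names vary by dvda-author version; see dvda-author --help).
dvda-author -o dvd_out -g track01.flac track02.flac track03.flac

# 2. Wrap it into a UDF ISO image. dvda-author supplies a sort file
#    that forces the file order DVD-A players expect; hand it to
#    mkisofs (found in /usr/sfw/bin on Solaris) with -sort.
mkisofs -udf -sort dvda.sort -V LINN_SAMPLER -o dvda.iso dvd_out/

# 3. Burn the image onto any regular blank DVD, e.g. with growisofs:
growisofs -dvd-compat -Z /dev/dvd=dvda.iso
```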

And how about iPods and other MP3-Players? Most open source media players such as the VideoLan Client (VLC) can transcode from high resolution FLAC format to MP3 or AAC so that's easily done, too. For Mac users, there's a comfortable utility called XLD that does the transcoding for you.
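If you prefer the command line, a batch conversion could look like the following. This uses ffmpeg, my own substitution for the VLC/XLD route mentioned above, and the bitrate is just an illustrative choice.

```shell
# Transcode every downloaded FLAC file to AAC for iPod use.
for f in *.flac; do
    ffmpeg -i "$f" -c:a aac -b:a 256k "${f%.flac}.m4a"
done
```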

Here's common misconception #2: Many people think AAC is proprietary to Apple, mostly because Apple heavily advertises its use as their standard for music encoding. This is wrong. AAC is actually an open standard; it is part of the ISO/IEC MPEG-4 specification and is therefore the legitimate successor to MP3. AAC delivers better audio quality at lower bitrates, and even the inventors of MP3, the Fraunhofer IIS institute, treat AAC as the legitimate successor; just check their current projects page under the letter "A". Apple developed the "Fairplay" DRM extension to QuickTime (which is the official MPEG-4/AAC encapsulation format) to be able to sell their iTunes Music Store as a download portal to the music industry. Fairplay is proprietary to Apple, but has nothing to do with AAC per se.

As much as I love Apple's way of using open standards wherever possible, I don't think it's a good thing that their marketing department creates the illusion of these technologies being Apple's own. This is actually an example of how AAC suffers in the public perception because people think it's proprietary where the opposite is true.

How is the actual music, you ask? Good. The album is a nice mixture of jazz and classical music, both in smooth and in more lively forms, great for a nice dinner and produced with a very high quality. Being a sampler, this album gives you a good overview of current Linn Records productions, so you can choose your favourite artists and then dig deeper into the music you liked most.

There's one drawback still: The high-res files available on the Linn Records download store are currently stereo only, while the physical SACD releases come with 5.1 surround sound. It would be nice if they could introduce 5.1 FLAC downloads in the future. That would make downloading high resolution audio content perfect, and this silly SACD/DVD-Audio/Dolby-TrueHD/DTS-HD Master Audio war would finally be over.


P.S.: A big hello to the folks at avsforum.com who were so kind to link to my previous high resolution audio entry!

 

Tuesday Oct 16, 2007

Walking through the CEC 2007 JavaFX Message Prompter Source

The CEC Message Prompter in Action

 
Now that I'm back from CEC and out of jetlag, I've had some time to clean up the CEC 2007 Message Prompter source code. Thanks to all those who asked for it, that was quite a motivation.

The CEC Message Prompter source code is free for your reading pleasure on an as-is basis: no warranty, no support, etc. Still, comments are of course very welcome.

The easiest way to try this out is to load up NetBeans (I use the current Beta 6), install the JavaFX module, then create a new JavaFX project. The stuff in the source code archive goes into the src subdirectory of your new JavaFX project. Choose "Main.fx" as the main class and feel free to enable Java Web Start.

In order to compile/run the app, you also need JAXB 2.0 (or use J2SE 6) and the mySQL JDBC Connector installed in NetBeans as libraries and assigned to the project you use for this app.

After starting the app, you'll see the window above. At the top is the message source selection GUI. Choose whether you want a database or a URL (for XML) connection. A sample XML file with some messages is included, so you probably want to use the URL method. Enter the file URL where you have your messages stored into the URL field, then click the right (next), left (previous) or X (clear) buttons to display the messages. The optional Session field is for filtering messages by session ID, but we haven't used it yet.

Before I start with the code, a few words of introduction: This is my first JavaFX project and I welcome any suggestions on how to better code in JavaFX. It is also my first Java/NetBeans project since a long time, so I'm sure I can still learn a lot more about how to properly do it. But the learning journey into creating this app has been a fun and instructive one, so I hope this code can help others learn more about JavaFX too. If I had to do it again (And I hope I will, next year), I'd do some stuff differently, which I'll discuss at the end of this posting. 

Let's walk through the code in roughly the order of how the message flow works:

  • The basic idea is this:
  1. The audience sends their questions, feedback, messages etc. to the CEC backstage team through either Email, Instant Messaging or SMS through special Email or IM accounts or mobile phone numbers. The CEC backstage team reads the messages and stores them in a database where they can be approved, marked for deletion, marked for display on the Message Prompter and assigned a sequence to display in.
  2. The CEC Message Prompter is the application that the people on stage (and occasionally the audience) see, and where the current question to be asked of the people on stage is displayed. So the app has to fetch messages from the database and display them on screen, on demand and in a visually intuitive way.
  3. For testing/development/backup purposes, the Message Prompter can also accept messages out of a single XML file instead of a database.
  • The top level directory is supposed to go into the src subdirectory of a NetBeans JavaFX project, but you could just as easily pull it into any other IDE or work from the command line out of this directory. CECMessage.xsd is an XML schema courtesy of Simon Cook that defines the XML format for a list of messages. ExampleMessages.xml contains a bunch of messages for testing purposes. Most of the source code is in the cecmessaging subdirectory, which is the name of the Java package bundle for this app. If you apply the message schema to the JAXB xjc.sh script, it creates the Java classes in the org/netbeans/xml/schema/cecmessage directory which describe the corresponding Java objects.
  • Some things are best left to real Java, in this case the message fetching and conversion into JavaFX digestable classes. The nice thing about JavaFX is that it can seamlessly use Java classes for doing any heavy-lifting. Messages can come in as an XML file or from a database, in both cases they are fetched from Java helper classes that handle the complexity of fetching messages and who return an object structure to the main JavaFX application.
    In the case of messages coming in as a single XML file, the file is parsed by the XMLHelper class in cecmessaging/XMLHelper.java using the JAXB framework. The resulting object structure can then be interpreted by the JavaFX main class. Make sure you include JAXB 2.0 or later if you use J2SE 5; in J2SE 6 it's already included.
    If messages are to be retrieved from a database, then the DBHelper class in cecmessaging/DBHelper.java is used. It uses the mySQL JDBC connector for database access but you could easily plug in any other database connector. For simplicity, the database data is converted into a JAXB structure as if it was coming out of an XML document. Here is the definition of the database that Simon created:

    Database: cec, Table: message

    Field               Type
    ------------------  ------------
    id                  int(11)
    sender              varchar(100)
    message             text
    topic               varchar(100)
    timeSent            timestamp
    session_id          int(11)
    device              varchar(10)
    approved            tinyint(1)
    deleted             tinyint(1)
    to_be_asked         tinyint(1)
    display_sort_order  int(11)

    Both XMLHelper and DBHelper sort the messages by the displayOrder field, then by id. The sort comparator for this particular ordering is in CECMessageComparator.java.
  • The heart of the Message Prompter lives in cecmessaging/Main.fx.
    • It starts with a few data structures:
      • The AnimValues class stores font sizes, colors, duration and other parameters used for animation. JavaFX does not let you specify defaults as part of the class definition, hence the attribute commands.
      • The Message class is modeled after the corresponding JAXB CECMessage class. It adds a few attributes to track a particular message's color and font size. The font size and color of a message depend on its position (whether it is highlighted or not) and can change while it is animated during transitions. That's why we need to keep track of them. The alive attribute is not used right now; it may become useful if I rework the animation stuff.
      • The Tag class is for handling, well, tags. Every word that shows up as a message, author, device or topic is treated as a tag, and a tag cloud is generated based on how often the word shows up on the screen. This class stores the tag word, counts the number of appearances and stores the current font size of that tag on screen. Again, we need to track font size for animation.
      • The MessageList class is the main model class the application uses. It contains an AnimValues instance, a list of Message objects and a list of tags. It knows where the messages come from and where the original message data in JAXB format is. It keeps track of the GUI Labels that graphically represent the messages on screen, plus it knows how many messages to display at once, which one is to be highlighted and other useful parameters.
    • The following operation statements are really methods. They are written in a script-like manner rather than an object-oriented one, which means they are not associated with a particular class other than the main one. Next time, I might use a stricter object-oriented approach, but hey, this is a scripting language, isn't it?
      • MessageSignature computes a single string out of all fields in a message for comparison purposes. Somehow .equals or .toString didn't work for me as expected, so I implemented this simple mechanism to see if two messages are equal.
      • ClearMessages clears all messages and their associated Label objects, and makes sure that dying messages are animated and their tags updated. Actually, today the death of a message isn't animated yet, but I hope to implement a nice way for messages to die. I loved my Apple Newton back in the '90s; it had this nice animation where deleted stuff would vanish in a puff of smoke :).
      • CecmToMessage takes a JAXB message created by the XMLHelper or DBHelper class and creates the corresponding JavaFX Message instance. It also handles basic true/false associations for the approved and deleted fields, which are meant to be boolean but are actually strings in the XML schema.
      • MessageToLabel creates a Java Swing Label object that displays the message on screen. The nice thing about the JavaFX Label implementation is that it understands HTML, so we can use HTML freely here to control how the message is displayed. Notice the bind statement where the Label's text is set: It ties the Label's text content to the Message's attributes (color, size, content). This means that whenever any of these attributes is changed in the Message object, the corresponding Label object is changed as well! This is a very nice mechanism for Model-View-Controller-like programming and a big time saver when coding.
      • The messageDisplayable function decides whether a message is supposed to be displayed. This is just a logic expression checking the approved, deleted and toBeAsked fields and filtering by sessionId (In case one wants to restrict messages to a particular session). One could have implemented the filtering at the XMLHelper or at the DBHelper level, but I felt it would be better to have full control over displaying messages from the app itself.
      • UpdateMessages checks all currently displayed messages against their counterparts in the XML file or the DB. The idea here is that we want to be able to change a message even if it's already displayed in the application (you know, when a bad word accidentally came through :) ). This is called regularly before adding new messages to the screen.
      • compareMessageOrder does just that. Messages come in already sorted, but we still need to decide on ordering when going through them to detect whether a message is missing, etc. (The naming is wrong; it should start uppercase. This operation started out as a function, but JavaFX doesn't accept if-then statements in functions...)
      • NextMessage adds a message to the display list. It also deals with the unexpected complexity of deciding which message to highlight in certain corner cases. For instance, when we want to preview 2 messages, the third one is to be highlighted, but if you only have 0-2 messages on screen, the highlight should be on the last one, etc. When done, the message animator is called to animate any newly highlighted or unhighlighted messages, and the tags are updated.
      • PreviousMessage does the opposite of NextMessage. Again, the handling of the highlight is a tad more complex than I would have wanted. Again, we animate here as well.
      • RefreshTags goes through all messages displayed on screen and makes sure the tag list is up to date. Then it starts animation for those tags that have changed.
      • AnimateMessages checks all messages and whether their font sizes match their position and highlighting status. Then it animates all messages that have changed their status into their destination sizes and colors. Animation is handled through the dur operator: it assigns a list of values to a single parameter in sequence, over a specified time. So when we want a piece of text to grow from 0 to 20 pixels in size over 500 milliseconds, we say something like text.size = [0..20] dur 500. Very nice! Color animations work by applying a string array with the color values in sequence to a string variable. I wasn't confident about how the animation works in terms of concurrency (for instance, if another thread happens to change a value while it is animated) and I've seen cases where the font sizes weren't correct (and that cost me quite a few sweat drops!), so I added some watchdog code to make sure the font size is correct after the end of the animation. Now that I've seen the CEC 2007 JavaFX session (sic!), I know a bit more about how this is supposed to work, so hopefully I won't need it any more :).
      • AnimateTags does similar things to the tags, a tad easier to do.
      • The LoadProperties stuff is not used at the moment, and neither is a properties file included with the source. I was planning to move all relevant defaults and constants into an external properties file, but didn't have the time to do it. But here's a start...
      • The Main part is fairly straightforward: It first instantiates the MessageList model structure with some default values, then proceeds to instantiate the GUI elements. Another nice thing about JavaFX is the declarative syntax, where you just write down the GUI class with all the desired values and the runtime system takes care of instantiating the classes, hooking them together and assigning values to them, as well as tying in the methods to be called when a GUI element is activated. Also, the bind command is your best friend here in that it automatically binds GUI attributes to the model classes and saves you the hassle of implementing callback methods etc. You don't even need a GUI builder; just write down the widget hierarchy and you're done. Very convenient.

That was it. All in all, learning JavaFX was a fun experience. And you can do it too, just go to the OpenJFX website and check out the tutorials and references.

What would I do differently if I had to write this app from scratch? Probably one or more of the following:

  • Use a real object-oriented style by attaching methods to classes etc., possibly with different classes in different files, loosely coupled by the main class, as in this nice Mariah Carey website example.
  • Rework the animation so it works on triggers. Triggers are a way of coupling code to variables, similar to binding: whenever a variable changes, the trigger code gets executed. For instance, the tags could be updated and animated using triggers.
  • Introduce more eye-candy. JavaFX comes with full Java2D support, so I'd dig deeper into its classes to implement nicer animations.
  • Make it more interactive by letting GUI elements slide in and out only when necessary, so there's more real estate for the messages.
  • Introduce images and symbols to add to the eye-candy.
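As a rough sketch of the trigger idea, here's how it might look in the 2007-era JavaFX Script syntax (the Message class and updateTagCloud operation are hypothetical names for illustration, not part of the posted code, and the exact trigger syntax may differ; check the OpenJFX references):

```
class Message {
    attribute text: String;
}

operation updateTagCloud() {
    // recompute and re-animate the tag display (hypothetical)
    System.out.println("tags updated");
}

// trigger: runs whenever a Message's text attribute is replaced;
// like bind, but for executing arbitrary code instead of updating a value
trigger on Message.text = newText {
    updateTagCloud();
}
```

Compared to the bind-based approach used in the app, triggers would let the tag animation react to model changes directly, without the GUI code having to poll or re-render explicitly.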

Thank you for reading this and I hope you enjoyed this JavaFX example. Let me know your thoughts by using the comment function or by sending me email!

Friday Oct 12, 2007

Final CEC Reflections: The Wynn, ZFS Under the Hood, Messaging wrap-up

I'm now back home, sorting through emails and cleaning up some stuff before a regular week of work begins. Here are some highlights from Tuesday and Wednesday during the Sun CEC 2007 conference in Las Vegas:

  • The Wynn: After visiting the CEC Party, Barton, Yan, Henning and I decided to have dinner at the Wynn. It's one of the newest hotels in town and a must-see. This place sure has style! We went to the Daniel Boulud Brasserie, which is located inside the hotel at the Lake of Dreams. This is one of the few restaurants in Las Vegas where you can actually eat outside the ever-present air-conditioning and enjoy a great view. The lake features a gigantic rectangular waterfall surrounded by a forest. The lake and the waterfall are part of several mini-shows that occur at regular intervals in the afternoon, featuring impressive animatronics such as a head coming out of the water (with animated faces projected onto it) or a gigantic frog leaning over the back wall of the waterfall, which also serves as a huge video screen. Music, light and animation are perfectly synchronized, so that, for instance, the head emerging from the water perfectly matches the face projected onto it, and the light ripples running over the lake perfectly match the animation on screen.
    This is definitely my favourite Vegas hotel now. I wonder where our stock price needs to be for us to afford having our next CEC at their convention center :).
  • ZFS Under the Hood: This was a great session done by Jason Bantham and Jarod Nash. They went through the ZFS Source Tour diagram and explained the little boxes one by one while describing the basic processes, mechanisms and data flow that ZFS uses to write and read data. And they were fun speakers too! Plus, each attendee who asked a question got a piece of chocolate thrown at them as an extra incentive to participate :).
  • Podcasting: After the closing session, Franz, Matthias, Brian, Dave and I recorded episode 3 of the CEC 2007 Podcast. We reflected on our impressions of the conference and on our project to aggregate and display audience messages during the main sessions. Actually, I'm cleaning up and commenting the JavaFX code as we speak, so I can publish it in the next post for your code-reading pleasure :).