Thursday Sep 17, 2009

New OpenSolaris ZFS Auto-Scrub Service Helps You Keep Proper Pool Hygiene

A harddisk that is being scrubbed

One of the most important features of ZFS is the ability to detect data corruption through the use of end-to-end checksums. In redundant ZFS pools (pools that are either mirrored or use a variant of RAID-Z), this can be used to fix broken data blocks by using the redundancy of the pool to reconstruct the data. This is often called self-healing.

This mechanism works whenever ZFS accesses any data, because it will always verify the checksum after reading a block of data. Unfortunately, this does not help if you don't regularly look at your data: Bit rot happens, and with every broken block that goes unchecked (and therefore uncorrected), the probability increases that the redundant copy will be affected by bit rot too, resulting in data corruption.

Therefore, zpool(1M) provides the useful scrub sub-command which will systematically go through each data block on the pool and verify its checksum. On redundant pools, it will automatically fix any broken blocks and make sure your data is healthy and clean.

It should now be clear that every system should have its pools scrubbed regularly to take full advantage of the ZFS self-healing feature. But you know how it is: You set up your server, little things get overlooked, and that cron(1M) job you wanted to set up for regular pool scrubbing falls off your radar.
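For the record, the do-it-yourself approach amounts to a single crontab entry like the following (a fragment only: the pool name "tank" and the schedule are made up, and you'd need one line per pool):

```shell
# crontab fragment: scrub the pool "tank" at 03:00 on the 1st of every month
0 3 1 * * /usr/sbin/zpool scrub tank
```

The SMF service described in this post replaces exactly this kind of entry and adds proper management through svcadm(1M) and svccfg(1M).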

Introducing the ZFS Auto-Scrub SMF Service

Here's a service that is easy to install and configure that will make sure all of your pools will be scrubbed at least once a month. Advanced users can set up individualized schedules per pool with different scrubbing periods. It is implemented as an SMF service which means it can be easily managed using svcadm(1M) and customized using svccfg(1M).

The service borrows heavily from Tim Foster's ZFS Auto-Snapshot Service. This is not just coding laziness: it also helps minimize bugs in common tasks (such as setting up periodic cron jobs) and provides better consistency across multiple similar services. Plus: Why reinvent the wheel?

Requirements

The ZFS Auto-Scrub service assumes it is running on OpenSolaris. It should run on any recent distribution of OpenSolaris without problems.

More specifically, it uses the -d switch of the GNU variant of date(1) to parse human-readable date values. Make sure that /usr/gnu/bin/date is available (which is the default in OpenSolaris).

Right now, this service does not work on Solaris 10 out of the box (unless you install GNU date in /usr/gnu/bin). A future version of this script will work around this issue to make it easily usable on Solaris 10 systems as well.
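To illustrate what the service needs GNU date for: the -d switch parses human-readable date strings into values a script can compute with, such as epoch seconds. A minimal sketch (plain `date` here stands in for /usr/gnu/bin/date; on Solaris 10 you'd have to install GNU coreutils first):

```shell
# GNU date can parse a human-readable date string (-d) and print it as
# seconds since the epoch (+%s); Solaris /usr/bin/date cannot do this.
epoch=$(date -u -d "1970-01-02 00:00:00" +%s)
echo "$epoch"
```

One day after the epoch is 86400 seconds; this is the kind of arithmetic a script needs to decide whether a scrubbing period has elapsed.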

Download and Installation

You can download Version 0.5b of the ZFS Auto-Scrub Service here. The included README file explains everything you need to know to make it work:

After unpacking the archive, start the install script as a privileged user:

pfexec ./install.sh

The script will copy three SMF method scripts into /lib/svc/method, import three SMF manifests and start a service that creates a new Solaris role for managing the service's privileges while it is running. It also installs the OpenSolaris Visual Panels package and adds a simple GUI to manage this service.

ZFS Auto-Scrub GUI

After installation, you need to activate the service. This can be done easily with:

svcadm enable auto-scrub:monthly

or by running the GUI with:

vp zfs-auto-scrub

This will activate a pre-defined instance of the service that makes sure each of your pools is scrubbed at least once a month.

This is all you need to do to make sure all your pools are regularly scrubbed.

If your pools haven't been scrubbed before or if the time of their last scrub is unknown, the script will proceed and start scrubbing. Keep in mind that scrubbing consumes a significant amount of system resources, so if you feel that a currently running scrub slows your system down too much, you can interrupt it by saying:

pfexec zpool scrub -s <pool name>

In this case, don't worry, you can always start a manual scrub at a more suitable time or wait until the service kicks in by itself during the next scheduled scrubbing period.
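To decide whether a scrub is needed, the service has to find out when a pool was last scrubbed. One way is to parse the scrub line of zpool status output; the sample line below is made up, and the exact wording varies between releases, so treat this only as a sketch:

```shell
# A captured status line as zpool status might print it (sample, made up):
line='  scrub: scrub completed after 1h23m with 0 errors on Thu Sep 17 03:14:07 2009'

# Extract everything after the last " on ", i.e. the completion date
last_scrub=$(printf '%s\n' "$line" | sed -n 's/.* completed .* on \(.*\)$/\1/p')
echo "$last_scrub"
```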

Should you want to get rid of this service, use:

pfexec ./install.sh -d

The script will then disable any instances of the service, remove the manifests from the SMF repository, delete the scripts from /lib/svc/method, remove the special role and the authorizations the service created and finally remove the GUI. Notice that it will not remove the OpenSolaris Visual Panels package in case you want to use it for other purposes. Should you want to get rid of this as well, you can do so by saying:

pkg uninstall OSOLvpanels

Advanced Use

You can create your own instances of this service for individual pools at specified intervals. Here's an example:

  constant@fridolin:~$ svccfg
  svc:> select auto-scrub
  svc:/system/filesystem/zfs/auto-scrub> add mypool-weekly
  svc:/system/filesystem/zfs/auto-scrub> select mypool-weekly
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> addpg zfs application
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/pool-name=mypool
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/interval=days 
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/period=7
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/offset=0
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/verbose=false
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> end
  constant@fridolin:~$ svcadm enable auto-scrub:mypool-weekly

This example will create and activate a service instance that makes sure the pool "mypool" is scrubbed once a week.

Check out the zfs-auto-scrub.xml file to learn more about how these properties work.

Implementation Details

Here are some interesting aspects of this service that I came across while writing it:

  • The service comes with its own Solaris role zfsscrub under which the script runs. The role has just the authorizations and profiles necessary to carry out its job, following the Solaris Role-Based Access Control philosophy. It comes with its own SMF service that takes care of creating the role if necessary, then disables itself. This makes a future deployment of this service with pkg(1) easier: pkg(1) does not allow any scripts to be started during installation, but it does allow activation of newly installed SMF services.
  • While zpool status can show you the last time a pool has been scrubbed, this information is not stored persistently. Every time you reboot or export/import the pool, ZFS loses track of when the last scrub of this pool occurred. This has been filed as CR 6878281. Until that is resolved, we need to remember the time of the last scrub ourselves. This is done by introducing another SMF service that periodically checks the scrub status and, once the scrub has finished, records its completion date/time in a custom ZFS property called org.opensolaris.auto-scrub:lastscrub on the pool's root filesystem. We call this service whenever a scrub is started, and it deactivates itself once its job is done.
  • As mentioned above, the GUI is based on the OpenSolaris Visual Panels project. Many thanks to the people on its discussion list for helping me get going. More about creating a Visual Panels GUI in a future blog entry.
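The bookkeeping from the second bullet boils down to plain epoch-seconds arithmetic. Here's a hedged sketch: scrub_due is a hypothetical helper of mine, not code from the service, and the timestamp it checks would come from reading the org.opensolaris.auto-scrub:lastscrub property of a live pool:

```shell
# Hypothetical helper: is a scrub due, given the last scrub time in epoch
# seconds (as it would be stored in the lastscrub user property) and a
# scrubbing period in days?
scrub_due() {
  last=$1
  period_days=$2
  now=$(date +%s)
  [ $(( now - last )) -ge $(( period_days * 86400 )) ]
}

# Example: a pool last scrubbed 40 days ago is overdue on a 30-day period
forty_days_ago=$(( $(date +%s) - 40 * 86400 ))
if scrub_due "$forty_days_ago" 30; then verdict=due; else verdict=ok; fi
echo "$verdict"
```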

Lessons learned

It's funny how a very simple task like "Write an SMF service that takes care of regular zpool scrubbing" can develop into a moderately complex thing. It grew into three different services instead of one, each with its own scripts and SMF manifests. It required an extra RBAC role to make it more secure. I ran into some zpool(1M) limitations which I now feel are worthy of RFEs, and working around them made the whole thing slightly more complex. Add an install and de-install script, some minor quirks like using GNU date(1) instead of the regular one to get a reliable parser for human-readable date strings, not to mention a GUI, and you cover quite a lot of ground even with a service as seemingly simple as this.

But this is what made this project interesting to me: I learned a lot about RBAC and SMF (of course), some new scripting hacks from the existing ZFS Auto-Snapshot service, found a few minor bugs (in the ZFS Auto-Snapshot service) and RFEs, programmed some Java including the use of the NetBeans GUI builder and had some fun with scripting, finding solutions and making sure stuff is more or less cleanly implemented.

I'd like to encourage everyone to write their own SMF services for whatever tools they install or write for themselves. It helps you think your stuff through, make it easy to install and manage, and you get a better feel of how Solaris and its subsystems work. And you can have some fun too. The easiest way to get started is by looking at what others have done. You'll find a lot of SMF scripts in /lib/svc/method and you can extract the manifests of already installed services using svccfg export. Find an SMF service that is similar to the one you want to implement, check out how it works and start adapting it to your needs until your own service is alive and kicking.

If you happen to be in Dresden for OSDevCon 2009, check out my session on "Implementing a simple SMF Service: Lessons learned" where I'll share more of the details behind implementing this service including the Visual Panels part.

Edit (Sep. 21st) Changed the link to CR 6878281 to the externally visible OpenSolaris bug database version, added a link to the session details on OSDevCon.

Edit (Jun. 27th, 2011) As the Mediacast service was decommissioned, I have re-hosted the archive in my new blog and updated the download link. Since vpanels has changed a lot lately, the vpanels integration doesn't work any more, but the SMF service still does.

Wednesday Aug 13, 2008

ZFS Replicator Script, New Edition

Many crates on a bicycle: a metaphor for ZFS snapshot replication.

About a year ago, I blogged about a useful script that handles recursive replication of ZFS snapshots across pools. It helped me migrate my pool from a messy configuration into the clean two-mirrored-pairs configuration I have now.

Meanwhile, the fine guys at the ZFS developer team introduced recursive send/receive into the ZFS command, which turns most of what the script does into a simple -R flag to the zfs(1M) send command.

Unfortunately, this new version of the ZFS command has not (yet?) been ported back to Solaris 10, so my ZFS snapshot replication script is still useful for Solaris 10 users, such as Mike Hallock from the School of Chemical Sciences at the University of Illinois at Urbana-Champaign (UIUC). He wrote:

Your script came very close to exactly what I needed, so I took it upon myself to make changes, and thought in the spirit of it all, to share those changes with you.

The first change he introduced was the ability to supply a pattern (via -p) that selects which of the potentially many snapshots to replicate. He's a user of Tim Foster's excellent automatic ZFS snapshot service, like myself, and wanted to base his migration solely on the daily snapshots, not any other ones.

Then, Mike wanted to migrate across two different hosts on a network, so he introduced the -r option that allows the user to specify a target host. This option simply pipes the replication data stream through ssh at the right places, making ZFS filesystem migration across any distance very easy.
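In essence, the -r option wraps the receiving end of the replication stream in ssh. The sketch below only prints the resulting pipeline instead of running it, since it would need live pools; all pool, snapshot and host names are made up, and the exact flags Mike's script passes to zfs send/receive may differ:

```shell
SNAP="tank/data@daily-2008-08-13"   # hypothetical source snapshot
HOST="backuphost"                   # hypothetical target host
POOL="backup"                       # hypothetical target pool

# Dry run: show the replication pipeline rather than executing it
cmd="zfs send $SNAP | ssh $HOST zfs receive -d $POOL"
echo "$cmd"
```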

The updated version including both of the new features is available as zfs-replicate_v0.7.tar.bz2. I didn't test this new version but the changes look very good to me. Still: Use at your own risk.

Thanks a lot, Mike! 

Thursday Jul 03, 2008

Welcome to the year 2038!

An illustration of the year 2038 problem. Image taken from Wikipedia, see license there.

A lot of interesting things are going to happen in the next 30 years. One of them will be a big push to fix the so-called "Year 2038 Problem" on Solaris and other Unix and C-based OSes (assuming there'll be any left), which will be similar to the "Year 2000 Problem".

The Year 2038 Problem

To understand the Year 2038 Problem, check out the definition of time_t in sys/time.h:

typedef long time_t; /* time of day in seconds */

To represent a date/time combination, most Unix OSes store the number of seconds since January 1st, 1970, 00:00:00 (UTC) in such a time_t variable. On 32-Bit systems, "long" is a signed integer between -2147483648 and 2147483647 (see types.h). This covers the range between December 13th, 1901, 20:45:52 (UTC) and January 19th, 2038, 03:14:07, which the fathers of C and Unix thought to be sufficient back then in the seventies.

On 64-Bit systems, time_t can be much bigger (or smaller), covering a range of several hundred thousands of years, but if you're 32-Bit in 2038 you'll be in trouble: A second after January 19th, 2038, 03:14:07 you'll travel back in time and immediately find yourself in the middle of December 13th, 1901, 20:45:52 with a major headache called "overflow".
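You can verify the exact rollover moment with a GNU-style date(1), whose @ syntax interprets its argument as epoch seconds:

```shell
# Largest value a signed 32-bit time_t can hold: 2^31 - 1
max=$(( (1 << 31) - 1 ))
echo "$max"

# ...which corresponds to the last second before the overflow (UTC)
rollover=$(date -u -d "@$max" '+%Y-%m-%d %H:%M:%S')
echo "$rollover"
```

One second later, a 32-Bit time_t wraps around to -2147483648, which is how you end up back on December 13th, 1901.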

More details about this problem can be found on its Wikipedia page.

2038 could be today...

Well, you might say, I'll most probably be retired in 2038 anyway and of course, there won't be any 32-Bit systems that far in the future, so who cares?

A customer of mine cared. They run a very big file server infrastructure, based on Solaris, ZFS and a number of Sun Fire X4500 machines. A big infrastructure like this also has a large number of clients in many variations. And some of their clients have a huge problem with time: They create files with a date after 2040.

Now, the NFS standard will happily accept dates outside the 32-Bit time_t range and so will ZFS. But any program compiled in 32-Bit mode (and there are many) will run into an overflow error as soon as it wants to handle such a file. Incidentally, most of the Solaris file utilities (you know, rm, cp, find, etc.) are still shipped in 32-Bit, so having files 30+ years in the future is a big problem if you can't administer them.

The 64-Bit solution

One simple solution is to recompile your favourite file utilities, say, from GNU coreutils in 64-Bit mode, then put them into your path and hello future! You can do this by saying something like:

CC=/opt/SUNWspro/bin/cc CFLAGS=-m64 ./configure --prefix=/opt/local; make

(Use /opt/SunStudioExpress if you're using Sun Studio Express).

Now, while trying to reproduce the problem and sending some of my own files into the future, I found out, thanks to Chris and his short "what happens if I try" DTrace script, that OpenSolaris already has a way to deal with these problems: UFS and ZFS just won't accept any dates outside the 32-Bit range any more (check out lines 2416-2428 in zfs_vnops.c). Tmpfs will, so at least I could test there on my OpenSolaris 2008.05 laptop.

That's one way to deal with it, but shutting the doors doesn't help our poor disoriented client of the future. And it's also only available in OpenSolaris, not Solaris 10 (yet).

The DTrace solution

So, I followed Ulrich's helpful suggestions and Chris' example and started to hack together a DTrace script of my own that would print out who is trying to assign a date outside of 32-Bit-time_t to what file, and another one that would fix those dates so files can still be accepted and dealt with the way sysadmins expect.

The first script is called "showbigtimes" and it does just that:

constant@foeni:~/file/projects/futurefile$ pfexec ./showbigtimes_v1.1.d 
dtrace: script './showbigtimes_v1.1.d' matched 7 probes
CPU     ID                    FUNCTION:NAME
  0  18406                  futimesat:entry 
UID: 101, PID: 2826, program: touch, file: blah
  atime: 2071 Jun 23 12:00:00, mtime: 2071 Jun 23 12:00:00

^C

constant@foeni:~/file/projects/futurefile$ /usr/bin/amd64/ls -al /tmp/blah
-rw-r--r--   1 constant staff          0 Jun 23  2071 /tmp/blah
constant@foeni:~/file/projects/futurefile$

Of course, I ran "/opt/local/bin/touch -t 207106231200 /tmp/blah" in another terminal to trigger the probe in the script above (that was a 64-Bit touch compiled from GNU coreutils).

A couple of non-obvious hoops needed to be dealt with:

  • To make the script thread-safe, all variables need to be prepended with self->.
  • There are two system calls that can change a file's date: utimes(2) and futimesat(2) (let me know if you know of more). The former is very handy, because we can steal the filename from its first argument, but the latter also allows passing in just a file descriptor. If we want to see the file names for futimesat(2) calls, we may need to figure them out from the file descriptor. I decided to create my own fd-to-filename table by tapping into open(2) and close(2) calls, because chasing down the thread data structures or calling pfiles from within a D script would have been more tedious/ugly.
  • Depending on whether we are fired from utimes(2) or futimesat(2), the arguments we're interested in change place, i.e. the filename will come from arg0 in the case of utimes(2) or arg1 if futimesat(2) was used. To always get the right argument, we can use something like probefunc == "utimes" ? arg0 : arg1.
  • We can't directly access arguments to system calls and manipulate them, we have to use copyin().

I hope the comments inside the script are helpful. Be sure to check out the DTrace Documentation, which was very useful to me.

The second script is called correctbigtimes.d. It not only alerts us to files being sent into the future, but also automatically corrects the dates to the current date/time in order to prevent any time travel outside the bounds of 32-Bit time_t:

constant@foeni:~/file/projects/futurefile$ pfexec ./correctbigtimes_v1.1.d 
dtrace: script './correctbigtimes_v1.1.d' matched 2 probes
dtrace: allowing destructive actions
CPU     ID                    FUNCTION:NAME
  0  18406                  futimesat:entry 
UID: 101, PID: 2844, program: touch, fd: 0, file: 
  atime: 2071 Jun 23 12:00:00, mtime: 2071 Jun 23 12:00:00
Corrected atime and mtime to: 2008 Jul  3 16:23:25

^C

constant@foeni:~/file/projects/futurefile$ ls -al /tmp/blah
-rw-r--r-- 1 constant staff 0 2008-07-03 16:23 /tmp/blah
constant@foeni:~/file/projects/futurefile$

As you can see, we enabled DTrace's destructive mode (of course only for constructive purposes) which allows us to change the time values on the fly and ensure a stable time continuum.

This time, I left out the code that created the file descriptor-to-filename table, because this script may potentially be running for a long time and I didn't want to consume precious memory for just a convenience feature (otherwise, we'd have kept an extra table of all open files for all running threads in the system!). If we get a filename string, we print it; otherwise the file descriptor needs to suffice, and we can always look it up through pfiles(1).

The actual time modification takes place inside our local variables, which then get copied back into the system call through copyout().

I hope you liked this little excursion into the year 2038, which can happen sooner than we think for some. To me, this was a great opportunity to dig a little deeper into DTrace, a powerful tool that shows us what's going on while enabling us to fix stuff on the fly.

Update: Ulrich had some suggestions and found a bug, so I updated both scripts to version 1.2:

  • It's better to use two individual probes for utimes(2) and futimesat(2) than placing if-then structures based on probefunc. Improves readability, less error-prone, more efficient.
  • The predicates are now much simpler due to using struct timeval arrays there already.
  • Introduced constants for LONG_MIN and LONG_MAX to improve readability of the predicates.
  • The filename table doesn't account for a process opening a file in one thread and then passing the fd to another thread. Therefore, it's better to use a 2-dimensional array indexed by pid and fd instead of a thread-local one.
  • The +8 in the predicate's fetch was incorrect; +16 or sizeof(struct timeval) would have been correct. That's fixed by using the original structures right from the beginning at predicate time.

The new versions are already linked from above or available here: showbigtimes_v1.2.d, correctbigtimes_v1.2.d.

Monday Apr 28, 2008

Presenting images and screenshots the cool, 3D, shiny way

My daughter Amanda in her 2D cheerfulness.

If you give a presentation about hardware products, it is easy to make your slides look good: Remove boring text, add nice photos of your hardware and all is well.

But what if you have to present on software, some web service or give a Solaris training with lots of command line stuff?

Sure, you can do screenshots and hope that the GUI looks nice. Or use other photos (like the one to the left) that may or may not relate to the software you present about.

But screenshots and photos (to a lesser degree) are so, well, 2D. They look boring. Wouldn't it be nice to present your screenshots the way Apple presents its iTunes software? Like add some 3D depth to your slide-deck or website, with a nice, shiny, reflective underground?

Well, you don't need to spend thousands of dollars on art departments and graphics artists (they'd be glad to do something different for a change) or work long hours with Photoshop or the Gimp (a most excellent piece of software, BTW), trying to create that stylish 3D look. Here's a script that can do this easily for you!

You're probably wondering why my daughter Amanda shows up at the top of this article. Well, she was volunteered to be a test subject for my new script. The script uses ImageMagick and POV-Ray in a similar way to my earlier photocube script that we now use to generate the animated cube of the HELDENFunk show notes. It places any image you give it into a 3D space and adds a nice, shiny reflection to it. Let's see how Amanda looks after she's been through the featurepic.sh script:

-bash-3.00$ ./featurepic.sh -s 200 Amanda_small.jpg
Fetching and pre-processing file:///home/constant/software/featurepic/Amanda_small.jpg
Rendering image.
Writing image: featurepic.jpg
Ready.

Amanda, in her new 3D shininess

The size (-s) parameter defines the length of either width or height of the result image, whichever is larger. In this case, we choose an image size of a maximum of 200x200 pixels, so the image can fit this blog. You can see the result to the right. Nice, eh?

As you can see, her picture has now been placed into a 3D scene, slightly rotated to the left, onto a shiny, white surface. More interesting than the usual flat picture on a blog, isn't it?

The script uses POV-Ray to place and rotate the photo in 3D and to generate the reflection. ImageMagick is used for pre- and post-processing the image. The reflection is not real, it is actually the same picture, flipped across the y axis and with a gradient transparency applied to it. That way, the reflection can be controlled much better. I tried the real thing and it didn't want to look artistic enough :).

The amount of rotation, the reflection intensity and the length of the reflective tail can be adjusted with command-line switches, so can the height of the camera. Here's an example that uses all of these parameters:

-bash-3.00$ ./featurepic.sh -h
Usage: ./featurepic.sh: [-a angle] [-c cameraheight] [-p] [-r reflection] [-s size] [-t taillength] image
-p creates a PNG image with transparency, otherwise a JPEG image is created.
Defaults: -a 15 -c 0.3 -r 0.3 -s 512 -t 0.3
-bash-3.00$ ./featurepic.sh -a 30 -c 0.1 -r 0.8 -s 200 -t 0.5 Amanda_small.jpg
Fetching and pre-processing file:///home/constant/software/featurepic/Amanda_small.jpg
Rendering image.
Writing image: featurepic.jpg
Ready.

Amanda with more shininess.

The angle is in degrees and can be negative. One good thing about rotating the image into 3D is that you gain horizontal real estate to fit that slightly longer bullet point in. It helps you trade off image width for height without losing too much detail. An angle value of up to 30 is still ok; I wouldn't recommend more than that.

The camera height (-c) value is relative to the picture: 0 is ground level, 1 is at the top edge. The camera will always look at the center of the image. Camera height values below 0.5 are good because a camera below the subject makes it look slightly more impressing. Values above 0.5 make you look down at the picture, making it a bit smaller and less significant.

The reflection intensity (-r) goes from 0 (no reflection) to 1 (perfect mirror) while the length of the reflection (the fade-off "tail", -t) goes from 0 (no tail) to 1 (same size as image). Smaller values for reflection and the tail length make the reflection more subtle and less distracting. I think the default values are very good for most cases.

Check out the -p option for a nicer way to integrate the resulting image into other graphical elements of your presentation. It creates a PNG image with a transparency channel. This means you can place it above other graphical elements (such as a different background color) and the reflection will still look right. See the next example to the right, where Amanda prefers a pink background. Keep in mind that the rendering step still assumes a white background, so drastic changes in background may or may not result in slight artifacts at the edges.

Amanda loves pink backgrounds!

You can also use this script with some pictures of hardware to make them look more interesting, provided the hardware is shot straight from the front and doesn't have any border at the bottom. Use an angle value of 0; this will place your hardware onto that virtual glossy carbon plastic that makes it look nicer. See below for an embellished Sun Fire T5440 Server, the new flagship in our line of Chip-Multi-Threading (CMT) servers.

This script should work on any unixoid OS, especially Solaris, that understands sh and where reasonably recent versions of ImageMagick (6.x.x) and POV-Ray are available.

You can get ImageMagick and POV-Ray from their websites. On Solaris, you can easily install them through Blastwave. The version of ImageMagick that is shipped with Solaris in /usr/sfw is not recent enough for the way I'm using it, so the Blastwave version is recommended at the moment.

The Sun Fire T5440 Server, plus some added shininess.

The featurepic.sh script is free, open source, distributed under the CDDL, and you don't have to attribute its use when using the resulting images in your own presentations, websites, or other derivative work.

It's free as in "free beer". Speaking of which, if you like this script, leave a comment or send me email at constantin at sun dot com telling me what you did with it, what other features you'd like to see in the script and where I can meet you for some beer :).

 

Tuesday Nov 27, 2007

Shrink big presentations with ooshrink

I work in an environment where people use presentations a lot. Of course, we like to use StarOffice, which is based on OpenOffice for all of our office needs.

Presentation files can be big. Very big. Never-send-through-email big. Especially when they come from marketing departments and contain lots of pretty pictures. I just tried to send a Sun Systems overview presentation (which I created myself, so less marketing fluff), and it was still over 22 MB!

So here comes the beauty of Open Source, and in this case: Open Formats. It turns out that OpenOffice and StarOffice documents are actually ZIP files that contain XML for the actual documents, plus all the associated image files in a simple directory structure. A few years ago I wrote a script that takes an OpenOffice document, unzips it, looks at all the images in the document's structure and optimizes their compression algorithm, size and other settings based on some simple rules. That script was very popular with my colleagues; it got lost for a while and thanks to Andreas it was found again. Still, colleagues ask me about "that script, you know, the one that used to shrink those StarOffice presentations" once in a while.
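Because ODF documents are plain ZIP archives, any ZIP tool can look inside one. The toy reconstruction below uses Python's zipfile command-line interface as a stand-in for an office suite; the file names mirror the real ODF layout, but the contents are obviously fake:

```shell
# Build a minimal ODF-like archive: XML files at the top level,
# images under Pictures/ (placeholder contents only)
mkdir -p demo/Pictures
echo '<office:document-content/>' > demo/content.xml
echo '<office:document-styles/>'  > demo/styles.xml
: > demo/Pictures/image1.png
( cd demo && python3 -m zipfile -c ../demo.odp content.xml styles.xml Pictures )

# Listing the archive shows exactly the structure ooshrink unzips,
# optimizes and re-zips
listing=$(python3 -m zipfile -l demo.odp)
echo "$listing"
```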

Today, I brushed it up a little and taught it to accept the newer od[ptdc] extensions, and it still works remarkably well. Here are some examples:

  • The Sun homepage has a small demo presentation with a few vacation photos. Let's see what happens:
    bash-3.00$ ls -al Presentation_Example.odp
    -rw-r--r--   1 constant sun       392382 Mar 10  2006 Presentation_Example.odp
    bash-3.00$ ooshrink -s Presentation_Example.odp
    bash-3.00$ ls -al Presentation_Example.*
    -rw-r--r--   1 constant sun       337383 Nov 27 11:36 Presentation_Example.new.odp
    -rw-r--r--   1 constant sun       392382 Mar 10  2006 Presentation_Example.odp

    Well, that was a 15% reduction in file size. Not earth-shattering, but we're getting there. BTW: The -s flag is for "silence", we're just after results (for now).

  • On BigAdmin, I found a presentation with some M-Series config diagrams:

    bash-3.00$ ls -al Mseries.odp
    -rw-r--r-- 1 constant sun 1323337 Aug 23 17:23 Mseries.odp
    bash-3.00$ ooshrink -s Mseries.odp
    bash-3.00$ ls -al Mseries.*
    -rw-r--r-- 1 constant sun 379549 Nov 27 11:39 Mseries.new.odp
    -rw-r--r-- 1 constant sun 1323337 Aug 23 17:23 Mseries.odp

    Now we're getting somewhere: This is a reduction by 71%!

  • Now for a real-world example. My next victim is a presentation by Teera about JRuby. I just used Google to search for "site:sun.com presentation odp", so Teera is completely innocent. This time, let's take a look behind the scenes with the -v flag (verbose):
    bash-3.00$ ooshrink -v jruby_ruby112_presentation.odp
    Required tools "convert, identify" found.
    ooshrink 1.2
    Check out "ooshrink -h" for help information, warnings and disclaimers.

    Creating working directory jruby_ruby112_presentation.36316.work...
    Unpacking jruby_ruby112_presentation.odp...
    Optimizing Pictures/1000020100000307000000665F60F829.png.
    - This is a 775 pixels wide and 102 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 947, New: 39919. We better keep the original.
    Optimizing Pictures/100000000000005500000055DD878D9F.jpg.
    - This is a 85 pixels wide and 85 pixels high JPEG file.
    - We will try re-encoding this image with JPEG quality setting of 75%.
    - Failure: Old: 2054, New: 2089. We better keep the original.
    Optimizing Pictures/1000020100000419000003C07084C0EF.png.
    - This is a 1049 pixels wide and 960 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 99671, New: 539114. We better keep the original.
    Optimizing Pictures/10000201000001A00000025EFBC8CCCC.png.
    - This is a 416 pixels wide and 606 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 286677, New: 349860. We better keep the original.
    Optimizing Pictures/10000000000000FB000001A6E936A60F.jpg.
    - This is a 251 pixels wide and 422 pixels high JPEG file.
    - We will try re-encoding this image with JPEG quality setting of 75%.
    - Success: Old: 52200, New: 46599 (-11%). We'll use the new picture.
    Optimizing Pictures/100000000000055500000044C171E62B.gif.
    - This is a 1365 pixels wide and 68 pixels high GIF file.
    - This image is too large, we'll resize it to 1280x1024.
    - We will convert this image to PNG, which is probably more efficient.
    - Failure: Old: 2199, New: 39219. We better keep the original.
    Optimizing Pictures/100000000000019A000002D273F8C990.png.
    - This is a 410 pixels wide and 722 pixels high PNG file.
    - This picture has 50343 colors, so JPEG is a better choice.
    - Success: Old: 276207, New: 32428 (-89%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/1000000000000094000000E97E2C5D52.png.
    - This is a 148 pixels wide and 233 pixels high PNG file.
    - This picture has 4486 colors, so JPEG is a better choice.
    - Success: Old: 29880, New: 5642 (-82%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/10000201000003E3000003E4CFFA65E3.png.
    - This is a 995 pixels wide and 996 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 196597, New: 624633. We better keep the original.
    Optimizing Pictures/100002010000013C0000021EDE4EFBD7.png.
    - This is a 316 pixels wide and 542 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 159495, New: 224216. We better keep the original.
    Optimizing Pictures/10000200000002120000014A19C2D0EB.gif.
    - This is a 530 pixels wide and 330 pixels high GIF file.
    - This image is transparent. Can't convert to JPEG.
    - We will convert this image to PNG, which is probably more efficient.
    - Failure: Old: 39821, New: 56736. We better keep the original.
    Optimizing Pictures/100000000000020D0000025EB55F72E3.png.
    - This is a 525 pixels wide and 606 pixels high PNG file.
    - This picture has 17123 colors, so JPEG is a better choice.
    - Success: Old: 146544, New: 16210 (-89%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/10000000000000200000002000309F1C.png.
    - This is a 32 pixels wide and 32 pixels high PNG file.
    - This picture has 256 colors, so JPEG is a better choice.
    - Success: Old: 859, New: 289 (-67%). We'll use the new picture.
    Patching content.xml with new image file name.
    Patching styles.xml with new image file name.
    Patching manifest.xml with new image file name.
    Optimizing Pictures/10000201000001BB0000006B7305D02E.png.
    - This is a 443 pixels wide and 107 pixels high PNG file.
    - This image is transparent. Can't convert to JPEG.
    - We will try re-encoding this image with PNG compression level 9.
    - Failure: Old: 730, New: 24071. We better keep the original.
    All images optimized.
    Re-packing...
    Success: The new file is only 67% as big as the original!
    Cleaning up...
    Done.

    Neat. We just shaved a third off of a 1.3MB presentation file and it still looks as good as the original!

    As you can see, the script goes through each image one by one and tries to come up with better ways of encoding images. The basic rules are:

    • If an image is a PNG or GIF with more than 128 colors and no transparency, it's probably better off as a JPEG. If JPEG is not an option, the script also tries re-encoding GIFs and other legacy formats as PNGs.
    • Images bigger than 1280x1024 don't make a lot of sense in a presentation, so they're resized to be at most that size.
    • JPEG lets you choose a quality level. 75% is "good enough" for presentation purposes, so we'll try that and see how much it buys us.
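The success/failure lines in the log above all come down to one simple rule: the re-encoded candidate only wins if it is strictly smaller than the original. A minimal sketch of that check (file names and sizes are illustrative, not the actual script's):

```shell
#!/bin/sh
# Stand-ins for the original image and the re-encoding attempt
head -c 1000 /dev/zero > original.png
head -c 400  /dev/zero > candidate.jpg

old_size=$(wc -c < original.png)
new_size=$(wc -c < candidate.jpg)

if [ "$new_size" -lt "$old_size" ]; then
  # compute the savings in percent, as seen in the log
  pct=$(( (old_size - new_size) * 100 / old_size ))
  echo "Success: Old: $old_size, New: $new_size (-$pct%). We'll use the new picture."
else
  echo "Failure: Old: $old_size, New: $new_size. We better keep the original."
fi
```

With the sizes above, this prints the "Success" branch with a 60% saving. Note that a bigger "re-encoded" file is not a bug: PNG level 9 can easily inflate an already well-compressed image, which is why the comparison is done after the fact.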
    The hard part is patching the XML files with the new image names. They don't contain any newlines, so basic Unix scripting tools may hiccup on them; the script therefore uses a more conservative approach to patching, which works reliably.


Before I give you the script, here's the obvious disclaimer: use it at your own risk. Always check the shrunk presentation for any errors the script may have introduced. It only works 9 out of 10 times (sometimes there's some funkiness in how OpenOffice uses images that I still don't understand...), so verify that it didn't damage your file.

The script works on Solaris (of course), but it should run just as well on Linux or any other Unix. It relies on ImageMagick to do the image heavy lifting, so make sure you have identify(1) and convert(1) in your path.

My 22 MB Systems Overview presentation was successfully shrunk to 13 MB, so I'm happy to report that after so many years, this little script is still very useful. I hope it helps you too. Let me know how you use it and what shrink ratios you've experienced!

Thursday Aug 23, 2007

Cool Apple-Like Photo Animations With POV-Ray, ImageMagick and Solaris

A GIF Animation Showing Popular Sun Products

One of the features I like most about Apple's iPhoto application is the transition effect in slideshows where the whole screen becomes a cube that rotates into the next photograph. The same effect is also used when switching users, etc.

Recently we took a team photograph for an internal web page. I wanted that effect, and since I love the open source raytracer POV-Ray, I wrote a script that renders the same animation effect and creates an animated GIF using ImageMagick. You can see an example output to the right, featuring photos of some popular Sun products. BTW, check out photos.sun.com for free, high-quality Sun product photography.

To create your own photo cubes, you just need POV-Ray and ImageMagick in your path, plus the photocube.sh script. All of them are open source, so they run on Solaris but also on Linux, NetBSD or any other operating system that can run open source software. I'd love to try this script out on a Niagara 2 system with its 8 cores, 16 pipelines, 64 threads and 8 FPUs. Hmmm, rendering all frames in parallel :).
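The "all frames in parallel" idea maps nicely onto plain shell job control: start one background job per frame, then wait for all of them. A minimal sketch, where render_frame is a stand-in for the actual povray invocation (on a real box you'd cap the number of concurrent jobs to the available hardware threads):

```shell
#!/bin/sh
# Stand-in for the real povray call that renders one animation frame
render_frame() {
  echo "frame $1 done"
}

# Launch all frames as background jobs...
for frame in 0 1 2 3; do
  render_frame "$frame" &
done

# ...and wait until every one of them has finished
wait
echo "all frames rendered"
```

On a 64-thread machine, this turns a serial render of N frames into roughly N/64 batches of wall-clock time, since each frame is independent of the others.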

There are already precompiled distributions of POV-Ray and ImageMagick on Blastwave that you can easily install on your Solaris machine if you don't have them already.

Just call the script with 6 URLs or pathnames. It will automatically read in the images, render the animation frames and combine them all into an animated GIF:

-bash-3.00$ ../photocube.sh *.jpg
Converting images to be quadratic...
Fetching and processing 721_DRIVE-open.1024x768.jpg
Fetching and processing 773_FrtAbv-78L-PWR-FAN.1024x768.jpg
Fetching and processing 915_Lside-plexiOFF.1024x768.jpg
Fetching and processing IMG_4551.1024x768.jpg
Fetching and processing blackbox_wind_turbine.1024x768.jpg
Fetching and processing ultra_cool_combo.1024x768.jpg
Rendering animation frames...
Creating animated gif file...
-bash-3.00$ ls
721_DRIVE-open.1024x768.jpg          blackbox_wind_turbine.1024x768.jpg
773_FrtAbv-78L-PWR-FAN.1024x768.jpg  photocube.gif
915_Lside-plexiOFF.1024x768.jpg      ultra_cool_combo.1024x768.jpg
IMG_4551.1024x768.jpg
-bash-3.00$

The script uses ImageMagick to make the pictures quadratic and to limit their size to 1200x1200 pixels if necessary. Since the -extent switch is fairly new, you'll need a recent distribution of ImageMagick; the one on the Solaris Companion CD is sadly not new enough. The POV-Ray source (embedded in the script) uses an S-curve to turn the cube, which produces a natural, smooth acceleration/deceleration effect. It would have been more efficient to let POV-Ray output PNG files rather than TGA, but for some reason some of the PNG files POV-Ray produces were not compatible with ImageMagick.
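The S-curve is what keeps the rotation from looking robotic: the cube starts slowly, speeds up mid-turn, and eases out again. The exact curve the script uses isn't shown here, but the classic smoothstep polynomial 3t² - 2t³ produces that slow-fast-slow feel; here's a sketch computing an eased rotation angle per frame:

```shell
#!/bin/sh
# Map each frame number 0..frames to a cube angle 0..90 degrees using
# the smoothstep easing curve 3t^2 - 2t^3 (an assumption; the script's
# actual curve may differ).
frames=10
i=0
while [ "$i" -le "$frames" ]; do
  awk -v t="$i" -v n="$frames" 'BEGIN {
    x = t / n
    printf "frame %2d: %6.2f degrees\n", t, 90 * (3*x*x - 2*x*x*x)
  }'
  i=$((i + 1))
done
```

The derivative of 3t² - 2t³ is zero at both t=0 and t=1, which is exactly why the cube appears to accelerate gently out of rest and decelerate smoothly into its final position.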

Feel free to modify this script to your needs. You may want to experiment with other ways of animating the cube or with other image transition effects, or use ffmpeg to create real video files instead of animated GIFs. Be careful when cranking up the number of frames, though: ImageMagick reads all frames into memory before assembling the animated GIF, so you may end up using a lot of memory. If someone has a more elegant, scriptable animated GIF creator, please leave me a comment.

I hope you enjoy this little exercise in raytracing and animation. Let me know if you have suggestions or other ideas to improve this script!

Thursday Aug 16, 2007

ZFS Snapshot Replication Script

This script recursively copies ZFS filesystems and their snapshots to another pool. It can also generate a new snapshot for you while copying, and it handles unmounts, remounts and SMF services that need to be brought down temporarily during file system migration. I hope it's useful to you as well!