Thursday Dec 03, 2009

OSDevCon 2009 Paper: Implementing a simple ZFS Auto-Scrub Service with SMF, RBAC, IPS and Visual Panels Integration - Lessons learned

A while ago, I wrote a little tool that helps you keep your ZFS pools clean by automatically running regular scrubs, similar to what the OpenSolaris auto-snapshot service does.

The lessons I learned during development of this service went into an OSDevCon 2009 paper that was presented in September 2009 in Dresden. It is a nice summary of things to keep in mind when developing SMF services of your own and it includes a tutorial on writing a GUI based on the OpenSolaris Visual Panels project.

Check out the Whitepaper here, the slides here, the SMF service here and if you want to take a peek at the Service's Visual Panels Java code, you'll find it here.

Sunday Oct 25, 2009

A Small and Energy-Efficient OpenSolaris Home Server

In an earlier entry, I outlined my most important requirements for an optimal OpenSolaris Home Server. It should:

  1. Run OpenSolaris in order to fully leverage ZFS,
  2. Support ECC memory, so data is protected at all times,
  3. Be power-efficient, to help the environment and control costs,
  4. Use a moderate amount of space and be quiet, for some extra WAF points.

So I went shopping and did some research on possible components. Here's what I came up with:

Choosing a Platform: AMD or Intel?

Disclosure: My wife works for AMD, so I may be slightly biased. But I think the following points are still very valid.

Intel is currently going through a significant change in architecture: The older Core 2 microarchitecture was based on the Front Side Bus (FSB), where the CPU connects to the Northbridge, which contains the memory controller and connects to the memory, and also to the Southbridge, which handles I/O.

Now they are switching to the new Nehalem microarchitecture, which has a memory controller built into the CPU and a scalable I/O bus called QuickPath Interconnect that connects CPUs with other CPUs and/or I/O.

Unfortunately, none of these architectures seem to support ECC memory at consumer-level pricing. The cheapest Intel-based ECC motherboard I could find still cost more than twice as much as an AMD-based one. Even though the new Intel Core i7 series is based on Nehalem and thus could easily support ECC memory in theory, Intel somehow chose not to expose this feature. In addition, Core i7 CPUs are relatively new and no power-efficient versions are available yet.

The Intel Atom processor series may be interesting for a home server from a pure power-saving perspective, but again, Atom motherboards don't support ECC and once your workload becomes a little more demanding (like transcoding or some heavier compiling), you'll miss the performance of a more powerful CPU.

AMD on the other hand has a number of attractive points for the home server builder:

  • AMD consumer CPUs use the same microarchitecture as their professional CPUs (currently, it's the K10 design). They only vary in number of cores, cache size, number of HT channels, TDP and frequency, which are all results of the manufacturing process. All other microarchitecture features are the same. When using an AMD consumer CPU, you essentially get a "smaller brother" of their high-end CPUs.
  • This means you'll also get a built-in memory-controller that supports ECC.
  • This also means fewer chips are needed to build a system (no Northbridge required) and thus lower power consumption.
  • AMD has been using the HyperTransport interconnect for quite a while now. It is a fast, scalable interconnect technology that has been on the market long enough for chipsets to be widely available, proven and low-cost.

So it is no surprise that even low-cost AMD motherboards at EUR 60 or below are perfectly capable of supporting ECC memory, which gives you an important server feature at an economical price.

My platform conclusion: Due to ECC support, low power consumption and good HyperTransport performance at low cost, AMD is an excellent platform for building a home server.

AMD Athlon II X2 240e: A Great Home Server CPU

An AMD Athlon II X2 240e CPU

While I was shopping around for AMD Athlon CPUs, and just before I was about to settle on an AMD Athlon II X2 variant, AMD offered me one of their brand new AMD Athlon II X2 240e CPUs for testing, provided that I blogged about it. Thank you, AMD!

Introduced on October 20th, this CPU is part of the newest energy-efficient range of consumer CPUs from AMD. It has 2 cores (hence X2), a snazzy 2.8 GHz clock and a 2 MB L2 cache. What's most important: The TDP for this CPU is only 45W, meaning that even under the highest stress, this CPU will not exceed 45W of power consumption, memory controller included. As you've guessed already, the "e" in the model number stands for "efficient".

There's an important trade-off to consider for home server CPUs: For instance, the AMD Phenom II series would have been more powerful because it has an additional L3 cache, but its TDP starts at 65W. While big caches (both with AMD and Intel) are good for compute-intensive operations and games, they can't help much in a home server context: Home servers spend most of their non-idle time transferring data from A to B (files, videos, music), and a cache doesn't help much there, because it's just another stop between I/O and CPU to pass by. Transferred data hardly gets re-used.

Instead, for home servers, sacrificing the L3 cache for lower power consumption makes a lot of sense: You pay less for the CPU and you pay less for your power bill without sacrificing too much (if any) server relevant performance.

My CPU conclusion: For home servers, AMD Athlon II "e" series are perfect, because they save power and money and do the job very well. For games you might choose a more powerful Phenom II processor, which delivers better compute power at a slightly higher power bill.

Finding the Right Motherboard

After nailing the platform and CPU question, I needed a motherboard. This can be a confusing process: For each CPU there are different chipsets, then there are different vendors offering motherboards based on these chipsets, and then they offer different variants with different features. What should a good home server motherboard offer?

  • OpenSolaris support: Most motherboards "just work" with OpenSolaris, especially if they've been available for some time and/or use some well-known chipsets. To remove any doubts, consult the OpenSolaris Hardware Compatibility List.
  • ECC: Yes, AMD's ECC support is in the CPU, but just in case, we want to make sure that the motherboard exposes this feature to the user.
  • Integrated graphics: We only need a graphics card for installation, BIOS-settings/updates or debugging. The graphics shouldn't consume much power and doesn't need to deliver a lot of performance. Therefore, integrated graphics is just fine for a home server, and it saves some precious slot space, too.
  • Enough SATA ports: You need two ports for a mirrored root pool, another two for a mirrored data pool and then some more for hot-spare or if you want to spread your data pool across 4 or more disks. Many motherboards come with 6 SATA ports, which is ok.
  • Stability: If you have the choice between different variants, try to go for the "business" or "quality" version. It will likely have slightly better components and be better tuned towards 24/7 operation than the "performance", "gaming" or "overclocker" type boards which try to squeeze out more performance at the expense of durability.

Here's a very useful email thread on the OpenSolaris zfs-discuss mailing list about CPU and motherboard options, pros and cons and user experiences. In this discussion, F.Wessels recommended the M3A78 series from Asus, so I went for the M3A78-CM motherboard, their "business class" variant. It's priced at around 60 Euros and has 6 SATA and 12(!) USB ports.

My motherboard conclusion: The Asus M3A78-CM motherboard has everything I need for a home server at a very low cost, and it's proven to run OpenSolaris just fine.

The Case: Antec NSK-1380

I won't go into much detail about the case. My goal was to find one that can support at least 4 disks while being as compact as possible. The Antec NSK-1380 was the smallest case I could find that supports 4 disks. It comes with a built-in power supply, an extra fan and some features to help keep noise down, and it looked ok for a PC case.

Miscellaneous Tips&Tricks

While putting everything together, I ran into some smaller issues here and there. Here's what I came up with to solve them:

  • CPU cooler: Since my CPU was a gift from AMD, it came without a cooler. I chose the Zalman CNPS 7000C AL-CU because it was the cheapest cooler that was less than 6cm high, which is a limit imposed by the case. Unfortunately, it collides with one of the 4 harddisk slots by a mere couple of millimeters, so I need to figure out how to best cut or bend the blades to make room for the disk without hurting the cooler too much. I'm not very familiar with PC CPU coolers, but I suspect that with 45W TDP one could even get away with a passive cooler. The Zalman came with a throttle circuit which I set to the minimum speed. This seems to be more than enough and it makes the system really silent, but I need some more thorough testing to confirm. Drop me a comment if you are familiar with passive cooling of 45W TDP CPUs.
  • Boot disks: We need something to boot our home server from, so boot disks are unavoidable. I say "disks", because they should always be mirrored. But other than providing a way of booting, they tend to get in the way, especially when space is a precious resource in a small case. Some people therefore boot from CF cards or other small flash media. This can be a nice solution, especially combined with an IDE-to-CF adapter, but consumer-level flash media is either very slow (a typical OS install can take many hours!) or very expensive. While looking for alternatives, I found a nice solution: The Scythe Slot Rafter fits into an unused PCI slot (taking up the width of two) and provides space for mounting four 2.5" disks at just EUR 5. These disks are cheap, good enough and I had an unused one lying around anyway, so that was a perfect solution for me.
  • Extra NIC: The Asus M3A78-CM comes with a Realtek NIC and some people complained about driver issues with OpenSolaris. So I followed the advice on the aforementioned Email thread and bought an Intel NIC which is well supported, just in case.
  • USB boot: I couldn't get the M3A78-CM to boot from USB at all. I tried a USB stick and a USB disk and different boot settings in the BIOS to no avail. I gave up and built in a DVD-ROM drive from my old server just to install OpenSolaris, then built it out again. Jan Brosowski has the same motherboard and he found out that you need to update to the latest BIOS revision and use the USB port that's right below the Ethernet port for USB boot to work. YMMV :).

The Result

And now for the most important part: How much power does the system consume? I did some testing with one boot disk and 4GB of ECC RAM and measured about 45W idle. While stressing CPU cores, RAM and the disk with multiple instances of sysbench, I could not get the system to consume more than 80W. All in all, I'm very pleased with the numbers, which are about half of what my old system used to consume. I didn't do any detailed performance tests yet, but I can say that the system feels very responsive and compile runs just rush along the screen. CPU temperature won't go beyond the low 50Cs on a hot day, despite using the lowest fan speed, so cooling seems to work well, too.

I just started full 24/7 operation of my new home server this weekend, so I hope I'll have some more long-term experience about performance and stability in a few months. Meanwhile, I'm in the middle of configuring the system, installing some services and implementing a new way of managing my home server. But that's probably the topic of another blog post...

Do you agree with the home server conclusions I reached in this post? Or would you suggest alternatives? Do you have experiences to share with the mentioned components? Or do you have suggestions and tips on how to get the most out of them? Let me know by posting a comment here!

Many thanks go to Michael Schmid of AMD for sending me the AMD Athlon II X2 240e CPU.

Saturday Sep 19, 2009

New OpenSolaris ZFS Home Server: Requirements

Old OpenSolaris Home Server with blinkenlights USB drives

A few months ago, I decided it was time for a new home server. The old one (see picture) is now more than 3 years old (the hardware is 2 years older), so it was time to plan ahead for the inevitable hardware failure. Curiously enough, my old server started to refuse working with some of my external USB disks only a few weeks ago, which confirmed my need for a new system. This is the beginning of a series of blog articles around building a new OpenSolaris home server.

Home Server Goals

Let's go over some goals for a home server to help us decide on the hardware. IMHO, a good home server should:

  1. Run OpenSolaris. This means I don't want an appliance because this is too limiting and I'd end up hacking it to make it run something it doesn't do by itself anyway. It should therefore use a real OS.
    It also means I don't want to use Linux, because quite frankly the whole Linux landscape is too unstable, confused and de-focused (don't get me wrong: It's nice for experimentation and as a hobbyist-OS, but I want something more serious to guard my data).
    Windows is out of the question because it delivers too little for too high a price.
    I like BSD (and used to run NetBSD on my Amiga 4000 back then in the mid nineties), but it seems to be more oriented to some (albeit interesting) niches for my taste.
    Right now I prefer OpenSolaris because it's rock-solid, clean, well-documented, well-designed and it has lots of advanced features that other OSes only dream of. Yes, I'd still write and do the same if I weren't a Sun employee.
  2. Leverage ZFS. This should be a no-brainer, but I just wanted to point out that any system that is serious about its data should absolutely run ZFS because of the end-to-end-integrity. Period. And then there are many useful features such as compression, send/receive, snapshots, ease of administration, no fscks and much more. Oh, and I'm looking forward to leveraging encryption and de-duplication at home in the near future, too!
  3. Use ECC Memory: What's the use of having end-to-end data integrity with ZFS if your data is corrupted before ZFS can create its checksum? That's why you need ECC memory. Simply put: Use ECC memory and kiss those unexpected, inexplicable system crashes and broken data surprises goodbye.
  4. Be Power Efficient: Think 1.5 Euros of electricity bill per Watt per Year for a system running 24/7. The difference between your typical gaming PC and a power-efficient home server can easily be 50W or more when idle, so you're looking at an extra 75 Euros and more of free cash if you just pick your components more carefully. Notice that I'm not saying "Low-Power". There are a lot of compromises when trying to reach absolute low-powerness. Like many optimization problems, squeezing the last few Watts out of your system means investing a lot of money and effort while sacrificing important features. So I want this system to be power-efficient, but without too many sacrifices.
  5. Use a Moderate Amount of Space: As long as my home server sits in the basement, form doesn't matter. But I may move into a new apartment where networking to the basement is not an option. Then the server needs to be living-room capable and have a decent WAF. Which also brings us to:
  6. Be quiet: A power-efficient server needs less cooling, which helps with being quiet. Again, we don't want to stretch the limits of quietness at all costs, but we want to make sure we don't make any obvious mistakes here that sacrifice the living-room capabilities of the system.
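The power-cost rule of thumb in point 4 is easy to sanity-check in shell. This is just a back-of-the-envelope sketch; the electricity price of 0.17 EUR/kWh is my assumption, not a figure from this post:

```shell
# Sanity check for the "1.5 Euros per Watt per year" rule of thumb:
# a 50 W difference, running 24/7, at an assumed ~0.17 EUR/kWh.
watts=50
hours_per_year=8760                       # 24 * 365
kwh=$(( watts * hours_per_year / 1000 ))  # energy per year in kWh
euros=$(( kwh * 17 / 100 ))               # cost at 0.17 EUR/kWh
echo "$kwh kWh/year, ca. $euros EUR/year"
```

That comes out to 438 kWh and roughly 75 Euros per year, which matches the 1.5 EUR/W/year figure nicely.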

What's Next

In the next blog entry, we'll discuss a few processor and platform considerations and reveal a cool, yet powerful option that presented itself to me. Meanwhile, feel free to check out other home server resources, such as Simon Breden's blog, Matthias Pfuetzner's blog, Jan Brosowski's Blog (German) or one of the many home server discussions on the zfs-discuss mailing list.

What are your requirements for a good home server? What do you currently use at home to fulfill your home server needs? What would you add to the above list of home server requirements? Feel free to add a comment below!

Thursday Sep 17, 2009

New OpenSolaris ZFS Auto-Scrub Service Helps You Keep Proper Pool Hygiene

A harddisk that is being scrubbed

One of the most important features of ZFS is the ability to detect data corruption through the use of end-to-end checksums. In redundant ZFS pools (pools that are either mirrored or use a variant of RAID-Z), this can be used to fix broken data blocks by using the redundancy of the pool to reconstruct the data. This is often called self-healing.

This mechanism works whenever ZFS accesses any data, because it will always verify the checksum after reading a block of data. Unfortunately, this does not work if you don't regularly look at your data: Bit rot happens and with every broken block that is not checked (and therefore not corrected), the probability increases that even the redundant copy will be affected by bit rot too, resulting in data corruption.
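The verify-on-read idea can be sketched with ordinary shell tools. This is purely illustrative: sha256sum here stands in for the per-block checksum that ZFS stores in the block pointer (ZFS actually uses fletcher checksums by default, with SHA-256 as an option):

```shell
# Illustrative only: the read-path verification idea, with sha256sum
# standing in for the checksum ZFS stores at write time and re-verifies
# on every read.
data="my precious block of data"
stored=$(printf '%s' "$data" | sha256sum | cut -d' ' -f1)   # at write time

reread="$data"                        # what the disk returned at read time
actual=$(printf '%s' "$reread" | sha256sum | cut -d' ' -f1)
if [ "$actual" = "$stored" ]; then
  echo "block OK"
else
  echo "checksum mismatch: reconstruct block from the redundant copy"
fi
```

In a redundant pool, the "mismatch" branch is where self-healing kicks in: ZFS fetches the redundant copy, verifies it, returns it to the application and rewrites the broken block.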

Therefore, zpool(1M) provides the useful scrub sub-command which will systematically go through each data block on the pool and verify its checksum. On redundant pools, it will automatically fix any broken blocks and make sure your data is healthy and clean.

It should now be clear that every system should regularly scrub its pools to take full advantage of the ZFS self-healing feature. But you know how it is: You set up your server, those little things get overlooked, and that cron(1M) job you wanted to set up for regular pool scrubbing falls off your radar.
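For reference, the kind of crontab(1M) entry that tends to fall off the radar might look like this (the pool name "tank" is just a placeholder):

```
# minute  hour  day-of-month  month  weekday  command
0         3     1             *      *        /usr/sbin/zpool scrub tank
```

This scrubs one pool on the 1st of every month at 03:00, but it's fixed to a single pool, has no error handling and no memory of past scrubs, which is exactly what the service below improves upon.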

Introducing the ZFS Auto-Scrub SMF Service

Here's a service that is easy to install and configure that will make sure all of your pools will be scrubbed at least once a month. Advanced users can set up individualized schedules per pool with different scrubbing periods. It is implemented as an SMF service which means it can be easily managed using svcadm(1M) and customized using svccfg(1M).

The service borrows heavily from Tim Foster's ZFS Auto-Snapshot Service. This is not just coding laziness, it also helps minimize bugs in common tasks (such as setting up periodic cron jobs) and provides better consistency across multiple similar services. Plus: Why invent the wheel twice?

Requirements

The ZFS Auto-Scrub service assumes it is running on OpenSolaris. It should run on any recent distribution of OpenSolaris without problems.

More specifically, it uses the -d switch of the GNU variant of date(1) to parse human-readable date values. Make sure that /usr/gnu/bin/date is available (which is the default in OpenSolaris).

Right now, this service does not work on Solaris 10 out of the box (unless you install GNU date in /usr/gnu/bin). A future version of this script will work around this issue to make it easily usable on Solaris 10 systems as well.
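A minimal sketch of what that GNU date dependency buys you, assuming GNU date is on the PATH (on OpenSolaris it lives in /usr/gnu/bin; the dates below are fixed UTC values so the arithmetic is exact):

```shell
# GNU date's -d switch parses human-readable date strings; the stock
# Solaris 10 date(1) does not. This is the kind of interval arithmetic
# a scrub scheduler needs: how long ago did the last scrub happen?
DATE=date                                  # /usr/gnu/bin/date on OpenSolaris
last=$($DATE -d "2009-09-01 00:00 UTC" +%s)
now=$($DATE -d "2009-10-01 00:00 UTC" +%s)
days=$(( (now - last) / 86400 ))
echo "$days days since last scrub"         # 30 days
```

If the computed interval exceeds the configured scrubbing period, it's time to kick off the next scrub.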

Download and Installation

You can download Version 0.5b of the ZFS Auto-Scrub Service here. The included README file explains everything you need to know to make it work:

After unpacking the archive, start the install script as a privileged user:

pfexec ./install.sh

The script will copy three SMF method scripts into /lib/svc/method, import three SMF manifests and start a service that creates a new Solaris role for managing the service's privileges while it is running. It also installs the OpenSolaris Visual Panels package and adds a simple GUI to manage this service.

ZFS Auto-Scrub GUI

After installation, you need to activate the service. This can be done easily with:

svcadm enable auto-scrub:monthly

or by running the GUI with:

vp zfs-auto-scrub

This will activate a pre-defined instance of the service that makes sure each of your pools is scrubbed at least once a month.

This is all you need to do to make sure all your pools are regularly scrubbed.

If your pools haven't been scrubbed before or if the time of their last scrub is unknown, the script will proceed and start scrubbing. Keep in mind that scrubbing consumes a significant amount of system resources, so if you feel that a currently running scrub slows down your system too much, you can interrupt it by saying:

pfexec zpool scrub -s <pool name>

In that case, don't worry: You can always start a manual scrub at a more suitable time, or wait until the service kicks in by itself during the next scheduled scrubbing period.

Should you want to get rid of this service, use:

pfexec ./install.sh -d

The script will then disable any instances of the service, remove the manifests from the SMF repository, delete the scripts from /lib/svc/method, remove the special role and the authorizations the service created and finally remove the GUI. Notice that it will not remove the OpenSolaris Visual Panels package in case you want to use it for other purposes. Should you want to get rid of this as well, you can do so by saying:

pkg uninstall OSOLvpanels

Advanced Use

You can create your own instances of this service for individual pools at specified intervals. Here's an example:

  constant@fridolin:~$ svccfg
  svc:> select auto-scrub
  svc:/system/filesystem/zfs/auto-scrub> add mypool-weekly
  svc:/system/filesystem/zfs/auto-scrub> select mypool-weekly
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> addpg zfs application
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/pool-name=mypool
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/interval=days 
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/period=7
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/offset=0
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> setprop zfs/verbose=false
  svc:/system/filesystem/zfs/auto-scrub:mypool-weekly> end
  constant@fridolin:~$ svcadm enable auto-scrub:mypool-weekly

This example will create and activate a service instance that makes sure the pool "mypool" is scrubbed once a week.

Check out the zfs-auto-scrub.xml file to learn more about how these properties work.

Implementation Details

Here are some interesting aspects of this service that I came across while writing it:

  • The service comes with its own Solaris role zfsscrub under which the script runs. The role has just the authorizations and profiles necessary to carry out its job, following the Solaris Role-Based Access Control philosophy. It comes with its own SMF service that takes care of creating the role if necessary, then disables itself. This makes a future deployment of this service with pkg(1) easier, which does not allow any scripts to be started during installation, but does allow activation of newly installed SMF services.
  • While zpool(1M) status can show you the last time a pool has been scrubbed, this information is not stored persistently. Every time you reboot or export/import the pool, ZFS loses track of when the last scrub of this pool occurred. This has been filed as CR 6878281. Until that has been resolved, we need to take care of remembering the time of the last scrub ourselves. This is done by introducing another SMF service that periodically checks the scrub status, then records the completion date/time of the scrub in a custom ZFS property called org.opensolaris.auto-scrub:lastscrub in the pool's root filesystem when finished. We call this service whenever a scrub is started, and it deactivates itself once its job is done.
  • As mentioned above, the GUI is based on the OpenSolaris Visual Panels project. Many thanks to the people on its discussion list for helping me get going. More about creating a Visual Panels GUI in a future blog entry.
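The status-checking part of that helper service essentially boils down to scraping the scrub line out of zpool status output. Here's a sketch of that idea, using a canned sample line in place of a real zpool status call (the exact wording of the line is from memory and may differ between builds):

```shell
# Parse the completion time out of a zpool status scrub line; this is the
# value one would store in the org.opensolaris.auto-scrub:lastscrub
# property. The sample stands in for: zpool status tank | grep scrub
line=' scrub: scrub completed after 0h12m with 0 errors on Thu Sep 17 14:33:20 2009'
lastscrub=$(echo "$line" | sed -n 's/.* on \(.*\)$/\1/p')
echo "lastscrub: $lastscrub"
# GNU date turns the date string into epoch seconds for easy comparisons:
epoch=$(date -d "$lastscrub" +%s)
```

Storing the raw date string in the pool property keeps it human-readable, while the epoch form makes the "is it due yet?" comparison trivial.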

Lessons learned

It's funny how a very simple task like "Write an SMF service that takes care of regular zpool scrubbing" can develop into a moderately complex thing. It grew into three different services instead of one, each with their own scripts and SMF manifests. It required an extra RBAC role to make it more secure. I ran into some zpool(1M) limitations which I now feel are worthy of RFEs and working around them made the whole thing slightly more complex. Add an install and de-install script and some minor quirks like using GNU date(1) instead of the regular one to have a reliable parser for human-readable date strings, not to mention a GUI and you cover quite a lot of ground even with a service as seemingly simple as this.

But this is what made this project interesting to me: I learned a lot about RBAC and SMF (of course), some new scripting hacks from the existing ZFS Auto-Snapshot service, found a few minor bugs (in the ZFS Auto-Snapshot service) and RFEs, programmed some Java including the use of the NetBeans GUI builder and had some fun with scripting, finding solutions and making sure stuff is more or less cleanly implemented.

I'd like to encourage everyone to write their own SMF services for whatever tools they install or write for themselves. It helps you think your stuff through, make it easy to install and manage, and you get a better feel of how Solaris and its subsystems work. And you can have some fun too. The easiest way to get started is by looking at what others have done. You'll find a lot of SMF scripts in /lib/svc/method and you can extract the manifests of already installed services using svccfg export. Find an SMF service that is similar to the one you want to implement, check out how it works and start adapting it to your needs until your own service is alive and kicking.

If you happen to be in Dresden for OSDevCon 2009, check out my session on "Implementing a simple SMF Service: Lessons learned" where I'll share more of the details behind implementing this service including the Visual Panels part.

Edit (Sep. 21st) Changed the link to CR 6878281 to the externally visible OpenSolaris bug database version, added a link to the session details on OSDevCon.

Edit (Jun. 27th, 2011) As the Mediacast service was decommissioned, I have re-hosted the archive in my new blog and updated the download link. Since vpanels has changed a lot lately, the vpanels integration doesn't work any more, but the SMF service still does.

Monday Jul 06, 2009

How to Fix OpenSolaris Keyboard Irregularities with VirtualBox

VirtualBox is great: It allows you to install OS A on OS B for impressively large sets of A and B OSes and their permutations. Almost everything works smoothly and seamlessly between host and guest: Cut&Paste, file sharing, networking, USB pass-through; even seamless windows are supported.

But there's one little glitch that is still a little annoying, but apparently not annoying enough for someone else to have blogged about this before: Keyboard remapping on Mac OS X hosts.

The Problem

Simple problem: Macs are different from PCs (phew), and they have slightly different keyboard mappings (oops). Most notably, on my German keyboard, the "<" key at the bottom left on the Mac will yield "^" on OpenSolaris and vice versa. Same thing goes for "@", which is Right-Alt-L on the Mac, but Right-Alt-Q on PCs. Similar difficulties are encountered if you try to type a "|" pipe symbol or square/curly brackets ("[]" and "{}" respectively).

Pressing the Right Keys

Usually this is no big deal. Close your eyes and blindly type what you would type on a PC, and that'll give you a good hint at where the right keystrokes are. That works because VirtualBox actually maps the physical locations of the keys between host and guest, not what's painted on them. So, with a little practice, you should be fine. But what happens if you can't quite remember what that PC keyboard looked like?

Last Friday I had an hour or so left and the playfulness of the problem got the better of me, so I decided to see if this can be fixed the Unix way. It's actually quite easy.

Searching for a cure

There are some helpful hints on the net, most notably Petr Hruska's entry on "Switching Keyboard Layout in Solaris", but it only deals with internationalization issues. What if you have the keyboard nationalities right, but individual keys are still different as in the Mac/PC case? Here's a step-by-step guide to help you with any keyboard remapping problem, plus a bonus table for OpenSolaris on Macbook users to get you started:

Xmodmap to the rescue

  1. We're going to use xmodmap(1) to remap the keys on our keyboard. Check out the man-page to familiarize yourself with how it works.
  2. See the keystrokes as OpenSolaris sees them: Use xev(6) to find out what keycodes belong to the keys you want to correct.
  3. Check out what OpenSolaris thinks about your problematic keys, either by testing them in a terminal or by checking your version of the standard Sun USB keyboard layouts.
  4. Before you start modifying the current keyboard mapping, get the currently active one by saying something like:
    xmakemap > ~/.xmodmaprc.current
    Caution: There seems to be a bug in xmakemap that corrupts some of the entries. So, please use this only for reference, but do not feed this file back into xmodmap (see later) or you'll likely make your keyboard unusable (until this bug is resolved).
  5. Start editing your own remapping script for xmodmap:
    vi ~/.xmodmaprc
  6. For each key you want to remap, copy its keycode entry from the xmakemap output into your own remapping table and modify to taste. Be careful, some entries from xmakemap are broken, but you should be able to figure those out. Here's my current .xmodmaprc file as a reference:
    !
    ! Set up keys for a MacBook Pro running OpenSolaris on VirtualBox
    !
    !       Key   Unshifted       Shifted         AltGraph        AltGraph-Shifted
    !       ---   --------------- --------------- --------------- ----------------
    
    keycode  49 = less            greater
    keycode  94 = asciicircum     degree          asciicircum     degree
    keycode  14 = 5               percent         bracketleft
    keycode  15 = 6               ampersand       bracketright
    keycode  16 = 7               slash           bar             backslash
    keycode  17 = 8               parenleft       braceleft
    keycode  18 = 9               parenright      braceright
    keycode  24 = q               Q               q               Q
    keycode  46 = l               L               at
    keycode  57 = n               N               asciitilde
    
    This works well on my MacBook Pro; your mileage may vary.
  7. You can activate your remapping by saying something like:
    xmodmap ~/.xmodmaprc
  8. In case something goes wrong and you render your keyboard useless, you can restart your X server by pressing Ctrl-Alt-Backspace twice.
  9. If you're happy with your remapping, you can automatically activate it on every login by using the System->Preferences->Sessions panel and adding an entry for the above xmodmap command there.
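For step 2, a tiny helper can save you some squinting at xev's verbose output. Here's a sketch using a canned sample line in place of live xev output (real output has the same "keycode NN (keysym …, name)" shape):

```shell
# Extract keycode and keysym name from an xev(6) event line. The sample
# below stands in for:  xev | grep keycode   (then press the key in question)
line='    state 0x0, keycode 49 (keysym 0x3c, less), same_screen YES,'
keycode=$(echo "$line" | sed -n 's/.*keycode \([0-9]*\).*/\1/p')
keysym=$(echo "$line" | sed -n 's/.*(keysym [^,]*, \([^)]*\)).*/\1/p')
echo "keycode $keycode currently maps to '$keysym'"
```

The extracted keycode is exactly the number that goes on the left-hand side of a `keycode NN = …` line in your .xmodmaprc.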

Conclusion

I hope this little exercise in some lesser-known X-Windows commands (Hi Jörg) was useful for you; now you shouldn't need to worry too much about keyboard mapping inconsistencies any more.

If you want to learn a little more about modifying your keyboard, check out this section of the OpenSolaris docs.

The example keymap modifications above work well for me, but I'm sure I've forgotten a key or two. What other keys did you remap and why? Feel free to leave me a comment below.

Monday Jun 15, 2009

OpenSolaris meets Mac OS X in Munich

Last Wednesday, Wolfgang and I had the honor of presenting at "Mac Treff München", Munich's local Mac User Group. Since there are quite a few touching points between OpenSolaris and Mac OS X, such as ZFS, DTrace and VirtualBox, we thought it would be a good idea to contact them through our Munich OpenSolaris User Group and talk a little bit about OpenSolaris.

Breaking the Ice

We were a little bit nervous about what would happen. Do Mac people care about the innards of a different, seemingly non-GUIsh OS? Are they just fanboys, or are they open to other people's technologies? Will talking about redundancy, BFU, probes and virtualization bore them to death?

Fortunately, the 30-40 people that attended the event proved to be a very nice, open and tolerant group. They let us talk about OpenSolaris in general, including some of the nitty-gritty of the development process, before we started talking about the features that are more interesting to Mac users. We then talked about ZFS, DTrace and VirtualBox:

ZFS for Mac OS X (or not (yet)?)

Explaining the principles behind ZFS to people who are only used to dragging and dropping icons, shooting photos or video and using computers to get work done, without having to care about what happens inside, is not easy. We concentrated on getting across the basics of the tree structure, copy-on-write, checksumming and using redundancy to self-heal, while using real-world examples and metaphors to illustrate the principles. Here's the deal: If you have lots of important data (photos, recordings, videos, anyone?) and care about it (content creators...), then you need to be concerned about data availability and integrity. ZFS solves that, it's that simple. A few animations in the slides were quite helpful in explaining that, too :).

The bad news is that ZFS seems to have vanished from all of Apple's communication about the upcoming Mac OS X Snow Leopard release. That's really bad, because many developers and end-users were looking forward to taking advantage of it.

The good news is that there are still ways to take advantage of ZFS as a Mac User: Run an OpenSolaris file server for archiving your data or using it as a TimeMachine store, or even run a small OpenSolaris ZFS Server inside your Mac through VirtualBox.
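On the OpenSolaris side, the self-healing story we presented boils down to very few commands. Here's a minimal sketch; the pool name "tank" and the device names are placeholders for whatever your system reports:

```shell
# Create a mirrored pool: every block is checksummed and stored twice,
# so a bad block on one disk can be repaired from the other.
pfexec zpool create tank mirror c7t0d0 c7t1d0

# Manually trigger a scrub: read all data, verify every checksum,
# and repair anything that doesn't match from the mirror copy.
pfexec zpool scrub tank

# Watch scrub progress and see whether any errors were found/repaired.
zpool status tank
```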

DTrace: A Mac Developer/Admin's Heaven, Albeit in Jails

Next, we dove a little bit into DTrace and how it makes the OS really transparent for admins, developers and users. In addition to the dtrace(1) command, Apple created a nice GUI called "Instruments" as part of their XCode development environment that leverages the DTrace infrastructure to collect useful data about your application in realtime.

Alas, as with ZFS, there's another downer, and this time it's more subtle: While you can enjoy the power of DTrace in Mac OS X now, it's still somewhat crippled, as Adam Leventhal pointed out: Processes can escape the eyes of DTrace at will, which undermines DTrace's idea of absolute observability quite massively. Yes, there are valid reasons on both sides of the debate, but IMHO, legal things should be enforced using legal means, and software should be treated as software, meaning it is not a reliable way of enforcing any license contracts - with or without powerful tools such as DTrace.
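To give a flavor of what "observability" means in practice, here's a classic DTrace one-liner that counts system calls per process, system-wide. It should work the same way on OpenSolaris and Mac OS X (run with root privileges):

```shell
# Aggregate system calls by process name until you press Ctrl-C,
# then print the sorted counts.
pfexec dtrace -n 'syscall:::entry { @[execname] = count(); }'
```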

OpenSolaris for all: VirtualBox

Finally, a free present to the Mac OS X community: VirtualBox. I still get emails asking me to spend 80+ dollars on some virtualization software for my Mac. There are at least two choices in that price range: VMware Workstation and Parallels. Well, the good news is that you can save your 80 bucks and use VirtualBox instead.

This may not be new to you, since as a reader of my blog you've likely heard of VirtualBox before, but it's always amazing for me to see how slowly these things spread. So, after reading this article, do your Mac friends a favour and tell them they can save precious money by just downloading VirtualBox instead of spending money on other virtualization solutions for the Mac. It's really that simple.

Indeed, this was the part where the attendees took most of their notes and asked a lot of questions (with ZFS a close first in terms of discussion and questions).

Conclusion

After our presentations, a lot of users came up and asked questions about how to install OpenSolaris on their hardware and on VirtualBox. Some even asked where to buy professional services for installing an OpenSolaris ZFS fileserver in their company. The capabilities of ZFS clearly struck some chords inside the Mac OS X community, which is no wonder: If you have lots of Audio/Video/Photo data and care about quality and availability, then there's no way around ZFS.

I used this event as an excuse to try out Keynote, which worked quite well for me, especially because it helped me create some easy-to-understand animations about the mechanics of ZFS. I also liked the automatic guides a lot: they help you position elements on your slides very easily and seem to guess your layout intentions very well. I'd love the OpenOffice folks to check out Keynote's guides and see if they can come up with something similar. So, here's a Keynote version of my "OpenSolaris for Mac Users" slides as well as a PDF version (both in German) for you to check out and re-use if you like.

Update: Wolfgang's introductory slides are now available for download as well and Klaus, the organizer of the event, posted a review in the Mac Treff München Blog with some pictures, too.

Tuesday Apr 21, 2009

Video: Top 5 Cool Features of the Sun Storage 7000 Unified Storage Systems

A couple of weeks ago, Marc (our producer from the HELDENFunk Podcast) and I sat down and put together a video about the top 5 reasons why the new Sun Storage 7000 systems are so cool. We even "invited" Brendan Gregg to show us his latest trick:

For the next video, I'll try to learn more phrases by heart and look less at the prompter screen for a more natural feel. I apologize for my German accent (some people say it adds credibility :) ). Still, people seem to like the video, at least it has been viewed about 200 times already.

There's a lot of discussion around the Sun Storage 7000, most of it very positive. In Germany, we like to complain a lot, so of course we also hear a lot of constructive criticism. Most of the comments I hear fall into one of the two following categories:

  1. The Storage 7000 systems are cool, but I know ZFS/OpenSolaris can do "X" and I really want this to be in the Storage 7000 GUI as well!
    Yes, we know that there are still many features we'd like to see in the Storage 7000 and we're working on making them available. Make sure your Sun contact knows about your wishlist, so she can forward it to our engineers. Please remember that the Storage 7000 systems are meant to be easy-to-use appliances: Taking your "X" feature from ZFS/OpenSolaris and building a GUI around it is a hard thing to do, especially if you want it to work reliably and if you want it to be self-explanatory and self-serviceable. Please be patient, we're most probably working on your favourite features already.

  2. The Storage 7000 systems are cool, but I want more control. I want to change the hardware/hack them/take them apart/add more functionality/get them to do exactly what I want, etc.
    Sure, that feature is called "OpenSolaris". Please go to OpenSolaris.org, download the CD, install it on your favourite hardware and off you go!
    But, can I have the GUI, too, maybe as an SDK of some sort?
    No. The Storage 7000 systems are not "just a GUI". They are full-blown appliances which means that they're more than just the hardware and a GUI. A big part of the ease-of-use, stability, performance and predictability of these products is in the way configuration options are selected, tested and yes, limited, as well as a careful consideration of which features to implement at what time and which not. Only then comes the GUI on top, which is tailored to the overall product as a whole. In other words: You wouldn't go to BMW and ask them to give you their dashboard, radio and the lights so you can bolt them onto a Volkswagen, would you?

You see, either you build your own storage machine out of the building blocks you have, and get all the functionality and flexibility you want at the expense of some configuration effort, or you buy the car as a whole: a nice, round, sweet package, so you don't have to worry about configuration, implementation details, complexity, etc. Asking for anything in between will get you into trouble: Either you'll spend more effort than you want, or you won't get the kind of control you want.

If you understand German, there's some discussion of this topic as well as a great overview of the MySQL future plus a primer on SSDs in the latest episode of the HELDENFunk podcast.

And if you like the Sun Storage 7000 Unified Storage Systems as much as I do, here are the slides in StarOffice format, as well as in PDF format, so you can tell your colleagues and friends as well.

Wednesday Mar 25, 2009

Think Twice Before Deleting Stuff (Or Better Not at All!)


No, this is not going to be another "Remember to do snapshots" post. I'm also not going to talk about backups. Instead, let's look at some very practical aspects of deleting files.

So, why delete a file? "Trivial", you think, "so I can save space!". Sure, dear reader, but at the expense of what?

Let's stop and think for a minute. Our lives try to center around doing cool, worthwhile, meaningful, useful stuff. Deleting files isn't really cool, nor fun, it is a necessity we're forced to do. Don't you hate it when that dreaded "Your startup disk is almost full" message appears while you're in the middle of downloading new photos from your latest exciting vacation trip?

Actually, the seemingly simple act of deleting is really a challenge: "Will I need this again?", "Wouldn't it be better to archive this instead?", "Last time I was really glad I kept that email from 2 years ago, so why delete this one?". Sometimes I surprise myself thinking a long time before I really press that "ok" button or hit "Enter" after the "rm".

The reality is: Storage is cheap, so why delete stuff in the first place?

To put things in perspective, let's try an ROI analysis of deleting files. Let's say we need about 6 seconds of thinking time before we can decide whether a particular file can really be deleted without regret. Let's also assign some value to our time, say $12 per hour (I hope you're getting paid much more than that, but this is just to keep the numbers simple).

Storage is cheap, and last time I checked, a 1 TB USB hard drive cost about $100 at a major electronics retailer, with prices falling by the hour.

Now, how much space does the act of deleting a file need to free up so it justifies the effort of deciding whether to delete or keep it?

Well, our $12 per hour conveniently breaks down to $0.20 per minute, which allows us to perform 10 delete-it-or-not decisions per minute at $0.02 each. Fine. Deleting seems to be cheap, doesn't it?

Now, for that $0.02 you can buy a 1/5000th of a 1 TB hard drive. Wait a minute, 1TB/5000 still amounts to 200 MB of data per $0.02! That's more than you need to store a 10 minute video, or a full CD of music, compressed at high quality! Or 20 presentations at 10MB each! Not to mention countless emails, source code and other files!

So, unless the file you're pondering is bigger than 200MB, it's not really worth even considering to delete it. I'll call this 200MB boundary the "Destructive Utility Heuristic (DUH)".
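The back-of-the-envelope numbers above can be checked in a few lines of shell. The rates and prices are of course the article's assumptions, not measurements:

```shell
hourly_rate_cents=1200      # $12/hour, in cents
think_seconds=6             # seconds per delete-or-keep decision
disk_price_cents=10000      # $100 per 1 TB drive, in cents
disk_size_mb=1000000        # 1 TB expressed in MB

# Cost of one decision, and the amount of disk that money would buy.
cost_cents=$(( hourly_rate_cents * think_seconds / 3600 ))
breakeven_mb=$(( disk_size_mb * cost_cents / disk_price_cents ))

echo "Cost per decision: ${cost_cents} cents"
echo "Break-even file size: ${breakeven_mb} MB"
```

Running this prints 2 cents per decision and a break-even size of 200 MB, matching the DUH threshold above.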

The result is therefore: Save your time, buy more harddisk space (or upgrade your old hard drive to a bigger one before it dies) and move on. Life's too precious to waste it on deleting stuff. Create good stuff instead! Only think about deleting stuff if the file in question is bigger than 200MB.

I can hear some "Wait, but!"'s in the audience, ok, one at a time:

  • "But I can delete much faster than 6 seconds!"
    No big deal. So you can delete 1 file per second, that's still a threshold of 33MB, more than 5 songs worth or even the biggest practical business presentation or the source code to a major open source project. And harddisks are getting cheaper every day, while your time will become more and more precious as you age. Yes, if you're dead sure that file is useless junk and don't need to think about it, go ahead and delete it, but why did you save it in the first place?

  • "But I like my directories to be clean and tidy!"
    Congratulations, that's a good habit! Keeping files organized doesn't mean you need to delete stuff, though. Set up an "Archive" folder somewhere and dump everything you think you may or may not use again there. Use one archive folder for each year if you want. File search technology is pretty advanced these days, so you should be able to find your archived files quicker than the time you'd take to decide which ones you'll never want to find again. Then, you can still decide to delete your whole archive from 3 years ago because you never used it. That may even make some sense, because its total size may be above the destructive utility heuristic. But chances are you won't really care, because storage will have become even cheaper after those 3 years, so you won't save a big deal, relatively speaking.

  • "That still doesn't help me when that damn 'Your startup disk is almost full' message comes!"
    You're right. The point is: It's often hard to sift through data and decide what to keep and what not. That's why we dread deleting stuff and instead wait until that message comes. I'm only offering relief to those that felt that the act of having to delete stuff isn't really rewarding, and it isn't (at least while you're below the DUH). Go buy a bigger harddrive for your laptop, it's really the cost effective option. Use the numbers above to help you justify that towards your finance department.

  • "I'm still not convinced. I actually kinda like going through my files and delete them once in a while..."
    Sure, go ahead. Just know that you could use that time to do more productive stuff, such as checking out the Sun cloud, installing OpenSolaris or testing our new Sun OpenStorage products.

  • "Wait, aren't you supposed to write about OpenSolaris, ZFS and this stuff anyway?"
    I'm glad you mentioned that :). Actually, OpenSolaris and ZFS make it even easier for you to both not care about deleting stuff while keeping your files organized at the same time. The amazing ZFS auto snapshot SMF service will create snapshots of your data automagically every 15 minutes, so it won't matter whether you delete files or not. You can then choose to either not delete them at all and just move them to some archive, or you can delete whatever you want, without the 6 seconds of thinking (just to keep stuff tidy), knowing that you'll always be able to recover those files with Time Slider later. You could then use zfs send/receive to dump your data incrementally to a file server as a backup mechanism and the hooks are already there to automate this.
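For the curious, the snapshot-and-backup workflow sketched in the last answer looks roughly like this. The service instance name is the standard auto-snapshot one, but "tank/home", the snapshot names and "backuphost" are placeholders you'd replace with your own:

```shell
# Enable one of the ZFS auto-snapshot SMF instances (Time Slider uses these).
pfexec svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily

# Incrementally send the delta between two snapshots to a backup server.
zfs send -i tank/home@2009-03-24 tank/home@2009-03-25 | \
    ssh backuphost pfexec zfs receive backup/home
```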

See, once you think of it, there's not really a need to delete files at all any more. At least not for mere mortals like us with file sizes that are typically below the destructive utility heuristic of currently 200MB (and rising...) most of the time. Music has already reached the point where a song can be stored at studio quality with lossless compression at manageable file sizes so that kind of data won't see significant growth any more. And photos and videos will soon follow. This means we'll need to care less and less about restricting personal data storage. Instead, we now need to focus more on managing personal storage.

Now there's a completely different problem that'll keep us entertained for some time...

Friday Feb 27, 2009

Munich OpenSolaris User Group Install Fest

Yesterday we had the first Munich OpenSolaris User Group (MUCOSUG) install fest at Munich Technical University's Mathematics and Computer Science Building on the Garching Campus. Many thanks go to Martin Uhl for organizing coffee and the meeting room, and for his overall help!

The building is very cool, featuring two giant parabolic slides that go all the way from 3rd floor to the ground floor. Check out some construction pictures here.

We began the meeting with a short presentation on OpenSolaris as a home server (here are the slides, let me know if you want the source). It covers some thoughts on why you need a home server (hints: photos, multimedia clients, backups, first-hand Solaris experience), where to get some extra software, first steps in ZFS, the CIFS server and iSCSI, and some useful blogs to follow up with for more good home-server-specific content.

Most of the people had OpenSolaris installed already, either on their laptops or inside VirtualBox. So most of the conversation was centered around tips for setting up home server hardware, how to install the VirtualBox guest additions and why, or what the best ways are to integrate VirtualBox networking and exchange files between host and guest.

I learned that sharing the host interface with the VirtualBox guest has become as painless as using NAT, with the added benefit of making your guest a first-class citizen on your network, so that's what I'll try out next. Also, the cost of 32 GB USB sticks has come way down at acceptable speed rates, so I'll try one of them to host my OpenSolaris work environment and free up my local harddisk a bit.

All in all, such geek gatherings are always a nice excuse to sit together and chat about the newest in technology, find new ideas and have a beer or two afterwards, so how about organizing your own OpenSolaris Installfest in your neighbourhood now?

Update: Setting up CIFS in OpenSolaris turned out to be slightly more complicated. Please check the above slides for an updated list of commands on how to set this up. I forgot to include how to expand /etc/pam.conf and assumed this was automatic. Sorry, must be because I set this up at home a while ago...
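For reference, the rough shape of a CIFS setup on OpenSolaris is shown below; the dataset, workgroup and user names are placeholders, and the slides have the complete, tested steps:

```shell
# Enable the in-kernel CIFS server and its dependencies.
pfexec svcadm enable -r smb/server

# Join a workgroup (placeholder name).
pfexec smbadm join -w WORKGROUP

# Add this line to /etc/pam.conf so that password changes also
# generate the SMB password hash (this is the step I had forgotten):
#   other   password required   pam_smb_passwd.so.1    nowarn

# Re-set the user's password once so the SMB hash actually gets created.
pfexec passwd username

# Finally, share a dataset over CIFS.
pfexec zfs set sharesmb=on tank/media
```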

Wednesday Jan 14, 2009

How to get Audio to work on OpenSolaris on VirtualBox

My regular working environment on the go or when working from home is, of course, OpenSolaris. I've been using it on an Acer Ferrari laptop for years now and I can say I'm very happy with it, and that's not just because I work for Sun.

Lately, I tried OpenSolaris on VirtualBox on my private MacBook Pro. This configuration turned out to work better than native OpenSolaris on my company's Acer Ferrari laptop! The MBP is two years newer and has a dual-core CPU plus 4 GB of RAM, so it turned out to be the better machine to host my OpenSolaris work environment.

With one exception: Audio.

Audio isn't enabled by default in the Mac version of VirtualBox, which has already been blogged about elsewhere. The solution is simply to enable audio in the VirtualBox settings and select the Intel ICH AC97 sound chip.

Then there's a second hurdle: OpenSolaris doesn't come with an ICH AC97 audio driver, and even the new SUNWaudiohd driver doesn't support it. The solution here is to download the OSS sound drivers from 4Front Technologies. So far, so good.

But this didn't work for me: Either the sound would play for a few seconds, then hang, or the sound drivers wouldn't be recognized by GNOME/GStreamer at all, resulting in a crossed-out loudspeaker icon at the top! This is very frustrating when you want to show Brendan's excellent shouting video to an audience and have to switch out of OpenSolaris/VirtualBox back to Mac OS X just for that.

Apparently others have suffered from the same annoyance, too, but none of the solutions I found seemed to help: I installed, uninstalled and reinstalled the OSS drivers a number of times, ran the ossdevlinks script to recreate the device links, and even installed a newer, experimental version of the SUNWaudiohd driver. No luck.

Then Frank, a Sun sales person who happens to use OpenSolaris on his laptop as well (Yay! A sales rep using OpenSolaris! Kudos to Frank!), suggested uninstalling the SUNWaudiohd driver first and then installing the OSS sound driver, which had worked for him. It hadn't occurred to me that uninstalling SUNWaudiohd might be the solution, so I wanted to give it a try.

But, alas, "pfexec pkg uninstall SUNWaudiohd" didn't work for me either! Apparently there's a dependency between this package and the slim_install package bundle. Again, Google is your friend, and it turned out to be a known bug that prevented me from uninstalling SUNWaudiohd. The workaround is simply to run "pfexec pkg uninstall slim_install" first; that package is no longer needed after the installation process anyway.

So, lo and behold: slim_install gone, SUNWaudiohd gone, OSS drivers installed, logged out and back in, and audio works fine now! (Notice: no reboot required.)

Here's the sweet and short way to audio goodness on OpenSolaris on VirtualBox:

  1. Shut down your OpenSolaris VirtualBox image if it is running, so you can change its settings.
  2. Activate audio for your OpenSolaris VM in VirtualBox. Select the ICH AC97 Chip. Here's a blog entry that describes the process.
  3. Boot your OpenSolaris VirtualBox image.
  4. Uninstall the slim_install package: "pfexec pkg uninstall slim_install"
  5. Uninstall the SUNWaudiohd driver: "pfexec pkg uninstall SUNWaudiohd"
  6. Download the OSS sound driver for OpenSolaris.
  7. Install the OSS sound driver: "pfexec pkgadd -d oss-solaris-v4.1-1051-i386.pkg" (Or whatever revision you happened to download).
  8. Log out of your desktop and log back in. Sound should work now.
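If you want to double-check the result, OSS 4.x ships a couple of small utilities (assuming a default install) that make a quick sanity check easy:

```shell
# List the audio devices OSS has detected -- the AC97 device should show up.
ossinfo

# Play a short test sound on the default output device.
osstest
```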