Tuesday Jun 16, 2009

Paris in the Clouds: A CloudCamp Paris report

Last week, Eric Bezille invited me to Paris for a couple of Cloud Computing related meetings and to help out with CloudCamp Paris. Paris in the clouds, what a nice experience!

This was also a great opportunity to try out the audio recording features of my LiveScribe Pulse pen. Not only can this pen record what you write (on special dot paper), it also records what is being said while you write, creating links between the words on the page and the corresponding points in the audio recording. Very cool. You can then tap on the words in your notebook and the pen will play back the associated audio. Great for conferences, and I wish I had had this pen during my university days :). You can also export your notes, including the audio, as a Flash movie and share them on the net, which is what I'm going to do below.

Intro Session and Lightning Talks

The CloudCamp was kicked off by a representative of Institut Telecom, the location sponsor of CloudCamp Paris. Sam Johnston gave a short and sweet introduction to Clouds, providing some definitions, examples and also some contrarian views, finishing with a short video on how easy it is to set up your account in the cloud.

A series of lightning talks by the sponsors gave us some interesting news, insights and context for the conference:

  • Eric Bezille from Sun showed us what's behind Sun's cloud activities.
  • Arvid Fossen from Aserver.com talked about how they provide datacenters as a service to their clients. Wanna have your own cloud? Go buy it as a turnkey solution!
  • Matthew Hugo (Not sure if I got that name right...) from Runmyprocess.com showed some nice examples of integration between different cloud services.
  • Josh Fraser, VP of Business Development at Rightscale showed some impressive examples of how the cloud can neatly adjust to your business demand curve.
  • Peter Martin from Orange Business Services showed us some pictures of his kids, who use cloud-based services today (Facebook, anyone?), pointing out that when they grow up to be CEOs, CIOs and decision makers, they're most likely not going to operate their own datacenters. Food for thought for the sceptics who think Cloud Computing is just a temporary hype or not (yet) ready for prime time: Just wait 'til your kids grow up. It may happen sooner than that, though, given the enthusiasm of the more than 100 people in the room...
  • Finally, Owen Garrett from Zeus provided a really good reason for using a software load balancer: Take back control of your application!

Here are two pencasts with audio and notes taken during the above lightning talks. The first one covers the intro up to and including the Rightscale talk, the second one starts with the Orange talk and finishes with the Zeus talk.

The Unpanel

I've been to a couple of unconferences before, but this was my first unpanel. Dave Nielsen asked the attendees who among them considered themselves experts on Cloud Computing. A couple of hands went up and whoosh - there you have seven experts for a panel :). Then he asked the group to provide seven questions for the panel to answer, after which each of the panelists got to answer one. For each question, the group was asked whether there was potential for more discussion on that topic, so we also had a good basis for creating spontaneous sessions during the conference part. Listen to the whole unpanel session on the right.

Cloud Architecture Session

After the introductory sessions and the unpanel, it was time for the breakouts. There were four of them: Cloud Security (moderated by Luc Wijns from Sun), Cloud Architecture, Open Clouds and Cloud Business Opportunities. Sébastien Pahl from DotCloud and I moderated the Cloud Architecture session. After some introductory slides, Sébastien explained his work on creating portable cloud-based services (including leveraging Solaris Containers). (Sébastien, let me know when you have your slides online...). We then let the group share their questions, answers, thoughts and discussion points. We talked about scaling MySQL in the cloud, and whether it would be better to leave the traditional relational model behind and use a simple key/value alternative such as CouchDB. Developers asked whether they'll be able to use their IDEs with the cloud (hint: check out NetBeans...) or whether they need to throw it all away and learn everything from scratch. How much should developers care about scalability? Isn't that something the cloud should provide? What about different APIs? Does it make sense to write your own abstraction layer? Message queues were also a popular topic, and we noticed that RESTful interfaces are everywhere. I liked one attendee's final statement best: Maybe clouds are forcing us to rethink a lot of our developer concepts so we can actually sit down and start writing clean code for a change!

Here's the audio recording from the architecture session. I wrote down notes as the topics were being discussed, so you can skip to the pieces you're most interested in. The audio volume is a bit low, but still quite intelligible.

Wrapping It All Up

After the breakouts, and despite it being late in the evening, a surprisingly large number of attendees were still there to gather and listen to the summaries of the different sessions. Here's the recording, including some notes to help you navigate.

All in all, this was a great event. A big thank you to Eric and his team in Paris and the sponsors for setting this up! More than ever, it became clear to me how significant the trend towards cloud computing is and how many talented people are part of this community, driving the future of IT into the sky.

Update: Eric has now published his own summary with a lot of background information. It's a great read, so check it out!

Monday Jun 15, 2009

OpenSolaris meets Mac OS X in Munich

Last Wednesday, Wolfgang and I had the honor of presenting at "Mac Treff München", Munich's local Mac User Group. There are quite a few touching points between OpenSolaris and Mac OS X, such as ZFS, DTrace and VirtualBox, so we thought it would be a good idea to contact them through our Munich OpenSolaris User Group and talk a little bit about OpenSolaris.

Breaking the Ice

We were a little bit nervous about what would happen. Do Mac people care about the innards of a different, seemingly non-GUIsh OS? Are they just fanboys, or are they open to other people's technologies? Would talking about redundancy, BFU, probes and virtualization bore them to death?

Fortunately, the 30-40 people who attended the event proved to be a very nice, open and tolerant group. They let us talk about OpenSolaris in general, including some of the nitty-gritty details of the development process, before we started talking about the features that are more interesting to Mac users. We then talked about ZFS, DTrace and VirtualBox:

ZFS for Mac OS X (or not (yet)?)

Explaining the principles behind ZFS to people who are used to dragging and dropping icons, shooting photos or video, and using computers to get work done without having to care about what happens inside is not easy. We concentrated on getting across the basics of the tree structure, copy-on-write, checksumming and using redundancy to self-heal, with real-world examples and metaphors to illustrate the principles. Here's the deal: If you have lots of important data (photos, recordings, videos, anyone?) and care about it (content creators...), then you need to be concerned about data availability and integrity. ZFS solves that, it's that simple. A few little animations in the slides were quite helpful in explaining that, too :).

The bad news is that ZFS seems to have vanished from all of Apple's communication about the upcoming Mac OS X Snow Leopard release. That's really bad, because many developers and end users were looking forward to taking advantage of it.

The good news is that there are still ways to take advantage of ZFS as a Mac user: Run an OpenSolaris file server for archiving your data or as a TimeMachine store, or even run a small OpenSolaris ZFS server inside your Mac through VirtualBox.

DTrace: A Mac Developer/Admin's Heaven, Albeit in Jails

Next, we dove a little bit into DTrace and how it makes the OS really transparent for admins, developers and users. In addition to the dtrace(1) command, Apple created a nice GUI called "Instruments" as part of their Xcode development environment that leverages the DTrace infrastructure to collect useful data about your application in real time.
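
If you've never played with DTrace, here's a classic one-liner to give you a first taste. To the best of my knowledge it works the same way on OpenSolaris and on Mac OS X (use sudo or the appropriate privileges):

# Count system calls per process, live, until you press Ctrl-C:
sudo dtrace -n 'syscall:::entry { @[execname] = count(); }'

When you stop it, DTrace prints a table of process names and their system call counts - a small but telling glimpse of the observability we're talking about.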

Alas, as with ZFS, there's another downer, and this time it's more subtle: While you can enjoy the power of DTrace in Mac OS X now, it's still kinda crippled, as Adam Leventhal pointed out: Processes can escape the eyes of DTrace at will, which massively undermines DTrace's idea of absolute observability. Yes, there are valid reasons on both sides of the debate, but IMHO, legal things should be enforced using legal means, and software should be treated as software, meaning it is not a reliable way of enforcing license contracts - with or without powerful tools such as DTrace.

OpenSolaris for all: VirtualBox

Finally, a free present to the Mac OS X community: VirtualBox. I still get emails asking me to spend 80+ dollars on some virtualization software for my Mac. There are at least two choices in that price range: VMware Fusion and Parallels. Well, the good news is that you can save your 80 bucks and use VirtualBox instead.

This may not be new to you, since as a reader of my blog you've likely heard of VirtualBox before, but it's always amazing to me how slowly these things spread. So, after reading this article, do your Mac friends a favour and tell them they can save precious money by just downloading VirtualBox instead of spending it on other virtualization solutions for the Mac. It's really that simple.

Indeed, this was the part where the attendees took most of their notes and asked the most questions (with ZFS a close first in terms of discussion).

Conclusion

After our presentations, a lot of users came up and asked questions about how to install OpenSolaris on their hardware and in VirtualBox. Some even asked where to buy professional services for setting up an OpenSolaris ZFS file server in their company. The capabilities of ZFS clearly struck a chord with the Mac OS X community, which is no wonder: If you have lots of audio/video/photo data and care about quality and availability, then there's no way around ZFS.

I used this event as an excuse to try out Keynote, which worked quite well for me, especially because it helped me create some easy-to-understand animations about the mechanics of ZFS. I also liked the automatic guides a lot: they help you position elements on your slides very easily and seem to guess your layout intentions very well. I'd love the OpenOffice folks to check out Keynote's guides and see if they can come up with something similar. So, here's a Keynote version of my "OpenSolaris for Mac Users" slides as well as a PDF version (both in German) for you to check out and re-use if you like.

Update: Wolfgang's introductory slides are now available for download as well, and Klaus, the organizer of the event, posted a review in the Mac Treff München blog with some pictures, too.

Friday Feb 27, 2009

Munich OpenSolaris User Group Install Fest

Yesterday we had the first Munich OpenSolaris User Group (MUCOSUG) install fest at Munich Technical University's Mathematics and Computer Science building on the Garching campus. Many thanks go to Martin Uhl for organizing coffee and the meeting room, and for his help overall!

The building is very cool, featuring two giant parabolic slides that go all the way from the 3rd floor down to the ground floor. Check out some construction pictures here.

We began the meeting with a short presentation on OpenSolaris as a home server (here are the slides; let me know if you want the source). It covers some thoughts on why you need a home server (hints: photos, multimedia clients, backups, first-hand Solaris experience), where to get some extra software, first steps with ZFS, the CIFS server and iSCSI, and some useful blogs to follow up with for more good home-server specific content.

Most people had OpenSolaris installed already, either on their laptops or inside VirtualBox, so most of the conversation centered around tips for setting up home server hardware, how (and why) to install the VirtualBox guest additions, and the best ways to integrate VirtualBox networking and exchange files between host and guest.

I learned that sharing the host interface with the VirtualBox guest has become as painless as using NAT, with the added benefit of making your guest a first-class citizen on your network, so that's what I'll try out next. Also, the cost of 32 GB USB sticks with acceptable speeds has come way down, so I'll try one of them to host my OpenSolaris work environment and free up my local harddisk a bit.
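
In case you want to flip a VM to bridged mode yourself, this is roughly what it looks like from the command line. A sketch from memory: "opensolaris" stands in for your VM's name, the host adapter name depends on your system, and the exact VBoxManage option spelling may vary with your VirtualBox version:

# Put the VM's first virtual NIC into bridged mode on a host adapter:
VBoxManage modifyvm "opensolaris" --nic1 bridged --bridgeadapter1 eth0
# Switching back to NAT is just as easy:
VBoxManage modifyvm "opensolaris" --nic1 nat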

All in all, such geek gatherings are always a nice excuse to sit together and chat about the newest in technology, find new ideas and have a beer or two afterwards, so how about organizing your own OpenSolaris Installfest in your neighbourhood now?

Update: The way to set up CIFS in OpenSolaris turned out to be slightly more complicated. Please check the above slides for an updated list of commands on how to set this up. I forgot to include how to extend /etc/pam.conf and assumed it was automatic. Sorry, must be because I set this up at home a while ago...
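
Until you get to the slides, here's a minimal sketch of the setup from memory. The package names are those of OpenSolaris 2008.11, "jdoe" and "tank/media" are placeholders, and a reboot may be needed after installing the kernel module:

# Install and enable the in-kernel CIFS server:
pfexec pkg install SUNWsmbskr SUNWsmbs
pfexec svcadm enable -r smb/server
# Add this line to /etc/pam.conf so passwd(1) also creates SMB password hashes:
#   other password required pam_smb_passwd.so.1 nowarn
# ...then re-set each user's password so the SMB hash actually gets generated:
pfexec passwd jdoe
# Finally, share a ZFS filesystem over CIFS:
pfexec zfs set sharesmb=on tank/media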

Tuesday Jan 13, 2009

First Munich OpenSolaris User Group Meeting

Just as announced, yesterday we had our first Munich OpenSolaris User Group (MUCOSUG) meeting at the local Sun office in Heimstetten near Munich.

We organized this meeting in cooperation with the German Unix User Group's (GUUG) monthly SAGE meeting (thanks, Wolfgang!). Normally, about 30 people come to such meetings, so we were especially pleased to see over 40 people at this event.

Photos of the meeting are available here. If you took some photos of your own, just upload them to Flickr and tag them with "MUCOSUG". Yes, this somehow sounds like a nasal medicine, but hey, it's winter anyway and most people suffer from colds. Besides, "MUC" is the official airport code for Munich, which is why the name was chosen.

In this meeting, we discussed OpenSolaris 2008.11 and, due to popular demand, also talked about VirtualBox. We also got to tour the Sun Solution Center, which showcases Sun's hardware and software.

The presentation slides for OpenSolaris 2008.11 are provided here in ODP and PDF format (in German), as are the ones for VirtualBox (PDF). Feel free to use them for your own purposes in case you want to run your own local OpenSolaris 2008.11 event. Thanks to Glynn for some of the slides!

Here is also Brendan's excellent video where he shouts at some disks, for your viewing pleasure. All made possible through the magic of OpenSolaris, DTrace and the Fishworks team who brought us the Sun Storage 7000 systems:

Our next task is to find a week of the month and a day of the week for a regular meeting. We'll run a poll through the mailing list soon, so make sure to sign up by sending mail to "ug-mucosug-subscribe at opensolaris dot org" if you want to attend our next meetings.

Monday Apr 21, 2008

On Knowledge Management, Community Equity and Ontologies

Last week, I attended a meeting of the BITKOM Working Group for Knowledge Engineering & Management at the Sun Frankfurt office. The meeting was very nicely organized by Mr. Weber, Mr. Neuwirth and some colleagues from Sun in Germany (Hi Hansjörg, you should really blog!) and Peter Reiser from Sun in Switzerland. Therefore, I got to play host of the meeting without having to do too much work :).

Peter asked me to present his work on Community Equity (see also this interview with Shel Israel and this other one with Robert Scoble) and the CE 2.0 project to the group. The working group was very interested in how to encourage communities to participate and how Community Equity mechanisms can be used towards this goal. We had quite a few positive discussions during the breaks.

But some people seem to be concerned about tracking community contribution and participation on an automated basis; for example, see Mike's post on the subject and Alec's reaction to Peter's interview. These are all very valid thoughts, and indeed nobody wants to see their work or life reduced to a couple of numbers.

As always, the threat is not in the technology, but in the way we use it:

  • Measuring stuff is a good thing, if you know what you measure and how accurate that measurement is.
  • Telling people how their work is being received is also a good thing. I always get a kick out of the HELDENFunk download statistics (We should probably start publishing them), or my own blog's metrics. This is a huge motivator.
  • Telling people about how other people's work has been received is also a good thing. Nobody would put that kind of trust into eBay if it weren't for their rating system. How many books have you bought on Amazon based on other people's recommendations, stars, etc. on their site?
  • Web 2.0 style commenting, crosslinking, social networking, tagging and rating is also a good thing. Much of today's Web 2.0 world would be untrusted, unnavigable and unusable if it weren't for those mechanisms.
  • The next step is to take these concepts, and apply them to an enterprise context. This is what Peter's Community Equity work is all about. The goal I see here is: If you do a good job, others should be able to notice (including, but not limited to, your manager). If you're looking for an expert on topic X, you should be able to find people that may be able to help you. If you are talking to person Y or if you run into that person as part of a team, you should be able to see what kind of work that person has contributed to the enterprise before and what others are saying about them. Think Amazon and eBay and LinkedIn ratings, recommendations, tags etc. as a tool to better navigate the social network and knowledge base of your enterprise.

Notice that the part where discussions become heated is not the technology part, it's the "what do we do with the numbers" part. That, of course, is where we need to be careful. We need to understand how the data is generated, how it has been processed (i.e. the exact rules and formula used to generate the Community Equity score) and what it does not tell us. You may trust your latest auction winner to transact with you on that particular sale, but you still don't know if she is actually a nice person or not :).

As long as the process is open, well understood and transparent, using Web 2.0 mechanisms and Community Equity style metrics can be a very useful thing. You can generate a lot of useful information from that kind of data: What are the hot topics? Which documents are the most used, best rated, most re-used ones? Who are the company-internal creators, connectors and consumers of knowledge? Which topics have trouble being picked up by the community? Sounds like fascinating stuff, if you're responsible for your company's knowledge...

Of course, this was only a small part of the BITKOM meeting. We heard presentations by other companies on different applications of knowledge management technologies in a customer service context. Interestingly, all of them (including CE 2.0) mentioned the term Ontology in one way or another. In a knowledge management context, an ontology is the part of the system that relates "words" or other abstract data to real-world concepts and objects, resolving ambiguities, consolidating synonyms and correcting user errors. It's the part of the system that tries to bring in semantic knowledge, as opposed to merely processing words.

Ontologies are very hard to do. That's why most of the time they are generated "by hand", which is very time- and resource-consuming. The holy grail of ontologies is a system that can automatically generate semantic meaning out of raw data by itself, without any help. Some of these systems are seeded with hand-made ontologies that can then expand somewhat automatically.

An interesting approach to generating ontologies might be to analyze Web 2.0 style tagging data that has been created by users. An ontology system could then try to identify clusters of tags and assign them to real-world concepts, then try to identify relationships between those concepts. As an example, the tags "LDAP", "Directory Server" and "DS" all belong to the same concept, and they are related to (but not the same as) "Identity Management", "IdM" and "Databases". A search engine can then use this data to find better matches for a user who is looking for "Identity Management and LDAP interoperability".

As you can see, even a seemingly dry and academic workshop on "Knowledge Engineering and Management", organized by an industry association, can be exciting, sometimes transcending the boundaries between technology, philosophy and everybody's daily Web 2.0 style work.

Tuesday Apr 15, 2008

SAGE@GUUG Web 2.0 Presentation

Yesterday, we had a SAGE@GUUG Meeting at the Munich Sun office.

In a similar spirit to the USENIX SAGE, the SAGE@GUUG meetings are an informal gathering of system admins and Unix enthusiasts who like to talk about interesting computer-related topics. This time, I had the honor of hosting the Munich group's April meeting at Sun, and the topic of the day was Web 2.0. Many thanks to Wolfgang for organizing the meeting, and a lot of thanks to Barbara, my angel from marketing, for getting us food & drinks!

We began the meeting with this video:

Check out Mike Wesch's digital ethnography site for more information.

My slides were a slight modification of the GUUG FFG talk of the same name. As expected, the "PHP maintainability" slide with the large spaghetti photo triggered some agitated responses, but that's what provocations are for, and it's also why Ruby is becoming more and more popular. I try to make my slides unusual and interesting, not boring eye-charts and bullet-point deserts. Let me know what you think of them!

We had about 30 people and the interaction with the group was great. Many people pointed out examples of their own on how the world is changing thanks to web 2.0, most visible in the way young people interact with media and technology.

After the talk, we saw an introduction by our favourite IT Guy to the new Sun UltraSPARC T2+ servers:

That led us to a visit to the Sun Vision Center for some hardware show & tell, before going to the Fliegerbräu for some well-deserved beer.

Thursday Feb 14, 2008

Be a System Hero

Ansaphone mockery ad 

If you read this blog regularly, you might have noticed that I like spending time participating in podcasts for the German website Systemhelden.com (for instance, see here, here and of course here). The podcast and the Systemhelden.com community are in German, so if your native tongue isn't, the times of envy are over: Welcome to Systemheroes.co.uk!

What is it?

It's a community website for those that are the "up" in "uptime", the unsung heroes of data centers, the people that never get a "Thank you for delivering all of my 1526 emails today!" call: The system heroes. If you like tinkering with computer systems, it's probably something for you.

What's in it for me?

First of all: A lot of fun, including some comics. A place to plug your blog (and who doesn't want the occasional extra spike in hit rates...). A place to meet other system heroes and chat about those pesky little lusers and their latest PEBKAC incidents while exchanging LART maintenance tips. And they have the coolest system hero game around: Caffeine Crazy. As seen, er, heard on HELDENFunk #9 and #10. Try it out!

Yeah, there's some Sun marketing, too, I admit. Mainly references to cool technology from Sun and the ability to test it 60 days for free (if it's hardware) or just use it eternally for free (if it's software), but someone has to pay the hosting bills and I assure you: It's for the good of system herokind.

Oh, and you gotta love these great ads at the bottom of each page (my favourite is above).

Cool, what do I do?

Do as Yoda would say: "Hrrm, a system hero you want to be? Sign up you need!" Well, being a system hero has never been so much fun...

Thursday Sep 06, 2007

7 Easy Tips for ZFS Starters

So you're now curious about ZFS. Maybe you read Jonathan's latest blog entry on ZFS, maybe you've followed some other buzz around the Solaris ZFS file system, or maybe you saw a friend using it. Now it's time for you to try it out yourself. It's easy, and here are seven tips to get you started quickly and effortlessly:

1. Check out what Solaris ZFS can do for you

First, get yourself a picture of what the Solaris ZFS filesystem is, what features it has and how it can work to your advantage. Check out the CSI:Munich video for a fun demo of how Solaris ZFS can turn 12 cheap USB memory sticks into highly available, enterprise-class, robust storage. Of course, what works with USB sticks also works with your own harddisks or any other storage device. Also, there are great ZFS screencasts that show you some of the more powerful features in an easy-to-follow way. Finally, there's a nice writeup on "What is ZFS?" at the OpenSolaris ZFS Community's homepage.

2. Read some (easy) documentation

It's easy to configure Solaris ZFS. Really. You just need to know two commands: zpool (1M) and zfs (1M). That's it. So, get your hands onto a Solaris system (or download and install it for free) and take a look at those manpages. If you still want more, then there's of course the ZFS Administration Guide with detailed planning, configuration and troubleshooting steps. If you want to learn even more, check out the OpenSolaris ZFS Community Links page. German-speaking readers are invited to read my German white paper on ZFS or listen to episode #006 of the POFACS podcast.

3. Dive into the pool

Solaris ZFS manages your storage devices in pools. Pools are a convenient way of abstracting storage hardware and turning it into a repository of blocks to store your data in. Each pool takes a number of devices and applies an availability scheme (or none) to it. Pools can then be easily expanded by adding more disks to them. Use pools to manage your hardware and its availability properties. You could create a mirrored pool for data that should be protected against disk failure and needs fast access. Then, you could add another pool using RAID-Z (which is similar to, but better than, RAID-5) for data that needs to be protected but where performance is not the first priority. For scratch, test or demo data, a pool without any RAID scheme is OK, too. Pools are easily created:

zpool create mypool mirror c0d0 c1d0

This will create a mirror out of the two disk devices c0d0 and c1d0. Similarly, you can easily create a RAID-Z pool by saying:

zpool create mypool raidz c0d0 c1d0 c2d0

The easiest way to turn a disk into a pool is:

zpool create mypool c0d0

It's that easy. All the complexity of finding, sanity-checking, labeling, formatting and managing disks is hidden behind this simple command.

If you don't have any spare disks to try this out with, then you can just create yourself some files, then use them as if they were block devices:

# mkfile 128m /export/stuff/disk1
# mkfile 128m /export/stuff/disk2
# zpool create testpool mirror /export/stuff/disk1 /export/stuff/disk2
# zpool status testpool
  pool: testpool
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        testpool                 ONLINE       0     0     0
          mirror                 ONLINE       0     0     0
            /export/stuff/disk1  ONLINE       0     0     0
            /export/stuff/disk2  ONLINE       0     0     0

errors: No known data errors

The cool thing about this procedure is that you can create as many virtual disks as you like and then test ZFS's features such as data integrity, self-healing, hot spares, RAID-Z and RAID-Z2 etc. without having to find any free disks.
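
For example, here's a minimal self-healing experiment on the file-backed mirror from above. The seek offset is arbitrary; it just needs to land well past the ZFS labels at the start of the vdev:

# Simulate silent corruption by scribbling over part of one mirror half:
dd if=/dev/urandom of=/export/stuff/disk1 bs=1024 seek=2048 count=100 conv=notrunc
# Let ZFS verify every block; damaged blocks are repaired from the intact half:
zpool scrub testpool
zpool status testpool
# The CKSUM column now shows how many bad blocks were found and fixed.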

When creating a pool for production data, think about redundancy. There are three basic properties to storage: availability, performance and space. And it's a good idea to prioritize them in that order: Make sure you have redundancy (mirroring, RAID-Z, RAID-Z2) so ZFS can self-heal data when stuff goes wrong at the hardware level. Then decide how much performance you want. Generally, mirroring is faster and more flexible than RAID-Z/Z2, especially if the pool is degraded and ZFS needs to reconstruct data. Space is the cheapest of all three, so don't be greedy and try to give priority to the other two. Richard Elling has some great recommendations on RAID, space and MTTDL. Roch has also posted a great article on mirroring vs. RAID-Z.

4. The power to give

Once you have set up your basic pool, you can already access your new ZFS file system: Your pool has been automatically mounted for you in the root directory. If you followed the examples above, then you can just cd to /mypool and start using ZFS!

But there's more: Creating additional ZFS file systems that use your pool's resources is very easy, just say something like:

zfs create mypool/home
zfs create mypool/home/johndoe
zfs create mypool/home/janedoe

Each of these commands takes only seconds to complete, and each time you get a full new file system, already set up and mounted, ready for immediate use. Notice that you can manage your ZFS filesystems hierarchically as seen above. Use pools to manage storage properties at the hardware level, and use filesystems to present storage to your users and applications. Filesystems have properties (compression, quotas, reservations, etc.) that you can easily administer using zfs set and that are inherited across the hierarchy. Check out Chris Gerhard's blog for more thoughts on file system organization.
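
For example, here's what managing properties looks like in practice (the values are just examples):

# Compress all home directories; children inherit the setting:
zfs set compression=on mypool/home
# Limit janedoe to 10 GB of pool space and guarantee johndoe 5 GB:
zfs set quota=10g mypool/home/janedoe
zfs set reservation=5g mypool/home/johndoe
# See a property's value and whether it is set locally or inherited:
zfs get -r compression mypool/home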

5. Snapshot early, snapshot often

ZFS snapshots are quick, easy and cheap. Much cheaper than the horrible experience of realizing that you just deleted a very important file that hasn't been backed up yet! So, use snapshots whenever you can. If you're debating whether to snapshot or not, just do it. I recently spent only about $220 on two 320 GB USB disks for my home server to expand my pool with. At these prices, the time you spend thinking about whether to snapshot or not may be worth more than just buying more disk.

Again, Chris has some wisdom on this topic in his ZFS snapshot massacre blog entry. He once had over 60000 snapshots, and he snapshots filesystems by the minute! Since snapshots in ZFS "just work" and since they only take up the space that actually changes between snapshots, there's really no reason not to snapshot all the time. Maybe once per minute is a little bit exaggerated, but once a week, once per day or once an hour per active filesystem is definitely good advice.
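
If you haven't taken snapshots yet, here's a tiny sketch using the filesystems created above (the snapshot names are arbitrary):

# Take a snapshot, then list what you have:
zfs snapshot mypool/home/janedoe@2007-09-06
zfs list -t snapshot
# Browse a snapshot read-only through the hidden .zfs directory:
ls /mypool/home/janedoe/.zfs/snapshot/2007-09-06
# Or roll the whole filesystem back to it:
zfs rollback mypool/home/janedoe@2007-09-06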

Instead of time-based snapshotting, Chris came up with the idea of snapshotting a file system shared with Samba whenever the Samba user logs in!

6. See the Synergy

ZFS by itself is very powerful. But the full beauty of it can be unleashed by combining ZFS with other great Solaris 10 features. Here are some examples:

  • Tim Foster has written a great SMF service that will snapshot your ZFS filesystems on a regular basis. It's fully automatic, configurable and integrated with SMF in a beautiful way.

  • ZFS can create block devices, too. They are called zvols. Since Nevada build 54, they are fully integrated into the Solaris iSCSI infrastructure. See Ben Rockwood's blog entry on the beauty of iSCSI with ZFS.

  • A couple of people are now elevating this concept even further: Take two Thumpers, create big zvols inside them, export them through iSCSI and mirror over them with ZFS on a server. You'll get a huge, distributed storage subsystem that can be easily exported and imported on a regular network. A poor man's SAN and a powerful shared storage for future HA clusters thanks to ZFS, iSCSI and Thumper! Jörg Möllenkamp is taking this concept a bit further by thinking about ZFS, iSCSI, Thumper and SAM-FS.

  • Check out some cool Sun StorageTek Availability Suite and ZFS demos here.

  • ZFS boot support is still in the works, but if you're brave, you can try it out with the newer Solaris Nevada distributions on x64 systems. Think about the possibilities together with Solaris Live Upgrade! Create a new boot environment in seconds, without needing to find or dedicate a new partition, and save most of the needed disk space thanks to snapshots!

And that's only the beginning. As ZFS becomes more and more adopted, we'll see many more creative uses of ZFS with other Solaris 10 technologies and other OSes.

7. Beam me up, ZFS!

One of the most amazing features of ZFS is zfs send/receive. zfs send turns a ZFS filesystem into a bitstream that you can save to a file, pipe through bzip2 for compression, or send through ssh to a distant server for archiving or for remote replication through the corresponding zfs receive command. It also supports incremental sending and receiving of subsequent snapshots through the -i option.
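
If you haven't seen it in action yet, here's a minimal sketch of the round trip ("host2" and the backup dataset names are placeholders):

# Send a full snapshot stream into a compressed archive:
zfs snapshot mypool/home/janedoe@backup1
zfs send mypool/home/janedoe@backup1 | bzip2 > janedoe.zfs.bz2
# Or replicate it straight to another machine:
zfs send mypool/home/janedoe@backup1 | ssh host2 zfs receive backup/janedoe
# Later, send only the changes between two snapshots:
zfs snapshot mypool/home/janedoe@backup2
zfs send -i backup1 mypool/home/janedoe@backup2 | ssh host2 zfs receive backup/janedoe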

This is a powerful feature with a lot of uses:

  • Create your Solaris zone as a ZFS filesystem, complete with applications, configuration, automation scripts, users etc., then zfs send | bzip2 > zone_archive.zfs.bz2 it for later use. Then, unpack and create hundreds of cloned zones out of this master copy.

  • Easily migrate ZFS filesystems between pools on the same machine or on distant machines (through ssh) with zfs send/receive.

  • Create a crontab entry that takes a snapshot every minute, then zfs send -i it over ssh to a second machine where it is piped into zfs receive. Tadah! You'll get free, finely-grained, online remote replication of your precious data (a small sketch of such a script follows after this list).

  • Easily create efficient full or incremental backups of home directories (each in their own ZFS filesystems) through ZFS send. Again, you can compress them and treat them like you would, say, treat a tar archive.
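
To illustrate the minute-by-minute replication idea from the list above, here's a small sketch of such a cron-driven script. No error handling, "host2" and the dataset names are placeholders, the first run needs a full send, and /var/run/last-snap must be seeded with the name of that first snapshot:

#!/bin/sh
# replicate.sh - called from cron, e.g.: * * * * * /usr/local/bin/replicate.sh
LAST=`cat /var/run/last-snap`
NOW=`date +%Y%m%d%H%M%S`
# Take a new snapshot and send only the delta since the previous one:
zfs snapshot mypool/data@$NOW
# -F rolls the target back to its last received snapshot before receiving:
zfs send -i $LAST mypool/data@$NOW | ssh host2 zfs receive -F backup/data
# Remember the latest snapshot for the next run:
echo $NOW > /var/run/last-snap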

See? It is easy, isn't it? I hope this guide helps you find your way around the world of ZFS. If you want more, drop by the OpenSolaris ZFS Community, we have a mailing list/forum where bright and friendly people hang out that will be glad to help you.

Sunday Aug 12, 2007

ZFS Interview in the POFACS Podcast (German)

Last week, I was interviewed by the German podcast POFACS, the podcast for alternative computer systems. Today, the interview went live, so if you happen to understand German and want to learn about ZFS while driving to work or while jogging, you're invited to listen to the interview.

I was actually amazed at how long the interview turned out: It's 40 minutes, while recording the piece felt like only 20 minutes or so. The average commute time in Germany is about 20 minutes, so this interview will easily cover both ways to and from work. But there's more: This episode of POFACS also introduces you to the NetBSD operating system and the German Unix User Group GUUG. Finally, the guys at POFACS were also so kind as to feature the HELDENFunk podcast in a short introductory interview. Thanks!

So with a total playing time of 1 hour and 20 minutes, this episode has you covered for at least two commutes or a couple of jogging runs :).

Tuesday Aug 07, 2007

Consolidating Web 2.0 Services, anyone?

I have profiles on both LinkedIn and XING. And lately, I discovered Facebook, so I created a third profile there as well. And then there are half a dozen web forums here and there that I have a profile with as well.

Wouldn't it be nice to create and update a profile in one place, then have it available from whatever the Web 2.0 networking site du jour is? 

Each of these sites has their own messaging system. No, they don't forward me messages, they just send out notifications, since they want me to spend valuable online time with their websites, not anybody else's.

Wouldn't it be nice to have all Web 2.0 site's messaging systems aggregated as simple emails to my personal mailbox of choice?

I also like Plazes.com, and I update my whereabouts and what I'm doing there once in a while. I can also tell Facebook what I'm doing right now. And now, surprise, a colleague tells me that this Twitter thing (sorry, I don't have a Twitter profile yet...) is really cool and that I should use it to tell the world what I'm doing right now. That would be the third Web 2.0 service where I can type in what I do and let my friends know.

Wouldn't it be... You get the picture.

I think it would be real nice if Web 2.0 services could sit together at one table, agree on some open standards for Web 2.0 style profiles, messaging, microblogging, geo-tagging etc., and then connect with each other, so one change in one profile is reflected in the other as well, so one message sent to me from one forum reaches my conventional mail box and so one action I post to one microblogging site shows up on Plazes and Facebook as well.

I know I'm asking for a lot: After all, much of the business model of Web 2.0 companies relies on collecting all that data from their users and figuring out how to monetize it. But on the other hand, as a user of such services, I'd like to have a nice user experience, and updating three profiles is no fun if I want to do it seriously.

Therefore, I think one of the following will happen:

  • Web 2.0 companies will consolidate, in the sense of being merged into a few global uber-companies that own all business profiles, all geo-tagging stuff, etc. This is probably why Google is buying some Web 2.0 company on a weekly basis. Maybe I should buy XING stock and wait for them to be acquired by LinkedIn etc., but maybe I'm an investment sissy...
  • Web 2.0 meta-companies will emerge that leverage Web 2.0 APIs (or mimic users through traditional HTTP) and offer meta-services. I'd love to go to, say, a MetaProfiles.com, set up a real good and thorough profile of my life, then let it automatically export to LinkedIn, XING and whatnot.com, and I'd be a happy person. Let me know if you happen to know such a service.
    The closest thing to such a service is actually Facebook: Since it's not just a social website, but a real application platform, it has the potential to provide meta-services for any other Web 2.0 sites out there. I love being able to pull in data from Plazes, del.icio.us etc. into my Facebook profile and have it all in one place. I love the "My Profiles" app that lets me show off my dozen or so profiles, blogs, etc. in one single list.
  • Since both of the above are quite inevitable, eventually the remaining companies will sit down and start agreeing on unified and open standards for Web 2.0 centric data exchange. We've seen this with many other open standards, so why not for personal profiles, geodata, etc.?

Meanwhile, I'll check out some of the APIs out there. Maybe I can put together a sync script or something similar to help me across the turbulences of Web 2.0 tryouts.

But first, I'll try out Twitter. Since a couple of friends are using it already, I feel some social pressure 2.0 building up...
