Wednesday Aug 29, 2007

ZFS I/Os in motion

I'm walking a line - I'm thinking about I/O in motion
I'm walking a line - Just barely enough to be living
Get outta the way - No time to begin
This isn't the time - So nothing was biodone
Not talking about - Not many at all
I'm turning around - No trouble at all
You notice there's nothing around you, around you
I'm walking a line - Divide and dissolve.

[theme song for this post is Houses in Motion by the Talking Heads]

Previously, I mentioned a movie. OK, so perhaps it isn't a movie, but an animated GIF.


This is a time-lapse animation of some of the data shown in my previous blog on ZFS usage of mirrors. Here we're looking at one second intervals and the I/O to the slow disk of a two-disk mirrored ZFS file system. The workload is a recursive copy of the /usr/share directory into this file system.

The yellow areas on the device field are write I/O operations. For each time interval, the new I/O operations are shown with their latency elevators; shorter elevators mean lower latency. Green elevators mean the latency is 10ms or less, yellow between 10ms and 25ms, and red beyond 25ms. This provides some insight into the way the slab allocator works for ZFS. If you look closely, you can also see the redundant uberblock updates along the lower-right edge. If you can't see them in the small GIF, click on it for a larger version which is easier to see.
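The elevator coloring is a simple threshold scheme; a minimal sketch (the function name and structure are mine, not the tool's actual code):

```python
def latency_color(latency_ms):
    """Classify an I/O latency into the color buckets used in the animation."""
    if latency_ms <= 10:
        return "green"    # 10ms or less
    elif latency_ms <= 25:
        return "yellow"   # between 10ms and 25ms
    else:
        return "red"      # beyond 25ms
```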

ZFS makes redundant copies of the metadata, which it prefers to place in a different slab. You can see this in the animation as occasional writes further out than the bulk of the data writes. As the disk begins to fill, the gaps become filled.  Interestingly, the writes to the next slab (metadata) do not have much latency - they are in the green zone - even though this is a simple IDE disk and those writes require a seek. This should help allay one of the fears about ZFS, that its tendency to spread data out will be a performance problem - I see no clear evidence of that here.

I have implemented this as a series of Worldwind layers. This isn't really what Worldwind was designed to do, so there are some inefficiencies in the implementation, or it may be that there is still some trick I have yet to learn.  But it is functional in that you can see I/Os in motion.

Looking at ZFS

A few months ago, I blogged about why I wasn't at JavaOne and mentioned that I was looking at some JOGL code. Now I'm ready to show you some cool pictures which provide a view into how ZFS uses disks.

The examples here show a mirrored disk pair. I created a mirrored zpool and use the default ZFS settings. I then did a recursive copy of /usr/share into the ZFS file system. This is a write-mostly workload.

There are several problems with trying to visualize this sort of data:

  1. There is a huge number of data points.  A 500 GByte disk has about a billion blocks.  Mirror that and you are trying to visualize two billion data points. My workstation screen size is only 1.92 million pixels (1600x1200) so there is no way that I could see this much data.
  2. If I look at an ASCII table of this data, then it may be hundreds of pages long.  Just for fun, try looking at the output of zdb -dddddd to get an idea of how the data might look in ASCII, but I'll warn you in advance, try this only on a small zpool located on a non-production system.
  3. One dimensional views of the data are possible.  Actually, this is what zdb will show you.  There is some reasoning behind this, because a device is accessed as a single set of blocks using an offset and size for read or write operations. But this doesn't scale well, especially to a billion data points.
  4. Two dimensional views are also possible, where we basically fold the one dimensional data into a two dimensional array.  This hides some behaviour: disks are not really two dimensional, they are stacks of circles of different sizes, but those physical details are cleverly hidden and subject to change on a per-case basis.  So, perhaps we can see some info in two dimensions that would help us understand what is happening.
  5. Three dimensional views can show even more data.  This is where JOGL comes in: it is a 3-D library for Java.

It is clear that some sort of 3-D visualization system could help provide some insight into this massive amount of data.  So I did it.
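The folding described in item 4 - wrapping the 1-D block space into rows - can be sketched as follows (the function name and grid width are arbitrary choices of mine):

```python
def block_to_grid(offset, width):
    """Wrap a 1-D device block offset into (row, column) grid coordinates."""
    return divmod(offset, width)

# A 500 GByte disk with 512-byte blocks holds roughly a billion blocks,
# far more than a 1600x1200 screen can show one-per-pixel, which is why
# zooming and panning the 2-D view matters.
```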

Where is the data going? 

mirrored write iops

This is a view of the two devices in the mirror after they have been filled by the recursive copy. Yellow blocks indicate write operations, green blocks are read operations.  Since this was a copy into the file system, there aren't very many reads. I would presume that your browser window is not of sufficient resolution to show the few, small reads anyway, so you'll just have to trust me.

What you should be able to see, even at a relatively low resolution, is that we're looking at a 2-D representation of each device from a 3-D viewpoint. Zooming, panning, and moving the viewpoint allows me to observe more or less detail.

To gather this data, I used TNF tracing.  I could also write a dtrace script to do the same thing. But I decided to use TNF data because it has been available since Solaris 8 (7-8 years or so) and I have an archive of old TNF traces that I might want to take a look at some day. So what you see here are the I/O operations for each disk during the experiment.

How long did it take?  (Or, what is the latency?)

The TNF data also contains latency information.  The latency is measured as the difference in time between the start of the I/O and its completion. Using the 3rd dimension, I put the latency in the Z-axis.


Ahhh... this view tells me something interesting. The latency is shown as a line emitting from the starting offset of the block being written. You can see some regularity over the space as ZFS will coalesce writes into 128 kByte I/Os. The pattern is more clearly visible on the device on the right.

But wait! What about all of the red?  I color the latency line green when the latency is less than 10ms, yellow between 10ms and 25ms, and red beyond 25ms.  The height of the line is a multiple of its actual latency.  Wow!  The device on the left has a lot of red; it sure looks slow.  And it is.  On the other hand, the device on the right sure looks fast.  And it is. But this view is still hard to read, even when you can fly around and look at it from different angles. So, I added some icons...

I put icons at the top of the line. If I hover the mouse over an icon, it will show a tooltip which contains more information about that data point. In this case, the tooltip says, "Write, block=202688, size=64, flags=3080101, time=87.85"  The size is in blocks, the flags are defined in a header file somewhere, and the time is latency in milliseconds.  So we wrote 32 kBytes at block 202,688 in 87.85 ms.  This is becoming useful!  By cruising around, it becomes apparent that for this slow device, small writes are faster than large writes, which is pretty much what you would expect.
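The tooltip arithmetic is simple to check; a hedged sketch (the constant and names are mine, assuming the traditional 512-byte disk block):

```python
BYTES_PER_BLOCK = 512  # assumption: the tooltip's size field counts 512-byte blocks

def tooltip_size_kb(size_blocks):
    """Convert the tooltip's block count into kilobytes."""
    return size_blocks * BYTES_PER_BLOCK / 1024

# size=64 blocks -> 32.0 kB, matching the 32 kBytes quoted above
```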

Finding a place in the world

Now for the kicker.  I implemented this as an add-on to NASA's Worldwind.



 I floated my devices at 10,000 m above the ocean off the west coast of San Diego! By leveraging the Worldwind for Java SDK, I was able to implement my visualization by writing approximately 2,000 lines of code. This is a pretty efficient way of extending a GIS tool into non-GIS use, while leveraging the fact that GIS tools are inherently designed to look at billions of data points in 3-D.

More details of the experiment

The two devices are intentionally very different from a performance perspective. The device on the left is an old, slow, relatively small IDE disk. The device on the right is a ramdisk. 

I believe that this technique can lead to a better view of how systems work under the covers, even beyond disk devices.  I've got some cool ideas, but not enough hours in the day to explore them all.  Drop me a line if you've got a cool idea.

The astute observer will notice another view of the data just to the north of the devices. This is the ZFS space map allocation of one of the mirror vdevs. More on that later... I've got a movie to put together...


Monday Jul 30, 2007

Solaris Cluster Express is now available

As you have probably already heard, we have begun to release Solaris Cluster source at the OpenSolaris website. Now we are also releasing a binary version for use with Solaris Express. You can download the bits from the download center.

Share and enjoy!

Monday Jul 16, 2007

San Diego OpenSolaris User's Group meeting this week

Meeting Time: Wednesday, July 18 at 6:00pm
Sun Microsystems
9515 Towne Centre Drive
San Diego, CA 92121
Building SAN10 - 2nd floor - Gas Lamp Conference Room
Map to Sun San Diego

On July 18, Ryan Scott will be presenting "Building and Deploying OpenSolaris." Ryan will demonstrate how to download the OpenSolaris source code, how to make source code changes, and how to build and install these changes.

Ryan is a kernel engineer in the Solaris Core Technology Group, working on implementing Solaris on the Xen Hypervisor. In previous work at Sun, he worked on Predictive Self Healing, for which he received a Chairman's Award for Innovation. He has also worked on error recovery and SPARC platform sustaining. Ryan joined Sun in 2001 after receiving a BSE in Computer Engineering from Purdue University.

Information about past meetings is here.

Monday Jun 25, 2007

Brain surgery: /usr/ccs tumor removed

Sometimes it just takes way too long to remove brain damage. Way back when Solaris 2.0 was forming, someone had the bright idea to move the  C compilation system from /usr to /usr/ccs. I suppose the idea was that since you no longer had to compile the kernel, the C compiler no longer needed to be in the default user environment.  I think the same gremlin also removed /usr/games entirely, another long-time staple.  This move also coincided with the "planetization of Sun" idea, so the compilers were split off to become their own profit and loss center.  IMHO, this is the single biggest reason gcc ever got any traction beyond VAXen.  But I digress...

No matter what the real reasons were, and who was responsible (this was long before I started working at Sun), I am pleased to see that /usr/ccs is being removed. I've long been an advocate of the thought that useful applications should be in the default user environment. We should never expose our company organization structure in products, especially since we're apt to reorganize far more often than products change. IMHO, the /usr/ccs fiasco was exposing our customers to pain because of our organizational structure.  Brain damage. Cancer. A bad thing.

I performed a study of field installed software a few years ago. It seems that Sun makes all sorts of software which nobody knows about because it is not installed by default, or is not installed into the default user environment. I'm very happy to see all of the positive activity in the OpenSolaris community to rectify this situation and make Solaris a better out of the box experience.  We still have more work to do, but removing the cancerous brain damage that was /usr/ccs is a very good sign that we are moving in the right direction.

Thursday Jun 07, 2007

Ducks win! Ducks win the cup!

The Anaheim Ducks defeated the Ottawa Senators to win the Stanley Cup last night! The team played well through the season and into the playoffs and deserved the win. Congrats to the team!

Too bad Hal Stern's favorite team were sitting at home watching, it would have been more pleasurable to avenge the 2003 Stanley Cup Final game 7.  Maybe next year, Hal.

 In an odd display of Southern California culture, after the post-game festivities, mostly dominated by the players skating around holding the cup, the programming went straight to Wheel of Fortune. I have to think that if Ottawa had won, the TV stations in Canada would still be showing highlights and party shots well into the wee hours of Friday morning.

Tuesday May 08, 2007

Who's not at JavaONE?

I'm not going to JavaONE this year.  And I'm a little sad about that, but I'll make it one day.  I was part of the very first JavaONE prelude.  "Prelude" is the operative term because at that time Sun employees were actively discouraged from attending.  This was a dain bramaged policy which has since been fixed, but at the time the idea was to fill the event with customers and developers.  Now it is just full of every sort of person.  Y'all have fun up there!

So to celebrate this year's JavaONE, I'm learning JOGL.   Why?  Well, I've got some data that I'd like to visualize and I've not found a reasonable tool for doing it in an open, shareable manner.  Stay tuned...

Friday May 04, 2007

ZFS, copies, and data protection

OpenSolaris build 61 (or later) is now available for download. ZFS has added a new feature that will improve data protection: redundant copies for data (aka ditto blocks for data). Previously, ZFS stored redundant copies of metadata. Now this feature is available for data, too.

This represents a new feature which is unique to ZFS: you can set the data protection policy on a per-file system basis, beyond that offered by the underlying device or volume. For single-device systems, like my laptop with its single disk drive, this is very powerful. I can have a different data protection policy for the files that I really care about (my personal files) than the files that I really don't care about or that can be easily reloaded from the OS installation DVD. For systems with multiple disks assembled in a RAID configuration, the data protection is not quite so obvious. Let's explore this feature, look under the hood, and then analyze some possible configurations.

Using Copies

To change the numbers of data copies, set the copies property. For example, suppose I have a zpool named "zwimming." The default number of data copies is 1. But you can change that to 2 quite easily.

# zfs set copies=2 zwimming

The copies property works for all new writes, so I recommend that you set that policy when you create the file system or immediately after you create a zpool.

You can verify the copies setting by looking at the properties.

# zfs get copies zwimming
NAME      PROPERTY  VALUE     SOURCE
zwimming  copies    2         local

ZFS will account for the space used. For example, suppose I create three new file systems and copy some data to them. You can then see that the space used reflects the number of copies. If you use quotas, then the copies will be charged against the quotas, too.

# zfs create -o copies=1 zwimming/single
# zfs create -o copies=2 zwimming/dual
# zfs create -o copies=3 zwimming/triple
# cp -rp /usr/share/man1 /zwimming/single
# cp -rp /usr/share/man1 /zwimming/dual
# cp -rp /usr/share/man1 /zwimming/triple
# zfs list -r zwimming
NAME              USED  AVAIL  REFER  MOUNTPOINT
zwimming         48.2M   310M  33.5K  /zwimming
zwimming/dual    16.0M   310M  16.0M  /zwimming/dual
zwimming/single  8.09M   310M  8.09M  /zwimming/single
zwimming/triple  23.8M   310M  23.8M  /zwimming/triple

This makes sense. Each file system has one, two, or three copies of the data and will use correspondingly one, two, or three times as much space to store the data.

Under the Covers

ZFS will spread the ditto blocks across the vdev or vdevs to provide spatial diversity. Bill Moore has previously blogged about this, or you can see it in the code for yourself. From a RAS perspective, this is a good thing. We want to reduce the possibility that a single failure, such as a drive head impact with media, could disturb both copies of our data. If we have multiple disks, ZFS will try to spread the copies across multiple disks. This is different than mirroring, in subtle ways. The actual placement is ultimately based upon available space. Let's look at some simplified examples. First, for the default file system configuration settings on a single disk.

Default, simple config

Note that there are two copies of the metadata, by default. If we have two or more copies of the data, the number of metadata copies is three.
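Both cases fit a simple rule: metadata gets one more copy than the data, capped at three. A minimal sketch of that rule, assuming it generalizes this way:

```python
def metadata_copies(data_copies):
    """ZFS stores one more copy of metadata than of data, capped at 3."""
    return min(data_copies + 1, 3)
```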

ZFS, 2 copies 

Suppose you have a 2-disk stripe. In that case, ZFS will try to spread the copies across the disks.

ZFS, 2 copies, 2 disks

Since the copies are created above the zpool, a mirrored zpool will faithfully mirror the copies.


ZFS, copies=2, mirrored

Since the copies policy is set at the file system level, not the zpool level, a single zpool may contain multiple file systems, each with a different policy. In other words, non-copied data can be allocated alongside data that is copied.


ZFS, mixed copies

Using different policies for different file systems lets you apply stronger data protection where it matters most, and offers many more configuration permutations to weigh in your designs.

RAS Modeling

It is obvious that increasing the number of data copies will effectively reduce the amount of available space accordingly. But how will this affect reliability? To answer that question we use the MTTDL[2] model I previously described, with the following changes:

First, we calculate the probability of unsuccessful reconstruction due to a UER for N disks of a given size (unit conversion omitted). The number of copies decreases this probability. This makes sense: we could use another copy of the data for reconstruction, and to lose the data completely we'd need to lose all copies:

Precon_fail = ((N-1) \* size / UER)^copies

For single-disk failure protection:

MTTDL[2] = MTBF / (N \* Precon_fail)

For double-disk failure protection:

MTTDL[2] = MTBF^2 / (N \* (N-1) \* MTTR \* Precon_fail)

Note that as the number of copies increases, Precon_fail approaches zero quickly. This will increase the MTTDL. We want higher MTTDL, so this is a good thing.
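A worked example with hypothetical numbers (the disk size, UER, MTBF, and MTTR below are illustrative values of mine, not measurements, and unit conversions are simplified as in the model above):

```python
def precon_fail(n_disks, size_bits, uer_bits, copies):
    """Probability of unsuccessful reconstruction; extra copies reduce it."""
    return ((n_disks - 1) * size_bits / uer_bits) ** copies

def mttdl2_single(mtbf, n_disks, p_recon):
    """MTTDL[2] for single-disk failure protection."""
    return mtbf / (n_disks * p_recon)

def mttdl2_double(mtbf, n_disks, mttr, p_recon):
    """MTTDL[2] for double-disk failure protection."""
    return mtbf ** 2 / (n_disks * (n_disks - 1) * mttr * p_recon)

# Hypothetical: five 500 GByte disks, UER of 1 error per 1e14 bits read,
# MTBF of 1e6 hours, MTTR of 24 hours.
size_bits = 500e9 * 8
p1 = precon_fail(5, size_bits, 1e14, copies=1)  # 0.16
p2 = precon_fail(5, size_bits, 1e14, copies=2)  # 0.0256 -- copies help quickly
```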

OK, now that we can calculate available space and MTTDL, let's look at some configurations for 46 disks available on a Sun Fire X4500 (aka Thumper). We'll look at single parity schemes, to reduce the clutter, but double parity schemes will show the same, relative improvements.

ZFS, X4500 single parity schemes with copies

bigger view 

You can see that we are trading off space for MTTDL. You can also see that for raidz zpools, having more disks in the sets reduces the MTTDL. It gets more interesting to see that the 2-way mirror with copies=2 is very similar in space and MTTDL to the 5-disk raidz with copies=3. Hmm. Also, the 2-way mirror with copies=1 is similar in MTTDL to the 7-disk raidz with copies=2, though the mirror configurations allow more space. This information may be useful as you make trade-offs. Since the copies parameter is set per file system, you can still set the data protection policy for important data separately from unimportant data. This might be a good idea for some situations where you might have permanent originals (eg. CDs, DVDs) and want to apply a different data protection policy.

In the future, once we have a better feel for the real performance considerations, we'll be able to add a performance component into the analysis.

Single Device Revisited

Now that we see how data protection is improved, let's revisit the single device case. I use the term device here because there is a significant change occurring in storage as we replace disk drives with solid state, non-volatile memory devices (eg. flash disks and future MRAM or PRAM devices). A large number of enterprise customers demand dual disk drives for mirroring root file systems in servers. However, there is also a growing demand for solid state boot devices, and we have some Sun servers with this option. Some believe that by 2009, the majority of laptops will also have solid state devices instead of disk drives. In the interim, there are also hybrid disk drives.

What effect will these devices have on data retention? We know that if the entire device completely fails, then the data is most likely unrecoverable. In real life, these devices can suffer many failures which result in data loss but which are not complete device failures. For disks, the most common failure we see is an unrecoverable read, where data is lost from one or more sectors (bar 1 in the graph below). For flash memories, there is an endurance issue: repeated writes to a cell may reduce the probability of reading the data correctly. If you only have one copy of the data, then the data is lost, never to be read correctly again.

We captured disk error codes returned from a number of disk drives in the field. The Pareto chart below shows the relationship between the error codes. Bar 1 is the unrecoverable read which accounts for about 24% of the errors recorded. The violet bars show recoverable errors which did succeed. Examples of successfully recovered errors are: write error - recovered with block reallocation, read error - recovered by ECC using normal retries, etc. The recovered errors do not (immediately) indicate a data loss event, so they are largely transparent to applications. We worry more about the unrecoverable errors.


Disk error Pareto chart

Approximately 1/3 of the errors were unrecoverable. If such an error occurs in ZFS metadata, then ZFS will try to read an alternate metadata copy and repair the damaged copy. If the data has multiple copies, then it is likely that we will not lose any data. This is a more detailed view of the storage device because we are not treating all failures as full device failures.
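The recovery path just described can be sketched as follows (a schematic of the idea, not the actual ZFS code path; the names are mine):

```python
def read_with_ditto(copies, checksum_ok):
    """Try each stored copy until one passes its checksum.

    Returns the good data plus the indices of copies needing repair.
    Schematic only -- real ZFS verifies blocks against checksums stored
    in the parent block pointer and issues the repair writes itself.
    """
    for data in copies:
        if checksum_ok(data):
            bad = [i for i, c in enumerate(copies) if not checksum_ok(c)]
            return data, bad
    raise IOError("all copies failed checksum: data is unrecoverable")
```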

Both real and anecdotal evidence suggests that unrecoverable errors can occur while the device is still largely operational. ZFS has the ability to survive such errors without data loss. Very cool. Murphy's Law will ultimately catch up with you, though. In the case where ZFS cannot recover the data, ZFS will tell you which file is corrupted. You can then decide whether or not you should recover it from backups or source media.

Another Single Device

Now that I've got you to think of the single device as a single device, I'd like to extend the thought to RAID arrays. There is much confusion amongst people about whether ZFS should or should not be used with RAID arrays. If you search, you'll find comments and recommendations both for and against using hardware RAID for ZFS. The main argument is centered around the ability of ZFS to correct errors. If you have a single device backed by a RAID array with some sort of data protection, then previous versions of ZFS could not recover data which was lost. Hold it right there, fella! Do I mean that RAID arrays and the channel from the array to main memory can have errors? Yes, of course! We have seen cases where errors were introduced somewhere along the path between disk media and main memory, and data was lost or corrupted. Prior to ZFS, these were silent errors, blissfully ignored. With ZFS, the checksum now detects these errors and triggers recovery. If you don't believe me, then watch the ZFS forum, where we get reports like this about once a month or so. With ZFS copies, you can now recover from such errors without changing the RAID array configuration.

If ZFS can correct a data error, it will attempt to do so. You now have the option to improve your data protection even when using a single RAID LUN, and it is the same mechanism we can use for a single disk or flash drive: data copies. You can set copies on a per-file system basis and thus have different data protection policies even though the data is physically stored on a RAID LUN in a hardware RAID array. I really hope we can put to rest the "ZFS prefers JBOD" argument and just concentrate our efforts on implementing the best data protection policies for the requirements.

ZFS with data copies is another tool in your toolbelt to improve your life, and the life of your data.

Thursday Apr 26, 2007

 gets better every year

I've been a baseball fan for a long time.  Growing up in Alabama during the time of Hank Aaron's amazing career meant that I was an Atlanta Braves fan.  Now that I live near San Diego, I find it difficult to follow the Braves, so of course I've adopted the San Diego Padres, too.  The best way to follow the teams is now  I've been visiting for a long time and can say that without doubt it gets much better every year. This year they've significantly improved the site, especially the gameday information.  The new gameday is very, very cool: it shows the pitches in 3-D, including the strike zone and break, and you can pick different angles to view the pitches.  Since such things cannot be completely automated, there must be someone in the back room putting together the play-by-play commentary.  Well, CNET just published a photo gallery showing the fine folks who run the site and put all of the information out there in near real time.  It is also cool because you can see some of the Sun gear in the background.  I'm happy to be a long time baseball fan (love ya, Hank), and even happier to know that Sun has helped make such a cool site.

Wednesday Apr 25, 2007

It gets worse when...

It is bad when someone wakes you from a deep slumber at 2:00 in the morning.

It gets worse when that someone is a policeman.

It gets worse when he asks, "do you have any cows?"  Groggily, I answer, "yes."

It gets worse when he says, "there is about a half dozen cows loose on the highway."  Suddenly, and without coffee, I am wide awake.  We have a half dozen cows in the pasture near the highway.

It gets worse when he says, "see that car spun off the road in the ditch?"  In the distance I see a car with its blinkers on.

It gets worse when he says, "the car clipped a cow and spun out."  I've seen pictures of livestock vs. automobile encounters and they are suddenly recalled from the depths of my memory.

It gets better when he says, "everybody seems to be ok."  Whew!

It gets worse when he asks, "can you get the cows out of the highway?"  "Sure!"

I put on my boots, grab some lights, and jump in the Calabaza. The policeman heads down to the highway to deal with the wrecker arriving to extract the car from the ditch.

I drive into the pasture to see if the fence is broken and accidentally drop my flashlight.  I jump out to retrieve the flashlight and hear an inquisitive "moo?"  I scan the pasture and see 12 green eyeballs staring at me.  "Got hay?" they ask.  Whew, the girls are safe.

We rounded up the wandering bovines, moved them to a safe area, and began looking for their home. To our surprise, we couldn't find a broken fence or open gate.  We did find evidence that they had traveled about a half mile down the highway, though.  By 4:30 AM we decided that we would continue to look for their home after sunrise and went back to bed.  The story is getting better, now, but...

It gets worse when you sleep through the alarm and miss your 8:00 AM conference call...


Monday Apr 23, 2007

Mainframe inspired RAS features in new SPARC Enterprise Servers

My colleague, Gary Combs, put together a podcast describing the new RAS features found in the Sun SPARC Enterprise Servers. The M4000, M5000, M8000, and M9000 servers have very advanced RAS features, which put them head and shoulders above the competition. Here is my list of favorites, in no particular order:

  1. Memory mirroring. This is like RAID-1 for main memory. As I've said many times, there are 4 types of components which tend to break most often: disks, DIMMs (memory), fans, and power supplies. Memory mirroring brings the fully redundant reliability techniques often used for disks, fans, and power supplies to DIMMs.
  2. Extended ECC for main memory.  Full chip failures on a DIMM can be tolerated.
  3. Instruction retry. The processor can detect faulty operation and retry instructions. This feature has been available on mainframes, and is now available for the general purpose computing markets.
  4. Improved data path protection. Many improvements here, along the entire data path.  ECC protection is provided for all of the on-processor memory.
  5. Reduced part count from the older generation Sun Fire E25K.  Better integration allows us to do more with fewer parts while simultaneously improving the error detection and correction capabilities of the subsystems.
  6. Open-source Solaris Fault Management Architecture (FMA) integration. This allows systems administrators to see what faults the system has detected and the system will automatically heal itself.
  7. Enhanced dynamic reconfiguration.  Dynamic reconfiguration can be done at the processor, DIMM (bank), and PCI-E (pairs) level of granularity.
  8. Solaris Cluster support.  Of course Solaris Cluster is supported including clustering between Solaris containers, dynamic system domains, or chassis.
  9. Comprehensive service processor. The service processor monitors the health of the system and controls system operation and reconfiguration. This is the most advanced service processor we've developed. Another welcome feature is the ability to delegate responsibilities to different system administrators with restrictions so that they cannot control the entire chassis.  This will be greatly appreciated in large organizations where multiple groups need computing resources.
  10. Dual power grid. You can connect the power supplies to two different power grids. Many people do not have the luxury of access to two different power grids, but those who have been bitten by a grid outage will really appreciate this feature.  Think of this as RAID-1 for your power source.

I don't think you'll see anything revolutionary in my favorites list. This is due to the continuous improvements in the RAS technologies.  The older Sun Fire servers were already very reliable, and it is hard to create a revolutionary change for mature technologies.  We have goals to make every generation better, and we've made many advances with this new generation.  If the RAS guys do their job right, you won't notice it - things will just keep working.

Friday Apr 20, 2007

Teraflop in a box

About 20 years ago, I was working on a project to create a massively parallel processor capable of delivering 1 Teraflop (one trillion floating point operations per second).  This was a grand challenge for the time and technology. Almost all of the projects trying to tackle this sort of problem were massively parallel architectures, and many of the companies formed to build such machines are long gone.

Today, it is quite feasible to put a bunch of machines in a (relatively) small room (or Black Box) and achieve 1 Teraflop. But the problem with these sorts of architectures (grid is the current buzzword) is that the programming and debugging environment is difficult. Lots of work has been done on that front, too, but it is still far easier to develop and debug programs in a single OS model with a single shared memory space.

I am very happy to say that we now have a single machine, running a single OS instance, with a single address space, which is capable of 1 Teraflop!  The Sun SPARC Enterprise M9000 server is the fastest single machine on the planet!


Wednesday Apr 18, 2007

Is eWeek living on internet time?

eWeek has published a nice article describing Sun's new, low-cost RAID array: the Sun StorageTek ST2500 Low Cost Array.  This is an interesting new product that has broad appeal and will be a heck of a good box to run under ZFS.

But I'm worried about eWeek.  It seems that they've lost track of time.  Many of us have been running on internet time for most of our lives.  This quote makes me wonder if eWeek forgot to update their timezone data:

"The ZFS [the speedy Zeta[sic] file system, recently released to the open-source community by Sun] is very interesting, and people are looking at it," [Henry] Baltazar told eWeek.

I'm pretty sure Henry Baltazar is running on internet time, and provided a very nice quote.  But whoever added the editorial clarification at eWeek spelled Zettabyte wrong and is running on island time.  ZFS was released to the open-source community on June 14, 2005 - nearly 2 years ago in real time.  Even in real time, 2 years can hardly be considered "recent."  Sigh.


Thursday Apr 05, 2007

ZFS ported to FreeBSD!

The FreeBSD team has added ZFS to the FreeBSD-7.0 release!  This is excellent news and all of us are happy to share with the FreeBSD community.  Pawel Jakub Dawidek has posted this note to the FreeBSD and ZFS community. This will greatly expand the use of ZFS and will no doubt lead to more innovative developments in the community.  Well done!

Tuesday Apr 03, 2007

San Diego OpenSolaris Users Group forming

We're starting a San Diego OpenSolaris Users Group. Everyone is invited to participate, especially those in the San Diego area. We're planning on meeting the first Wednesday of every month, except for July 4th, 2007, which is a holiday. The first meeting will be Wednesday May 2. For more information, see the website, join the e-mail list, or monitor the forum.



