Thursday Jan 22, 2009

Gotta love the spring

In this time of great changes, I thought I might share a view from the ranch this morning.

Our cherry trees begin blooming after the rains start in November or December. They will continue to bloom for another month or two. For all of those people who witnessed change on the Mall this week, huddled against the cold, know that soon the changes will bring spring-like weather and the cherry blossoms will bloom.


Wednesday Nov 05, 2008

Roch star and Really Cool Stuff

Our very own Roch star appears in a new video released today as part of the MySQL announcements. In the video, "A Look Inside Sun's MySQL Optimization Lab," Roch gives a bit of a tour, and at around the 3:00 mark you get a glimpse of some Really Cool Stuff which might be of interest to ZFS folks.

Friday Oct 24, 2008

Oldie but goodie

This week, several colleagues and I were discussing problems facing people who run datacenters due to the increasingly high churn rates in hardware. Some customers prefer systems which don't change every other month, so they can plan for longevity. Other customers want the latest and greatest bling. Vendors are stuck in the middle and have their own issues with stocking, manufacturing, plant obsolescence, etc. So, how can you design your datacenter architecture to cope with these conflicting trends? Well, back in 1999 I wrote a paper for the SuperG conference which discusses this trend, to some degree. It is called A Model for Datacenter.com. It is now nearly 10 years old, but seems to be holding its own, even though much has changed in the past decade. I'm posting it here, because I never throw things away and the SuperG proceedings are not generally available.

Enjoy.

Monday Oct 13, 2008

Evolution of RAS in the Sun SPARC T5440 server

Reliability, Availability, and Serviceability (RAS) in the Sun SPARC Enterprise T5440 builds upon the solid foundations created for the Sun SPARC Enterprise T5140, T5240, and Sun Fire X4600 M2 servers. The large number of CPU cores available in the T5440 needs a large amount of I/O capability to balance the design. The physical design of the X4600 M2 servers was a natural candidate for the new design – modular CPU and memory cards along with plenty of slots for I/O expansion. We've also seen good field reliability from the X4600 M2 servers and their components. The T5440 is an excellent example of how leveraging the best parts of these other designs has resulted in a very reliable and serviceable system.

The trade-offs required for scaling from a single-board design to a larger, multiple-board design always impact the reliability of the server. Additional connectors and other parts also contribute to increased failure rates, or lower reliability. On the other hand, the ability to replace a major component without replacing a whole motherboard increases serviceability – and lowers operating costs. The additional parts which enable the system to scale also have an impact on performance, as some of my colleagues have noted. When comparing systems on a single aspect of the RAS and performance spectrum, you can miss important design characteristics, or worse, misunderstand how the trade-offs impact the overall suitability of a system. To get better insight into how to apply highly scalable systems to a complex task, I prefer to do a performability analysis.

The T5440 has almost exactly twice the performance capability of the T5220. If you have a workload which previously required four T5220s with a spare (for availability), then you should be able to host that workload on only two T5440s and a spare. Using benchmarks for sizing is the best way to compare, and in general a T5440 is about six times more capable than a Sun Fire V490 server. That gives us comparable performance sizing across the systems.

On the RAS side, a single T5440 is more reliable than two T5220s, so there is a reliability gain. But in a performability analysis, that gain is weighed against the smaller number of T5440s. For example, if the workload requires 4 servers and we add a spare, then the system is considered performant when 4 of 5 servers are available. As we consolidate onto fewer servers, the model changes accordingly: for 2 servers and a spare, the system is performant when 2 of 3 servers are available. The reliability gain of using fewer servers can be readily seen in the number of yearly service calls expected. Fewer servers tend to mean fewer service calls. The math behind this can become complicated for large clusters and is arguably counter-intuitive at times. Fortunately, our RAS modeling tools can handle very complicated systems relatively easily.
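To give a flavor of the math, here is a minimal sketch of a k-of-n performability calculation, assuming independent servers and a made-up per-server availability; our real RAS models also account for repair rates, service response times, and partial failure modes, so treat this only as an illustration.

    from math import comb

    def k_of_n(a_unit, n, k):
        """Probability that at least k of n independent units are up,
        given per-unit steady-state availability a_unit."""
        return sum(comb(n, i) * a_unit**i * (1 - a_unit)**(n - i)
                   for i in range(k, n + 1))

    a = 0.9995                      # hypothetical per-server availability
    print(k_of_n(a, n=5, k=4))      # 4-of-5, like four T5220s plus a spare
    print(k_of_n(a, n=3, k=2))      # 2-of-3, like two T5440s plus a spare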

We build availability models for all of our systems and use the same service parameters to permit easy comparisons. For example, we would model all systems with an 8-hour service response time. The models can then be compared directly:

    System                               Units    Performability   Yearly Services
    Sun SPARC Enterprise T5440 server    2 + 1    0.99999903       0.585
    Sun SPARC Enterprise T5240 server    4 + 1    0.99999909       0.661
    Sun SPARC Enterprise T5140 server    4 + 1    0.99999915       0.687
    Sun Fire V490 server                 12 + 1   0.99998644       1.402

In these results, you can see that the T5440 clearly wins on the number of units and yearly services. Both of these metrics impact total cost of ownership (TCO), since the complexity of an environment is generally attributed to the number of OS instances – fewer servers generally means fewer OS instances. Fewer service calls mean fewer problems that require physical human interaction.

You can also see that the performability of the T5x40 systems is very similar. Any of these systems will be much better than a set of V490 servers.

More information on the RAS features of these servers can be found in the white paper we wrote, Maximizing IT Service Uptime by Utilizing Dependable Sun SPARC Enterprise T5140, T5240, and T5440 Servers. Ok, I'll admit that someone else wrote the title...

Wednesday Oct 01, 2008

Hey! Where did my snapshots go? Ahh, new feature...

I've been running Solaris NV b99 for a week or so.  I've also been experimenting with the new automatic snapshot tool, which should arrive in b100 soon. To see what snapshots are taken, you can use zfs's list subcommand.

# zfs list
NAME                     USED    AVAIL    REFER   MOUNTPOINT
rpool                   3.63G    12.0G    57.5K   /rpool
rpool@install             17K        -    57.5K   -

...

This is typical output and shows that my rpool (root pool) file system has a snapshot which was taken at install time: rpool@install

But in NV b99, suddenly, the snapshots are no longer listed by default. In general, this is a good thing because there may be thousands of snapshots and the long listing is too long for humans to understand. But what if you really do want to see the snapshots?  A new flag has been added to the zfs list subcommand which will show the snapshots.

# zfs list -t snapshot
NAME                     USED    AVAIL    REFER   MOUNTPOINT
rpool@install             17K        -    57.5K   -

...


This should clear things up a bit and make it easier to manage large numbers of snapshots when using the CLI. If you want to see more details on this change, see the heads-up notice for PSARC 2008/469.

Tuesday Sep 02, 2008

Sample RAIDoptimizer output

We often get asked, "what is the best configuration for lots of disks" on the ZFS-discuss forum. There is no one answer to this question because you are really trading off performance, RAS, and space. For a handful of disks, the answer is usually easy to figure out in your head. For a large number of disks, like the 48 disks found on a Sun Fire X4540 server, there are too many permutations to keep straight. If you review a number of my blogs on this subject, you will see that we can model the various aspects of these design trade-offs and compare them.

A few years ago, I wrote a tool called RAIDoptimizer, which will do the math for you for all of the possible permutations. I used the output of this tool to build many of the graphs you see in my blogs.

Today, I'm making available a spreadsheet with a sample run of the permutations of a 48-disk system using reasonable modeling defaults. In this run, there are 339 possible permutations for ZFS. The models described in my previous blogs are used to calculate the values. The default values are not representative of a specific disk; they are merely ballpark figures. The exact numbers are not as important as the relationships exposed when you look at different configurations. Obviously, the tool allows us to change the disk parameters, which are usually available from disk data sheets. But this will get you into the ballpark, and is a suitable starting point for making some trade-off decisions.
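To show where the permutation count comes from, here is a minimal sketch, not RAIDoptimizer itself, which enumerates candidate ZFS layouts for a 48-bay system by splitting the disks into equal-size top-level vdevs and varying the redundancy scheme; the spare counts and schemes included are my assumptions for illustration only.

    def enumerate_layouts(total_disks=48):
        """Enumerate simple ZFS pool layouts: equal-size top-level vdevs,
        a few spare counts, and a few redundancy schemes."""
        layouts = []
        for spares in (0, 1, 2):
            disks = total_disks - spares
            for set_size in range(2, disks + 1):
                if disks % set_size:
                    continue                      # only consider equal-size sets
                sets = disks // set_size
                for scheme, parity in (("mirror", set_size - 1),
                                       ("raidz1", 1), ("raidz2", 2)):
                    if set_size <= parity:
                        continue
                    layouts.append((scheme, sets, set_size, spares,
                                    sets * (set_size - parity)))
        return layouts

    for scheme, sets, size, spares, data_disks in enumerate_layouts():
        print(f"{sets} x {size}-disk {scheme}, {spares} spares, "
              f"{data_disks} disks of usable space")

RAIDoptimizer then does the math for each layout (performance, RAS, and space) so the configurations can be compared side by side.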

For your convenience, I turned on the data filters for the columns so that you can easily filter the results. Many people also sort on the various columns.  StarOffice or OpenOffice will let you manipulate the data until the cows come home.  Enjoy.

Wednesday Aug 20, 2008

Dependability Benchmarking for Computer Systems

Over the past few years, a number of people have been working to develop benchmarks for dependability of computer systems. After all, why should the performance guys have all of the fun? We've collected a number of papers on the subject in a new book, Dependability Benchmarking for Computer Systems, available from the IEEE Computer Society Press and Wiley.

The table of contents includes:

  1. The Autonomic Computing Benchmark
  2. Analytical Reliability, Availability, and Serviceability Benchmarks
  3. System Recovery Benchmarks
  4. Dependability Benchmarking Using Environmental Test Tools
  5. Dependability Benchmark for OLTP Systems
  6. Dependability Benchmarking of Web Servers
  7. Dependability Benchmark of Automotive Engine Control Systems
  8. Toward Evaluating the Dependability of Anomaly Detectors
  9. Vajra: Evaluating Byzantine-Fault-Tolerant Distributed Systems
  10. User-Relevant Software Reliability Benchmarking
  11. Interface Robustness Testing: Experience and Lessons Learned from the Ballista Project
  12. Windows and Linux Robustness Benchmarks with Respect to Application Erroneous Behavior
  13. DeBERT: Dependability Benchmarking of Embedded Real-Time Off-the-Shelf Components for Space Applications
  14. Benchmarking the Impact of Faulty Drivers: Application to the Linux Kernel
  15. Benchmarking the Operating System against Faults Impacting Operating System Functions
  16. Neutron Soft Error Rate Characterization of Microprocessors

Wow, you can see that there has been a lot of work, by a lot of people, to measure system dependability and improve system designs.

The work described in Chapter 2, Analytical Reliability, Availability, and Serviceability Benchmarks, can be seen in action as we begin to publish these benchmark results in various product white papers.

Performance benchmarks have proven useful in driving innovation in the computer industry, and I think dependability benchmarks can do likewise. If you feel that these benchmarks are valuable, then please drop me a note, or better yet, ask your computer vendors for some benchmark results.

I'd like to thank all of the contributors to the book, the IEEE, and Wiley. Karama Kanoun and Lisa Spainhower worked tirelessly to get all of the works compiled (herding the cats) and interfaced with the publisher, great job! Ira Pramanick, Jim Mauro, William Bryson, and Dong Tang collaborated with me on Chapters 2 & 3, thanks team!

Thursday Aug 07, 2008

Smartphones will rule the earth!

A few years ago, I put up the first, freely available WiFi hotspot in Ramona at the Ramona Cafe. I hope you think this was an altruistic move, but in reality it often saved me a commute into La Jolla, so it was well worth the investment. "How will we pay for it?" some asked, to which I often replied, "buy some pie!"

Naturally, I have the DHCP logs sent to me so that I can keep track of how it is being used.  At first, there were only a few, regular customers.  Now there are many regular customers.  For the most part, the type of machine being used is relatively obvious.  People have a tendency to name their laptops based on the vendor logo.  So I would see a number of devices named something like "Richard's Dell" or "Richard's Mac."  Others have boring names like "ASSIGNED LAPTOP 573" or some such... boring!

Today, however, I'm seeing a significant change in the machines connecting to the net at the cafe. I'm seeing a large number of "iPhone" and even a few "HTC-8900" devices stopping by.  This is very cool and reinforces the notion that once we can shrink computers to fit in your pocket, then everyone will want one.  Smart phones will rule the earth!

As an MBA who didn't do particularly well in marketing class, even I can say it is very obvious who does better branding between Apple (iPhone) and AT&T (Tilt, aka HTC-8900).  Let's face it, something as intimate as a phone is going to get a human-like name.  "iPhone" works as an excellent brand.  "HTC-8900" sucks as a brand and there is absolutely no connection between "Tilt" and "HTC-8900."  I had to do a google search just to know that the "HTC-8900" is also known as a "Tilt."  Why would someone even name a phone "Tilt," which has the connotations of draining out of a pinball game?  Go figure. My marketing advice to AT&T: get with the program and work your brands!

Tuesday Aug 05, 2008

More Enterprise-class SSDs Coming Soon

Sun has been talking more and more about enterprise-class solid-state disks (SSDs) lately. Even Jonathan blogged about it. Now we are starting to see some interesting articles hitting the press as various companies prepare to release interesting products for this market.

Today, CNET posted an interesting article by Brooke Crothers that offers some insight into how the consumer and enterprise class devices are diverging in their designs.  My favorite quote is, "One of the things that SSD manufacturers have been slow to learn (is that) you can't just take a compact flash controller, throw some NAND on there and call it an SSD," said Dean Klein, vice president of memory system development at Micron. Yes, absolutely correct.  Though Sun makes several products which offer compact flash (CF) for storage, the future of enterprise class SSDs is not re-badged CFs.  There are many more clever tricks that can be used to provide highly reliable, fast, and reasonably priced SSDs.

Monday Jul 14, 2008

ZFS Workshop at LISA'08

We have organized a ZFS Workshop for the USENIX Large Installation Systems Administration (LISA'08) conference in San Diego this November. I hope you can attend.

The call for papers describes workshops as:
One-day workshops are hands-on, participatory, interactive sessions where small groups of system administrators have an opportunity to discuss a topic of common interest. Workshops are not intended as tutorials, and participants normally have significant experience in the appropriate area, enabling discussions at a peer level. However, attendees with less experience often find workshops useful and are encouraged to discuss attendance with the workshop organizer.

There is an opportunity to seed the discussions, so be sure to let me know if there is an interesting topic to be explored. 

The LISA conference is always one of the more interesting conferences for people who must deal with large sites as their day job. Many of the more difficult scalability problems are discussed in the sessions and hallways. If you are directly involved with the design or management of a large computer site, then it is an excellent conference to attend.

My first LISA was LISA-VI in 1992 where I presented a paper that Matt Long and I wrote, User-setup: A System for Custom Configuration of User Environments, or Helping Users Help Themselves, now hanging out on SourceForge. The original source was published on usenet -- which is how we did such things at the time. I suppose I could search around and find it archived somewhere...

Much has changed from the environments we had in 1992, but the problem of managing complex application environments lives on. I think that the more modern approach to this problem, as clearly demonstrated by connected devices like the iPhone, is to leverage the internet and browser-like interfaces to hide much of the complexity behind the scenes.  In a sense, this is the approach ZFS takes to managing disks -- hide some of the mundane trivia and provide a view of storage that is more intuitive to the users of storage. The more things change, the more the problems stay the same.

Please attend LISA'08 and join the ZFS workshop.

Nice Article on Understanding Disk Reliability

David Stone of eWeek published a nice article this week on understanding disk reliability. It covers in a short, concise manner many of the aspects of disks that I've blogged about here.  Enjoy.

Examining Disk Reliability Specs

Wednesday Jul 09, 2008

This Ain't Your Daddy's JBOD

This morning, we announced the newest Just a Bunch of Disks (JBOD) storage arrays. These are actually very interesting products from many perspectives. One thing for sure, these are not like any previous JBOD arrays we've ever made. The simple, elegant design and high reliability, availability, and serviceability (RAS) features are truly innovative. Let's take a closer look...

Your Daddy's JBOD

In the bad old days, JBOD arrays were designed around bus or loop architectures. Sun has been selling JBOD enclosures using parallel SCSI busses for more than 20 years. There were even a few years when fibre channel JBOD enclosures were sold. In many first generation systems, they were not what I would call high-RAS designs.  Often there was only one power supply or fan set, but that really wasn't what caused many outages. Placing a number of devices on a bus or loop exposes them to bus or loop failures. If you hang a parallel SCSI bus, you stop access to all devices on the bus.  The vast majority of parallel SCSI bus implementations used single-port disks. The A5000 fibre channel JBOD attempted to fix some of these deficiencies: redundant power supplies, dual-port disks, two fibre channel loops, and two fibre channel hubs. When introduced, it was expected that fibre channel disks would rule the enterprise. The reality was not quite so rosy. Loops still represent a shared, single point of failure in the system. A misbehaving host or disk could still lock up both loops, thus rendering this "redundant" system useless. Fortunately, the power of the market drove the system designs to the point where the fibre channel loops containing disks are usually hidden behind a special-purpose controller. By doing this, the design space can be limited and thus the failure modes can be more easily managed.  In other words, instead of allowing a large variety of hosts to be directly connected to a large variety of disks in a system where a host or disk could hang the shared bus or loop, array controllers divide and conquer this problem by reducing the possible permutations.

This is basically where the market was, yesterday.  Array controllers are very common, but they represent a significant cost. The costs increase further for high-RAS designs because you need redundant controllers with multiple host ports.

Suppose we could revisit the venerable JBOD, but using modern technology?  What would it look like?

SCSI, FC, ATA, SATA, and SAS

When we use the term SCSI, most people think of the old, parallel Small Computer System Interconnect bus. This was a parallel bus implemented in the bad old days with the technology available then. Wiggling wire speed was relatively slow,  so bandwidth increases were achieved using more wires. It is generally faster to push photons through an optical fiber than to wiggle a wire, thus fibre channel (FC) was born. In order to leverage some of the previous software work, FC decided to expand the SCSI protocol to include a serial interface (and they tried to add a bunch of other stuff too, but that is another blog). When people say "fibre channel" they actually mean "serial SCSI protocol over optical fiber transport." Another development around this time was that the venerable Advanced Technology Attachment (ATA) disk interface used in many PCs was also feeling the strain of performance improvement and cost reductions.

Cost reductions?  Well, if you have more wires, then the costs will go up.  Connectors and cables get larger. From a RAS perspective, the number of failure opportunities increases. A (parallel) UltraSCSI implementation needs 68-pin connectors. Bigger connectors with big cables must be stronger and are often designed with lots of structural metal. Using fewer wires means that the connector and cables get smaller, reducing the opportunities for failure, reducing the strength requirements, and thus reducing costs.

Back to the story: the clever engineers said, well, if we can use a protocol like SCSI or ATA over a fast, serial link, then we can improve performance, improve RAS, and reduce costs -- a good thing. A better thing was the realization that the same, low-cost physical interface (PHY) can be used for both serial attached SCSI (SAS) and serial ATA (SATA). Today, you will find many host bus adapters (HBAs) which will support both SAS and SATA disks. After all, the physical connections are the same, it is just a difference in the protocol running over the wires.

One of the more interesting differences between SAS and SATA is that the SAS guys spend more effort on making the disks dual-ported.  If you look around, you will find single and dual-port SAS disks for sale, but rarely will you see a dual-port SATA disk (let me know if you find one).  More on that later... 

Switches, Yay!

Now that we've got a serial protocol, we can begin to think of implementing switches. In the bad old days, Ethernet was often implemented using coaxial cable, basically a 2-wire shared bus. All Ethernet nodes shared the same coax, and if there was a failure in the coax, everybody was affected. The next Ethernet evolution replaced the coax with point-to-point wires and hubs to act as a collection point for the point-to-point connections. From a RAS perspective, hubs acted similar to the old coax in that a misbehaving node on a hub could interfere with or take down all of the nodes connected to the hub. With the improvements in IC technology over time, hubs were replaced with more intelligent switches. Today, almost all Gigabit Ethernet implementations use switches -- I doubt you could find a Gigabit Ethernet hub for sale. Switches provide fault isolation and allow traffic to flow only between interested parties. SCSI and SATA followed a similar evolution. Once it became serial, like Ethernet, then it was feasible to implement switching.  RAS guys really like switches because in addition to the point-to-point isolation features, smart switches can manage access and diagnose connection faults.

J4200 and J4400 JBOD Arrays

Fast forward to 2008. We now have fast, switchable, redundant host-disk interconnect technology.  Let's build a modern JBOD. The usual stuff is already taken care of: redundant power supplies, redundant fans, hot-pluggable disks, rack-mount enclosure... done. The connection magic is implemented by a pair of redundant SAS switches. These switches contain an ARM processor and have intelligent management. They also permit the SATA Tunneling Protocol (STP) to move SATA protocol over SAS connections. These are often called SAS Expanders, and the LSISASx36 provides 36 ports for us to play with. SAS connections can be aggregated to increase effective bandwidth. For the J4200 and J4400, we pass 4 ports each to two hosts and 4 ports "downstream" to another J4200 or J4400. For all intents and purposes, this works like most other network switches. The result is that each host has a logically direct connection to each disk. Each disk is dual ported, so each disk connects to both SAS expanders. We can remotely manage the switches and disks, so replacing failed components is easy, even while maintaining service.  RAS goodness.

Dual-port SATA?

As I mentioned above, SATA disks tend to be single ported.  How do we connect to two different expanders? The answer is in the bill of materials (BOM): for SATA disks, you will notice a SATA interposer card. This fits in the carrier for the disk and provides a multiplexor which will connect one SATA disk to two SAS ports. This is, in effect, what is built into a dual-port SAS disk. From a RAS perspective, this has little impact on the overall system RAS because the field replaceable unit (FRU) is the disk+carrier.  We don't really care if that FRU is a single-port SATA disk with an interposer or a dual-port SAS disk.  If it breaks, the corrective action is to replace the disk+carrier. Since each disk slot has point-to-point connections to the two expanders, replacing a disk+carrier is electrically isolated from the other disks.

What About Performance?

Another reason that array controllers are more popular than JBODs is that they often contain some non-volatile memory used for a write cache. This can significantly improve write-latency-sensitive applications. When Sun attempted to replace the venerable SPARCStorage Array 100 (SSA-100) with the A5000 JBOD, one of the biggest complaints was that performance was reduced. This is because the SSA-100 had a non-volatile write cache while the A5000 did not. The per-write latency difference was an order of magnitude. OK, so this time around does anyone really expect that we can replace an array controller with a JBOD?  The answer is yes, but I'll let you read about that plan from the big-wigs...


Postscript

I had intended to show some block diagrams here, but couldn't find any I liked that weren't tagged for internal use only.  If I find something later, I'll blog about it.


Thursday Jun 26, 2008

Awesome disk AFR! Or, is it...

I was hanging out in the ZFS chat room when someone said they were using a new Seagate Barracuda 7200.11 SATA 3Gb/s 1-TB Hard Drive. A quick glance at the technical specs revealed a reliability claim of 0.34% Annualized Failure Rate (AFR).  Holy smokes!  This is well beyond what we typically expect from disks.  Doubling the reliability would really make my day. My feet started doing a happy dance.

So I downloaded the product manual to get all of the gritty details. It looks a lot like most of the other large, 3.5" SATA drive specs out there, so far so good. I get to the Reliability section (section 2.11, page 18) to look for more nuggets.

Immediately, the following raised red flags with me and my happy feet stubbed a toe.

The product shall achieve an Annualized Failure Rate (AFR) of 0.34% (MTBF of 0.7 million hours) when operated in an environment of ambient air temperatures of 25°C. Operation at temperatures outside the specifications in Section 2.8 may increase the product AFR (decrease MTBF). AFR and MTBF are population statistics that are not relevant to individual units.


AFR and MTBF specifications are based on the following assumptions for desktop personal computer environments:
• 2400 power-on-hours per year.
...


Argv! OK, here's what happened. When we design enterprise systems, we use AFR with a 24x7x365 hour year (8760 operation hours/year). A 0.34% AFR using an 8760-hour year is equivalent to an MTBF of 2.5 million hours (really good for a disk). But the disk is spec'ed at 0.7 million hours, which, in my mind, is an AFR of 1.25%, or about half as reliable as enterprise disks. The way they get to the notion that an AFR of 0.34% equates to an MTBF of 0.7 million hours is by changing the definition of operation to 2,400 hours per year (300 8-hour days). The math looks like this:

    24x7x365 operation = 8760 hours/year (also called power-on-hours, POH)

    AFR = 100% * (POH / MTBF)

For an MTBF of 700,000 hours,

    AFR = 100% * (8760 / 700,000) = 1.25%

or, as Seagate specifies for this disk:

    AFR = 100% * (2400 / 700,000) = 0.34%

The RAS community has better luck explaining failure rates using AFR rather than MTBF. With AFR you can expect the failures to be a percentage of the population per year. The math is simple and intuitive.  MTBF is not very intuitive and causes all sorts of misconceptions. The lesson here is that AFR can mean different things to different people and can be part of the marketing games people play. For a desktop environment, a large population might see 0.34% AFR with this product (and be happy).  You just need to know the details when you try to compare with the enterprise environments.
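The conversion is simple enough to sketch, using the 8760-hour and 2400-hour power-on assumptions discussed above:

    def afr_percent(mtbf_hours, poh_per_year):
        """Annualized failure rate (%) for a given MTBF and power-on hours per year."""
        return 100.0 * poh_per_year / mtbf_hours

    print(afr_percent(700_000, 8760))   # 24x7x365 duty cycle: ~1.25%
    print(afr_percent(700_000, 2400))   # desktop duty cycle:  ~0.34%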

Unrecoverable Error on Read (UER) rate is 1e-14 errors per bit read, which is a bit of a disappointment, but consistent with consumer disks.  Enterprise disks usually claim 1e-15 errors per bit read, by comparison. This worries me as the disks are getting bigger, because of what it implies.  The product manual says that there are guaranteed to be at least 1,953,525,168 512-byte sectors available.

    Total bits = 1,953,525,168 sectors/disk * 512 bytes/sector * 8 bits/byte = 8e12 bits/disk

If the UER is 1e-14 errors/bits read then you can expect an unrecoverable read once every 12.5 times you read the entire disk. Not a very pleasant thought, even if you are using a file system which can detect such errors, like ZFS.  Fortunately, field failure data tends to see a better UER than the manufacturers claim.  If you are worried about this sort of thing, I'll recommend using ZFS.
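Here is the same back-of-the-envelope arithmetic as a sketch, using the sector count and UER from the data sheet:

    SECTORS = 1_953_525_168
    BITS_PER_DISK = SECTORS * 512 * 8      # roughly 8e12 bits
    UER = 1e-14                            # expected errors per bit read

    errors_per_full_read = BITS_PER_DISK * UER
    print(1 / errors_per_full_read)        # about 12.5 full-disk reads per error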

All-in-all, this looks like a nice disk for desktop use. But you should know that in enterprise environments we expect much better reliability specifications.

Friday Jun 06, 2008

on /var/mail and quotas

About every other month or so, someone comes onto the ZFS forum, complains about quotas, and holds up the shared /var/mail directory as an example of where UFS quotas are superior to ZFS quotas. This is becoming very irritating because it rests on a model of /var/mail which we proved doesn't scale decades ago.  Rather than trying to respond explaining this again and again, I'm blogging about it.  Enjoy.

When we started building large e-mail servers using sendmail in the late 1980s, we ran right into the problem of scaling the mail delivery directory.  Recall that back then relatively few people were on the internet or using e-mail, a 40MHz processor was leading edge, a 200 MByte hard disk was just becoming affordable, RAID was mostly a white paper, and e-mail attachments were very uncommon. It is often limited resources which cause scaling problems, and putting thousands of users into a single /var/mail quickly exposes issues.

Many sites implemented quotas during that era, largely because of the high cost and relative size of hard disks.  The computing models were derived from the timeshare systems (eg UNIX) and that model was being stretched as network computing was evolving (qv rquotad).  A common practice for Sun sites was to mount /var/mail on the NFS clients so that the mail clients didn't have to know anything about the network.

As we scaled, the first, obvious change was to centralize the /var/mail directory. This allowed you to implement site-wide mail delivery where you could send mail to user@some.place.edu instead of user@workstation2.lab301.some.place.edu.   This was a cool idea and worked very well for many years. But it wasn't the best solution.

As we scaled some more, and the "administration" demanded quotas, we found that the very nature of distributed systems didn't match the quota model very well. For example, the "administration's" view was that a user may be given a quota of Q for the site. But the site now had many different file systems, and a quota only really works on a single file system. We had already centralized everyone onto a single mail store, so you needed some quota for the home directory and another subset of Q for the mail store. You also had to try to limit the quota on other home directories, because the clever users would discover where the quotas weren't and use all of the space. Back at the mail store, it became increasingly difficult to manage the space because, as everybody knows, the managers never delete e-mail and they complain loudly when they run out of space. So, quotas in a large, shared directory don't work very well.

The next move was to deliver mail into the user's home directory. This is trivially easy to set up in sendmail (now). In this model, the quota only needs to be set on the user's home directory, and when they run out, you have some work to do. This solution bought another few years of scalability, but still has its limitations. A particularly annoying limitation is that sending mail to someone who is over quota is not handled very well. And if the sys-admins use mail to tell people they are near quota, then it might not be deliverable (recall, managers don't delete e-mail :-)

There is also a potential problem with mail bombs.  In the sendmail model, each message was copied to each user's mailbox. In the old days, you could implement a policy where sendmail would reject mail messages of a large size. You can still do that today, but before attachments you could put the limit at something small, say 100 kBytes.  There is no way you can do that today. So a mischievous user could send a large mail message to everyone, blow out the /var/mail directory or the quotas.

A better model is to have only one copy of an e-mail message and just use pointers for each of the recipients. But while this model can save large amounts of disk space, it is not compatible with quotas because there is no good way to assign the space to a given user.
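A tiny sketch of that model, with hypothetical structures, shows why per-user quota accounting breaks down when the message is stored only once:

    class MessageStore:
        """Hypothetical single-instance message store: one copy of each
        message body; each recipient's mailbox holds only a pointer."""
        def __init__(self):
            self.messages = {}     # msg_id -> body, stored once
            self.mailboxes = {}    # user -> list of msg_ids

        def deliver(self, msg_id, body, recipients):
            self.messages[msg_id] = body
            for user in recipients:
                self.mailboxes.setdefault(user, []).append(msg_id)

    store = MessageStore()
    store.deliver("m1", "x" * 10_000_000, ["alice", "bob", "carol"])
    # Roughly 10 MB is stored once, yet there is no obviously correct way
    # to charge that space against any one recipient's file system quota.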

The next problem to be solved was the clients. Using an NFS mounted /var/mail worked great for UNIX users, but didn't work very well for PCs (which were now becoming network citizens). The POP and IMAP protocols fixed this problem.

Today mail systems can scale to millions of users, but not by using a shared file system or file system quotas. In most cases, there is a database which contains info on the user and their messages. The messages themselves are placed in a database of sorts and there is usually only one copy of the message. Mail quotas can be easily implemented and the mailer can reply to a sender explaining that the recipient is over mail quota, or whatever.  Automation sends a user a near-quota warning message.  But this is not implemented via file system quotas.

So, please, if you want to describe shared space and file system quotas, find some other example than mail. If you can't find an example, then perhaps we can drop the whole quota argument altogether.

If your "administration" demands that you implement quotas, then you have my sympathy.  Just remind them that you probably have more space in your pocket than quota on the system...

Wednesday Apr 09, 2008

RAS in the T5140 and T5240

Today, Sun introduced two new CMT servers, the Sun SPARC Enterprise T5140 and T5240 servers.

I'm really excited about this next stage of server development. Not only have we effectively doubled the performance capacity of the system, we did so without significantly decreasing the reliability. When we try to predict reliability of products which are being designed, we make those predictions based on previous generation systems. At Sun, we make these predictions at the component level. Over the years we have collected detailed failure rate data for a large variety of electronic components as used in the environments often found at our customer sites. We use these component failure rates to determine the failure rate of collections of components. For example, a motherboard may have more than 2,000 components: capacitors, resistors, integrated circuits, etc. The key to improving motherboard reliability is, quite simply, to reduce the number of components. There is some practical limit, though, because we could remove many of the capacitors, but that would compromise signal integrity and performance -- not a good trade-off. The big difference in the open source UltraSPARC T2 and UltraSPARC T2plus processors is the high level of integration onto the chip. They really are systems on a chip, which means that we need very few additional components to complete a server design. Fewer components means better reliability, a win-win situation. On average, the T5140 and T5240 only add about 12% more components over the T5120 and T5220 designs. But considering that you get two or four times as many disks, twice as many DIMM slots, and twice the computing power, this is a very reasonable trade-off.
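As a rough illustration of the component-count argument, here is a minimal sketch of a series reliability model, in which component failure rates simply add; the part counts and FIT rates below are made-up examples, not Sun's actual component data.

    # Series model: the board fails if any component fails, so failure rates add.
    # FIT = failures per 1e9 device-hours. All numbers are illustrative only.
    parts = {
        "capacitor":          (1500, 2.0),    # (count, FITs per part)
        "resistor":           (400,  0.5),
        "integrated circuit": (100,  15.0),
        "connector":          (30,   10.0),
    }

    board_fits = sum(count * fit for count, fit in parts.values())
    mtbf_hours = 1e9 / board_fits
    print(f"board failure rate: {board_fits:.0f} FITs, MTBF ~ {mtbf_hours:,.0f} hours")

Remove parts from the design and the summed failure rate goes down, which is exactly the motivation for high integration.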

Let's take a look at the system block diagram to see where all of the major components live.



You will notice that the two PCI-e switches are peers and not cascaded. This allows good flexibility and fault isolation. Compared to the cascaded switches in the T5120 and T5220 servers, this is a simpler design. Simple is good for RAS.

You will also notice that we use the same LSI1068E SAS/SATA controller with onboard RAID. The T5140 is limited to 4 disk bays, but the T5240 can accommodate 16 disk bays. This gives plenty of disk targets for implementing a number of different RAID schemes. I recommend at least some redundancy, dual parity if possible.

Some people have commented that the Neptune Ethernet chip, which provides dual 10-Gb Ethernet or quad 1-Gb Ethernet interfaces, is a single point of failure. There is also one quad GbE PHY chip. The reason the Neptune is there at all is that when we implemented the coherency links in the UltraSPARC T2plus processor, we had to sacrifice the built-in Neptune interface which is available in the UltraSPARC T2 processor. Moore's Law assures us that this is a somewhat temporary condition and soon we'll be able to cram even more transistors onto a chip. This is a case where high integration is apparent in the packaging. Even though all four GbE ports connect to a single package, the electronics inside the package are still isolated. In other words, we don't consider the PHY to be a single point of failure because the failure modes do not cross the isolation boundaries. Of course, if your Ethernet gets struck by lightning, there may be a lot of damage to the server, so there is always the possibility that a single event will create massive damage. But for the more common cabling problems, the system offers suitable isolation. If you are really paranoid about this, then you can purchase a PCI-e card version of the Neptune and put it in PCI-e slot 1, 2, or 3 to ensure that it uses the other PCI-e switch.

The ILOM service processor is the same as we use in most of our other small servers and has been a very reliable part of our systems. It is connected to the rest of the system through an FPGA which manages all of the service bus connections. This allows the service processor to be the serviceability interface for the entire server.

The server also uses ECC FB-DIMMs with Extended ECC, which is another common theme in Sun servers. We have recently been studying the effects of the Solaris Fault Management Architecture and Extended ECC on systems in the field, and I am happy to report that this combination provides much better system resiliency than is possible through the individual features. In RAS, the whole can be much better than the sum of the parts.

For more information on the RAS features of the new T5140 and T5240 servers, see the white paper, Maximizing IT Service Uptime by Utilizing Dependable Sun SPARC Enterprise T5140 and T5240 Servers. The whitepaper has results of our RAS benchmarks as well as some performability calculations.


