Monday Nov 17, 2008

Project Thebes Update from Georgetown University

The big news from Arnie Miles, Senior Systems Architect at Georgetown University, is that the Thebes Middleware Consortium has moved from concept to code: a new prototype service provider, based on DRMAA, mediates access to an unmodified Sun Grid Engine instance from a small Java-based client application.
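For readers who haven't used DRMAA, job submission through its standard Java binding (org.ggf.drmaa) looks roughly like the sketch below. This is a generic illustration against a default Grid Engine setup, not the Thebes code; the class name and the choice of job are mine. Presumably the Thebes service provider wraps this kind of submission logic behind its network-facing interface so that remote clients never touch Grid Engine directly.

    import org.ggf.drmaa.DrmaaException;
    import org.ggf.drmaa.JobTemplate;
    import org.ggf.drmaa.Session;
    import org.ggf.drmaa.SessionFactory;

    // Minimal DRMAA job submission to a Grid Engine cluster (illustrative only).
    public class SubmitJob {
        public static void main(String[] args) throws DrmaaException {
            Session session = SessionFactory.getFactory().getSession();
            session.init("");                          // attach to the default DRM

            JobTemplate jt = session.createJobTemplate();
            jt.setRemoteCommand("/bin/hostname");      // the command Grid Engine will run
            String jobId = session.runJob(jt);         // hand the job to the scheduler
            System.out.println("submitted job " + jobId);

            session.deleteJobTemplate(jt);
            session.exit();
        }
    }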

In addition, the Thebes Consortium has just released a first draft of an XML schema that aims to provide a common language for describing jobs and resources in a resource-sharing environment, one that sits above the specific approaches taken by existing systems like Ganglia, Sun Grid Engine, PBS, Condor, LSF, and others. The proposal will soon be submitted to OGF for consideration.

The next nut to crack is the definition of a resource discovery network, which is under development now. The team hopes to be able to share their work on this at ISC in Hamburg in June of next year.


Dealing with Data Proliferation at Clemson

Jim Pepin, CTO of Clemson University, talked today at the HPC Consortium Meeting about the challenges and problems created by emerging technology and social trends as seen through the lens of a university environment.

As a preamble, Jim noted that between 1970 and now, increases in compute and storage capabilities have pretty much kept pace with each other. Networking bandwidth, however, has lagged by about two orders of magnitude. This has a variety of ramifications for local/centralized data storage decisions (or constraints).

In many ways, storage is moving closer to end-users. Examples include personal storage like iPods, phones, and local NAS boxes, as well as more research-oriented data collection efforts related to the proliferation of new sensors and instrumentation. There is data everywhere, in vast quantities, widely distributed across a typical university environment.

Particular issues of concern at Clemson include how to back up these distributed and rapidly-growing pools of storage, how to handle security, how to protect data while still being able to open networks, and how to deal with a wide diversity of systems and data-generating instruments.


So, What About Java for HPC?

About ten years ago the HPC community attempted to embrace Java as a viable approach for high performance computing via a forum called Java Grande. That effort ultimately failed for various reasons, one of which was the difficulty of achieving acceptable performance for interesting HPC workloads. Today at the HPC Consortium Meeting here in Austin, Professor Denis Caromel from the University of Nice made the case that Java is ready now for serious HPC use. He described the primary features of ProActive Java, a joint project of INRIA, CNRS, and the University of Nice, and provided some performance comparisons against Fortran/MPI benchmarks.

As background, Denis explained that the goal of ProActive is to enable parallel, distributed, and multi-core solutions with Java using one unified framework. Specifically, the approach should scale from a single, multi-core node to a large, enterprise-wide grid environment.

ProActive embraces three primary areas: Programming, Optimizing, and Scheduling. The programming approach is based on the use of active objects to create a dataflow-like asynchronous communication framework in which objects can be instantiated in either separate JVMs or within the same address space in the case of a multi-core node. Method invocations on active objects return immediately on the sender side with "future objects", which are populated asynchronously when the remote computation completes. Accessing a future object whose contents have not yet arrived causes a "wait by necessity", which implements the dataflow synchronization mechanism.
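ProActive makes these futures transparent to the programmer and works across JVM boundaries. As a rough single-JVM analogy using only standard java.util.concurrent classes (this is not the ProActive API), the block-on-first-touch behavior looks like this:

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Plain-Java analogy for wait-by-necessity: the call returns a Future
    // immediately; touching the value blocks only if it hasn't arrived yet.
    public class WaitByNecessityDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService worker = Executors.newSingleThreadExecutor();

            Future<Double> result = worker.submit(new Callable<Double>() {
                public Double call() throws Exception {
                    Thread.sleep(1000);                // stand-in for a remote computation
                    return 42.0;
                }
            });

            System.out.println("doing other work while the future is pending");

            double value = result.get();               // "wait by necessity": blocks until ready
            System.out.println("result = " + value);
            worker.shutdown();
        }
    }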

ProActive also supports a SPMD programming style with many of the same primitives found in MPI -- e.g., barriers, broadcast, reductions, scatter-gather, etc.
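Again as a plain-Java, shared-memory analogy rather than ProActive's actual SPMD API, a barrier-synchronized reduction across a handful of worker "ranks" might look like:

    import java.util.concurrent.CyclicBarrier;
    import java.util.concurrent.atomic.AtomicLong;

    // Four "ranks" each contribute a local value, then meet at a barrier;
    // the barrier action prints the reduced sum once all ranks have arrived.
    public class SpmdStyleDemo {
        static final int RANKS = 4;
        static final AtomicLong sum = new AtomicLong();
        static final CyclicBarrier barrier = new CyclicBarrier(RANKS, new Runnable() {
            public void run() {
                System.out.println("reduced sum = " + sum.get());
            }
        });

        public static void main(String[] args) {
            for (int rank = 0; rank < RANKS; rank++) {
                final long local = (rank + 1) * 100L;  // this rank's local contribution
                new Thread(new Runnable() {
                    public void run() {
                        sum.addAndGet(local);          // reduction step
                        try {
                            barrier.await();           // barrier: wait for all ranks
                        } catch (Exception e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                }).start();
            }
        }
    }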

Results for several NAS parallel benchmarks were presented, in particular CG, MG, and EP. On CG, the ProActive version performed at essentially the same speed as the Fortran/MPI version over a range of runs from 1 to 32 processes. Fortran did better on MG, which seems to relate to issues around large memory footprints that the ProActive team is looking at in more detail. With EP, Java was faster or significantly faster in virtually all cases.

Work continues to lower messaging latency, to optimize in-node data transfers by sending pointers rather than data, and to reduce message-size overhead.

When asked how ProActive compares to X10, Denis pointed out that while X10 does share some concepts with ProActive, X10 is a new language, whereas ProActive is designed to run on standard JVMs and to enable the use of standard Java for HPC.

A full technical paper about ProActive in PDF format is available here.


A Customer's View of Sun's HPC Consortium Meeting

One of our customers, Gregg TeHennepe from Jackson Laboratory, has been blogging about his attendance at Sun's HPC Consortium meeting here in Austin. For his perspectives and for some excellent photos of the Ranger supercomputer at TACC, check out his blog, Mental Burdocks.

Sunday Nov 16, 2008

If You Doubt the Utility of GPUs for HPC, Read this

Professor Satoshi Matsuoka from the Tokyo Institute of Technology gave a really excellent talk this afternoon about using GPUs for HPC at the HPC Consortium Meeting here in Austin.

As you may know, the Tokyo Institute of Technology is the home of TSUBAME, the largest supercomputer in Asia. It is an InfiniBand cluster of 648 Sun Fire x4600 compute nodes, many with Clearspeed accelerator cards installed.

The desire is to continue to scale TSUBAME into a petascale computing resource over time. However, power is a huge problem at the site. The machine is responsible for roughly 10% of the overall power consumption of the Institute and therefore they cannot expect their power budget to grow over time. The primary question, then, is how to add significant compute capacity to the machine while working within a constant power budget.

It was clear from their analysis that conventional CPUs would not allow them to reach their performance goals while also satisfying the no-growth power constraint. GPUs--graphics processing units like those made by nVidia--looked appealing in that they claim extremely high floating-point capability and deliver it at a much better performance/watt ratio than conventional CPUs. The question, though, is whether GPUs can be used to significantly accelerate important classes of HPC computations or whether they are perhaps too specialized to be considered for inclusion in a general-purpose compute resource like TSUBAME. Professor Matsuoka's talk focused on this question.

The talk approached the question by presenting performance speed-up results for a selection of important HPC applications or computations based on algorithmic work done by Prof. Matsuoka and other researchers at the Institute. These studies were done in part because GPU vendors do a very poor job of describing exactly what GPUs are good for and what problems are perhaps not handled well by GPUs. By assessing the capabilities over a range of problem areas, it was hoped that conclusions could be drawn about the general utility of the GPU approach for HPC.

The first problem examined was a 3D protein docking analysis that performs an all-to-all analysis of 1K proteins against 1K proteins. Based on their estimates, a single protein-protein interaction analysis requires about 200 TeraOps, while the full 1000x1000 problem requires about 200 ExaOps. To exploit GPUs maximally for this problem, a new 3D FFT algorithm was developed that in the end delivered excellent performance and 4x better performance/watt than IBM's BG/L system, which is itself much more efficient than a conventional cluster approach.

In addition, other algorithmic work delivered speedups of 45X over a single conventional CPU for CFD, which is typically limited by available memory bandwidth. Likewise, a computation involving phase separation in liquids delivered a speedup of 160X over a conventional processor.

Having compared single-node CPU performance with a single-node GPU approach and found that GPUs do appear able to deliver interesting performance and performance/watt for an array of useful problem types, so long as new algorithms can be created to exploit the specific capabilities of these GPUs, the next question was whether these results could be extended to multi-GPU and cluster environments.

To test this, the team worked with the RIKEN Himeno CFD benchmark, which is considered the worst memory bandwidth-limited code one will ever see. It is actually worse than any real application one would ever encounter. If this could be parallelized and used with GPUs to advantage, then other less difficult codes should also benefit from the GPU approach.

To that end, the code was parallelized to run using multiple GPUs per node, with MPI as the communication mechanism between nodes. Results showed about a 50X performance improvement over a conventional CPU cluster on a small-sized problem.

A multi-GPU parallel sparse solver was also created, showing a 25X-35X improvement over conventional CPUs. Double-precision accuracy was achieved using mixed-precision techniques.
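As a reminder of how that trick works in general, here is a tiny, generic dense example in Java rather than the GPU sparse solver from the talk: factor and solve in single precision, then iteratively correct the answer using residuals computed in double precision.

    // Mixed-precision iterative refinement on a small, well-conditioned system:
    // the expensive solve runs in float, only the cheap residual runs in double.
    public class MixedPrecisionDemo {

        // Gaussian elimination in single precision (no pivoting; fine for this
        // diagonally dominant toy matrix).
        static float[] solveSingle(float[][] a, float[] b) {
            int n = b.length;
            float[][] m = new float[n][];
            float[] y = b.clone();
            for (int i = 0; i < n; i++) m[i] = a[i].clone();
            for (int k = 0; k < n; k++) {
                for (int i = k + 1; i < n; i++) {
                    float f = m[i][k] / m[k][k];
                    for (int j = k; j < n; j++) m[i][j] -= f * m[k][j];
                    y[i] -= f * y[k];
                }
            }
            float[] x = new float[n];
            for (int i = n - 1; i >= 0; i--) {
                float s = y[i];
                for (int j = i + 1; j < n; j++) s -= m[i][j] * x[j];
                x[i] = s / m[i][i];
            }
            return x;
        }

        public static void main(String[] args) {
            double[][] a = {{4, 1, 0}, {1, 3, 1}, {0, 1, 2}};
            double[] b = {1, 2, 3};
            int n = b.length;

            float[][] af = new float[n][n];
            float[] bf = new float[n];
            for (int i = 0; i < n; i++) {
                bf[i] = (float) b[i];
                for (int j = 0; j < n; j++) af[i][j] = (float) a[i][j];
            }

            // Initial solve entirely in single precision.
            float[] xf = solveSingle(af, bf);
            double[] x = new double[n];
            for (int i = 0; i < n; i++) x[i] = xf[i];

            // Refinement: residual in double, correction solve in single.
            for (int iter = 0; iter < 5; iter++) {
                float[] rf = new float[n];
                for (int i = 0; i < n; i++) {
                    double r = b[i];
                    for (int j = 0; j < n; j++) r -= a[i][j] * x[j];
                    rf[i] = (float) r;
                }
                float[] d = solveSingle(af, rf);
                for (int i = 0; i < n; i++) x[i] += d[i];
            }

            for (int i = 0; i < n; i++) System.out.println("x[" + i + "] = " + x[i]);
        }
    }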

While all of these results seemed promising, could such a GPU approach be deployed at scale in a very large cluster rather than just within a single node or across a modest-sized cluster? The Institute decided to find out by teaming with nVidia and Sun to enhance TSUBAME by adding Tesla GPUs to some (most) nodes.

Installing the Tesla cards into the system went very smoothly and resulted in three classes of nodes: those with both Clearspeed and Tesla installed, those with only Tesla installed, and those Opteron nodes with neither kind of accelerator installed.

Could this funky array of heterogeneous nodes be harnessed to deliver an interesting LINPACK number? It turns out that it could, with much work, and in spite of limited bandwidth in the upper links of the InfiniBand fabric and limited PCI-X/PCIe bandwidth available in the nodes (I believe due to the number and types of slots available in the x4600 and the number of required devices in some of the TSUBAME compute nodes).

As a result of the LINPACK work (which could have used more time--it was deadline-limited), the addition of GPU capability raised TSUBAME's LINPACK number from the 67.7 TFLOPs reported in June to a new high of 77.48 TFLOPs, an impressive increase.

With the Tesla cards installed, TSUBAME can now be viewed as a 900 TFLOPs (single precision) or 170 TFLOPs (double precision) machine, one with 10K conventional cores, or 300K SIMD cores if one counts the processing elements embedded within each installed GPU.

The conclusion is pretty clearly that GPUs can be used to significant advantage on an interesting range of HPC problem types, though it is worth noting that significantly clever new algorithms may need to be developed to map these problems efficiently onto GPU compute resources.


A Pan-European Approach to High Performance Computing

Dr. Thomas Lippert, Director of the Institute for Advanced Simulation and Head of the Jülich Supercomputing Center, spoke at the HPC Consortium Meeting today about PRACE, an important effort to establish a pan-European HPC infrastructure beyond anything available today.

To quote from the PRACE website,

"The Partnership for Advanced Computing in Europe prepares the creation of a persistent pan-European HPC service, consisting several tier-0 centres providing European researchers with access to capability computers and forming the top level of the European HPC ecosystem. PRACE is a project funded in part by the EU’s 7th Framework Programme."

Dr. Lippert explained that the reasoning behind PRACE is to improve the strategic competitiveness of researchers and of industrial development and to strengthen and revitalize HPC across Europe. The vision for how this will be accomplished includes a multi-tiered hierarchy of centers that starts at the top with a small number of tier-0 European sites, coupled with tier-1 sites at the national level, and also with tier-2 sites at the regional level.

This is a huge program, now in the planning stages. It is expected to require funding at the level of 200-400M Euros per year over 4-5 years with a similar amount allocated for operating costs.

[Dr. Lippert also discussed the next generation supercomputer to be built soon at Jülich, but I have omitted that information here to avoid sharing any inappropriate information.]


What's Been Happening with Sun's Biggest Supercomputer?

Karl Schulz, Associate Director for HPC at the Texas Advanced Computing Center, gave an update on Ranger, including current usage statistics as well as some of the interesting technical issues they've confronted since bringing the system online last year.

Karl started with a system overview, which I will skip in favor of pointing to an earlier Ranger blog entry that describes the configuration in detail. Note, however, that Ranger is now running with 2.3 GHz Barcelona processors.

As of November 2008, Ranger has more than 1500 allocated users who represent more than 400 individual research projects. Over 300K jobs have been run so far on the system, consuming a total of 220 million CPU hours.

When TACC brought their 900 TeraByte Lustre filesystem online, they wondered how long it would take to fill it. It took six months. Just six months to generate 900 TeraBytes of data. Not surprising, I guess, when you hear that users generate between 5 and 20 TeraBytes of data per day on Ranger. Now that they've turned on their file purging policy, files reside on the filesystem for about 30 days before they are purged, which is quite good as supercomputing centers go.

Here are some of the problems Karl described.

OS jitter. For those not familiar, this phrase refers to a sometimes-significant performance degradation seen by very large MPI jobs that is caused by a lack of natural synchronization between participating nodes due to unrelated performance perturbations on individual nodes. Essentially some nodes fall slightly behind, which slows down MPI synchronization operations, which can in turn have a large effect on overall application performance. The worse the loss of synchronization, the longer certain MPI operations take to complete, and the larger the overall application performance impact.
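A toy model (mine, not something from the talk) shows why this matters at scale: if every synchronization step runs at the speed of the slowest of N nodes, then even rare, small perturbations on individual nodes inflate the total run time dramatically.

    import java.util.Random;

    // Each of 'steps' synchronized phases costs a base compute time plus,
    // occasionally, a random daemon-induced delay on some node. Because every
    // node waits for the slowest one at each step, the job runs at the speed
    // of the per-step maximum, not the average.
    public class JitterModel {
        public static void main(String[] args) {
            int nodes = 1024, steps = 10000;
            double baseUs = 50.0;      // per-step compute time (microseconds)
            double noiseProb = 0.001;  // chance a given node is perturbed in a step
            double noiseUs = 500.0;    // cost of one perturbation
            Random rng = new Random(42);

            double idealUs = 0, actualUs = 0;
            for (int s = 0; s < steps; s++) {
                double slowest = baseUs;
                for (int n = 0; n < nodes; n++) {
                    double t = baseUs + (rng.nextDouble() < noiseProb ? noiseUs : 0);
                    if (t > slowest) slowest = t;
                }
                idealUs += baseUs;     // perfectly quiet cluster
                actualUs += slowest;   // real cluster waits for the laggard
            }
            System.out.printf("slowdown factor: %.2f%n", actualUs / idealUs);
        }
    }

With 1024 nodes and a one-in-a-thousand chance of a 500-microsecond hiccup per node per step, most steps have at least one laggard and the whole job slows down several-fold, which is the flavor of behavior seen in the case described next.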

A user reported severe performance problems with a somewhat unusual application that performed about 100K MPI_Allreduce operations with a small amount of intervening computation between each Allreduce. When running on 8K cores, a very large performance difference was seen between runs with 15 processes per node and runs with 16 processes per node: the 16-process-per-node runs showed drastically lower performance.

As it turned out, the MPI implementation was not at fault. Instead, the issue was traced primarily to two causes. First, an IPMI daemon that was running on each node. And, second, another daemon that was being used to gather fine-grained health monitoring information to be fed into Sun Grid Engine. Once the IPMI daemon was disabled and some performance optimization work was done on the health daemon, the 15- and 16-process runs showed almost identical run times.

Karl also showed an example of how NUMA effects at scale can cause significant performance issues. In particular, it isn't sufficient to deal with processor affinity without also paying attention to memory affinity. Off-socket memory access can kill application performance in some cases, as in the CFD case shown during the talk.


Attention Supercomputing Weirdos (You Know Who You Are)

When Karl Schulz, Assistant Director at TACC, spoke today at the HPC Consortium Meeting, he asked everyone to do their part--within legal limits--to help Keep Austin Weird. Having been a part of the HPC community for many years, I'm pretty sure we are collectively more than up to the task. The phrase "core competency" comes to mind. :-)



Using SPARC and Solaris for HPC: More of this, please!

Ken Edgecombe, Executive Director of HPCVL, spoke today at the HPC Consortium Meeting in Austin about experiences with SPARC and HPC at his facility.

HPCVL has a massive amount of Sun gear, the newest of which includes a cluster of eight Sun SPARC Enterprise M9000 nodes, our largest SMP systems. Each node has 64 quad-core, dual-threaded SPARC64 processors and includes 2TB of RAM. With a total of 512 threads per node, the cluster has a peak performance of 20.5 TFLOPs. As you'd expect, these systems offer excellent performance for problems with large memory footprints or for those requiring extremely high bandwidths and low latencies between communicating processors.

In addition to their M9000 cluster, HPCVL has another new resource that consists of 78 Sun SPARC Enterprise T5140 (Maramba) nodes, each with two eight-core Niagara2+ processors (a.k.a. UltraSPARC T2plus). With eight threads per core, these systems make almost 10,000 hardware threads available to users at HPCVL.

Ken described some of the challenges of deploying the T5140 nodes in his HPC environment. The biggest issue is that researchers invariably first try running a serial job on these systems and then report that they are very disappointed with the resulting performance. No surprise, since these systems run at less than 1.5 GHz while competing processors run at over twice that rate. As Ken emphasized several times, the key educational issue is to re-orient users to think less about single-threaded performance and more about "getting more work done"--in other words, throughput computing. For jobs that can scale to take advantage of more threads, excellent overall performance can be achieved by consuming more (slower) threads to complete the job in a competitive time. This works if one can either extract more parallelism from a single application or run multiple application instances to make efficient use of the threads within these CMT systems. With 128 threads per node, there is a lot of parallelism available for getting work done.
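In code, the throughput mindset is just "keep every hardware thread fed with independent work." A generic sketch using a standard Java thread pool (the task count and the busy-work inside each task are made up):

    import java.util.concurrent.Callable;
    import java.util.concurrent.CompletionService;
    import java.util.concurrent.ExecutorCompletionService;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Submit many independent tasks to a pool sized to the hardware thread
    // count; aggregate throughput matters here, not per-task latency.
    public class ThroughputDemo {
        public static void main(String[] args) throws Exception {
            int hwThreads = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(hwThreads);
            CompletionService<Double> results =
                    new ExecutorCompletionService<Double>(pool);

            int tasks = 10000;
            for (int i = 0; i < tasks; i++) {
                final int seed = i;
                results.submit(new Callable<Double>() {
                    public Double call() {             // an independent unit of work
                        double x = seed;
                        for (int k = 0; k < 100000; k++) x = Math.sin(x) + 1.0;
                        return x;
                    }
                });
            }

            double sum = 0;
            for (int i = 0; i < tasks; i++) sum += results.take().get();
            pool.shutdown();
            System.out.println("aggregate result: " + sum);
        }
    }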

As he closed, Ken reminded attendees of the 2009 High Performance Computing Symposium which will be held June 14-17 in Kingston, Ontario at HPCVL.

Saturday Nov 15, 2008

Gregg TeHennepe: Musician, Blogger, Senior Manager and Research Liaison -- Customer!

When the band at this evening's HPC Consortium dinner event invited guests to join them on stage to do some singing, I didn't think anything of it...until some time later when I heard a new voice and turned to see someone up there who sounded good and looked like he was having a good time. He was wearing a conference badge, but I couldn't see whether he was a customer or Sun employee.

Since I was taking photos, it was easy to snap a shot and then zoom in on his badge to see who it was. Imagine my surprise as the name came into focus and I saw that it was Gregg TeHennepe, one of our customers from the Jackson Laboratory, where he is a senior manager and research liaison. I was surprised because I hadn't realized it was Gregg, even though I had eaten breakfast with him this morning and talked with him several times during the day about his plan to blog the HPC Consortium meeting, which so far as I know marks the first time a customer has blogged this event.

My surprise continued, however. When I googled Gregg just now to remind myself of the name of his blog, I found he is a member of Blue Northern, a five-piece band from Maine that plays traditional, original, and contemporary acoustic music. So, yeah. I guess he does sound good. :-)

Gregg's blog is Mental Burdocks. I'll save you the trip to the dictionary and tell you that a burdock is one of those plants with hook-bearing flowers that get stuck on animal coats (and other kinds of coats) for seed dispersal.


Spur: Terascale Visualization at TACC

Kelly Gaither, Associate Director at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, talked today about Spur, a new scalable visualization system that is directly connected to Ranger, the 500 TFLOP Sun Constellation System at TACC.

Spur, a joint collaboration between TACC and Sun, bolts the visualization system directly into the InfiniBand fat tree used to build the 65K-core Ranger cluster. This allows compute and visualization to be used effectively together, which is essential for truly interactive data exploration.

Spur is actually a cluster of eight Sun compute nodes, each with four nVidia QuadroPlex GPUs installed. The overall system includes 32 GPUs, 128 cores, close to 1 TB RAM and can support up to 128 simultaneous Shared Visualization clients. Sun Grid Engine is used to schedule jobs onto Spur nodes.

Spur went into production in October and within a week was being used about 120 hours per week for interactive HPC visualization. Users who access Spur via the TeraGrid find the resource invaluable because the alternative--computing on Ranger and then transferring results back to their home site for visualization--just isn't feasible given the sheer volume of data involved. One user estimated it would take him a week to transfer the data to his local facility, whereas with Spur he can compute at TACC, render the visualization at TACC using Spur, and then use Sun Shared Visualization software to display the result at his local site.


A Customer View of the Sun Blade 6000 System

Prof. James Leylek, Executive Director of CU-CCMS at Clemson, spoke today at the HPC Consortium Meeting in Austin about his experience with the installation and acceptance of their new Sun Blade 6000 system. CU-CCMS installed 31 Sun Blade 6000 chassis with 10 blades per chassis, each blade with two Intel quad-core CPUs, for a total of 3440 cores, all connected with DDR InfiniBand. The entire cluster has 14 TB of RAM and delivers about 34.4 TFLOPs peak performance. After deciding on the system, the next major task was to specify an acceptance test for the cluster.

Initially, CU-CCMS decided that an uninterrupted 72-hour LINPACK run would be an appropriate full-system acceptance test. External advisors, however, suggested that such a long run on a system of this size would be infeasible. As it turns out, the full-scale LINPACK ran to 48 hours without any problems. And then to 72 hours. And then to 130 hours.

The entire project was completed in two weeks and two days with two local resources assigned to the acceptance process. End result? A very happy HPC customer. We like that.

A Soft Landing in Austin, Texas


I arrived last night in Austin for the Sun HPC Consortium meeting this weekend and for Supercomputing '08 next week. I joined several colleagues for a casual dinner at a local home hosted by Deirdré and her daughter Ross and attended by several of Ross' friends. It was a fun and relaxing way to ease into this trip, which year to year proves to be pretty exhausting since the Consortium runs all weekend and is followed immediately by Supercomputing, which we can count on to deliver a week of high-energy, non-stop sensory overload.

Thanks to Deirdré for the invitation and to Ross, Mo, Trishna, April, Griffin, and Terry for the entertaining evening and the extremely wide-ranging conversation (wow!). And a special thanks to Ross for hosting and for introducing me to a Sicilian pesto that's to die for! YUM. In return, I hope they enjoyed my "opening the apple" demo. :-)

Our internal training session for Sun's HPC field experts (HPC ACES) will finish in about an hour, at which point we will break for lunch and then return for the start of the HPC Consortium meeting.

Let the games begin!


Thursday Nov 13, 2008

Big News for HPC Developers: More Free Stuff

'Tis the Season. Supercomputing season, that is. Every November the HPC community--users, researchers, and vendors--attends the world's biggest conference on HPC: Supercomputing. This year SC08 is being held in Austin, Texas, to which I'll be flying in a few short hours.

As part of the seasonal rituals, vendors announce new products, showcase new technologies, and generally strut their stuff at the show--and, in some cases, even before it. Sun is no exception, as you will see if you visit our booth and take note of two announcements we made today that should be a Big Deal to HPC developers. The first concerns MPI, the second our Sun Studio developer tools.

The first announcement extends Sun's support of Open MPI to Linux with the release of ClusterTools 8.1. This is huge news for anyone looking for a pre-built and extensively tested version of Open MPI for RHEL 4 or 5, SLES 9 or 10, OpenSolaris, or Solaris 10. Support contracts are available for a fee if you need one, but you can download the CT 8.1 bits here for free and use them to your heart's content, no strings attached.

Here are some of the major features supported in ClusterTools 8.1:

  • Support for Linux (RHEL 4&5, SLES 9&10), Solaris 10, OpenSolaris
  • Support for Sun Studio compilers on Solaris and Linux, plus the GNU/gcc toolchain on Linux
  • MPI profiling support with Sun Studio Analyzer (see SSX 11.2008), plus support for VampirTrace and MPI PERUSE
  • InfiniBand multi-rail support
  • Mellanox ConnectX InfiniBand support
  • DTrace provider support on Solaris
  • Enhanced performance and scalability, including processor affinity support
  • Support for InfiniBand, GbE, 10GbE, and Myrinet interconnects
  • Plug-ins for Sun Grid Engine (SGE) and Portable Batch System (PBS)
  • Full MPI-2 standard compliance, including MPI I/O and one-sided communication

The second event was the release of Sun Studio Express 11/08, which among other enhancements adds complete support for the new OpenMP 3.0 specification, including tasking. If you are questing for ways to extract parallelism from your code to take advantage of multicore processors, you should be looking seriously at OpenMP. And you should do it with the Sun Studio suite, our free compilers and tools which really kick butt on OpenMP performance. You can download everything--the compilers, the debugger, the performance analyzer (including new MPI performance analysis support) and other tools for free from here. Solaris 10, OpenSolaris, and Linux (RHEL 5/SuSE 10/Ubuntu 8.04/CentOS 5.1) are all supported. That includes an extremely high-quality (and free) Fortran compiler among other goodies. (Is it sad that us HPC types still get a little giddy about Fortran? What can I say...)

The capabilities in this Express release are too numerous to list here, so check out this feature list or visit the wiki.


Tuesday Nov 11, 2008

Unified Storage Simulator: Too Fun to be Legal

As I mentioned, we released a simulator as part of the Sun Storage 7000 Unified Storage System (*) launch event yesterday. I geeked out and took it for a quick spin today to get a first-hand view of its capabilities. Short summary: this is a really cool way to play with one of these new storage appliances from the comfort of your existing desktop or laptop with no extra hardware required. It's all virtual, baby. Check this out...

The simulator (download it from here) is basically a VMware virtual machine that has been pre-loaded with the Unified Storage System software stack and configured with 15 virtual 2 GB disks. Here is what I did to try it out.

First I downloaded the simulator and booted the virtual machine using VMware Fusion on my MacBook Pro. The boot is straightforward with one tiny exception. At one point I was asked for a password, which stumped me since there is no mention in the instructions on the download page about supplying a password. As it turns out, this is where you specify the root password for the appliance. Pick something you will remember since you will need it to access the appliance's administrative interface later.

Once the appliance has booted, it displays some helpful information about next steps, the most important of which is to access the appliance via its web interface to configure it for use. With the appliance running in a virtual machine on my Mac, I used the Safari web browser under Mac OS X to contact the appliance at the hostname (or IP address) supplied by the DHCP server when the appliance booted, using the port number shown in the documentation (port 215).

I then logged into the appliance using the root password I had specified earlier. The BUI then walked me through a set of configuration steps that included networking, DNS, NTP, and name services. The process was simple and quick since the defaults were correct for most questions. Once I finished the basic configuration, I reached the following screen:

This is where it starts to get fun. This interface helps you choose which replication profile makes most sense for the storage that will be managed by your appliance. Each option is ranked by availability, performance, and capacity. The pie chart on the left illustrates how storage will be allocated under each scheme. In the case of Double Parity RAID, you can see that data and parity are placed on 14 disks and the last disk is held as a spare. In contrast, when I selected the "Striped" option, I saw this:

You can see that this strategy delivers maximum capacity and also great performance, since I can get all those spindles working at once on my I/O requests, but at the expense of availability, which might be perfectly fine for a scratch filesystem. I opted for a Double Parity RAID scheme for my filesystem.

Once I configured the storage I visited the Shares tab and created an NFS filesystem called "ambertest." Again, this was straightforward. Straightforward enough that I forgot to take a screenshot of that step. Sorry about that.

I then mounted my new NFS filesystem under Mac OS X:

% mount -t nfs ip-address-or-hostname-of-the-virtual-storage-appliance:/export/ambertest /Volumes/amber

As a test, I copied several directory trees from my local Mac file system into the NFS filesystem exported by the virtual appliance and also ran several small test scripts to manipulate NFS files in various ways to generate load so I could play with the Analytics component of the appliance.
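For the curious, a load generator along those lines can be as simple as the sketch below; the mount point matches the example above, while the subdirectory name, file sizes, and counts are arbitrary choices of mine.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Repeatedly write, read back, and delete small files on the NFS mount
    // so the appliance Analytics have something to display.
    public class NfsChurn {
        public static void main(String[] args) throws IOException {
            File dir = new File("/Volumes/amber/churn");    // hypothetical subdirectory
            dir.mkdirs();
            byte[] block = new byte[64 * 1024];

            for (int i = 0; i < 1000; i++) {
                File f = new File(dir, "file-" + i + ".dat");

                FileOutputStream out = new FileOutputStream(f);   // NFS writes
                for (int j = 0; j < 16; j++) out.write(block);
                out.close();

                FileInputStream in = new FileInputStream(f);      // NFS reads
                while (in.read(block) != -1) { /* discard */ }
                in.close();

                f.delete();                                       // NFS remove
            }
        }
    }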

This is the part that should be illegal. Because the appliance stack is built on top of Solaris, DTrace is available for doing deep dives on all sorts of usage and performance information that would be interesting to an administrator of such a storage appliance. Here is one silly example that will give you a little flavor of what you can do with this capability. It is a much wider and deeper facility than this simple example shows.

Consider the following page:

I used the Analytics interface to graphically select a metric of interest--in this case, number of NFS v3 operations per second broken out by filename. The main graphical display shows how that metric varies over time and allows me to move backwards or forward in time, look at historical data, zoom in and out, pause data collection, etc. The lower pane in this case shows me the directories that are currently being touched within the NFS filesystem by ongoing operations. Individual files are listed in the small pane to the left of the timeline.

When I selected the pltestsuite line in the bottom pane, the timeline updated to show me exactly when in time the operations related to the files in that directory actually occurred. Since my test was a simple 'cp -r' of a directory tree into the NFS-mounted directory on the Mac, the display shows me when the files within the pltestsuite directory were copied and the NFS load generated by that part of the overall copy operation. I can easily see which file activity is contributing to load on the appliance--very useful for an administrator, for example.

In addition to examining NFS operations by filename, I can break down NFS statistics by type of operation, by client, by share, by project, by latency, by size, or by offset. I can do the same for CIFS or HTTP/WebDAV requests. NFS v4? No problem. Network traffic, disk operations, cache accesses, CPU utilization? It's all there in one easy-to-use, integrated web-based interface. To me, the Analytics are one of the coolest parts of the product since observability is often the first step to good performance and effective capacity planning.

If you are curious about the capabilities of the Sun Storage 7000 Unified Storage Server line, I do recommend trying the simulator. In addition to offering an effective way to explore the product without buying one (we expect you'll want to buy one after you finish :-) ), it is interesting to see how desktop virtualization neatly enabled us to create this simulator experience.


(*) Named, no doubt, by Sun's department of redundancy department. I liked the code name better.

