Wednesday Mar 26, 2008

SPEC releases SPECsfs2008 file server benchmark

How fast is your file server? SPEC has released a new network file server benchmark, SPECsfs2008. A follow-on to the SFS97_R1 benchmark for NFS file servers, the new benchmark adds support for the CIFS protocol. NFS transaction rates cannot be compared with CIFS transaction rates, of course, because the workloads are completely different, though both draw from real world measurements at thousands of customer sites. More client types are supported, and the benchmark is easier to use than ever.

SPEC award recipients, Power Performance

More SPECtacular awards given at SPEC's 2008 annual meeting in San Francisco, to members of the power committee who produced the SPECpower_ssj2008 benchmark. This wasn't an easy benchmark to do, taking us into areas of engineering not so familiar to performance analysts. Along the way we picked up some new contributors, and some of us picked up some new knowledge and skills. Energy efficiency is increasingly important, and eventually I expect to see power measurements as part of every performance benchmark. But for now, SPECpower_ssj2008 is a great start that establishes a fair and practical methodology for consistent measurement.

As before I won't cite names without permission, but will add them later if given the okay. SPEC thanks:
  • Paul Muehr from AMD
  • Greg Darnell, and another engineer from Dell
  • Karin Wulf, and another engineer from Fujitsu-Siemens
  • Klaus Lange, and another engineer from HP
  • Jeremy Arnold, Alan Adamson, and another engineer from IBM
  • Anil Kumar, and two other engineers from Intel
  • an engineer from Sun
  • Michael Armbrust from UC Berkeley RAD Lab

Friday Feb 22, 2008

SPEC award recipient, Mail Server

Another SPECtacular award given at SPEC's 2008 annual meeting in San Francisco, this to Stephen Pratt of Communigate Systems, a member of SPEC's mail server committee. Our current benchmark, SPECmail2001, tests performance using the SMTP and POP3 protocols. The next major version will test the IMAP protocol, a much needed addition for measuring enterprise email. Email server performance is important. If you think you get a lot of email in your inbox, then look in your spam folder. About 95% of all email is now spam, so only one message in twenty is legitimate, meaning that mail servers must have 20 times the capacity they would need to carry legitimate email alone.

Stephen has done a great job on the benchmark code, managing to carry on with the work across a job change. In addition, he even found time to help the virtualization committee integrate the IMAP benchmark into their benchmark.

Thursday Feb 21, 2008

SPEC award recipients, Web Server

More SPECtacular awards given at SPEC's 2008 annual meeting in San Francisco, these to members of the Web Committee. As before, I won't post anyone's name without permission, but you know who you are and SPEC is grateful for your contributions.

It's a bit more complicated to tell you what these engineers did to earn this recognition. In 2007 SPEC released maintenance version 1.20 of SPECweb2005, which included a lot of improvements. Most notable to me are new code that extends the level playing field for comparison, and tightened standards compliance rules that ensure real world applicability.

In order to focus on the performance of the web server, and to simplify the configurations tested and so reduce the cost of benchmarking, SPECweb2005 uses a simulated back-end database, BESIM, instead of the database server typically found behind real world web applications. By design, the great majority of the computing load falls on the web server, so the synthetic database component has not been an obstacle to fair comparison across platforms. However, we discovered that with an ISAPI implementation, two Ethernet packets were sent per HTTP response instead of the single packet seen with other implementations. No such results had been published, so there were no unfair comparisons - yet. But the problem needed to be fixed so that ISAPI solutions could be measured on a level playing field. The solution was to adapt BESIM to use the ISAPI v2.0 interface based on the Windows IIS implementation (with permission from Microsoft), so now the different solutions behave alike, and they behave like real world web servers.
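
For readers curious why that shows up on the wire: whether a small HTTP response leaves the host as one packet or two usually comes down to whether the headers and body are handed to the TCP stack in one write or in two. Below is a minimal, hypothetical Python sketch of the two write patterns; it is not SPEC or BESIM code, just an illustration of the general mechanism under that assumption.

    # Hypothetical illustration: two ways of writing a small HTTP response
    # to an already-connected TCP socket. On an otherwise idle connection,
    # two separate sends typically leave the host as two TCP segments,
    # while a single combined send (smaller than the MSS) typically goes
    # out as one.
    import socket

    BODY = b"Hello, benchmark!"
    HEADERS = (b"HTTP/1.1 200 OK\r\n"
               b"Content-Type: text/plain\r\n"
               b"Content-Length: " + str(len(BODY)).encode() + b"\r\n\r\n")

    def respond_in_two_writes(conn):
        # Headers and body sent separately: often two packets on the wire.
        conn.sendall(HEADERS)
        conn.sendall(BODY)

    def respond_in_one_write(conn):
        # Headers and body combined before sending: typically one packet.
        conn.sendall(HEADERS + BODY)

    if __name__ == "__main__":
        # A socketpair stands in for a real client connection so the sketch
        # runs anywhere; a packet capture (snoop, tcpdump) on a real
        # connection is what actually shows the one- vs two-packet difference.
        server, client = socket.socketpair()
        respond_in_one_write(server)
        print(client.recv(4096).decode())
        server.close()
        client.close()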

Another potential risk to fair comparison was in JavaServer Pages (JSP), which is typically used in benchmark results in preference to slower scripting languages like PHP. We added a requirement to the run rules that software products prove their standards compliance by passing the relevant test suite. In the case of JSP that means web servers which claim to implement JSP must really implement JSP, not just the subset needed to run the benchmark while omitting slower features needed for real world web applications. Again, we don't know of any published results that gained a performance advantage by cutting corners on standards conformance. Now that the rules have been tightened, we know there won't be any.

Wednesday Feb 20, 2008

SPEC award recipients, Network File Server

More SPECtacular awards given at SPEC's 2008 annual meeting in San Francisco, these to members of the System File Server Group. As before, I won't post anyone's name without permission, but you know who you are and SPEC is grateful for your contributions.

SPEC SFS97_R1 is SPEC's benchmark suite for evaluating performance of network file servers using the NFS protocol. The committee has been working to update the workload based on measurements at thousands of customer sites, add support for Windows and Mac OS X clients, and add support for the CIFS protocol. For this important work SPEC recognizes NSPlab, two engineers from another member company which prefers not to be thanked publicly, and Don Capps of NetApp, who is also known as the creator of the IOzone filesystem benchmark.

Tuesday Feb 19, 2008

SPEC awards, virtualization

Virtualization is such a hot technology that Dilbert is poking fun at it: 2/12, 2/13 and 2/14. No wonder, since IT centers must use both hardware and energy more efficiently. At SPEC's 2008 annual meeting in San Francisco SPECtacular awards were given to members of the Virtualization committee. As always, I won't post anyone's name without permission, but you know who you are and SPEC is grateful for your contributions.

SPEC is working on a benchmark to model server consolidation of commonly virtualized systems such as mail servers, database servers, application servers, web servers, and file servers. Requiring a very different technical approach than SPEC's traditional benchmarks, virtualization has brought unique challenges. SPEC recognizes these engineers for outstanding contributions in meeting those challenges:
  • Andrew Bond, HP
  • Cathy Reddy, Unisys
  • Chris Floyd, IBM
  • Fred Abounador, AMD
  • Greg Kopczynski and another engineer, VMware
  • Nitin Ramannavar, Sun
  • Stephen Pratt, Communigate
  • and an engineer from a company so modest that they don't even want to accept public thanks.

Friday Feb 15, 2008

SPEC award recipients, High Performance Computing

More SPECtacular awards given at SPEC's 2008 annual meeting in San Francisco, these to members of the High Performance Group. As before, I won't post anyone's name without permission, but you know who you are and SPEC is grateful for your contributions.

SPEC MPI2007 is SPEC's benchmark suite for evaluating MPI-parallel, floating point, compute intensive performance across a wide range of cluster and SMP hardware using the Message-Passing Interface (MPI). SPEC recognizes the engineers who developed it: they all played a big role in the creation of this benchmark, which extends SPEC's real world relevant measures of high performance computing to this important software architecture and provides better metrics for evaluating cluster and shared memory systems.

Thursday Feb 14, 2008

SPEC award recipients, Java

More SPECtacular awards given at SPEC's 2008 annual meeting in San Francisco, these to members of the Java committee. As before, I won't post anyone's name without permission, but you know who you are and SPEC is grateful for your contributions.

SPEC's Java committee maintains, reviews, and publishes benchmark results for SPECjvm98, the first industry standard benchmark of Java client performance; SPECjAppServer2004, the benchmark of Java application server performance; SPECjbb2005, the benchmark of Java server performance; and the newest SPECjms2007, the benchmark of Java Messaging Service performance.

SPECjvm98 is the only pre-Y2K benchmark in that list. Think how much CPU performance has improved and how much the Java platform has grown in scope from 1998 to today, and it's clear that SPECjvm98 is due for an update, which should be announced this quarter. Messaging is an increasingly crucial part of the Internet infrastructure, and of enterprise applications, but until the release of SPECjms2007 there was no standardized way to quantify the performance of various solutions.

Sam Kounev of the Technical University of Darmstadt, Germany, was cited for his leadership as chair of SPEC's JMS team in driving and bringing SPECjms2007 to market. Key developers on the project included Kai Sachs from TU Darmstadt, Lawrence Cullen from IBM, and his team of messaging developers at IBM Hursley laboratory, UK, including Tim Dunn and Martin Ross. An engineer from Sun, Silicon Valley, was cited for specification and design of the benchmark and the run and reporting rules.

Stefan Sarne of BEA Systems, Stockholm, and Evgeniya Maenkova from Intel, St. Petersburg, Russia, were cited for coding, testing, and finalizing the new Java client benchmark.

Leading such a large committee with so many projects and so many company interests has not been an easy job, and we were fortunate in 2007 to have a very capable engineer from Intel as our Java chairman. He always did what he saw as right, putting SPEC's interests foremost. He is moving on to other assignments this year, and will be terribly missed at SPEC.

Wednesday Feb 13, 2008

SPEC award recipients, graphics

Now and then you get to do something that really makes your job worthwhile, even an unpaid job like SPEC President.  January 30th at SPEC's annual meeting in San Francisco, I gave awards for SPECtacular contributions in 2007 to engineers and researchers from 26 companies and universities.  Now I'm writing thank-you notes to their managers, because frankly when you do a lot of your work in an industry consortium with your best efforts visible more to your competitors than to your managers, a little bit of extra recognition is surely in order.

Damien Farnham says "That which is measured, improves." And when the right things are measured in a fair and representative manner, then the improvements benefit the entire industry. Thanks to the work of these people and others in SPEC, all our customers are able to evaluate system performance with confidence, and performance delivered in new products improves in a way that translates to real world improvements.

To make the thanks public I'll write a bit about each of them here in my blog. I won't list everyone since some people don't want their names posted; but you know who you are.

I'll start today by thanking outstanding contributors to SPEC's graphics benchmarks.  Years after pundits declared that PCs had more performance than they would ever need, graphics continues to drive high end workstation sales, as graphics applications deliver real customer benefit for the dollar. SPEC's free downloadable benchmarks measure OpenGL performance across multiple platforms, and measure real world graphics application performance.

Last year we released SPECviewperf 10, the industry's standard measure of OpenGL performance across multiple platforms, with view sets from real world graphics applications, thanks to the leadership and dedication of two engineers from NVIDIA. We released SPECapc for 3ds Max 9, the cross platform standard performance measure for 3D modeling using the Autodesk software, thanks to the work of an NVIDIA engineer. And we released SPECapc for SolidWorks 2007, the premier measure of CAD/CAM performance, thanks to the leadership and hard work of Louis Barton from Dell, David Reiner from AMD, and an engineer from HP.

These people are making a positive difference in the industry, and so I say thank you!

Tuesday Dec 11, 2007

SPEC announces SPECpower_ssj2008

Today SPEC announced SPECpower_ssj2008, the first industry standard power performance benchmark. It measures electric power used at various load levels from active idle to 100% of possible throughput. The workload tested is a server side Java workload. The methodology is applicable to many workloads, and I hope in the future we will see more standard benchmarks, and application of these methods to measuring power consumption of customers' own workloads. This benchmark is the result of long hard work by dedicated engineers from many companies, universities, and Lawrence Berkeley National Laboratory. Congratulations!
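
To give a feel for how measurements at several load levels roll up into a single efficiency figure, here is a small Python sketch. It assumes the general shape of the SPECpower_ssj2008 metric as I understand it (throughput summed over the measured load levels divided by the sum of average power at those levels, with active idle counted in the power sum); the numbers are invented, and this is an illustration rather than the official calculation.

    # Illustrative sketch only: rolling per-load-level measurements into a
    # single operations-per-watt figure. All numbers below are made up.
    measurements = [
        # (target load, throughput in ssj_ops, average power in watts)
        ("100%",        280_000, 260.0),
        ("80%",         224_000, 230.0),
        ("60%",         168_000, 205.0),
        ("40%",         112_000, 180.0),
        ("20%",          56_000, 155.0),
        ("active idle",       0, 130.0),
    ]

    total_ops = sum(ops for _, ops, _ in measurements)
    total_watts = sum(watts for _, _, watts in measurements)
    print(f"overall ops per watt: {total_ops / total_watts:.1f}")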


Sunday Nov 11, 2007

Will that run on an UltraSPARC T2?

Will a workload run well on a Sun SPARC Enterprise T5120, T5220, or Sun Blade T6320? The question has become easier to answer for the UltraSPARC T2 than it was for the UltraSPARC T1, which has limited floating point capability, matched to the majority of commercial workloads. That means that for an UltraSPARC T1 you must be careful to ensure that your workload does not contain excessive floating point instructions.

By contrast, the UltraSPARC T2 has enough floating point power to set the world record for single chip (Search form: enter "1 chip" in the "# CPU" field) SPECompM2001, the industry standard high performance computing benchmark for OpenMP. Still, with the T2 you need to determine whether your workload has sufficient parallelism to make effective use of the T2's 8 cores and 64 hardware threads. Back when we used to worry about using 64 CPUs in a Sun Enterprise 10000, the high performance computing community developed technologies like OpenMP and MPI for portable expression of parallelism.

What's different today is that those 64 virtual CPUs fit into 1 RU of space, so we're looking at a much wider range of applications. Of course if you're consolidating multiple workloads onto a T2 based system using LDoms or another virtualization method, then you already have those workloads running in parallel.

But what of the case where you want to mostly run a single application? If it's written in Java then it's likely to be multi-threaded, either because the language makes it natural to write the application that way, or because the utility classes called by the application are themselves multi-threaded. But a Java application might still be dominated by one or a few threads, and another application might be very well threaded.

The CoolThreads Selection Tool (cooltst) looks at the workload running on a system and advises you about its suitability for T1 systems. As it happens, you can also use it to get an idea of a workload's suitability for T2 systems, with some caveats. First, if you run cooltst on a T2 system it will complain that it doesn't know what processor it's running on; cooltst predates the T2 processor. But then you probably won't need to gauge the suitability of a workload already running on a T2 for migration to a T2. Second, ignore anything it warns you about floating point content being too high for a T1; that isn't a problem for the T2.

You can also just look at the workload yourself using standard Solaris (prstat) or Linux (top) commands, to see how the CPU consumption is spread across application threads. In upcoming blog entries I'll talk more about this simple do-it-yourself analysis.
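
If you'd like to try that before those entries appear, the per-thread views are typically prstat -L on Solaris and top -H on Linux. Below is a small, hypothetical Python sketch (Linux only, using /proc) that samples each thread's CPU time twice and reports how the consumption was spread across threads in between; it's a do-it-yourself illustration, not part of cooltst.

    # Hypothetical sketch: show how CPU time is spread across the threads of
    # one Linux process by sampling /proc/<pid>/task/<tid>/stat twice.
    # utime and stime (fields 14 and 15 of the stat file) are in clock ticks.
    import os
    import sys
    import time

    def thread_cpu_ticks(pid):
        ticks = {}
        for tid in os.listdir(f"/proc/{pid}/task"):
            try:
                with open(f"/proc/{pid}/task/{tid}/stat") as f:
                    # The command name (field 2) may contain spaces, so split
                    # after the closing parenthesis that ends it.
                    fields = f.read().rsplit(")", 1)[1].split()
            except OSError:
                continue  # thread exited between listdir and open
            utime, stime = int(fields[11]), int(fields[12])
            ticks[tid] = utime + stime
        return ticks

    if __name__ == "__main__":
        pid = int(sys.argv[1])
        interval = 10.0  # seconds between samples
        before = thread_cpu_ticks(pid)
        time.sleep(interval)
        after = thread_cpu_ticks(pid)
        deltas = {t: after[t] - before.get(t, 0) for t in after}
        total = sum(deltas.values()) or 1
        # A workload dominated by one or two threads will show most of the
        # CPU time concentrated at the top of this list.
        for tid, d in sorted(deltas.items(), key=lambda kv: kv[1], reverse=True):
            print(f"thread {tid:>7}: {100.0 * d / total:5.1f}% of process CPU time")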

Disclosure Statement:

SPEC and SPEComp are registered trademarks of Standard Performance Evaluation Corporation. Results are current as of 11/11/2007. Complete results may be found at the url referenced above or at


Monday Oct 29, 2007

SPEC releases Java Message Service benchmark

SPEC has released SPECjms2007, so there is now an industry standard metric of messaging performance for middleware. Congratulations to the team from vendors, academia, and the open source community who put in long hard work to make this benchmark a reality, including Technische Universität Darmstadt (Germany), IBM, Sun Microsystems, Oracle, BEA, Sybase and the Apache Software Foundation!


Thursday Aug 16, 2007

speed vs. throughput in the real world

Performance engineers often talk about the difference between speed and throughput with respect to industry standard benchmarks, e.g. SPEC CPU2006. Glenn Fawcett wrote an excellent posting about that difference in the real world. Reading Glenn's scenario, it's easy to see how even technically able people can get the two confused as we all adjust to the "CMT era."


Thursday Aug 24, 2006

SPEC releases CPU2006 benchmark suite

Bid farewell to CPU2000, arguably the world's most widely used benchmark suite, with more than 6,000 results published to date. Today SPEC announced its replacement, CPU2006. The old suite, formidable in its day, was being overtaken by modern systems. Originally some of the benchmark run times had been well in excess of an hour, so running three repetitions of each of the 14 floating point benchmarks and each of the 12 integer benchmarks took considerable time. But some of today's systems run some of the benchmarks in less than 15 seconds.
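
For readers wondering why the repetitions matter, here is a small Python sketch of the usual SPEC CPU style of scoring as I understand it: each benchmark's run time is compared to a fixed reference time, the median of the repetitions is kept, and the suite score is the geometric mean of the per-benchmark ratios. The benchmark names and times below are invented for illustration, not official reference data.

    # Illustrative sketch of SPEC CPU style scoring (not the official tools):
    # ratio = reference time / measured time, take the median of the three
    # repetitions per benchmark, then the geometric mean across the suite.
    from math import prod
    from statistics import median

    # Hypothetical (made-up) reference times and measured run times, in seconds.
    runs = {
        "benchmark_a": (1400.0, [410.0, 402.0, 415.0]),
        "benchmark_b": (2200.0, [830.0, 860.0, 845.0]),
        "benchmark_c": ( 900.0, [195.0, 190.0, 200.0]),
    }

    ratios = [ref / median(times) for ref, times in runs.values()]
    score = prod(ratios) ** (1.0 / len(ratios))
    print(f"suite score (geometric mean of ratios): {score:.1f}")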

The new CPU2006 suite has 12 integer benchmarks and 17 floating point benchmarks, covering a wider range of application areas. Some of the best old benchmarks, like gcc, remain, but with considerably bigger, more demanding workloads. And there are many new benchmarks, some of which came from a wide ranging search program that paid "bounties" to authors who contributed code to the suite and worked with SPEC to adapt their applications into suitable benchmarks.

SPEC member companies, universities, and individuals spent many hours (and sleepless nights) working on these benchmarks, starting soon after CPU2000 was released. They tested and performed detailed performance analysis to ensure that the benchmarks cover a wide range of application areas and coding styles, that they run on any standard conforming system, and that they are not biased towards any particular operating system, compiler, processor pipeline, or cache organization. For every benchmark included in the suite, one or two others were worked on for months only to drop out along the way. Equally important as the benchmarks are the run rules, and a lot of work went into these as well to ensure that tests are run under appropriate conditions for fair and accurate comparisons. All this work required that the vendor representatives temporarily set aside their competitive instincts to share comparative performance analyses, to help their competitors port code, and to compromise on technical and run rule issues.

CPU2006 is off to a fast start, with 66 results already published from a wide range of vendors. Competition should heat up quickly, and we should see many new results in the coming months.

Congratulations to all the members of SPEC's CPU committee!

Wednesday Dec 14, 2005

To Sun bloggers, about benchmarks

Speaking as a SPEC representative, we love to have people using our benchmarks, but everyone has to follow the rules - bloggers too. Standard benchmarks have reporting and fair use rules that require, for instance, that performance claims be backed up by fully rule compliant test runs, and in some cases by prior independent review of results; and that comparisons among products include the appropriate information to support a fair comparison.

Speaking as a Sun engineer, we have some great products and I don't want you to stop talking about them! Just please take care to follow the rules. If you're not sure what they are, ask. Inside Sun go to SAE on SunWeb for information on the benchmarks. Email them. Or email me.

See BMSeer's postings for examples of footnotes saying what he's comparing, the basis for comparison, when he looked at the competitive numbers, and substantiation of the numbers - like this entry about SPECweb2005. You may find the required disclosure inelegant, but bits are free. Put them in. It might just be exactly the information some reader really needs to know.

Thank you.

I am a software engineer in San Diego, president of the Standard Performance Evaluation Corporation, and formerly a mathematician and a violist.

