Wednesday Feb 24, 2010
By walterbays on Feb 24, 2010
The recently released SPEC MPI2007 2.0 benchmark adds support for testing much larger clusters, up to 2,048 ranks (cores). This is a big leap over the Medium Data Set (currently shipping with SPEC MPI version 1.1), which is designed to scale to 128 processing cores. The Large Data Set enables fair comparison of modern HPC systems, which have outgrown the Medium Data Set. SPEC awarded six engineers for their work on this benchmark.
Andrey Naraikin - Intel
Huiyu Feng – SGI
Pavel Shelepugin - Intel
When creating new benchmarks, testing is vital, especially for a benchmark that can run on thousands of cores. Andrey, Huiyu, and Pavel provided the testing needed to make sure our SPEC MPI2007 large dataset benchmark works at the largest sizes. Their testing, problem identification, scaling analysis, and bug fixes allowed us to make steady progress on the benchmark.
Brian Whitney – Oracle
Brian is HPG’s release manager who prepared many benchmark test kits integrating all the changes brought by the entire SPEC HPG group. He also implemented numerous infrastructure improvements.
Carl Ponder - IBM
Carl was a crucial contributor to the development and management of MPI2007 documentation, especially with respect to the run rules, FAQ, and configuration file.
Cloyce Spradling – Oracle
SPEC MPI2007 is built on the same tools as SPEC CPU2006. Cloyce adapted those tools to HPC needs and extended them with the new flexibility required for MPI2007.
Photo: Klaus-Dieter Lange (SPEC Awards Committee Chair, right) presents award to Brian Whitney (Oracle)
Tuesday Feb 23, 2010
By walterbays on Feb 23, 2010
More SPECtacular awards. As engineers know, academics are impractical. And as university researchers know, industrial practitioners have sorely limited vision. What does it take to bridge those two worlds so that industry can take full advantage of theoretical advances, and so that academia can leverage an industrial base to more quickly and easily develop relevant results? Two SPECtacular award recipients are answering that question: Kai Sachs of Technische Universität Darmstadt (top photo), and Samuel Kounev of Universität Karlsruhe (bottom photo).
SPEC has long offered reduced price benchmarks to educational institutions, and enticed universities to join as "associates" for a nominal fee. Yet when SPECies talk about our work designing, developing, and analyzing benchmarks, we mostly talk to each other. And when academic researchers talk about their work, they mostly talk to each other. Samuel and Kai have worked diligently to bridge the gap by growing SPEC's series of informal industrial benchmark workshops into a major performance conference co-sponsored by the ACM, with refereed and published proceedings, a diverse set of tracks, and a program committee drawn equally from industry and academia.
- SIPEW 2008
- SPEC Benchmark Workshop 2009
- First Joint WOSP/SIPEW International Conference on Performance Engineering WOSP/SIPEW 2010
- Second Joint WOSP/SIPEW International Conference on Performance Engineering WOSP/SIPEW 2011
Few people would have earned the respect and credibility in both the academic and the industrial spheres to be able to bridge the differences in culture and outlook and bring us together like this. And they are not finished yet - but that is a story for a later date. For now I'll just say, Kai Sachs and Samuel Kounev are SPECtacular!
Monday Feb 15, 2010
By walterbays on Feb 15, 2010
Last December SPEC released the SPECjEnterprise2010 benchmark, the third generation of Java Enterprise Edition performance tests from the Standard Performance Evaluation Corporation (SPEC) consortium. The new benchmark tests Java EE 5.0's significantly expanded and simplified programming model, with a realistic workload stressing the entire system including the JVM, middleware, database, CPU, disks, and servers. Yet although the benchmark test is much broader, it is simpler than ever to run because it takes advantage of Java EE 5.0 features and because it uses the open source Faban general purpose driver.
One of the most enjoyable duties of SPEC President is thanking the people from all the member institutions who make SPEC's success possible. I fear that it is all too easy for SPEC people's achievements to go unrecognized, in our environment where their successes are most visible to their competitors rather than to their own management. So each January at the annual meeting we present awards to SPECtacular contributors. And now I write here to give them a bit of public recognition. I start this year with award recipients from the Java committee. Awards were presented by Alan Adamson (on right in photo, presenting award to Steve Realmuto). Alan is a SPEC Awards Committee member, member of the Board of Directors, and former chair of the Java committee.
Akara Sucharitakul - Oracle
The silent partner. Akara developed Faban, made it available to SPEC, and implemented new features needed to facilitate its use in SPECjEnterprise2010, so that the benchmark can be broader in scope and still be simpler to run.
Anil Kumar - Intel
The greenie. Although SPECjEnterprise2010 version 1.0 does not include an energy metric, the code is there thanks to Anil, awaiting adoption of suitable run and reporting rules.
Anoop Gupta - Oracle
The quiet achiever. Anoop seldom took part in the committee's sometimes raucous debates, because he was busy working on the code, making sure the workload is correct and correctly balanced.
Bernhard Riedhofer - SAP
Mr Specification. Bernhard speaks very quietly and politely, but the development group learned early on that when Bernhard speaks, it pays to listen. Of his many code contributions, those that make the database loader run truly parallel will be most appreciated by folks running large submissions.
David Keenan – Oracle
The chair. The job of chairing one of SPEC's largest and sometimes fractious committees is not an easy one, particularly while running results review for 4 benchmarks plus development of two new benchmarks. David combines a soft touch with firm determination to get the job done.
John Stecher - IBM
The closer. At the end, when "only" a few tough action items stood in the way of benchmark release, John got additional resources committed from IBM to "get this thing done," pitching in for plenty of the closing work himself too.
Rahul Biswas - Oracle
Mr WebServices. Rahul provided most of the web services code, plus the code and ant scripts to integrate the benchmark into the Faban harness.
Robert Wisniewski - IBM
The reporter. Rob wrote the reporter code and built the test kits. He also served as secretary: taking good notes is vital to an open development process of multiple (competing) vendors.
Saraswathy Narayan - Oracle
The architect. Sara took the time to make a deep study of the entire benchmark from a transactional and data flow perspective, ensuring the correctness and function of the database schema and benchmark data partitioning.
Steve Realmuto - Oracle
The chief of police. Steve contributed much to the organization of the benchmark development effort, and helped the team follow SPEC policies. He was editor of the run rules, driving the review and ensuring correctness.
Tom Daly – Oracle
The instigator. Tom was a relentless leader in driving the project forward, and a tireless worker in helping to push at every stage. The benchmark became a much richer and diverse test of Java middleware because of Tom’s influence.
Congratulations to these outstanding engineers, and to the entire SPEC Java team!
Friday Jun 05, 2009
By walterbays on Jun 05, 2009
The last of the 2009 SPECtacular awards. SPECweb2005 is the industry standard performance metric for web servers, and today it is joined by SPECweb2009, the industry standard performance and energy metric for web servers. The benchmark includes a banking workload (all SSL), a support workload (no SSL), and an ecommerce workload (mixed). This is the first application of the SPECpower methodology to potentially large system-under-test configurations. In the initial benchmark results you can see one system with and one without external storage, and the test report lets you see the power consumption of just the server, of the storage, and of the entire configuration at various utilization levels. The entire committee did a fantastic job with this benchmark. As always, I won't list anyone's name without permission. (But give me the okay and I'll update this posting!) SPEC recognizes:
Gary Frost (AMD) who stepped in to fill a key developer role in an emergency with the release clock ticking. He took over the control code after a sudden reassignment, and frankly we handed him quite an undocumented mess. Gary was up to the challenge and produced the finished code.
Another engineer from AMD had primary responsibility for the reporting page generator. You often can't know exactly what information ought to go into a full disclosure report (FDR) until you see it. Nor how you want it organized and arranged. Nor what data integrity cross checks need be present to avoid errors. So the committee changed requirements often during development. But no matter how many requirements were placed on him, he turned around with the needed code within a week!
An engineer from Fujitsu Technology Solutions became the de facto quality assurance officer because of his thorough and methodical testing practices. If there are a hundred ways software in general can go wrong, then there are a thousand ways benchmark software can go wrong, as by its nature it runs on systems stressed to the limit. When SPEC benchmark software just works, that is largely due to people like this engineer who foresee, test, and diagnose every possible failure unanticipated by the authors.
And, if you'd like to see all of the SPECtacular awards, then follow the tags!
Wednesday Jun 03, 2009
By walterbays on Jun 03, 2009
Another SPECtacular award from the SPEC annual meeting: Alan Adamson retired from IBM where he had been their primary SPEC Representative, held a number of different elective positions in SPEC, and earned deep respect and trust from his colleagues. Coming from the IBM Toronto Software Lab, Alan was a natural to lead SPEC's Java committee. Having put that very large committee in smooth running order, Alan was elected secretary to the Power committee helping it to produce the first industry standard power performance benchmark. Meanwhile he led the OSG steering committee which coordinates activities of all the SPEC OSG committees.
Alan genuinely cares about the well-being of SPEC and the people involved. He demonstrates incredible thoughtfulness and effectiveness in thinking about SPEC's benchmark development. He fosters the fun and friendly SPEC culture where there is always time to share a joke or a funny story if appropriate. At the same time he creates space for candid discussions of serious matters. Alan's leadership and personal effort have been a big contributor to the success of SPEC.
Alan continues to hold one position in SPEC, as a director, because members of the board of directors are elected as individuals, not as companies. Alan serves as a general chair of the 2010 WOSP/SIPEW International Conference on Performance Engineering, a joint conference of SPEC and ACM which brings together top academic researchers and industry practitioners in performance engineering.
You can follow Alan on his blog, for interesting insights on art, technology, politics, and life - where he is just as opinionated as ever, just as modest as ever, just as intolerant of stupidity, and just as tolerant of the people involved - even when we are opinionated, immodest, and stupid at times. For all his hard work in SPEC I can think of nobody more deserving of a relaxing retirement than Alan, and nobody whom we will miss more than him!
Monday Jun 01, 2009
By walterbays on Jun 01, 2009
Another SPECtacular award from the SPEC annual meeting: Klaus Lange (HP) has become a valuable conduit across different levels of the organization and across benchmark subcommittees, by virtue of becoming indispensable in all of them. Though Klaus is an experienced "SPEC hand," he never forgot what he faced as a newcomer, and took it on himself to organize a new member orientation program to help new institutions integrate into SPEC more easily and effectively. As chair of the SPECpower committee Klaus delivered the industry's first energy efficiency benchmark, and leads the committee in aiding other groups as they add energy metrics to a wide range of benchmarks. These groups include many SPEC committees as well as other industry consortia. As HP's representative on the OSG steering committee Klaus has earned respect for his opinions with his diligence and fair-mindedness. As a member of the Board of Directors he is often the first to step up to volunteer for important projects, as well as exercising sound judgment in conducting SPEC's business operations.
Friday May 29, 2009
By walterbays on May 29, 2009
Another SPECtacular award from the SPEC annual meeting: John Henning of Sun Microsystems is secretary of the Open Systems Group steering committee. John has been the driving force behind improvements to our policy document. This is crucial to efficient operation of the organization, especially as so many new organizations have joined SPEC and so many new participants have joined the work, even from long-time SPEC member companies. John is also the one who reminds all of us to pause in our lecturing and really listen to our adversaries, the dissident minority voice. Sometimes they have a point that is valuable to the task at hand, if we only recognize it, and thereby harness all of the energy and creativity of the group.
Thursday May 28, 2009
By walterbays on May 28, 2009
Another SPECtacular award from the SPEC annual meeting: David Morse (Dell) served as vice-chair and now chair of the Open Systems Group steering committee. He was honored for his effective organization and leadership of a rather fractious bunch, for the successful release of many benchmarks, and for the formalization of rules and procedures that put everyone on an even footing with the "good old boys" and reduce risk and uncertainty in members' use of the benchmarks. Another example of his dedication is his implementation of bookmarkable search extensions to benchmark result queries on spec.org. David is equally comfortable and competent in the most complicated leadership roles and in the most difficult and detailed technical roles.
Wednesday May 27, 2009
By walterbays on May 27, 2009
Another SPECtacular award from the SPEC annual meeting: Paula Smith (VMware) was honored for her tireless, competent, and patient work managing the SPEC office and the people there. Paula consistently exhibits what makes SPEC a unique place. The attention and enthusiasm she brings to her volunteer work make her a pleasure to interact with. She goes above and beyond in everything she does, and is often able to turn emergencies into opportunities. Most impressive is how she maintains this over time and in every interaction, despite many competing pressures for her attention. Beyond this management work, she also manages to handle the organizational and technical work of chairing the Virtualization committee, and of course her day job at VMware.
Tuesday May 26, 2009
By walterbays on May 26, 2009
Another SPECtacular award from the SPEC annual meeting: Cloyce Spradling (Sun Microsystems) was honored for continued timely support of SPEC CPU, HPG and editorial tools. The key factor is his timeliness, in that he responds to unplanned, asynchronous requests, if not with a solution then at least with a map to help people find their way out of the woods. And that's on top of his day job at Sun.
Friday May 22, 2009
By walterbays on May 22, 2009
SPECtacular awards. The SPECpower committee has been busy. They released version 1.10 of the SPECpower_ssj2008 benchmark as a no-cost upgrade for existing licensees. It adds support for measurement of multi-node (blade) servers, improves usability, and adds a graphical display of power data during benchmark execution. Review and publication of benchmark results continues apace, with a spirited competition for first place, with ever more power analyzers accepted for testing, and more test labs qualified for independent publication. They have also been assisting several other benchmark committees inside SPEC, and other industry standard benchmark organizations, to implement energy measurement for their benchmarks. SPECpower is more than just a benchmark; it is a methodology, and that methodology is modified and expanded as necessary over time to accommodate energy measurements for all the different workloads which are relevant to the real world in those market segments. In alphabetical order SPEC recognizes:
Chris Boire (Sun Microsystems) – As release manager he coordinated and integrated development activities to keep the deliverables on schedule.
David Schmidt (HP) – He created stand-alone and network integrated tools for automated results checking to help ensure that results submissions are correct and complete.
Greg Darnell (Dell) – Author of the PTDaemon, he helped many other groups get started measuring power for their benchmarks. He helps out with whatever needs to be done, technical or organizational.
Hansfried Block (Fujitsu Technology Solutions) - He automated the process of determining power analyzer precision, handled the acceptance of several new power analyzers, and was instrumental in getting multi-channel analyzers accepted.
Harry Li (Intel) – He was primary developer of the Visual Activity Monitor, giving a unique view of the system's activity.
Jeremy Arnold (IBM) – If I tried to recount all the accomplishments Jeremy was cited for I'd probably run into some internal blog size limit. Suffice it to say he is a primary developer on many parts of the code, who never turns down a plea for help, and who is never satisfied until the entire benchmark package is right.
Karl Huppler (IBM) – As primary author/editor of the Power and Performance Methodology, he organized the document to capture deep technical consensus in the committee, and made it readable and understandable for people new to the field.
Matthew Galloway (HP) – He designed the control software to drive multiple JVMs, enabling multi node (blade) testing.
An engineer (AMD) – He created and maintained much of the web content explaining the benchmark and methodology to the public.
Thursday May 14, 2009
By walterbays on May 14, 2009
More 2009 SPECtacular awards. SPEC's forthcoming virtualization benchmark will provide meaningful metrics of hardware and software performance in data center consolidation. As complex as this benchmark is, running several different benchmarks together in virtual machines on a host system under test, the code is only half the story. As with all benchmarks the workload is vital, to represent realistic usage scenarios so that performance improvements made on the benchmark will also benefit real world users. And the run rules are vital, needing to accommodate technology improvements over the lifetime of the benchmark while precluding unrepresentative optimizations that exploit rule loopholes (or what the layman might call "cheating"). There is spirited debate from companies representing rather diverse user communities, all with an interest in seeing that their customers' needs are addressed by the benchmark. In the end, when this group of top engineers reaches a consensus, you know they've come up with a benchmark that is as rock solid as it is possible to make. From among this great team of partners and competitors, three were singled out for SPECtacular awards:
Andrew Bond of HP always steps forward when someone is needed to test new code, features, or parameter tuning. He performed many experiments whose results showed the committee the sensitivity of the benchmark to various parameters, sizes, and configuration options, so that the right choices could be made for fair benchmark comparisons. He also created scripts to set up and configure new guest VMs for each workload.
Chris Floyd of IBM improved and tailored the mail server and application server workloads for the new benchmark. He's revamped these workloads several times to improve the I/O profiles and add burstiness to the application server transaction injection. He helps the other developers at regular on-line coding sessions, explaining new features, and resolving problems. He even helps out when on vacation.
Greg Kopczynski of VMware developed a (necessarily) complex and feature extensive harness for the benchmark. He responds to countless pleas for help, assistance, debugging, etc., in true SPEC fashion without asking whether the help is for a partner or a competitor. He added burstiness to the web server workload. And he integrates new code and changes from all the developers for each development kit revision.
Thanks for your great efforts!
Wednesday May 13, 2009
By walterbays on May 13, 2009
More 2009 SPECtacular awards. Sometimes even success doesn't succeed, at first. SPEC developed a workstation energy consumption benchmark, and a lot of people worked extra hard to deliver it in time for the EPA to consider using it in the Energy Star program, which is being extended beyond PCs to also include workstations, servers, thin clients, and storage. Although the EPA decided not to use our test for the workstation program at this time, the work is still important and I am confident it will be used in some way. A graphics processor can easily use more energy than a CPU, especially a high performance accelerated 3D processor. For their exceptional work in producing this benchmark I thank David Reiner of AMD, Joerg Grosshennig of Fujitsu Technology Solutions, Paul Besl of Intel, and an engineer from NVIDIA.
Tuesday May 12, 2009
By walterbays on May 12, 2009
More 2009 SPECtacular awards. SPEC released an update to our MPI2007 benchmark of Message Passing Interface performance. It allows evaluation of MPI-parallel, floating point, compute intensive performance across a wide range of cluster and SMP hardware. MPI2007 continues the SPEC tradition of giving HPC users the most objective and representative benchmark suite for measuring the performance of SMP (shared memory multi-processor) systems. The update, provided at no cost to existing MPI2007 licensees, improved compatibility, stability, documentation and ease of use. SPEC gave awards to:
Brian Whitney of Sun Microsystems for meticulous care as release manager in scheduling, and for putting it all together.
Carl Ponder of IBM for the development and management of documentation, especially with respect to the run rules, FAQ, and the configuration file.
Håkon Bugge of Platform Computing for outstanding testing skills during the benchmark development.
I am a software engineer in San Diego, president of the Standard Performance Evaluation Corporation (spec.org), formerly a mathematician and a violist.