By walterbays on Feb 21, 2008
More SPECtacular awards were given at SPEC's 2008 annual meeting in San Francisco, these to members of the Web Committee. As before, I won't post anyone's name without permission, but you know who you are and SPEC is grateful for your contributions.
- an engineer from HP
- Greg Kopczynski, VMware, who was also cited for work on the virtualization benchmark
- Pallab Bhattacharya, and another engineer from Sun
- an engineer from Fujitsu-Siemens
- Sam Warner, Intel
It's a bit more complicated to explain what these engineers did to earn this recognition. In 2007 SPEC released maintenance version 1.20 of SPECweb2005, which included many improvements. Most notable to me are new code that extends the level playing field of comparison, and tightened standards-compliance rules that ensure real-world applicability.
In order to focus on web server performance, and to simplify the tested configurations and so reduce the cost of benchmarking, SPECweb2005 uses a simulated back-end database, BESIM, instead of the database server typically found behind real-world web applications. By design, the great majority of the computing load falls on the web server, so the synthetic database component has not been an issue for fair comparison across platforms. However, we discovered that with an ISAPI implementation, two Ethernet packets were exchanged per HTTP response instead of the single packet seen with other implementations. No such results had been published, so there were no unfair comparisons - yet. But the problem needed to be fixed so that ISAPI solutions could be measured on a level playing field. The solution was to adapt BESIM to use the ISAPI v2.0 interface based on the Windows IIS implementation (with permission from Microsoft), so now different solutions behave alike, and they behave like real-world web servers.
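To illustrate the kind of behavior described above - not BESIM's actual code, just a minimal sketch with hypothetical function names - the difference comes down to whether a small HTTP response's header and body are written to the socket separately (with TCP_NODELAY, each write can go out as its own packet) or coalesced into one buffer before a single write:

```python
# Hypothetical sketch: why one server implementation can emit two
# packets per response while another emits one. We model socket writes
# as appends to a list; on a real socket with Nagle disabled, each
# write() of a small response tends to become its own packet.

def send_response_two_writes(writes, header: bytes, body: bytes) -> None:
    """Write header and body separately -> potentially two packets."""
    writes.append(header)
    writes.append(body)

def send_response_one_write(writes, header: bytes, body: bytes) -> None:
    """Coalesce header and body -> a small response fits in one packet."""
    writes.append(header + body)

header = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\n"
body = b"OK"

two = []
send_response_two_writes(two, header, body)

one = []
send_response_one_write(one, header, body)
# Same bytes reach the wire either way; only the packet count differs.
```

The bytes delivered are identical in both cases, which is why the extra packet affects network load and measured throughput rather than correctness.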
Another potential risk to fair comparison involved Java Server Pages (JSP), which is typically used in benchmark results in preference to slower scripting languages like PHP. We added a requirement to the run rules that software products prove their standards compliance by passing the relevant test suite. In the case of JSP, that means web servers which say they implement JSP must really implement JSP - not just the subset needed to run the benchmark while omitting slower features needed for real-world web applications. Again, we don't know of any published results that gained a performance advantage by cutting corners on standards conformance. Now that the rules are tightened, we know there won't be any.