Sunday Apr 19, 2009

SPECjAppServer2004 2925.18 JOPS, Glassfish and MySQL raise the bar

With 2925.18 SPECjAppServer2004 JOPS@Standard, Sun continues its performance and price/performance leadership

Good News :

Sun has just published our latest SPECjAppServer2004 result of 2925.18 SPECjAppServer2004 JOPS@Standard using an all open source stack, i.e. Glassfish, MySQL and OpenSolaris, on the new Sun/Intel Nehalem based servers (using ZFS).


  • The Nehalem/Glassfish/MySQL combination brings substantial price savings to users of typical web based applications, all with commercial grade support

  • The tested configuration consists of 1 x Sun/Intel Nehalem based X2270 server for the Glassfish application server and 1 x Sun Fire X4170 Intel Nehalem based server for the MySQL 5.1 database. For more details see the result at or look at for a good overview.

  • MySQL's efficient logging system requires only 12 spindles to achieve the I/O throughput needed to support this load, yet most of the published proprietary software results require at least double this!

  • This is the first industry standard benchmark featuring the database running on a ZFS file system (which made running multiple tests very efficient; this is a big deal, so I will give some more details on it shortly)

  • Total software and hardware purchase price $US 38,880 based on the “Bill of Materials” in the benchmark report and online pricing from

  • Significant performance gains: the previous best all open source result was Sun's 1197 JOPS result (see for details), and this new result on Sun/Intel Nehalem servers is more than double that.

  • Despite achieving more than double the score of the previous best all open source result, this configuration still improves price/performance over the previous result, i.e. $US 38,880 / 2925.18 = $US 13.29 per JOPS@Standard versus the previous best of $US 13.46 per JOPS@Standard (see for pricing details).

  • To give some idea of the actual performance: this configuration of 2 x 8 core servers running open source software supports a concurrent load of more than 22,750 web application virtual users and uses more than 800 connections to MySQL 5.1.30 via the Glassfish connection pools!
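The price/performance arithmetic in the bullets above can be checked in a couple of lines; a quick sketch using only figures quoted in this post:

```python
# Price/performance for the 2925.18 JOPS Glassfish/MySQL/OpenSolaris result.
price_usd = 38_880            # total hardware + software purchase price
jops = 2925.18                # SPECjAppServer2004 JOPS@Standard
prev_price_per_jops = 13.46   # previous best all open source result

price_per_jops = price_usd / jops
print(f"${price_per_jops:.2f} per JOPS@Standard")  # -> $13.29 per JOPS@Standard
assert price_per_jops < prev_price_per_jops        # still a price/perf improvement

# Rough load per database connection: >22,750 users over >800 connections
print(f"~{22_750 / 800:.1f} virtual users per MySQL connection")  # ~28.4
```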

The Test rig


  • If you are a software developer of web based applications, perhaps an ISV, or a contributor to an open source project, then you really ought to consider the price/performance and enterprise level support advantages of the Sun open source stack, and start considering certifying or deploying your application on this platform.

  • If you are an end user of web based applications such as e-commerce or similar applications, you should perhaps ask your software supplier when they can start moving their applications to the Sun open source stack, so as to start saving you and them money (especially in these tight times).

  • If your hardware supplier is not helping to drive your costs down like Sun is, then perhaps start asking them why!

Final thought :

These results using MySQL 5.1.30 don't yet include the performance improvements from the marvellous work the combined Sun/MySQL/community performance team has been doing, so watch this space; the good news is not finished yet.

Pricing is based on the Sun Glassfish Enterprise Server 2925.18 SPECjAppServer2004 JOPS@Standard result Bill of Materials at
Required disclosure : SPEC and SPECjAppServer are registered trademarks of Standard Performance Evaluation Corporation.
Sun GlassFish Enterprise Server v2.1 on Sun Fire X2270 with MySQL 5.1 on OpenSolaris 2008.11. Sun result: 1 x X2270 (8 cores, 2 chips) used for the application server and 1 x X4170 (8 cores, 2 chips) used for the database server.

Thursday Nov 06, 2008

Sun Demonstrates Compelling Price Performance

Sun demonstrates compelling price/performance advantage of enterprise supported Open Source.

Using Glassfish V2 U2, MySQL 5.0 and OpenSolaris on commodity Sun Intel hardware (i.e. Sun Fire X4150 Intel based servers),

Sun has achieved a result of 1197.10 SPECjAppServer2004 JOPS@Standard. You can see a lot more detail on this result on the Sun benchmarks page, in a nice write-up by the bmseer, and on the Sun ISV blog.

The Summary :

Why this is interesting/important to organisations running web applications.

  1. Demonstrates the huge cost savings available from using Open Source software for the entire application stack !

    The table and graphs above highlight an important fact: the Sun solution's hardware and software together cost less than one CPU's worth of the database license used in the Dell or HP results, and this is true not just of the results above but of most SPECjAppServer2004 submissions that use proprietary (application server or database) software. You can check out the details of the pricing here.

  2. Demonstrates Open Source solution competing directly against proprietary solutions

    One very useful and important feature of the SPECjAppServer2004 benchmark is that it is written in Java, which makes it possible to run the same code unchanged on a variety of different operating systems and hardware platforms; in fact there is a benchmark rule that prohibits any changes to the SPECjAppServer2004 Java code. No matter what platform, J2EE application server, operating system or database the application is run on, the application itself is unchanged. This characteristic makes SPECjAppServer2004 an ideal workload for making comparisons between different environments, and Sun is using this fact to compare the proprietary software based Dell and HP results with an all open source based result. In addition to running the same Java code and hence accessing the same database data, the open source results must fulfill the benchmark requirements for enterprise products, such as correctness, support, durability and ACID compliance.

    Of course there are limitations with all benchmarks and comparisons must be made carefully, but this result clearly demonstrates open source doing the same job as proprietary software at a fraction of the cost. This, coupled with the continuing performance improvements I mention in 3. below, seems to add weight to Allan Packer's thoughts on the future of proprietary and open source databases.

  3. Demonstrates continuing and outstanding performance improvements in the Open Source stack and in the MySQL DB in particular

    In fact it turns out that the Sun result of 1197.10 JOPS is the best published “commodity hardware” result for per core database performance. The Sun benchmark uses just 4 Intel cores on one chip in the X4150 database server, which means the combination of OpenSolaris, MySQL 5 and the X4150 delivers 299.28 JOPS per database core. The only results that demonstrate superior per core database performance are on some IBM POWER architecture chips (which are definitely not commodity priced). If you want to verify this claim, please download the spreadsheet of results available via the search feature at

    It also turns out that the MySQL database requires far less disk resource than competing products: the internal disks of the X4150 were used to support the database throughput, and only 6 of them in a RAID 0+1 partition. If you look through the SPECjAppServer2004 results you will see most vendors need expensive disk arrays to achieve the required disk performance (and most likely expensive extra database options).

  4. Just the start of performance improvements for MySQL , Glassfish and Open Solaris

    This result is based on the first release/update of OpenSolaris and uses MySQL version 5.0, and the performance of both of these platforms is going to improve in the near future. For instance, with MySQL 5.0 we simply have not yet seen the results of the impressive gains that the Sun Performance Technology team, in conjunction with the open source community and open source focussed companies such as Google, are bringing to MySQL. We will see these performance gains start to turn up with MySQL 5.1 and beyond (see Allan Packer's article on this). Also stay tuned, as the performance of Glassfish is going to continue to impress, especially on commodity 4 and 8 core servers used in conjunction with MySQL. In fact I can't wait to start testing GF V3.

  5. Because this benchmark result can be a pointer to how your organisation can start to save “serious” money by using enterprise supported Open Source software

    Basically you might want to start investigating which web applications in your organisation can run on an enterprise supported open source stack such as the Glassfish/MySQL/OpenSolaris stack used for this benchmark result.
    There are some steps you can start taking now, and I am going to detail a strategy for doing just this in an upcoming blog.
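The per core database performance claim in point 3 above is easy to sanity check; a quick sketch using the figures quoted in this post:

```python
# Per-core database throughput for the Sun X4150/MySQL result (point 3 above).
jops = 1197.10   # SPECjAppServer2004 JOPS@Standard for the Sun result
db_cores = 4     # one 4-core Intel chip in the X4150 database server

per_core = jops / db_cores
# The post quotes 299.28 JOPS per database core (299.275 rounded half-up).
assert abs(per_core - 299.28) < 0.01
print(per_core)
```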

Required disclosure : SPEC and SPECjAppServer are registered trademarks of Standard Performance Evaluation Corporation. Results from as of 11/05/2008. 2 x Sun Fire X4150 (8 cores, 2 chips) and 1 x Sun Fire X4150 (4 cores, 1 chip) 1197.10 SPECjAppServer2004 JOPS@Standard; Best result with 8 cores in the application tier of the benchmark: 1 x HP BL460c (8 cores, 2 chips) and 1 x HP BL480c (8 cores, 2 chips) 2056.27 SPECjAppServer2004 JOPS@Standard; Best result with 2 systems in the application tier of the benchmark: 2 x Dell PowerEdge 2950 (8 cores, 2 chips) and 1 x Dell PowerEdge R900 (4 chips, 24 cores) 4,794.33 SPECjAppServer2004 JOPS@Standard.

Tuesday Nov 04, 2008

Price Substantiation for SPECjAppServer2004 results

This blog entry is intended to disclose and document the price of a number of selected SPECjAppServer2004 submissions and to demonstrate the big price advantages to consumers of using enterprise supported Open Source software in as much of the (enterprise) application stack as possible. The SPECjAppServer2004 benchmark (despite its name) is a full system benchmark and tests most system components, i.e. application server, JVM, hardware, network, operating system and (especially) the database. Pricing SPECjAppServer2004 submissions therefore helps end users and developers compare the costs of a system running proprietary software and one running Open Source software, with both running the same (unchanged) application.

Notes on pricing

  1. For all submissions priced here the "Bill of Materials" (BOM) available in the SPECjAppServer2004 Full Disclosure report has been used as the basis for pricing. 
  2. List prices have been used because a) list prices are most easily comparable b) they are the prices advertised by vendors and c) are conservative i.e. it is unlikely a customer would pay higher than a vendor advertised list price.
  3. Although all tested components in SPECjAppServer2004 are required to be supported, the pricing below does not include support prices, as support prices vary or are not available for some hardware components listed in the BOM in some SPECjAppServer2004 submissions. [Note: including support for the Open Source based Sun benchmark would result in an even lower price for the Sun benchmark compared to the proprietary solutions; also note that MySQL support is actually included in the Sun hardware purchase cost.]
  4. The prices quoted come predominantly from the vendors' websites, with a small number coming from online stores. All Sun components have been priced from the Sun online store and the prices included in the totals. Where prices can't be found for another vendor's component, we have simply omitted the price or quoted the next cheapest and most equivalent component from the vendor website, i.e. we are making a genuine attempt to conservatively yet realistically quote competitors' prices. The price/performance advantage for Open Source is so great that understating the proprietary system cost by a little does not influence the analysis a great deal.

Table 1

Table 2

Table 3

Price/Perf claim : the Sun MySQL 5.0 SPECjAppServer2004 result demonstrates the best price/performance of any published SPECjAppServer2004 result.

How can we be so sure?

Well, largely because the Sun submission is so much less expensive than the competition. Just glancing through the other submissions shows that the software purchase/license costs of the non open source submissions are all going to be larger than the cost of the Sun hardware and software combined; in fact the Sun submission in most cases is less expensive than the database license for a single CPU.

Consider: there really are no other submissions that come close. To demonstrate this, let's consider a couple.

Result / candidate


Costs involved

$US/JOPS for (partial) costing


Sun GlassFish Enterprise Server v2 Update 2, SunFire X4150 Cluster with MySQL 5.0 on OpenSolaris 2008.05


Hardware and Software costs = $16,110 as detailed above

$US 13.46 / JOPS@Standard

Note again this is the full system cost not partial

Sybase has published a low JOPS result that is potentially cheap and therefore a good price/performance candidate.

Sybase Enterprise Application Server 6.0.2 Advanced Edition, 4 cores, 2 chips


From the Sybase web site using the buy button ...

Sybase EAServer Enterprise Edition 6.1 (6.0.2 no longer seems available) = $US 7,500/CPU x 2 = $US 15,000

SQL Anywhere 10.0.1 = $US 2,499/CPU x 2 = $US 4,998


Microsoft Windows Server 2003 (64 Bit) Standard Edition x64 = $US 594 x 2

= $US 1,188

So the software total = $US 15,000 + $US 4,998 + $US 1,188 = $US 21,186

$US 32.44 / JOPS@Standard

So on software costs alone, the Sybase result is significantly more expensive.

The IBM result of WebSphere Application Server on IBM System p5 550 Cluster

This result shows 365.13 JOPS per database core (with 8 database cores) and so could potentially deliver good price/performance (note that the Sun/MySQL result is not far behind at 299.28 JOPS per database core).


So looking at the IBM Processor Value Unit Calculator and DB2 price at

It appears that we need 120 PVUs per core, or a total of 960 PVUs, and IBM DB2 Enterprise Server Edition Processor Value Unit (PVU) License + SW Subscription & Support 12 Months = $US 386
=> database license of 386 x 960 = $US 370,560

$US 126.86 / JOPS@Standard

without even considering application server software, operating system or hardware.
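Pulling the three costings together, the arithmetic above can be replayed in a few lines (all figures come from this post; the Sybase and IBM numbers are partial, software-only costs):

```python
# Price comparison sketch using the figures quoted in this post.

# Sun GlassFish/MySQL/OpenSolaris: full hardware + software system cost
sun_price_per_jops = 16_110 / 1197.10
assert round(sun_price_per_jops, 2) == 13.46

# Sybase software only: EAServer + SQL Anywhere + Windows Server, 2 CPUs each
sybase_software = 7_500 * 2 + 2_499 * 2 + 594 * 2
assert sybase_software == 21_186   # already more than the whole Sun system

# IBM DB2 license only: 120 PVUs/core x 8 cores x $386 per PVU
db2_license = 120 * 8 * 386
assert db2_license == 370_560      # ~23x the entire Sun configuration

print(sun_price_per_jops, sybase_software, db2_license)
```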

Conclusion : The current Sun GlassFish Enterprise Server v2 Update 2, SunFire X4150 Cluster with MySQL 5.0 on OpenSolaris 2008.05 is by far the best price/performance published SPECjAppServer2004 result.

SPEC required disclosure : SPEC and SPECjAppServer are registered trademarks of Standard Performance Evaluation Corporation. Results from as of 5th Nov 2008. All comparisons are based on the SPEC SPECjAppServer2004 JOPS@Standard metric or on pricing made using the bill of materials included in each SPECjAppServer2004 result.



Monday Aug 11, 2008

Real Value in Benchmarks

Benchmarks, Workloads , Micro benchmarks and In-House Performance Testing

As a contributor to the SPEC performance organisation on behalf of Sun, I tend to notice and read comments, both negative and positive, on the benchmarks SPEC creates and administers, and I read with particular interest articles on the SPECjAppServer benchmarks that I am involved in. A few days ago I was forwarded a post in which the author offers the opinion that the SPECjAppServer2004 results provide no value, and also offers a pretty negative view of industry standard benchmarks in general. I certainly don't believe that the SPEC benchmarks, or any benchmarks, are perfect, nor do I think that they are the only valuable source of performance information, but to claim that the results have no value seems, well, absurd. So I thought it might be useful to offer some background and observations regarding performance measurement. In the discussion below I try to categorise the main sources of performance information and to highlight the main benefits and shortcomings of each.

(formal) Benchmark

A benchmark is comprised of a performance testing workload (application) or workload definition + a set of run rules and procedures that define how the workload will be run + a process for ensuring that the published results conform to the run rules and to prescribed "fair use"  rules about how comparisons may be made between results.

The workload is generally a (very) complex application and includes the user simulations, data models either in code or specification form and all the information necessary to run repeatable performance tests over a (potentially) wide variety of computing environments. The run rules and procedures define how the workload will be run, what constitutes fair and reasonable tuning techniques, what the requirements are for the products being tested and the format and length of test runs and the reporting requirements . The benchmark usage rules (fair use) outline how one benchmark result can be compared to others and effectively puts constraints on the claims that can be made about any particular benchmark result. Hence (ideally) increasing the value of the published results to end user consumers of the results.

Industry standard benchmark organisations such as SPEC or TPC are comprised of (IT) companies and interested individuals who contribute time and/or money to the organisation to develop (complex) benchmarks and to help manage them. These benchmark organisations exist to create benchmarks and performance data that are credible, relevant and useful to end user consumers of the data. There are many benefits to the contributing vendors in creating and running benchmarks; having a forum to prove performance or price/performance gains in their products is certainly a big motivation, but not the only one. Many of the benchmarks defined and created by SPEC, for example, are used by hardware and software vendors to improve their products long before a result is ever published on the competitive public site. So there are very sound engineering as well as marketing reasons for vendors to contribute to the goal of creating credible, relevant and useful performance benchmarks.

Another valuable source of benchmark and performance data comes from vendor benchmarks. The Oracle applications benchmarks and SAP benchmarks are good and well known examples: the workload, run rules and usage rules are defined by the vendor company and then made available to 3rd parties or hardware partners who want to run and tune these workloads in their environments. These benchmarks have much in common with the industry standard benchmarks, but the scope is generally limited to just the products offered by the vendor. They are very useful for potential customers of these systems, who gain the performance information required to size implementations of these products and hence build confidence in the performance capacities of the system prior to purchase or implementation.

Benefits :
  • Extremely cheap for end users, normally the large IT vendors have done most of the work and published results, end users then need only look at these results and then decide if and how they might be applied to their business and what comparisons they can make based on the published data. End users can use the numbers with a degree of confidence knowing that the results have been audited or perhaps peer reviewed to ensure compliance to the benchmark rules.

  • There is a lot of tuning information and real value in the benchmark results themselves; for instance, consider the SPECjAppServer2004 benchmark results. In each result you will find the .html result page, which is the full disclosure report (FDR). The FDR contains not just the benchmarking results and final and repeat run scores, but also a wealth of tuning information, covering the database, the application server, hardware, Java virtual machine, JDBC driver and operating system, everything another user might need to reproduce the result. In the FDR there is also the full disclosure archive (FDA). The FDA contains the scripts, database schema, deployment information and instructions on how the environment was established. The SPECjAppServer2004 FDR and FDA are valuable resources I use all the time on customer sites as a reference on how to tune and configure their production and test systems.

  • Again in reference to the FDR and FDA much of the raw data and data rate information is useful, examples include the number of concurrent web tier transactions or the network traffic or perhaps the size of the database supported by the database hardware. These data rates and speeds and feeds can be used to assess the capabilities of certain parts of the system being tested and can be useful in sizing some aspect of similar applications.

  • Hardware and software vendors use the benchmarks as tools to improve their products, which in turn flows through to end users. A good example of this at SPEC is when it was decided to use BigDecimal in the web tier of SPECjAppServer2004: even before the SPECjAppServer2004 workload was released, it was obvious to the Java virtual machine vendors participating at SPEC that this was an opportunity to optimise BigDecimal processing in their JVMs. So before the first SPECjAppServer2004 results were released, the JVMs were already providing optimisations for BigDecimal, and SPECjAppServer2004 was helping quantify the performance gains from these optimisations. The benefits flowed to all users of Java BigDecimal who could move to the later JVMs.

  • Competition. Industry standard benchmarks are one way a vendor can show performance improvements in their products and performance leadership over their competition and perhaps gain a marketing advantage, so there is a fiercely competitive aspect to industry standard and vendor benchmarking. This competition is generally good for end users as it commonly produces tunings and optimisations in the vendors products that benefit a wide range of applications using their technology and indeed this is the situation that the run rules and fair use rules generally strive to promote.

Limitations :
  • Inappropriate comparisons, or extrapolation of results. Care must be taken to make selective and reasonable judgements based on the information provided in the results or benchmark reports. It makes no sense for instance to use SPECjAppServer2004 results which is a transactional benchmark say to size a system for data warehousing or business intelligence, a TPC-H benchmark would be the place to go for this information. Also looking at the transaction rate disclosed in a benchmark report and then extrapolating this result upwards is risky as performance is not a continuous function but instead can have many discrete jumps and tested configurations may have hard ceilings such as memory capacity or bus bandwidth. For example it might not be accurate to predict the performance of a single instance of Glassfish application server on a 64 core machine based on the JOPS/Glassfish instance result measured on a 4 core machine.

  • There will be limitations on how closely industry standard benchmarks model your chosen or developed application. In the case of SPECjAppServer2004 the developers and participants , companies like IBM, Intel,Sun,Oracle have looked at our customer base and tried to model the web applications we have seen our customers developing or perhaps based our modelling decisions what they told us they were going to develop.

  • Industry standard benchmarks trail the technology curve and hence will often be using an older version of infrastructure / technology than the market would like. This is because the benchmark can't come out until there is an established set of products to run the benchmark and because it takes time for companies to run and scale the benchmarks and to build a set of results that is useful for end users. For example the development of SPECjAppServer2004 workload was well underway before there were many/any J2EE 1.4 products available, but it wasn't released until after most of the major J2EE application server companies had released their products. Similarly work is underway for the new version of SPECjAppServer but it is trailing the availability of the application servers that have implemented the Java Enterprise Edition 5.0 specification.


A performance workload is similar to the benchmark described above, but it lacks the run rules, process and oversight. This means that end users can't (in general) have high levels of confidence in the performance claims made by vendors publishing results based on these workloads. End users reading benchmark reports and performance claims made using workloads, without the process of a formal benchmark, will have much more work to do to decide which comparisons make sense and which may in fact be misleading. For example, a vendor could use a workload like DBT2 and publish test results comparing, say, the largest server hardware running database "A" against a tiny single CPU server running database "B", and then, without disclosing the different hardware platforms, offer this as data to suggest that database "A" performs better than database "B". Sure, this is an exaggeration, but it serves to demonstrate the value of the process and disclosure rules of the formal benchmarks.

Benefits :
  • Workloads are often easy to run, easily understood and readily available. This makes them very useful to run in-house and therefore end users can make their own comparisons without having to rely on external vendors.

  • Workloads don't have the restrictions of the process imposed by the industry standard benchmark bodies, and as such it is much easier to just run and report results from workloads. For example, in the open source database world the SysBench workload is a very valuable tool and is commonly used for performance testing of code changes to the MySQL database. Results of these tests are widely and openly reported and used as the basis for even more performance improvement. One key here is that in this situation the workload is being used collaboratively for investigation, not primarily competitively to sell something.

  • Workloads are potentially designed by individuals, and so development cycles may be shorter than those of the industry standard benchmarks.

Limitations :
  • Risk of error, especially in tuning. Even though running performance workloads in house can be relatively straightforward, there is still the risk of getting the wrong answer. Consider trying to determine which is the better performing database, "A" or "B", by doing workload based performance testing of both. The user running the tests and trying to accurately compare the results has to have the expertise to tune both "A" and "B" to the point where they make optimal use of the hardware and operating system resources, otherwise the results may be misleading.

  • There is a cost to running performance investigations in-house and though running performance workloads may be relatively cheap it is potentially more expensive than if published industry standard or vendor benchmark figures are available and can be used.

Micro Benchmark

A micro benchmark is usually a small, generally simple workload that tests only a limited number of system or user functions.
In fact most often a micro benchmark will not have any process, such as reporting rules or a basis for comparison of results, so I believe a better term would be micro workload.

Benefits :
  • Generally free or cheap to download or develop.

  • Very easy to run and report results on

  • Potentially a very powerful tool for diagnosing low-level performance problems

  • Because micro benchmarks are generally fairly simple and only measure a very small set of performance attributes, comparisons may in fact be valid across platforms.

Limitations :
  • It is generally not possible, and rarely a good idea, to predict larger system or application performance based on micro benchmarking. By their nature micro benchmarks test and consider only a small subset of the performance of the system being tested, so it is quite likely that other factors beyond the scope of the micro benchmark will affect total system response and throughput.
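To make the "micro workload" idea concrete, here is a minimal sketch using Python's standard timeit module; the function being measured is purely illustrative:

```python
import timeit

# A tiny "micro workload": time one narrow operation in isolation.
# The function under test here is made up for illustration.
def concat_strings(n=1000):
    out = ""
    for i in range(n):
        out += str(i)
    return out

# Repeat the measurement and keep the best time; taking the minimum
# reduces noise from other activity on the machine.
runs = timeit.repeat(concat_strings, number=100, repeat=5)
print(f"best of 5: {min(runs) * 1000:.2f} ms per 100 calls")

# Caveat from the discussion above: this says nothing about whole-system
# throughput; it only characterises this one operation on this one box.
```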

In House Application Performance Testing (customer benchmarking)

Where an end user, customer or software developer creates a purpose built stress test (workload) for their application or hardware and tests what they intend to run in production.
This is arguably the most reliable way for an end user or application developer to understand their code and environment, as it involves running the actual code planned for production. There is no need to try to apply the results and resource utilisation from some other performance test (benchmark or workload), as it is your own code that is being run and directly observed.

Benefits :
  • This is by far the most accurate way to determine real application and / or system performance i.e. to actually test in-house what you intend to run in production. This way no guesses have to be made as to how applicable the test environment is to the intended production environment.

Limitations :
  • By far the most expensive of all of the options, but for a company or individual potentially spending a lot of money on hardware or software this may well be the best option, and the potentially substantial costs of developing a test harness and simulation, determining the test parameters and running the performance tests may be well worth it. One caveat: as software costs fall with the increasing enterprise use of open source software, the cost of running “in house” performance tests may start to look large versus the software purchase cost.

  • Running in-house application performance benchmarks is still not without risk. A wide variety of skills is required to create the simulation and determine that it covers the expected usage pattern for the application. Different skills are needed to deploy and tune the application and any middleware it requires, and DBA skills will be needed as well as general performance tuning skills... the variables really start to add up.


I hope to have provided a very high level overview and a useful categorisation of the main sources of performance data available today. In my opinion each of these sources or approaches has great value; however, because performance analysis is always a contentious and often a more subjective topic than it should be, I am not sure I expect to settle too many debates. Hopefully offering a broader perspective on the value of benchmarking than I have seen in (some) other forums is useful to those who might be needing or relying on this data.

Tuesday Aug 21, 2007

New SPECjAppServer2004 813.73 JOPS result using Glassfish 2 and Postgres

Sun SPECjAppServer2004 813.73 JOPS@Standard result improves performance and lowers price (again) using Glassfish V2 and Postgres 8.2 DB on T2000.

Sun has submitted a new benchmark result of 813.73 SPECjAppServer2004 JOPS@Standard, which improves on the performance of the recent 778 JOPS number at a reduced price for the tested system. The improvements are due to better database performance and a huge boost to application server performance, which enabled the number of application server machines in the tested configuration to be reduced from 3 to 2.

As I suggested in my comments regarding the recent Sun 778 JOPS result, at Sun we are in the early stages of working with open source communities such as the PostgreSQL community so really we have only just started to demonstrate  how far these open source products will scale and how much money they can save your organisation.

Highlights : -

  • Approximately 4% better performance compared to the earlier 778 JOPS number

  • Decreased the price of the system.
    By using Glassfish V2, we were able to reduce the cost of the system by about $US8K as we only needed 2 application server machines instead of 3.
    The total purchase cost of the tested configuration is now only $US 57,425

  • Improved price/performance from $US 84.98/JOPS to $US 70.57/JOPS (see below for full pricing of this result)

  • All open source software result so the purchase cost of the software (operating system, application server and database) is zero!

  • This system is supporting more than 6000 simulated concurrent users
    Note that the SPECjAppServer2004 run rules ensure that during the test there is no (significant) single point of failure, so these virtual users are running in a simulated production-quality environment, e.g. log disks and database are mirrored and striped, and the StorageTek 2540 array has battery-backed cache in case of power failure.
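The price/performance figure above is just the BOM total divided by the JOPS score. A quick sketch of the arithmetic, with the values taken from the highlights above:

```java
public class PricePerformance {
    public static void main(String[] args) {
        double totalPriceUsd = 57425.0; // total purchase cost from the BOM
        double jops = 813.73;           // SPECjAppServer2004 JOPS@Standard
        double dollarsPerJops = totalPriceUsd / jops; // ~70.57
        // Improvement over the earlier 778 JOPS result at $US 84.98/JOPS
        double improvementPct = (84.98 - dollarsPerJops) / 84.98 * 100; // ~17%
        System.out.printf("$%.2f/JOPS (%.0f%% better)%n",
                dollarsPerJops, improvementPct);
    }
}
```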

Cost comparison against HP result (costing details for HP result)





Pricing calculations for Sun 813.73 JOPS result

  • I have used the bill of materials (bom) from the benchmark result to price this result.

  • All hardware prices come directly from the Sun US website as at 20th July 2007
    go to , choose a server and use the “get it” tab to be taken to the Sun store, where you will find the pricing information

  • I have omitted minor items from the pricing like a keyboard etc

  • The pricing doesn't include support, but if it did the result would look even better compared to results using commercial (non open source) software!


SPEC required disclosure : -

SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation.
Results from as of 22nd Aug 2007

All comparisons are based on the SPEC SPECjAppServer2004JOPS@Standard metric from or on pricing made using the bill of materials included in each SPECjAppServer2004 result
Sun 813.73 SPECjAppServer2004 JOPS@Standard (Sun Fire X4200, 4 chips / 8 cores, T2000 1 chip / 8 cores)
HP 874.17 SPECjAppServer2004 JOPS@Standard (rx2660, 4 cores, 2 chips)

Monday Jun 18, 2007

Full pricing of the HP SPECjAppServer2004 874.17 JOPS@Standard Submission

As you will see from my other blog entries on SPECjAppServer2004, each submission includes a section called the bill of materials (BOM). It is included in all published results so that users, customers and others can see exactly what components were required to achieve the benchmark score. The BOM includes the part numbers or names of all tested components and hence allows end users to price the submission. Depending on the vendor, this pricing will either be a straightforward exercise or take a little work. At Sun we appear happy to publish all of our prices at , but this isn't always the case with all vendors. However, as you can see below, using the BOM and the part numbers in it you can get a pretty good idea of the full price of the submission even if the vendor isn't actively publishing all prices on their web site or in their performance claims. So the BOM is a great tool for keeping the claims that vendors make about their benchmark results honest, complete, accurate and relevant to your business.

I noticed the price/performance claims HP is making in recent posts and documents about their Integrity servers; certainly the pricing information at the bottom of page 2 in this document seems a little incomplete. So I have used the BOM from the HP SPECjAppServer2004 result to give a more complete picture of the price of the HP benchmarked system.


I have used the HP web site as the primary source for HP hardware prices, and a TPC-C result from HP to price the operating system and several other components also used in the SPEC submission. For some components where the price is not listed by HP I have found 3rd party suppliers and included their prices; since 3rd party suppliers are generally cheaper than the original equipment manufacturer, using these prices is very likely to yield a lower overall system price.


  1. The full price of the HP 874 JOPS result is $US 187,701.52
    The price performance therefore on this submission is 187701.52/874.17 =
    $US 214.72/JOPS

  2. HP is not leading price/performance, because Sun has (far) superior price/performance proof points, such as the recent submission with the Sun Application Server (Glassfish) and MySQL 5 on Solaris 10. That result is 720 SPECjAppServer2004 JOPS@Standard\* and its price/performance is $US 82.24/JOPS. All the pricing details for this result can be found on this blog at
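To sanity-check the comparison, both price/performance figures and their ratio can be recomputed from the disclosed totals (a sketch; the totals are those quoted in the text above):

```java
public class PricePerfComparison {
    public static void main(String[] args) {
        double hpDollarsPerJops = 187701.52 / 874.17; // HP BOM total / JOPS
        double sunDollarsPerJops = 59260.0 / 720.56;  // Sun BOM total / JOPS
        System.out.printf("HP:  $%.2f/JOPS%n", hpDollarsPerJops);  // ~214.72
        System.out.printf("Sun: $%.2f/JOPS%n", sunDollarsPerJops); // ~82.24
        System.out.printf("HP costs %.1fx more per JOPS%n",
                hpDollarsPerJops / sunDollarsPerJops);             // ~2.6x
    }
}
```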

HP 874 SPECjAppServer2004 JOPS@Standard / 2 x 2-CPU (4-core) Itanium servers
(2 servers total)

Hardware:

  • HP Integrity rx2660 server, 1.6GHz, 18GB, 4-core
  • Integrity 3-slot cage option (could not find price)
  • 4GB DDR2 memory pair
  • 8GB DDR2 memory pair
  • 36GB 10K RPM drives
  • Dual-port 1000BaseT LAN adapter card (could not find price)
  • Dual-port 4Gb Fibre Channel adapter
  • HP-UX Integrity FOE w/Sys 2 Proc PCL LTU
  • Modular SAN Array 1000
  • 72GB 15K drives

Software:

  • Oracle Database 10g Enterprise Edition
  • Oracle Database Horizontal partition option
  • Oracle Application Server 10g Java Edition (processor perpetual)

The HP result can be found at
The referenced Sun result can be found at

Pricing for Latest Sun Glassfish/MySQL/Solaris SPECjAppServer2004 benchmark

720 JOPS pricing

Some while ago a technical hitch was noted in the Sun/Glassfish/MySQL result referenced here. The hitch was a small configuration error which meant that the result didn't follow the SPECjAppServer run rules to the letter, and so the result was removed from the SPEC web site (marked non-compliant). Sun is committed to using and improving open source software, so with help from MySQL we resubmitted and republished this benchmark at SPEC. The updated result uses almost the same configuration as the initial result; the significant changes were a StorageTek 3510 disk array and an updated release of the Sun 9.0 application server (Glassfish). You can find the new and improved result of 720.56 SPECjAppServer2004 JOPS@Standard at

Summary of price/performance

So, using the bill of materials (BOM) from the submission, the price of the entire hardware and software stack is $US 59,260 (see below for the calculations).
The price/performance of this result is 59,260/720.56, or $US 82.24/JOPS

Please notice that the price analysis includes all of the major components hardware and software listed in the BOM.

I believe that this result demonstrates the leading price/performance for this benchmark (by a long way).

A couple of important notes / disclaimers :

  • The SunFire X4100 servers used in the submission are now older models; the current (and very similar) model is the SunFire X4100 part number A86-FGZ2BH4GKBA listed under “Config 3-M2”, priced at $US 5945.00 at the Sun shop

  • The prices below are in $US

  • All prices for Sun products below come from the Sun web site and hence the web site is the correct place to make purchasing enquiries or to check on my calculations

  • The prices are listed here for illustrative purposes only and are from the web site as at June 2007

  • I haven't included support in this calculation; it looks at acquisition cost only

  • I haven't included the price of the screen and keyboard as this is a cost common to all submissions and only a minor amount

Bill of Materials for SPECjAppServer2004 Sun/MySQL 720.56 SPECjAppServer2004 JOPS result

Supplier  Description                                Product #            Qty     Unit Price      Price
--------  ----------------------------------------   ------------------   ---     ----------      -------
Sun       Sun Fire X4100 (2x285,4x2GB,2X73GB)        A64-EGB2-2H-8G-CB7    3      $8995           $26,985
Sun       Solaris 10 RTU                                                   3      $0              $0
Sun       SunSpectrum Upgrade: 3YGOLD, 24x7          W9D-A64-24-3G         3      not priced      not priced 

Sun       Sun Fire X4100 (2x285,4x2GB,2X73GB)        A64-EGB2-2H-8G-CB7    1      $8995           $8995
Sun       Solaris 10 RTU                                                   1      $0              $0
Sun       Single-Port PCI Ultra320 SCSI HBA          SGXPCI1SCSILM320-Z    1      $340            $340      
Sun       SunSpectrum Upgrade: 3YGOLD, 24x7          W9D-A64-24-3G         1      not priced      not priced

Sun       Sun StorEdge 3510, 12x73GB, 1 RAID CONT    XTA3510R01A1R876      1      $22,495         $22,495
Sun       Single-Port PCI Ultra320 SCSI HBA          SGXPCI1SCSILM320-Z    1      $445            $445
Sun       SunSpectrum Upgrade: 3Y GOLD, 24x7         W9D-SE3510-24-3G      1      not priced      not priced
Sun       17" Entry Color Monitor                    X7147A                1
Sun       PS/2 Keyboard & Mouse                      #320-1261             1

Sun       Sun Java System Application Server                                      $0              $0
          Platform Edition 9.0 

Sun       Sun Java System Application Server         SJSAS-PE9F-1PR        9      not priced      not priced
            Platform Edition 9.0  
            Premium Support per CPU for 1 year

MySQL     MySQL Database 5.0                                                      $0              $0
MySQL     MySQL Network Gold Support                                       3      not priced      not priced 
            for 1 year


Required Disclosure Statement:

  • SPECjAppServer2004 3x Sun Fire X4100 appservers (12 cores, 6 chips) and 1 Sun Fire X4100 database (4 cores, 2 chips) 720.56 SPECjAppServer2004 JOPS@Standard.

Wednesday Jun 07, 2006

Pricing SPECjAppServer2004 Submissions

Pricing details for Sun's SPECjAppServer2004 712.87 JOPS result

One of the disclosures made in each SPECjAppServer2004 submission is the “Bill of Materials” or BOM. The BOM is designed to let users of the benchmark information see all of the components used to achieve the result, and to enable them to price the entire configuration in a consistent way. This allows price or price/performance comparisons with other benchmark submissions and, importantly, lets users of the benchmark check price/performance claims made by vendors. To demonstrate how to use the BOM, and to further highlight the terrific price/performance of the Sun/MySQL 712.87 SPECjAppServer2004 JOPS result, I have priced the benchmark result (below) by reproducing the BOM from the benchmark page at and attaching list prices from the Sun ( and MySQL ( websites.

Robert Lee has a good summary of the price/performance benefits of the Sun/MySQL result here, but to recap: Sun's result of 712.87 SPECjAppServer2004 JOPS at a total acquisition price of $US51,780 represents $US72.64 / SPECjAppServer2004 JOPS, while the comparable HP result of 1664.36 SPECjAppServer2004 JOPS at $US1,034,103 represents $US621.32 / JOPS.

A couple of important disclaimers :

  • The prices below are in $US

  • The prices for Sun products below come from the Sun web site and hence the web site is the correct place to make purchasing enquiries

  • The prices are listed here for illustrative purposes only and are from the web site as at June 1 2006

  • I haven't included support in this calculation; it looks at acquisition cost only

  • I haven't bothered to include the price of the screen and keyboard as this is a cost common to all submissions.

Bill of Materials for SPECjAppServer2004 Sun/MySQL 712 SPECjAppServer2004 JOPS result

Supplier  Description                                Product #            Qty     Unit Price      Price
--------  ----------------------------------------   ------------------   ---     ----------      -------
Sun       Sun Fire X4100 (2x285,4x2GB,2X73GB)        A64-EGB2-2H-8G-CB7    3      $8995           $26,985
Sun       Solaris 10 RTU                                                   3      $0              $0
Sun       SunSpectrum Upgrade: 3YGOLD, 24x7          W9D-A64-24-3G         3      not priced      not priced 

Sun       Sun Fire X4100 (2x285,4x2GB,2X73GB)        A64-EGB2-2H-8G-CB7    1      $8995           $8995
Sun       Solaris 10 RTU                                                   1      $0              $0
Sun       Single-Port PCI Ultra320 SCSI HBA          SGXPCI1SCSILM320-Z    1      $340            $340      
Sun       SunSpectrum Upgrade: 3YGOLD, 24x7          W9D-A64-24-3G         1      not priced      not priced

Sun       Sun StorEdge 3320, 12x73GB, 1 RAID CONT    XTA3320R01A1T876      1      $15,460         $15,460
Sun       SunSpectrum Upgrade: 3Y GOLD, 24x7         W9D-SE3310-24-3G      1      not priced      not priced
Sun       17" Entry Color Monitor                    X7147A                1
Sun       PS/2 Keyboard & Mouse                      #320-1261             1

Sun       Sun Java System Application Server                                      $0              $0
          Platform Edition 9.0 

Sun       Sun Java System Application Server         SJSAS-PE9F-1PR        9      not priced      not priced
            Platform Edition 9.0 
            Premium Support per CPU for 1 year

MySQL     MySQL Database 5.0                         
MySQL     MySQL Network Gold Support                                       3      $0              $0
            for 1 year

Required Disclosure Statement:

  • SPECjAppServer2004 3x Sun Fire X4100 appservers (12 cores, 6 chips) and 1 Sun Fire X4100 database (4 cores, 2 chips) 712.87 SPECjAppServer2004 JOPS@Standard.

  • SPECjAppServer 2004 3x Sun Fire V20z appservers (6 cores, 6 chips) and 1 Sun Fire V20z database (2 cores, 2 chips) 266.01 SPECjAppServer2004 JOPS@Standard.

  • BEA Systems: SPECjAppServer2004 6x HP DL380 G4 appservers (12 cores, 12 chips ) and 1 HP rx8620-Intel Itanium 2 database (16 cores, 16 chips) 1,664.36 JOPS@Standard.

  • All results from as of 05/25/06. SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation

Friday May 26, 2006

Sun releases new SPECjAppServer2004 benchmark using completely free and open source software

Sun has just posted a new SPECjAppServer2004 result of 712.87 SPECjAppServer2004 JOPS using a totally open source software stack. Just to make this clear: the application server is open source, the operating system is open source and the database is open source. The hardware is 4 x SunFire X4100 dual-CPU, dual-core servers (i.e. each server has 4 cores; see the diagram below). You can find the benchmark result and all the configuration details on the SPEC site here


  • It is an all open source result and a good demonstration of how to use open source to put together a system capable of running > 5000 simulated web application users for around $US 52K (which is basically the hardware cost as the software is all free)

  • It demonstrates the scalability and performance of the free and open source Sun Application Server 9.0 (a.k.a project Glassfish).

  • It demonstrates that a 4 core SunFire X4100 server running Solaris 10 and MySQL 5 can support > 5000 concurrent web users !

So why do I care about SPECjAppServer2004 ?

  1. Because there is probably something in the workload that can be related to your business
    The SPECjAppServer2004 workload is designed to model a typical modern end-to-end web based application. The workload has several distinct domains which feature complex and differing business transactions. The “customer” domain performs online order entry with credit checks, the Manufacturing domain models a just-in-time manufacturing operation, and the Supplier domain performs inventory management and supply chain functions, including XML calls to external simulated suppliers.

    If this isn't relating to your business yet, then consider that the benchmark itself simulates concurrent online users connecting to and using the application; these users connect both via HTTP and RMI. In the just-released Sun 712.87 JOPS result there were > 5600 simulated concurrent users running against the application for the entire measurement period of 1 hour. [You can approximate the number of simulated users from the benchmark result as (HTTP Dealer injection rate x 10) + (RMI Dealer injection rate x 3).]

    Also, all of the transactions are persisted to the relational database, and from the perspective of the database the entire application looks like a very heavy and highly contended OLTP workload, which makes SPECjAppServer2004 a very interesting tool for testing and tuning relational database performance!

  2. Because Sun and other companies are using SPECjAppServer2004 to improve the performance of your application
    If the application(s) your company uses or develops need a J2EE or Java EE 5.0 application server, use JDBC to connect to a relational database, use Java, or run on the web, then it is quite likely that the work Sun (and others) are doing with SPECjAppServer2004 will bring performance benefits to you. In general, the way to take advantage of these benefits is to use the latest version of the infrastructure product; for example, a (free) upgrade to Sun Application Server 9 may give your system a significant performance improvement with no change to your application. In fact, in our testing we saw a measured increase in SPECjAppServer2004 performance over equivalent tests with Sun Application Server 8.1, again with no changes to the application.

  3. Because Sun is using SPECjAppServer2004 to demonstrate how you can save money!
    By utilising the SPECjAppServer2004 benchmark in conjunction with a completely open source software “stack”, and by working with prominent open source providers like MySQL (, Sun is demonstrating the reliability and enterprise readiness of open source software and giving web application developers and deployers a good example of how to save money on IT infrastructure. For a good discussion of the price benefits of the open source Sun SPECjAppServer2004 result have a look at Robert Lee's blog. However, I also encourage you to check the price of the system yourself: look at the “Bill of Materials” in the benchmark result at , where you can see the part numbers and complete system specifications of what was used; then go to and price the components, and you can check for yourself that the total acquisition cost of the system, which supports > 5000 simulated users, is around $US 52K!
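The user-count approximation quoted in point 1 can be sketched as follows; the injection rates below are made-up example values, purely to illustrate the formula, not the rates from the actual submission:

```java
public class SimulatedUsers {
    public static void main(String[] args) {
        // Hypothetical Dealer injection rates; the real values are in the
        // published benchmark disclosure.
        int httpDealerInjectionRate = 440;
        int rmiDealerInjectionRate = 440;
        // users ~= (HTTP Dealer rate x 10) + (RMI Dealer rate x 3)
        int approxUsers = httpDealerInjectionRate * 10
                        + rmiDealerInjectionRate * 3;
        System.out.println(approxUsers); // 5720 for these example rates
    }
}
```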

Pointers and Resources :

System Diagram (also available as part of the benchmark full disclosure archive)

Required Disclosure Statement:
SPECjAppServer2004 3x Sun Fire X4100 appservers (12 cores, 6 chips) and 1 Sun Fire X4100 database (4 cores, 2 chips) 712.87 SPECjAppServer2004 JOPS@Standard. SPECjAppServer 2004 3x Sun Fire V20z appservers (6 cores, 6 chips) and 1 Sun Fire V20z database (2 cores, 2 chips) 266.01 SPECjAppServer2004 JOPS@Standard. All results from as of 05/25/06. SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation

Tuesday Jun 28, 2005

Configuring Sun Application Server 8.1 (J2EE 1.4) and MySQL 5

Connecting to MySQL from the “free” Sun Application Server 8.1 Platform Edition (PE) J2EE server

Sun has been offering the "platform edition" of our J2EE 1.4 application server free for download and use for some time now, and I am writing this blog from the Sun JavaOne conference in San Francisco, where Sun has just announced the open sourcing of the next version of our J2EE application server, i.e. the Java EE 5.0 version. See project for details and to download early versions of the code, and also check out Jim Driscoll's blog.

Given that the application server is free and open source, and the operating system (Solaris 10) is free and open source, it seems to follow that many J2EE developers and users are, or will be, interested in using free and open source databases with this combination of application server and operating system. We have been running the SPECjAppServer2004 J2EE benchmark with Sun Application Server 8.1 and MySQL 5.0, and we have learnt much about running the app server with MySQL and Connector/J, so I thought we would share some of the configuration details from this testing to help you get your application running on the same combination. If you are interested in running either Postgres or Derby with Sun Application Server 8.1, then check out Rajesh's blog entry for Postgres and Lance's blog for Derby.


I suggest using version 5.0.4 (or later) of the MySQL database and version 3.1.8 (or later) of the Connector/J JDBC driver. It is important to note that Sun Application Server 8.1 passes the compatibility test suite when using the above versions, and this is a big deal because it protects your investment by keeping the platform consistent and your code portable. There are also a number of performance enhancements in the above releases, and we have especially noticed significant improvement in our internal SPECjAppServer2004 results due to improvements by MySQL in the Connector/J driver. Also, with respect to versions, don't forget that if you already have app server 8.1 with JDK 1.4, then a move to J2SE 5.0 will likely boost your performance significantly.

Configuring the connection for MySQL 5 from the application server using the admin console

  1. Under Resources, select JDBC and on right hand side frame, select Connection Pools

  2. Click New to create a new connection pool

  3. Provide a name such as mysql-pool and select the Datasource type javax.sql.ConnectionPoolDataSource

  4. Click Next and enter the Datasource Classname as com.mysql.jdbc.jdbc2.optional.MysqlDataSource

  5. Add the following properties to the connection pool:

  6. DatabaseName <your-db-name>

  7. port <your-selected-port-number> (the default port is 3306)

  8. user <DB-User-Name>

  9. Password <DB-User-Password>

  10. ServerName <DB-Server-Name>

  11. Save the settings and click Ping. If the DB server is running and a connection can be made, Ping will succeed. Make sure you have copied the mysql-connector-java-3.1.8-bin.jar JDBC driver into the $J2EE_HOME/domains/<domain-name>/lib/ext directory.
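If you prefer the command line to the admin console, the same pool can typically be created with the asadmin tool. This is only a sketch, and the exact flags can vary between application server versions; the pool name, credentials and the jdbc/mysql JNDI name below are example values:

```shell
# Create the connection pool (property values are examples)
asadmin create-jdbc-connection-pool \
  --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource \
  --restype javax.sql.ConnectionPoolDataSource \
  --property User=spec:Password=spec:DatabaseName=specdb:ServerName=pdb:portNumber=3306 \
  mysql-pool

# Expose the pool to applications under a JNDI name
asadmin create-jdbc-resource --connectionpoolid mysql-pool jdbc/mysql

# Verify connectivity (equivalent to the console's Ping button)
asadmin ping-connection-pool mysql-pool
```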

Example domain.xml

This is the entire connection pool entry for the current testing we are doing with SPECjAppServer2004 (see for more details on the workload, and check out the Connector/J documentation for an explanation of the properties).


<jdbc-connection-pool connection-validation-method="table"
    datasource-classname="com.mysql.jdbc.jdbc2.optional.MysqlDataSource"
    fail-all-connections="false" idle-timeout-in-seconds="300"
    is-connection-validation-required="false"
    is-isolation-level-guaranteed="false" max-pool-size="80"
    max-wait-time-in-millis="90000" name="SpecJPool"
    pool-resize-quantity="4" res-type="javax.sql.DataSource"
    steady-pool-size="80">

  <property name="serverName" value="pdb"/>
  <property name="portNumber" value="3306"/>
  <property name="User" value="spec"/>
  <property name="Password" value="spec"/>
  <property name="DatabaseName" value="specdb"/>

  <!-- Note : the following are tuning parameters not essential for connectivity -->
  <property name="cachePrepStmts" value="true"/>
  <property name="prepStmtCacheSize" value="512"/>
  <property name="useServerPrepStmts" value="false"/>
  <property name="alwaysSendSetIsolation" value="false"/>
  <property name="useLocalSessionState" value="true"/>
  <property name="elideSetAutoCommit" value="true"/>
  <property name="useUsageAdvisor" value="false"/>
  <property name="useReadAheadInput" value="false"/>
  <property name="useUnbufferedInput" value="false"/>
</jdbc-connection-pool>





