Wednesday Jun 11, 2008

MySQL Performance Comparison

We run an internally developed AMP benchmark to understand the performance characteristics of the AMP stack on various platforms.


We compared the AMP stack in Cool Stack 1.2 with the one in Cool Stack 1.3. We didn't see any significant difference in performance in the Apache/PHP tier between the two versions. However, there is a dramatic improvement in the MySQL tier using Cool Stack 1.3. Cool Stack 1.2 shipped with MySQL 5.0.45; 1.3 now has MySQL 5.1.24.


The graph below shows the CPU utilization on the MySQL system at different load levels:



MySQL 5.1 is 4x more efficient than 5.0.45! There are improvements in the Query Cache and query optimization in MySQL 5.1, but at this point we are doing more analysis to understand the reasons for the dramatic performance increase. Here is another graph that shows the network throughput at different load levels:



It appears that some of that efficiency comes from driving much less load on the network as well. We know that the client API has changed in 5.1, but it is amazing how much of a difference it makes in terms of performance.


So, the bottom line is: upgrade to MySQL 5.1 in Cool Stack 1.3 and share your experiences. I'm very interested in finding out what others see in terms of performance.






Cool Stack 1.3 is here

It's finally here. Before you download the release, please do read the documentation. It's over a month late, but hopefully the features will compensate for the delay.
Almost every single component in the stack has been upgraded. Two new packages, CSKpython and CSKnginx, have been added. And yes - the python package does include mod_python too.
You can read about the full list of changes in the Changelog.


ruby


Significant changes have been made to the ruby package. In addition to the upgrade to ruby 1.8.6p114, we include a number of gems, notably mysql, postgres and mongrel. It should also be easy to install any new gem that requires native compilation using gcc - this should just work out of the box. A word of caution if you're running ruby on SPARC: please DO NOT use gcc. If you're building native gems, take the extra step of installing Studio 12 and use rbconfig.studio instead (simply rename it to rbconfig.rb after saving the current one). This file is located in /opt/coolstack/lib/ruby/1.8/sparc-solaris2.10.
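The rename dance can be sketched as follows. This is a hypothetical illustration run in a scratch directory so it can execute anywhere; on a real install the directory would be /opt/coolstack/lib/ruby/1.8/sparc-solaris2.10 and the two files would be the actual gcc- and Studio-generated configs.

```shell
# Demonstrate the rbconfig swap in a scratch directory; the two files here
# are stand-ins for the real gcc- and Studio-generated configs.
RBDIR=$(mktemp -d)
echo "gcc config"    > "$RBDIR/rbconfig.rb"
echo "studio config" > "$RBDIR/rbconfig.studio"

cp "$RBDIR/rbconfig.rb" "$RBDIR/rbconfig.rb.gcc"    # save the current (gcc) one
cp "$RBDIR/rbconfig.studio" "$RBDIR/rbconfig.rb"    # activate the Studio config
cat "$RBDIR/rbconfig.rb"
```

After the swap, any native gem build picks up the Studio compiler settings from the active rbconfig.rb.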


This release also includes significant performance enhancements for ruby. By changing compiler flags, we measured a 20-30% improvement. In addition, SPARC-specific patches (which are now in ruby 1.8.7) improve performance by a further 8-15%. If you are a ruby user, let us know your experiences. I'd love to hear of any performance changes you see as well.
In a future post, I'll share some performance results on the AMP stack. But for now, please try this release and let us know what you think via the forum or comments on this blog.


Cool Stack installation in Zones


For users who have installed Cool Stack in zones, I know the upgrade is a painful process. As I mentioned earlier, we do hope to fix this by allowing upgrade installs in the future. But some users have found solutions on their own. In particular, I found this post in the forum interesting; scroll down to reply 19.


Monday May 19, 2008

Where is 1.3?

I know many of you may be wondering why Cool Stack 1.3 isn't out yet, so I thought I'd post an update on where we are with the release. When I first started talking about it, I was hoping to get the release out in early May. But several things have conspired to cause a delay.

  1. We took on a lot for this release. Although from the proposal it may seem like a simple matter of updating the versions of the various components, under the hood we decided to do a lot more streamlining of the build and packaging process. Since so many version updates were involved, that also meant more legal approvals.
  2. We have a more formal QA process for this release (and for future releases). The good news is that this means a higher-quality release; the bad news is that it now takes more time, as we have to go through at least a couple of QA cycles to qualify the release.
  3. We are going to institute a patching process for releases going forward. What this means is that we can put out patches for critical bugs and provide an upgrade path for existing installations. This should prove especially useful to users who have installed in zones. Before everyone starts jumping for joy, I want to mention that for 1.3 you will still have to go through the pain of a fresh installation after saving your config files and re-applying them. But going forward, say for 1.3.1 or 1.4, we will provide an upgrade option, so please bear with the pain for this one release.
  4. And there is now talk of making Cool Stack a more full-fledged product with proper production support. This is one of the major reasons for all of the above enhancements and the cause of the delay, but we have received so many requests for production support that we felt it was worth taking the time to do some groundwork to make this happen.
  5. Just to be clear, we will continue to provide free support through the Cooltools forum as we do now, and we hope more of you will help us out there. We know there are many experienced Cool Stack users out there - if Cool Stack has saved you time, we hope you will consider giving a little of your time back to the forum. We have one full-time engineer assigned to the project, and a few other Sun engineers who are interested in Cool Stack's success contribute their time. We are really short-staffed, so please do consider sharing your knowledge and helping out other Cool Stack users.

After reading all that, I know the big question still is: when will 1.3 be released? We have just finished incorporating almost all components and are starting QA of build 2. We have a few more things to take care of, but I'm hopeful that we can get this out in the first week of June.

Thanks for your patience and your support of Cool Stack. Please do keep your feedback coming and subscribe to the forum. 


Saturday Apr 26, 2008

Microformats and Tags

I talked about Microformats in a post last year on web20expo. It appears that the technology is now going mainstream. I attended a workshop on Web2.0 Best Practices at the Web20 Expo this week in which the speaker, Niall Kennedy, expounded on the advantages of using microformats. He said he's seen significant growth in traffic on his site since he started using them, because search engine results show direct links to pages on his site.
Yahoo is adding microformats to many of their properties; the Yahoo events site already has them. This is exciting, since microformats are a bridge to the semantic web, which we've been talking about for several years now. However, the talk has never materialized into anything concrete. Meanwhile, the web2.0 world has decided to do things its own way.

A classic example is tagging. While the semantic folks talk about taxonomies and ontologies, the web guys invented folksonomies (aka tagging). Tagging has allowed users and sites to group stuff together, attaching semantic meaning to their data. Tag clouds have worked fairly well, and sites like flickr are extending the concept by automatically creating even more tags! The problem with tags, of course, is that a word can have several meanings and it's not easy to figure out which interpretation should be used. This is a problem RDF solves nicely, but more on that later.

Microformats are better than tags in the sense that they have a more rigid format and as such provide better semantics, although not perfect ones. Let's look at an example:

<div class="vevent">
  <span class="summary">JavaOne Conference</span>:
  <span class="description">The premier java conference</span>
  <p><a class="url" href="http://java.sun.com/javaone/sf">http://java.sun.com/javaone/sf</a></p>
  <p><abbr class="dtstart" title="2008-05-06">May 6</abbr>-<abbr class="dtend" title="2008-05-09">9</abbr>,
  at the <span class="location">Moscone Center, San Francisco, CA</span></p>
</div>
which will display as:


JavaOne Conference:
The premier java conference

http://java.sun.com/javaone/sf

May 6-
9,
at the Moscone Center, San Francisco, CA


The advantage of such a format is that it clearly specifies the various properties associated with the event: summary, description, url, start and end dates, location, etc. However, it can still be ambiguous, since it uses literals for many properties, e.g. the location. If someone specified the location simply as "San Francisco", it could mean any of 27 different San Franciscos.

If we take this formalizing a step further, we reach the world of RDF. Here every entry is specified as a triple of the form <subject> <predicate> <object>, using URIs to represent the objects in an unambiguous manner. Without going into the syntactic details, we could specify a location in the standard format of number, street, city, state, country, zip. This gives an object identity, the property that uniquely identifies it.
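For instance, a hypothetical triple stating the conference's location might look like this in N-Triples style (all the URIs here are made up purely for illustration):

```
<http://example.org/events/javaone2008>
  <http://example.org/vocab/location>
  <http://example.org/places/MosconeCenterSF> .
```

Because the object is a URI rather than the literal "San Francisco", there is no ambiguity about which place is meant.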

I'll talk more about RDF and semantic web in another post.


Wednesday Apr 23, 2008

Micro-blogging is here

I am attending the Web2.0 Expo in San Francisco this week. Today was the first day of the conference, and the crowds seemed larger than last year's. The primary focus this year seems to be social networking.

I'll blog more about other aspects of the conference, but I wanted to focus this post on the twitter phenomenon. I'd heard of twitter of course, but I just could never figure out what it was all about. What was the big deal about telling the world what you were doing every second? Who would even care?

I attended a panel titled "Short attention span theater: The birth of micro-blogging and micro-media". It was moderated by Gregarious Narain (he turned out not to be all that gregarious) and included Jeremiah Owyang (Forrester Research), Stowe Boyd (consultant) and Brian Solis. At the start of the session, Greg asked how many people used twitter, and almost everyone raised their hands (not me, though!). He then asked the audience to twitter posts to micromedia2 (set up expressly for this session). Within minutes, the posts started rolling in, and soon my head was swimming with the twitter vocabulary: twhirl, tweet, tweetscan, twitpitch ...

Just when I was beginning to once again tell myself "I just don't get it", another guy in the audience voiced exactly what was on my mind! To which Stowe Boyd replied that if you don't get it, forget about it - it's not something that can be easily explained. He went on to suggest that the questioner give it a try for a few weeks; perhaps he'd get it then.

Fair enough. So I'm going to try it. I got myself an account on twitter under the name shantiS. If you're into twittering, go ahead and twitter (or is that tweet?) me. When a noun becomes a verb, it means a product has arrived.

Happy twittering.



Friday Feb 29, 2008

Time for next Cool Stack release

It's been close to 4 months since we released Cool Stack 1.2, so it's time to start thinking about the next release. Here's what we have planned so far, and as always I'm looking for feedback from current and future users on what you'd like to see. Needless to say, all currently known bugs will be fixed and the current patches will be rolled into the release. However, if you don't tell us about the problems you've run into, we won't be able to fix them. So, once again, I'd like to encourage people to post your problems/issues/tips etc. on the forum.

Here's a list of stuff we're currently looking at for Cool Stack 1.3:

Component                                                   Cool Stack 1.3   Cool Stack 1.2
Apache                                                      2.2.8            2.2.6
Tomcat                                                      5.5.26           5.5.23
php                                                         5.2.5            5.2.4
mysql                                                       5.1              5.0.45
squid                                                       3.0              2.6
apc                                                         3.0.16           3.0.14
mod_perl                                                    2.0.3            2.0.2
rails                                                       2.0.2            1.2.3
php extensions: memcache, pdflib, freetype, mcrypt          new              -
libevent                                                    1.3e             1.3d
memcached                                                   1.2.5            1.2.2
mod_jk                                                      1.2.26           1.2.25
lighttpd                                                    1.4.18           1.4.16
nginx                                                       0.5.35           -
dtrace ruby extension                                       new              -
multithreaded perl                                          new              -
lighttpd with more builtin extensions                       new              -
mysql performance improvement with libfasttime/libmtmalloc  new              -
ruby gems: postgres, mysql                                  new              -
improved ruby build process (easier compilation of gems)    new              -

I would like feedback on a couple of areas in particular:

  • MySQL 5.1 has many performance improvements, and we'd very much like to package this version even though it's not FCS yet (it's been out for a looong time now and should be quite stable). Should we go ahead and use 5.1 instead of 5.0.x?
  • We've heard some complaints about perl not being multi-threaded, but there is a small performance penalty associated with building it multi-threaded (especially if you're not going to use its multi-threaded features). Any thoughts?

Of course, please feel free to comment on anything else you'd like seen added/changed as well.

 

Shanti

 

Monday Nov 05, 2007

PHP dtrace extension for Cool Stack 1.2

We have fixed the issue with the PHP dtrace extension not working in Cool Stack 1.2. As I mentioned in my announcement post, we had already identified the issue but didn't have time to fix it before the release. The problem was that /usr/ccs/bin/ld was being used to do the linking, but this doesn't work for dtrace: some initialization code needs to be called from the .init section, and this is not set up correctly unless we use 'cc' (rather than 'ld') to do the linking.

So a simple addition to the configure line:
LD="cc"

did the trick.

We now have two files, dtrace_1.2_sparc.so and dtrace_1.2_x86.so, posted. Download the one for your machine and do the following:

  • Copy it to the php5 extensions directory /opt/coolstack/php5/lib/php/extensions/no-debug-non-zts-20060613 and rename it to dtrace.so.
  • Add extension="dtrace.so" to your /opt/coolstack/php5/lib/php.ini. 
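The two steps above can be sketched as follows. This is run against a scratch tree so it can execute anywhere; on a real system PHPLIB would be /opt/coolstack/php5/lib and the .so would be the downloaded dtrace_1.2_sparc.so or dtrace_1.2_x86.so.

```shell
# Sketch of the dtrace extension install, using a scratch tree and a
# stand-in file in place of the real download.
PHPLIB=$(mktemp -d)
EXTDIR="$PHPLIB/php/extensions/no-debug-non-zts-20060613"
mkdir -p "$EXTDIR"
touch "$PHPLIB/dtrace_1.2_x86.so"                    # stand-in for the download

cp "$PHPLIB/dtrace_1.2_x86.so" "$EXTDIR/dtrace.so"   # step 1: copy and rename
echo 'extension="dtrace.so"' >> "$PHPLIB/php.ini"    # step 2: enable it
```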

See my earlier post for an example of how to use dtrace to trace through the AMP stack.

Friday Nov 02, 2007

Cool Stack 1.2 Released

Finally, it's here. We missed our October 31 deadline, but hopefully that won't matter when you see the contents of the release.

Short summary of changes in the 1.2 release.

You can download the new packages from the Sun Download Center.

Some caveats to be aware of:

  • These packages will overwrite Cool Stack 1.1 packages that you may already have installed. All packages continue to install in /opt/coolstack, but some of the package names have changed. Thus, pkgadd(1M) will not know that the new CSKapache2 package will install in /opt/coolstack/apache2 and overwrite the contents of a previous Cool Stack 1.1 installation of CSKamp. So, please do save your current installation. A detailed strategy for doing this is defined on the Cool Stack site. Here are some other short-cuts:
    • Move /opt/coolstack to /opt/coolstack1.1 and remove all the CSK* packages installed. Since the actual files in /opt/coolstack no longer exist (as you've moved the directory), pkgrm will do no harm, but it does change the system's perception of what packages are installed. Then delete /opt/coolstack and start installing the new packages.
    • Save just the files you need - typically the apache2/conf directory, your php.ini file, etc. Then remove all the CSK* packages and delete /opt/coolstack.
  • This release was built and tested on Solaris 10. During final testing on Nevada (aka OpenSolaris or SXDE), we ran into package incompatibility issues: some packages do not install in a straightforward manner on OpenSolaris. Please use the work-around.
  • The php dtrace extension doesn't work. We have identified the fix and will post a patch soon.

As always, please do read the FAQ (we will be updating this as we get feedback about the new release) and if that doesn't help, post your question/problem on the CoolTools Forum.

Let us know what you think of this release via the forum, the feedback alias or this blog. If we don't get feedback, it is hard to know what components to include or what bugs need to be fixed.

Tuesday Oct 16, 2007

Cool Stack on Niagara 2 Systems


Sun recently announced the Sun SPARC Enterprise T5120 and T5220 and Sun Blade T6320 systems based on the UltraSPARC T2 processor.  You can find lots of information on the various features and functionality provided by these servers.

One other cool feature is that these systems ship with Cool Stack pre-loaded. The Cool Stack 1.1 packages are available in /var/spool/pkg. You can install the ones you want using the pkgadd(1M) command as follows :

root@wgs40-82 # cd /var/spool/pkg
root@wgs40-82 # ls
CSKampSrc_sparc.pkg        CSKphplibs_sparc.pkg
CSKamp_sparc.pkg           CSKrubySrc_sparc.pkg
CSKmemcachedSrc_sparc.pkg  CSKruby_sparc.pkg
CSKmemcached_sparc.pkg     CSKsquidSrc_sparc.pkg
CSKmysqlSrc_sparc.pkg      CSKsquid_sparc.pkg
CSKmysql_sparc.pkg         CSKtdsSrc_sparc.pkg
CSKncursesSrc_sparc.pkg    CSKtds_sparc.pkg
CSKncurses_sparc.pkg       CSKtomcat_sparc.pkg
CSKperlSrc_sparc.pkg       Changelog.txt
CSKperl_sparc.pkg          CoolStack1.1_sparc.pdf
CSKphplibsSrc_sparc.pkg    LICENSE.txt
root@wgs40-82 # pkgadd -d CSKamp_sparc.pkg

The following packages are available:
  1  CSKamp     Apache httpd, PHP and MySQL
                (sparc) Apache 2.2.3, PHP 5.2.0, MySQL 5.0.33

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:


Tuesday Sep 18, 2007

Cool Stack 1.1.1 contents

We are currently working on Cool Stack 1.1.1, which will have the following components updated, in addition to fixes for known bugs:

  • Apache 2.2.6: add mod_proxy, mod_fcgid, SMF support
  • PHP 5.2.4: add dtrace extension, add new fastcgi version for use with non-Apache webservers
  • APC 3.0.14
  • Suhosin 0.9.20
  • MySQL 5.0.45: add SMF support
  • Perl 5.8.8: add Sys::Syslog, DBI, DBD-mysql extensions
  • memcached 1.2.2
  • squid 2.6
  • tomcat 5.5.23
I would love to get feedback on this. Is your favorite module/extension missing? Are there bugs/issues you have run into that haven't been reported on the forum?

Monday Aug 06, 2007

fhb - Http Load Generator

One of the simplest performance and load tests that can be performed on any web server is to measure the response time of a static HTTP GET request. If you measure how this response time degrades as more parallel connections issue the same request, you can get a basic understanding of the performance and scalability of a web server.

Anyone who's worked with Apache long enough is probably familiar with ab, the load generator/benchmarking tool that is so widespread. This is a sad state of affairs, as 'ab' is extremely flawed. I don't want to repeat its flaws here, as Scott Oaks has done an excellent job of summarizing them.

Another popular tool is http_load, but it has drawbacks as well. Although it is better than ab in that it does maintain multiple parallel connections, the driver is not multi-threaded, so it doesn't exactly mimic a multi-user load, leading to inaccuracies in response time measurement. Further, connections are handled via select(3c), which has known scalability issues.

Enter the Faban Http Bench (fhb) command line tool. This tool is part of Faban, an open source benchmarking kit that lets you develop and run complex benchmarks. For simple http load testing as described above, Faban now includes a simple tool called fhb.

fhb generates load by instantiating multiple client threads; each thread runs independently and maintains its own statistics. The response time data collected is highly accurate. The entire infrastructure is written in Java, allowing it to run on any platform (although the wrapper tool fhb is a shell script).

fhb Usage

1. Download fhb (you only need the client tar) and do the following:
    gzip -cd faban-client-062907.tar.gz | tar xvf -
   You should find fhb in the faban/bin directory.

2. Set JAVA_HOME appropriately (JDK 1.5 is required; /usr/java should work, but double-check to make sure it's 1.5)

3. Set your PATH to include $JAVA_HOME/bin

4. Start your web server (ideally on another machine) and have a static file of whatever size you want to test. Let's assume that the webserver is running on 10.10.18.111:80 and a file called test.html exists in its docs directory.

5. Run fhb from your client machine's faban directory as follows:

   # bin/fhb -J -Xmx1500m -J -Xms1500m -s -r 60/180/0 -c 100 http://10.10.18.111:80/test.html

   Here is an explanation of some of the args:

   -J : anything following -J is passed to the JVM as is.

   -s : create the summary file and save it in the default output directory. Without this option, fhb simply deletes the files it created after displaying the summary statistics. By saving the output, you can go back and look at it later. A separate directory will be created for each run in /var/tmp (changeable via the -D option).

  -r : the rampup/steady-state/rampdown times for the run, in seconds.

  -c : the number of connections. (You may want to start with 1 or 10 before trying larger numbers.)

The final parameter is the URL to access. Note that the URL can also point to a php/perl script if you want to include mod_php/mod_perl in your testing.

fhb Output

When you run the tool, you will get output like the following:

Bytes Read from class :1525
Aug 6, 2007 12:49:43 PM com.sun.faban.driver.core.MasterImpl runBenchmark
INFO: RunID for this run is : 1
Aug 6, 2007 12:49:43 PM com.sun.faban.driver.core.MasterImpl runBenchmark
INFO: Output directory for this run is : /var/tmp//faban_cd/1
Aug 6, 2007 12:49:46 PM com.sun.faban.driver.core.AgentImpl run
INFO: http_driver1Agent[0]: Successfully started 100 driver threads.
Aug 6, 2007 12:49:46 PM com.sun.faban.driver.core.MasterImpl executeRun
INFO: Started all threads; run commences in 1995 ms
Aug 6, 2007 12:50:48 PM com.sun.faban.driver.core.MasterImpl executeRun
INFO: Ramp up completed
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl executeRun
INFO: Steady state completed
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl executeRun
INFO: Ramp down completed
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl getDriverMetrics
INFO: Gathering http_driver1Stats ...
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl generateReports
INFO: Printing Summary report...
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl generateReports
INFO: Summary finished. Now printing detail xml ...
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl generateReports
INFO: Summary finished. Now printing detail ...
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl generateReports
INFO: Detail finished. Results written to /var/tmp//faban_cd/1.
ops/sec: 7490.989
% errors: 0.0
avg. time: 0.006
max time: 1.001
90th %: 0.05
Saving output from run in /var/tmp//faban_cd

The output files from this run are saved in /var/tmp/faban_cd/1. Currently the summary output is only in xml, but it's a simple matter to write an xsl script to convert it to whatever format you want. (The final release will include such a script.)

Response Times

A note on response times: I often see reports quoting average response times. In fact, most common benchmarking tools only capture averages. If you are concerned about the response experienced by your web customer, average response times just don't cut it. That is why most standard benchmarks (like TPC, SPEC) specify a 90th-percentile response time criterion - the time by which 90% of the requests should have completed. This is a reasonable measure: if 90% of your customers see an adequate response, your site's probably performing okay.

To illustrate the point, look at the response times in the sample output above. The avg. time is 0.006 seconds, but the 90% time is 0.05 seconds, nearly ten times the average!
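The gap between the mean and the tail is easy to reproduce with toy numbers. The sketch below (plain shell and awk, with made-up response times) computes both statistics for a run where a fifth of the requests are slow: the average barely moves, while the 90th percentile exposes the one-second tail.

```shell
# Toy data: 8 fast requests (10 ms) and 2 slow ones (1 s).
times="0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 1.0 1.0"
stats=$(printf '%s\n' $times | sort -n | awk '
  { t[NR] = $1; sum += $1 }
  END {
    printf "avg: %.3f\n",  sum / NR            # arithmetic mean
    printf "90th: %.3f\n", t[int(0.9 * NR)]    # 90th-percentile sample
  }')
echo "$stats"
# avg: 0.208
# 90th: 1.000
```

The average (0.208 s) looks tolerable, but one in five customers actually waited a full second.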
 

fhb Release

fhb will be part of the impending 1.0 release of Faban. At that time it should include documentation and a post-processing script for the summary xml output. But please do check it out now, so we can incorporate any feedback before its release.

But don't stop at using fhb to run simple GET requests against your web server. Drive a realistic application to get a better sense of how it performs. You can use Faban's full power to write a load generator for any application, or if you already have a full benchmark, drop it into Faban's harness to capitalize on its many wonderful features such as automated monitoring, graphing and tabular viewing of results, results comparisons, etc. Check out all the functionality it offers at the Faban website.
 


Tuesday Jul 03, 2007

APC Update

Many folks have reported in the CoolTools Forums problems with APC in Cool Stack 1.1 resulting in a SEGV.
The APC version in Cool Stack 1.1 is 3.0.11, and if it is enabled, extensions such as mysql, dtrace etc. fail with a SEGV. MediaWiki doesn't work either.

We have tested APC 3.0.14, and this version seems to work much better; I'm keeping my fingers crossed that it will work for you as well. Our performance testing doesn't show any substantial differences between 3.0.11 and 3.0.14. If you want to give it a try, simply download the correct file for your platform and rename it to apc.so in your php extensions directory, /opt/coolstack/php5/lib/php/extensions/no-debug-non-zts-20060613.

APC 3.0.14 for Cool Stack 1.1 (SPARC)

APC 3.0.14 for Cool Stack 1.1 (x86)

Friday Jun 15, 2007

lighttpd on Solaris

Build notes and Tuning tips for Solaris

lighttpd seems to be increasingly popular, so much so that netcraft has started tracking its use on production websites. I've spent some time building, tuning and running simple performance stress tests on lighttpd 1.4.15 on Solaris and thought I'd share what I learnt.

Building lighttpd

My build of lighttpd uses the openldap library from Cool Stack. I also built and installed pcre-7.1 in /opt/coolstack using the following script:

#!/bin/sh
INSTALLDIR=/opt/coolstack
CFLAGS="-fast -xipo -xtarget=generic"

make distclean
./configure --prefix=$INSTALLDIR CFLAGS="$CFLAGS"
make
make install

And finally, here is a script to build lighttpd using the Sun Studio compiler on Solaris:

#!/bin/sh
INSTALLDIR=/opt/coolstack
LDFLAGS="-L$INSTALLDIR/lib -L/usr/sfw/lib -lsendfile -R/usr/sfw/lib"
CFLAGS="-fast -xipo -xtarget=generic"
PCRECONFIG="$INSTALLDIR/bin/pcre-config"
PATH=/opt/SUNWspro/bin:/usr/ccs/bin:/usr/sbin:/usr/bin:
export PATH

make distclean

./configure --prefix=$INSTALLDIR/lighttpd --with-pic \
    --with-openssl=/usr/sfw --with-ldap \
    --with-bzip2 --with-pcre=$INSTALLDIR --disable-ipv6 \
    CFLAGS="$CFLAGS" LDFLAGS="$LDFLAGS" PCRECONFIG="$PCRECONFIG"

make
make install

Tuning lighttpd

By default, lighttpd uses poll as the event handler on Solaris; however, it does include support for devpoll, which scales better (no support for event ports yet). To use devpoll, add the following to your lighttpd.conf:

server.event-handler = "solaris-devpoll"

By default, lighttpd uses sendfilev to do the writes, but several issues in Solaris 10 and OpenSolaris cause sendfilev to have performance and stability problems:

6455727 - lighttpd cannot be killed because it hangs in sendfilev()
6505740 - TCP does not wake up all waiting threads waiting for TCP zero copy completion notification
6532645 - implement ZCP sendfile for 64-bit apps
5003297 - sendfile/sendfilev should avoid data copy

Some of these issues are already fixed in later Nevada builds, but if you're running on Solaris 10, it's best to use writev by adding the following:

server.network-backend = "writev"

If you're running on a multi-core system, you should also set max-worker, as lighttpd is single-process and single-threaded by default. A good rule of thumb is to set the number of workers to two times the number of cores:

server.max-worker = 4
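Putting the three Solaris-specific settings together, the relevant fragment of lighttpd.conf would look like this (the worker count of 4 assumes a two-core box; scale it to your hardware):

```
server.event-handler   = "solaris-devpoll"
server.network-backend = "writev"
server.max-worker      = 4
```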


For other generic lighttpd tuning tips, see the lighttpd documentation.

About

I'm a Senior Staff Engineer in the Performance & Applications Engineering Group (PAE). This blog focuses on tips to build, configure, tune and measure performance of popular open source web applications on Solaris.
