Thursday May 21, 2009
By shanti on May 21, 2009
Following on the heels of our memcached performance tests
on SunFire X2270 ( Sun's Nehalem-based server) running OpenSolaris, we
ran the same tests on the same server but this time on RHEL5. As
mentioned in the post presenting the first memcached results,
a 10GBE Intel Oplin card was used in order to achieve the high
throughput rates possible with these servers. It turned out that using
this card on Linux involved a bit of work, resulting in driver and kernel upgrades:
- With the default ixgbe driver from the RedHat distribution (version 1.3.30-k2 on kernel 2.6.18), the interface simply hung during the benchmark test.
- This led to downloading a newer NAPI driver from the Intel site and re-compiling it. This version does work, and we got a maximum throughput of 232K operations/sec on the same Linux kernel (2.6.18). However, this version of the kernel does not have support for multiple rings.
- Kernel version 2.6.29 includes support for multiple rings but still doesn't ship the latest ixgbe driver, 1.3.56-2-NAPI. So we downloaded, built and installed these versions of the kernel and driver. This worked well, giving a maximum throughput of 280K ops/sec with some tuning.
The system running
OpenSolaris and memcached 1.3.2 gave us a maximum throughput of 350K
ops/sec as previously reported. The same system running RHEL5 (with
kernel 2.6.29) and the same version of memcached resulted in 280K
ops/sec. OpenSolaris outperforms Linux by 25%!
The following Linux tunables were changed to try and get the best performance:
net.ipv4.tcp_timestamps = 0
net.core.wmem_default = 67108864
net.core.wmem_max = 67108864
net.core.optmem_max = 67108864
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_window_scaling = 0
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_max_syn_backlog = 200000
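To apply these as a batch, they can be collected into a sysctl.conf-style fragment; here is a minimal sketch (the file name is my choice, not from the original tuning session):

```shell
#!/bin/sh
# Sketch: collect the tunables above into a sysctl.conf-style fragment.
# The file name is illustrative; apply it as root with
#   sysctl -p ./90-memcached.conf
# or install it under /etc/sysctl.d/ on newer distributions.
cat > 90-memcached.conf <<'EOF'
net.ipv4.tcp_timestamps = 0
net.core.wmem_default = 67108864
net.core.wmem_max = 67108864
net.core.optmem_max = 67108864
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_window_scaling = 0
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_max_syn_backlog = 200000
EOF
echo "wrote 90-memcached.conf"
```

Keeping the settings in a file rather than issuing `sysctl -w` one-by-one makes the tuning survive a reboot and easy to diff between runs.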
Here are the ixgbe specific settings that were used (2 transmit, 2 receive rings):
RSS=2,2 InterruptThrottleRate=1600,1600
The following settings in /etc/system were used to set the number of MSI-X interrupts:
set ddi_msix_alloc_limit=4
set pcplusmp:apic_intr_policy=1
For the ixgbe interface, 4 transmit and 4 receive rings gave the best performance:
Finally, we bound the crossbow threads:
dladm set-linkprop -p cpus=12,13,14,15 ixgbe0
Monday May 11, 2009
By shanti on May 11, 2009
The first cut of a Java EE implementation of Olio is now checked into the repository. The file docs/java_setup.html gives instructions on how to build and set up this implementation. The implementation uses JSP, servlets, JPA for persistence, and Yahoo and jMaki widgets for AJAX. The web application is located in webapp/java/trunk, and the load driver, database and file loaders are in workload/java/trunk.
Check it out.
Monday Apr 27, 2009
By shanti on Apr 27, 2009
As promised, here are more results running memcached on Sun's X2270 (Nehalem-based server). In my previous post, I mentioned that we got 350K ops/sec running a single instance of memcached at which point the throughput was hampered by the scalability issues of memcached. So we ran two instances of memcached on the same server, each using 15GB of memory, and tested both the 1.2.5 and 1.3.2 versions. Here are the results:
The maximum throughput was 470K ops/sec using 4 threads in memcached 1.3.2. Performance of 1.2.5 was only very slightly lower. At this throughput, the network capacity of the single 10GbE card was reached, as the benchmark does a lot of small packet transfers. See my earlier post for a description of the server configuration and the benchmark. At the maximum throughput, the CPU was still only 62% utilized (73% in the case of 1.2.5). Note that with a single instance we were using the same amount of CPU but reaching a lower throughput rate, which once again points to memcached scalability issues.
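For reference, starting two instances on one box is straightforward. The post gives only the per-instance memory (15GB) and thread count (4); the port numbers and daemonizing flag below are my assumptions, and the sketch echoes the commands rather than executing them:

```shell
#!/bin/sh
# Sketch: two memcached instances on one server, one per TCP port.
# Flags: -d daemonize, -p port, -t worker threads, -m cache size in MB.
# -m 15360 (15 GB) and -t 4 match the results above; ports are assumed.
cmds=""
for port in 11211 11212; do
  cmds="${cmds}memcached -d -p ${port} -t 4 -m 15360
"
done
printf '%s' "$cmds"
```

Clients then spread their keys across both ports, which is how memcached deployments normally scale past a single instance's lock contention.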
These are really exciting results. Stay tuned - there is more exciting information coming.
Saturday Apr 18, 2009
By shanti on Apr 18, 2009
Memcached is the de-facto distributed caching server used to scale many web2.0
sites today. With the requirement to support a very large number of
users as sites grow, memcached aids scalability by effectively cutting
down on MySQL traffic and improving response times.
Memcached is a very light-weight server but is known not to scale beyond 4-6 threads. Some scalability improvements have gone into the 1.3 release (still in beta). With the new Intel Nehalem based systems' improved hyper-threading providing twice as much performance as current systems, we were curious to see how memcached would perform on them. So we ran some tests, the results of which are shown below:
memcached 1.3.2 does scale slightly better than 1.2.5 after 4 threads. However, both versions reach their peak at 8 threads with 1.3.2 giving about 14% better throughput at 352,190 operations/sec.
The improvements made to per-thread stats certainly have helped as we no longer see stats_lock at the top of the profile. That honor now goes to cache_lock. With the increased performance of new systems making 350K ops/sec possible, breaking up of this (and other) lock(s) in memcached is necessary to improve scalability.
A single instance of memcached was run on a SunFire X2270 (2 socket Nehalem) with 48GB of memory and an Oplin 10G card. Several external client systems were used to drive load against the server using an internally developed Memcached benchmark. More on the benchmark later.
The clients connected to the server using a single 10 Gigabit Ethernet link. At the maximum throughput of 350K, the network was about 52% utilized and the server was 62% utilized. So there is plenty of head-room on this system to handle a much higher load if memcached could scale better. Of course, it is possible to run multiple instances of memcached to get better performance and better utilize the system resources, and we plan to do that next. It is important to note that utilizing these high performance systems effectively for memcached will require the use of 10GbE interfaces.
The Memcached benchmark we ran is based on Apache Olio - a web2.0 workload. I recently showcased results from Olio on Nehalem systems as well. Since Olio is a complex multi-tier workload, we extracted the memcached part to more easily test it in a stand-alone environment. This gave rise to our Memcached benchmark.
The benchmark initially populates the server cache with objects of different sizes to simulate the types of data that real sites typically store in memcached:
- small objects (4-100 bytes) to represent locks and query results
- medium objects (1-2 KBytes) to represent thumbnails, database rows, resultsets
- large objects (5-20 KBytes) to represent whole or partially generated pages
The benchmark then runs a mixture of operations (90% gets, 10% sets) and measures the throughput and response times when the system reaches steady-state. The workload is implemented using Faban, an open-source benchmark development framework. It not only speeds benchmark development, but the Faban harness is a great way to queue, monitor and archive runs for analysis. Stay tuned for further results.
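As a toy illustration of that operation mix (not the benchmark itself, which is a Faban driver), the 90/10 get/set split amounts to a weighted coin flip per request:

```shell
#!/bin/bash
# Toy sketch of the benchmark's 90% get / 10% set operation mix.
# Illustrative only; the real driver is implemented in Faban.
gets=0; sets=0
for i in $(seq 1 1000); do
  # $RANDOM is 0..32767; the tiny modulo bias is irrelevant for a sketch
  if [ $((RANDOM % 100)) -lt 90 ]; then
    gets=$((gets + 1))        # here the driver would issue a memcached get
  else
    sets=$((sets + 1))        # here the driver would issue a memcached set
  fi
done
echo "gets=$gets sets=$sets"
```

Over 1000 simulated requests the get count lands near 900, mirroring the read-heavy traffic a web2.0 front end sends at its cache tier.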
Tuesday Apr 14, 2009
By shanti on Apr 14, 2009
We introduced Olio a little while ago as a toolkit to help web developers and deployers as well as performance/operations engineers. Olio
includes a web2.0 application as well as the necessary software
required to drive load against it. Today, we are showcasing the first
major deployment of Olio on Sun's newest Intel Nehalem based systems
- the SunFire
X2270 and the SunFire
X4270. We tested 10,000 concurrent users (with a database of 1 million users) using over 1TB of storage in the unstructured object store.
The diagram below shows the configuration we tested.
The Olio/PHP web application was deployed on two X2270 systems. Since these systems are wickedly fast, we also chose to run memcached on them, eliminating the need for a separate memcached tier. The structured data in Olio resides in MySQL. For this deployment, the database used MySQL Replication and was deployed using one master node and 2 slave nodes - all nodes were X4270 systems. The databases were created on ZFS on the internal drives of these systems. The unstructured data resides on a regular filesystem created on the Amber Road NAS appliance - the Sun Storage 7210.
I think this is a great solution for web2.0 applications - the new Nehalem servers are extremely powerful allowing you to run a lot of users on each server, resulting in a smaller footprint and easier deployment and maintenance. Of course, this requires a little more effort in terms of tuning the software stack to ensure it can scale and utilize the CPU effectively.
The entire configuration, tuning information and performance results are documented in detail in a Sun Blueprint titled A Web2.0 Deployment on OpenSolaris and Sun Systems. So check it out and let me know if you have any questions or comments.
Friday Mar 20, 2009
By shanti on Mar 20, 2009
Although Rails is a great development environment for web applications, for a newbie the deployment of a Rails application can be challenging due to the myriad dependencies on various gems, native libraries, etc.
image_science is one such ruby library that provides an easy way to generate thumbnails. It is therefore quite popular in web2.0 type applications (there isn't a site today that doesn't let you upload photographs of yourself, your pets, gadgets, whatever). It is a very simple implementation, available as a ruby gem, and so easy to install. However, the real work is done by a native library called FreeImage, and installing that on OpenSolaris is a little bit of work. Although I use OpenSolaris here, the instructions apply to Solaris 10 as well if you are using ruby from Web Stack.
I found instructions from Joyent to build FreeImage on OpenSolaris, but they turned out to be erroneous. To install FreeImage, do the following:
- Download the source from the repository using the command: cvs -z3 -d:pserver:anonymous@freeimage.cvs.sourceforge.net:/cvsroot/freeimage co -P FreeImage
- Edit FreeImage/Makefile.solaris:
- Change INSTALLDIR to /opt/local
- Change all the lines for the install target as follows:
install:
install -m 644 -u root -g root -f $(INSTALLDIR)/include Source/FreeImage.h
install -m 644 -u root -g root -f $(INSTALLDIR)/lib $(STATICLIB)
install -m 755 -u root -g root -f $(INSTALLDIR)/lib $(SHAREDLIB)
ln -sf $(INSTALLDIR)/lib/$(SHAREDLIB) $(INSTALLDIR)/lib/$(LIBNAME)
- Make all the install directories: mkdir -p /opt/local/lib /opt/local/include
- Ensure you have gcc in your PATH (it's in /usr/sfw/bin).
- Now we are ready to build the library: gmake -f Makefile.solaris
gmake -f Makefile.solaris install
If everything went smoothly, you should see the following files in /opt/local:
# ls /opt/local/include
FreeImage.h
# ls -l /opt/local/lib
-rwxr-xr-x 1 root root 2978480 Mar 17 13:35 libfreeimage-3.12.0.so
-rw-r--r-- 1 root root 3929756 Mar 17 13:35 libfreeimage.a
lrwxrwxrwx 1 root root 22 Mar 17 13:43 libfreeimage.so.3 -> libfreeimage-3.12.0.so
Now that we have FreeImage installed, installing ImageScience itself is really easy. But first, make sure you have the latest version of rubygems (1.3.1). The default rubygems in OpenSolaris 2008.11 is 0.9.4.
# gem --version
0.9.4
# gem install rubygems-update
Bulk updating Gem source index for: http://gems.rubyforge.org
Successfully installed rubygems-update-1.3.1
# update_rubygems
This will print a lot of messages, but when it's complete you should have rubygems 1.3.1.
# gem --version
1.3.1
We can now install the image_science gem. This will automatically install all dependent gems, so the messages you see depend on what you have installed. On an OpenSolaris 2008.11 system you should see:
bash-3.2# gem install image_science
Successfully installed rubyforge-1.0.3
Successfully installed rake-0.8.4
Successfully installed hoe-1.11.0
Successfully installed ZenTest-4.0.0
Successfully installed RubyInline-3.8.1
Successfully installed image_science-1.1.3
6 gems installed
This will be followed by more messages indicating documentation for the above modules was also installed.
You are now ready to use image_science. Have fun!
Wednesday Nov 12, 2008
By shanti on Nov 12, 2008
For the last few months, I've been working feverishly to get the web2.0kit open-sourced. What is the web2.0kit, you ask? We introduced it at our session at the Velocity conference. The web2.0kit is a reference architecture to help anyone running a web application evaluate the suitability, functionality and performance of various web technologies.
Most web2.0 sites today use open source languages and frameworks such as PHP, Ruby on Rails and Java EE to develop their applications. Deployments of these applications also use popular open source servers such as Apache httpd, MySQL, memcached and glassfish. Many other servers/technologies such as lighttpd, nginx, mogileFS, mongrel, thin, JRuby are also gaining popularity. To help understand the differences in these various web technology stacks and infrastructure applications, we developed the web2.0kit which has now been open-sourced and is an Apache Incubator project called Olio .
I view Olio as a tool to aid developers and deployers as well as performance engineers. For developers, Olio provides 3 different implementations of the exact same application using three different languages and their associated frameworks - PHP, Java EE and Rails. (At this time, the Java EE version is not yet in the Olio repository, but it will be soon.) Developers can browse the source code and understand how to design and code a complex web2.0 application. Even experienced PHP developers may gain by looking at the Olio PHP application, as we've tried to design the application using object-oriented principles and well-known design patterns - typically not seen much in the PHP world! In fact, a couple of fairly large companies in China are already using Olio as a training tool for their new hires/interns. If you've been considering Rails but have been hesitant, here's your chance to check out a full-blown app and see what it will take to develop yours.
Developers can also experiment with different pieces of technology by modifying Olio. For example, Olio/PHP provides both ODBC and PDO interfaces to access the database. You can easily add a mysqli one if you want to test that. Similarly, you can test the various methods of caching that are possible in a Rails application.
When it comes to deployment, there are a myriad hardware and software solutions to choose from. Should you choose apache or lighttpd? And what about nginx? You can use Olio to performance test various solutions and determine for yourself how these servers will behave in your installation. Unlike most test results you see out there, where someone runs a toy test comparing apache and lighttpd using http_load or, worse, ab (see my blog post on the problems with these tools), Olio lets you test a real web2.0-like site with features that are common today, including ajax, tags, comments, mashups, etc.
We, in the Performance and Applications Engineering (PAE) group at Sun, have decades of experience developing and tuning workloads. We have brought that experience to bear in developing Olio, and we hope that you can benefit from it too.
Please see the instructions on the Olio website for downloading source code. To make it easier for users to get started, we have made available pre-built binary packages from the Sun Download Center.
Please join the Olio user list to get help with setting up Olio or to make any suggestions.
Friday Nov 02, 2007
By shanti on Nov 02, 2007
Finally, it's here. We missed our October 31 deadline, but hopefully that won't matter when you see the contents of the release.
Short summary of changes in the 1.2 release.
You can download the new packages from the Sun Download Center.
Some caveats to be aware of :
- These packages will over-write Cool Stack 1.1 packages that you may already have installed. All packages continue to install in /opt/coolstack, but some of the package names have changed. Thus, pkgadd(1M) will not know that the new CSKapache2 package will install in /opt/coolstack/apache2 and over-write the contents of a previous Cool Stack 1.1 installation of CSKamp. So, please do save your current installation. A detailed strategy for doing this is described on the Cool Stack site. Here are some other short-cuts:
- Move /opt/coolstack to /opt/coolstack1.1. Remove all the CSK\* packages installed. Since the actual files in /opt/coolstack no longer exist (as you've moved the directory), pkgrm will do no harm but does change the system's perception of what packages are installed. Then delete /opt/coolstack and start installing the new packages.
- Save just the files you need - typically apache2/conf directory, your php.ini file etc. Then remove all the CSK\* packages and delete /opt/coolstack.
- This release was built and tested on Solaris 10. During final testing on Nevada (aka OpenSolaris or SXDE), we ran into package incompatibility issues. Some packages do not install in a straightforward manner on OpenSolaris. Please use the work-around.
- The php dtrace extension doesn't work. We have identified the fix and will post a patch soon.
I'm a Senior Staff Engineer in the Performance & Applications Engineering Group (PAE). This blog focuses on tips to build, configure, tune and measure performance of popular open source web applications on Solaris.