Thursday May 21, 2009
By shanti on May 21, 2009
Following on the heels of our memcached performance tests on the SunFire X2270 (Sun's Nehalem-based server) running OpenSolaris, we ran the same tests on the same server, this time on RHEL5. As mentioned in the post presenting the first memcached results, a 10 GbE Intel Oplin card was used in order to achieve the high throughput rates possible with these servers. It turned out that using this card on Linux involved a bit of work, resulting in driver and kernel upgrades:
- With the default ixgbe driver from the Red Hat distribution (version 1.3.30-k2 on kernel 2.6.18), the interface simply hung during the benchmark test.
- This led to downloading the driver from the Intel site (126.96.36.199-2-NAPI) and re-compiling it. This version does work, and we got a maximum throughput of 232K operations/sec on the same Linux kernel (2.6.18). However, this version of the kernel does not have support for multiple rings.
- Kernel version 2.6.29 includes support for multiple rings but still doesn't ship the latest ixgbe driver (1.3.56-2-NAPI). So we downloaded, built and installed these versions of the kernel and driver. This worked well, giving a maximum throughput of 280K ops/sec with some tuning.
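For reference, building an out-of-tree Intel driver generally follows the same procedure; here is a sketch saved as a script so it can be reviewed before running as root. The tarball name, paths and reload steps are assumptions based on Intel's usual driver packaging, not the exact commands from this test.

```shell
# Sketch of the out-of-tree ixgbe build (names and paths assumed;
# check the README shipped with the driver for your version).
cat > /tmp/build-ixgbe.sh <<'EOF'
#!/bin/sh -e
tar xzf ixgbe-1.3.56.2-NAPI.tar.gz
cd ixgbe-1.3.56.2-NAPI/src
make                 # builds against the running kernel's headers
make install         # installs ixgbe.ko for the running kernel
rmmod ixgbe || true  # reload so the new driver takes effect
modprobe ixgbe
EOF
chmod +x /tmp/build-ixgbe.sh
```

Review the script, then run it as root on the target machine.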
The system running OpenSolaris and memcached 1.3.2 gave us a maximum throughput of 350K ops/sec, as previously reported. The same system running RHEL5 (with kernel 2.6.29) and the same version of memcached reached 280K ops/sec. OpenSolaris outperforms Linux by 25%!
The following Linux tunables were changed to try and get the best performance:
net.ipv4.tcp_timestamps = 0
net.core.wmem_default = 67108864
net.core.wmem_max = 67108864
net.core.optmem_max = 67108864
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_window_scaling = 0
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_max_syn_backlog = 200000
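To make such settings persistent, they typically go into /etc/sysctl.conf and are loaded with sysctl -p; a minimal sketch, written to a temp file here so it can be reviewed before applying as root:

```shell
# Write the tunables to a file; as root you would append them to
# /etc/sysctl.conf (or a sysctl.d fragment) and run `sysctl -p`.
cat > /tmp/memcached-tunables.conf <<'EOF'
net.ipv4.tcp_timestamps = 0
net.core.wmem_default = 67108864
net.core.wmem_max = 67108864
net.core.optmem_max = 67108864
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_window_scaling = 0
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_max_syn_backlog = 200000
EOF
# sysctl -p /tmp/memcached-tunables.conf   # run as root to apply
```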
Here are the ixgbe specific settings that were used (2 transmit, 2 receive rings):
RSS=2,2 InterruptThrottleRate=1600,1600
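Module parameters like these are usually passed via modprobe configuration; a sketch (the exact file location is an assumption and varies by distribution - /etc/modprobe.conf on older systems, /etc/modprobe.d/ on newer ones):

```shell
# Write the ixgbe module options; as root you would place this line
# in /etc/modprobe.conf (or /etc/modprobe.d/ixgbe.conf) and reload
# the module for it to take effect.
cat > /tmp/ixgbe-options.conf <<'EOF'
options ixgbe RSS=2,2 InterruptThrottleRate=1600,1600
EOF
# modprobe -r ixgbe && modprobe ixgbe   # run as root to reload
```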
The following settings in /etc/system were used to set the number of MSI-X interrupts:
set ddi_msix_alloc_limit=4
set pcplusmp:apic_intr_policy=1
For the ixgbe interface, 4 transmit and 4 receive rings gave the best performance:
Finally, we bound the crossbow threads:
dladm set-linkprop -p cpus=12,13,14,15 ixgbe0
Monday May 11, 2009
By shanti on May 11, 2009
The first cut of a Java EE implementation of Olio is now checked into the repository. The file docs/java_setup.html gives instructions on how to build and setup this implementation. The implementation uses JSP, servlets, JPA for persistence, yahoo and Jmaki widgets for AJAX etc. The web application is located in webapp/java/trunk and the load driver, database and file loaders etc. are in workload/java/trunk.
Check it out.
Monday Apr 27, 2009
By shanti on Apr 27, 2009
As promised, here are more results running memcached on Sun's X2270 (Nehalem-based server). In my previous post, I mentioned that we got 350K ops/sec running a single instance of memcached, at which point the throughput was hampered by the scalability issues of memcached. So we ran two instances of memcached on the same server, each using 15GB of memory, and tested both the 1.2.5 and 1.3.2 versions. Here are the results:
The maximum throughput was 470K ops/sec using 4 threads in memcached 1.3.2. Performance of 1.2.5 was only very slightly lower. At this throughput, the network capacity of the single 10 GbE card was reached, as the benchmark does a lot of small packet transfers. See my earlier post for a description of the server configuration and the benchmark. At the maximum throughput, the CPU was still only 62% utilized (73% in the case of 1.2.5). Note that with a single instance we were using the same amount of CPU but reaching a lower throughput rate, which once again points to memcached scalability issues.
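Launching two instances like this is straightforward; here is a sketch of a launcher script. The ports and flags are assumptions for illustration (15 GB of cache and 4 worker threads per instance, matching the configuration described above), not our exact command lines.

```shell
# Hypothetical launcher for the two-instance setup: each memcached
# gets 15 GB of cache and 4 threads, on separate ports.
cat > /tmp/start-memcached.sh <<'EOF'
#!/bin/sh
memcached -d -u nobody -m 15360 -t 4 -p 11211
memcached -d -u nobody -m 15360 -t 4 -p 11212
EOF
chmod +x /tmp/start-memcached.sh
```

Clients then spread their connections across both ports.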
These are really exciting results. Stay tuned - there is more exciting information coming.
Saturday Apr 18, 2009
By shanti on Apr 18, 2009
Memcached is the de facto distributed caching server used to scale many web2.0 sites today. As sites grow and must support very large numbers of users, memcached aids scalability by effectively cutting down on MySQL traffic and improving response times.
Memcached is a very light-weight server but is known not to scale beyond 4-6 threads. Some scalability improvements have gone into the 1.3 release (still in beta). With the new Intel Nehalem-based systems' improved hyper-threading providing up to twice the performance of current systems, we were curious to see how memcached would perform on them. So we ran some tests, the results of which are shown below:
memcached 1.3.2 does scale slightly better than 1.2.5 beyond 4 threads. However, both versions reach their peak at 8 threads, with 1.3.2 giving about 14% better throughput at 352,190 operations/sec.
The improvements made to per-thread stats have certainly helped, as we no longer see stats_lock at the top of the profile. That honor now goes to cache_lock. With the increased performance of new systems making 350K ops/sec possible, breaking up this (and other) locks in memcached is necessary to improve scalability.
A single instance of memcached was run on a SunFire X2270 (2 socket Nehalem) with 48GB of memory and an Oplin 10G card. Several external client systems were used to drive load against the server using an internally developed Memcached benchmark. More on the benchmark later.
The clients connected to the server using a single 10 Gigabit Ethernet link. At the maximum throughput of 350K, the network was about 52% utilized and the server was 62% utilized. So there is plenty of headroom on this system to handle a much higher load if memcached could scale better. Of course, it is possible to run multiple instances of memcached to get better performance and better utilize the system resources, and we plan to do that next. It is important to note that utilizing these high performance systems effectively for memcached will require the use of 10 GbE interfaces.
The Memcached benchmark we ran is based on Apache Olio - a web2.0 workload. I recently showcased results from Olio on Nehalem systems as well. Since Olio is a complex multi-tier workload, we extracted the memcached part to more easily test it in a stand-alone environment. This gave rise to our Memcached benchmark.
The benchmark initially populates the server cache with objects of different sizes to simulate the types of data that real sites typically store in memcached:
- small objects (4-100 bytes) to represent locks and query results
- medium objects (1-2 KBytes) to represent thumbnails, database rows, resultsets
- large objects (5-20 KBytes) to represent whole or partially generated pages
The benchmark then runs a mixture of operations (90% gets, 10% sets) and measures the throughput and response times once the system reaches steady state. The workload is implemented using Faban, an open-source benchmark development framework. Faban not only speeds benchmark development, but its harness is a great way to queue, monitor and archive runs for analysis. Stay tuned for further results.
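To give a feel for the operation mix, here is a toy sketch of the 90/10 get/set selection in plain shell using $RANDOM. This is purely illustrative - the real driver uses Faban's load generation, not anything like this loop.

```shell
# Toy simulation of the benchmark's operation mix: each iteration
# picks "get" with 90% probability, "set" otherwise.
ops=/tmp/opmix.txt
: > "$ops"
i=0
while [ "$i" -lt 1000 ]; do
  if [ $((RANDOM % 100)) -lt 90 ]; then
    echo get >> "$ops"
  else
    echo set >> "$ops"
  fi
  i=$((i + 1))
done
sort "$ops" | uniq -c    # rough counts: about 9 gets per set
```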
Tuesday Apr 14, 2009
By shanti on Apr 14, 2009
We introduced Olio a little while ago as a toolkit to help web developers and deployers as well as performance/operations engineers. Olio includes a web2.0 application as well as the necessary software required to drive load against it. Today, we are showcasing the first major deployment of Olio on Sun's newest Intel Nehalem-based systems - the SunFire X2270 and the SunFire X4270. We tested 10,000 concurrent users (with a database of 1 million users) using over 1TB of storage in the unstructured object store.
The diagram below shows the configuration we tested.
The Olio/PHP web application was deployed on two X2270 systems. Since these systems are wickedly fast, we also chose to run memcached on them, eliminating the need for a separate memcached tier. The structured data in Olio resides in MySQL. For this deployment, the database used MySQL Replication and was deployed using one master node and two slave nodes - all X4270 systems. The databases were created on ZFS on the internal drives of these systems. The unstructured data resides on a regular filesystem created on the Sun Storage 7210 NAS appliance (Amber Road).
I think this is a great solution for web2.0 applications - the new Nehalem servers are extremely powerful allowing you to run a lot of users on each server, resulting in a smaller footprint and easier deployment and maintenance. Of course, this requires a little more effort in terms of tuning the software stack to ensure it can scale and utilize the CPU effectively.
The entire configuration, tuning information and performance results are documented in detail in a Sun Blueprint titled A Web2.0 Deployment on OpenSolaris and Sun Systems. So check it out and let me know if you have any questions or comments.
Friday Mar 20, 2009
By shanti on Mar 20, 2009
Although Rails is a great development environment for web applications, for a newbie the deployment of a Rails application can be challenging due to the myriad dependencies on various gems, native libraries, etc.
image_science is one such Ruby library that provides an easy way to generate thumbnails. It is therefore quite popular in web2.0-type applications (there isn't a site today that doesn't let you upload photographs of yourself, your pets, gadgets, whatever). It is a very simple implementation, available as a Ruby gem, and thus easy to install. However, the real work is done by a native library called FreeImage, and installing this on OpenSolaris is a little bit of work. Although I use OpenSolaris here, the instructions apply to Solaris 10 as well if you are using Ruby from Web Stack.
I found instructions from Joyent to build FreeImage on OpenSolaris but found them to be erroneous. To install FreeImage, do the following:
- Download the source from the repository using the command: cvs -z3 -d:pserver:anonymous@freeimage.cvs.sourceforge.net:/cvsroot/freeimage co -P FreeImage
- Edit FreeImage/Makefile.solaris:
- Change INSTALLDIR to /opt/local
- Change all the lines for the install target as follows:
install:
    install -m 644 -u root -g root -f $(INSTALLDIR)/include Source/FreeImage.h
    install -m 644 -u root -g root -f $(INSTALLDIR)/lib $(STATICLIB)
    install -m 755 -u root -g root -f $(INSTALLDIR)/lib $(SHAREDLIB)
    ln -sf $(INSTALLDIR)/lib/$(SHAREDLIB) $(INSTALLDIR)/lib/$(LIBNAME)
- Make all the install directories: mkdir -p /opt/local/lib /opt/local/include
- Ensure you have gcc in your PATH (it's in /usr/sfw/bin).
- Now we are ready to build the library: gmake -f Makefile.solaris
gmake -f Makefile.solaris install
If everything went smoothly, you should see the following files in /opt/local:
# ls /opt/local/include
# ls -l /opt/local/lib
-rwxr-xr-x 1 root root 2978480 Mar 17 13:35 libfreeimage-3.12.0.so
-rw-r--r-- 1 root root 3929756 Mar 17 13:35 libfreeimage.a
lrwxrwxrwx 1 root root 22 Mar 17 13:43 libfreeimage.so.3 -> libfreeimage-3.12.0.so
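One gotcha worth noting (my assumption, based on RubyInline compiling extensions at runtime rather than anything specific to this setup): make sure the runtime linker can find the new library in /opt/local/lib before using the gem.

```shell
# Session-local fix; crle(1) can make the path permanent on
# (Open)Solaris, but exporting it is enough for a quick test.
export LD_LIBRARY_PATH=/opt/local/lib:$LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```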
Now that we have FreeImage installed, installing ImageScience itself is really easy. But first, make sure you have the latest version of rubygems (1.3.1). The default rubygems in OpenSolaris 2008.11 is 0.9.4.
# gem --version
# gem install rubygems-update
Bulk updating Gem source index for: http://gems.rubyforge.org
Successfully installed rubygems-update-1.3.1
Next, run the update_rubygems command. This will print a lot of messages, but when it's complete you should have rubygems 1.3.1.
# gem --version
We can now install the image_science gem. This will automatically install all dependent gems, so the messages you see depend on what you already have installed. On an OpenSolaris 2008.11 system you should see:
bash-3.2# gem install image_science
Successfully installed rubyforge-1.0.3
Successfully installed rake-0.8.4
Successfully installed hoe-1.11.0
Successfully installed ZenTest-4.0.0
Successfully installed RubyInline-3.8.1
Successfully installed image_science-1.1.3
6 gems installed
This will be followed by more messages indicating documentation for the above modules was also installed.
You are now ready to use image_science. Have fun!
By shanti on Mar 20, 2009
We have just released the first binary version of Apache Olio for both the PHP and Rails implementation. Both implementations have been tested quite thoroughly now and we think they are robust enough for serious use - especially for performance testing the workloads.
I introduced Olio in a previous post. It is a toolkit that includes a sample web2.0 application implemented in both PHP and Rails that includes a load generator to drive load against the application.
Please visit the Olio site and download the kits. If you find it interesting, I invite you to come join the project.
Wednesday Nov 12, 2008
By shanti on Nov 12, 2008
For the last few months, I've been working feverishly to get the web2.0kit open-sourced. What is the web2.0kit, you ask? We introduced it at our session at the Velocity conference. The web2.0kit is a reference architecture to help anyone running a web application evaluate the suitability, functionality and performance of various web technologies.
Most web2.0 sites today use open source languages and frameworks such as PHP, Ruby on Rails and Java EE to develop their applications. Deployments of these applications also use popular open source servers such as Apache httpd, MySQL, memcached and GlassFish. Many other servers/technologies such as lighttpd, nginx, MogileFS, mongrel, thin and JRuby are also gaining popularity. To help understand the differences among these various web technology stacks and infrastructure applications, we developed the web2.0kit, which has now been open-sourced as an Apache Incubator project called Olio.
I view Olio as a tool to aid developers, deployers and performance engineers alike. For developers, Olio provides three different implementations of the exact same application using three different languages and their associated frameworks - PHP, Java EE and Rails. (At this time, the Java EE version is not yet in the Olio repository, but it will be soon.) Developers can browse the source code and understand how to design and code a complex web2.0 application. Even experienced PHP developers may gain by looking at the Olio PHP application, as we've tried to design the application using object-oriented principles and well-known design patterns - typically not seen much in the PHP world! In fact, a couple of fairly large companies in China are already using Olio as a training tool for their new hires/interns. If you've been considering Rails but have been hesitant, here's your chance to check out a full-blown app and see what it will take to develop yours.
Developers can also experiment with different pieces of technology by modifying Olio. For example, Olio/PHP provides both ODBC and PDO interfaces to access the database. You can easily add a mysqli one if you want to test that. Similarly, you can test the various methods of caching that are possible in a Rails application.
When it comes to deployment, there are myriad hardware and software solutions to choose from. Should you choose Apache or lighttpd? And what about nginx? You can use Olio to performance-test various solutions and determine for yourself how these servers will behave in your installation. Unlike most test results you see out there, where someone runs a toy test comparing Apache and lighttpd using http_load or, worse, ab (see my blog post on the problems with these tools), Olio lets you test a real web2.0-like site with features that are common today, including Ajax, tags, comments, mashups, etc.
We in the Performance and Applications Engineering (PAE) group at Sun have decades of experience developing and tuning workloads. We have brought that experience to bear in developing Olio, and we hope that you can benefit from it too.
Please see the instructions on the Olio website for downloading source code. To make it easier for users to get started, we have made available pre-built binary packages from the Sun Download Center.
Please join the Olio user list to get help with setting up Olio or to make any suggestions.
Wednesday Oct 08, 2008
By shanti on Oct 08, 2008
We released Cool Stack 1.3.1 recently. This was the last release that was built to work on releases as old as Solaris 10 01/06 (Update 1). Going forward, future versions of the stack will only be supported on newer Solaris 10 updates. So I'd like to urge everyone who is running older releases to please schedule their systems for upgrade. We highly recommend that you upgrade to at least Solaris 10 01/08 (Update 5) as it has many performance, security and other fixes.
For some time now, we've had two different stacks for Solaris 10 and OpenSolaris - Cool Stack for Solaris 10 and Web Stack for OpenSolaris. We cannot continue to sustain this model where we have two different source bases with slightly different versions of various components. Further, there has been interest from some large customers for a single stack to be supported on Solaris 10, OpenSolaris as well as Linux. These customers also wanted production support for the stack.
To enable a unified stack, we are now transitioning Cool Stack to Sun Web Stack. This new stack was announced at OSCON in July. It will be very similar to Cool Stack in that it will be separately downloadable, but it will be a full-fledged product in that customers who want production support can now purchase it. The stack will continue to be available free of charge with limited support via a forum.
The first version of Sun Web Stack will be 1.4 (keeping the Cool Stack versioning in place) and should be available in November. Current Cool Stack 1.3.x customers may not see new functionality in terms of upgrades to components in this release as we are trying to sync it to the current versions of components in OpenSolaris. However, Zones users may want to take a look at this release as it does separate out the binaries from the configuration/log files, making it easier for multiple zones to share a single installation in /opt and yet use custom configuration/logs which will be located in /etc, /var etc. similar to other Solaris applications.
With this transition to Sun Web Stack, I will no longer be directly associated with the development of the stack. I will, however, keep an eye on its performance and continue to push for increased performance via compiler optimizations or required source changes. We in the performance team will continue to use, test and analyze the performance of many of the components in the stack. So hopefully you will continue to monitor my blog as I talk more about the performance aspect of the stack.
Please do continue to use Cool Stack and Sun Web Stack as that product comes out. Sun is committed to a high-performing web stack of open source components on all platforms.
Thursday Jun 26, 2008
By shanti on Jun 26, 2008
I just got back from 3 days of conferences - 2 days at Velocity and one at Structure '08. The Velocity Conference was billed as the first conference devoted exclusively to web performance and operations, and the sessions did live up to this. They had over 700 attendees, which is not bad for a first-time conference.
Our session was of course on tuning the server side. There was another one on squid/varnish and MySQL sharding - but beyond that, the client was king.
One thing to note is that the people giving these talks on the client side are from the likes of Yahoo, Google and Microsoft. They have obviously finished with their server-side tuning and then had an a-ha moment when, after all that tuning, they realized that end-user response time was still bad - and so have shifted their focus to client-side issues.
But if you're a startup or have just started deploying your application, I think the server-side should still remain the first area of focus. If your application isn't designed right or the web stack/server stack isn't tuned, you will not get far. As an example, running apache on Linux without any tuning will barely support a couple of hundred connections. When you get to the point where you're comfortable that the server stack can handle load and scale reasonably, then yes, by all means shift your focus to the client-side.
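As an illustration of the kind of server-side tuning I mean, raising Apache's prefork process limits is usually the first step. The directive names below are standard Apache 2.2 prefork settings, but the numbers are placeholders, not recommendations - size them to your application's memory footprint.

```shell
# Sketch of a prefork tuning fragment for httpd.conf
# (illustrative values only; tune MaxClients to available RAM).
cat > /tmp/prefork-tuning.conf <<'EOF'
<IfModule prefork.c>
    StartServers          16
    MinSpareServers       16
    MaxSpareServers       64
    ServerLimit         1024
    MaxClients          1024
    MaxRequestsPerChild    0
</IfModule>
EOF
```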
Ideally, you would attack both at the same time - but probably no one has the resources to focus on everything at once.
So, if server-side issues are still plaguing you, please feel free to check out our presentation.
Thursday Jun 12, 2008
By shanti on Jun 12, 2008
As I indicated in my previous post on MySQL performance, we have been doing some performance work using an internally developed web2.0 application. Akara and I will be presenting this app publicly to a large audience for the first time at the upcoming Velocity Conference in Burlingame, CA on June 23, 24. Check out our abstract. Most of our work uses Cool Stack so a lot of the results we will be presenting will be based on that. If you're struggling with performance issues, this conference may be worth checking out.
If you will be attending the conference, please stop by and say hello. It's always good to see people whom we only know through blogs and forums.
I'm a Senior Staff Engineer in the Performance & Applications Engineering Group (PAE). This blog focuses on tips to build, configure, tune and measure performance of popular open source web applications on Solaris.