Monday Apr 27, 2009

Multi-instance memcached performance

As promised, here are more results from running memcached on Sun's X2270 (Nehalem-based server). In my previous post, I mentioned that we got 350K ops/sec from a single instance of memcached, at which point throughput was limited by memcached's scalability issues. So we ran two instances of memcached on the same server, each using 15GB of memory, and tested both the 1.2.5 and 1.3.2 versions. Here are the results:

[Chart: 2-instance memcached performance]

The maximum throughput was 470K ops/sec using 4 threads with memcached 1.3.2; performance of 1.2.5 was only slightly lower. At this throughput, the network capacity of the single 10 GbE card was reached, as the benchmark does a lot of small packet transfers. See my earlier post for a description of the server configuration and the benchmark. At the maximum throughput, the CPU was still only 62% utilized (73% in the case of 1.2.5). Note that with a single instance we were using the same amount of CPU but reaching a lower throughput rate, which once again points to memcached's scalability issues.
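Running multiple instances is largely transparent to clients, since most memcached client libraries hash each key across whatever server list they are given. As a rough illustration (not the Faban-based harness we actually used), a Python client pointed at two local instances might look like the sketch below; the ports and the python-memcached library are assumptions for the example.

    # Illustrative only: spreading keys across two memcached instances on one host.
    # Assumes two instances listening on ports 11211 and 11212 (hypothetical ports)
    # and the python-memcached client library; the benchmark in this post was
    # driven by a Faban-based harness, not this code.
    import memcache

    # python-memcached hashes each key to pick one of the listed servers,
    # so the two instances share the load without any application changes.
    client = memcache.Client(["127.0.0.1:11211", "127.0.0.1:11212"])

    client.set("user:1001:name", "alice")
    print(client.get("user:1001:name"))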

These are really exciting results. Stay tuned - there is more to come.

Saturday Apr 18, 2009

Memcached Performance on Sun's Nehalem System

Memcached is the de-facto distributed caching server used to scale many web2.0 sites today. As sites grow to support very large numbers of users, memcached aids scalability by cutting down on MySQL traffic and improving response times.
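The way it cuts down on database traffic is the classic cache-aside pattern: the application checks memcached first and only queries MySQL on a miss. The sketch below is a minimal Python illustration of that pattern - the python-memcached library, the run_mysql_query helper, the key layout, and the 5-minute expiry are all assumptions for the example, not code from Olio or the benchmark described later.

    # A minimal sketch of the cache-aside pattern that lets memcached absorb
    # read traffic that would otherwise hit MySQL. The client library, the
    # run_mysql_query() helper, and the key layout are illustrative assumptions.
    import memcache

    cache = memcache.Client(["127.0.0.1:11211"])

    def get_user_profile(user_id, run_mysql_query):
        key = "user:%d:profile" % user_id
        profile = cache.get(key)           # try the cache first
        if profile is None:                # cache miss: fall back to the database
            profile = run_mysql_query(
                "SELECT * FROM users WHERE id = %s", (user_id,))
            cache.set(key, profile, time=300)   # keep it for 5 minutes
        return profile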

Memcached is a very lightweight server but is known not to scale beyond 4-6 threads. Some scalability improvements have gone into the 1.3 release (still in beta). With the new Intel Nehalem-based systems, whose improved hyper-threading provides twice the performance of current systems, we were curious to see how memcached would perform. So we ran some tests, the results of which are shown below:

[Chart: memcached throughput vs. number of threads, versions 1.2.5 and 1.3.2]
memcached 1.3.2 does scale slightly better than 1.2.5 beyond 4 threads. However, both versions reach their peak at 8 threads, with 1.3.2 giving about 14% better throughput at 352,190 operations/sec.

The improvements made to per-thread stats have certainly helped, as we no longer see stats_lock at the top of the profile. That honor now goes to cache_lock. With the increased performance of new systems making 350K ops/sec possible, breaking up this lock (and others) in memcached is necessary to improve scalability.

Test Details

A single instance of memcached was run on a SunFire X2270 (2-socket Nehalem) with 48GB of memory and an Oplin 10 GbE card. Several external client systems were used to drive load against the server using an internally developed Memcached benchmark. More on the benchmark later.
The clients connected to the server over a single 10 Gigabit Ethernet link. At the maximum throughput of 350K ops/sec, the network was about 52% utilized and the server was 62% utilized, so there is plenty of headroom on this system to handle a much higher load if memcached could scale better. Of course, it is possible to run multiple instances of memcached to get better performance and better utilize the system resources, and we plan to do that next. It is important to note that utilizing these high-performance systems effectively for memcached will require the use of 10 GbE interfaces.
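As a rough sanity check on those figures (assuming the 52% refers to bandwidth on the 10 GbE link), the implied traffic works out to roughly 1.9 KB on the wire per operation, covering requests, responses, and protocol overhead. This is only a back-of-envelope estimate, not a measured value:

    # Back-of-envelope check, assuming "52% utilized" refers to bandwidth
    # on the single 10 Gigabit Ethernet link.
    link_bits_per_sec = 10e9
    utilization = 0.52
    ops_per_sec = 350000

    bytes_per_sec = link_bits_per_sec * utilization / 8   # about 650 MB/s on the wire
    bytes_per_op = bytes_per_sec / ops_per_sec             # about 1.9 KB per operation
    print("approx. bytes on the wire per operation: %.0f" % bytes_per_op)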

Benchmark Details

The Memcached benchmark we ran is based on Apache Olio - a web2.0 workload. I recently showcased results from Olio on Nehalem systems as well. Since Olio is a complex multi-tier workload, we extracted the memcached part to more easily test it in a stand-alone environment. This gave rise to our Memcached benchmark.

The benchmark initially populates the server cache with objects of different sizes to simulate the types of data that real sites typically store in memcached:

  • small objects (4-100 bytes) to represent locks and query results
  • medium objects (1-2 KBytes) to represent thumbnails, database rows, resultsets
  • large objects (5-20 KBytes) to represent whole or partially generated pages

The benchmark then runs a mixture of operations (90% gets, 10% sets) and measures the throughput and response times once the system reaches steady state. The workload is implemented using Faban, an open-source benchmark development framework. Faban not only speeds benchmark development, but its harness is also a great way to queue, monitor, and archive runs for analysis.
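The real workload is a Faban benchmark, but the core logic is simple enough to sketch. The Python version below - populate the cache with the three size classes, then drive a 90/10 get/set mix and count operations - is only an illustration; the object counts, key names, run duration, and client library are assumptions, not the actual benchmark code.

    # Simplified sketch of the benchmark logic described above. The real
    # workload is a Faban benchmark; this Python version (using the
    # python-memcached client) only illustrates the idea.
    import random
    import time
    import memcache

    client = memcache.Client(["127.0.0.1:11211"])

    SIZE_CLASSES = [          # (min_bytes, max_bytes), matching the mix above
        (4, 100),             # small: locks, query results
        (1024, 2048),         # medium: thumbnails, database rows, result sets
        (5120, 20480),        # large: whole or partially generated pages
    ]
    NUM_OBJECTS = 10000       # illustrative; real runs cache far more data

    def populate():
        for i in range(NUM_OBJECTS):
            lo, hi = random.choice(SIZE_CLASSES)
            client.set("obj:%d" % i, "x" * random.randint(lo, hi))

    def run(duration_secs=60):
        ops = 0
        end = time.time() + duration_secs
        while time.time() < end:
            key = "obj:%d" % random.randrange(NUM_OBJECTS)
            if random.random() < 0.90:            # 90% gets
                client.get(key)
            else:                                 # 10% sets
                lo, hi = random.choice(SIZE_CLASSES)
                client.set(key, "x" * random.randint(lo, hi))
            ops += 1
        print("throughput: %.0f ops/sec" % (ops / duration_secs))

    if __name__ == "__main__":
        populate()
        run()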

Stay tuned for further results.

Tuesday Apr 14, 2009

Scaling Olio on Sun's Nehalem Systems and Amber Road

I introduced Olio a little while ago as a toolkit to help web developers and deployers as well as performance/operations engineers. Olio includes a web2.0 application as well as the necessary software required to drive load against it. Today, we are showcasing the first major deployment of Olio on Sun's newest Intel Nehalem based systems - the SunFire X2270 and the SunFire X4270. We tested 10,000 concurrent users (with a database of 1 million users) using over 1TB of storage in the unstructured object store.

The diagram below shows the configuration we tested.
[Diagram: tested Olio deployment configuration]
The Olio/PHP web application was deployed on two X2270 systems. Since these systems are wickedly fast, we also chose to run memcached on them, which eliminates the need for a separate memcached tier. The structured data in Olio resides in MySQL. For this deployment, the database used MySQL replication with one master node and two slave nodes - all X4270 systems. The databases were created on ZFS on the internal drives of these systems. The unstructured data resides on a regular filesystem created on the Amber Road NAS appliance (Sun Storage 7210).

I think this is a great solution for web2.0 applications - the new Nehalem servers are extremely powerful, allowing you to run a lot of users on each server, which results in a smaller footprint and easier deployment and maintenance. Of course, this requires a little more effort in terms of tuning the software stack to ensure it can scale and utilize the CPU effectively.

The entire configuration, tuning information, and performance results are documented in detail in a Sun Blueprints article titled A Web2.0 Deployment on OpenSolaris and Sun Systems. Check it out and let me know if you have any questions or comments.

About

I'm a Senior Staff Engineer in the Performance & Applications Engineering Group (PAE). This blog focuses on tips to build, configure, tune and measure performance of popular open source web applications on Solaris.
