Setting up memcached on Solaris

These days everyone seems to use memcached, a high-performance, distributed memory object caching system intended to speed up web applications. Performance improves dramatically when you move from disk fetches to RAM fetches. Here is an excellent article explaining memcached, with LiveJournal as a case study.
Did you know that memcached daemons can be set up in Solaris zones too? For this you need to download the memcached package from the Cool Stack site.

What do you need?
1. Cool Stack 1.2
2. memcached Java client APIs (get this jar and this jar).

Create and Boot Zones
Create three zones - zonea, zoneb and zonec - to test memcached. For information on creating Solaris zones, read this article.
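If you haven't built shared-IP zones before, the configuration looks roughly like this. This is a sketch only: the zonepath matches the listing below, but the e1000g0 interface name and the IP address are assumptions for this particular box.

```shell
# zonecfg -z zonea
zonecfg:zonea> create
zonecfg:zonea> set zonepath=/zones/zonea
zonecfg:zonea> add net
zonecfg:zonea:net> set address=129.158.224.242
zonecfg:zonea:net> set physical=e1000g0
zonecfg:zonea:net> end
zonecfg:zonea> commit
zonecfg:zonea> exit
# zoneadm -z zonea install
# zoneadm -z zonea boot
```

Repeat for zoneb and zonec with their own addresses.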

I'm using SXDE 9/07
OK you don't have SXDE? Get it!

Here is the status of my zones:

# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / native shared
4 zonea running /zones/zonea native shared
5 zoneb running /zones/zoneb native shared
6 zonec running /zones/zonec native shared


Start memcached on all Zones

Follow the Cool Stack site's instructions for installing Cool Stack in your zones. When you run pkgadd -d <memcached*.pkg>, memcached gets installed in all available zones, even those that are not in the running state.

When everything is set, these commands should work fine:

zonea# /opt/coolstack/bin/memcached -u phantom -d -m 100 -l 129.158.224.242 -p 11111
zoneb# /opt/coolstack/bin/memcached -u phantom -d -m 100 -l 129.158.224.231 -p 11112
zonec# /opt/coolstack/bin/memcached -u phantom -d -m 100 -l 129.158.224.243 -p 11113


This starts memcached as a non-root user (-u) in daemon mode (-d) on each zone, with a 100 MB memory cap (-m), listening on the given address (-l) and port (-p). 100 MB is fine for testing, but in a production setup the aggregate cache across all daemons can run to a terabyte.

If you don't know already, each Solaris zone can bind to its own IP address and port through a virtual interface. So you don't need three NICs or three machines - just three zones.

Test memcached

Set the classpath to point to the downloaded jars (NetBeans makes this simple). The following snippet stores an object in memory and retrieves it from the memcached daemon running on zonea:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;
    ...
    // Interact with the daemon on zonea
    MemcachedClient c;
    try {
        c = new MemcachedClient(
                new InetSocketAddress("129.158.224.242", 11111));

        String test = "I'm going to be cached!";
        c.set("mykey", 180, test);        // expire after 180 seconds
        Object obj = c.get("mykey");
        System.out.println((String) obj);
        c.delete("mykey");
    } catch (IOException ex) {
        ex.printStackTrace();
    }
    ...

We store the object with a 180-second (3-minute) expiry; after retrieving it, we delete the key to clean up the cache. When you compile and run the program, the output will look like:

2007-10-03 10:38:49.615 INFO net.spy.memcached.MemcachedConnection: Connected to {QA sa=/129.158.224.242:11111, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} immediately
I'm going to be cached!
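The 180 passed to c.set() is a time-to-live in seconds. To illustrate the semantics only (this is not memcached's implementation, and note that real memcached treats an expiry of 0 as "never expire"), here is a toy in-JVM expiring cache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ToyExpiringCache {
    private final Map<String, Entry> map = new ConcurrentHashMap<>();

    private static final class Entry {
        final Object value;
        final long expiresAt;  // epoch millis
        Entry(Object value, long ttlSeconds) {
            this.value = value;
            this.expiresAt = System.currentTimeMillis() + ttlSeconds * 1000;
        }
    }

    public void set(String key, long ttlSeconds, Object value) {
        map.put(key, new Entry(value, ttlSeconds));
    }

    public Object get(String key) {
        Entry e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAt) {
            map.remove(key);   // lazy eviction on read, as memcached does
            return null;
        }
        return e.value;
    }
}
```

The difference, of course, is that this cache lives inside one JVM, while memcached entries survive across processes and machines.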


From your code, you can also connect to multiple memcached servers and store objects.
This gets interesting: you can halt one zone and then try to store an object across all three.

# zoneadm -z zonea halt
# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / native shared
5 zoneb running /zones/zoneb native shared
6 zonec running /zones/zonec native shared
- zonea installed /zones/zonea native shared

memcached is no longer reachable on zonea because the zone is down.
Here is the modified code:

    import java.io.IOException;
    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.MemcachedClient;
    ...
    MemcachedClient c;
    try {
        // zonea, zoneb and zonec
        c = new MemcachedClient(AddrUtil.getAddresses(
                "129.158.224.242:11111 129.158.224.231:11112 129.158.224.243:11113"));

        String test = "I'm going to be cached on all zones!";
        c.set("mykey2", 180, test);
        Object obj = c.get("mykey2");
        System.out.println((String) obj);
        c.delete("mykey2");
    } catch (IOException ex) {
        ex.printStackTrace();
    }
    ...

We are trying to store the object across all the configured servers, but zonea is offline.
Here is the output:

2007-10-03 11:18:27.678 INFO net.spy.memcached.MemcachedConnection: Added {QA sa=/129.158.224.242:11111, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2007-10-03 11:18:27.682 INFO net.spy.memcached.MemcachedConnection: Connected to {QA sa=/129.158.224.231:11112, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} immediately
2007-10-03 11:18:27.684 INFO net.spy.memcached.MemcachedConnection: Connected to {QA sa=/129.158.224.243:11113, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} immediately

I'm going to be cached on all zones!


The write destined for zonea is queued for insertion once zonea comes back up. It would be interesting to test memcached's automatic failover behavior further: memcached is essentially one giant distributed hashtable, so there has to be sufficient fail-safe plumbing between the running daemon instances. You can also use DTrace to debug memcached.
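A rough sketch of why the client keeps working when a zone drops out: the client hashes each key over the server list, so every key deterministically maps to one daemon, and keys simply remap when a server leaves the list. The modulo scheme below is an illustration only; spymemcached uses its own (optionally consistent) hashing rather than this:

```java
import java.util.Arrays;
import java.util.List;

public class ServerPicker {
    // Toy illustration: choose a server by hashing the key over the list.
    static String pickServer(String key, List<String> servers) {
        int idx = Math.abs(key.hashCode() % servers.size());
        return servers.get(idx);
    }

    public static void main(String[] args) {
        List<String> servers = Arrays.asList(
                "129.158.224.242:11111",   // zonea
                "129.158.224.231:11112",   // zoneb
                "129.158.224.243:11113");  // zonec

        // The same key always maps to the same daemon...
        System.out.println("mykey2 -> " + pickServer("mykey2", servers));

        // ...and if a server drops from the list (zonea halted),
        // keys rehash across the remaining daemons.
        List<String> alive = servers.subList(1, 3);
        System.out.println("mykey2 -> " + pickServer("mykey2", alive));
    }
}
```

Note the downside of plain modulo hashing: removing one server remaps most keys, which is exactly why consistent hashing exists.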


Comments:

Explain to me, please, why you would want to use memcached with Java, do serialization, copying and socket transfer to achieve awkwardly in tens of milliseconds what you could with a shared ConcurrentHashMap in tens of nanoseconds?

The only reason memcached exists at all is languages without the possibility for shared state.

Posted by Mikael Gueck on December 09, 2007 at 03:59 PM PST #

Mikael, I know memcached is a bad idea for Java especially when you can use ConcurrentHashMap. I was just showing how you can use a Java API to manage objects in an existing memcached setup in zones.
You can also take a look at http://ehcache.sourceforge.net/
for a pure Java solution.
All Java Caching Providers are much faster than memcached.
But the AMP guys just love memcached for no particular reason and I'm one of them.

Posted by Phantom on December 09, 2007 at 07:09 PM PST #

I apologise for the abrupt tone of my comment.

Posted by Mikael Gueck on December 10, 2007 at 03:25 PM PST #

memcached is about *distributed caching*: load balancing and failover... obviously it is overkill if you don't need these features. Peace

Posted by Olivier Gourment on January 30, 2008 at 02:00 AM PST #
