Tuesday Jan 20, 2009

Lighttpd and Olio Rails

We were trying to use Lighttpd to run the Apache Olio Rails application on OpenSolaris recently and we found that because the Lighttpd workers run as a non-root user (in this case as webservd), the image_science gem was unable to access the shared library built for it by RubyInline. The error that we saw was:

ActionView::TemplateError (Permission denied - /root/.ruby_inline) on line #10 of events/_filtered_events.html.erb

(The exact error varies depending on whether you are looking at the error page returned to the browser or the logfile.) We knew from the problems we had getting image_science up and running on OpenSolaris that RubyInline defaults to building libraries in the root user's home directory, but up until then we had been using Mongrel and Thin and running them as root (which is food for thought).

The fix is simple: RubyInline builds libraries in $HOME/.ruby_inline unless the environment variable $INLINEDIR is set, in which case it builds them in $INLINEDIR/.ruby_inline. You can pass environment variables on to the FastCGI processes that Lighttpd spawns by setting them in the fastcgi.server directive in the Lighttpd config file. Here is the one from our rig:

fastcgi.server =  ( ".fcgi" =>
                    ( "localhost" =>
                      ( "min-procs" => 1,
                        "max-procs" => 5,
                        "socket" => "/tmp/ruby-olioapp.fastcgi",
                        "bin-path" => "/export/faban/olio_rails/olioapp/public/dispatch.fcgi",
                        "bin-environment" => (
                           "RAILS_ENV" => "production",
                           "INLINEDIR" => "/export/faban/olio_rails/olioapp/tmp"
                        )  
                      )
                    )
                  )

I've included the whole thing, as it's sometimes tough to see the nesting of the options. Basically, if you don't have a 'bin-environment' section, add one after 'bin-path' (watch the commas).

With this config, RubyInline will build (rebuild, in this case) the libraries of the gems that use it in /export/faban/olio_rails/olioapp/tmp/.ruby_inline, so as long as the user that Lighttpd runs its worker processes as has access to that directory, you should be good to go.
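
If the target directory doesn't already exist, or isn't writable by the worker user, something along these lines will set it up. The path and the webservd user/group are from our rig; substitute your own:

```shell
# Create the directory RubyInline will build into and hand ownership
# to the Lighttpd worker user (webservd here; adjust to your setup)
mkdir -p /export/faban/olio_rails/olioapp/tmp
chown webservd:webservd /export/faban/olio_rails/olioapp/tmp
chmod 755 /export/faban/olio_rails/olioapp/tmp
```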

BTW: In case you are wondering, image_science is a native Ruby gem that can resize images and create thumbnails, but instead of being built on install, it's built and managed by the RubyInline gem the first time you use it.



Tuesday Oct 14, 2008

Lighttpd on CMT running Web 2.0 workloads

Earlier in the year I spent many a night burning the candle at both ends testing a Web 2.0 workload on a Lighttpd/PHP/Memcached/MySQL stack. The results have been made into a Sun Blueprint entitled An Open Source Web Solution - Lighttpd Web Server and Chip Multithreading Technology. The bottom line is that we were able to get 416 ops/sec with 2250 users before we ran out of network bandwidth. That was with a 5 second cycle time between operations, so 450 ops/sec would have been the best we could theoretically get. This was all done on a Sun SPARC Enterprise T5120 with 8 cores (64 threads) at 1.2GHz. On reflection I wish I'd spent more time on Memcached and MySQL analysis, but we wasted a lot of cycles trying to team a couple of network cards using a switch that was not up to the job, and we simply ran out of time.

The keys to all of the testing were the Faban test harness, which not only ran all of the tests for us but also collected and presented most of the data used in the Blueprint document, and the Web 2.0 kit, which is now in incubation at Apache.org as Project Olio (http://incubator.apache.org/projects/olio.html).

Saturday Mar 29, 2008

Link Aggravation

I've been trying to do something useful with Link Aggregation on a T5120 connected to a Linksys SRW2048 switch. The whole rig includes a couple of those cute little 1U x4200 servers we sell, the T5120 (a 64 hardware thread CMT system), the Faban test harness, a benchmark designed to test a whole load of Web 2.0 stuff, and a MySQL instance.

I couldn't for the life of me make link aggregation work on incoming packets. It was obvious from various forum threads that the Linksys load balanced packets based on the MAC address of the system sending the traffic, but the implications of that took a long time to sink in. I'd configured the aggregation on the System Under Test (hereafter referred to as the SUT) using dladm:

dladm create-aggr -d e1000g1 -d e1000g2 1

Which creates an aggregation called aggr1 from the interfaces e1000g1 and e1000g2. Then I configured the switch, which involved using Windows, Internet Explorer, and a web-based interface that logged me out after 5 minutes of inactivity. It's fairly straightforward to configure a Link Aggregation Group (LAG) on the Linksys provided you do everything in exactly the right order (and provided you don't get distracted for 5 minutes). A LAG is the switch-side aggregation of 2 or more ports; hopefully, in my case, the ones connected to the interfaces on the SUT that are part of the aggregation.

My test harness has two agents running on separate x4200s, generating load against the SUT. The SUT has two interfaces aggregated (or teamed), which results in a virtual network interface called aggr1. You can use dladm to look at the traffic on the interfaces that make up the aggregation:

dladm show-aggr -s -i 5 1

The people who wrote dladm didn't get the formatting of the output right, and you basically have to memorize the column positions of the output data. You end up using the %ipkts and %opkts metrics for each interface, as those values stay small and their column positions don't shift. The output looks like this:

key: 1            ipackets  rbytes      opackets  obytes      %ipkts  %opkts
        Total     193732    167576255   398676    515340307
        e1000g1   194852    168869030   214144    274881494   100.6   53.7
        e1000g2   0         0           185943    242021790   0.0     46.6

Which shows all of the incoming traffic (%ipkts) going to e1000g1. This traffic is coming from 2 systems, each with a different MAC address (duh), so why no load balancing?

Turns out that the load balancing on the switch is more like routing than load balancing. For a 2-port LAG (as in this case), packets from src MAC address 1 are sent to the first port of the LAG, packets from src MAC address 2 to the second port, packets from src MAC address 3 to the first port, and so on. Packets from the same src MAC address are always sent to the same port, so as to avoid any re-ordering issues. So depending on the traffic on my network, it's really a matter of luck whether the traffic from the two agents goes to the same port on the LAG or to different ports. This wouldn't be a problem if I had 2000 clients sending traffic from 2000 different systems, but there are only two (which might well be the case with a server system fronted by a reverse proxy).
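
To illustrate the behaviour (this is a sketch, not the SRW2048's documented algorithm): a 2-port LAG acts as if the switch hashed the source MAC down to a port index, for example by taking the low byte of the MAC modulo the number of ports. The MAC addresses below are made up:

```shell
# Hypothetical illustration of src-MAC based port selection on a LAG
lag_port() {
  mac=$1
  nports=$2
  low=$(echo "$mac" | awk -F: '{print $6}')   # low byte of the MAC
  echo $(( 0x$low % nports ))                 # port index within the LAG
}
lag_port 00:14:4f:ac:57:02 2   # -> 0 (first port of the LAG)
lag_port 00:14:4f:ac:57:03 2   # -> 1 (second port of the LAG)
```

Two loaders whose MACs happen to hash to the same index both land on one port, which is exactly what we saw; nudging one MAC moves its traffic to the other port.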

There is a workaround that we tried in our test environment: we changed the MAC address of one of the client systems using ifconfig (not something I'd generally recommend) until the incoming traffic (according to dladm) was balanced across the two interfaces. This worked every time and had me chanting "Wax on, wax off" as I toggled the MAC address of one of the loaders and watched the traffic move to a different interface on the SUT. Here's the final result as given by dladm:

key: 1            ipackets  rbytes      opackets  obytes      %ipkts  %opkts
        Total     176194    156273288   342248    441581280
        e1000g1   74939     67362534    206674    269054155   42.5    60.4
        e1000g2   102336    90181188    136617    173791907   58.1    39.9

Apparently it would be better if we could load balance on the switch at Layer 4 (the transport layer), which would allow balancing of traffic based on network endpoints (i.e. IP address and port number), but it seems likely that our Linksys SRW2048 switch doesn't support this.

There are useful entries on Nicolas Droux's blog showing the architecture of the Link Aggregation subsystem in Solaris 10 and on setting up a Link Aggregation.



Wednesday Dec 05, 2007

Lighttpd SMF troubles

We came across an issue recently when running Lighttpd with /dev/poll on Solaris under SMF. You would start the service and immediately the CPU would peg at 100% and the Lighttpd error log would fill up with the message "(server.c.1429) fdevent_poll failed: Invalid argument". 

SMF (the Solaris Service Management Facility) allows the deployer of a service to specify which user and group the processes that belong to the service should run under. In this case Lighttpd was being started as user webservd with group webservd. This is similar to logging on to a system as webservd and then running the lighttpd executable, and when we did exactly that we saw the same problem as we did under SMF. If we started Lighttpd as root with the same config file, it ran fine and no errors were logged. So the problem came down to starting Lighttpd as webservd with /dev/poll specified as the event handler in the Lighttpd config file.
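
If you want to reproduce this outside SMF, something like the following works; note that the binary and config paths here are assumptions based on a CoolStack-style layout and will likely differ on your system:

```shell
# Run lighttpd directly as webservd to reproduce the failure
# (-D keeps it in the foreground; paths are examples, adjust to your install)
su webservd -c "/opt/coolstack/sbin/lighttpd -D -f /opt/coolstack/etc/lighttpd.conf"

# Run it as root with the same config to confirm it starts cleanly
/opt/coolstack/sbin/lighttpd -D -f /opt/coolstack/etc/lighttpd.conf
```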

The workaround is to start Lighttpd as root and specify the user name and group for Lighttpd to run under through the Lighttpd config file. This is fairly standard practice for starting both Lighttpd and Apache. If you've run into this problem, it's probably because you've somehow obtained a Service Manifest that specifies "webservd" as the user and group. The easy way to modify the service so that Lighttpd is started as root is to create a copy of the current manifest and, in the copy, remove the entire <method_credential> element that you'll see here:

...
...

<exec_method
  type='method'
  name='start'
  exec='/opt/coolstack/lib/svc/method/svc-csklighttpd start'
  timeout_seconds='60'>
  <method_context>
    <method_credential
      user='webservd' group='webservd'
      privileges='basic,!proc_session,!proc_info,!file_link_any,net_privaddr' />
  </method_context>
</exec_method>
...
...

 
You can leave the <method_context> and </method_context> tags with nothing between them, or you can delete the closing tag and use an empty tag, i.e. <method_context />. Just don't remove it entirely, as it's a useful marker. The above snippet is from an example that I saw when I first came across this issue; yours may be different, but in that case hopefully you wrote it and understand how to change it.

What you are left with is:

...
...
<exec_method
  type='method'
  name='start'
  exec='/opt/coolstack/lib/svc/method/svc-csklighttpd start'
  timeout_seconds='60'>
  <method_context />
</exec_method>
...
...


Once you've changed the copy of the manifest, import it using svccfg as follows:

svccfg -v import <manifest filename>

This will take a snapshot of the current state of the service named "previous", then delete all of the entries that you removed from the copy of the manifest. They will be named start/group, start/user, start/privileges, plus a few others that would have been set to their default values. It will then take another snapshot of the service called "last-import". Finally it will "refresh" the service, which means pushing the changes out to the running service. If the Lighttpd service was running, it will probably go into the "maintenance" state at this point. It's best to disable and re-enable a service after a refresh (see the man page for svcadm), so you should do that now. Lighttpd should then be running correctly.
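
Concretely, the restart sequence looks something like this; network/lighttpd is a placeholder here, so substitute your service's actual FMRI or short name:

```shell
# Bounce the service after the refresh and check its health
# (network/lighttpd is a placeholder for your service's FMRI)
svcadm disable network/lighttpd
svcadm enable network/lighttpd
svcs -x network/lighttpd   # explains why a service is not running, if it isn't
```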

I'll post some example Manifests on another blog entry.

Root Cause

It turns out that when Solaris 10 came along, this same problem was seen when using /dev/poll and starting Lighttpd as root. Lighttpd bases its maximum number of connections on the number of file descriptors available to the process; the result is that all of the available file descriptors are reserved for creating connections and none are left for /dev/poll to use, so every call to /dev/poll results in an error. A more detailed discussion is available in this thread on the Sun forums. A workaround was added to Lighttpd that effectively sets the max connections to 10 less than the max file descriptors. Unfortunately it only works for the root user, as the number of connections for a non-root user is set in a different code path. See Lighttpd ticket 1465. We are working on getting a workaround added for non-root users, so watch this space.
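
The arithmetic behind the workaround is simple enough to sketch; the 4096 here is just an illustrative descriptor limit, while the margin of 10 is the one the Lighttpd patch reserves:

```shell
# Sketch of the root-only workaround: hold back a handful of descriptors
# so /dev/poll always has some left to work with
max_fds=4096
max_connections=$(( max_fds - 10 ))
echo $max_connections   # -> 4086
```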

Oh, and also: if the process has its max file descriptors set to, say, 65535 and you specify server.max-fds = 1000 in the Lighttpd config file, Lighttpd will reset the maximum number of file descriptors available to it to 1000. So you can't get around the problem simply by specifying a lower number for server.max-fds than what is available to the process (according to ulimit -n in the shell from which you start Lighttpd).
