Thursday Nov 04, 2010

Reverse proxy and custom content

Reverse proxy content overwrite from sed filters
Recently a Web Server customer asked how to overwrite the content of a 404 response coming from the origin server. The customer tried the following:
Service fn="proxy-retrieve"
Error fn="send-error" code="404" path="/tmp/myfile.txt"

However, it didn't work. Why? When the Service SAF proxy-retrieve receives a response from the origin server, it sends that response back to the client. So by the time the send-error Error SAF executes, the response is already complete, and it didn't work the way the customer expected. It does work if the response code is 504 (Gateway Timeout), because for 504 the proxy-retrieve SAF doesn't send anything to the client.

Web Server supports powerful sed scripts, which came to the rescue in this case. All the user needs to do is insert a sed filter which does the magic. Fortunately, the sed filter is also very simple.

The following sed script does that (replaceall.sed):
1,1 {
r /tmp/someotherfile.txt
d
}
2,$ d

$ cat somefile.txt | sed -f replaceall.sed

The above example replaces the content of somefile.txt with the content of someotherfile.txt.
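A quick sanity check from the shell (the file contents below are just made-up placeholders):

```shell
# Recreate the sed script and two sample files, then run the filter.
cat > replaceall.sed <<'EOF'
1,1 {
r /tmp/someotherfile.txt
d
}
2,$ d
EOF
printf 'old line 1\nold line 2\n' > somefile.txt
printf 'replacement text\n' > /tmp/someotherfile.txt

# The "r" output is emitted at the end of the cycle even though "d"
# deletes the pattern space, so only the replacement file is printed.
sed -f replaceall.sed somefile.txt
```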

I tested this in Web Server and it worked for me:
<If code eq "404">
Output fn="insert-filter" filter="sed-response" sed="1,1 {"
                                                sed="r /tmp/one.txt"
                                                sed="d"
                                                sed="}"
                                                sed="2,\\$ d"
</If>

The above snippet replaces the content of the 404 output sent by the origin server with the content of /tmp/one.txt.

Monday Oct 26, 2009

memsession 1.0.1 stable is now available

Install it from pecl using:
$ pecl install memsession

Caching file_get_contents output inside APC.


APC creates an mmap which is used for caching compiled PHP code. Besides this, APC provides an API to cache user data in this mmap. Many PHP apps use file_get_contents to read static files. If APC can cache these files in its mmap, it will improve file_get_contents performance.

Based on this idea, I developed a patch which stores the file_get_contents output in APC's cache. With this patch, APC intercepts calls to file_get_contents and replaces them with an apc_file_get_contents function. If the file is given as a full path and the entire file is requested, then APC caches the content.

To enable the caching, use the following in php.ini:
apc.cache_static_contents=1
apc.user_ttl = 30

apc.user_ttl decides how long an entry remains inside the cache. If apc.user_ttl is set to 0, the entry is never removed from the cache.

Here is the performance data collected using Studio 12 on Solaris SPARC for the ecommerce PHP benchmark (3500 simultaneous users):

Run            Excl. User  Incl. User  Excl. Total  Incl. Total  Excl. Sys.  Incl. Sys.  Name
               CPU sec.    CPU sec.    LWP sec.     LWP sec.     CPU sec.    CPU sec.
Without Cache  0.831       28.660      1.141        540.018      0.030       344.311     zif_file_get_contents
With Cache     1.481       181.537     1.821        282.758      0.140       12.449      zif_apc_file_get_contents

User + System time before caching = 372 seconds
User + System time after caching = 193 seconds
Total time reduction in file_get_contents = 48%

When a static file is served from the cache, the server does not make any stat calls. This means that if the file changes, the cache will still serve the old cached content for the duration of the apc.user_ttl interval.

Monday Oct 05, 2009

memsession 1.0.0 beta is now available

Memsession 1.0.0 beta is now available; you can install it using pecl:
$ pecl install memsession-beta

Friday Sep 04, 2009

In memory session extension for multithreaded php


PHP's default session handler (mod_files) saves session data in files. To improve performance, we can save the session data in a ramdisk, or in /tmp on Solaris. But even when we store session data in a ramdisk, a lot of CPU cycles are spent in open/read/write calls. So the question is: can we improve session reading and writing in PHP?

The answer is yes. If memory is available in the system, sessions can be stored in RAM. If PHP is compiled without threads (e.g. FastCGI), the mod_mm extension already does the job: it saves sessions in an mmap. But when we run PHP in multithreaded mode, with all of PHP inside a single process, sessions can be allocated in process memory. I wrote a PHP extension for multithreaded PHP which saves PHP sessions in memory.

PHP's session GC will clean up old sessions. The session GC lifetime session.gc_maxlifetime should not be set very high, otherwise sessions will consume a lot of memory.

Sources can be downloaded from :
svn co http://svn.php.net/repository/pecl/memsession

Sources can be viewed here; look at the README for more information.


Compilation is similar to any other extension:

Compiling on unix platforms (for multithreaded php, e.g. NSAPI php):
$ phpize
$ CFLAGS="-m32" ./configure --with-php-config=<phpinstdir>/bin/php-config
$ gmake
$ gmake install

Compiling on Windows (tested on Win2K and Windows Vista):
C:\php-src-5.3.0> cscript /nologo configure.js --enable-apache2handler --enable-zts --enable-memsession=shared
C:\php-src-5.3.0> nmake

Configuration (php.ini) :
extension="memsession.so"

[Session]
session.save_handler=memsession


GCC notes: memsession uses GCC's atomic operations, which were added in GCC 4.1.2, so GCC version 4.1.2 or later is required. "-march=i586" might be required to enable GCC's atomic operations.

Monday Aug 31, 2009

Auto tuning of file descriptors in Sun Web Server

Web Server 7.0 uses an algorithm to divide file descriptors among various needs on unix systems.

Here is the list of items which require file descriptors:

1. Web applications. Web Server leaves 80% of the file descriptors for web applications.
2. Daemon session thread connections. For each daemon session thread, Web
Server 7.0 expects an average of 4 file descriptors: one client socket
connection, the requested file, an included file and a backend connection.
3. jdbc pools
4. access log counts for all virtual servers
5. Listener counts
6. file descriptors for file cache
7. keep alive file descriptors.
8. thread pool queue size

In the above list, (6), (7) and (8) are auto-tuned. That means if the user specifies them in server.xml, those values are used; if not, the remaining file descriptors are divided among (6), (7) and (8). Available file descriptors = total descriptors - items (1) to (5).

If more than 1024 file descriptors are available, Web Server uses a 1:16:16 ratio for items (6), (7), (8). If fewer than 1024 are available, it uses a 1:16:8 ratio. The file cache is given the least importance and keep alive the highest. The numbers are also rounded off to powers of 2.
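The division can be sketched in shell. This is only an illustration of the ratios described above; the available-descriptor count is a made-up number, and the power-of-2 rounding is omitted (the real logic lives inside Web Server):

```shell
#!/bin/sh
avail=2048    # hypothetical count left after items (1)-(5)

# Pick the file-cache : keep-alive : thread-pool-queue ratio
if [ "$avail" -gt 1024 ]; then
    fc=1 ka=16 tp=16      # 1:16:16
else
    fc=1 ka=16 tp=8       # 1:16:8
fi
parts=$((fc + ka + tp))

echo "file cache:        $((avail * fc / parts))"
echo "keep alive:        $((avail * ka / parts))"
echo "thread pool queue: $((avail * tp / parts))"
```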

Note that Web Server doesn't use the above algorithm on Windows systems. On Windows it uses 64K descriptors for keep alive and 16K descriptors for the thread pool queue size.

Monday Jul 20, 2009

Oscon bof slides

Here is the link to the OSCON php performance bof slides.

Thursday Jul 09, 2009

Web Server top "wstop" video

Some time back I posted a blog showing how to write a "top"-like utility for Sun Web Server using dtrace. Here is the link to a YouTube video which shows a small demo of wstop.

Wednesday May 27, 2009

mod_sed is now integrated into opensolaris

mod_sed is now integrated into OpenSolaris. It will be available with the OpenSolaris 2009.06 release. It will also be part of the Sun Web Stack 1.5 release.

mod_sed sources can be downloaded from here

mod_sed is already part of httpd trunk. An old version of the sources is available at webstack/mod_sed. After the module was accepted into Apache trunk, that link became unmaintained. To download mod_sed, either get the files from httpd trunk or from the OpenSolaris website.

Wednesday May 20, 2009

Using large pages support in Sun Web Server

Solaris has multiple page size support, which is explained here:
http://www.solarisinternals.com/wiki/index.php/Multiple_Page_Size_Support

I was running the SPECweb ecommerce PHP benchmark and noticed that gains of 4-8% can be obtained on CMT systems by using 256M pages. As the process address space gets bigger and bigger (especially for 64 bit), the number of TLB entries in use grows significantly. I observed that in ecommerce SPECweb PHP runs, around 7% of CPU time was spent handling TLB misses (trapstat output). When I used 256M pages on CMT systems, the CPU time spent on TLB misses dropped to 2%.

One interesting thing I observed was that Sun Web Server tends to slow down over time. In my setup, I had set the file cache heap size to 2.5GB, and I was wondering why the Web Server was slowing down. What was really happening: when Web Server starts, it initially allocates only 256 MB, so the used process address space is small and there are few TLB misses. After some time, as the file cache expands, more and more TLB misses occur, and the server feels sluggish after 20 minutes or so.
So if you are running Sun Web Server on CMT systems, you should try 256M page support.

To enable mpss support in Sun Web Server (64 bit), put the following in
bin/startserv:

LD_PRELOAD_64=/usr/lib/sparcv9/mpss.so.1; export LD_PRELOAD_64     
MPSSHEAP=256M; export MPSSHEAP

For 32 bit Web Server :
LD_PRELOAD=/usr/lib/mpss.so.1; export LD_PRELOAD
MPSSHEAP=256M; export MPSSHEAP

"pagesize -a" can be used to determine the page sizes supported on your system. For systems which only have 4M pages, the following can be used:
LD_PRELOAD_64=/usr/lib/64/mpss.so.1; export LD_PRELOAD_64     
MPSSHEAP=4M; export MPSSHEAP

pmap -sx gives the following information :
# pmap -sx 1938
1938:   webservd -d /opt/SUNWwbsvr/https-nsapiphp/config -r /opt/SUNWwbsvr -t
         Address     Kbytes        RSS       Anon     Locked Pgsz Mode   Mapped File
...
0000000100B00000       6080       6080          -          -    - rwx--    [ heap ]
00000001010F0000     244800     244800     244224          -  64K rwx--    [ heap ]
0000000110000000    3932160    3932160    3932160          - 256M rwx--    [ heap ]
0000000200000000     262144     262144     262144          - 256M rwx--    [ heap ]
FFFFFFFF68000000       4096       4096       4096          -  64K rwx--    [ anon ]
...

Note that some heap segments are mapped with 256M pages while others are mapped
with 64K pages.


The mpss setting has been used in SPECweb JSP publications.

Friday Aug 22, 2008

Tracing apr calls in Apache using dtrace

The dtrace pid provider can be used to trace apr calls in Apache. As a first step, we need to create probes for the Apache processes. The following script creates probes for a single process.
# createprobes.d
pid$1::apr_*:entry
/execname == "httpd"/
{
}

pid$1::apr_*:return
/execname == "httpd"/
{
}

profile:::tick-2sec
{
    exit(0);
}
Now run the script for all httpd processes.
# for each in `pgrep httpd`; 
do 
   echo "each = $each"; 
   dtrace -s createprobes.d $each; 
done

Once the probes are created, the following dtrace script can be used to trace apr calls in Apache.
pid*::apr_*:entry
/execname == "httpd"/
{
}

pid*::apr_*:return
/execname == "httpd"/
{
}
To execute the above script, we do not need any built-in probes inside Apache; it is the pid provider which inserts the probes into user code.
If we run this script, we see the following output (snippet):
# dtrace -s apr-trace.d
CPU     ID                    FUNCTION:NAME
  0  73552  apr_pool_cleanup_register:entry
  0  73535                 apr_palloc:entry
  0  78695                apr_palloc:return
  0  79116 apr_pool_cleanup_register:return
  0  79191         apr_socket_accept:return
    ...

To measure the time taken by each apr routine, we take the difference between the entry and return timestamps. Here is aprtime.d:
pid*::apr_*:entry
/execname == "httpd"/
{
    /* thread-local variable, safe with concurrent requests */
    self->ts = timestamp;
}

pid*::apr_*:return
/execname == "httpd" && self->ts/
{
    printf("%d nsecs", timestamp - self->ts);
    self->ts = 0;
}

# dtrace -s aprtime.d
CPU     ID                    FUNCTION:NAME
  0  78695                apr_palloc:return 16834 nsecs
  0  79116 apr_pool_cleanup_register:return 51750 nsecs
  0  79078     apr_thread_mutex_lock:return 11250 nsecs
  0  79086    apr_thread_cond_signal:return 14750 nsecs
  0  79080   apr_thread_mutex_unlock:return 31167 nsecs
  0  79078     apr_thread_mutex_lock:return 6500 nsecs
  ...

Friday Aug 15, 2008

Using mod_sed to filter web content in Apache

mod_sed is an Apache module which filters web content using powerful sed commands, whether the content is generated by php, jsp or plain html. Basic configuration information can be seen in the README. In this blog, I will cover how cryptic but powerful sed commands can be used inside Apache.

Using branches "b" to implement if/else type of code
Suppose I want to write
if (line contains "a") then
   replace "x" with "y"
else
   replace "y" with "x"
fi
If I want to write the above logic using "goto" syntax, I can write something like (pseudo code):
if (line contains "a") go to :ifpart
# else part
   replace "y" with "x"
   go to :end
:ifpart
   replace "x" with "y"
:end
In sed we can use the branch command "b", which is the equivalent of goto. Here is the equivalent sed code:
/a/ b ifpart
s/y/x/g
b end
:ifpart
s/x/y/g
:end

$ cat one.txt
ax
xyz
$ /usr/ucb/sed -f one.sed < one.txt
ay
xxz
We can write the same example in apache :
OutputSed "/a/ b ifpart"
OutputSed "s/y/x/g"
OutputSed "b end"
OutputSed ":ifpart"
OutputSed "s/x/y/g"
OutputSed ":end"


Using the hold buffer "h" to save the current text
Let's say I have a text :
It is Sunday today.
And I want to replace it with two lines:
It is Monday today.
It is Sunday today.
So I want to do the following (pseudo code)
saveline=curline
replace Sunday with Monday.
curline = curline + saveline
print curline
In sed, we will write something like :
# hold the buffer
h
s/Sunday/Monday/
# Append the hold buffer to current text.
G
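Running these three commands from the shell shows the effect:

```shell
$ echo "It is Sunday today." | sed -e 'h' -e 's/Sunday/Monday/' -e 'G'
It is Monday today.
It is Sunday today.
```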
Sed's G command appends the hold buffer to the current line (pattern space). Inside Apache, we can do the same thing using OutputSed directives:
OutputSed "h"
OutputSed "s/Sunday/Monday/"
OutputSed "G"


Multiline expression using hold buffer and commands "N", "x", "h" and "H"
Sed is very powerful for multi-line text manipulation. Suppose I have a condition which says:
'If a line contains "Sunday" and the next line contains "Monday", then replace "Sunday" with "Monday" in the first line and "Monday" with "Tuesday" in the second line.'
As an example, I have the text:
It is Sunday today.
Tomorrow will be Monday.
The output should look like :
It is Monday today.
Tomorrow will be Tuesday.
So I want to do the following (pseudo code)
search for Sunday in current line
if found then 
    saveline=curline
    Read next line into curline
    search for Monday in curline
    if found then 
        swap curline and saveline
        replace Sunday to Monday in curline
        swap curline and saveline again.
        replace Monday to Tuesday in curline
        saveline = saveline + curline
        curline = saveline
    end innerif
end outerif
The next line can be read with the "N" command.
Swap functionality is provided by the "x" sed command.
Appending curline to saveline is provided by the "H" command.
Replacing curline with saveline is provided by the "g" command.
The overall sed script looks like:
/Sunday/ {
# save the current line in hold buffer
h
# Delete the content of the current line.
s/.*//
# Read next line.
N
# Delete first new line character (from previous line)
s/^.//
# Search for Monday in next line.
    /Monday/ {
# Exchange hold buffer from current line
        x
# Now current line contain 1st line so replace Sunday with Monday.
        s/Sunday/Monday/
# Exchange hold buffer from current line
        x
# Now current line contain 2nd line so replace Monday with Tuesday.
        s/Monday/Tuesday/
# Append hold buffer (1st line) with 2nd line.
        H
# Replace hold buffer with current line
        g
    }
}
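Saving the script as, say, weekday.sed (a name used only for this example), we can verify it from the command line:

```shell
$ printf 'It is Sunday today.\nTomorrow will be Monday.\n' | sed -f weekday.sed
It is Monday today.
Tomorrow will be Tuesday.
```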
Inside Apache httpd.conf, I write the equivalent sed script as follows:
OutputSed "/Sunday/ {"
OutputSed "h"
OutputSed "s/.*//"
OutputSed "N"
OutputSed "s/^.//"
OutputSed     "/Monday/ {"
OutputSed         "x"
OutputSed         "s/Sunday/Monday/"
OutputSed         "x"
OutputSed         "s/Monday/Tuesday/"
OutputSed         "H"
OutputSed         "g"
OutputSed     "}"
OutputSed "}"
The above example shows how powerful sed commands can be used to filter web content (whether it is generated by html, php or jsp). Details of sed can be found in the sed man page.

Little history behind mod_sed filter module

Sun has donated the "sed" filtering module mod_sed to the Apache Software Foundation. It is not yet part of the Apache Web Server; it is under consideration with the Apache httpd dev community.

In this blog, I will cover the history behind the mod_sed code. Solaris 10 has two separate "sed" utilities, one in /usr/bin/sed and another in /usr/ucb/sed. The latter is open sourced under the CDDL and available in OpenSolaris. Sun Java System Web Server 7.0 initially included a sed filter module. That module was derived from the "/usr/bin/sed" code and was written by Chris Elving.

Last year, I took the Sun Web Server code and wrote mod_sed based on it. The difference is that mod_sed is derived from the /usr/ucb/sed code. Sun Web Server's sed filtering module uses NSPR as its portability API, while mod_sed uses APR, since it runs under Apache, which uses APR for portability.

Functionality-wise, the "/usr/bin/sed" code was a little better than "/usr/ucb/sed", but I have fixed some of the limitations of "/usr/ucb/sed" in mod_sed, e.g. the maximum number of characters in a line or in the hold buffer.

Tuesday Aug 12, 2008

Caching static content in reverse proxy in Sun Java System Web Server 7.0

Sun Java System Web Server 6.1 has a function named "check-passthrough" which looks for the resource in the reverse proxy instance's docroot first. I was wondering how to do the same in 7.0 and didn't find any documentation for it. The advantage of serving static content from the reverse proxy docroot is that it is very efficient: Sun Web Server has excellent caching capabilities which backend servers often lack.
Basically, the user needs to do the following test:
* If the file exists in the local document root, serve it; else go to the back-end instance.
Here is how we can do it in WS 7.0.
1) Define a docroot variable for each virtual server:
  <virtual-server>
    ...
    <variable>
       <name>docroot</name>
       <value>/mydocroot/https-rpp/docs</value>
    </variable>
  </virtual-server>
2) Modify obj.conf to look for local resources before proxying:
<If not -f "$docroot/$path">
     NameTrans fn="map" from="/" name="reverse-proxy-/" to="http:/"
</If>
In Web Server 7.0, the file cache is enabled by default, so all static content will be cached by default. The set-file-cache-prop wadm command can be used to tune the file cache.

Tuesday Aug 05, 2008

Hacking Sun Java System Web Server pblocks using dtrace.

In my previous blog, I showed how to use an NSAPI plugin and dtrace to write several monitoring tools. In this blog I will show that it is even possible to do a few things with dtrace without any NSAPI plugin installed in your web server. Yes, that is true: dtrace can be used with a stock web server without any configuration changes. Here is my watchpblocks.d dtrace script output (when I send a "/" request to my Web Server):
# dtrace -qs watchpblocks.d  11463
Req->vars : ntrans-base="/var/www" path="/var/www/" required-rights="list" content-length="1912"
Req->reqpb : clf-request="GET / HTTP/1.1" method="GET" protocol="HTTP/1.1" uri="/"
Req->headers : user-agent="curl/7.16.1 (sparc-sun-solaris2.8) libcurl/7.16.1 
                  OpenSSL/0.9.8d zlib/1.2.3 libidn/0.5.19" host="chilidev4.red.iplanet.com" 
                   accept="*/*" content-type="text/html" status="200 OK" 
                  transfer-encoding="chunked" content-length="2003"
11463 is the child process id of my test Web Server. Here is the output of wstop2.pl (similar to my previous version, wstop.pl):
# perl wstop2.pl -d 5 11463
12:49:55  Requests: 3     (    0/sec) Bytes: 5736(  1147/sec)
Requests: GETs: 3      POSTs: 0      HEADs: 0      TRACE: 0
Responses: 1xx:0      2xx:3      3xx:0      4xx:0      5xx:0

Requests  Requests/sec  Bytes Sent  URI
3         0             5736         /
^C
So how does it work? If you look at the dtrace script, you will find:
pid$1::flex_log:entry
{
...
}
At the end of each request, the web server calls the flex_log function to log the request, and at that time the above dtrace probe fires. As with most NSAPI functions, the Request and Session structure pointers are passed as arguments. The dtrace script parses the structures and tries to decode the pblocks. For this technique to work, users need to have the access log enabled, which it almost always is.

The interesting aspect is that we don't need to do any configuration changes.

Caution: since dtrace doesn't allow "for" loops or if/else logic, the pblock hash decoding is a complete hack. It may not work in all scenarios. Also, on busy systems, lots of dtrace probes might be missed using this method, since we are copying data from kernel land to user land several times.

The previous dtrace version was much more lightweight as far as dtrace work is concerned. If a future version of dtrace provides if/else and loop constructs inside scripts, the script could be improved and made more reliable.

Also, right now these scripts only work for 32-bit web servers. Here are the scripts:
watchpblocks.d
wstop2.pl
wbdtrace.pm
About

Basant Kukreja
