Thursday Nov 14, 2013

Performance Analysis of Oracle Traffic Director or Web Server

Performance analysis of Oracle Traffic Director or Oracle iPlanet Web Server using the Oracle Solaris Studio Performance Analyzer and DTrace scripts.

In this blog I will show how to use the Oracle Solaris Studio collector, the Performance Analyzer, and DTrace scripts to measure performance.

1. Using Oracle Solaris Studio 12.2 Performance Analyzer with Oracle Traffic Director or Oracle iPlanet Web Server

Thanks to Basant for teaching me the collector and analyzer, and thanks to Julien for his help on the DTrace scripts.

Install Oracle Solaris Studio 12.2. Let's say it is installed in /opt/SUNWspro.

1.1 Setting up ~/.er.rc

cat ~/.er.rc
dmetrics e.user:i.user:e!wall:i!wall:e.total:i.total:e.system:i.system:e!wait: \
i!wait:e!lock:i!lock:e!text:i!text:e!data:i!data:e!owait:i!owait:!size:!address:name
dsort e.user
scc basic:version:warn:parallel:query:loop:pipe:inline:memops:fe:cg
dcc basic:version:warn:parallel:query:loop:pipe:inline:memops:fe:cg:src
sthresh 75
dthresh 75
name long
view user
tlmode thread:root:depth=10
tldata sample:clock:hwc:heaptrace:synctrace:mpitrace:msgtrace:datarace
setpath $expts:.
tabs functions:callers-callees:source:disasm:timeline:header:
en_desc on

1.2 Collecting Data Using the collect Command

1.2.1 Run the Collector using the collect command

# collect collect-options program program-arguments

For OTD or Web Server we edit the start script bin/startserv to add a new --collect option as shown below.

The --collect case is new: I copied all the lines of the --start option, replaced ${SERVER_BIN} (which is trafficd-wdog) with "trafficd", and prefixed the command with "collect $COLLECT_OPTS".

In the section below, replace <profiler directory> with the directory where you want the profiler to collect data.

case $COMMAND in
    --start|-start) 
        ${SERVER_BIN} -d "${SERVER_CONFIG_DIR}" \
            -r "${OTD_PRODUCT_HOME}" -t "${SERVER_TEMP_DIR}" \
            -u "${SERVER_USER}" ${SVC_OPT} $@
        STATUS=$?
        if [ $STATUS -ne 0 ] ; then 
             exit $STATUS
        fi
        enable_failover
        ;;
    --collect|-collect)
        PATH=$PATH:/opt/SUNWspro/bin
        COLLECT_OPTS="-t 180-300 -F all -d <profiler directory>";
        collect $COLLECT_OPTS trafficd -d "${SERVER_CONFIG_DIR}" \
            -r "${OTD_PRODUCT_HOME}" -t "${SERVER_TEMP_DIR}" \
            -u "${SERVER_USER}" ${SVC_OPT} $@
        STATUS=$?
        if [ $STATUS -ne 0 ] ; then
            exit $STATUS
        fi
        ;;

For Web Server, use webservd instead of trafficd.

1.2.2 Start the server using the --collect option

Since we replaced trafficd-wdog with trafficd, the server will start up without any watchdog process and will run in the console, not in the background.

bin/startserv --collect

1.2.3 Run some stress tests

1.2.4 After 5-6 minutes, stop the server

This will create a directory containing the experiment. The default name for a new experiment is test.1.er; the Collector automatically increments the number for subsequent experiments (test.2.er, test.3.er, and so on).

1.3 Open the experiment in the Oracle Solaris Studio 12.2 Analyzer

Set the DISPLAY environment variable, then:

$ cd <profiler directory>
$ export JAVA_PATH=/opt/SUNWwbsvr/jdk 
$ /opt/SUNWspro/bin/analyzer test.1.er/

1.4 The er_print utility prints an ASCII version of the various displays supported by the Performance Analyzer.

$ /opt/SUNWspro/bin/er_print -outfile er_print1.out -functions test.1.er 

In this, functions are sorted by "Exclusive User CPU Time"

or

$ /opt/SUNWspro/bin/er_print -outfile er_print2.out \
-metrics e.user:i.user:e.%user:i.%user:e.total:i.total:e.%total:i.%total:name \
-sort i.total -functions test.1.er 

In this, functions are sorted by "Inclusive Total LWP Time"

You can look at these files and figure out which function is taking how much time.

These files look like:

Functions sorted by metric: Exclusive User CPU Time 
Excl.     Incl.     Excl.       Incl.       Excl.     Incl.      Name  
User CPU  User CPU  Total LWP   Total LWP   Sys. CPU  Sys. CPU         
   sec.      sec.         sec.        sec.     sec.      sec.     
851.666   851.666   105477.573  105477.573  106.525   106.525    <Total>
493.435   493.435      558.110     558.110   26.368    26.368    fn1
 56.840   326.308       64.015     368.568    3.042    17.352    fn2
 28.280    28.280       34.574      34.574    1.701     1.701    fn3

...
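To make the Excl. and Incl. columns concrete: exclusive time counts only the time spent in a function's own code, while inclusive time also counts time spent in its callees. This toy Python sketch (the call tree and numbers are mine, not er_print output) shows how both metrics fall out of a call tree:

```python
# Toy call tree: each node is (function_name, self_time_sec, children).
# Exclusive time = a function's own time, summed over all its appearances.
# Inclusive time = own time plus the inclusive time of all callees.
def metrics(node, table):
    name, self_time, children = node
    inclusive = self_time + sum(metrics(c, table) for c in children)
    excl, incl = table.get(name, (0.0, 0.0))
    table[name] = (excl + self_time, incl + inclusive)
    return inclusive

tree = ("fn1", 2.0, [("fn2", 1.0, [("fn3", 0.5, [])]),
                     ("fn3", 0.5, [])])
table = {}
metrics(tree, table)
for name, (excl, incl) in sorted(table.items()):
    print(f"{name}: exclusive={excl} inclusive={incl}")
```

A function that mostly delegates work shows a small exclusive but a large inclusive value, which is exactly the pattern to look for when hunting hot call paths.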


2. Using DTrace script to see how much time is spent in which function

#!/usr/sbin/dtrace -s
#pragma D option bufsize=1g
#pragma D option specsize=1g
#pragma D option aggsize=1g
#pragma D option dynvarsize=1g

pid$target:::entry
{
 self->ts[probefunc] = timestamp;
}

pid$target:::return
/self->ts[probefunc]/
{
    @time[probefunc, probemod] = sum(timestamp - self->ts[probefunc]);
    self->ts[probefunc] = 0;
}

Note: I have used 1g buffer sizes; tune them as per your machine.

Run this D script using:

sudo dtrace -s functime.d -p 27910 -o dtrace.log

where 27910 is the pid of the process you are examining (in this case webservd or trafficd).

This will generate output as shown below:

  PListFindValue                       libns-httpd40.so                                        3871
  getbucketnum                         libc.so.1                                               4995
  R_SSL_version                        libnnz11.so                                             6106
  http_format_server                   libns-httpd40.so                                        6807
  void SimpleHashTemplateBase::_insert(unsigned long,hashEntry*)  libns-httpd40.so             7059
  long atoi64(const char*)             libns-httpd40.so                                        7288

...


I wrote the wrapper script below to report percentages.

To extract the last column, it looks for spaces after the string "lib" in the module name. For LM1`ld.so.1, which does not start with "lib", I added a temporary hack that renames it before matching.

#!/usr/bin/perl
# Post-process dtrace.log: compute each function's share of total time.
use strict;
use warnings;

my $logfile = "dtrace.log";
my $tmpfile = "temp";

# First pass: add up the time column to get the grand total.
open(my $in, "<", $logfile) || die "Can not open $logfile: $!";
my $total = 0;
while (<$in>) {
    chomp;
    # Hack: LM1`ld.so.1 does not start with "lib", so rename it first.
    s/LM1`ld\.so\.1/libld.so.1/g;
    # Turn "<name> lib<module> <time>" into "<name> lib<module>=<time>".
    s/(.*\s*)lib([^\s]*)\s*([0-9]*)/$1lib$2=$3/g;
    my ($name, $time) = split('=');
    $total += $time if defined $time && $time ne '';
}
close($in);
print "total = $total\n";

# Second pass: emit "<time> <percent> <name>" for each line.
my $total_rounded = 0;
open(my $out, ">", $tmpfile) || die "Can not open $tmpfile: $!";
open($in, "<", $logfile) || die "Can not open $logfile: $!";
while (<$in>) {
    chomp;
    s/LM1`ld\.so\.1/libld.so.1/g;
    s/(.*\s*)lib([^\s]*)\s*([0-9]*)/$1lib$2=$3/g;
    my ($name, $time) = split('=');
    next unless defined $time && $time ne '';
    my $rounded = sprintf("%.10f", ($time * 100) / $total);
    $name =~ s/libld\.so\.1/LM1-ld.so.1/g;    # undo the hack for display
    print $out "$time   $rounded%   $name\n";
    $total_rounded += $rounded;
}
close($in);
close($out);
print "total rounded = $total_rounded\n";
system("sort -n -r $tmpfile | tee $logfile.sorted");
unlink($tmpfile);


This produced output in the following format:

20318357230394   16.4117840797%     poll                                      libc.so.1
20317702791746   16.4112554688%     _pollsys                                  libc.so.1
20313615393944   16.4079539474%     __pollsys                                 libc.so.1
2684593654698   2.1684317735%     int DaemonSession::GetConnection()          libns-httpd40.so

...

3. Using DTrace Profile Probes

#!/usr/sbin/dtrace -s

profile-1000
/pid == $1 && arg1 != NULL/
{
    @proc[umod(arg1), ufunc(arg1), ustack()] = count();
}

END
{
    printa(@proc);
}


Run it as:

dtrace -x ustackframes=20 -s profiler-probes.d <pid> -o dtrace.log

It creates output in the format: module name, function name, the user stack, and the number of times that stack was sampled, followed by a blank line.

  libc.so.1                                           libc.so.1`mutex_lock_impl        
              libc.so.1`mutex_lock_impl+0xaf
              libc.so.1`mutex_lock+0xb
              libumem.so.1`umem_cache_alloc+0x33
              libumem.so.1`umem_alloc+0xaf
              libumem.so.1`malloc+0x2e
              libnnz11.so`sys_malloc+0x23
             1068

....
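If you just want the hottest stacks out of that log, the printa records are easy to post-process. Here is a small Python sketch (my own helper, assuming the layout shown above: indented stack-frame lines followed by a count line, records separated by blank lines):

```python
# Parse printa(@proc)-style records: each block of stack-frame lines
# ends with a bare sample count; rank blocks by that count.
def hottest(text, top=3):
    records = []
    for block in text.strip().split("\n\n"):
        lines = [l.strip() for l in block.splitlines() if l.strip()]
        if len(lines) >= 2 and lines[-1].isdigit():
            # (count, top frame of the stack)
            records.append((int(lines[-1]), lines[0]))
    return sorted(records, reverse=True)[:top]

sample = """
  libc.so.1`mutex_lock_impl+0xaf
  libc.so.1`mutex_lock+0xb
  1068

  libumem.so.1`malloc+0x2e
  42
"""
print(hottest(sample))
```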



4. References

Tuesday Feb 05, 2008

Using HTTP compression to speed up content delivery in Sun Java System Web Server 7.0 update 2



If you are looking for faster web page downloads, you can use the HTTP compression feature, which compresses your content to speed up delivery. It is especially beneficial for low-bandwidth connections. Compression also reduces the number of bytes transferred and improves overall server performance.

How it Works

Browsers that support compressed content send an Accept-encoding header with the value gzip, deflate. The Web Server sees this header, chooses to serve compressed content, and sends a Content-encoding: gzip response header. On seeing this header, the browser decompresses the content and renders it.
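The negotiation itself is simple enough to sketch in a few lines of Python (a stand-in for what the server does internally, not actual Web Server code):

```python
import gzip

def respond(body, accept_encoding):
    """Compress the body only when the client advertises gzip support."""
    if "gzip" in accept_encoding.lower():
        return {"Content-Encoding": "gzip"}, gzip.compress(body)
    return {}, body

body = b"<html>" + b"some repetitive page content\n" * 200 + b"</html>"
headers, payload = respond(body, "gzip, deflate")
print(headers, len(body), "->", len(payload))
```

For a client that does not send gzip in Accept-encoding, respond() returns the body untouched with no Content-Encoding header.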

Now the question is how the HTTP compression feature can be enabled and configured in obj.conf. Let me explain that in detail in this blog.

HTTP Compression of Static Files

    For dynamic compression of static files in Sun Java System Web Server 7.0 Update 2, we need to use the compress-file SAF together with the find-compressed SAF.

    The compress-file function creates the compressed file in the specified subdirectory, provided the file size is between min-size and max-size, the first time a request is sent for the URI. If the Web Server had to compress a static file for every request, that would not really improve performance: it would reduce network traffic but cause more CPU usage. So compression happens only once, and is repeated only when the uncompressed original file is modified. If check-age is set to true, the compressed file is recreated whenever it is not as recent as the non-compressed version.

    The find-compressed function checks if a compressed version of the requested file is available. If the following conditions are met, find-compressed changes the path to point to the compressed file:
  • A compressed version is available.
  • The compressed version is as recent as the non-compressed version.
  • The client supports compression.
  • The HTTP method is GET or HEAD.
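Those four conditions boil down to a check like the following Python sketch (the function name and layout are mine; it only mimics the documented logic):

```python
import os

def pick_compressed(path, accept_encoding, method, check_age=True):
    """Return path + '.gz' when the find-compressed conditions all hold."""
    gz = path + ".gz"
    if method not in ("GET", "HEAD"):           # method must be GET or HEAD
        return path
    if "gzip" not in accept_encoding.lower():   # client must support compression
        return path
    if not os.path.exists(gz):                  # compressed version must exist
        return path
    if check_age and os.path.getmtime(gz) < os.path.getmtime(path):
        return path                             # compressed copy is stale
    return gz
```

With check_age=False the staleness check is skipped, matching check-age set to no.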

Example

    This is an example of how the server compresses HTML files and places them in a .compressed-files directory (relative to the original file location). Modify the default object of obj.conf as shown below:

<Object name="default">
NameTrans fn="assign-name" from="*.html" name="find-compressed"
...
Service method=(GET|HEAD|POST) type=*~magnus-internal/* fn=compress-file subdir=".compressed-files"
Service method=(GET|HEAD|POST) type=*~magnus-internal/* fn=send-file
...
</Object>

<Object name="find-compressed">
PathCheck fn="find-compressed"
</Object>

    However, if you plan to put precompressed content yourself in the location where the original file is present, you can just use the find-compressed SAF. A compressed version of a file must have the same file name as the non-compressed version, but with a .gz suffix. For example, the compressed version of a file named /httpd/docs/index.html would be named /httpd/docs/index.html.gz. To compress files, you can use the freely available gzip program.
<Object name="default">
NameTrans fn="assign-name" from="*.html" name="find-compressed"
...
</Object>
<Object name="find-compressed">
PathCheck fn="find-compressed"
</Object>
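To produce those precompressed siblings yourself you can run gzip by hand, or script it. Here is a small Python sketch (my own helper, following the name.gz convention described above; the suffix list is an assumption) that walks a docroot:

```python
import gzip, os

def precompress(docroot, suffixes=(".html", ".css", ".js")):
    """Write a .gz sibling (index.html -> index.html.gz) for each text file."""
    for dirpath, _, files in os.walk(docroot):
        for name in files:
            if not name.endswith(suffixes):
                continue
            src = os.path.join(dirpath, name)
            with open(src, "rb") as f:
                data = f.read()
            with open(src + ".gz", "wb") as f:
                f.write(gzip.compress(data, compresslevel=9))
```

Since the originals rarely change in this setup, compressing once at deploy time with the maximum level costs nothing per request.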

HTTP Compression of Dynamic Content

    The http-compression filter compresses outgoing dynamic content such as the output from SHTML pages, CGI programs, or pages created with JavaServer Pages (JSP) technology that need to be interpreted by the server.

Example

Output fn="insert-filter" type="text/*" filter="http-compression" vary="on" compression-level="9"

In this example, type="text/*" restricts compression to documents that have a MIME type of text/* (for example, text/ascii, text/css, text/html, and so on).

Suppose you want to compress only JSPs. Modify the default object of obj.conf as shown below:
<Object name="default">
NameTrans fn="assign-name" from="*.jsp" name="compress-on-demand"
...
</Object>

<Object name="compress-on-demand">
Output fn="insert-filter" filter="http-compression"
</Object>

Parameters

Here is a list of parameters you can use to tune these functions.

Parameters for compress-file

  • subdir
    • (Optional) A directory name only (not a full path), relative to the directory in which the original non-compressed file is located.
    • To overwrite a precompressed file lying in the docroot, set subdir to ".".
    • The default value is ".", which will overwrite any precompressed .gz files.
  • check-age          (Optional) The values can be true or false. The default value is true.
  • vary               (Optional) The values can be true or false. The default value is true.
  • compression-level  (Optional) The values can be 1 to 9. The default value is 6.
  • min-size           (Optional) The values can be 0 to INT_MAX. The default value is 256.
  • max-size           (Optional) The values can be min-size to INT_MAX. The default value is 1048576.

Parameters for find-compressed

  • check-age
    • (Optional)  Specifies whether to check if the compressed version is older than the non-compressed version. 
    • The values can be yes or no. By default, the value is set to yes.
    • If set to yes, the compressed version will not be selected if it is older than the non-compressed version.
    •  If set to no, the compressed version is always selected, even if it is older than the non-compressed version.
  • vary 
    • (Optional) Specifies whether to insert a Vary: Accept-Encoding header.
    • The values can be yes or no. By default, the value is set to yes.
    • If set to yes, a Vary: Accept-Encoding header is always inserted when a compressed version of a file is selected.
    •  If set to no, a Vary: Accept-Encoding header is never inserted. 

Parameters for http-compression

  • vary 
    • Controls whether the filter inserts a Vary: Accept-encoding header.
    • If vary is absent, the default value is yes. yes tells the filter to insert a Vary: Accept-encoding header when it compresses content. 
    • no tells the filter to never insert a Vary: Accept-encoding header.
  • fragment-size
    • Size in bytes of the memory fragment used by the compression library to control how much to compress at a time.
    • The default value is 8096.
  • compression-level
    • Controls the compression level used by the compression library.
    • Valid values are from 1 to 9. A value of 1 results in the best speed.
    • A value of 9 results in the best compression.
    • The default value is 6.
  • window-size 
    • Controls an internal parameter of the compression library.
    • Valid values are from 9 to 15.
    • Higher values result in better compression at the expense of memory usage.
    • The default value is 15.
  • memory-level 
    • Controls how much memory is used by the compression library.
    • Valid values are from 1 to 9.
    • A value of 1 uses the minimum amount of memory but is slow.
    • A value of 9 uses the maximum amount of memory for optimal speed.
    • The default value is 8.
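The last three parameters map onto the underlying zlib library: compression-level, window-size, and memory-level correspond to zlib's level, wbits, and memLevel arguments. You can get a feel for the speed/size trade-off with a few lines of Python (illustrative only; inside the server the filter is native code):

```python
import zlib

def deflate(data, level=6, window_size=15, memory_level=8):
    # level, wbits and memLevel mirror the compression-level, window-size
    # and memory-level parameters of the http-compression filter.
    c = zlib.compressobj(level, zlib.DEFLATED, window_size, memory_level)
    return c.compress(data) + c.flush()

data = b"<html><body>" + b"the same line repeated\n" * 2000 + b"</body></html>"
for level in (1, 6, 9):
    print("level", level, "->", len(deflate(data, level=level)), "bytes")
```

On highly redundant content like this, level 9 produces a noticeably smaller result than level 1 at the cost of more CPU time.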

Using wadm (Administration CLI) to configure these HTTP compression features

You can use the following wadm commands to configure these.
For compression of static content i.e. find-compressed and compress-file functions use
wadm> enable-precompressed-content
Usage: enable-precompressed-content --help|-?
  or   enable-precompressed-content [--echo] [--no-prompt] [--verbose] [--uri-pattern=pattern] [--no-vary-header] [--no-age-check] [--compress-file] [--sub-dir=sub-directory] [--compression-level=1-9] [--min-size=size-in-bytes] [--max-size=size-in-bytes] --config=config-name --vs=vs-name
CLI014 vs is a required option.
wadm>
For compression of dynamic content i.e. http-compression function use
wadm> enable-on-demand-compression
Usage: enable-on-demand-compression --help|-?
  or   enable-on-demand-compression [--echo] [--no-prompt] [--verbose] [--uri-pattern=pattern] [--no-vary-header] [--fragment-size=size] [--compression-level=1-9] --config=config-name --vs=vs-name
CLI014 vs is a required option.

You can also try experimenting with other related CLIs
  • get-precompressed-content-prop
  • get-on-demand-compression-prop
  • disable-precompressed-content
  • disable-on-demand-compression

How do I test if my configuration is working or not

    You can use telnet to submit HTTP requests, impersonating a web browser. The request lines (GET / HTTP/1.0 and, in the second example, Accept-encoding: gzip) are what I typed. Here's an example on a Solaris machine testserver, where I have a Web Server instance running on port 2600:
$ telnet testserver 2600
Trying 1.2.3.4...
Connected to testserver.
Escape character is '^]'.
GET / HTTP/1.0
[press enter twice]

HTTP/1.1 200 OK
Server: Sun-Java-System-Web-Server/7.0
Date: Wed, 06 Feb 2008 12:09:45 GMT
Content-type: text/html
Last-modified: Fri, 25 Jan 2008 00:12:38 GMT
Content-length: 355
Connection: close

<html>...</html>
Connection to testserver closed by foreign host.
$
I'll try that same HTTP request again, but this time indicate that I want a compressed response:
$ telnet testserver 2600
Trying 1.2.3.4...
Connected to testserver.
Escape character is '^]'.
GET / HTTP/1.0
[press enter]
Accept-encoding: gzip [press enter twice]


HTTP/1.1 200 OK
Server: Sun-Java-System-Web-Server/7.0
Date: Wed, 06 Feb 2008 12:09:45 GMT
Content-type: text/html

Last-modified: Fri, 25 Jan 2008 00:12:38 GMT
Content-encoding: gzip
Vary: accept-encoding
Connection: close

MPËNÄ0...ó²õã\\å
Connection to testserver closed by foreign host.
$

When I ask for a compressed response, the server responds with a Content-encoding: gzip header and returns the compressed response instead of the HTML.


Note that the compress-file SAF is not available in Web Server 6.1; in that case you have to put precompressed content into the document root for find-compressed to work.

References

  1. Web Server software forum questions
  2. Compressing Web Content with mod_gzip and mod_deflate
  3. Best Practices for Speeding Up Your Web Site
  4. HTTP Compression
  5. Web Server 7.0 update 2 documentation about find-compressed function
  6. Web Server 7.0 update 2 documentation about http-compression function
  7. my previous blog Dynamic Compression Of Static Files

About

Meena Vyas
