Solaris - measuring the memory overhead of my process


What's the memory overhead of my Solaris process?

I'd like to know the amount of additional system memory required when my process is running. It sounds like a simple request, but the standard tools like ps, prstat and top don't give me that information... so here's how I currently go about measuring the memory overhead of my processes.

Types of memory usage are typically defined as:
  1. Overall allocated virtual memory
  2. Resident memory
  3. Shared memory
  4. Private/Anonymous memory
The excellent pmap tool provides all of this information for a process, listing every memory segment that the process uses. What it doesn't tell me is the incremental delta introduced by my process.

Here's an example output of pmap -x:


$ pmap -x 12167
12167:  /home/nstephen/bin/testapp
 Address  Kbytes     RSS    Anon  Locked Mode   Mapped File
00010000      24      16       -       - r-x--  testapp
00024000       8       8       -       - rwx--  testapp
00026000      32      32       -       - rwx--    [ heap ]
FF100000     584     448       -       - r-x--  libnsl.so.1
FF1A2000      40      40       -       - rwx--  libnsl.so.1
FF1AC000      24      16       -       - rwx--  libnsl.so.1
FF200000     864     816       -       - r-x--  libc.so.1
FF2E8000      32      32       -       - rwx--  libc.so.1
FF2F0000       8       8       -       - rwx--  libc.so.1
FF320000       8       8       -       - r-x--  libc_psr.so.1
FF330000      24      16       8       - rwx--    [ anon ]
FF340000      16      16       -       - r-x--  libcmd.so.1
FF354000       8       8       -       - rwx--  libcmd.so.1
FF360000       8       8       8       - rwx--    [ anon ]
FF36C000       8       8       -       - rwxs-    [ anon ]
FF378000      16      16       -       - r-x--  libthread.so.1
FF380000      16      16       -       - r-x--  libsecdb.so.1
FF394000       8       8       -       - rwx--  libsecdb.so.1
FF3A0000       8       8       -       - r-x--  libdl.so.1
FF3B0000     184     184       -       - r-x--  ld.so.1
FF3EE000       8       8       8       - rwx--  ld.so.1
FF3F0000       8       8       8       - rwx--  ld.so.1
FFBFC000      16      16       8       - rwx--    [ stack ]
-------- ------- ------- ------- -------
total Kb    1952    1744      40       -

Example output of pmap -x showing all the mapped memory segments

Now for this process, I can see that the overall allocated virtual memory is 1952Kb, but that's just virtual memory: it doesn't tell me how much real memory on the machine is being used.

The resident memory is more useful - it's 1744Kb. This is the actual physical memory that my process is using, but it contains a whole bunch of memory that's shared with other processes.

The anonymous memory is 40Kb... this is the actual physical memory that is allocated uniquely to my process and not shared with any other process.

So does my process require 1744Kb or does it require 40Kb? Well, that depends on who else is using what in the shared memory segments.

If I start up more instances of this process while one is already running, I can expect the new instances to share all the shared memory segments and only take up approximately 40Kb each of their own private memory.

But what is the overhead of that first process? Which things are already shared (eg. libc), and which aren't? pmap doesn't tell me.

To measure this, I need to identify all the shared memory segments that aren't shared by any other process - the resident memory for these segments needs to be accounted for in the overhead of my first process instance.
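
For a single library I can check this by hand. Here's an illustrative snippet (not part of the script below) that lists any processes other than our example PID 12167 which also have libnsl.so.1 mapped - if it prints nothing, the resident memory for that library belongs entirely to my process:

# which processes other than 12167 also map libnsl.so.1?
for p in `ps -e -o pid= | awk '$1 != 12167'` ; do
    pmap $p 2>/dev/null | /usr/xpg4/bin/grep -q libnsl.so.1 && echo $p
done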

I've written myself a little shellscript to do this calculation - it takes a 'pmap -x' snapshot of all processes in the system (it needs to run as root to do this), then, for each process, adds up not only the per-process anonymous memory but also the resident size of those shared memory segments that aren't mapped by any other process...



#!/bin/ksh
# script from http://blogs.sun.com/nickstephen
TMPDIR=/tmp/SIZE.$$
rm -rf $TMPDIR
mkdir $TMPDIR

trap "/bin/rm -rf $TMPDIR" QUIT HUP INT TERM EXIT

# Go through the process table getting dumps from pmap -x
# and extracting lists of the mapped objects for each process
ps -e | tail +2 | while read proc tty time name ; do
    proctmp=$TMPDIR/$proc
    pmap -x $proc > $proctmp.pmap
    cat $proctmp.pmap | tail +3 | awk '/^[0-9A-Z]/ { print $NF }' | sort | uniq > $proctmp.map
done 2>/dev/null

# Display the header row on stderr so that it can be ignored during a sort
echo 'TOTAL\tRSS\tPID: command' >&2
echo '=====\t===\t============' >&2

# Now iterate through each of the process map files finding map entries
# that are unique to just this PID - we adjust the memory used by a process
# to include the resident size of any maps owned just by the process
ls -1 $TMPDIR/*.map | while read filename ; do

    # skip processes that we failed to read data for
    if [ ! -s $filename ] ; then
        continue;
    fi

    pid=`basename $filename .map`;
    pmapfile=$TMPDIR/$pid.pmap
   
    othermapfiles=`ls -1 $TMPDIR/*.map | grep -v $filename`

    # totalize memory from all map entries not in other map files
    total=0
    cat $filename | while read libname ; do
        /usr/xpg4/bin/grep -q $libname $othermapfiles
        if [ $? -ne 0 ] ; then
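            # add this segment's RSS minus Anon ($3 - $4) so that anonymous
            # pages aren't double-counted (they're added from the summary line below)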
            total=$(($total+`awk '$NF ~ /'$libname'/ { total=total+$3-$4 }
END {printf "%d",total}' $pmapfile`))
        fi
    done

    # read anonymous memory from pmap -x summary
    anonmem=`tail -1 $pmapfile | awk '{print $(NF-1)}'`
    # if no anonymous memory, set to zero
    if [ "$anonmem" = "-" ] ; then
        anonmem=0;
    fi

    # total memory for process = anonymous memory + unique map entries
    total=$(($total+$anonmem))
   
    # RSS memory for process comes from pmap -x summary
    rss=`tail -1 $pmapfile | awk '{print $(NF-2)}'`

    # now display per-process info
    echo $total'\t'$rss'\t'`head -1 $pmapfile`
done | sort -n

Shellscript to parse the output across all processes from pmap -x

This provides me with the following output for my testapp process above:



$ ksh ~/size.ksh |grep testapp
TOTAL   RSS     PID: command
=====   ===     ============
64      1744    12167:  /home/nstephen/bin/testapp
Example output

The total additional memory required for my process (in user-space; I don't know how to measure kernel space) is 64Kb, not just the 40Kb of anonymous memory pmap told me about. Looking back at the output of 'pmap -x' above, you can see that the binary itself, 'testapp', accounts for the extra 24Kb of resident mapped memory. Were this app using its own libraries (not just the standard system libraries), you'd also see those "private" shared libraries show up in the memory footprint.
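
As a quick sanity check - an illustrative one-liner reusing the example PID and the 40Kb anonymous total from the summary line - the two 'testapp' segments account for 16 + 8 = 24Kb of RSS, which together with the 40Kb of anonymous memory gives the 64Kb the script reports:

$ pmap -x 12167 | awk '$NF == "testapp" { rss += $3 } END { print rss + 40 }'
64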

This technique isn't so useful for small processes such as this example; it's only approximate, after all. Memory usage changes over time, and don't forget we're not measuring any memory required for kernel data structures.

I've found this script useful when measuring the incremental memory requirements of larger processes such as databases or Java virtual machines. Now that we have the joys of dtrace, I'm wondering whether dtrace could provide a more powerful way of doing the same thing, for example by placing a per-process probe that counts vminfo page-ins as a process starts up...
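
Something along these lines might be a starting point - a rough, untested sketch using the vminfo provider's pgin probe, with the testapp example above:

$ dtrace -n 'vminfo:::pgin /execname == "testapp"/ { @pages[execname] = sum(arg0); }'

(arg0 is the amount by which the page-in statistic is incremented, so the aggregation totals up the page-ins attributed to testapp while the probe is running.)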