Solaris - measuring the memory overhead of my process
By nickstephen on Feb 06, 2007
What's the memory overhead of my Solaris process?
I'd like to know the amount of additional system memory required when my process is running. It sounds like a simple request, but standard tools like ps, prstat and top don't give me the information... here's how I currently go about measuring the memory overhead of my processes.
Types of memory usage are typically defined as:
- Overall allocated virtual memory
- Resident memory
- Shared memory
- Private/Anonymous memory
Here's an example output of pmap -x:
|Example output of pmap -x showing all the mapped memory segments|
Now for this process, I can see that the overall allocated virtual memory is 1952Kb, but this is just virtual: it doesn't tell me how much real memory on the machine is being used.
The resident memory is more useful - it's 1744Kb. This is the actual physical memory that my process is using, but it includes a whole bunch of memory that's shared with other processes.
The anonymous memory is 40Kb... this is the actual physical memory that is allocated uniquely to my process and not to any other processes.
So does my process require 1744Kb or does it require 40Kb? Well, that depends on who else is using what in the shared memory segments.
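The three figures above all come straight from the summary line of 'pmap -x'. As a quick illustration, here's a small helper (a sketch, assuming the Solaris 'pmap -x' column order shown above: Address, Kbytes, RSS, Anon, Locked, Mode, Mapped File) that pulls them out:

```shell
# pmap_totals: read `pmap -x` output on stdin and report the summary
# line's virtual, resident and anon figures.
# Assumes Solaris column order: "total Kb <virt> <rss> <anon> <locked>".
pmap_totals() {
  awk '$1 == "total" {
    printf "virtual %s Kb, resident %s Kb, anon %s Kb\n", $3, $4, $5
  }'
}

# Example: pmap -x <pid> | pmap_totals
```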
If I start up more instances of this process while one is already running, I can expect the new instances to share all the shared memory segments and take up only approximately 40Kb each of their own private memory.
But what is the overhead of that first process? Which things are already shared (eg. libc), and which aren't? pmap doesn't tell me.
To measure this, I need to identify all the shareable memory segments that aren't in fact mapped by any other process - the resident memory for these segments needs to be accounted for in the overhead of my first process instance.
I've written myself a little shellscript to do this calculation - it takes a 'pmap -x' snapshot of all processes in the system (it needs to run as root to do this), then goes through, for each process, adding up not only the per-process anonymous memory but also those shared memory segments that aren't mapped by any other process...
|Shellscript to parse the output across all processes from pmap -x|
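The original script isn't reproduced here, but the calculation it performs can be sketched roughly as follows. This is a simplified, hypothetical reconstruction: it assumes each process's 'pmap -x' snapshot has been saved as `pmap.<pid>` in a directory, and it approximates "private" file-backed resident memory as RSS minus the Anon column for each segment:

```shell
# measure_overhead <snapshot-dir> <target-pid>
# Sums the target process's anonymous memory, plus the resident memory of
# file-backed segments that no other snapshot maps (i.e. shareable segments
# that are not actually shared).
measure_overhead() {
  dir=$1; target=$2
  awk -v target="$target" '
    FNR == 1 {
      # Derive the pid from the snapshot file name, e.g. .../pmap.1234
      n = split(FILENAME, parts, ".")
      pid = parts[n]
    }
    # Data lines start with a hex address; headers and totals do not.
    /^[0-9A-Fa-f]+[ \t]/ && NF >= 7 {
      rss  = ($3 == "-") ? 0 : $3
      anon = ($4 == "-") ? 0 : $4
      obj = $7
      for (i = 8; i <= NF; i++) obj = obj " " $i
      if (pid == target) anon_total += anon
      # [ anon ], [ stack ], [ heap ]: already counted via the Anon column.
      if (obj ~ /^\[/) next
      # Record which pids map each file-backed object.
      if (!((obj, pid) in seen)) { seen[obj, pid] = 1; mappers[obj]++ }
      # Non-anonymous resident pages of this object in the target process.
      if (pid == target) rss_of[obj] += rss - anon
    }
    END {
      unique = 0
      for (obj in rss_of)
        if (mappers[obj] == 1) unique += rss_of[obj]
      printf "anon %d Kb + unique resident %d Kb = %d Kb\n",
             anon_total, unique, anon_total + unique
    }
  ' "$dir"/pmap.*
}
```

Applied to the example above, the binary 'testapp' would be the only object mapped solely by the target process, so its resident pages get added on top of the anonymous total.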
This provides me with the following output for my testapp process above:
... The total additional memory required by my process (in user space; I don't know how to measure kernel space) is 64Kb, not just the 40Kb of anonymous memory pmap reported. Looking back at the output of 'pmap -x' above, you can see that the binary itself, 'testapp', required 24Kb of resident mapped memory. Were this app using its own libraries (not just the standard libraries), you'd also see those "private" shared libraries show up in the memory footprint.
This technique isn't so useful for small processes such as this example; it's only approximate, after all. Memory usage changes over time, and don't forget we're not measuring any memory required for kernel data structures.
I've found this script useful when measuring the incremental memory requirements of larger processes such as databases or Java virtual machines. Now that we have the joys of dtrace, I'm wondering whether dtrace could provide a more powerful way of doing the same thing - for example, by placing a per-process probe that counts vminfo page-ins as a process starts up...
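That dtrace idea might look something like the following sketch - a speculative, untested D script using the vminfo provider's pgin probe, filtered to the process launched with -c:

```d
/*
 * count-pgin.d - speculative sketch: count page-ins attributed to one
 * process as it starts up. Run (as root) with:
 *   dtrace -s count-pgin.d -c ./testapp
 */
vminfo:::pgin
/pid == $target/
{
        @pageins = count();
}

dtrace:::END
{
        printa("page-ins during startup: %@d\n", @pageins);
}
```

This counts page-in events rather than pages, and says nothing about sharing, so it would complement the pmap-based accounting rather than replace it.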