Using libumem to detect write-beyond-what-you-allocate errors
By Hema on Nov 18, 2007
I am a big fan of libumem. I've been using it for years to debug application crashes reported by our customers, and I've found it very useful in isolating the source of memory corruption. I thought I'd share some of my experiences here.
I will use some examples from real cases, although I might obscure the names of some of the libraries.
One of our elite customers reported that their application was crashing, and they suspected that Java was the cause. This was a fairly complex Java application that involved a few native libraries as well.
One of the challenging parts of a support job is isolating the problem; even more challenging is convincing the customer that the problem is elsewhere, not where they think it is.
The first thing to do, of course, is to open the core file in mdb and run the ::umem_verify dcmd. This prints the name, address, and integrity of each cache. Then take the address of the cache containing the corrupt buffer and run ::umem_verify against it; that gives you the address of the corrupt buffer.
# mdb java corefile
Loading modules: [ libumem.so.1 libthread.so.1 libc.so.1 ]
> ::umem_verify
Cache Name Addr Cache Integrity
umem_bufctl_audit_cache 254008 clean
umem_alloc_8 256008 clean
umem_alloc_16 25c008 clean
umem_alloc_24 25e008 1 corrupt buffer
umem_alloc_32 260008 clean
umem_alloc_40 262008 clean
The corrupt buffer comes from the umem_alloc_24 cache. Running ::umem_verify against that cache's address gives:
> 25e008::umem_verify
Summary for cache 'umem_alloc_24'
buffer 1bc9628 (allocated) has a corrupt redzone size encoding
Let's dump the buffer:
The first 8 bytes contain metadata, so the actual buffer starts at 0x1bc9628 + 0x8 = 0x1bc9630:
0x1bc9630: 3137322e 32332e31 37302e37 3700cafe
feedface 1498 1bcdc98
The contents of the buffer indicate that the application has written 14 bytes. The content is the ASCII encoding (shown in hex) of the IP address 172.23.170.77 followed by a NULL terminator: 13 characters plus the terminator, 14 bytes in all.
From the redzone data, let's find out the actual number of bytes that this application allocated.
The redzone is 8 bytes that follow the buffer. When a buffer is allocated with libumem debugging enabled, the first 4 bytes of the redzone contain the pattern 0xfeedface and the last 4 bytes contain an encoding of the size the application actually requested. Do the following math to recover that size:
0x1498 == 5272t (decimal)
5272 / 251 = 21
21 - 8 = 13 bytes
Aha, someone allocated 13 bytes and wrote 14 bytes into it -- that explains it all.
Now, let's see who allocated this buffer. To do that, take the pointer that follows the redzone data (0x1bcdc98) and run ::bufctl_audit against it:
ADDR BUFADDR TIMESTAMP THR LASTLOG CONTENTS CACHE SLAB NEXT DEPTH
01bcdc98 01bc9628 a35debd7cb0b0 3b 000d92c0 00000000 0025e008 00934450 00000000 f
The bufctl audit showed that this buffer was allocated in the native library libobscure.so via the Java Native Interface. When they allocated memory to store the IP address, they did not take the NULL terminator into account and therefore were writing beyond what they actually allocated. This information was enough to convince the customer that the corruption was not in Java but in a native library that they were using.
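The kind of off-by-one the customer hit can be reproduced in a few lines of C. The function names here are hypothetical, not taken from the customer's library; the point is simply that strlen() does not count the NULL terminator, so the buffer must be one byte larger than the string.

```c
#include <stdlib.h>
#include <string.h>

/* Buggy: allocates strlen(ip) bytes (13 for "172.23.170.77"),
 * but strcpy writes strlen(ip) + 1 bytes, clobbering the byte
 * just past the buffer -- libumem's redzone. */
char *save_ip_buggy(const char *ip) {
    char *buf = malloc(strlen(ip));      /* one byte short */
    if (buf != NULL)
        strcpy(buf, ip);                 /* writes 14 bytes into 13 */
    return buf;
}

/* Fixed: account for the terminator. */
char *save_ip_fixed(const char *ip) {
    size_t len = strlen(ip) + 1;         /* 13 chars + NULL = 14 bytes */
    char *buf = malloc(len);
    if (buf != NULL)
        memcpy(buf, ip, len);
    return buf;
}
```

Run with libumem preloaded (LD_PRELOAD=libumem.so.1 and UMEM_DEBUG=default in the environment), the buggy version produces exactly the "corrupt redzone size encoding" complaint that ::umem_verify reported above.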