Monday Nov 19, 2007

libumem to detect modify-after-free corruption

Here is another example of how libumem can help us detect memory corruption.

The problem reported by the customer was that Sun Web Server 6.1, protected by Sun's Access Manager Policy Agent, was crashing in production. The problem did not show up in their staging environment; only the servers in production were crashing.

The customer sent in several core files for us to analyze. The stack trace in the first core file I looked at showed that we were aborting in libmtmalloc.so.1.

=>[1] libc.so.1:__lwp_kill(0x0, 0x6, 0x0, 0xfe0bc000, 0x5, 0xaf295c8), at 0xfe0a0218
[2] libc.so.1:raise(0x6, 0x0, 0xe1efba50, 0xffffffff, 0x74ab0, 0xaf02900), at 0xfe050c80
[3] libc.so.1:abort(0x0, 0x0, 0x88, 0xffffffff, 0x74ab0, 0xaf02870), at 0xfe06e98
[4] libmtmalloc.so.1:free(0xaf28060, 0xfffb, 0xaf02800, 0xaf02828, 0x74ab0, 0xaf02828),

I vaguely remembered that there was a bug in mtmalloc that returned an already freed pointer. I searched our bug database and found the bug. As I read through its synopsis, it became clear that we were NOT running into it: that bug was about libmtmalloc's realloc() returning an already freed pointer, and I don't think we were doing a realloc here. I ruled out that bug, and just to be sure, I checked the mtmalloc patch level on the system and confirmed that they were running the latest mtmalloc patch.

I looked at the other core files sent by the customer, and this time the crash was somewhere else. As I analyzed the core files and pstack output, I noticed that the crashes were random in nature. The randomness of the crashes pointed to memory corruption. I initially suspected a double-free type of error.

fe0a0218 _lwp_kill (6, 0, e1ffba50, ffffffff, 74620, 2969700) + 18
fe036e98 addsev   (0, 0, 88, ffffffff, 74620, 2969670)
ff390bcc free     (3e0d188, fffb, 2969600, 2969628, 74620, 2969628) + 1e0
fd726f08 void*__Crun::vector_del(void*,unsigned,void(*)(void*)) (3e0d188, 0, 0, ffffffff, 0,
fd70e934 std::vector<bool>::iterator&std::vector<bool>::iterator::operator++()

I requested the customer to use libumem; the customer obliged (BIG thank you) and sent us three core files generated with libumem enabled.

I opened the core file in mdb, ran ::umem_verify, and to my surprise the integrity of all the caches came up as clean. Where do I go from here?

Okay, I ran the ::umem_status command, and it printed the exact nature of the corruption, including the stack trace of the thread that last freed the buffer and the offset at which someone wrote to the buffer after it was freed.

This information was sufficient for the Sun Access Manager engineering team to come up with a fix and release hotpatch 2.2-01 for the Policy Agent.

How good is that! I told you, libumem is so powerful yet so simple and easy to use.

mdb>::umem_status
Status: ready and active
Concurrency: 8
Logs: transaction=256k content=256k fail=256k (inactive)
Message buffer:
umem allocator: buffer modified after being freed
modification occurred at offset 0x20 (0xdeadbeefdeadbeef replaced by 0xdeadbeeedeadbeef)
buffer=a7a0708 bufctl=a813f68 cache: umem_alloc_96
previous transaction on buffer a7a0708:
thread=28 time=T-284.970780700 slab=a5dede0 cache: umem_alloc_96
libumem.so.1`umem_cache_free+0x4c
libumem.so.1`process_free+0x006c
libames6.so`PR_DestroyLock
libames6.so`~Node
libames6.so`~RefCntPtr
libames6.so`~Tree
libames6.so`~PolicyEntry
libames6.so`~RefCntPtr
libames6.so`cleanup+0x0030
libames6.so`operator+0x01fc
libames6.so`spin+0x023c
libnspr4.so`_pt_root+0x00d4
libthread.so.1`_lwp_start
umem: heap corruption detected
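
For illustration, here is a minimal, hypothetical C sketch of the kind of bug that produces a report like the one above. It is not the customer's code; the struct and field names are made up. When an application runs with libumem's debugging enabled (for example, LD_PRELOAD=libumem.so.1 with UMEM_DEBUG=default and UMEM_LOGGING=transaction), libumem fills freed buffers with the 0xdeadbeef pattern, so a stray write into a freed buffer disturbs that pattern and is reported, together with the offset that changed and the stack of the thread that freed the buffer, the next time libumem hands the buffer out or when you run ::umem_verify.

/* Hypothetical sketch of a modify-after-free bug; names are invented. */
#include <stdlib.h>

struct session {
    char name[32];
    int  refcnt;                /* lives at some fixed offset in the buffer  */
};

void drop_session(struct session *s)
{
    free(s);                    /* buffer handed back to libumem             */
    s->refcnt--;                /* BUG: write after free; under UMEM_DEBUG   */
                                /* the 0xdeadbeef free pattern at this       */
                                /* offset is overwritten, and libumem later  */
                                /* reports which offset changed and shows    */
                                /* the stack that freed the buffer.          */
}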


Sunday Nov 18, 2007

libumem to detect write-beyond-what-you-allocate errors


I am a big fan of libumem. I've been using it for years to debug application crashes reported by our customers, and I've found it very useful in isolating the source of corruption. I thought I'd share some of my experiences here.

I will use some examples from real cases, although I might obscure the names of some of the libraries.

One of our elite customers reported that their application was crashing, and they suspected that Java was the cause of the crash. This was a pretty complex Java application that involved a few native libraries as well.

One of the challenging parts of a support job is isolating the problem; even more challenging is convincing the customer that the problem is elsewhere, not where they think it is.

But you know what, with libumem you'll see how easy this can be. Okay, I asked the customer to run their application with libumem enabled and send us another crash, and so they did.

The first thing to do, of course, is to open the core file in mdb and run the ::umem_verify command. This prints the name, address, and integrity of each cache. Then take the address of the cache containing the corrupt buffer and run ::umem_verify against it; this gives you the address of the corrupt buffer.

# mdb java corefile
Loading modules: [ libumem.so.1 libthread.so.1 libc.so.1 ]
> ::umem_verify
Cache Name                    Addr      Cache Integrity
umem_bufctl_audit_cache       254008    clean
umem_alloc_8                  256008    clean
umem_alloc_16                 25c008    clean
umem_alloc_24                 25e008    1 corrupt buffer
umem_alloc_32                 260008    clean
umem_alloc_40                 262008    clean

The corrupt buffer comes from the umem_alloc_24 cache:

> 25e008::umem_verify
Summary for cache 'umem_alloc_24'
  buffer 1bc9628 (allocated) has a corrupt redzone size encoding

Let's dump the buffer:

The first 8 bytes contain metadata; the actual buffer starts at 0x1bc9628 + 0x8 = 0x1bc9630.


> 0x1bc9630/7X
0x1bc9630: 3137322e 32332e31 37302e37 3700cafe
feedface 1498 1bcdc98

The contents of the buffer show that the application has written 14 bytes. They are the hexadecimal ASCII encoding of the IP address 172.23.170.77, followed by a NULL character.

From the redzone data, let's find out the actual number of bytes that this application allocated.

The redzone is the 8 bytes that follow the buffer. When a buffer is allocated with libumem, the first 4 bytes of the redzone contain the pattern 0xfeedface, and the last 4 bytes contain an encoded value of the size actually allocated by the application. Do the following math to recover that size.

0x1498 == 5272 (decimal)
5272 / 251 = 21 (the total size handed out, including the 8 bytes of metadata)
21 - 8 = 13 bytes
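
The same arithmetic in C, as a small sketch. It assumes the standard umem/kmem size encoding for the redzone word (encoded value = 251 * size + 1), which is what makes the division by 251 above come out to 21:

#include <stdio.h>

/* Decode the redzone size word found right after the 0xfeedface pattern.
 * Assumes the umem/kmem encoding: encoded = 251 * size + 1. */
int main(void)
{
    unsigned int encoded   = 0x1498;                /* value seen in the dump        */
    unsigned int total     = (encoded - 1) / 251;   /* 21: size umem handed out      */
    unsigned int requested = total - 8;             /* minus the 8 bytes of metadata */

    printf("total = %u bytes, requested by the application = %u bytes\n",
           total, requested);                       /* prints 21 and 13 */
    return 0;
}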

Aha, someone allocated 13 bytes and wrote 14 bytes into it -- that explains it all.

Now let's see who allocated this buffer. To do that, take the value following the redzone data and run the ::bufctl_audit command against it:

> 1bcdc98::bufctl_audit
ADDR BUFADDR TIMESTAMP THR LASTLOG CONTENTS CACHE SLAB NEXT DEPTH
01bcdc98 01bc9628 a35debd7cb0b0 3b 000d92c0 00000000 0025e008 00934450 00000000 f
libumem.so.1`umem_cache_alloc+0x208
libumem.so.1`umem_alloc+0x44
libumem.so.1`malloc+0x2c
libobscure.so.1`decode+0x0020
libobscure.so.1`getItemN+0x0070
libobscure.so.1`_InitSession+0x934
libobscure.so.1`Java_com_xxx_yyy_od_zzzInterface_main+0x01f8
0xf900bc20

As you can see above, this buffer is allocated in the native library libobscure.so via the Java Native Interface. When they allocated the memory to store the IP address, they did not take the NULL character into account and therefore wrote beyond what they actually allocated. This information was enough to convince the customer that the corruption was not in Java but in a native library they were using.
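
The coding mistake behind it is the classic off-by-one on the terminating NUL. Here is a hypothetical sketch of the pattern; the function and its code are invented for illustration, the real code lives in libobscure.so:

#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: the terminating '\0' is not counted when
 * sizing the buffer that will hold the IP address string. */
char *copy_ip(const char *ip)          /* e.g. "172.23.170.77" is 13 chars  */
{
    char *buf = malloc(strlen(ip));    /* BUG: 13 bytes, no room for '\0'   */
    if (buf == NULL)
        return NULL;
    strcpy(buf, ip);                   /* writes 14 bytes, one past the end */
    return buf;                        /* the fix: malloc(strlen(ip) + 1)   */
}

The extra byte lands in the redzone area that libumem places after the requested size, which is exactly why ::umem_verify flagged this buffer as having a corrupt redzone.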
