Watchpoint features in Solaris 10

In my last post I described how watchpoints work in Solaris, or at least how they're supposed to work. In reality, a few small problems have kept large numbers of watchpoints from being practical for complicated programs. I've made some changes in Solaris 10 so that they work in all situations, which earned them a spot on Adam's Top 11-20 Features in Solaris 10.

How watchpoints are used

Typically, watchpoints are used in one of two ways. First, they are used for debugging userland applications. If you know that memory is getting corrupted, or know that a variable is being modified from an unknown location, you can set a watchpoint through a debugger and be notified when the variable changes. In this case, we only have to keep track of a handful of watchpoints. But they are also used for memory allocator redzones, to prevent buffer overflows and memory corruption. For every allocation, you put a watched region on either end, so that if the program tries to access unknown territory, a SIGTRAP signal is sent so the program can be debugged. In this case, we have to deal with thousands of watchpoints (two for every allocation), and we fault on virtually every heap access[1].
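As a rough illustration of the redzone scheme, here is a minimal sketch (a hypothetical layout, not watchmalloc's actual implementation) of bracketing each allocation with two watched guard regions:

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Hypothetical redzone layout: a watched guard region on each side of
 * the caller's buffer.  Any access that strays into a guard region
 * would trip a watchpoint and deliver SIGTRAP to the process.
 */
#define	REDZONE	8	/* bytes of guard on each side (illustrative) */

typedef struct {
	uintptr_t lo_start;	/* watched region below the buffer   */
	uintptr_t buf;		/* address handed back to the caller */
	uintptr_t hi_start;	/* watched region above the buffer   */
	size_t size;		/* requested allocation size         */
} redzone_alloc_t;

/* Allocate size bytes with a redzone on either end. */
static redzone_alloc_t
rz_alloc(size_t size)
{
	uint8_t *raw = malloc(size + 2 * REDZONE);
	redzone_alloc_t a;

	a.lo_start = (uintptr_t)raw;
	a.buf = (uintptr_t)raw + REDZONE;
	a.hi_start = a.buf + size;
	a.size = size;
	/*
	 * A real allocator would now set two watchpoints, one over
	 * [lo_start, buf) and one over [hi_start, hi_start + REDZONE).
	 */
	return (a);
}

/* Would this access land in one of the two watched regions? */
static int
rz_trips(const redzone_alloc_t *a, uintptr_t addr)
{
	return ((addr >= a->lo_start && addr < a->buf) ||
	    (addr >= a->hi_start && addr < a->hi_start + REDZONE));
}
```

Note that both guard regions usually share a page with the buffer itself, which is exactly why every in-bounds heap access faults too: the kernel has to trap the whole page and then decide whether the faulting address is actually watched.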

Watchpoints in strange places

Watchpoints have worked for the most part since they were put into Solaris. Whenever a watchpoint is tripped, we end up in the kernel, where we have to look at the instruction we faulted on and take appropriate action. There were some instructions that we didn't quite decode properly when there were watchpoints present. On SPARC, the cas and casx instructions (used heavily in recent C++ libraries) could cause a SEGV if they tried to access a watched page. On x86, instructions that accessed the stack (pushl and movl, for example) would cause a similar segfault if there was a watchpoint on a stack page.

Multithreaded programs

There has been a particularly nasty watchpoint problem for a while when dealing with lots of watchpoints in multithreaded programs. When one thread hits a watchpoint, we have to stop all the other threads. But in the process of stopping, those threads may trip watchpoints of their own, and then try to stop the original thread in turn. The threads end up spinning in the kernel, and the only way out is to reboot the system.

Scalability

In the past, watchpoints were kept in a linked list for each process. This means that every time a program added a watchpoint or accessed a watched page, the kernel would spend time linear in the number of watchpoints finding the right one. This is fine when you only have a handful of watchpoints, but can be a real problem when you have thousands of them. These linked lists have since been replaced with AVL trees. An individual watchpoint still has some cost, but 10,000 watchpoints now have nearly the same per-access impact as 10 watchpoints. This can yield as much as a 100x improvement for large numbers of watchpoints.
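The lookup the kernel performs on every watched-page fault can be sketched as follows. This is not the actual Solaris code; as a stand-in for the AVL tree it uses `bsearch` over a sorted array, which gives the same O(log n) range lookup that replaced the old O(n) list walk:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified watchpoint record: a watched address range. */
typedef struct {
	uintptr_t wp_vaddr;	/* start of watched range  */
	size_t wp_size;		/* length of watched range */
} watch_t;

/*
 * Order watchpoints by address.  A probe key (wp_size == 0) compares
 * equal to any watchpoint whose range contains it, so a successful
 * search means the faulting address is actually watched.
 */
static int
wp_cmp(const void *ka, const void *kb)
{
	const watch_t *a = ka, *b = kb;

	if (a->wp_vaddr + (a->wp_size ? a->wp_size - 1 : 0) < b->wp_vaddr)
		return (-1);
	if (a->wp_vaddr > b->wp_vaddr + (b->wp_size ? b->wp_size - 1 : 0))
		return (1);
	return (0);
}

/*
 * O(log n) lookup: is addr inside any watched range?  The old code
 * walked a per-process linked list, paying O(n) on every fault on a
 * watched page -- on every heap access, with allocator redzones.
 */
static const watch_t *
wp_lookup(const watch_t *tbl, size_t n, uintptr_t addr)
{
	watch_t key = { addr, 0 };

	return (bsearch(&key, tbl, n, sizeof (watch_t), wp_cmp));
}
```

With redzones generating two watchpoints per allocation and a fault on nearly every heap access, this lookup sits on a very hot path, which is why the data structure change matters so much.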

All of the above problems have been fixed in Solaris 10. The end result is that tools like watchmalloc(3malloc) and dbx's memory checking features are actually practical on large programs.


[1] Remember that we have to fault on every access to a page that contains a watchpoint, even if it's not the address we're actually interested in.
