Fuzzing around with Nevada

I guess that it is time for another of my pet projects to come to light. For the last seven months or so (on and off), I have been conducting some rudimentary fuzz testing on Solaris Nevada. It started as my winter (break) project with build 42 and has continued through a few other builds, the most recent being build 68.

For those unfamiliar with the concept, the goal of fuzz testing is to provide random input to programs and see how they behave. The results thus far have been pretty interesting. Many programs in Nevada, in fact the vast majority, gracefully handled the input and either exited, printed a usage message or did something else equally benign. That said, a good number of programs failed to cope gracefully with the random input. In these cases, the typical response was a core dump, although a few programs entered an infinite loop, which was quite interesting.

The tests were conducted using code derived from the work published at the University of Wisconsin. In actuality, I only performed one of a handful of tests that they support - stdin fuzz testing. Basically programs are subjected to the equivalent of:

$ program < [file_containing_some_random_input]
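To make the idea concrete, here is a minimal sketch of that kind of harness in Python. This is not the University of Wisconsin tooling, just an illustration of the stdin test above; the `fuzz_stdin` function and its parameters are my own invention for this example.

```python
import random
import subprocess

def fuzz_stdin(program, trials=5, max_len=1024, seed=1):
    """Feed random bytes to a program's stdin and record how it exits.

    Returns a list of (trial, returncode) pairs. A negative returncode
    means the process died on a signal (e.g. -11 for SIGSEGV, i.e. a
    core dump); None means it hit the timeout (a possible infinite loop).
    """
    rng = random.Random(seed)
    results = []
    for i in range(trials):
        # Build a random blob, the equivalent of the file above.
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        try:
            proc = subprocess.run([program], input=data,
                                  capture_output=True, timeout=10)
            results.append((i, proc.returncode))
        except subprocess.TimeoutExpired:
            results.append((i, None))
        except OSError:
            results.append((i, None))  # program missing or not executable
    return results
```

A well-behaved program such as cat should return 0 for every trial; a crashing one shows up with a negative return code.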

I would love to do some of their additional tests as time permits. At any rate, the results are in and to date, a problem has been found with nearly 80 programs. Bug reports have been filed for each and every one and can be tracked using the keyword fuzz at the OpenSolaris Bug Database Search. To see the programs impacted thus far, try this link.

So far, a number of these have been reviewed and accepted and better still several have been already fixed and the changes integrated back into the code base. Even cooler, some of the fixes have been accepted upstream in other open-source projects such as X.org. What a great example of the participation age where the results of a single test in Nevada have helped to improve the quality for every user of that code (regardless of the OS on which that code is run).

Over time, I would love to see more sophisticated tests integrated into the testing process (e.g., command-line argument aware fuzz input testing), but for now this will serve as a start to point us in the right direction.

I would love to know if others have conducted similar tests and how they turned out.

Take care,

Glenn
