Friday Jan 04, 2008

UPDATED: Solaris - Now With More Fuzz

Every six months or so, I try to do a run of my fuzz tests against the Solaris OS. The first test was conducted a year ago with build 42, followed by a test during our summer break on build 68 of Nevada. It should come as no shock then that I conducted another test during the winter break on build 80.

The tools and methodology are the same (although there are still some kinks to be worked out to make it fully automated), but for those who have not read my earlier post, I will summarize. The tests were conducted on a fresh installation of Nevada build 80 built with the SUNWCXall (Entire + OEM) installation cluster. A sparse-root, non-global zone (called "fuzz") was created for the tests and the software was loaded into the zone. Next, the names of all of the ELF binaries were collected using the make-exec-list script run from within the non-global zone. Then, the make-fuzz-tests script was run to generate the 36 different fuzz files to be used as input for each binary tested. Lastly, the test was kicked off using the exec-fuzz-tests script. The script pretty much runs unattended except when I need to kill off runaway processes. I still need to add some code to clean up anything left running at the end of each test so that you do not end up with tons of extra processes consuming memory.
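The actual scripts are not reproduced here, but the core loop of the harness looks roughly like the following sketch. Everything in it other than the script names mentioned above is illustrative: `binaries.list` stands in for the output of make-exec-list, two urandom files stand in for the 36 generated fuzz files, and `timeout(1)` stands in for the runaway-process cleanup I still need to add.

```shell
#!/bin/sh
# Rough sketch of the exec-fuzz-tests loop -- not the actual script.

mkdir -p fuzz results

# Stand-ins for the 36 fuzz files produced by make-fuzz-tests.
for n in 1 2; do
    head -c 1024 /dev/urandom > "fuzz/$n.input"
done

# Stand-in for the ELF binary list produced by make-exec-list.
printf '%s\n' /bin/cat /bin/true > binaries.list

while read -r bin; do
    for f in fuzz/*.input; do
        # timeout(1) reaps runaways; an exit status above 128 means the
        # program died on a signal (e.g., SIGSEGV) and is worth filing.
        timeout 5 "$bin" < "$f" > /dev/null 2>&1
        status=$?
        if [ "$status" -gt 128 ]; then
            echo "$(basename "$bin") $(basename "$f") signal $((status - 128))" \
                >> results/crashes.txt
        fi
    done
done < binaries.list
```

Anything recorded in results/crashes.txt is then a candidate for a bug report.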

At any rate, the test run completed and I have posted my results in Bugster. The bugs are also available in the OpenSolaris Bug Database Search using the keyword fuzz. The programs impacted can be viewed using this query.

While I tend to do this kind of work for fun as a holiday distraction, it does have real benefit. Programs that fail during a fuzz test (usually by dumping core, although a runaway or two have also been found) do so because unvalidated input leads to a buffer overflow or an arithmetic exception of some kind. Input validation is not to be taken lightly and should be performed by every program and service. In fact, on the CERT Top 10 Secure Coding Practices list, "validate input" is item #1, and with good reason.
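As a trivial illustration of the principle (my own sketch, not code from any of the affected programs), a script that expects a numeric argument should reject anything else before ever using it in arithmetic:

```shell
#!/bin/sh
# Validate that an argument is a non-negative integer before using it --
# the sort of check whose absence shows up as crashes in fuzz runs.
is_count() {
    case "$1" in
        ''|*[!0-9]*) return 1 ;;   # empty, or contains a non-digit: reject
        *)           return 0 ;;
    esac
}

if is_count "$1"; then
    echo "ok: $1"
else
    echo "usage: $0 <count>" >&2
fi
```

The same idea applies whether the input arrives as an argument, on stdin, or over the network: check it against what you expect before acting on it.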

Take care,



Monday Jul 23, 2007

Fuzzing around with Nevada

I guess that it is time for another of my pet projects to come to light. For the last seven months or so (on and off), I have been conducting some rudimentary fuzz testing on Solaris Nevada. It started off as my winter (break) project with build 42 and has continued through a few other builds, with my most recent being build 68.

For those unfamiliar with the concept, the goal of fuzz testing is to provide random input to programs and see how they behave. The results thus far have been pretty interesting. Many, in fact the vast majority, of programs in Nevada gracefully handled the input and either exited, printed a usage message, or did something else equally benign. That said, a good number of programs failed to cope gracefully with the random input. In these cases, the typical response was a core dump, although a few programs were driven into an infinite loop, which was quite interesting.

The tests were conducted using code derived from the work published at the University of Wisconsin. In actuality, I only performed one of a handful of tests that they support - stdin fuzz testing. Basically programs are subjected to the equivalent of:

$ program < [file_containing_some_random_input]
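Concretely, one round of such a test can be reproduced by hand like this (the file name, size, and choice of sort as the target are all illustrative):

```shell
# Generate 4 KB of raw random bytes and feed them to a program on stdin.
head -c 4096 /dev/urandom > random.input
sort < random.input > /dev/null 2>&1
echo "sort exited with status $?"
```

A graceful program exits with a normal status (or a usage error); a core dump or a hang on input like this is exactly what the tests are looking for.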

I would love to do some of their additional tests as time permits. At any rate, the results are in: to date, a problem has been found with nearly 80 programs. Bug reports have been filed for each and every one, and they can be tracked using the keyword fuzz at the OpenSolaris Bug Database Search. To see the programs impacted thus far, try this link.

So far, a number of these have been reviewed and accepted, and better still, several have already been fixed and the changes integrated back into the code base. Even cooler, some of the fixes have been accepted upstream in other open-source projects. What a great example of the participation age, where the results of a single test in Nevada have helped to improve the quality of that code for every user (regardless of the OS on which that code is run).

Over time, I would love to see more sophisticated tests integrated into the testing process (e.g., command-line argument aware fuzz input testing), but for now this will serve as a start to point us in the right direction.

I would love to know if others have conducted similar tests and how they turned out.

Take care,



