Approaches for Discovering Security Vulnerabilities in Software Applications
By Eric P. Maurice-Oracle on Mar 09, 2010
Hello, this is Denis Pilipchuk again.
Hearing about a critical security issue in a product is one of the most feared situations for a product manager. Vulnerabilities, in addition to compromising the security posture of customers using the affected products, impact the bottom line of software vendors as a result of the direct costs associated with the release of patches, as well as the indirect costs generated by a loss of customer confidence.
In a previous blog entry, Darius Wiles explained that a majority of security defects in Oracle software are detected and fixed without ever reaching customers. This is because Oracle's goal is to provide security defect-free products, and the company has invested significantly in various security tools and programs to achieve that goal. Apart from the efforts of the Ethical Hackers from Oracle's Global Product Security (GPS) team, the development teams at Oracle have several additional resources available to them.
Regular code reviews are the front line of defense in preventing vulnerabilities in released software. While regular code reviews may not catch every vulnerability, especially the most complex design issues, they do provide an effective safeguard against such pesky errors as improper checks for boundary conditions, failure to release resources or handle exceptions, and deviations from the product's design specifications (including their security aspects). This activity may be carried out in different ways: by outside consultants or by the development teams themselves. Key factors in the effectiveness of regular code reviews are the technical expertise of the reviewers and their understanding of the type of product they review. In Oracle's experience, it generally works best when product teams are deeply involved in the reviews themselves, because they have the most direct knowledge of the code base and the new features being introduced. Of course, reviewers need to be knowledgeable in secure coding, and their expertise must be maintained through ongoing security training. In addition, as much as possible, case-based training is desirable, so as to expose reviewers to the proper context of their job.
The second line of defense involves the use of various code analyzers. Code analyzers should be used regularly throughout the entire development lifecycle. In fact, the use of these tools should be encouraged as soon as there is buildable code checked into the source repository. The tools should then be used regularly, for example with nightly builds, or separately and less frequently for larger products. Typically, these tools trace inputs (sources) and outputs (sinks) to analyze the control flow and the permutation of "tainted" data through different code paths in the tested software, and check more general code quality issues (like exception handling and hardcoded passwords) along the way. Fixing issues is least expensive when vulnerabilities are discovered at this stage, because doing so only requires filing a bug against a developer and producing a code fix.
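The source-to-sink idea behind these analyzers can be illustrated with a toy sketch. Nothing below is a real analyzer's API: TaintedStr, source, sanitize, and sql_sink are invented names standing in for the concepts of tainted data, a sanitizer, and a security-sensitive sink.

```python
class TaintedStr(str):
    """String subclass marking attacker-controlled ('tainted') data."""
    pass

def source(value):
    # Data arriving from the outside world enters the program tainted.
    return TaintedStr(value)

def sanitize(value):
    # A recognized sanitizer strips the taint marker
    # (str() of a str subclass returns a plain str).
    return str(value)

def sql_sink(query):
    # A security-sensitive sink: tainted data reaching it is a finding.
    if isinstance(query, TaintedStr):
        raise ValueError("tainted data reached SQL sink")
    return "executed: " + query

user_input = source("'; DROP TABLE users; --")
try:
    sql_sink(user_input)                  # unsanitized path -> flagged
except ValueError as err:
    print("finding:", err)
print(sql_sink(sanitize(user_input)))     # sanitized path -> allowed
```

A real static analyzer performs this tracking symbolically over the program's control-flow graph rather than at runtime, but the report it produces amounts to the same statement: a path exists from a source to a sink with no sanitizer in between.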
Choosing the right tool for the job is extremely important. The tool needs to provide appropriate coverage across different languages (like Java and C/C++) and technologies (like Adobe Flash). We have found that tools are not necessarily equally efficient on all languages and platforms: a tool that works great on C/C++ does not necessarily perform appropriately on Java, even if it claims to support it.
Unfortunately, not all classes of security vulnerabilities (and certainly not all instances) can be found using static analysis tools. While these tools can usually discover certain classes of security vulnerabilities, such as buffer overflows, cross-site scripting, and SQL injection, quite effectively, they are not helpful for other classes of issues, especially those related to weak design choices, such as key and credential storage. Furthermore, all static code analyzers by design tend to be verbose and suffer from a significant number of false positives. "Training" static analysis tools to reduce false positives is usually possible, but this requires significant time and investment.
Another issue with static analysis tools is related to "false negatives", i.e. instances where the tools report the code to be secure when it isn't. Recent research, including the SATE project by NIST, reported that static analysis tools from all of the participating major vendors generated a significant level (up to 50%) of false negatives. While the exact percentage of false negatives may be the subject of endless controversy, it is clear to me that, as a matter of good security development practice, static analysis tools should be supplemented by dynamic testing in order to reduce the number of vulnerabilities in complex software.
Dynamic testing tools, such as blackbox and graybox testing applications and various types of fuzzers, comprise the third line of defense. These tools are executed against a running product instance (or a group of products) and, as a result, can only be utilized toward the end of the product release cycle, when QA testing begins. All of these tools are based on a similar principle: they mimic the behavior of a rogue client and use a variety of pre-built known attack patterns to hit the server's exposed network interfaces with one or more malicious exploits, checking return values (or their absence) to find out whether the attack was successful. Note that proper care should be exercised when using these tools because, among other things, they can trigger alerts from the IT security staff: from an Intrusion Detection System perspective, the use of dynamic testing tools can look like a real attack.
Graybox tools have the added advantage of knowing the "internals" of the applications being tested: they can instrument the application during the build to add realtime monitoring of its behavior during a simulated attack. Blackbox and graybox testing for assessing web applications usually takes place over the HTTP/HTTPS protocol. There exists an overlap between the classes of security issues that can be checked by Web application dynamic assessment tools and those that can be discovered by static tools. However, dynamic tools usually report additional vulnerabilities, not generally caught by static analyzers, because in practice it is impossible to analyze all execution paths within the application for every possible permutation of input data. In addition, unlike static code analyzers, which often report issues that are only theoretically exploitable, dynamic tools tend to produce more precise and practical reports. Although not 100% false positive-free, dynamic tools usually provide clearer information, showing the request path and parameters that lead to the vulnerabilities.
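The blackbox approach described above can be sketched minimally: replay canned attack payloads against an endpoint and check whether each payload comes back reflected in the response. The attack patterns, parameter name, and echoing endpoint below are invented for illustration, and the transport is injected as a function so the sketch stays self-contained rather than making real HTTP requests.

```python
# Illustrative attack patterns only - real scanners ship thousands.
ATTACK_PATTERNS = {
    "xss": "<script>alert(1)</script>",
    "sqli": "' OR '1'='1",
    "traversal": "../../etc/passwd",
}

def scan(send_request, param="q"):
    """send_request(param, payload) -> response body as a str.

    Returns the names of the patterns whose payload came back
    verbatim - the classic signal of a reflected vulnerability.
    """
    findings = []
    for name, payload in ATTACK_PATTERNS.items():
        body = send_request(param, payload)
        if payload in body:
            findings.append(name)
    return findings

# A deliberately vulnerable fake endpoint that echoes its input.
def vulnerable_endpoint(param, payload):
    return "<html>You searched for: %s</html>" % payload

print("findings:", scan(vulnerable_endpoint))
```

Note how the report is naturally actionable: each finding carries the exact parameter and payload that triggered it, which is the "precise and practical" quality of dynamic tool output mentioned above.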
Fuzzers constitute a special subcategory of dynamic tools and are typically used for protocol-level verification. Fuzzers are designed to break a server by submitting permutations of a valid message to see whether one of them causes unexpected or undesirable behavior, such as a denial of service. Typically, fuzzers operate at a lower (protocol) level than blackbox Web application testing tools, which work at the application level. Specialized fuzzers are available for pretty much any well-known protocol (such as HTTP, FTP, SOAP, etc.), as are custom frameworks for developing new ones. Protocol-specific fuzzers generally possess a great deal of intelligence about the protocol they are designed to test, including its message structure, checksums, and features. This results in better, more intelligent testing of the targeted protocol, whereas lower-level fuzzers (for example, a PCAP fuzzer) do not have that knowledge and often try to change data blindly (or depend on the tester to define the anomalies).
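The core fuzzing loop can be sketched as follows. The length-prefixed "protocol" and its deliberately buggy parser are invented for the example, but the pattern — start from one valid message, submit random permutations, and watch for crashes — is the one described above:

```python
import random

def mutate(message, rng):
    """Return a random byte-level permutation of a valid message."""
    data = bytearray(message)
    choice = rng.randrange(3)
    pos = rng.randrange(len(data))
    if choice == 0:                       # flip one byte
        data[pos] ^= rng.randrange(1, 256)
    elif choice == 1:                     # truncate the message
        del data[pos:]
    else:                                 # duplicate a trailing chunk
        data += data[pos:]
    return bytes(data)

def toy_parser(message):
    """A buggy length-prefixed parser: first byte = payload length."""
    length = message[0]
    payload = message[1:1 + length]
    # BUG: trusts the declared length instead of the actual size.
    assert len(payload) == length, "short read - declared length lies"
    return payload

def fuzz(valid_message, rounds=200, seed=1):
    """Count mutated inputs that crash the parser (a fuzzing 'finding')."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(rounds):
        try:
            toy_parser(mutate(valid_message, rng))
        except Exception:
            crashes += 1
    return crashes

valid = bytes([5]) + b"hello"
print("crashing inputs found:", fuzz(valid))
```

A real protocol-specific fuzzer would replace the blind mutate() with mutations that respect message structure and recompute checksums, which is precisely the added intelligence the paragraph above credits them with.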
For a number of years, Oracle has been investing a lot of time and money in deploying various tools to catch security problems before software is released to customers. Furthermore, most recently, we have observed that a growing number of customers' security teams have started running their own assessments, utilizing many of the same tools Oracle has been using. I feel that the growing adoption of these tools by customers will put additional pressure on those vendors who may not have yet adopted robust secure development practices. The proper use of a combination of tools, people, and processes demonstrates due diligence in establishing and running an efficient security assurance program, and ultimately a commitment to the security posture of customers. The proper use of security tools is an integral part of this commitment.