Open source components play a growing role in software development, both commercial and in-house. The traditional role of a developer has evolved from writing most of the code to reusing known and trustworthy components as much as possible. As a result, a growing aspect of software design and development decisions has become the integration of open source and third-party components into increasingly large and complex software.
The question of whether open source software is inherently more secure than commercial (i.e., "closed") software has been ardently debated for a number of years. The purpose of this blog entry is not to definitively settle this argument, though I would argue that software (whether open source or closed source) developed by security-aware developers tends to be inherently more secure. Regardless of this controversy, there are important security implications to the use of open source components.
The wide use of certain components (open source or not) has captured the attention of both security researchers and malicious actors. Indeed, the discovery of a 0-day in a widely-used component offers malicious actors the prospect of "hacking once and exploiting anywhere," or of a large financial gain if the bug is sold on the black market. As a result, a growing number of security vulnerabilities have been found and reported in open source components. The positive impact of increased security research in widely-used components will (hopefully) be the improved security-worthiness of these components (and the relative fulfillment of the "million eyes" theory), and increased awareness, within development organizations, of the security implications of using open source components.
In many instances, the vulnerabilities found in public components have been given cute names: POODLE, Heartbleed, Dirty Cow, Shellshock, Venom, etc. (in no particular order). These names contributed to a sense of urgency (sometimes panic) within many organizations, often to the detriment of a rational analysis of the actual severity of these issues and their relative exploitability in the affected environments.
Less security-sophisticated organizations have been particularly affected by this sense of urgency, and many have attempted to scan their environments to find software containing the vulnerability "du jour." However, it has been Oracle's experience that while many free tools can relatively accurately identify the presence of open source components in IT systems, the majority of these tools have an abysmal track record at accurately identifying the version(s) of those components, much less at determining the exploitability of the issues associated with them. As a result, less security-sophisticated organizations face reports with large numbers of false positives and are unable to make sense of the findings (Oracle support has seen an increase in the submission of such inaccurate reports). From a security assurance perspective, I believe there are three significant and often under-discussed topics related to the use of open source components in complex software development:
How can we assess and enhance security assurance activities in open source projects?
Assessing the security maturity of an open source project is not necessarily easy. There are tools, methodologies, and principles (e.g., the Building Security In Maturity Model (BSIMM), SAFECode) that can be used to assess the relative security maturity of commercial software developers, but their application to open source projects is difficult. For example, how can one determine the amount of security skill available in an open source project, and whether code changes are systematically reviewed by skilled security experts?
Furthermore, should the software industry collectively develop means to coordinate the role of commercial vendors in helping enhance the security posture of the most common open source projects, for the benefit of all vendors and the community? Is it enough to commit that security fixes be shared with the community when an issue is discovered while a component is being used in a commercial offering?
How can we ensure that these components are obtained securely?
A number of organizations (whether developing commercial software or their own systems) are concerned solely with "toxic" licenses when procuring open source components, when they should be equally concerned about bringing in toxic code.
One problem is the downloading and use of obsolete software, which contains known security flaws that have been fixed in more recent releases. This problem can be solved relatively easily by requiring developers to download only the most recent releases from the official project repository.
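As a hypothetical illustration of such a policy, a build pipeline could compare the component versions it is about to pull against the latest releases published by each official project. The component names, versions, and release data below are made up for the sketch; in practice the "latest release" information would be fetched from the official project repositories.

```python
# Hypothetical sketch: reject obsolete components before they enter a
# build. The component names, versions, and release data below are
# illustrative only.

LATEST_RELEASES = {
    "libexample": "2.4.1",
    "widgetlib": "1.9.0",
}

def parse_version(text):
    """Turn a version string like '2.4.1' into a comparable tuple (2, 4, 1)."""
    return tuple(int(part) for part in text.split("."))

def check_obsolete(requested):
    """Return components whose requested version lags the latest release."""
    stale = []
    for name, version in requested.items():
        latest = LATEST_RELEASES.get(name)
        if latest and parse_version(version) < parse_version(latest):
            stale.append((name, version, latest))
    return stale

print(check_obsolete({"libexample": "2.3.0", "widgetlib": "1.9.0"}))
# → [('libexample', '2.3.0', '2.4.1')]
```

Note that a check like this says nothing about exploitability; it only enforces the simpler policy of not bringing known-obsolete releases into a build.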
Many developers prefer pulling compiled binaries instead of compiling the source code themselves (and verifying its authenticity). Developers should be aware of the risk of pulling malicious code: just because a package is labelled "foo" does not mean it actually is "foo"; it may in fact be "foo" plus a nasty backdoor. There have been several publicly-reported security incidents resulting from the downloading of maliciously altered programs.
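A minimal mitigation is to verify every downloaded artifact against a digest published by the project. The sketch below assumes the expected digest was obtained over a trusted channel; the file paths and digests are placeholders, not real project values.

```python
# Minimal sketch of verifying a downloaded artifact against a
# project-published SHA-256 digest. Paths and digests are placeholders.

import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Compute the SHA-256 digest of a file without loading it into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_hex):
    """Refuse to use an artifact whose digest does not match the published one."""
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch for {path}: got {actual}")
    return path
```

A checksum only proves integrity relative to the published digest: if the repository hosting both the binary and the digest is itself compromised, both can be altered together, which is why projects also sign their releases (e.g., with GPG) so the digest can be verified against an out-of-band key.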
How can we provide the necessary security care of a solution that includes open source components throughout its useful life?
Once an organization has decided to use an external component in its solution, it should also consider how it will maintain that solution. The maintenance and patching implications of third-party components are often overlooked. For example, organizations may be faced with hardware limitations in their products. They may have to deprecate hardware products more quickly because a required open source component is no longer supported on a specific platform, or because the technical requirements of subsequent releases of the component exceed the specifications of the hardware. In hardware environments, there is also the obvious question of whether patching mechanisms are available for updating open source components on the platform.
There are also problematic implications when open source components are used in purely-software solutions. Security fixes for open source components are often unpredictable. How does this unpredictability affect the availability of production systems, or customers' requirements for a fixed maintenance schedule?
In conclusion, the questions listed in this blog entry are just a few of those one should consider when developing technology-based products (which seems to describe almost everything these days). These questions are particularly important as open source components represent a large and increasing portion of the technology supply chain, not only for commercial technology vendors, but also for cloud providers. Security assurance policies and practices should take these questions into consideration and highlight the fact that open source, while incredibly useful, is not necessarily "free": it requires specific commitments and due diligence obligations.