Corporate Security Blog

Recent Posts

Critical Patch Updates

Security Alert CVE-2019-2725 Released

Oracle has just released Security Alert CVE-2019-2725. This Security Alert was released in response to a recently disclosed vulnerability affecting Oracle WebLogic Server. The vulnerability affects a number of versions of Oracle WebLogic Server and has received a CVSS Base Score of 9.8. WebLogic Server customers should refer to the Security Alert Advisory for information on affected versions and how to obtain the required patches.

Please note that vulnerability CVE-2019-2725 has been associated in press reports with vulnerabilities CVE-2018-2628, CVE-2018-2893, and CVE-2017-10271. These vulnerabilities were addressed in patches released in previous Critical Patch Update releases. Due to the severity of this vulnerability, Oracle recommends that this Security Alert be applied as soon as possible.

For more information:
The Security Alert advisory is located at https://www.oracle.com/technetwork/security-advisory/alert-cve-2019-2725-5466295.html
The October 2017 Critical Patch Update advisory is located at https://www.oracle.com/technetwork/topics/security/cpuoct2017-3236626.html
The April 2018 Critical Patch Update advisory is located at https://www.oracle.com/technetwork/security-advisory/cpuapr2018-3678067.html
The July 2018 Critical Patch Update advisory is located at https://www.oracle.com/technetwork/security-advisory/cpujul2018-4258247.html
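As a rough triage aid while scheduling the patch: public reports tied CVE-2019-2725 to WebLogic's async response component, so one signal worth checking is whether that endpoint is deployed at all. The sketch below is an illustration under that assumption, not Oracle's guidance; the endpoint path comes from public reports, applying the Security Alert patch is the actual remediation, and you should only probe hosts you are authorized to test.

```python
# Illustrative triage sketch for CVE-2019-2725 -- NOT an official tool.
# Assumption: public reports associated this CVE with the async response
# component reachable at the path below; a 2xx response suggests the
# component is deployed and the host deserves urgent patch review.
from urllib.request import urlopen
from urllib.error import HTTPError

ASYNC_PATH = "/_async/AsyncResponseService"  # path cited in public reports

def async_component_deployed(status: int) -> bool:
    """Interpret the HTTP status of a GET on ASYNC_PATH."""
    return 200 <= status < 300

def probe(base_url: str, timeout: float = 5.0) -> bool:
    """Best-effort probe of a host you are authorized to test."""
    try:
        with urlopen(base_url.rstrip("/") + ASYNC_PATH, timeout=timeout) as resp:
            return async_component_deployed(resp.status)
    except HTTPError as err:   # 4xx/5xx still carries a status code
        return async_component_deployed(err.code)
    except OSError:            # unreachable host: no signal either way
        return False
```

A non-2xx answer does not prove a server is patched; this only flags hosts where the reportedly affected component is exposed.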


Industry Insights

Oracle Linux certified under Common Criteria and FIPS 140-2

Oracle Linux 7 has just received both a Common Criteria (CC) Certification, performed against the National Information Assurance Partnership (NIAP) General Purpose Operating System Protection Profile (OSPP) v4.1, and a FIPS 140-2 validation of its cryptographic modules. Oracle Linux is currently one of only two operating systems – and the only Linux distribution – on the NIAP Product Compliant List. U.S. Federal procurement policy requires IT products sold to the Department of Defense (DoD) to be on this list; therefore, Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system that also includes FIPS 140-2 validated cryptographic modules, by making Oracle Linux 7 the platform for their cloud services solution.

Common Criteria Certification for Oracle Linux 7

The National Information Assurance Partnership (NIAP) is “responsible for U.S. implementation of the Common Criteria, including management of the NIAP Common Criteria Evaluation and Validation Scheme (CCEVS) validation body.” (See About NIAP at https://www.niap-ccevs.org/) The Operating System Protection Profile (OSPP) series are the only NIAP-approved Protection Profiles for operating systems. “A Protection Profile is an implementation-independent set of security requirements and test activities for a particular technology that enables achievable, repeatable, and testable (CC) evaluations.” They are intended to “accurately describe the security functionality of the systems being certified in terms of [CC] and to define functional and assurance requirements for such products.” In other words, the OSPP enables organizations to make an accurate comparison of operating system security functions. (For both quotations, see NIAP Frequently Asked Questions (FAQ) at https://www.niap-ccevs.org/Ref/FAQ.cfm) In addition, products certified against these Protection Profiles can also help you meet certain U.S. government procurement rules.
As set forth in the Committee on National Security Systems Policy (CNSSP) #11, National Policy Governing the Acquisition of Information Assurance (IA) and IA-Enabled Information Technology Products (published in June 2013), “All [commercial off-the-shelf] COTS IA and IA-enabled IT products acquired for use to protect information on NSS shall comply with the requirements of the NIAP program in accordance with NSA-approved processes.”

Oracle Linux is now the only Linux distribution on the NIAP Product Compliant List, and one of only two operating systems on the list. You may recall that Linux distributions (including Oracle Linux) have previously completed Common Criteria evaluations, mostly against a German standard protection profile. Those older evaluations are now of limited value because they are only officially recognized in Germany and within the European SOG-IS agreement. Furthermore, the revised Common Criteria Recognition Arrangement (CCRA) announcement on the CCRA News Page from September 8, 2014, states that “After September 8th 2017, mutually recognized certificates will either require protection profile-based evaluations or claim conformance to evaluation assurance levels 1 through 2 in accordance with the new CCRA.” That means evaluations conducted within the CCRA acceptance rules, such as the Oracle Linux 7.3 evaluation, are recognized in the 30 countries that have signed the CCRA. As a result, Oracle Linux 7.3 is the only Linux distribution that meets current U.S. procurement rules. The exact status of operating system certifications under the NIAP OSPP therefore has significant implications for the use of cloud services by U.S. government agencies.
The Federal Risk and Authorization Management Program (FedRAMP) website states that it is a “government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.” For both FedRAMP Moderate and High, the SA-4 Guidance states, “The use of Common Criteria (ISO/IEC 15408) evaluated products is strongly preferred.”

FIPS 140-2 Level 1 Validation for Oracle Linux 6 and 7

In addition to the Common Criteria Certification, Oracle Linux cryptographic modules are now FIPS 140-2 validated. FIPS 140-2 is a prerequisite for NIAP Common Criteria evaluations: “All cryptography in the TOE for which NIST provides validation testing of FIPS-approved and NIST-recommended cryptographic algorithms and their individual components must be NIST validated (CAVP and/or CMVP). At a minimum an appropriate NIST CAVP certificate is required before a NIAP CC Certificate will be awarded.” (See NIAP Policy Letter #5, June 25, 2018, at https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/policy-ltr-5-update3.pdf) FIPS 140 is also a mandatory standard for all cryptographic modules used by the U.S. government: “This standard is applicable to all Federal agencies that use cryptographic-based security systems to protect sensitive information in computer and telecommunication systems (including voice systems) as defined in Section 5131 of the Information Technology Management Reform Act of 1996, Public Law 104-106.” (See Cryptographic Module Validation Program, What Is The Applicability Of CMVP To The US Government?, at https://csrc.nist.gov/projects/cryptographic-module-validation-program) Finally, FIPS is required for any cryptography that is part of a FedRAMP-certified cloud service: “For data flows crossing the authorization boundary or anywhere else encryption is required, FIPS 140 compliant/validated cryptography must be employed. FIPS 140 compliant/validated products will have certificate numbers. These certificate numbers will be required to be identified in the SSP as a demonstration of this capability. JAB TRs will not authorize a cloud service that does not have this capability.” (See FedRAMP Tips & Cues Compilation, January 2018, at https://www.fedramp.gov/assets/resources/documents/FedRAMP_Tips_and_Cues.pdf)

Oracle includes FIPS 140-2 Level 1 validated cryptography in Oracle Linux 6 and Oracle Linux 7 on x86-64 systems with the Unbreakable Enterprise Kernel and the Red Hat Compatible Kernel. The platforms used for FIPS 140 validation testing include Oracle Server X6-2 and Oracle Server X7-2, running Oracle Linux 6.9 and 7.3. Oracle “vendor affirms” that the FIPS validation is maintained on other x86-64 equivalent hardware that has been qualified in the Oracle Linux Hardware Certification List (HCL), on the corresponding Oracle Linux releases. Oracle Linux cryptographic modules enable FIPS 140-compliant operations for key use cases such as data protection and integrity, remote administration (SSH, HTTPS/TLS, SNMP, and IPsec), cryptographic key generation, and key/certificate management.

Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system (that also includes FIPS 140-2 validated cryptographic modules) by making Oracle Linux 7 the bedrock of their cloud services solution. Oracle Linux is engineered for open cloud infrastructure. It delivers leading performance, scalability, reliability, and security for enterprise SaaS and PaaS workloads as well as traditional enterprise applications. Oracle Linux Support offers access to award-winning Oracle support resources and Linux support specialists, zero-downtime updates using Ksplice, additional management tools such as Oracle Enterprise Manager, and lifetime support, all at a low cost. For a matrix of Oracle security evaluations currently in progress as well as those completed, please refer to the Oracle Security Evaluations.
Visit Oracle Linux Security to learn how Oracle Linux can help keep your systems secure and improve the speed and stability of your operations.  
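For readers who want to confirm the mode on a running system: Linux kernels report FIPS mode through /proc/sys/crypto/fips_enabled. The short sketch below is a generic illustration, not an Oracle-provided tool; treating a missing flag file as “disabled” is an assumption made here for simplicity.

```python
# Minimal sketch: check whether a Linux kernel is running in FIPS mode.
# /proc/sys/crypto/fips_enabled is the standard kernel interface; a value
# of "1" means FIPS mode is enabled.
from pathlib import Path

FIPS_FLAG = Path("/proc/sys/crypto/fips_enabled")

def parse_fips_flag(text: str) -> bool:
    """Interpret the flag file contents: '1' means FIPS mode is on."""
    return text.strip() == "1"

def fips_mode_enabled(flag_path: Path = FIPS_FLAG) -> bool:
    """Return True if the kernel reports FIPS mode.

    Assumption for this sketch: a missing flag file (e.g., an older
    kernel) is treated as FIPS mode being disabled.
    """
    try:
        return parse_fips_flag(flag_path.read_text())
    except OSError:
        return False

if __name__ == "__main__":
    print("FIPS mode:", "enabled" if fips_mode_enabled() else "disabled")
```

Note that the kernel flag only indicates the mode is switched on; it does not by itself attest that the installed modules are the FIPS 140-2 validated versions.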


Industry Insights

Welcome to Oracle’s corporate security blog

Hi, all! My name is Mary Ann Davidson and I am the Chief Security Officer for Oracle. I’m the first contributor to our relaunched corporate security blog. Having many different security voices contribute to this blog will help our customers understand the breadth of security at Oracle - across multiple organizations and multiple lines of business. Security at Oracle is way too big (and too important) to be constrained to one person or even one organization. This blog entry will describe how security is organized at Oracle and what my organization does specifically. When I joined Oracle (and before I started working in security), we were just beginning to build “Oracle Financials” – at the time, general ledger, purchasing and payables applications - which have long since expanded to a huge portfolio of business applications. Since then, we’ve continued to grow: more business applications, middleware, operating systems, engineered systems, industry solutions (e.g., construction, retail, hospitality, financial services) and of course, many clouds (infrastructure as a service (IaaS), platform as a service (PaaS) and Software as a Service (SaaS) - business applications we run for our customers). Plus databases, of course! The amount of diversity we have in terms of our product and service portfolio has a significant impact from a security perspective. The first one is pretty obvious: nobody can be an expert on absolutely everything in security (and even if one person were an expert on everything, there isn’t enough time for one person to be responsible for securing absolutely everything, everywhere in Oracle). The second is also obvious: security must be a cultural value, because you can never hire enough security experts to look over the shoulders of everyone else to make sure they are doing whatever they do securely. 
As a result, Oracle has adopted a decentralized security model, albeit with corporate security oversight: “trust, but verify.” With regard to our core business, security expertise remains in development. By that I mean that development organizations are responsible for the security-worthiness of what they design and build, and in particular, security has to be “built in, not bolted on” since security doesn’t work well (if at all) as an afterthought. (As I used to say when I worked in construction management in the US Navy, “You can’t add rebar after the concrete has set.”) Security oversight falls under the following main groups at Oracle: Global Physical Security (facility security, investigations, executive protection, etc.), Global Information Security (the “what” we do as a company in terms of corporate security policies, including compliance, forensic investigations, etc.), Corporate Security Architecture (review and approval prior to systems going live to ensure they are securely architected), and Global Product Security, which is my team. I mentioned I am the CSO for Oracle but really, that would be better categorized as “Chief Security Assurance Officer.” What does assurance at Oracle encompass? In essence, that everything we build – hardware and software products, services, and consulting engagements – has security built in and maintains a lifecycle of security. In order to do that, my team has developed an extensive program – from “what” we do to “how” we do it – including verifying that “we did what we are supposed to do.” The “what” includes secure coding and secure development standards, covering not only “don’t do X,” but “here’s how to do Y.” We train many people in development organizations on these standards (e.g., not only developers but quality assurance (QA) people and doc writers, some of whom write code samples that we obviously want to reflect secure coding practice). We have more extensive, tailored training tracks, as well. 
Our secure development requirements also include architectural risk analysis (ARA), since people building systems (or even features within systems) need to think about the type of threats the system will be subjected to and design with those threats in mind.  These programs and activities are collectively known as Oracle Software Security Assurance (OSSA). One of the ways we decentralize security is by identifying and appointing people in development and consulting organizations to be our “security boots on the ground.” Specifically, we have around 60 senior Security Leads and over 1,700 Security Points Of Contact (SPOCs) that implement Oracle Software Security Assurance programs across a multiplicity of development organizations and consulting. Development teams are required to use various security analysis and testing tools, including both static and dynamic analysis, to triage the security bugs found and to attempt to fix the worst issues the quickest. We use a lot of different tools to do this, since no one tool works equally well for all types of code. We also build tools in-house to help us find security problems (e.g., a static analysis tool called Parfait, built by Oracle Labs, which we optimize for use within Oracle). Other tools are developed by the ethical hacking team (EHT), e.g., the wonderfully-named SQL*Splat, which fuzzes PL/SQL code. The EHT’s job is to attempt to break our products and services before “real” bad guys do, and in particular to capture “larger lessons learned” from the results of the EHT’s work, so we can share those observations (e.g., via a new coding standard or an automated tool) across multiple teams in development. I’m also pleased to note that the EHT’s skills are so popular that a number of development groups in Oracle have stood up their own EHTs. 
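The fuzzing idea mentioned above can be illustrated with a toy mutation fuzzer. To be clear, this is not SQL*Splat or any Oracle tool; it is a generic sketch with an invented, deliberately fragile target: take a seed input, flip random bytes, and record any mutant that makes the target raise an exception.

```python
# Toy illustration of mutation-based fuzzing (generic sketch, not an
# Oracle tool): mutate a seed input and collect inputs that crash the
# target function.
import random

def mutate(seed: bytes, rng: random.Random, flips: int = 3) -> bytes:
    """Return a copy of `seed` with up to `flips` random byte substitutions."""
    data = bytearray(seed)
    for _ in range(flips):
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 100, rng_seed: int = 0):
    """Run `target` on mutated inputs; return the inputs that raised."""
    rng = random.Random(rng_seed)  # seeded for reproducible runs
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes

if __name__ == "__main__":
    # A deliberately fragile "parser" used as the fuzzing target: it
    # raises on any non-ASCII byte a mutation introduces.
    def parse(data: bytes):
        data.decode("ascii")

    found = fuzz(parse, b"SELECT 1 FROM dual")
    print(f"{len(found)} crashing inputs found")
```

Real fuzzers add coverage feedback, input minimization, and grammar awareness, but the core loop (mutate, execute, record failures) is the same.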
My team also includes people who manage security vulnerabilities: the SecAlert team, which manages the release of our quarterly Critical Patch Updates (CPUs) and the Security Alert program, as well as engaging with the security researcher community. Lastly, we have a team of security evaluators who take selected products and services through international Common Criteria (ISO-15408) and U.S. Federal Information Processing Standard (FIPS) 140 certifications: another way we “trust, but verify.” Security assurance is not only increasingly important – think of the bazillions of Internet of Things devices as people insist on implanting sensors in absolutely everything – but increasingly asked about by customers who want to know “how did you build – and manage – this product or service?” That is another reason we make sure we can measure what teams across the company are doing or not doing in assurance and help uplift those who need to do better. In the future, we will be publishing more blog entries to discuss the respective roles of security oversight teams, as well as the security work of operational and development teams. “Many voices” will illustrate the breadth and depth of security at Oracle, and how seriously we take it. On a personal note, I look forward to reading about the great work my many valued colleagues at Oracle are doing to continue to make security rock solid, and a core cultural value.


Security Updates

Intel Processor L1TF vulnerabilities: CVE-2018-3615, CVE-2018-3620, CVE-2018-3646

Today, Intel disclosed a new set of speculative execution side-channel processor vulnerabilities affecting their processors. These L1 Terminal Fault (L1TF) vulnerabilities affect a number of Intel processors, and they have received three CVE identifiers:

CVE-2018-3615 impacts Intel Software Guard Extensions (SGX) and has a CVSS Base Score of 7.9.
CVE-2018-3620 impacts operating systems and System Management Mode (SMM) running on Intel processors and has a CVSS Base Score of 7.1.
CVE-2018-3646 impacts virtualization software and Virtual Machine Monitors (VMM) running on Intel processors and has a CVSS Base Score of 7.1.

These vulnerabilities derive from a flaw in Intel processors, in which operations performed by a processor while using speculative execution can result in a compromise of the confidentiality of data between threads executing on a physical CPU core. As with other variants of speculative execution side-channel issues (i.e., Spectre and Meltdown), successful exploitation of the L1TF vulnerabilities requires the attacker to have the ability to run malicious code on the targeted systems. Therefore, the L1TF vulnerabilities are not directly exploitable against servers which do not allow the execution of untrusted code. While Oracle has not yet received reports of successful exploitation of this speculative execution side-channel issue “in the wild,” Oracle has worked with Intel and other industry partners to develop technical mitigations against these issues.

The technical steps Intel recommends to mitigate the L1TF vulnerabilities on affected systems include:

Ensuring that affected Intel processors are running the latest Intel processor microcode. Intel reports that the microcode update it released for the Spectre 3a (CVE-2018-3640) and Spectre 4 (CVE-2018-3639) vulnerabilities also contains the microcode instructions which can be used to mitigate the L1TF vulnerabilities. However, updated microcode by itself is not sufficient to protect against L1TF.

Applying the necessary OS and virtualization software patches to affected systems. To be effective, OS patches require the presence of the updated Intel processor microcode; corresponding OS and virtualization software updates are in turn required to mitigate the L1TF vulnerabilities present in Intel processors.

Disabling Intel Hyper-Threading technology in some situations. Disabling HT alone is not sufficient for mitigating the L1TF vulnerabilities, and it will result in significant performance degradation.

In response to the various L1TF Intel processor vulnerabilities:

Oracle Hardware

Oracle recommends that administrators of x86-based systems carefully assess the L1TF threat for their systems and implement the appropriate security mitigations. Oracle will provide specific guidance for Oracle Engineered Systems. Oracle has determined that Oracle SPARC servers are not affected by the L1TF vulnerabilities. Oracle has also determined that Oracle Intel x86 servers are not impacted by vulnerability CVE-2018-3615 because the processors in use with these systems do not make use of Intel Software Guard Extensions (SGX).

Oracle Operating Systems (Linux and Solaris) and Virtualization

Oracle has released security patches for Oracle Linux 7, Oracle Linux 6, and Oracle VM Server for x86 products. In addition to OS patches, customers should run the current version of the Intel microcode to mitigate these issues. Oracle Linux customers can take advantage of Oracle Ksplice to apply these updates without needing to reboot their systems. Oracle has determined that Oracle Solaris on x86 is not affected by vulnerabilities CVE-2018-3615 and CVE-2018-3620 regardless of the underlying Intel processor on these systems. It is, however, affected by vulnerability CVE-2018-3646 when using Kernel Zones; the necessary patches will be provided at a later date.
Oracle Solaris on SPARC is not affected by the L1TF vulnerabilities.

Oracle Cloud

The Oracle Cloud Security and DevOps teams continue to work in collaboration with our industry partners on implementing the necessary mitigations to protect customer instances and data across all Oracle Cloud offerings: Oracle Cloud (IaaS, PaaS, SaaS), Oracle NetSuite, Oracle GBU Cloud Services, Oracle Data Cloud, and Oracle Managed Cloud Services. Oracle's first priority is to mitigate the risk of tenant-to-tenant attacks. Oracle will notify and coordinate with the affected customers for any required maintenance activities as additional mitigating controls continue to be implemented.

Oracle has determined that a number of Oracle's cloud services are not affected by the L1TF vulnerabilities. They include the Autonomous Data Warehouse service, which provides a fully managed database optimized for running data warehouse workloads, and the Oracle Autonomous Transaction Processing service, which provides a fully managed database service optimized for running online transaction processing and mixed database workloads. No further action is required by customers of these services, as both were found to require no additional mitigating controls based on service design and are not affected by the L1TF vulnerabilities (CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646).

Bare metal instances in Oracle Cloud Infrastructure (OCI) Compute offer full control of a physical server and require no additional Oracle code to run. By design, the bare metal instances are isolated from other customer instances on the OCI network, whether they be virtual machines or bare metal. However, for customers running their own virtualization stack on bare metal instances, the L1TF vulnerability could allow a virtual machine to access privileged information from the underlying hypervisor or other VMs on the same bare metal instance. These customers should review the Intel recommendations about vulnerabilities CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646 and make changes to their configurations as they deem appropriate.

Note that many industry experts anticipate that new techniques leveraging these processor flaws will continue to be disclosed for the foreseeable future. Future speculative side-channel processor vulnerabilities are likely to continue to primarily impact operating systems and virtualization platforms, as addressing them will likely require both software and microcode updates. Oracle therefore recommends that customers remain on current security release levels, including firmware and applicable microcode updates (delivered as firmware or OS patches), as well as software upgrades.

For more information:

The information in this blog entry is also published as MOS Note 2434830.1: “Information about the L1TF Intel processor vulnerabilities (CVE-2018-3615, CVE-2018-3620, CVE-2018-3646)”
Oracle Linux customers can refer to the bulletins located at https://linux.oracle.com/cve/CVE-2018-3620.html and https://linux.oracle.com/cve/CVE-2018-3646.html
Solaris customers should refer to MOS Note 2434208.1: “L1 Terminal Fault (CVE-2018-3615, CVE-2018-3620, & CVE-2018-3646) Vulnerabilities” and MOS Note 2434206.1: “Disabling x86 Hyperthreading in Oracle Solaris”
Oracle x86 hardware customers should refer to MOS Note 2434171.1: “L1 Terminal Fault (CVE-2018-3620, CVE-2018-3646) Vulnerabilities on Oracle x86 Servers”
For information about the availability of Intel microcode for Oracle hardware, see MOS Note 2406316.1: “CVE-2018-3640 (Spectre v3a), CVE-2018-3639 (Spectre v4) Vulnerabilities: Intel Processor Microcode Availability”
The “Oracle Cloud Security Response to Intel L1TF Vulnerabilities” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_response.htm
The “Oracle Cloud Infrastructure Customer Advisory for L1TF Impact on the Compute Service” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_computeimpact.htm
The “Oracle Cloud Infrastructure Customer Advisory for L1TF Impact on the Database Service” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_databaseimpact.htm
The document “Protecting your Compute Instance Against the L1TF Vulnerability” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_protectinginstance.htm
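On patched Linux kernels, the L1TF mitigation status is self-reported through sysfs, which offers a quick way to check whether the microcode, kernel, and Hyper-Threading steps discussed above have taken effect on a given host. The sketch below is a generic illustration, not an Oracle or Intel tool; the coarse classification buckets are assumptions made for the example.

```python
# Minimal sketch: read the Linux kernel's own report of L1TF mitigation
# status from the standard sysfs entry on patched kernels.
from pathlib import Path

L1TF_STATUS = Path("/sys/devices/system/cpu/vulnerabilities/l1tf")

def classify_l1tf(report: str) -> str:
    """Map a kernel L1TF report string to a coarse status bucket.

    The buckets here are illustrative assumptions, e.g. a report such as
    "Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT
    vulnerable" is flagged because SMT is still enabled.
    """
    report = report.strip()
    if report.startswith("Not affected"):
        return "not-affected"
    if report.startswith("Mitigation:"):
        return "smt-vulnerable" if "SMT vulnerable" in report else "mitigated"
    return "vulnerable"

def l1tf_status(path: Path = L1TF_STATUS) -> str:
    try:
        return classify_l1tf(path.read_text())
    except OSError:
        return "unknown"  # older kernel without the sysfs entry

if __name__ == "__main__":
    print("L1TF:", l1tf_status())
```

An "unknown" result (no sysfs entry at all) is itself a useful signal: the kernel predates the L1TF patches and needs updating.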


Critical Patch Updates

Security Alert CVE-2018-3110 Released

Oracle just released Security Alert CVE-2018-3110. This vulnerability affects the Oracle Database versions and on Windows. It has received a CVSS Base Score of 9.9, and it is not remotely exploitable without authentication. Vulnerability CVE-2018-3110 also affects Oracle Database version on Windows as well as Oracle Database on Linux and Unix; however, patches for those versions and platforms were included in the July 2018 Critical Patch Update.

Due to the nature of this vulnerability, Oracle recommends that customers apply these patches as soon as possible. This means that:
• Customers running Oracle Database versions and on Windows should apply the patches provided by the Security Alert.
• Customers running version on Windows or any version of the database on Linux or Unix should apply the July 2018 Critical Patch Update if they have not already done so.

For More Information:
• The Advisory for Security Alert CVE-2018-3110 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2018-3110-5032149.html
• The Advisory for the July 2018 Critical Patch Update is located at http://www.oracle.com/technetwork/security-advisory/cpujul2018-4258247.html


Critical Patch Updates

July 2018 Critical Patch Update Released

Oracle today released the July 2018 Critical Patch Update. This Critical Patch Update provides security updates for a wide range of product families, including: Oracle Database Server, Oracle Global Lifecycle Management, Oracle Fusion Middleware, Oracle E-Business Suite, Oracle PeopleSoft, Oracle Siebel CRM, Oracle Industry Applications (Construction, Communications, Financial Services, Hospitality, Insurance, Retail, Utilities), Oracle Java SE, Oracle Virtualization, Oracle MySQL, and Oracle Sun Systems Products Suite. 37% of the vulnerabilities fixed with this Critical Patch Update are for third-party components included in Oracle product distributions.

The CVSS v3 standard considers vulnerabilities with a CVSS Base Score between 9.0 and 10.0 to have a qualitative rating of “Critical,” and vulnerabilities with a CVSS Base Score between 7.0 and 8.9 to have a qualitative rating of “High.” While Oracle cautions against performing quantitative analysis of the content of each Critical Patch Update release because such analysis is excessively complex (e.g., the same CVE may be listed multiple times because certain components are widely used across different products), it is fair to note that bugs in third-party components make up a disproportionate share of the severe vulnerabilities in this Critical Patch Update: 90% of the Critical vulnerabilities addressed are for non-Oracle CVEs, and non-Oracle CVEs also make up 56% of the Critical and High vulnerabilities combined.

Finally, note that many industry experts anticipate that a number of new variants of exploits leveraging known flaws in modern processor designs (currently referred to as “Spectre” variants) will continue to be discovered. Oracle is actively engaged with Intel and other industry partners to develop technical mitigations against these processor vulnerabilities as they are reported.
For more information about this Critical Patch Update, customers should refer to the Critical Patch Update Advisory and the executive summary published on My Oracle Support (Doc ID 2420273.1).  
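Breakdowns like the percentages quoted above are straightforward to reproduce for any patch release, given a list of fixed CVEs with their CVSS Base Scores and origin. A sketch follows; the sample records are invented for the example and are not the actual July 2018 CPU content.

```python
# Illustrative sketch: compute the share of third-party fixes in a patch
# release, overall and within CVSS severity bands. Records are
# (cve_id, cvss_base_score, is_third_party) tuples -- invented sample data.

CRITICAL = (9.0, 10.0)  # CVSS v3 qualitative "Critical" band
HIGH = (7.0, 8.9)       # CVSS v3 qualitative "High" band

def in_band(score: float, band) -> bool:
    lo, hi = band
    return lo <= score <= hi

def third_party_share(records, bands=None) -> float:
    """Percent of records (optionally filtered to CVSS bands) that are third-party."""
    if bands:
        records = [r for r in records if any(in_band(r[1], b) for b in bands)]
    if not records:
        return 0.0
    third = sum(1 for r in records if r[2])
    return round(100.0 * third / len(records), 1)

if __name__ == "__main__":
    sample = [
        ("CVE-A", 9.8, True), ("CVE-B", 9.1, True), ("CVE-C", 7.5, False),
        ("CVE-D", 8.2, True), ("CVE-E", 5.3, False), ("CVE-F", 9.0, False),
    ]
    print("third-party overall:", third_party_share(sample), "%")
    print("third-party (Critical):", third_party_share(sample, bands=[CRITICAL]), "%")
    print("third-party (Critical+High):", third_party_share(sample, bands=[CRITICAL, HIGH]), "%")
```

As the post cautions, counting each advisory row double-counts CVEs that appear in several products, so deduplicate by CVE ID first if you want per-vulnerability rather than per-listing figures.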


Security Updates

Updates about the “Spectre” series of processor vulnerabilities and CVE-2018-3693

A new processor vulnerability was announced today. Vulnerability CVE-2018-3693 (“Bounds Check Bypass Store” or BCBS) is closely related to Spectre v1. As with previous iterations of Spectre and Meltdown, Oracle is actively engaged with Intel and other industry partners to develop technical mitigations against this processor vulnerability. Note that many industry experts anticipate that a number of new variants of exploits leveraging these known flaws in modern processor designs will continue to be disclosed for the foreseeable future. These issues are likely to primarily impact operating systems and virtualization platforms, and may require software update, microcode update, or both. Fortunately, the conditions of exploitation for these issues remain similar: malicious exploitation requires the attackers to first obtain the privileges required to install and execute malicious code against the targeted systems. In regard to vulnerabilities CVE-2018-3640 (“Spectre v3a”) and CVE-2018-3639 (“Spectre v4”), Oracle has determined that the SPARC processors manufactured by Oracle (i.e., SPARC M8, T8, M7, T7, S7, M6, M5, T5, T4, T3, T2, T1) are not affected by these variants. In addition, Oracle has delivered microcode patches for the last 4 generations of Oracle x86 Servers. As with previous versions of the Spectre and Meltdown vulnerabilities (see MOS Note ID 2347948.1), Oracle will publish information about these issues on My Oracle Support.


Security Updates

Updates about processor vulnerabilities CVE-2018-3640 (“Spectre v3a”) and CVE-2018-3639 (“Spectre v4”)

Two new processor vulnerabilities were publicly disclosed on May 21, 2018: CVE-2018-3640 (“Spectre v3a” or “Rogue System Register Read”) and CVE-2018-3639 (“Spectre v4” or “Speculative Store Buffer Bypass”). Both vulnerabilities have received a CVSS Base Score of 4.3. Successful exploitation of vulnerability CVE-2018-3639 requires local access to the targeted system, and mitigating this vulnerability on affected systems requires both software and microcode updates. Successful exploitation of vulnerability CVE-2018-3640 also requires local access to the targeted system; mitigating it on affected Intel processors is performed solely by applying updated processor-specific microcode. Working with the industry, Oracle has just released the required software updates for Oracle Linux and Oracle VM, along with the microcode recently released by Intel for certain x86 platforms. Oracle will continue to release new microcode updates and firmware patches as production microcode becomes available from Intel. As for previous versions of the Spectre and Meltdown vulnerabilities (see MOS Note ID 2347948.1), Oracle will publish a list of products affected by CVE-2018-3639 and CVE-2018-3640, along with other technical information, on My Oracle Support (MOS Note ID 2399123.1). In addition, the Oracle Cloud teams will work to identify and apply necessary updates, if warranted, as they become available from Oracle and third-party suppliers, in accordance with applicable change management processes.

Critical Patch Updates

April 2018 Critical Patch Update Released

Oracle today released the April 2018 Critical Patch Update. This Critical Patch Update provides security updates for a wide range of product families, including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Oracle PeopleSoft, Oracle Industry Applications (Construction, Financial Services, Hospitality, Retail, Utilities), Oracle Java SE, and Oracle Systems Products Suite.

Approximately 35% of the security fixes provided by this Critical Patch Update are for non-Oracle Common Vulnerabilities and Exposures (CVEs): that is, security fixes for third-party products (e.g., open source components) that are included in traditional Oracle product distributions. In many instances, the same CVE is listed multiple times in the Critical Patch Update Advisory because a vulnerable common component (e.g., Apache) may be present in many different Oracle products.

Note that Oracle started releasing security updates in response to the Spectre (CVE-2017-5715 and CVE-2017-5753) and Meltdown (CVE-2017-5754) processor vulnerabilities with the January 2018 Critical Patch Update. Customers should refer to this Advisory and the “Addendum to the January 2018 Critical Patch Update Advisory for Spectre and Meltdown” My Oracle Support note (Doc ID 2347948.1) for information about newly-released updates. At this time, Oracle has issued the corresponding security patches for Oracle Linux and Virtualization and for Oracle Solaris on SPARC (SPARC 64-bit systems are not affected by Meltdown), and Oracle is working on producing the necessary updates for Solaris on x86 (the diversity of supported processors complicates the creation of the security patches related to these issues).

For more information about this Critical Patch Update, customers should refer to the Critical Patch Update Advisory and the executive summary published on My Oracle Support (Doc ID 2383583.1).

Critical Patch Updates

Security Alert CVE-2017-9805 Released

Last week, Equifax identified an Apache Struts 2 vulnerability, CVE-2017-5638, as having been exploited in a significant security incident. Oracle distributed the Apache Foundation’s fixes for CVE-2017-5638 several months ago in the April 2017 Critical Patch Update, and they should have already been applied to customer systems well before this breach came to light.

Recently, the Apache Foundation released fixes for a number of additional Apache Struts 2 vulnerabilities, including CVE-2017-9805, CVE-2017-7672, CVE-2017-9787, CVE-2017-9791, CVE-2017-9793, CVE-2017-9804, and CVE-2017-12611. Oracle has just published Security Alert CVE-2017-9805 in order to distribute these fixes to our customers. Please refer to the Security Alert advisory for the technical details of these bugs as well as the CVSS Base Score information.

Oracle strongly recommends that customers apply the fixes contained in this Security Alert as soon as possible. Furthermore, Oracle reminds customers that they should keep up with security releases and should have applied the July 2017 Critical Patch Update (the most recent Critical Patch Update release). The next Critical Patch Update release is on October 17, 2017.

For more information:
The Security Alerts and Critical Patch Updates page is located at https://www.oracle.com/technetwork/topics/security/alerts-086861.html
A blog entry titled "Take Advantage of Oracle Software Security Assurance" is located at https://blogs.oracle.com/oraclesecurity/take-advantage-of-oracle-software-security-assurance. This blog entry provides a description of the Critical Patch Update and Security Alert programs and general recommendations around security patching.

Oracle Security

Oracle's Security Fixing Practices

In a previous blog entry, we discussed how Oracle customers should take advantage of Oracle's ongoing security assurance effort in order to help preserve their security posture over time. In today's blog entry, we're going to discuss the highlights of Oracle's security fixing practices and their implications for Oracle customers.

As stated in the previous blog entry, the Critical Patch Update program is Oracle's primary mechanism for the delivery of security fixes in all supported Oracle product releases, and the Security Alert program provides for the release of fixes for severe vulnerabilities outside of the normal Critical Patch Update schedule. Oracle always recommends that customers remain on actively-supported versions and apply the security fixes provided by Critical Patch Updates and Security Alerts as soon as possible.

So, how does Oracle decide to provide security fixes? Where does the company start (i.e., for which product versions do security fixes get generated first)? What goes into security releases? What are Oracle's objectives?

The primary objective of Oracle's security fixing policies is to help preserve the security posture of ALL Oracle customers. This means that Oracle tries to fix vulnerabilities in severity order for each Oracle product family. In certain instances, security fixes cannot be backported; in other instances, lower-severity fixes are required because of dependencies among security fixes. Additionally, Oracle treats customers equally by providing all customers with the same vulnerability information, and access to fixes across actively-used platform and version combinations, at the same time. Oracle does not provide additional information about the specifics of vulnerabilities beyond what is provided in the Critical Patch Update (or Security Alert) advisory and pre-release note, the pre-installation notes, the readme files, and FAQs. The only, narrow exception to this practice is for customers who report a security vulnerability.
When a customer reports a security vulnerability, Oracle treats the customer in much the same way the company treats security researchers: the customer gets detailed information about the vulnerability as well as information about the expected fixing date, and in some instances access to a temporary patch to test the effectiveness of a given fix. However, the scope of the information shared between Oracle and the customer is limited to the original vulnerability reported by the customer.

Another objective of Oracle's security fixing policies is not so much to produce fixes as quickly as possible as it is to make sure that these fixes get applied by customers as quickly as possible. Prior to 2005 and the introduction of the Critical Patch Update program, security fixes were published by Oracle as they were produced by development, without any fixed schedule (much as Oracle would today release a Security Alert). The feedback we received was that this lack of predictability was challenging for customers, and as a result, many customers reported that they no longer applied fixes. Customers said that a predictable schedule would help them ensure that security fixes were picked up more quickly and consistently. As a result, Oracle created the Critical Patch Update program to bring predictability to Oracle customers. Since 2005, and in spite of a growing number of product families, Oracle has never missed a Critical Patch Update release.

It is also worth noting that Critical Patch Update releases for most Oracle products are cumulative. This means that by applying a Critical Patch Update, a customer gets all the security fixes included in that specific Critical Patch Update release as well as all the previously-released fixes for a given product-version combination. This allows customers who may have missed Critical Patch Update releases to quickly "catch up" to current security releases.
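
The cumulative model can be sketched in a few lines of Python (the fix identifiers below are invented purely for illustration):

```python
# Hypothetical security fixes shipped in three consecutive quarterly
# releases for one product-version combination (oldest first).
RELEASES = [
    {"CVE-A", "CVE-B"},   # January Critical Patch Update
    {"CVE-C"},            # April Critical Patch Update
    {"CVE-D", "CVE-E"},   # July Critical Patch Update
]

def fixes_after_applying(release_index: int) -> set:
    """Cumulative patching: applying release N yields every fix from
    releases 0 through N, so skipped releases are caught up automatically."""
    cumulative = set()
    for fixes in RELEASES[:release_index + 1]:
        cumulative |= fixes
    return cumulative

# A customer who skipped January and April still gets all five fixes
# by applying the July release alone.
```
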
Let's now have a look at the order in which Oracle produces fixes for security vulnerabilities. Security fixes are produced by Oracle in the following order:
1. Main code line. The main code line is the code line for the next major release version of the product.
2. Patch sets for non-terminal release versions. Patch sets are rollup patches for major release versions. (A terminal release version is a version for which no additional patch sets are planned.)
3. Critical Patch Update. These are fixes against initial release versions or their subsequent patch sets.

This means that, in certain instances, security fixes can be backported for inclusion in future patch sets or product releases that ship before the fixes' actual inclusion in a future Critical Patch Update release. This also means that systems updated with patch sets or upgraded with a new product release will receive the security fixes previously included in that patch set or release. One consequence of Oracle's practices is that newer Oracle product versions tend to provide an improved security posture over previous versions, because they benefit from the inclusion of security fixes that have not been or cannot be backported by Oracle.

In conclusion, the best way for Oracle customers to fully leverage Oracle's ongoing security assurance effort is to:
- Remain on actively supported release versions and their most recent patch set, so that they have continued access to security fixes;
- Move to the most recent release version of a product, so that they benefit from fixes that cannot be backported and from other security enhancements introduced in the code line over time;
- Promptly apply Critical Patch Updates and Security Alert fixes, so that they prevent the exploitation of vulnerabilities patched by Oracle, which are known by malicious attackers and can be quickly weaponized after the release of Oracle fixes.

For more information:
- Oracle Software Security Assurance website
- Security Alerts and Critical Patch Updates

Oracle Security

Take Advantage of Oracle Software Security Assurance

In a previous blog entry (What is Assurance and Why Does It Matter?), Mary Ann Davidson explains the importance of security assurance and introduces Oracle Software Security Assurance, Oracle’s methodology for building security into the design, build, testing, and maintenance of its products. The primary objective of software security assurance is to help ensure that security controls provided by software are effective, work in a predictable fashion, and are appropriate for that software. The purpose of ongoing security assurance is to make sure that this objective continues to be met over time (throughout the useful life of the software).

The development of enterprise software is a complex matter. Even in mature development organizations, bugs still occur, and the use of automated tools does not completely prevent software defects. One important aspect of ongoing security assurance is therefore to remediate security bugs in released code. Another aspect of ongoing security assurance is to ensure that the security controls provided by software continue to be appropriate when the use cases for the software change. For example, years ago backups were performed mostly on tapes or other devices physically connected to the server being backed up, while today many backups are performed over private or public networks and sometimes stored in a cloud. Finally, other aspects of ongoing security assurance activities include addressing changing threats (e.g., new attack methods) and obsolete technologies (e.g., deprecated encryption algorithms).

Oracle customers need to take advantage of Oracle's ongoing security assurance efforts in order to preserve, over time, the security posture associated with their use of Oracle products. To that end, Oracle recommends that customers remain on actively-supported versions and apply security fixes as quickly as possible after they have been published by Oracle.
Introduced in 2005, the Critical Patch Update program is the primary mechanism for the backport of security fixes for all Oracle on-premises products. The Critical Patch Update is Oracle’s program for the distribution of security fixes in previously-released versions of Oracle software. Critical Patch Updates are regularly scheduled: they are issued quarterly on the Tuesday closest to the 17th of the month in January, April, July, and October. This fixed schedule is intended to provide enough predictability to enable customers to apply security fixes in normal maintenance windows. Furthermore, the dates of the Critical Patch Update releases are intended to fall outside of traditional "blackout" periods, when no changes to production systems are typically allowed (e.g., ends of fiscal years or quarters, or significant holidays).

Note that in addition to this regularly-scheduled program for security releases, Oracle retains the ability to issue out-of-schedule patches or workaround instructions in case of particularly critical vulnerabilities and/or when active exploits are reported "in the wild." This program is known as the Security Alert program.

Critical Patch Update and Security Alert fixes are only provided for product versions that are "covered under the Premier Support or Extended Support phases of the Lifetime Support Policy." This means that Oracle does not backport fixes to product versions that are out of support. Furthermore, unsupported product releases are not tested for the presence of vulnerabilities. It is, however, common for vulnerabilities to be found in legacy code, and vulnerabilities fixed in a given Critical Patch Update release can also affect older product versions that are no longer supported. As a result, organizations choosing to continue to use unsupported systems face increasing risks over time.
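
Because the schedule described above is deterministic (the Tuesday closest to the 17th of January, April, July, and October), the release date for any quarter can be computed. The helper below is an illustrative sketch, not an Oracle API:

```python
from datetime import date

def cpu_release_date(year: int, month: int) -> date:
    """Return the Tuesday closest to the 17th of the given month.

    Exactly one Tuesday falls in the window from the 14th through
    the 20th, and that Tuesday is the one closest to the 17th.
    """
    if month not in (1, 4, 7, 10):
        raise ValueError("Critical Patch Updates ship in Jan, Apr, Jul, Oct")
    for day in range(14, 21):
        d = date(year, month, day)
        if d.weekday() == 1:  # Monday == 0, so Tuesday == 1
            return d
```

This yields, for example, January 16, 2018 and October 17, 2017, matching the published advisory dates for those quarters.
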
Malicious attackers are known to reverse-engineer the content of published security fixes, and it is common for exploit code to be published in hacking frameworks soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert. Continuing to use unsupported systems can therefore have two serious implications: (a) unsupported releases are likely to be affected by vulnerabilities which are not known to the affected software's users, because these releases are no longer subject to ongoing security assurance activities, and (b) unsupported releases are likely to be vulnerable to flaws that are known to malicious perpetrators, because these bugs have been fixed (and publicly disclosed) in subsequent releases.

Unfortunately, security studies continue to report that, in addition to human errors and system misconfigurations, the lack of timely security patching constitutes one of the greatest reasons for the compromise of IT systems by malicious attackers. See, for example, the Federal Trade Commission’s paper "Start with Security: A Guide for Business", which recommends that organizations have effective means to keep up with security releases of their software (whether commercial or open source).

Delays in security patching and overall lapses in good security hygiene have plagued IT organizations for years. In many instances, organizations will report the "fear of breaking something in a business-critical system" as the reason for not keeping up with security patches. Here lies a fundamental paradox: a given system may be considered too important to fail (or be temporarily brought offline), and this is the reason why it is not kept up to date with security patches! These organizations are betting that avoiding the known cost of an availability interruption outweighs the potential impact of a security incident that could result from not keeping up with a security release.
This amounts to driving a car with very little gas left in the tank and thinking, "I don’t have time to stop at the gas station, because I really need my car and I am too busy to gas up." Obviously, the scarcity of technical personnel and the costs associated with testing complex applications and deploying patches further exacerbate the problem. The larger, more complex, and more operation-critical the IT environment, the greater the "to patch or not to patch" conundrum.

In recent years, Oracle has issued stronger cautions against postponing the application of security fixes or knowingly continuing to use unsupported versions. For example, the April 2017 Critical Patch Update Advisory includes the following warning: "Oracle continues to periodically receive reports of attempts to maliciously exploit vulnerabilities for which Oracle has already released fixes. In some instances, it has been reported that attackers have been successful because targeted customers had failed to apply available Oracle patches. Oracle therefore strongly recommends that customers remain on actively-supported versions and apply Critical Patch Update fixes without delay."

Keeping up with security releases is simply a critical requirement for preserving the security posture of an IT environment, regardless of the technologies (or vendors) in use.

Industry Insights

What Is Assurance and Why Does It Matter?

If you are an old security hand, you can skip reading this. If you think "assurance" is something you pay for so your repair bills are covered if someone hits your car, please keep reading.

Way back in the pre-Internet days, I used to say that computer security was kind of a lonely job, because hardly any customers seemed to be really interested in talking about it. There were, of course, some keenly interested customers, including defense and intelligence agencies and a few banks, most of which were concerned with our security functionality and—to a lesser degree—how we were building security into everything, a difference I will explain below, and which is known as assurance.

Times change. Now, when I meet someone who complains of a virus, it's better-than-even odds that he is talking about the latest digital plague and not a case of the flu. Information technology (IT) has moved way beyond mission-critical applications to things that are literally in the palm of our hands, and is in places we never even thought would (or in some cases should) be computerized ("turn your crock pot on remotely? There's an app for that!"). More and more of our world is not only IT-based but Internet accessible. Alas, the growth in Internet-accessible whatchamacallits has also led to a growth in Evil Dudes in Upper Wherever wreaking havoc in systems in Anywheresville. This is one big reason that cybersecurity is something (almost) everybody cares about.

Historically, computer security has often been described as "CIA" (Confidentiality, Integrity and Availability). Confidentiality means that the data is protected such that people who don't need to access it can't access it, via restrictions on who can view, delete or change data. For example, at Oracle, I can review my salary online (so can my Human Resources representative), but I cannot look at the salaries of employees who do not report to me. Integrity means that the data hasn't been corrupted (technical term: "futzed with").
In other words, you know that "A" means "A" and isn't really "B" that has been garbled to look like "A." Corrupted data is often worse than no data, precisely because you can't trust it. (Wire transfers wouldn't work if extra 0s were mysteriously and randomly appended to amounts.) Availability means that you are able to access data (and systems) you have legitimate access to—when you need to. In other words, someone hasn't prevented access by, say, flooding a machine with so many requests that the system just gives up (the digital equivalent of a persistent three-year-old asking for more candy "now mommy now mommy now mommy" to the point where mommy can't think). C, I and A are all important attributes of security that may vary in terms of importance from system to system.

Assurance is not CIA, but it is the confidence that a system does what it was designed to do, including protecting against specific threats, and also that there aren't sneaky ways around the security controls it provides. It's important because, if you don't have a lot of confidence that the CIA cannot be bypassed by Evil Dude (or Evil Dudette), then the CIA isn't useful. If you have a digital doorlock—but the lock designer goofed by allowing anybody to unlock the door by typing '98765'—then you don't have any security once Evil Dude figures out that 98765 always gets him into your house (and shares that with everybody else on the Internet).

Here's the definition of assurance that the US Department of Defense uses: software assurance relates to "the level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software" (https://acc.dau.mil/CommunityBrowser.aspx?id=25749). When I started working in security, most security people knew a lot about the CIA of security, but fewer of us—fewer of anybody—thought about the "functions as intended" and "free of vulnerabilities" part.
"Functions as intended" is a design aspect of security. That means that a designer not only considered what the software (or hardware) was intended to do, but thought about how someone could try to make the software (or hardware) do what it was not intended to do. Both are important because, unless you never deploy a product, it's most likely going to be attacked, somehow, somewhere, by someone. Thinking about how Evil Dude can try to break stuff (and making that hard/unlikely to succeed) is a very important part of "functions as intended."

The "free of vulnerabilities" part is also important; having said that, nobody who knows anything about code would say, "all of our code is absolutely perfect." ("Pride goeth before destruction and a haughty spirit before a fall.") That said, one of the most important aspects of assurance is secure coding. Secure coding practices include training your designers, developers, testers (and yes, even documentation writers) about how code can be broken, so people think about that before starting to code. Having a development process that incorporates security into design, development, testing and maintenance is also important. Security isn't a sort of magic pixie dust you can sprinkle over software or hardware after it's all done to magically make it secure—it is a quality, just as structural integrity is part of a building, not something you slap your head over and think, "dang, I forgot the rebar, I need to add some to this building." It's too late after the concrete has set.

Secure coding practices also include actively looking for coding errors that could be exploited by an Evil Dude, triaging those coding errors to determine "how bad is bad," fixing the worst stuff the fastest, and making sure that a problem is fixed completely. If Evil Dude can break in by typing '^X,' it's tempting to just redo your code so typing ^X doesn't get Evil Dude anything. But that likely isn't the root of the problem (what about ^Y - what does that do?
Or ^Z, ^A...?) Automated tools designed to help find avoidable, preventable defects in software are a huge help (they didn't really exist when I started in security).

Nobody who buys a house expects the house to be 100% perfect, but you'd like to think that the architect hired structural engineers to ensure the walls wouldn't fall over, the contractor had people checking the work all along ("don't skimp on the rebar"), there was a building inspection, etc. Note that even with a really well-designed and well-built house, there is probably a ding or two in the paint somewhere even before you move in—it's probably not letter perfect. Code is like that, too, although a "ding in your code" is probably more significant than a ding in your paint, so there should be far fewer of them.

Assurance matters not only because people who use IT want to know things work as intended—and cannot be easily broken—but because time, money and people are always limited resources. Most companies would rather hire 50 more salespeople to reach more potential customers than hire 50 more people to patch their systems. (For that matter, I'd rather have such strong, repeatable, ingrained secure development practices that instead of hiring 50 more people to fix bad/insecure code, we can use those 50 people to build new, cool (and secure) stuff.)

Assurance is always going to be good and necessary, even as the baseline technology we are trying to "assure" continues to change. One of the most enjoyable aspects of my job is continuing to "bake security in" as we grow in breadth and as we adapt to changes in the market. Many companies are moving from "buy-build-maintain" their own systems to "rent," by using cloud services. (It makes a lot of sense: companies don't typically build apartment buildings in every city their employees visit; they use "cloud housing," a.k.a. hotels.) The increasing move to cloud services comes with security challenges, but also has a lot of security benefits.
If it's hard to find enough IT people to maintain your systems, it's even harder to find enough security people to defend them. A service provider can secure the same thing, 5000 times, much better than 5000 individual customers can. (Or, alternatively, a service provider can secure one big multi-tenant service offering better than the 5000 customers using it can do themselves.) The assurance practices we have adapted from "home grown" software and hardware have already morphed, and will continue to morph, to how we build and deliver cloud services. Click here for more information on Oracle assurance.

Security Trends

The State of Open Source Security

Open source components have played a growing role in software development (commercial and in-house development). The traditional role of a developer has evolved from coding most of everything to re-using known and trustworthy components as much as possible. As a result, a growing aspect of software design and development decisions has become the integration of open source and third-party components into increasingly large and complex software.

The question as to whether open source software is inherently more secure than commercial (i.e., "closed") software has been ardently debated for a number of years. The purpose of this blog entry is not to definitively settle this argument, though I would argue that software (whether open source or closed source) that is developed by security-aware developers tends to be inherently more secure. Regardless of this controversy, there are important security implications regarding the use of open source components.

The wide use of certain components (open source or not) has captured the attention of both security researchers and malicious actors. Indeed, the discovery of a 0-day in a widely-used component can offer malicious actors the prospect of "hacking once and exploiting anywhere," or a large financial gain if the bug is sold on the black market. As a result, a growing number of security vulnerabilities have been found and reported in open source components. The positive impact of increased security research in widely-used components will (hopefully) be the improved security-worthiness of these components (and the relative fulfillment of the "million eyes" theory), and increased awareness of the security implications of the use of open source components within development organizations. In many instances, the vulnerabilities found in public components have been given cute names: POODLE, Heartbleed, Dirty Cow, Shellshock, Venom, etc. (in no particular order).
These names contributed to a sense of urgency (sometimes panic) within many organizations, often to the detriment of a rational analysis of the actual severity of these issues and their relative exploitability in the affected environments. Less security-sophisticated organizations have been particularly affected by this sense of urgency, and many have attempted to scan their environments to find software containing the vulnerability "du jour." However, it has been Oracle's experience that while many free tools allow organizations to identify the presence of open source components in IT systems relatively accurately, the majority of these tools have an abysmal track record at accurately identifying the version(s) of these components, much less at determining the exploitability of the issues associated with them. As a result, less security-sophisticated organizations are faced with reports containing a large number of false positives, and are unable to make sense of these findings (Oracle support has seen an increase in the submission of such inaccurate reports).

From a security assurance perspective, I believe that there are three significant and often under-discussed topics related to the use of open source components in complex software development:
- How can we assess and enhance security assurance activities in open source projects?
- How can we ensure that these components are obtained securely?
- How does the use of open source components affect ongoing assurance activities throughout the useful life of associated products?

How can we assess and enhance security assurance activities in open source projects? Assessing the security maturity of an open source project is not necessarily an easy thing.
There are certain tools and derived methodologies and principles (e.g., the Building Security In Maturity Model (BSIMM), SAFECode) that can be used to assess the relative security maturity of commercial software developers, but their application to open source projects is difficult. For example, how can one determine the amount of security skill available in an open source project, and whether code changes are systematically reviewed by skilled security experts? Furthermore, should the software industry come together to coordinate the role of commercial vendors in helping enhance the security posture of the most common open source projects, for the benefit of all vendors and the community? Is it enough to commit that security fixes be shared with the community when an issue is discovered while a component is being used in a commercial offering?

How can we ensure that these components are obtained securely? A number of organizations (whether developing commercial software or their own systems) are concerned solely about "toxic" licenses when procuring open source components, when they should be equally concerned about bringing in toxic code. One problem is the potential downloading and use of obsolete software (which contains known security flaws that have been fixed in the most recent releases). This problem can be solved relatively easily by requiring developers to download only the most recent releases from the official project repository. A second problem is that many developers prefer pulling compiled binaries instead of compiling the source code themselves (and verifying its authenticity). Developers should be aware of the risk of pulling malicious code: just because something is labelled "foo" does not mean it actually is "foo"; it may actually be "foo plus a nasty backdoor." There have been several publicly-reported security incidents resulting from the downloading of maliciously altered programs.
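
One basic safeguard when pulling a pre-built artifact is verifying it against a checksum (or, better, a signature) that the project publishes out-of-band. A minimal SHA-256 sketch (the function names are mine, purely illustrative):

```python
import hashlib
import hmac

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    """Compare an artifact's digest against the one the project published.

    Comparing against a digest hosted right next to the download proves
    little: an attacker who altered the artifact could alter that too.
    """
    return hmac.compare_digest(sha256_of(data), expected_digest.lower())
```

This only helps if the expected digest comes over a channel the attacker cannot also tamper with, such as a signed release announcement.
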
How can we provide the necessary security care of a solution that includes open source components throughout its useful life? Once an organization has decided to use an external component in its solution, the organization should also consider how it will maintain that solution. The maintenance and patching implications of third-party components are often overlooked. For example, organizations may be faced with hardware limitations in their products. They may have to deprecate hardware products more quickly because a required open source component is no longer supported on a specific platform, or because the technical requirements of subsequent releases of the component exceed the specifications of the hardware. In hardware environments, there is also the obvious question of whether patching mechanisms are available for updating open source components on the platform. There are also problematic implications when open source components are used in purely-software solutions. Security fixes for open source components are often unpredictable. How does this unpredictability affect the availability of production systems, or customers' requirements for a fixed maintenance schedule? In conclusion, the questions listed in this blog entry are just a few of those one should consider when developing technology-based products (which seems to describe almost everything these days). These questions are particularly important as open source components represent a large and increasing chunk of the technology supply chain, not only for commercial technology vendors but also for cloud providers. Security assurance policies and practices should take these questions into consideration, and highlight the fact that open source, while incredibly useful, is not necessarily "free" but requires specific commitments and due diligence obligations.


Industry Insights

Common Criteria and the Future of Security Evaluations

For years, I (and many others) have recommended that customers demand more of their information technology suppliers in terms of security assurance – that is, proof that security is “built in” and not “bolted on,” that security is “part of” the product or service developed and can be assessed in a meaningful way. While many customers are focused on one kind of assurance – the degree to which a product is free from security vulnerabilities – it is extremely important to know the degree to which a product was designed to meet specific security threats (and how well it does that). These are two distinct – and quite complementary – approaches to security, and the distinction should increasingly matter to all customers. The good news is that many IT customers – whether of on-premises products or cloud services – are asking for more “proof of assurance,” and many vendors are paying more attention. Great! At the same time, sadly, a core international standard for assurance, the Common Criteria (CC) (ISO 15408), is at risk. The Common Criteria allows you to evaluate your IT products via an independent lab (certified by the national “scheme” in which the lab is domiciled). Seven levels of assurance are defined – generally, the higher the evaluation assurance level (EAL), the more “proof” you have to provide that your product 1) addresses specific (named) security threats 2) via specific (named) technical remedies to those threats. Over the past few years, CC experts have packaged technology-specific security threats, objectives, functions and assurance requirements into “Protection Profiles” that have a pre-defined assurance level. The best part of the CC is the CC Recognition Arrangement (CCRA), the benefit of which is that a CC security evaluation done in one country (subject to some limits) is recognized in multiple other countries (27, at present).
The benefit to customers is that they can have a baseline level of confidence in a product they buy because an independent entity has looked at/validated a set of security claims about that product. Unfortunately, the CC is in danger of losing this key benefit of mutual recognition. The main tension is between countries that want fast, cookie-cutter, “one assurance size fits all” evaluations, and those that want (for at least some classes of products) higher levels of assurance. These tensions threaten to shatter the CCRA, with the risk of an “every country for itself,” “every market sector for itself” or, worse, “every customer for itself” attempt to impose inconsistent assurance requirements on vendors that sell products and services in the global marketplace. Customers will not be well-served if there is no standardized and widely-recognized starting point for a conversation about product assurance. The uncertainty about the future of the CC creates opportunity for new, potentially expensive and unproven assurance validation approaches. Every Tom, Dick, and Harriet is jumping on the assurance bandwagon, whether by developing a new assurance methodology (that the promoters hope will be adopted as a standard, although it’s hardly a standard if one company “owns” the methodology), or by lobbying for the use of one proprietary scanning tool or another (noting that none of the tools that analyze code are themselves certified for accuracy and cost-efficiency, nor are the operators of these tools). Nature abhors a vacuum: if the CCRA fractures, there are multiple entities ready to promote their assurance solutions – which may or may not work. (Note: I freely admit that a current weakness of the CC is that, while vulnerability analysis is part of a CC evaluation, it’s not all that one would want.
A needed improvement would be a mechanism that ensures that vendors use a combination of tools to more comprehensively attempt to find security vulnerabilities that can weaken security mechanisms, and have a risk-based program for triaging and fixing them. Validating that vendors are doing their own tire-kicking – and fixing holes in the tires before the cars leave the factory – would be a positive change.) Why does this threat of CC balkanization matter? First of all, testing the exact same product or service 27 times won’t in all likelihood lead to a 27-fold security improvement, especially when the cost of the testing is borne by the same entity over and over (the vendor). Worse, since the resources (time, money, and people) that would be used to improve actual security are assigned to jumping through the same hoop 27 times, we may paradoxically end up with worse security. We may also end up with worse security to the extent that there will be less incentive for the labs that do CC evaluations to pursue excellence and cost efficiency in testing if they have less competition (for example, from labs in other countries, as is the case under the CCRA) and they are handed a captive marketplace via country-specific evaluation schemes. Second, whatever the shortcomings of the CC, it is a strong, broadly-adopted foundation for security that to date has the support of multiple stakeholders. While it may be improved upon, it is nonetheless better to do one thing in one market that benefits and is accepted in 26 other markets than to do 27 or more expensive testing iterations that will not lead to a 27-fold improvement in security. This is especially true in categories of products that some national schemes have deemed “too complex to evaluate meaningfully.” The alternative clearly isn’t per-country testing or per-customer testing, because it is in nobody’s interest and not feasible for vendors to do repeated one-off assurance fire drills for multiple system integrators.
Even if the CC is “not sufficient” for all types of testing for all products, it is still a reputable and strong baseline to build upon. Demand for Higher Assurance In part, the continuing demand for higher assurance CC evaluations is due to the nature of some of the products: smart cards, for example, are often used for payment systems, where there is a well understood need for “higher proof of security-worthiness.” Also, smart cards generally have a smaller code footprint and fewer, well-defined interfaces, and thus they lend themselves fairly well to more in-depth, higher assurance validation. Indeed, the smart card industry – in a foreshadowing and/or inspiration of CC community Protection Profiles (cPPs) – was an early adopter of devising common security requirements and “proof of security claims,” doubtless understanding that all smart card manufacturers – and the financial institutions who are heavy issuers of them – have a vested interest in “shared trustworthiness.” This is a great example of understanding that, to quote Ben Franklin, “We must all hang together or assuredly we shall all hang separately.” The demand for higher assurance evaluations continues in part because the CC has been so successful. Customers worldwide became accustomed to “EAL4” as the gold standard for most commercial software.
“EAL-none”—the direction of new style community Protection Profiles (cPP)—hasn’t captured the imagination of the global marketplace for evaluated software in part because the promoters of “no-EAL is the new EAL4” have not made the necessary business case for why “new is better than old.” An honorable, realistic assessment of “new-style” cPPs would explain what the benefits are of the new approach and what the downsides are as part of making a case that “new is better than old.” Consumers do not necessarily upgrade their TV just because they are told “new is better than old;” they upgrade because they can see a larger screen, clearer picture, and better value for money. Product Complexity and Evaluations To the extent security evaluation methodology can be more precise and repeatable, that facilitates more consistent evaluations across the board at a lower evaluation cost. However, there is a big difference between products that were designed to do a small set of core functions, using standard protocols, and products that have a broader swathe of functionality and have far more flexibility as to how that functionality is implemented. This means that it will be impossible to standardize testing across products in some product evaluation categories. For example, routers use standard Internet protocols (or well-known proprietary protocols) and are relatively well defined in terms of what they do. Therefore, it is far easier to test their security using standardized tests as part of a CC evaluation to, for example, determine attack resistance, correctness of protocol implementation, and so forth. The Network Device Protection Profile (NDPP) is the perfect template for this type of evaluation. 
Relational databases, on the other hand, use structured query language (SQL) but that does not mean all SQL syntax in all commercial databases is identical, or that protocols used to connect to the database are all identical, or that common functionality is completely comparable among databases. For example, Oracle was the first relational database to implement commercial row level access control: specifically, by attaching a security policy to a table that causes a rewrite of SQL to enforce additional security constraints. Since Oracle developed (and patented) row level access control, other vendors have implemented similar (but not identical) functionality. As a result, no set of standard tests can adequately test each vendor’s row level security implementation, any more than you can use the same key on locks made by different manufacturers. Prescriptive (monolithic) testing can work for verifying protocol implementations; it will not work in cases where features are implemented differently. Even worse, prescriptive testing may have the effect of “design by test harness.” Some national CC schemes have expressed concerns that an evaluation of some classes of products (like databases) will not be “meaningful” because of the size and complexity of these products [1], or that these products do not lend themselves to repeatable, cross-product (prescriptive) testing. This is true, to a point: it is much easier to do a building inspection of a 1000-square foot or 100-square meter bungalow than of Buckingham Palace. However, given that some of these large, complex products are the core underpinning of many critical systems, does it make sense to ignore them because it’s not “rapid, repeatable and objective” to evaluate even a core part of their functionality? These classes of products are heavily used in the core market sectors the national schemes serve: all the more reason the schemes should not preclude evaluation of them. 
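The row level access control idea described above can be pictured as a policy function attached to a table whose returned predicate is appended to every query against that table. The sketch below is a deliberately simplified, hypothetical illustration in Python (the table, column, and user names are invented, and real row level security runs inside the database engine rather than on query strings):

```python
# Hypothetical sketch of policy-driven query rewriting: a security policy
# attached to a table returns a predicate, and every query against that
# table is rewritten to enforce it. Real implementations work inside the
# database engine, not by string manipulation.

def orders_policy(user: str) -> str:
    """Policy function: each sales rep may see only their own rows."""
    return f"sales_rep = '{user}'"

POLICIES = {"orders": orders_policy}

def rewrite(query: str, table: str, user: str) -> str:
    policy = POLICIES.get(table)
    if policy is None:
        return query  # no policy attached: the query passes through unchanged
    predicate = policy(user)
    # Append the policy predicate to an existing WHERE clause, or add one.
    if " where " in query.lower():
        return f"{query} AND {predicate}"
    return f"{query} WHERE {predicate}"

print(rewrite("SELECT * FROM orders", "orders", "alice"))
# SELECT * FROM orders WHERE sales_rep = 'alice'
```

Because each vendor decides where and how such a predicate is generated and enforced, no single prescriptive test suite can exercise every implementation, which is the point made above.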
Worse, given that customers subject to these CC schemes still want evaluated products, a lack of mutual recognition of these evaluations (thus breaking the CCRA) or negation of the ability to evaluate merely drives costs up. Demand for inefficient and ineffective ad hoc security assurances continues to increase and will explode if vendors are precluded from evaluating entire classes of products that are widely-used and highly security relevant. No national scheme, despite good intentions, can successfully control its national marketplace, or the global marketplace for information technology. Innovation One of the downsides of rapid, basic, vanilla evaluations is that it stifles the uptake of innovative security features in a customer base that has a lot to protect. Most security-aware customers (like defense and intelligence customers) want new and innovative approaches to security to support their mission. They also want the new innovations vetted properly (via a CC evaluation). Typically, a community Protection Profile (cPP) defines the set of minimum security functions that a product in category X does. Add-ons can in theory be done via an extended package (EP) – if the community agrees to it and the schemes allow it. The vendor and customer community should encourage the ability to evaluate innovative solutions through an EP, as long as the EP does not specify a particular approach to a threat to the exclusion of other ways to address the threat. This would continue to advance the state of the security art in particular product categories without waiting until absolutely everyone has Security Feature Y. It’s almost always a good thing to build a better mousetrap: there are always more mice to fend off. 
Rapid adoption of EPs would enable security-aware customers, many of whom are required to use evaluated products, to adopt new features readily, without waiting for: a) every vendor to have a solution addressing that problem (especially since some vendors may never develop similar functionality), b) the cPP to have been modified, and c) all vendors to have evaluated against the new cPP (that includes the new security feature). Given the increasing focus of governments on improvements to security (in some cases by legislation), national schemes should be the first in line to support “faster innovation/faster evaluation,” to support the customer base they are purportedly serving. Last but really first, in the absence of the ability to rapidly evaluate new, innovative security features, customers who would most benefit from using those features may be unable or unwilling to use them, or may only use them at the expense of “one-off” assurance validation. Is it really in anyone’s interest to ask vendors to do repeated one-off assurance fire drills for multiple system integrators? Conclusion The Common Criteria – and in particular, Common Criteria recognition – form a valuable, proven foundation for assurance in a digital world that is increasingly in need of it. That strong foundation can nonetheless be strengthened by: 1) recognizing and supporting the legitimate need for higher assurance evaluations in some classes of product, 2) enabling faster innovation in security and the ability to evaluate it, 3) continuing to evaluate core products that have historically had and continue to have broad usage and market demand (e.g., databases and operating systems), and 4) embracing, where apropos, repeatable testing and validation, while recognizing the limitations thereof that apply in some cases to entire classes of products and ensuring that such testing is not unnecessarily prescriptive. [1] https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/DBMS%20Position%20Statement.pdf


Security Updates

Security Alert CVE-2016-0603 Released

Oracle just released Security Alert CVE-2016-0603 to address a vulnerability that can be exploited when installing Java 6, 7 or 8 on the Windows platform. This vulnerability has received a CVSS Base Score of 7.6. To be successfully exploited, this vulnerability requires that an unsuspecting user be tricked into visiting a malicious web site and downloading files to the user's system before installing Java 6, 7 or 8. Though considered relatively complex to exploit, this vulnerability may, if successfully exploited, result in a complete compromise of the unsuspecting user’s system. Because the exposure exists only during the installation process, users need not upgrade existing Java installations to address the vulnerability. However, Java users who have downloaded any old version of Java prior to 6u113, 7u97 or 8u73 should discard these old downloads and replace them with 6u113, 7u97 or 8u73 or later. As a reminder, Oracle recommends that Java home users visit Java.com to ensure that they are running the most recent version of Java SE and that all older versions of Java SE have been completely removed. Oracle further advises against downloading Java from sites other than Java.com, as these sites may be malicious. For more information, the advisory for Security Alert CVE-2016-0603 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2016-0603-2874360.html


Critical Patch Updates

January 2016 Critical Patch Update Released

Oracle today released the January 2016 Critical Patch Update. With this Critical Patch Update release, the Critical Patch Update program enters its 11th year of existence (the first Critical Patch Update was released in January 2005). As a reminder, Critical Patch Updates are currently released 4 times a year, on a schedule announced a year in advance. Oracle recommends that customers apply this Critical Patch Update as soon as possible. The January 2016 Critical Patch Update provides fixes for a wide range of product families, including: Oracle Database (none of these database vulnerabilities is remotely exploitable without authentication); Java SE (Oracle strongly recommends that Java home users visit the Java.com website to ensure that they are using the most recent version of Java, and advises them to remove obsolete Java SE versions from their computers if they are not absolutely needed); and Oracle E-Business Suite (Oracle’s ongoing assurance effort with E-Business Suite helps remediate security issues and is intended to help enhance the overall security posture provided by E-Business Suite). Oracle takes security seriously, and strongly encourages customers to keep up with newer releases in order to benefit from Oracle’s ongoing security assurance effort. For More Information: The January 2016 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpujan2016-2367955.html The Oracle Software Security Assurance web site is located at https://www.oracle.com/support/assurance/index.html Oracle Applications Lifetime Support Policy is located at http://www.oracle.com/us/support/library/lifetime-support-applications-069216.pdf


Industry Insights

Improving the Speed of Product Evaluations

Hi there, Oracle Security blog readers; Josh Brickman here again. Today I want to share some of our thoughts about Common Criteria (CC) evaluations, specifically those under the US Scheme of the CC run by the National Information Assurance Partnership (NIAP). NIAP is one of the leaders behind the significant evolution of the Common Criteria, resulting in ratification of a new Common Criteria Recognition Arrangement last year. Background In 2009, NIAP advocated for a radical change in the CC by creating Protection Profiles quickly for many technology types. As described by NIAP [1]: In this new paradigm, NIAP will only accept products into evaluation claiming exact compliance to a NIAP-approved Protection Profile. These NIAP-approved Protection Profiles (PP) produce evaluation results that are achievable, repeatable, and testable — allowing for a more consistent and rapid evaluation process. [2] Using this approach, assurance activities are tailored to technologies, eliminating the need for evaluation assurance levels (pre-cast assurance activities, also known as EALs, that were generically used for most CC evaluations). Another key element is the goal of eliminating variables, such as lab experience and evaluator subjectivity, that can influence the results. As a result, these changes help make evaluations less “gameable" — a great idea. Even better: with the new approach, instead of requiring confidential internal documentation and sometimes limited amounts of source code, the labs will now only evaluate what is available to any customer — the finished product and its documentation. New Protection Profiles describe assurance testing activities; in theory, a vendor could take the same product and documentation to different labs and receive consistent results. One other objective of NIAP's vision of a revitalized CC is the goal of 90-day evaluations.
Under the new paradigm, evaluations should take less than six months (their current time limit) and after labs and vendors develop experience with the new process, ideally complete them in three months. For vendors, this is a terrific development as it maximizes the amount of time an evaluated product can remain on the market before a new version is released. Customers also benefit greatly from this change — if a product can be certified quickly, a greater number of new products will be available for procurement by customers who require products to be CC-evaluated before acquisition. Using the old system, a product often was outdated — replaced by a new version with new functionality — by the time it finished an evaluation. There is a lot to like with these developments — vendors benefit from a consistent and timely process, customers benefit from greater choice in evaluated products, and labs stand to benefit as more vendors send more products for evaluation using resources that are scalable. Entropy Evaluation Unfortunately, there are a few bumps along this road to assurance nirvana. As security researchers have known for some time, sources of entropy are critical for encryption as they help ensure random number generation is suitably random. As such, NIAP introduced requirements for entropy assessments for every product going through an evaluation [3]. Not a bad idea on the surface, however, in the commercial marketplace the mix of technologies, intellectual property (IP), and approaches to development mean it is hard to evaluate entropy (to ensure it is following good practices) without reviewing confidential design documents and — potentially — source code. 
As noted in Atsec's insightful blog post from September 2013: The TOE Security Assurance Requirements specified in Table 2 of the NDPP [4] is (roughly) equivalent to Evaluation Assurance Level 1 (EAL1) [5] per CC Part 3...The NDPP does not require design documentation of the TOE itself; nevertheless its Annex D does require design documentation of the entropy source — which is often provided by the underlying Operating System (OS). Suppose that a TOE runs on Windows, Linux, AIX and Solaris and so on, some of which may utilize cryptographic acceleration hardware (e.g., Intel processors supporting RDRAND instruction). In order to claim NDPP compliance and succeed in the CC evaluation, the vendor is obligated to provide the design documentation of the entropy source from all those various Operating Systems and/or hardware accelerators. This is not only a daunting task, but also mission impossible because the design of some entropy sources are proprietary to some OS or hardware vendors. At first, NIAP stated that Entropy Assessment Reports (EARs) would be acceptable to submit in draft form in order for an evaluation to start. But NIAP found that finalizing the EAR was taking weeks and months, because often the expertise — or documentation — did not exist in the vendor's development organization. Vendors also found that NIAP did not trust the labs to evaluate the EARs and decided to insert its own experts from the Information Assurance Directorate (IAD) into the process. In practice this means that the EAR must be evaluated by IAD before it can be approved; no other evaluation element in the NIAP scheme requires this oversight except that of entropy. More problematic, some disclosure of the entropy source code and design documentation for entropy generation is now required by NIAP.
As I noted above, some vendors may have no trouble providing this extra information to complete an entropy assessment; however, if your product relies on an external or third-party entropy source, you may be unable to provide the material (even if you are willing, your source may not wish to cough up their IP). Speeding Up, or Slowing Down? NIAP has a six-month evaluation time limit, with a public goal of 90 days for an evaluation. An evaluation involves a vendor providing evidence and a third-party lab evaluating it against certain criteria (in this case the Common Criteria). On the one hand, we have a new Protection Profile paradigm that streamlines evaluations and removes requirements to provide source code and confidential data. On the other hand, with the introduction of Entropy Assessment Reports (EARs), vendors are increasingly unable to complete an evaluation within the 90-day window. NIAP's solution is to evaluate an EAR BEFORE a CC evaluation can start. We see this happening now within the NIAP scheme and in other schemes trying to evaluate against NIAP PPs. There are good reasons why NIAP is so concerned with entropy, yet we need to examine how best to meet these requirements in the context of the current IT ecosystem. As Atsec noted in the same post: In theory, a thorough analysis on the entropy source coupled with some statistical tests on the raw data is absolutely necessary to gain some assurance of the entropy that plays such a vital role in supporting security functionality of IT products...However, the requirements of entropy analysis stated above impose an enormous burden on the vendors as well as the labs to an extent that they are out of balance (in regard to effort expended) compared to other requirements; or in some cases, it may not be possible to meet the requirements.
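As the Atsec excerpt notes, statistical tests on raw output are one part of an entropy assessment. The following is a toy illustration in Python of the simplest such check, an empirical Shannon entropy estimate over raw byte samples; real assessments (for example, those following NIST SP 800-90B) are far more rigorous than any single statistic:

```python
import math
from collections import Counter

def shannon_entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte sample, in bits per byte (8.0 max)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A constant "source" carries no entropy; output spread uniformly over all
# 256 byte values scores the theoretical maximum of 8 bits per byte.
flat = bytes(4096)                 # 4096 zero bytes
varied = bytes(range(256)) * 16    # uniform over all byte values

print(shannon_entropy_bits_per_byte(varied))       # 8.0
print(shannon_entropy_bits_per_byte(flat) < 0.01)  # True
```

A high score on a test like this is necessary but not sufficient: deterministic output (e.g., a counter) also looks "random" to a frequency count, which is one reason assessors ask for design documentation of the source rather than raw samples alone.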


Industry Insights

FIPS: The Crypto "Catch 22"

Hello, Oracle blog reader! My name is Joshua Brickman and I run Oracle's Security Evaluations team (SECEVAL). At SECEVAL we are charged with shepherding certain Oracle products through security certifications and accreditations, mostly for government use. Today, in my initial blog, I'd like to talk to you about cryptography [1] (or crypto) as it relates to these government certifications. First, a little history: crypto goes back to classical Rome. The Caesar Cipher was used by the Roman military to pass messages during a battle. In World War II, Germany was very successful encrypting messages utilizing a machine known as "Enigma." But when Enigma's secret codes were famously broken (due mostly to procedural errors), it helped to turn the tide of the war. The main idea of crypto hasn't changed, though: only allow intended recipients to understand a given message—to everyone else it's unintelligible [2]. Today's crypto is created by complex computer modules utilizing "algorithms." The funny thing about crypto is that someone is always trying to break it — usually using computers (also running complex programs). One of the U.S. government's approaches to combating code breaking has been the Federal Information Processing Standard (FIPS) program maintained by the National Institute of Standards and Technology (NIST). NIST uses FIPS to publish approved algorithms that software needs to implement correctly in order for vendors to claim their products meet the FIPS standard. The FIPS standard defines acceptable algorithms; vendors implement their crypto to meet them. So why would technology vendors need to comply with this standard? Well, FIPS 140-2 defines the acceptable algorithms and modules that can be procured by the US government. In fact, NIST keeps a list of approved modules (implementations of algorithms) on their website. To get on this list of approved modules, vendors like Oracle must get their modules and algorithms validated.
Unfortunately for many technology providers there isn't any alternative to FIPS 140-2 (version 2 of the current approved release) [3]. FIPS 140-2 is required by U.S. government law [4] if a vendor wants to do business with the government. The validation process is an expensive and time-consuming process. Vendors must hire a third-party lab to do the validation; most also hire consultants to write documentation that is required as a part of the process. The validation process for FIPS 140-2 requires that vendors prove they have implemented their crypto correctly against the approved algorithms. This proof is accomplished via these labs that validate each module is correctly implementing the selected algorithms. FIPS 140-2 has also been adopted by banks and other financial services. If you think about it, this makes a lot of sense: there is nothing more sensitive than financial data and banks want assurances that the crypto they depend on to protect it is as trusted as the safes that protect the cash. Interestingly, smart-card manufacturers also require FIPS: this is not a government mandate, but rather a financial industry requirement to minimize their fraud rates (by making the cost for criminals to break the crypto in smart cards too high for them to bother). Finally the Canadian government partners with NIST on the FIPS 140-2 program known as the Cryptographic Module Validation Program (CMVP) and mirrors U.S. procurement requirements. All of this would be the cost of doing business with the US (and Canadian) governments except that there are a couple of elements that are broken. 1. Government customers who buy crypto technology require a FIPS 140-2 certificate of validation (not a letter from a lab or being on a list of "under validation"). In our experience, a reasonable timeframe for a validation is a few months. 
Unfortunately, after all of the work is completed, the final "reports" (which lead to the certificates) go into a black hole otherwise known as the "In Review" status. This is when the vendor, lab (and consultant) have done as much work as possible but are now waiting for both the Canadian and US governments to review the case for validation. The queue (as I write this blog) has been as long as nine months over the last couple of years. An Oracle example was the Sun Crypto Accelerator (SCA) 6000 product. On January 14, 2013, our lab submitted the final FIPS 140-2 report for the SCA6000 to NIST. Seven months later, we received the first feedback on our case for validation. We finally received our certificate on September 11, 2013. NIST is quite aware of the delays [5]; they are considering several options, including but not limited to increasing fees to pay for additional contractors, pushing more work to labs and/or revising their review process, but they haven't announced any changes that might resolve the issue other than minor fee increases. From our point of view, it's really difficult to justify the significant investments required for these validations when you can't predict their completion. If you think about the shelf life of technology these days, a delay of, say, a year in a two-year "life-span" of a product reduces the value to the customer by 50%. As a vendor we have no control over that delay, and unless customers are willing to buy unvalidated products (and they can't by law in the U.S.), they are forced to use older products that may be missing key new features. 2. NIST also refers to outdated requirements [6] under the Common Criteria. The FIPS 140-2 Standard references Common Criteria Evaluation Assurance Level (EAL) directly when discussing eligibility for the various FIPS levels [7]. However, due to current policies with the U.S.
Scheme of the Common Criteria (the National Information Assurance Partnership, or NIAP), it is no longer possible to be evaluated at EAL3 or higher. This prevents any vendor from obtaining a FIPS 140-2 validation (of any software or firmware cryptographic module) at any level higher than Level 1.

One example of an Oracle product that we would have validated higher than Level 1 is the Solaris Cryptographic Framework (SCF). Originally coded to be validated at FIPS Level 2, Oracle SCF was forced to drop its validation claim down to Level 1. This was because NIST's Implementation Guidance for FIPS 140-2 required that an operating system be evaluated against a deprecated Common Criteria Protection Profile if vendors of crypto modules running on that OS wanted the modules validated at Level 2. Unfortunately, only an older version of Solaris had been Common Criteria evaluated against that deprecated Operating System Protection Profile, and the version of SCF that we needed FIPS 140-2 validated didn't run on that old version of Solaris. Oracle had market demands that required timely completion of its FIPS validation, so a business decision was made to lower the bar. We've pointed out this contradiction (as I'm sure other vendors have), but as of this writing the problem has still not been resolved by NIST.

Oracle is committed to working with NIAP and NIST to make both Common Criteria and FIPS 140-2 as broadly accepted as possible. U.S. and Canadian government customers want to buy products that meet the high standards defined by FIPS 140-2, standards that many other security-conscious customers also recognize. Along with many other vendors of crypto modules, Oracle is very keen to see NIST's queue of work brought down to a level that allows the longest possible shelf life for our validations. We would also like to see NIST and NIAP work together to align their policies and interpretations of the standards to provide a greater selection of strong cryptography.
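The algorithm-correctness testing mentioned earlier--labs verifying that a module implements the approved algorithms correctly--is done against published test vectors. As a rough illustration only (this is a simplified sketch in the spirit of a known-answer test, not part of any official CMVP or lab tooling), here is what such a check looks like for SHA-256; the expected digest is the well-known FIPS 180 test vector for the string "abc":

```python
import hashlib

# Known-answer test (KAT) vector: the published SHA-256 digest of the
# ASCII string "abc" (from the FIPS 180 example vectors).
KNOWN_ANSWERS = {
    b"abc": "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
}

def run_sha256_kat() -> bool:
    """Hash each fixed input and compare against the published digest.

    Returns True only if every known-answer check passes.
    """
    for message, expected in KNOWN_ANSWERS.items():
        if hashlib.sha256(message).hexdigest() != expected:
            return False
    return True

if __name__ == "__main__":
    print("SHA-256 KAT passed" if run_sha256_kat() else "SHA-256 KAT FAILED")
```

A real validation exercises far more than this (many vectors per algorithm, boundary conditions, error states), but the principle is the same: the implementation must reproduce answers that are known in advance.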
These obstacles are currently reducing the choice of FIPS 140-2 validated products. If they are removed, a larger number of FIPS 140-2 validated crypto modules will be available for purchase. Increasing the choice of validated products for the U.S. and Canadian governments can only contribute to improved national security.

---

[1] According to Webopedia, "Cryptography is the art of protecting information by transforming it (encrypting it) into an unreadable format, called cipher text. Only those who possess a secret key can decipher (or decrypt) the message into plain text. Encrypted messages can sometimes be broken by cryptanalysis, also called code breaking, although modern cryptography techniques are virtually unbreakable."

[2] "FIPS 140 Demystified, An Introductory Guide for Developers," by Wesley Higaki and Ray Potter, ISBN-13: 978-1460990391, 2011.

[3] FIPS 140-3 has been in draft form for many years. NIST has been pulling key elements of the new standard and adding them to FIPS 140-2 as "Implementation Guidance."

[4] "Federal Information Processing Standards (FIPS) are approved by the Secretary of Commerce and issued by NIST in accordance with the Federal Information Security Management Act of 2002 (FISMA). FIPS are compulsory and binding for federal agencies. FISMA requires that federal agencies comply with these standards, and therefore, agencies may not waive their use."

[5] As noted in the summary of the recent first FIPS Conference, "...The current length of The Queue means that it can take developers many months to get their modules validated and hence available for procurement from federal agencies. In an increasing number of cases, products are obsolete or un-supported by the time the validation is finally documented. We heard how the unpredictability of The Queue is a problem too, since it greatly affects how developers can perform their marketing, sales and project planning."
[6] For example, http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf: "Security Level 3 allows the software and firmware components of a cryptographic module to be executed on a general purpose computing system using an operating system that meets the functional requirements specified in the PPs listed in Annex B with the additional functional requirement of a Trusted Path (FTP_TRP.1) and is evaluated at the CC evaluation assurance level EAL3 (or higher)."

[7] There are four security levels in FIPS 140-2. At Level 1 all components must be "production grade"; Level 2 adds requirements for physical tamper-evidence and role-based authentication; Levels 3 and 4 are typically for hardware. -- http://en.wikipedia.org/wiki/FIPS_140

Hello, Oracle blog reader! My name is Joshua Brickman and I run Oracle's Security Evaluations team (SECEVAL). At SECEVAL we are charged with shepherding certain Oracle products through...