
Corporate Security Blog

Recent Posts

Industry Insights

Security Sessions at Oracle OpenWorld

A number of OpenWorld sessions will be dedicated to security, on topics including Database Security, Identity and Access Management, Oracle Cloud Infrastructure Security, and Application Security. The corporate security teams will also be featured in a number of these sessions, including:

Cooperation in a Cause (PAN4753)
There are no silos in security: no person or team can do it all. At Oracle, corporate security includes multiple teams with different remits in security governance, and they all cooperate successfully across organizational boundaries to improve the security of Oracle’s corporate infrastructure. The benefits include cross-pollination, a wider perspective on security, broader access to skilled resources, and less compliance answer shopping. It also enables streamlined security to-dos, faster security compliance, and fewer crossed signals. During this session, hear from corporate security oversight leaders about how to integrate compliance efforts for better results with less churn. This session will be presented by Paul Andres, Corporate Security Architect; Steve Deitrick, VP Global Information Security; and Mary Ann Davidson, Chief Security Officer. It will take place on Tuesday, September 17, 03:15 PM - 04:00 PM at Moscone South, Room 203.

Secure Development in a DevOps and Cloud World (TIP4791)
DevOps is an approach to software development and deployment that encourages rapid and flexible response to change. Many cloud providers are proponents of DevOps. At the same time, development organizations often leverage cloud services to mirror the flexibility of DevOps software development and deployment within their IT infrastructure. Does the rapid and flexible nature of DevOps negatively impact the security of the deliverables? In this session, see how secure development and deployment practices can be performed in a DevOps and cloud world. This session will be presented by John Heimann, Vice President, Security Programs, and Uppili Srinivasan, Chief Security Architect, Oracle SaaS. It will take place on Tuesday, September 17, 01:45 PM - 02:30 PM at Moscone South, Room 152 B.

Key Steps to Securing Business-Critical Applications (CON5918)
Business applications are increasingly targeted by malicious attackers who seek to steal the “crown jewels” of an organization. In this joint session, security experts from Onapsis and Oracle explain how to secure your environment and offer actionable hardening recommendations. They share common security audit findings and provide tips and techniques for developing an effective security patching strategy. This session will be presented by Bruce Lowenthal, Senior Director, Security Alerts Group, Oracle, and Juan Perez-Etchegoyen, CTO, Onapsis Inc. It will take place on Tuesday, September 17, 01:45 PM - 02:30 PM at Moscone West, Room 3007 A.

Understanding Security Advisories and Identifying Mitigating Controls (TIP4701)
Oracle periodically receives reports of attempts to maliciously exploit vulnerabilities for which Oracle has already released patches. In some instances, attackers have been successful because targeted customers had failed to apply the available updates. Oracle recommends that customers remain on actively supported versions and apply Critical Patch Update fixes without delay. However, patching activities often conflict with production requirements. This session provides the knowledge necessary to analyze Oracle security advisories so you can accurately prioritize the application of security updates and determine the adequacy of mitigating controls in your environment. This session will be presented by Reshma Banerjee, Director, Security Alerts, Oracle. It will take place on Wednesday, September 18, 03:45 PM - 04:30 PM at Moscone South, Room 159 A.
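
As a hypothetical illustration of the kind of prioritization TIP4701 covers, the sketch below ranks advisory entries by CVSS Base Score and by whether they are remotely exploitable without authentication. The CVE numbers and field names are invented for this example; real decisions should be based on the actual Critical Patch Update advisories and your own environment.

    # Illustrative only: rank hypothetical advisory entries for patching priority.
    # The CVE numbers and fields below are made up for this sketch.
    from dataclasses import dataclass

    @dataclass
    class AdvisoryEntry:
        cve: str
        cvss_base: float       # CVSS v3 Base Score listed in the advisory
        remote_no_auth: bool   # remotely exploitable without authentication

    entries = [
        AdvisoryEntry("CVE-0000-0001", 9.8, True),
        AdvisoryEntry("CVE-0000-0002", 6.5, False),
        AdvisoryEntry("CVE-0000-0003", 7.1, True),
    ]

    # Patch remotely exploitable, high-severity issues first.
    for e in sorted(entries, key=lambda e: (e.remote_no_auth, e.cvss_base), reverse=True):
        print(f"{e.cve}: CVSS {e.cvss_base}, remote/no-auth: {e.remote_no_auth}")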

Security Updates

RAMBleed DRAM Vulnerabilities

On June 11th, security researchers published a paper titled “RAMBleed: Reading Bits in Memory without Accessing Them.” This paper describes attacks against Dynamic Random Access Memory (DRAM) modules that are already susceptible to Rowhammer-style attacks. The new attack methods described in this paper are not microprocessor-specific; they leverage known issues in DRAM memory. These attacks impact only DDR4 and DDR3 memory modules; older-generation DDR2 and DDR1 memory modules are not vulnerable to these attacks.

While the RAMBleed issues leverage Rowhammer, RAMBleed is different in that the confidentiality of data may be compromised: RAMBleed uses Rowhammer as a side channel to discover the values of adjacent memory. Please note that successfully leveraging RAMBleed exploits requires that the malicious attacker be able to locally execute malicious code against the targeted system.

At this point in time, Oracle believes that:

All current and many older families of Oracle x86 (X5, X6, X7, X8, E1) and Oracle SPARC servers (S7, T7, T8, M7, M8) employing DDR4 DIMMs are not expected to be impacted by RAMBleed. This is because Oracle only employs DDR4 DIMMs that have implemented the Target Row Refresh (TRR) defense mechanism against Rowhammer. Oracle’s memory suppliers have stated that these implementations have been designed to be effective against Rowhammer.

Older systems making use of DDR3 memory are also not expected to be impacted by RAMBleed because they make use of a combination of other Rowhammer mitigations (e.g., pseudo-TRR and increased DIMM refresh rates, in addition to Error-Correcting Code (ECC)). Oracle is currently not aware of any research indicating that the combination of these mechanisms would not be effective against RAMBleed.

Oracle Cloud Infrastructure (OCI) is not impacted by the RAMBleed issues because OCI servers only use DDR4 memory with the built-in defenses described above.

Exadata Engineered Systems use DDR4 memory (X5 family and newer) and DDR3 memory (X4 family and older).

Finally, Oracle does not believe that additional software patches will need to be produced to address the RAMBleed issues, as these memory issues can only be addressed through hardware configuration changes. In other words, no additional security patches are expected for Oracle product distributions.

For more information about Oracle Corporate Security Practices, see https://www.oracle.com/corporate/security-practices/
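
Because exposure depends on the DIMM generation in use, administrators who want to confirm which memory technology a given Linux server contains can query its DMI tables. This is a minimal sketch, assuming the dmidecode utility is installed and the script runs with root privileges; it simply reports the memory types it finds and is not an official assessment tool.

    # Minimal sketch: report installed DIMM types (e.g., DDR3 vs. DDR4) on Linux.
    # Assumes dmidecode is installed and the script is run as root.
    import subprocess

    def installed_memory_types():
        out = subprocess.run(["dmidecode", "--type", "memory"],
                             capture_output=True, text=True, check=True).stdout
        types = set()
        for line in out.splitlines():
            line = line.strip()
            if line.startswith("Type:") and "Unknown" not in line:
                types.add(line.split(":", 1)[1].strip())
        return types

    if __name__ == "__main__":
        found = installed_memory_types()
        print("DIMM types found:", ", ".join(sorted(found)) if found else "none")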

Industry Insights

Securing the Oracle Cloud

Greetings from sunny Seattle! My name is Eran Feigenbaum and I am the Chief Information Security Officer for the Oracle Cloud. Oracle Cloud Infrastructure (OCI) is what we call a Gen2 cloud: a fundamentally redesigned public cloud, architected for better customer isolation and enterprise application performance than the cloud designs of ten years past. OCI is the platform for Autonomous Data Warehouse and Autonomous Transaction Processing and, in short order, for all Oracle applications (see Oracle CEO Mark Hurd on moving NetSuite to the Oracle Cloud).

This is my inaugural post on our relaunched corporate security blog (thank you, Mary Ann) and I’m thrilled to begin a substantive discussion with you about public cloud security. But first things first: in this post I will describe how my group is organized and how it functions to protect the infrastructure for the thousands of applications and services moving to, and continuously being developed on, OCI.

My journey to Oracle was paved by more than two decades of experience in security. I was lucky to experience the cloud evolution from all sides in my various roles as pen tester, architect, cloud provider, and cloud customer. Certainly, the core set of learnings came from nearly a decade of leading security for what is now Google Cloud. This was during a time when cloud business models were very much in their infancy, as were the protection mechanisms for customer isolation. Later, I would understand the challenges differently as the CISO of an e-commerce venture. Jet.com was a cloud-native business, so while we had no physical data centers, I understood well the limitations of first-generation cloud designs in dealing with cloud-borne threats and data protection requirements.

So, when it came to joining OCI, the decision was an easy one. In its Gen2 offering, I saw that Oracle was building the future of enterprise cloud: a place where “enterprise-grade” had meaningful payoff in architecture choices like isolated network virtualization to control threat proliferation, and where, as importantly, DevSecOps was foundational to OCI rather than a transformation challenge. What security leader would not want to be a part of that?

OCI distinguishes itself among cloud providers for its predictable performance and security-first design, so most of our customers are organizations with high sensitivity to data and information protection. They are building high performance computing applications, and that includes our Oracle internal customers, so security must be continuous, ubiquitous, agile, and above all scalable. By extension, then, the OCI Security Group is in many ways the modern Security Operations Center (SOC). Our job is to enable the continuous integration and continuous deployment (CI/CD) pipeline. In building the team, I aimed at three main goals: 1) build a complete organization that could address not only detection and response but proactively ensure the security of services developed and deployed on OCI, 2) create a culture and operating practice of frequent communication and metrics sharing among teams to ensure continuous goal evaluation, and 3) align with the practices that Oracle’s corporate security teams had set and refined over four decades of protecting customers’ most sensitive data. To that end, the Chief Security Office at Oracle Cloud Infrastructure (OCI) consists of six teams.
Across these six teams, the OCI Security Group provides a comprehensive and proactive set of security services, technologies, guidance, and processes that ensure a good security posture and address security risks.

Security Assurance: Works collaboratively with security teams and stakeholders throughout Oracle to drive the development and deployment of security controls, technologies, processes, and guidance for those building on OCI.

Product Security: Examines and evolves the OCI architecture, both hardware and software/services, to ensure we are taking advantage of innovations and making changes that enhance our security posture.

Offensive Security: Understands and emulates the methods of bad actors. Some of the work involves research, penetration testing, and simulating advanced threats against our hardware and software. All of it is about strengthening our architecture and defensive capability.

Defensive Security: The first responders of cloud security. They work proactively to spot weaknesses and, in the event of incidents, work to remediate them within the shortest possible window.

Security Automation Services: We know that automation is fundamental to scaling, but it is also key to shortening detection and response time. This team aggregates and correlates information about risks and methods to develop visualizations and tools that expedite risk reduction.

Security Go-To-Market: One of the most common requests of me is to share information about our security architecture, methods, tooling, and best practices. Our internal and external customers want reference architectures and information on how to benefit from our experience. Having this function as part of the group gives the team access to ground truth and aligns with a core value to “put customers first.”

While the team organization is set up for completeness of function in service to the CI/CD pipeline, the key to achieving continuous security and security improvement is how well all members operate as a unit. I think of each team as being essential to the others. Each area generates intelligence that informs the other units and propels them in a kind of virtuous cycle, with security automation enabling accelerated revolutions through this cycle.

As Oracle engineers, for instance, plan for the re-homing or development of new applications and services on OCI, our security architecture team works with them. Throughout the drawing-board and design phases, we advise on best practices, compliance considerations, tooling, and what the process for continuous security will look like during the integration and deployment phases. Security assurance personnel, experts in code review best practices, give guidance and create awareness about the benefits of a security mindset for code development. At implementation and execution time, the offensive security team conducts tests looking for weaknesses and vulnerabilities, which are surfaced both to the development teams and to our defensive security teams for near-term and long-term strategic remediation. This process is continuous, as changes and updates can quickly alter the security posture of an environment or an application, so our aim is rapid response and, most importantly, refining the practices and processes that will reduce the risk from those same vulnerabilities for the long term.
The latter includes continuous security awareness training, so that a security mindset is the cultural norm even as we scale and grow at a rapid pace. Agility and scale in security are imperative for a cloud operator, especially one of Oracle’s size and scope, which attracts the most security-sensitive businesses, governments, and organizations. Our approach to security automation applies to nearly every activity and process of OCI security: we look for what can be replicated and actioned either without human intervention or through self-service mechanisms. Automation provides innovations and tooling that help not only our OCI Security Group but also internal security stakeholders and even customers. Through visibility and self-service mechanisms, we make developers and service owners part of the OCI security mission and consequently improve our ability to maintain consistent security.

I mentioned at the beginning of this post that key to security effectiveness is not only an organizational structure built for the modern cloud but also security functional areas that are interdependent and in constant communication. One of the best ways I have found to do this in my career managing large teams is the Objectives and Key Results (OKR) process. Similar to Key Performance Indicators (KPIs), OKRs enable measurement of success or failure, but unlike KPIs, OKRs encourage leaders, teams, and contributors to make big bets and stretch beyond what seems merely achievable toward what can be revolutionary. In his seminal book Measure What Matters (which I talk about to anyone who will listen), John Doerr outlines the structure by which agile enterprises stay aligned to mission even as they adjust to account for changes in business conditions. The key results confirm whether the direction is correct or needs adjusting. The teams of the OCI Security Group stay aligned and informed by one another through the OKR system. The focus on cross-communication, deduplication, and realignment gives us visibility into incremental improvements and successes.

With this description of the OCI Security Group, I’ve given you some insight into how we secure the industry’s most technically advanced public cloud. Over the next months, I am eager to delve deeper into the architecture choices and innovations that set us apart. Let the journey of getting to know OCI security begin!

Security Updates

Intel Processor MDS Vulnerabilities: CVE-2019-11091, CVE-2018-12126, CVE-2018-12130, and CVE-2018-12127

Today, Intel disclosed a new set of speculative execution side channel vulnerabilities, collectively referred to as “Microarchitectural Data Sampling” (MDS). These vulnerabilities affect a number of Intel processors and have received four distinct CVE identifiers to reflect how they impact the different microarchitectural structures of the affected Intel processors:

CVE-2019-11091: Microarchitectural Data Sampling Uncacheable Memory (MDSUM)
CVE-2018-12126: Microarchitectural Store Buffer Data Sampling (MSBDS)
CVE-2018-12127: Microarchitectural Load Port Data Sampling (MLPDS)
CVE-2018-12130: Microarchitectural Fill Buffer Data Sampling (MFBDS)

While vulnerability CVE-2019-11091 has received a CVSS Base Score of 3.8, the other vulnerabilities have all been rated with a CVSS Base Score of 6.5. As a result of the flaw in the architecture of these processors, an attacker who can execute malicious code locally on an affected system can compromise the confidentiality of data previously handled on the same thread, or compromise the confidentiality of data from other hyperthreads on the same processor as the thread where the malicious code executes. Consequently, the MDS vulnerabilities are not directly exploitable against servers that do not allow the execution of untrusted code.

These vulnerabilities are collectively referred to as Microarchitectural Data Sampling (MDS) issues because they relate to microarchitectural structures of the Intel processors other than the level 1 data cache. The affected structures in the affected Intel processors are uncacheable memory (on some microprocessors utilizing speculative execution), the store buffers (temporary buffers that hold store addresses and data), the fill buffers (temporary buffers between CPU caches), and the load ports (temporary buffers used when loading data into registers). MDS issues are therefore distinct from the previously disclosed Rogue Data Cache Load (RDCL) and L1 Terminal Fault (L1TF) issues.

Effectively mitigating these MDS vulnerabilities requires updates to operating systems and virtualization software in addition to updated Intel CPU microcode. While Oracle has not yet received reports of successful exploitation of these issues “in the wild,” Oracle has worked with Intel and other industry partners to develop technical mitigations against these issues. In response to these MDS issues:

Oracle Hardware: Oracle recommends that administrators of x86-based systems carefully assess the impact of the MDS flaws for their systems and implement the appropriate security mitigations. Oracle will provide specific guidance for Oracle Engineered Systems. Oracle has determined that Oracle SPARC servers are not affected by these MDS vulnerabilities.

Oracle Operating Systems (Linux and Solaris) and Virtualization: Oracle has released security patches for Oracle Linux 7, Oracle Linux 6, and Oracle VM Server for x86 products. In addition to OS patches, customers should run the current version of the Intel microcode to mitigate these issues. In certain instances, Oracle Linux customers can take advantage of Oracle Ksplice to apply these updates without needing to reboot their systems. Oracle has determined that Oracle Solaris on x86 is affected by these vulnerabilities; customers should refer to Doc ID 2540621.1 for additional information. Oracle has determined that Oracle Solaris on SPARC is not affected by these MDS vulnerabilities.
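
On Linux kernels that include the MDS mitigations, administrators can check what the kernel itself reports for these issues through sysfs. This is a minimal sketch, assuming a kernel recent enough to expose the mds entry under /sys/devices/system/cpu/vulnerabilities:

    # Minimal sketch: print the kernel's reported MDS mitigation status on Linux.
    # Assumes a kernel recent enough to expose the 'mds' sysfs entry.
    from pathlib import Path

    mds = Path("/sys/devices/system/cpu/vulnerabilities/mds")
    if mds.exists():
        # Typical values include "Not affected", "Vulnerable", or
        # "Mitigation: Clear CPU buffers; SMT vulnerable".
        print("MDS status:", mds.read_text().strip())
    else:
        print("This kernel does not report MDS status; check vendor guidance.")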

Oracle Cloud: The Oracle Cloud Security and DevOps teams continue to work in collaboration with our industry partners on implementing mitigations for these MDS vulnerabilities that are designed to protect customer instances and data across all Oracle Cloud offerings: Oracle Cloud (IaaS, PaaS, SaaS), Oracle NetSuite, Oracle GBU Cloud Services, Oracle Data Cloud, and Oracle Managed Cloud Services. Oracle will inform Cloud customers through the normal maintenance notification mechanisms about required maintenance activities as additional mitigating controls continue to be implemented in response to the MDS vulnerabilities.

Oracle has determined that the MDS vulnerabilities will not impact a number of Oracle's cloud services. They include the Autonomous Data Warehouse service, which provides a fully managed database optimized for running data warehouse workloads, and the Oracle Autonomous Transaction Processing service, which provides a fully managed database service optimized for running online transaction processing and mixed database workloads. No further action is required by customers of these services, as both were found to require no additional mitigating controls based on service design to prevent the exploitation of the MDS vulnerabilities.

Bare metal instances in Oracle Cloud Infrastructure (OCI) Compute offer full control of a physical server and require no additional Oracle code to run. By design, the bare metal instances are isolated from other customer instances on the OCI network, whether those are virtual machines or bare metal. However, for customers running their own virtualization stack on bare metal instances, the MDS vulnerabilities could allow a virtual machine to access privileged information from the underlying hypervisor or other VMs on the same bare metal instance. These customers should review the Intel recommendations about the MDS vulnerabilities and make the recommended changes to their configurations.

As previously anticipated, Oracle expects that new techniques leveraging speculative execution flaws in processors will continue to be disclosed. These issues are likely to continue to impact primarily operating systems and virtualization platforms, and addressing them will likely continue to require both software and microcode updates. Oracle therefore recommends that customers remain on current security release levels, including firmware, applicable microcode updates (delivered as firmware or OS patches), and software upgrades.

For more information:
Oracle Linux customers can refer to the bulletins located at https://linux.oracle.com/cve/CVE-2019-11091.html, https://linux.oracle.com/cve/CVE-2018-12126.html, https://linux.oracle.com/cve/CVE-2018-12130.html, and https://linux.oracle.com/cve/CVE-2018-12127.html
For information about the availability of Intel microcode for Oracle hardware, see “Intel MDS Vulnerabilities (CVE-2019-11091, CVE-2018-12126, CVE-2018-12130, and CVE-2018-12127): Intel Processor Microcode Availability” (Doc ID 2540606.1) and “Intel MDS (CVE-2019-11091, CVE-2018-12126, CVE-2018-12130 and CVE-2018-12127) Vulnerabilities in Oracle x86 Servers” (Doc ID 2540621.1)
Oracle Solaris customers should refer to “Intel MDS Vulnerabilities (CVE-2019-11091, CVE-2018-12126, CVE-2018-12130, and CVE-2018-12127): Oracle Solaris Impact” (Doc ID 2540522.1)
Oracle Cloud Infrastructure (OCI) customers should refer to https://docs.cloud.oracle.com/iaas/Content/Security/Reference/MDS_response.htm

Critical Patch Updates

Security Alert CVE-2019-2725 Released

Oracle has just released Security Alert CVE-2019-2725. This Security Alert was released in response to a recently-disclosed vulnerability affecting Oracle WebLogic Server. This vulnerability affects a number of versions of Oracle WebLogic Server and has received a CVSS Base Score of 9.8. WebLogic Server customers should refer to the Security Alert Advisory for information on affected versions and how to obtain the required patches.

Please note that vulnerability CVE-2019-2725 has been associated in press reports with vulnerabilities CVE-2018-2628, CVE-2018-2893, and CVE-2017-10271. Those vulnerabilities were addressed in patches released in previous Critical Patch Updates. Due to the severity of this vulnerability, Oracle recommends that this Security Alert be applied as soon as possible.

For more information:
The Security Alert advisory is located at https://www.oracle.com/technetwork/security-advisory/alert-cve-2019-2725-5466295.html
The October 2017 Critical Patch Update advisory is located at https://www.oracle.com/technetwork/topics/security/cpuoct2017-3236626.html
The April 2018 Critical Patch Update advisory is located at https://www.oracle.com/technetwork/security-advisory/cpuapr2018-3678067.html
The July 2018 Critical Patch Update advisory is located at https://www.oracle.com/technetwork/security-advisory/cpujul2018-4258247.html

Industry Insights

Oracle Linux certified under Common Criteria and FIPS 140-2

Oracle Linux 7 has just received both a Common Criteria (CC) certification, performed against the National Information Assurance Partnership (NIAP) General Purpose Operating System Protection Profile (OSPP) v4.1, and a FIPS 140-2 validation of its cryptographic modules. Oracle Linux is currently one of only two operating systems, and the only Linux distribution, on the NIAP Product Compliant List. U.S. Federal procurement policy requires IT products sold to the Department of Defense (DoD) to be on this list; therefore, Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system that also includes FIPS 140-2 validated cryptographic modules by making Oracle Linux 7 the platform for their cloud services solution.

Common Criteria Certification for Oracle Linux 7

The National Information Assurance Partnership (NIAP) is “responsible for U.S. implementation of the Common Criteria, including management of the NIAP Common Criteria Evaluation and Validation Scheme (CCEVS) validation body.” (See About NIAP at https://www.niap-ccevs.org/) The Operating Systems Protection Profile (OSPP) series are the only NIAP-approved Protection Profiles for operating systems. “A Protection Profile is an implementation-independent set of security requirements and test activities for a particular technology that enables achievable, repeatable, and testable (CC) evaluations.” Protection Profiles are intended to “accurately describe the security functionality of the systems being certified in terms of [CC] and to define functional and assurance requirements for such products.” In other words, the OSPP enables organizations to make an accurate comparison of operating system security functions. (For both quotations, see NIAP Frequently Asked Questions (FAQ) at https://www.niap-ccevs.org/Ref/FAQ.cfm)

In addition, products that certify against these Protection Profiles can also help you meet certain US government procurement rules. As set forth in the Committee on National Security Systems Policy (CNSSP) #11, National Policy Governing the Acquisition of Information Assurance (IA) and IA-Enabled Information Technology Products (published in June 2013), “All [common off-the-shelf] COTS IA and IA-enabled IT products acquired for use to protect information on NSS shall comply with the requirements of the NIAP program in accordance with NSA-approved processes.”

Oracle Linux is now the only Linux distribution on the NIAP Product Compliant List, and one of only two operating systems on the list. You may recall that Linux distributions (including Oracle Linux) have previously completed Common Criteria evaluations, mostly against a German standard protection profile; those evaluations now have limited reach because they are only officially recognized in Germany and within the European SOG-IS agreement. Furthermore, the revised Common Criteria Recognition Arrangement (CCRA) announcement on the CCRA News Page, dated September 8th, 2014, states that “After September 8th 2017, mutually recognized certificates will either require protection profile-based evaluations or claim conformance to evaluation assurance levels 1 through 2 in accordance with the new CCRA.” That means evaluations conducted within the CCRA acceptance rules, such as the Oracle Linux 7.3 evaluation, are globally recognized in the 30 countries that have signed the CCRA. As a result, Oracle Linux 7.3 is the only Linux distribution that meets current US procurement rules.
It is important to recognize that the exact status of operating system certifications under the NIAP OSPP has significant implications for the use of cloud services by U.S. government agencies. The Federal Risk and Authorization Management Program (FedRAMP) website states that it is a “government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.” For both FedRAMP Moderate and High, the SA-4 guidance states: “The use of Common Criteria (ISO/IEC 15408) evaluated products is strongly preferred.”

FIPS 140-2 Level 1 Validation for Oracle Linux 6 and 7

In addition to the Common Criteria certification, Oracle Linux cryptographic modules are now also FIPS 140-2 validated. FIPS 140-2 is a prerequisite for NIAP Common Criteria evaluations: “All cryptography in the TOE for which NIST provides validation testing of FIPS-approved and NIST-recommended cryptographic algorithms and their individual components must be NIST validated (CAVP and/or CMVP). At a minimum an appropriate NIST CAVP certificate is required before a NIAP CC Certificate will be awarded.” (See NIAP Policy Letter #5, June 25, 2018, at https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/policy-ltr-5-update3.pdf)

FIPS is also a mandatory standard for all cryptographic modules used by the US government: “This standard is applicable to all Federal agencies that use cryptographic-based security systems to protect sensitive information in computer and telecommunication systems (including voice systems) as defined in Section 5131 of the Information Technology Management Reform Act of 1996, Public Law 104-106.” (See Cryptographic Module Validation Program; What Is The Applicability Of CMVP To The US Government? at https://csrc.nist.gov/projects/cryptographic-module-validation-program)

Finally, FIPS is required for any cryptography that is part of a FedRAMP-certified cloud service: “For data flows crossing the authorization boundary or anywhere else encryption is required, FIPS 140 compliant/validated cryptography must be employed. FIPS 140 compliant/validated products will have certificate numbers. These certificate numbers will be required to be identified in the SSP as a demonstration of this capability. JAB TRs will not authorize a cloud service that does not have this capability.” (See FedRAMP Tips & Cues Compilation, January 2018, at https://www.fedramp.gov/assets/resources/documents/FedRAMP_Tips_and_Cues.pdf)

Oracle includes FIPS 140-2 Level 1 validated cryptography in Oracle Linux 6 and Oracle Linux 7 on x86-64 systems with the Unbreakable Enterprise Kernel and the Red Hat Compatible Kernel. The platforms used for FIPS 140 validation testing include Oracle Server X6-2 and Oracle Server X7-2, running Oracle Linux 6.9 and 7.3. Oracle “vendor affirms” that the FIPS validation is maintained on other x86-64 equivalent hardware that has been qualified in its Oracle Linux Hardware Certification List (HCL), on the corresponding Oracle Linux releases. Oracle Linux cryptographic modules enable FIPS 140-compliant operations for key use cases such as data protection and integrity, remote administration (SSH, HTTPS TLS, SNMP, and IPSEC), cryptographic key generation, and key/certificate management.
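
For deployments that must run in FIPS mode, a quick sanity check is to confirm that the Linux kernel reports FIPS mode as enabled. This is a minimal sketch; it checks only the kernel-level flag, and application-level use of the validated modules must be verified separately.

    # Minimal sketch: check whether the Linux kernel reports FIPS mode as enabled.
    # Note: this verifies only the kernel flag; application configuration
    # (e.g., which crypto libraries and modules are in use) must be checked separately.
    from pathlib import Path

    flag = Path("/proc/sys/crypto/fips_enabled")
    enabled = flag.exists() and flag.read_text().strip() == "1"
    print("Kernel FIPS mode enabled:", enabled)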

Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system (that also includes FIPS 140-2 validated cryptographic modules) by making Oracle Linux 7 the bedrock of their cloud services solution. Oracle Linux is engineered for open cloud infrastructure. It delivers leading performance, scalability, reliability, and security for enterprise SaaS and PaaS workloads as well as traditional enterprise applications. Oracle Linux Support offers access to award-winning Oracle support resources and Linux support specialists, zero-downtime updates using Ksplice, additional management tools such as Oracle Enterprise Manager, and lifetime support, all at a low cost. For a matrix of Oracle security evaluations currently in progress as well as those completed, please refer to the Oracle Security Evaluations. Visit Oracle Linux Security to learn how Oracle Linux can help keep your systems secure and improve the speed and stability of your operations.

Industry Insights

Welcome to Oracle’s corporate security blog

Hi, all! My name is Mary Ann Davidson and I am the Chief Security Officer for Oracle. I’m the first contributor to our relaunched corporate security blog. Having many different security voices contribute to this blog will help our customers understand the breadth of security at Oracle, across multiple organizations and multiple lines of business. Security at Oracle is way too big (and too important) to be constrained to one person or even one organization. This blog entry will describe how security is organized at Oracle and what my organization does specifically.

When I joined Oracle (and before I started working in security), we were just beginning to build “Oracle Financials” (at the time, general ledger, purchasing, and payables applications), which has long since expanded to a huge portfolio of business applications. Since then, we’ve continued to grow: more business applications, middleware, operating systems, engineered systems, industry solutions (e.g., construction, retail, hospitality, financial services) and, of course, many clouds (infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), business applications we run for our customers). Plus databases, of course!

The diversity of our product and service portfolio has significant implications from a security perspective. The first is pretty obvious: nobody can be an expert on absolutely everything in security (and even if one person were an expert on everything, there isn’t enough time for one person to be responsible for securing absolutely everything, everywhere in Oracle). The second is also obvious: security must be a cultural value, because you can never hire enough security experts to look over the shoulders of everyone else to make sure they are doing whatever they do securely. As a result, Oracle has adopted a decentralized security model, albeit with corporate security oversight: “trust, but verify.”

With regard to our core business, security expertise remains in development. By that I mean that development organizations are responsible for the security-worthiness of what they design and build, and in particular, security has to be “built in, not bolted on,” since security doesn’t work well (if at all) as an afterthought. (As I used to say when I worked in construction management in the US Navy, “You can’t add rebar after the concrete has set.”)

Security oversight falls under the following main groups at Oracle: Global Physical Security (facility security, investigations, executive protection, etc.), Global Information Security (the “what” we do as a company in terms of corporate security policies, including compliance, forensic investigations, etc.), Corporate Security Architecture (review and approval prior to systems going live to ensure they are securely architected), and Global Product Security, which is my team. I mentioned I am the CSO for Oracle, but really that would be better categorized as “Chief Security Assurance Officer.” What does assurance at Oracle encompass? In essence, that everything we build – hardware and software products, services, and consulting engagements – has security built in and maintains a lifecycle of security.
To do that, my team has developed an extensive program, covering everything from “what” we do to “how” we do it, including verifying that “we did what we are supposed to do.” The “what” includes secure coding and secure development standards, covering not only “don’t do X,” but “here’s how to do Y.” We train many people in development organizations on these standards (not only developers but also quality assurance (QA) people and doc writers, some of whom write code samples that we obviously want to reflect secure coding practice). We have more extensive, tailored training tracks as well. Our secure development requirements also include architectural risk analysis (ARA), since people building systems (or even features within systems) need to think about the type of threats the system will be subjected to and design with those threats in mind. These programs and activities are collectively known as Oracle Software Security Assurance (OSSA).

One of the ways we decentralize security is by identifying and appointing people in development and consulting organizations to be our “security boots on the ground.” Specifically, we have around 60 senior Security Leads and over 1,700 Security Points Of Contact (SPOCs) who implement Oracle Software Security Assurance programs across a multiplicity of development organizations and consulting.

Development teams are required to use various security analysis and testing tools, including both static and dynamic analysis, to triage the security bugs found and to fix the worst issues fastest. We use a lot of different tools to do this, since no one tool works equally well for all types of code. We also build tools in-house to help us find security problems (e.g., a static analysis tool called Parfait, built by Oracle Labs, which we optimize for use within Oracle). Other tools are developed by the ethical hacking team (EHT), e.g., the wonderfully named SQL*Splat, which fuzzes PL/SQL code. The EHT’s job is to attempt to break our products and services before “real” bad guys do, and in particular to capture larger lessons learned from the results of the EHT’s work, so we can share those observations (e.g., via a new coding standard or an automated tool) across multiple teams in development. I’m also pleased to note that the EHT’s skills are so popular that a number of development groups in Oracle have stood up their own EHTs.

My team also includes people who manage security vulnerabilities: the SecAlert team, who manage the release of our quarterly Critical Patch Updates (CPUs) and the Security Alert program, as well as engaging with the security researcher community. Lastly, we have a team of security evaluators who take selected products and services through international Common Criteria (ISO/IEC 15408) and U.S. Federal Information Processing Standard (FIPS) 140 certifications: another way we “trust, but verify.”

Security assurance is not only increasingly important – think of the bazillions of Internet of Things devices as people insist on implanting sensors in absolutely everything – but also increasingly asked about by customers who want to know “how did you build – and manage – this product or service?” That is another reason we make sure we can measure what teams across the company are doing (or not doing) in assurance, and help uplift those who need to do better. In the future, we will be publishing more blog entries to discuss the respective roles of security oversight teams, as well as the security work of operational and development teams.
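
The post mentions in-house tooling such as the Parfait static analyzer and the SQL*Splat fuzzer. As a generic illustration of what fuzz testing means (not a description of Oracle's internal tools), the sketch below feeds randomly mutated inputs to a toy parsing routine and records any input that triggers an unexpected failure:

    # Generic illustration of fuzz testing; not SQL*Splat or any other Oracle tool.
    # A fuzzer feeds mutated inputs to a target and records inputs that crash it.
    import random
    import string

    def target_parser(text: str) -> int:
        """Toy stand-in for code under test; contains a planted bug for inputs with '!'."""
        if "!" in text:
            raise RuntimeError("planted bug")
        return sum(int(field) for field in text.split(",") if field.strip())

    def mutate(seed: str) -> str:
        chars = list(seed)
        for _ in range(random.randint(1, 3)):
            chars[random.randrange(len(chars))] = random.choice(string.printable)
        return "".join(chars)

    findings = []
    for _ in range(1000):
        candidate = mutate("1,2,3,4")
        try:
            target_parser(candidate)
        except ValueError:
            pass  # malformed input correctly rejected; not a finding
        except Exception as exc:  # anything else is a bug worth triaging
            findings.append((candidate, repr(exc)))

    print(f"{len(findings)} unexpected failures found")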

“Many voices” will illustrate the breadth and depth of security at Oracle, and how seriously we take it. On a personal note, I look forward to reading about the great work my many valued colleagues at Oracle are doing to continue to make security rock solid and a core cultural value.

Security Updates

Intel Processor L1TF vulnerabilities: CVE-2018-3615, CVE-2018-3620, CVE-2018-3646

Today, Intel disclosed a new set of speculative execution side-channel processor vulnerabilities affecting their processors. These L1 Terminal Fault (L1TF) vulnerabilities affect a number of Intel processors, and they have received three CVE identifiers:

CVE-2018-3615 impacts Intel Software Guard Extensions (SGX) and has a CVSS Base Score of 7.9.
CVE-2018-3620 impacts operating systems and System Management Mode (SMM) running on Intel processors and has a CVSS Base Score of 7.1.
CVE-2018-3646 impacts virtualization software and Virtual Machine Monitors (VMM) running on Intel processors and has a CVSS Base Score of 7.1.

These vulnerabilities derive from a flaw in Intel processors in which operations performed by a processor using speculative execution can result in a compromise of the confidentiality of data between threads executing on a physical CPU core. As with other variants of speculative execution side-channel issues (i.e., Spectre and Meltdown), successful exploitation of the L1TF vulnerabilities requires the attacker to have the ability to run malicious code on the targeted systems. Therefore, the L1TF vulnerabilities are not directly exploitable against servers which do not allow the execution of untrusted code. While Oracle has not yet received reports of successful exploitation of this speculative execution side-channel issue “in the wild,” Oracle has worked with Intel and other industry partners to develop technical mitigations against these issues.

The technical steps Intel recommends to mitigate the L1TF vulnerabilities on affected systems include:

Ensuring that affected Intel processors are running the latest Intel processor microcode. Intel reports that the microcode update it has released for the Spectre 3a (CVE-2018-3640) and Spectre 4 (CVE-2018-3639) vulnerabilities also contains the microcode instructions which can be used to mitigate the L1TF vulnerabilities. Updated microcode by itself is not sufficient to protect against L1TF.

Applying the necessary OS and virtualization software patches on affected systems. To be effective, OS patches require the presence of the updated Intel processor microcode, because updated microcode by itself is not sufficient to protect against L1TF. Corresponding OS and virtualization software updates are also required to mitigate the L1TF vulnerabilities present in Intel processors.

Disabling Intel Hyper-Threading technology in some situations. Disabling HT alone is not sufficient for mitigating the L1TF vulnerabilities, and disabling HT will result in significant performance degradation.

In response to the various L1TF Intel processor vulnerabilities:

Oracle Hardware: Oracle recommends that administrators of x86-based systems carefully assess the L1TF threat for their systems and implement the appropriate security mitigations. Oracle will provide specific guidance for Oracle Engineered Systems. Oracle has determined that Oracle SPARC servers are not affected by the L1TF vulnerabilities. Oracle has also determined that Oracle Intel x86 servers are not impacted by vulnerability CVE-2018-3615 because the processors in use with these systems do not make use of Intel Software Guard Extensions (SGX).

Oracle Operating Systems (Linux and Solaris) and Virtualization: Oracle has released security patches for Oracle Linux 7, Oracle Linux 6, and Oracle VM Server for x86 products. In addition to OS patches, customers should run the current version of the Intel microcode to mitigate these issues.
Oracle Linux customers can take advantage of Oracle Ksplice to apply these updates without needing to reboot their systems. Oracle has determined that Oracle Solaris on x86 is not affected by vulnerabilities CVE-2018-3615 and CVE-2018-3620 regardless of the underlying Intel processor on these systems; it is, however, affected by vulnerability CVE-2018-3646 when using Kernel Zones, and the necessary patches will be provided at a later date. Oracle Solaris on SPARC is not affected by the L1TF vulnerabilities.

Oracle Cloud: The Oracle Cloud Security and DevOps teams continue to work in collaboration with our industry partners on implementing the necessary mitigations to protect customer instances and data across all Oracle Cloud offerings: Oracle Cloud (IaaS, PaaS, SaaS), Oracle NetSuite, Oracle GBU Cloud Services, Oracle Data Cloud, and Oracle Managed Cloud Services. Oracle’s first priority is to mitigate the risk of tenant-to-tenant attacks. Oracle will notify and coordinate with the affected customers for any required maintenance activities as additional mitigating controls continue to be implemented.

Oracle has determined that a number of Oracle's cloud services are not affected by the L1TF vulnerabilities. They include the Autonomous Data Warehouse service, which provides a fully managed database optimized for running data warehouse workloads, and the Oracle Autonomous Transaction Processing service, which provides a fully managed database service optimized for running online transaction processing and mixed database workloads. No further action is required by customers of these services, as both were found to require no additional mitigating controls based on service design and are not affected by the L1TF vulnerabilities (CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646).

Bare metal instances in Oracle Cloud Infrastructure (OCI) Compute offer full control of a physical server and require no additional Oracle code to run. By design, the bare metal instances are isolated from other customer instances on the OCI network, whether those are virtual machines or bare metal. However, for customers running their own virtualization stack on bare metal instances, the L1TF vulnerability could allow a virtual machine to access privileged information from the underlying hypervisor or other VMs on the same bare metal instance. These customers should review the Intel recommendations about vulnerabilities CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646, and make changes to their configurations as they deem appropriate.

Note that many industry experts anticipate that new techniques leveraging these processor flaws will continue to be disclosed for the foreseeable future. Future speculative side-channel processor vulnerabilities are likely to continue to impact primarily operating systems and virtualization platforms, and addressing them will likely require both software and microcode updates. Oracle therefore recommends that customers remain on current security release levels, including firmware, applicable microcode updates (delivered as firmware or OS patches), and software upgrades.
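
Since disabling Hyper-Threading is part of the L1TF mitigation guidance in some situations, administrators may want to confirm how the kernel reports L1TF status and whether SMT is currently active. This is a minimal sketch, assuming a Linux kernel recent enough to expose these sysfs entries:

    # Minimal sketch: report the kernel's L1TF status and whether SMT is active on Linux.
    # Assumes a kernel recent enough to expose these sysfs entries.
    from pathlib import Path

    def read_entry(path: str) -> str:
        p = Path(path)
        return p.read_text().strip() if p.exists() else "not reported by this kernel"

    print("L1TF:", read_entry("/sys/devices/system/cpu/vulnerabilities/l1tf"))
    print("SMT control:", read_entry("/sys/devices/system/cpu/smt/control"))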

For more information:
The information in this blog entry is also published as MOS Note 2434830.1: “Information about the L1TF Intel processor vulnerabilities (CVE-2018-3615, CVE-2018-3620, CVE-2018-3646)”
Oracle Linux customers can refer to the bulletins located at https://linux.oracle.com/cve/CVE-2018-3620.html and https://linux.oracle.com/cve/CVE-2018-3646.html
Oracle Solaris customers should refer to MOS Note 2434208.1: “L1 Terminal Fault (CVE-2018-3615, CVE-2018-3620, & CVE-2018-3646) Vulnerabilities” and MOS Note 2434206.1: “Disabling x86 Hyperthreading in Oracle Solaris”
Oracle x86 hardware customers should refer to MOS Note 2434171.1: “L1 Terminal Fault (CVE-2018-3620, CVE-2018-3646) Vulnerabilities on Oracle x86 Servers”
For information about the availability of Intel microcode for Oracle hardware, see MOS Note 2406316.1: “CVE-2018-3640 (Spectre v3a), CVE-2018-3639 (Spectre v4) Vulnerabilities: Intel Processor Microcode Availability”
The “Oracle Cloud Security Response to Intel L1TF Vulnerabilities” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_response.htm
The “Oracle Cloud Infrastructure Customer Advisory for L1TF Impact on the Compute Service” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_computeimpact.htm
The “Oracle Cloud Infrastructure Customer Advisory for L1TF Impact on the Database Service” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_databaseimpact.htm
The document “Protecting your Compute Instance Against the L1TF Vulnerability” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_protectinginstance.htm

Critical Patch Updates

Security Alert CVE-2018-3110 Released

Oracle just released Security Alert CVE-2018-3110. This vulnerability affects Oracle Database versions 11.2.0.4 and 12.2.0.1 on Windows. It has received a CVSS Base Score of 9.9, though it is not remotely exploitable without authentication. Vulnerability CVE-2018-3110 also affects Oracle Database version 12.1.0.2 on Windows as well as Oracle Database on Linux and Unix; however, patches for those versions and platforms were included in the July 2018 Critical Patch Update.

Due to the nature of this vulnerability, Oracle recommends that customers apply these patches as soon as possible. This means that:
Customers running Oracle Database versions 11.2.0.4 and 12.2.0.1 on Windows should apply the patches provided by the Security Alert.
Customers running version 12.1.0.2 on Windows or any version of the database on Linux or Unix should apply the July 2018 Critical Patch Update if they have not already done so.

For More Information:
• The Advisory for Security Alert CVE-2018-3110 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2018-3110-5032149.html
• The Advisory for the July 2018 Critical Patch Update is located at http://www.oracle.com/technetwork/security-advisory/cpujul2018-4258247.html

Critical Patch Updates

July 2018 Critical Patch Update Released

Oracle today released the July 2018 Critical Patch Update. This Critical Patch Update provides security updates for a wide range of product families, including: Oracle Database Server, Oracle Global Lifecycle Management, Oracle Fusion Middleware, Oracle E-Business Suite, Oracle PeopleSoft, Oracle Siebel CRM, Oracle Industry Applications (Construction, Communications, Financial Services, Hospitality, Insurance, Retail, Utilities), Oracle Java SE, Oracle Virtualization, Oracle MySQL, and Oracle Sun Systems Products Suite. 37% of the vulnerabilities fixed in this Critical Patch Update are for third-party components included in Oracle product distributions.

The CVSS v3 standard considers vulnerabilities with a CVSS Base Score between 9.0 and 10.0 to have a qualitative rating of “Critical”; vulnerabilities with a CVSS Base Score between 7.0 and 8.9 have a qualitative rating of “High.” While Oracle cautions against performing quantitative analysis of the content of each Critical Patch Update release because such analysis is excessively complex (e.g., the same CVE may be listed multiple times because certain components are widely used across different products), it is fair to note that bugs in third-party components make up a disproportionate share of the severe vulnerabilities in this Critical Patch Update. 90% of the Critical vulnerabilities addressed in this Critical Patch Update are for non-Oracle CVEs, and non-Oracle CVEs also make up 56% of the Critical and High vulnerabilities addressed in this Critical Patch Update.

Finally, note that many industry experts anticipate that a number of new variants of exploits leveraging known flaws in modern processor designs (currently referred to as “Spectre” variants) will continue to be discovered. Oracle is actively engaged with Intel and other industry partners to develop technical mitigations against these processor vulnerabilities as they are reported. For more information about this Critical Patch Update, customers should refer to the Critical Patch Update Advisory and the executive summary published on My Oracle Support (Doc ID 2420273.1).

Security Updates

Updates about the “Spectre” series of processor vulnerabilities and CVE-2018-3693

A new processor vulnerability was announced today. Vulnerability CVE-2018-3693 (“Bounds Check Bypass Store” or BCBS) is closely related to Spectre v1. As with previous iterations of Spectre and Meltdown, Oracle is actively engaged with Intel and other industry partners to develop technical mitigations against this processor vulnerability. Note that many industry experts anticipate that a number of new variants of exploits leveraging these known flaws in modern processor designs will continue to be disclosed for the foreseeable future. These issues are likely to primarily impact operating systems and virtualization platforms, and may require software update, microcode update, or both. Fortunately, the conditions of exploitation for these issues remain similar: malicious exploitation requires the attackers to first obtain the privileges required to install and execute malicious code against the targeted systems. In regard to vulnerabilities CVE-2018-3640 (“Spectre v3a”) and CVE-2018-3639 (“Spectre v4”), Oracle has determined that the SPARC processors manufactured by Oracle (i.e., SPARC M8, T8, M7, T7, S7, M6, M5, T5, T4, T3, T2, T1) are not affected by these variants. In addition, Oracle has delivered microcode patches for the last 4 generations of Oracle x86 Servers. As with previous versions of the Spectre and Meltdown vulnerabilities (see MOS Note ID 2347948.1), Oracle will publish information about these issues on My Oracle Support.

Security Updates

Updates about processor vulnerabilities CVE-2018-3640 (“Spectre v3a”) and CVE-2018-3639 (“Spectre v4”)

Two new processor vulnerabilities were publicly disclosed on May 21, 2018: CVE-2018-3640 (“Spectre v3a” or “Rogue System Register Read”) and CVE-2018-3639 (“Spectre v4” or “Speculative Store Buffer Bypass”). Both vulnerabilities have received a CVSS Base Score of 4.3. Successful exploitation of vulnerability CVE-2018-3639 requires local access to the targeted system, and mitigating this vulnerability on affected systems will require both software and microcode updates. Successful exploitation of vulnerability CVE-2018-3640 also requires local access to the targeted system; mitigating this vulnerability on affected Intel processors is performed solely by applying updated processor-specific microcode. Working with the industry, Oracle has just released the required software updates for Oracle Linux and Oracle VM along with the microcode recently released by Intel for certain x86 platforms. Oracle will continue to release new microcode updates and firmware patches as production microcode becomes available from Intel. As with previous versions of the Spectre and Meltdown vulnerabilities (see MOS Note ID 2347948.1), Oracle will publish a list of products affected by CVE-2018-3639 and CVE-2018-3640, along with other technical information, on My Oracle Support (MOS Note ID 2399123.1). In addition, the Oracle Cloud teams will be working to identify and apply necessary updates, if warranted, as they become available from Oracle and third-party suppliers, in accordance with applicable change management processes.

Critical Patch Updates

April 2018 Critical Patch Update Released

Oracle today released the April 2018 Critical Patch Update. This Critical Patch Update provides security updates for a wide range of product families, including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Oracle PeopleSoft, Oracle Industry Applications (Construction, Financial Services, Hospitality, Retail, Utilities), Oracle Java SE, and Oracle Systems Products Suite. Approximately 35% of the security fixes provided by this Critical Patch Update are for non-Oracle Common Vulnerabilities and Exposures (CVEs): that is, security fixes for third-party products (e.g., open source components) that are included in traditional Oracle product distributions. In many instances, the same CVE is listed multiple times in the Critical Patch Update Advisory, because a vulnerable common component (e.g., Apache) may be present in many different Oracle products. Note that Oracle started releasing security updates in response to the Spectre (CVE-2017-5715 and CVE-2017-5753) and Meltdown (CVE-2017-5754) processor vulnerabilities with the January 2018 Critical Patch Update. Customers should refer to this Advisory and the “Addendum to the January 2018 Critical Patch Update Advisory for Spectre and Meltdown” My Oracle Support note (Doc ID 2347948.1) for information about newly-released updates. At this point in time, Oracle has issued the corresponding security patches for Oracle Linux and Virtualization and for Oracle Solaris on SPARC (SPARC 64-bit systems are not affected by Meltdown), and Oracle is working on producing the necessary updates for Solaris on x86 (noting that the diversity of supported processors complicates the creation of the security patches related to these issues). For more information about this Critical Patch Update, customers should refer to the Critical Patch Update Advisory and the executive summary published on My Oracle Support (Doc ID 2383583.1).

Critical Patch Updates

Security Alert CVE-2017-9805 Released

Last week, Equifax identified an Apache Struts 2 vulnerability, CVE-2017-5638, as having been exploited in a significant security incident. Oracle distributed the Apache Foundation’s fixes for CVE-2017-5638 several months ago in the April 2017 Critical Patch Update, which should have already been applied to customer systems well before this breach came to light. Recently, the Apache Foundation released fixes for a number of additional Apache Struts 2 vulnerabilities, including CVE-2017-9805, CVE-2017-7672, CVE-2017-9787, CVE-2017-9791, CVE-2017-9793, CVE-2017-9804, and CVE-2017-12611. Oracle just published Security Alert CVE-2017-9805 in order to distribute these fixes to our customers. Please refer to the Security Alert advisory for the technical details of these bugs as well as the CVSS Base Score information. Oracle strongly recommends that customers apply the fixes contained in this Security Alert as soon as possible. Furthermore, Oracle reminds customers that they should keep up with security releases and should have applied the July 2017 Critical Patch Update (the most recent Critical Patch Update release). The next Critical Patch Update release is on October 17, 2017. For More Information: The Security Alerts and Critical Patch Updates page is located at https://www.oracle.com/technetwork/topics/security/alerts-086861.html A blog entry titled "Take Advantage of Oracle Software Security Assurance" is located at https://blogs.oracle.com/oraclesecurity/take-advantage-of-oracle-software-security-assurance. This blog entry provides a description of the Critical Patch Update and Security Alert programs and general recommendations around security patching.
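As an illustration of the kind of quick inventory check the recommendation above implies, the hedged Python sketch below flags Apache Struts 2 core versions that predate the fixes distributed with this Security Alert. The fixed-version numbers (2.3.34 and 2.5.13) and the sample inventory are assumptions used for illustration; the Security Alert advisory remains the authoritative source for affected versions.

```python
"""Flag Apache Struts 2 core versions that predate the CVE-2017-9805 fixes.

Illustrative only: the fixed versions below (2.3.34 for the 2.3.x line,
2.5.13 for the 2.5.x line) should be confirmed against the advisory, and
the inventory dict is a stand-in for whatever build-manifest or asset
data an organization actually has.
"""

FIXED = {"2.3": (2, 3, 34), "2.5": (2, 5, 13)}


def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))


def needs_patch(version: str) -> bool:
    v = parse(version)
    fixed = FIXED.get(f"{v[0]}.{v[1]}")
    # Unknown release lines are flagged for manual review.
    return True if fixed is None else v < fixed


inventory = {"app-a": "2.5.10", "app-b": "2.3.34", "app-c": "2.1.8"}  # hypothetical
for app, version in inventory.items():
    flag = "UPDATE NEEDED" if needs_patch(version) else "ok"
    print(f"{app}: struts2-core {version} -> {flag}")
```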

Oracle Security

Oracle's Security Fixing Practices

In a previous blog entry, we discussed how Oracle customers should take advantage of Oracle's ongoing security assurance effort in order to help preserve their security posture over time. In today's blog entry, we're going to discuss the highlights of Oracle's security fixing practices and their implications for Oracle customers. As stated in the previous blog entry, the Critical Patch Update program is Oracle's primary mechanism for the delivery of security fixes in all supported Oracle product releases, and the Security Alert program provides for the release of fixes for severe vulnerabilities outside of the normal Critical Patch Update schedule. Oracle always recommends that customers remain on actively-supported versions and apply the security fixes provided by Critical Patch Updates and Security Alerts as soon as possible. So, how does Oracle decide to provide security fixes? Where does the company start (i.e., for which product versions do security fixes get generated first)? What goes into security releases? What are Oracle's objectives? The primary objective of Oracle's security fixing policies is to help preserve the security posture of ALL Oracle customers. This means that Oracle tries to fix vulnerabilities in severity order for each Oracle product family. In certain instances, security fixes cannot be backported; in other instances, lower severity fixes are required because of dependencies among security fixes. Additionally, Oracle treats customers equally by providing all customers with the same vulnerability information and access to fixes, across actively-used platform and version combinations, at the same time. Oracle does not provide additional information about the specifics of vulnerabilities beyond what is provided in the Critical Patch Update (or Security Alert) advisory and pre-release note, the pre-installation notes, the readme files, and FAQs. The only, and narrow, exception to this practice is for customers who report a security vulnerability. When a customer reports a security vulnerability, Oracle treats the customer in much the same way the company treats security researchers: the customer gets detailed information about the vulnerability as well as information about the expected fixing date, and in some instances access to a temporary patch to test the effectiveness of a given fix. However, the scope of the information shared between Oracle and the customer is limited to the original vulnerability being reported by the customer. Another objective of Oracle's security fixing policies is not so much to produce fixes as quickly as possible as it is to make sure that these fixes get applied by customers as quickly as possible. Prior to 2005 and the introduction of the Critical Patch Update program, security fixes were published by Oracle as they were produced by development, without any fixed schedule (much as Oracle releases a Security Alert today). The feedback we received was that this lack of predictability was challenging for customers, and as a result, many customers reported that they no longer applied fixes. Customers said that a predictable schedule would help them ensure that security fixes were picked up more quickly and consistently. As a result, Oracle created the Critical Patch Update program to bring predictability to Oracle customers. Since 2005, and in spite of a growing number of product families, Oracle has never missed a Critical Patch Update release.
It is also worth noting that Critical Patch Update releases for most Oracle products are cumulative. This means that by applying a Critical Patch Update, a customer gets all the security fixes included in a specific Critical Patch Update release as well as all the previously-released fixes for a given product-version combination. This allows customers who may have missed Critical Patch Update releases to quickly "catch up" to current security releases. Let's now have a look at the order in which Oracle produces fixes for security vulnerabilities. Security fixes are produced by Oracle in the following order: (1) the main code line, which is the code line for the next major release version of the product; (2) patch sets for non-terminal release versions (patch sets are rollup patches for major release versions; a terminal release version is a version for which no additional patch sets are planned); and (3) Critical Patch Updates, which are fixes against initial release versions or their subsequent patch sets. This means that, in certain instances, security fixes can be backported for inclusion in future patch sets or products that are released before their actual inclusion in a future Critical Patch Update release. This also means that systems updated with patch sets or upgraded with a new product release will receive the security fixes previously included in the patch set or release. One consequence of Oracle's practices is that newer Oracle product versions tend to provide an improved security posture over previous versions, because they benefit from the inclusion of security fixes that have not been or cannot be backported by Oracle. In conclusion, the best way for Oracle customers to fully leverage Oracle's ongoing security assurance effort is to: (1) remain on actively supported release versions and their most recent patch set, so that they have continued access to security fixes; (2) move to the most recent release version of a product, so that they benefit from fixes that cannot be backported and from other security enhancements introduced in the code line over time; and (3) promptly apply Critical Patch Updates and Security Alert fixes, so that they prevent the exploitation of vulnerabilities patched by Oracle, which are known by malicious attackers and can be quickly weaponized after the release of Oracle fixes. For more information: - Oracle Software Security Assurance website - Security Alerts and Critical Patch Updates

Oracle Security

Take Advantage of Oracle Software Security Assurance

In a previous blog entry (What is Assurance and Why Does It Matter?), Mary Ann Davidson explains the importance of Security Assurance and introduces Oracle Software Security Assurance, Oracle’s methodology for building security into the design, build, testing, and maintenance of its products. The primary objective of software security assurance is to help ensure that security controls provided by software are effective, work in a predictable fashion, and are appropriate for that software. The purpose of ongoing security assurance is to make sure that this objective continues to be met over time (throughout the useful life of software). The development of enterprise software is a complex matter. Even in mature development organizations, bugs still occur, and the use of automated tools does not completely prevent software defects. One important aspect of ongoing security assurance is therefore to remediate security bugs in released code. Another aspect of ongoing security assurance is to ensure that the security controls provided by software continue to be appropriate when the use cases for software change. For example, years ago backups were performed mostly on tapes or other devices physically connected to the server being backed up, while today many backups are performed over private or public networks and sometimes stored in a cloud. Finally, other aspects of ongoing security assurance include responding to changing threats (e.g., new attack methods) and to obsolete technologies (e.g., deprecated encryption algorithms). Oracle customers need to take advantage of Oracle’s ongoing security assurance efforts in order to preserve, over time, the security posture associated with their use of Oracle products. To that end, Oracle recommends that customers remain on actively-supported versions and apply security fixes as quickly as possible after they have been published by Oracle. Introduced in 2005, the Critical Patch Update program is the primary mechanism for the backport of security fixes for all Oracle on-premises products. The Critical Patch Update is Oracle’s program for the distribution of security fixes in previously-released versions of Oracle software. Critical Patch Updates are regularly scheduled: they are issued quarterly on the Tuesday closest to the 17th of the month in January, April, July, and October. This fixed schedule is intended to provide enough predictability to enable customers to apply security fixes in normal maintenance windows. Furthermore, the dates of the Critical Patch Update releases are intended to fall outside of traditional "blackout" periods when no changes to production systems are typically allowed (e.g., end of fiscal years or quarters or significant holidays). Note that in addition to this regularly-scheduled program for security releases, Oracle retains the ability to issue out-of-schedule patches or workaround instructions in case of particularly critical vulnerabilities and/or when active exploits are reported "in the wild." This program is known as the Security Alert Program. Critical Patch Update and Security Alert fixes are only provided for product versions that are "covered under the Premier Support or Extended Support phases of the Lifetime Support Policy." This means that Oracle does not backport fixes to product versions that are out of support. Furthermore, unsupported product releases are not tested for the presence of vulnerabilities.
It is, however, common for vulnerabilities to be found in legacy code, and vulnerabilities fixed in a given Critical Patch Update release can also affect older product versions that are no longer supported. As a result, organizations choosing to continue to use unsupported systems face increasing risks over time. Malicious attackers are known to reverse-engineer the content of published security fixes, and it is common for exploit code to be published in hacking frameworks soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert. Continuing to use unsupported systems can therefore have two serious implications: (a) unsupported releases are likely to be affected by vulnerabilities which are not known by the affected software user, because these releases are no longer subject to ongoing security assurance activities, and (b) unsupported releases are likely to be vulnerable to flaws that are known by malicious perpetrators, because these bugs have been fixed (and publicly disclosed) in subsequent releases. Unfortunately, security studies continue to report that, in addition to human errors and systems misconfigurations, the lack of timely security patching constitutes one of the greatest reasons for the compromise of IT systems by malicious attackers. See, for example, the Federal Trade Commission’s paper "Start with Security: A Guide for Business", which recommends that organizations have effective means to keep up with security releases of their software (whether commercial or open source). Delays in security patching and overall lapses in good security hygiene have plagued IT organizations for years. In many instances, organizations will report the "fear of breaking something in a business-critical system" as the reason for not keeping up with security patches. Here lies a fundamental paradox: a given system may be considered too important to fail (or be temporarily brought offline), and this is the reason why it is not kept up to date with security patches! These organizations are hoping that the benefit of avoiding a known availability interruption outweighs the potential impact of a security incident that could result from not keeping up with security releases. This amounts to driving a car with very little gas left in the tank and thinking "I don’t have time to stop at the gas station, because I really need my car and I am too busy to gas up." Obviously, the scarcity of technical personnel and the costs associated with testing complex applications and deploying patches further exacerbate the problem. The larger the IT environment, the more complex, and the more operation-critical, the greater the "to patch or not to patch" conundrum. In recent years, Oracle has issued stronger caution against postponing the application of security fixes or knowingly continuing to use unsupported versions. For example, the April 2017 Critical Patch Update Advisory includes the following warning: "Oracle continues to periodically receive reports of attempts to maliciously exploit vulnerabilities for which Oracle has already released fixes. In some instances, it has been reported that attackers have been successful because targeted customers had failed to apply available Oracle patches. Oracle therefore strongly recommends that customers remain on actively-supported versions and apply Critical Patch Update fixes without delay."
Keeping up with security releases is simply a critical requirement for preserving the security posture of an IT environment, regardless of the technologies (or vendors) in use.
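As a small illustration of the Critical Patch Update schedule described above (the Tuesday closest to the 17th of January, April, July, and October), the Python sketch below computes those dates for a given year. It is only a planning aid under that stated assumption; the official dates are the ones Oracle announces a year in advance.

```python
"""Compute Critical Patch Update dates for a given year.

Based on the schedule described above: the Tuesday closest to the 17th of
January, April, July, and October. The authoritative dates remain those
published by Oracle.
"""
from datetime import date, timedelta


def closest_tuesday(year: int, month: int, day: int = 17) -> date:
    anchor = date(year, month, day)
    # weekday(): Monday == 0 ... Sunday == 6; Tuesday == 1.
    # Map the difference to the range [-3, 3] so we pick the nearest Tuesday.
    offset = (1 - anchor.weekday() + 3) % 7 - 3
    return anchor + timedelta(days=offset)


def cpu_dates(year: int) -> list:
    return [closest_tuesday(year, month) for month in (1, 4, 7, 10)]


if __name__ == "__main__":
    for d in cpu_dates(2018):
        print(d.isoformat())  # e.g., 2018-01-16, 2018-04-17, 2018-07-17, 2018-10-16
```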

Industry Insights

What Is Assurance and Why Does It Matter?

If you are an old security hand, you can skip reading this. If you think "assurance" is something you pay for so your repair bills are covered if someone hits your car, please keep reading. Way back in the pre-Internet days, I used to say that computer security was kind of a lonely job, because hardly any customers seemed to be really interested in talking about it. There were, of course, some keenly interested customers, including defense and intelligence agencies and a few banks, most of which were concerned with our security functionality and—to a lesser degree—how we were building security into everything, a difference I will explain below, and which is known as assurance. Times change. Now, when I meet someone who complains of a virus, it's better-than-even odds that he is talking about the latest digital plague and not a case of the flu. Information technology (IT) has moved way beyond mission-critical applications to things that are literally in the palm of our hands and is in places we never even thought would (or in some cases should) be computerized ("turn your crock pot on remotely? There's an app for that!"). More and more of our world is not only IT-based but Internet accessible. Alas, the growth in Internet-accessible whatchamacallits has also led to a growth in Evil Dudes in Upper Wherever wreaking havoc in systems in Anywheresville. This is one big reason that cybersecurity is something (almost) everybody cares about. Historically, computer security has often been described as "CIA" (Confidentiality, Integrity and Availability): Confidentiality means that the data is protected such that people who don't need to access it can't access it, via restrictions on who can view, delete or change data. For example, at Oracle, I can review my salary online (so can my Human Resources representative), but I cannot look at the salaries of employees who do not report to me. Integrity means that the data hasn't been corrupted (technical term: "futzed with"). In other words, you know that "A" means "A" and isn't really "B" that has been garbled to look like "A." Corrupted data is often worse than no data, precisely because you can't trust it. (Wire transfers wouldn't work if extra 0s were mysteriously and randomly appended to amounts.) Availability means that you are able to access data (and systems) you have legitimate access to—when you need to. In other words, someone hasn't prevented access by, say, flooding a machine with so many requests that the system just gives up (the digital equivalent of a persistent three-year-old asking for more candy "now mommy now mommy now mommy" to the point where mommy can't think). C, I and A are all important attributes of security that may vary in terms of importance from system to system. Assurance is not CIA, but it is the confidence that a system does what it was designed to do, including protecting against specific threats, and also that there aren't sneaky ways around the security controls it provides. It's important because, if you don't have a lot of confidence that the CIA cannot be bypassed by Evil Dude (or Evil Dudette), then the CIA isn't useful. If you have a digital doorlock, but the lock designer goofed by allowing anybody to unlock the door by typing '98765,' then you don't have any security once Evil Dude figures out that 98765 always gets him into your house (and shares that with everybody else on the Internet).
Here's the definition of assurance that the US Department of Defense uses: software assurance relates to "the level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software" (https://acc.dau.mil/CommunityBrowser.aspx?id=25749). When I started working in security, most security people knew a lot about the CIA of security, but fewer of us—fewer of anybody—thought about the "functions as designed" and "free from vulnerabilities" part. "Functions as intended" is a design aspect of security. That means that a designer not only considered what the software (or hardware) was intended to do, but thought about how someone could try to make the software (or hardware) do what it was not intended to do. Both are important because unless you never deploy a product, it's most likely going to be attacked, somehow, somewhere, by someone. Thinking about how Evil Dude can try to break stuff (and making that hard/unlikely to succeed) is a very important part of "functions as intended." The "free of vulnerabilities" part is also important; having said that, nobody who knows anything about code would say, "all of our code is absolutely perfect." ("Pride goeth before destruction and a haughty spirit before a fall.") That said, one of the most important aspects of assurance is secure coding. Secure coding practices include training your designers, developers, testers (and yes, even documentation writers) about how code can be broken, so people think about that before starting to code. Having a development process that incorporates security into design, development, testing and maintenance is also important. Security isn't a sort of magic pixie dust you can sprinkle over software or hardware after it's all done to magically make it secure—it is a quality, just as structural integrity is part of a building, not something you slap your head over and think, "dang, I forgot the rebar, I need to add some to this building." It's too late after the concrete has set. Secure coding practices include actively looking for coding errors that could be exploited by an Evil Dude, triaging those coding errors to determine "how bad is bad," fixing the worst stuff the fastest and making sure that a problem is fixed completely. If Evil Dude can break in by typing '^X,' it's tempting to just redo your code so typing ^X doesn't get Evil Dude anything. But that likely isn't the root of the problem (what about ^Y - what does that do? Or ^Z, ^A...?) Automated tools designed to help find avoidable, preventable defects in software are a huge help (they didn't really exist when I started in security). Nobody who buys a house expects the house to be 100% perfect, but you'd like to think that the architect hired structural engineers to ensure the walls wouldn't fall over, the contractor had people checking the work all along ("don't skimp on the rebar"), there was a building inspection, etc. Note that even with a really well-designed and well-built house, there is probably a ding or two in the paint somewhere even before you move in—it's probably not letter perfect. Code is like that, too, although a "ding in your code" is probably more significant than a ding in your paint, so there should be far fewer of them. Assurance matters not only because people who use IT want to know things work as intended—and cannot be easily broken—but because time, money and people are always limited resources.
Most companies would rather hire 50 more salespeople to reach more potential customers than hire 50 more people to patch their systems. (For that matter, I'd rather have such strong, repeatable, ingrained secure development practices that instead of hiring 50 more people to fix bad/insecure code, we can use those 50 people to build new, cool (and secure) stuff.) Assurance is always going to be good and necessary, even as the baseline technology we are trying to "assure" continues to change. One of the most enjoyable aspects of my job is continuing to "bake security in" as we grow in breadth and as we adapt to changes in the market. Many companies are moving from "buy-build-maintain" their own systems to "rent," by using cloud services. (It makes a lot of sense: companies don't typically build apartment buildings in every city their employees visit: they use "cloud housing," a.k.a. hotels.) The increasing move to cloud services comes with security challenges, but also has a lot of security benefits. If it's hard to find enough IT people to maintain your systems, it's even harder to find enough security people to defend them. A service provider can secure the same thing, 5000 times, much better than 5000 individual customers can. (Or, alternatively, a service provider can secure one big multi-tenant service offering better than the 5000 customers using it can do themselves.) The assurance practices we have adapted from "home grown" software and hardware have already morphed, and will continue to morph, to fit how we build and deliver cloud services. Click here for more information on Oracle assurance.

Security Trends

The State of Open Source Security

Open source components have played a growing role in software development (commercial and in-house development). The traditional role of a developer has evolved from coding most of everything to re-using known and trustworthy components as much as possible. As a result, a growing aspect of software design and development decisions has become the integration of open-source and third-party components into increasingly large and complex software. The question as to whether open source software is inherently more secure than commercial (i.e., "closed") software has been ardently debated for a number of years. The purpose of this blog entry is not to definitively settle this argument, though I would argue that software (whether open source or closed source) that is developed by security-aware developers tends to be inherently more secure. Regardless of this controversy, there are important security implications regarding the use of open source components. The wide use of certain components (open source or not) has captured the attention of both security researchers and malicious actors. Indeed, the discovery of a 0-day in a widely-used component can imply for malicious actors the prospect of "hacking once and exploiting anywhere" or large financial gain if the bug is sold on the black market. As a result, a growing number of security vulnerabilities have been found and reported in open source components. The positive impact of increased security research in widely-used components will (hopefully) be the improved security-worthiness of these components (and the relative fulfillment of the "million eyes" theory), and the increased awareness of the security implications of the use of open source components within development organizations. In many instances, the vulnerabilities found in public components have been given cute names: POODLE, Heartbleed, Dirty Cow, Shellshock, Venom, etc. (in no particular order). These names contributed to a sense of urgency (sometimes panic) within many organizations, often to the detriment of a rational analysis of the actual severity of these issues and their relative exploitability in the affected environments. Less security-sophisticated organizations have been particularly affected by this sense of urgency, and many have attempted to scan their environment to find software containing the vulnerability "du jour." However, it has been Oracle's experience that while many free tools provide organizations with the ability to relatively accurately identify the presence of open source components in IT systems, the majority of these tools have an abysmal track record at accurately identifying the version(s) of these components, much less at determining the exploitability of the issues associated with them. As a result, less security-sophisticated organizations are faced with reports containing a large number of false positives, and are unable to make sense of these findings (Oracle support has seen an increase in the submission of such inaccurate reports). From a security assurance perspective, I believe that there are three significant and often under-discussed topics related to the use of open source components in complex software development: How can we assess and enhance security assurance activities in open source projects? How can we ensure that these components are obtained securely? How does the use of open source components affect ongoing assurance activities throughout the useful life of associated products?
How can we assess and enhance security assurance activities in open source projects? Assessing the security maturity of an open source project is not necessarily an easy thing. There are certain tools and derived methodologies and principles (e.g., the Building Security In Maturity Model (BSIMM), SAFECode) that can be used to assess the relative security maturity of commercial software developers, but their application to open source projects is difficult. For example, how can one determine the amount of security skills available in an open source project and whether code changes are systematically reviewed by skilled security experts? Furthermore, should the software industry come together on means to coordinate the role of commercial vendors in helping enhance the security posture of the most common open source projects for the benefit of all vendors and the community? Is it enough to commit that security fixes be shared with the community when an issue is discovered while a component is being used in a commercial offering? How can we ensure that these components are obtained securely? A number of organizations (whether for the purpose of developing commercial software or their own systems) are concerned solely about "toxic" licenses when procuring open source components, while they should be equally concerned about bringing in toxic code. One problem is the potential downloading and use of obsolete software (which contains known security flaws that have been fixed in the most recent releases). This problem can relatively easily be solved by requiring developers to only download the most recent releases from the official project repository. Many developers prefer pulling compiled binaries, instead of compiling the source code themselves (and verifying its authenticity). Developers should be aware of the risk of pulling malicious code (just because it is labelled as "foo" does not mean that it actually is "foo"; it may actually be "foo + a nasty backdoor"). There have been several publicly-reported security incidents resulting from the downloading of maliciously altered programs. How can we provide the necessary security care of a solution that includes open source components throughout its useful life? Once an organization has decided to use an external component in their solution, the organization should also consider how they will maintain the solution. The maintenance and patching implications of third-party components are often overlooked. For example, organizations may be faced with hardware limitations in their products. They may have to deprecate hardware products more quickly because a required open source component is no longer supported on a specific platform, or because the technical requirements of subsequent releases of the component exceed the specifications of the hardware. In hardware environments, there is also the obvious question of whether patching mechanisms are available for the updating of open source components on the platform. There are also problematic implications of the use of open source components when they are used in purely-software solutions. Security fixes for open source components are often unpredictable. How does this unpredictability affect the availability of production systems, or customers' requirements for a fixed maintenance schedule? In conclusion, the questions listed in this blog entry are just a few of the questions that one should consider when developing technology-based products (which seems to be just about everything these days).
These questions are particularly important as open source components represent a large and increasing chunk of the technology supply chain, not only of commercial technology vendors, but also of cloud providers. Security assurance policies and practices should take these questions into consideration, and highlight the fact that open source, while incredibly useful, is not necessarily "free" but requires specific sets of commitments and due diligence obligations.
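To make the "toxic code" point above a little more concrete, the following Python sketch shows one minimal control a build pipeline might apply before using a downloaded component: comparing the artifact's SHA-256 digest against a value pinned at review time. The file name and digest below are hypothetical placeholders; in practice this check would be complemented by verifying the project's release signatures.

```python
"""Verify a downloaded third-party component against a pinned digest.

A minimal sketch of the "don't just trust the label" point above. The
artifact name and the expected SHA-256 value are hypothetical and would
normally come from the project's signed release notes or an internal
allow-list.
"""
import hashlib
from pathlib import Path

PINNED = {
    # artifact file name -> expected SHA-256 (hex), recorded at review time
    "somelib-1.2.3.tar.gz":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def verify(path: Path) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = PINNED.get(path.name)
    return expected is not None and digest == expected


artifact = Path("downloads/somelib-1.2.3.tar.gz")  # hypothetical download location
if artifact.exists() and verify(artifact):
    print(f"{artifact.name}: digest matches, OK to use")
else:
    print(f"{artifact.name}: missing or digest mismatch, do not use")
```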

Industry Insights

Common Criteria and the Future of Security Evaluations

For years, I (and many others) have recommended that customers demand more of their information technology suppliers in terms of security assurance – that is, proof that security is “built in” and not “bolted on,” that security is “part of” the product or service developed and can be assessed in a meaningful way. While many customers are focused on one kind of assurance – the degree to which a product is free from security vulnerabilities – it is extremely important to know the degree to which a product was designed to meet specific security threats (and how well it does that). These are two distinct – and quite complementary – approaches to security, and a point that should increasingly be of value for all customers. The good news is that many IT customers – whether of on-premises products or cloud services – are asking for more “proof of assurance,” and many vendors are paying more attention. Great! At the same time, sadly, a core international standard for assurance, the Common Criteria (CC) (ISO 15408), is at risk. The Common Criteria allows you to evaluate your IT products via an independent lab (certified by the national “scheme” in which the lab is domiciled). Seven levels of assurance are defined – generally, the higher the evaluation assurance level (EAL), the more “proof” you have to provide that your product 1) addresses specific (named) security threats 2) via specific (named) technical remedies to those threats. Over the past few years, CC experts have packaged technology-specific security threats, objectives, functions and assurance requirements into “Protection Profiles” that have a pre-defined assurance level. The best part of the CC is the CC Recognition Arrangement (CCRA), the benefit of which is that a CC security evaluation done in one country (subject to some limits) is recognized in multiple other countries (27, at present). The benefit to customers is that they can have a baseline level of confidence in a product they buy because an independent entity has looked at/validated a set of security claims about that product. Unfortunately, the CC is in danger of losing this key benefit of mutual recognition. The main tension is between countries that want fast, cookie cutter, “one assurance size fits all” evaluations, and those that want (for at least some classes of products) higher levels of assurance. These tensions threaten to shatter the CCRA, with the risk of an “every country for itself,” “every market sector for itself” or worse, “every customer for itself” attempt to impose inconsistent assurance requirements on vendors that sell products and services in the global marketplace. Customers will not be well-served if there is no standardized and widely-recognized starting point for a conversation about product assurance. The uncertainty about the future of the CC creates opportunity for new, potentially expensive and unproven assurance validation approaches. Every Tom, Dick, and Harriet is jumping on the assurance bandwagon, whether it is developing a new assurance methodology (that the promoters hope will be adopted as a standard, although it’s hardly a standard if one company “owns” the methodology), or lobbying for the use of one proprietary scanning tool or another (noting that none of the tools that analyze code are themselves certified for accuracy and cost-efficiency, nor are the operators of these tools). Nature abhors a vacuum: if the CCRA fractures, there are multiple entities ready to promote their assurance solutions – which may or may not work.
(Note: I freely admit that a current weakness of the CC is that, while vulnerability analysis is part of a CC evaluation, it’s not all that one would want. A needed improvement would be a mechanism that ensures that vendors use a combination of tools to more comprehensively attempt to find security vulnerabilities that can weaken security mechanisms and have a risk-based program for triaging and fixing them. Validating that vendors are doing their own tire-kicking – and fixing holes in the tires before the cars leave the factory – would be a positive change.) Why does this threat of CC balkanization matter? First of all, testing the exact same product or service 27 times won’t in all likelihood lead to a 27-fold security improvement, especially when the cost of the testing is borne by the same entity over and over (the vendor). Worse, since the resources (time, money, and people) that would be used to improve actual security are assigned to jumping through the same hoop 27 times, we may paradoxically end up with worse security. We may also end up with worse security to the extent that there will be less incentive for the labs that do CC evaluations to pursue excellence and cost efficiency in testing if they have less competition (for example, from labs in other countries, as is the case under the CCRA) and they are handed a captive marketplace via country-specific evaluation schemes. Second, whatever the shortcomings of the CC, it is a strong, broadly-adopted foundation for security that to-date has the support of multiple stakeholders. While it may be improved upon, it is nonetheless better to do one thing in one market that benefits and is accepted in 26 other markets than to do 27 or more expensive testing iterations that will not lead to a 27-fold improvement in security. This is especially true in categories of products that some national schemes have deemed “too complex to evaluate meaningfully.” The alternative clearly isn't per-country testing or per-customer testing, because it is in nobody's interests and not feasible for vendors to do repeated one-off assurance fire-drills for multiple system integrators. Even if the CC is “not sufficient” for all types of testing for all products, it is still a reputable and strong baseline to build upon. Demand for Higher Assurance In part, the continuing demand for higher assurance CC evaluations is due to the nature of some of the products: smart cards, for example, are often used for payment systems, where there is a well understood need for “higher proof of security-worthiness.” Also, smart cards generally have a smaller code footprint and fewer, well-defined interfaces, and thus they lend themselves fairly well to more in-depth, higher assurance validation. Indeed, the smart card industry – in a foreshadowing and/or inspiration of CC community Protection Profiles (cPPs) – was an early adopter of devising common security requirements and “proof of security claims,” doubtless understanding that all smart card manufacturers - and the financial institutions who are heavy issuers of them - have a vested interest in “shared trustworthiness.” This is a great example of understanding that, to quote Ben Franklin, “We must all hang together or assuredly we shall all hang separately.” The demand for higher assurance evaluations continues in part because the CC has been so successful. Customers worldwide became accustomed to “EAL4” as the gold standard for most commercial software.
“EAL-none”—the direction of new style community Protection Profiles (cPP)—hasn’t captured the imagination of the global marketplace for evaluated software in part because the promoters of “no-EAL is the new EAL4” have not made the necessary business case for why “new is better than old.” An honorable, realistic assessment of “new-style” cPPs would explain what the benefits are of the new approach and what the downsides are as part of making a case that “new is better than old.” Consumers do not necessarily upgrade their TV just because they are told “new is better than old;” they upgrade because they can see a larger screen, clearer picture, and better value for money. Product Complexity and Evaluations To the extent security evaluation methodology can be more precise and repeatable, that facilitates more consistent evaluations across the board at a lower evaluation cost. However, there is a big difference between products that were designed to do a small set of core functions, using standard protocols, and products that have a broader swathe of functionality and have far more flexibility as to how that functionality is implemented. This means that it will be impossible to standardize testing across products in some product evaluation categories. For example, routers use standard Internet protocols (or well-known proprietary protocols) and are relatively well defined in terms of what they do. Therefore, it is far easier to test their security using standardized tests as part of a CC evaluation to, for example, determine attack resistance, correctness of protocol implementation, and so forth. The Network Device Protection Profile (NDPP) is the perfect template for this type of evaluation. Relational databases, on the other hand, use structured query language (SQL) but that does not mean all SQL syntax in all commercial databases is identical, or that protocols used to connect to the database are all identical, or that common functionality is completely comparable among databases. For example, Oracle was the first relational database to implement commercial row level access control: specifically, by attaching a security policy to a table that causes a rewrite of SQL to enforce additional security constraints. Since Oracle developed (and patented) row level access control, other vendors have implemented similar (but not identical) functionality. As a result, no set of standard tests can adequately test each vendor’s row level security implementation, any more than you can use the same key on locks made by different manufacturers. Prescriptive (monolithic) testing can work for verifying protocol implementations; it will not work in cases where features are implemented differently. Even worse, prescriptive testing may have the effect of “design by test harness.” Some national CC schemes have expressed concerns that an evaluation of some classes of products (like databases) will not be “meaningful” because of the size and complexity of these products [1], or that these products do not lend themselves to repeatable, cross-product (prescriptive) testing. This is true, to a point: it is much easier to do a building inspection of a 1000-square foot or 100-square meter bungalow than of Buckingham Palace. However, given that some of these large, complex products are the core underpinning of many critical systems, does it make sense to ignore them because it’s not “rapid, repeatable and objective” to evaluate even a core part of their functionality? 
These classes of products are heavily used in the core market sectors the national schemes serve: all the more reason the schemes should not preclude evaluation of them. Worse, given that customers subject to these CC schemes still want evaluated products, a lack of mutual recognition of these evaluations (thus breaking the CCRA) or negation of the ability to evaluate merely drives costs up. Demand for inefficient and ineffective ad hoc security assurances continues to increase and will explode if vendors are precluded from evaluating entire classes of products that are widely-used and highly security relevant. No national scheme, despite good intentions, can successfully control its national marketplace, or the global marketplace for information technology. Innovation One of the downsides of rapid, basic, vanilla evaluations is that it stifles the uptake of innovative security features in a customer base that has a lot to protect. Most security-aware customers (like defense and intelligence customers) want new and innovative approaches to security to support their mission. They also want the new innovations vetted properly (via a CC evaluation). Typically, a community Protection Profile (cPP) defines the set of minimum security functions that a product in category X does. Add-ons can in theory be done via an extended package (EP) – if the community agrees to it and the schemes allow it. The vendor and customer community should encourage the ability to evaluate innovative solutions through an EP, as long as the EP does not specify a particular approach to a threat to the exclusion of other ways to address the threat. This would continue to advance the state of the security art in particular product categories without waiting until absolutely everyone has Security Feature Y. It’s almost always a good thing to build a better mousetrap: there are always more mice to fend off. Rapid adoption of EPs would enable security-aware customers, many of whom are required to use evaluated products, to adopt new features readily, without waiting for: a) every vendor to have a solution addressing that problem (especially since some vendors may never develop similar functionality) b) the cPP to have been modified, and c) all vendors to have evaluated against the new cPP (that includes the new security feature) Given the increasing focus of governments on improvements to security (in some cases by legislation), national schemes should be the first in line to support “faster innovation/faster evaluation,” to support the customer base they are purportedly serving. Last but really first, in the absence of the ability to rapidly evaluate new, innovative security features, customers who would most benefit from using those features may be unable or unwilling to use them, or may only use them at the expense of “one-off” assurance validation. Is it really in anyone’s interest to ask vendors to do repeated one-off assurance fire-drills for multiple system integrators? Conclusion The Common Criteria – and in particular, the Common Criteria recognition – form a valuable, proven foundation for assurance in a digital world that is increasingly in need of it. 
That strong foundation can nonetheless be strengthened by: 1) recognizing and supporting the legitimate need for higher assurance evaluations in some classes of product 2) enabling faster innovation in security and the ability to evaluate it 3) continuing to evaluate core products that have historically had and continue to have broad usage and market demand (e.g., databases and operating systems) 4) embracing, where apropos, repeatable testing and validation, while recognizing the limitations thereof that apply in some cases to entire classes of products and ensuring that such testing is not unnecessarily prescriptive. [1] https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/DBMS%20Position%20Statement.pdf

Security Updates

Security Alert CVE-2016-0603 Released

Oracle just released Security Alert CVE-2016-0603 to address a vulnerability that can be exploited when installing Java 6, 7 or 8 on the Windows platform. This vulnerability has received a CVSS Base Score of 7.6. To be successfully exploited, this vulnerability requires that an unsuspecting user be tricked into visiting a malicious web site and downloading files to the user's system before installing Java 6, 7 or 8. Though considered relatively complex to exploit, this vulnerability may, if successfully exploited, result in a complete compromise of the unsuspecting user’s system. Because the exposure exists only during the installation process, users need not upgrade existing Java installations to address the vulnerability. However, Java users who have downloaded any old version of Java prior to 6u113, 7u97 or 8u73 should discard these old downloads and replace them with 6u113, 7u97 or 8u73 or later. As a reminder, Oracle recommends that Java home users visit Java.com to ensure that they are running the most recent version of Java SE and that all older versions of Java SE have been completely removed. Oracle further advises against downloading Java from sites other than Java.com as these sites may be malicious. For more information, the advisory for Security Alert CVE-2016-0603 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2016-0603-2874360.html

Critical Patch Updates

January 2016 Critical Patch Update Released

Oracle today released the January 2016 Critical Patch Update. With this Critical Patch Update release, the Critical Patch Update program enters its 11th year of existence (the first Critical Patch Update was released in January 2005). As a reminder, Critical Patch Updates are currently released 4 times a year, on a schedule announced a year in advance. Oracle recommends that customers apply this Critical Patch Update as soon as possible. The January 2016 Critical Patch Update provides fixes for a wide range of product families, including: Oracle Database (none of these database vulnerabilities are remotely exploitable without authentication); Java SE (Oracle strongly recommends that Java home users visit the Java.com website to ensure that they are using the most recent version of Java, and advises them to remove obsolete Java SE versions from their computers if they are not absolutely needed); and Oracle E-Business Suite (Oracle’s ongoing assurance effort with E-Business Suite helps remediate security issues and is intended to help enhance the overall security posture provided by E-Business Suite). Oracle takes security seriously, and strongly encourages customers to keep up with newer releases in order to benefit from Oracle’s ongoing security assurance effort. For More Information: The January 2016 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpujan2016-2367955.html The Oracle Software Security Assurance web site is located at https://www.oracle.com/support/assurance/index.html Oracle Applications Lifetime Support Policy is located at http://www.oracle.com/us/support/library/lifetime-support-applications-069216.pdf

Industry Insights

Improving the Speed of Product Evaluations

Hi there, Oracle Security blog readers; Josh Brickman here again. Today I want to share some of our thoughts about Common Criteria (CC) evaluations, specifically those under the US Scheme of the CC run by the National Information Assurance Partnership (NIAP). NIAP is one of the leaders behind the significant evolution of the Common Criteria, resulting in ratification of a new Common Criteria Recognition Arrangement last year. Background In 2009, NIAP advocated for a radical change in the CC by creating Protection Profiles quickly for many technology types. As described by NIAP [1]: In this new paradigm, NIAP will only accept products into evaluation claiming exact compliance to a NIAP-approved Protection Profile. These NIAP-approved Protection Profiles (PP) produce evaluation results that are achievable, repeatable, and testable — allowing for a more consistent and rapid evaluation process. [2] Using this approach, assurance activities are tailored to technologies and eliminate the need for evaluation assurance levels (pre-cast assurance activities that were generically used for most CC evaluations, also known as EALs). Another key element is the goal of eliminating variables, such as lab experience and a subjective evaluator, that can influence the results. As a result, these changes help make evaluations less “gameable” — a great idea. Even better: with the new approach, instead of requiring confidential internal documentation and sometimes limited amounts of source code, the labs will now only evaluate what is available to any customer — the finished product and its documentation. New Protection Profiles describe assurance testing activities; in theory, a vendor could take the same product and documentation to different labs and receive consistent results. One other objective of NIAP's vision of a revitalized CC is the goal of 90-day evaluations. Under the new paradigm, evaluations should take less than six months (the current time limit) and, after labs and vendors develop experience with the new process, ideally be completed in three months. For vendors, this is a terrific development as it maximizes the amount of time an evaluated product can remain on the market before a new version is released. Customers also benefit greatly from this change — if a product can be certified quickly, a greater number of new products will be available for procurement by customers who require products to be CC-evaluated before acquisition. Using the old system, a product often was outdated — replaced by a new version with new functionality — by the time it finished an evaluation. There is a lot to like with these developments — vendors benefit from a consistent and timely process, customers benefit from greater choice in evaluated products, and labs stand to benefit as more vendors send more products for evaluation using resources that are scalable. Entropy Evaluation Unfortunately, there are a few bumps along this road to assurance nirvana. As security researchers have known for some time, sources of entropy are critical for encryption as they help ensure random number generation is suitably random. As such, NIAP introduced requirements for entropy assessments for every product going through an evaluation [3].
Not a bad idea on the surface; however, in the commercial marketplace the mix of technologies, intellectual property (IP), and approaches to development means it is hard to evaluate entropy (to ensure it is following good practices) without reviewing confidential design documents and — potentially — source code. As noted in Atsec's insightful blog post from September 2013: The TOE Security Assurance Requirements specified in Table 2 of the NDPP [4] is (roughly) equivalent to Evaluation Assurance Level 1 (EAL1) [5] per CC Part 3...The NDPP does not require design documentation of the TOE itself; nevertheless its Annex D does require design documentation of the entropy source — which is often provided by the underlying Operating System (OS). Suppose that a TOE runs on Windows, Linux, AIX and Solaris and so on, some of which may utilize cryptographic acceleration hardware (e.g., Intel processors supporting RDRAND instruction). In order to claim NDPP compliance and succeed in the CC evaluation, the vendor is obligated to provide the design documentation of the entropy source from all those various Operating Systems and/or hardware accelerators. This is not only a daunting task, but also mission impossible because the design of some entropy sources are proprietary to some OS or hardware vendors. At first NIAP stated that Entropy Assessment Reports (EARs) would be OK to submit in draft form in order for an evaluation to start. But NIAP found that finalizing the EAR was taking weeks and months because often the expertise — or documentation — did not exist in the vendor's development organization. Vendors also found that NIAP did not trust the labs to evaluate the EARs and decided to insert its own experts from the Information Assurance Directorate (IAD) into the process. In practice this means that the EAR must be evaluated by IAD before it can be approved; no other evaluation element in the NIAP scheme requires this oversight except that of entropy. More problematic, some disclosure of the entropy source code and design documentation for entropy generation is now required by NIAP. As I noted above, some vendors may have no trouble providing this extra information to complete an entropy assessment; however, if your product relies on an external or third-party entropy source, you may be unable to provide the material (even if you are willing, your source may not wish to cough up their IP). Speeding Up, or Slowing Down? NIAP has a six-month evaluation time limit with a public goal of 90 days for an evaluation. An evaluation involves a vendor providing evidence and a third-party lab evaluating it against certain criteria (in this case the Common Criteria). On the one hand, we have a new Protection Profile paradigm that streamlines evaluations and removes requirements to provide source code and confidential data. However, with the introduction of Entropy Assessment Reports (EARs), vendors are increasingly unable to complete an evaluation within the 90-day window. NIAP's solution is to evaluate an EAR BEFORE a CC evaluation can start. We see this happening now within the NIAP scheme and in other schemes trying to evaluate against NIAP PPs. There are good reasons why NIAP is so concerned with entropy, yet we need to examine how best to meet these requirements in the context of the current IT ecosystem.
As Atsec noted in the same post: In theory, a thorough analysis on the entropy source coupled with some statistical tests on the raw data is absolutely necessary to gain some assurance of the entropy that plays such a vital role in supporting security functionality of IT products...However, the requirements of entropy analysis stated above impose an enormous burden on the vendors as well as the labs to an extent that they are out of balance (in regard to effort expended) compared to other requirements; or in some cases, it may not be possible to meet the requirements.
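For readers curious what "statistical tests on the raw data" can look like in practice, here is a toy sketch (my own, and far simpler than the NIST SP 800-90B test suites an actual assessment would rely on) that runs a basic monobit frequency check and a crude per-byte Shannon entropy estimate over captured samples:

    import math
    import os
    from collections import Counter

    def monobit_fraction(data: bytes) -> float:
        """Fraction of one-bits; an unbiased source should sit near 0.5."""
        ones = sum(bin(b).count("1") for b in data)
        return ones / (len(data) * 8)

    def shannon_entropy_per_byte(data: bytes) -> float:
        """Crude Shannon entropy estimate in bits per byte (8.0 is the maximum)."""
        counts = Counter(data)
        total = len(data)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Stand-in for captured samples; a real assessment measures the raw,
    # unconditioned output of the noise source, not the OS CSPRNG used here.
    raw = os.urandom(65536)
    print(f"one-bit fraction : {monobit_fraction(raw):.4f}")
    print(f"entropy estimate : {shannon_entropy_per_byte(raw):.3f} bits/byte")

Tests like these only bound the problem; as the quote above notes, the real assurance comes from analyzing the source's design, which is exactly the documentation vendors often cannot obtain.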


Industry Insights

FIPS: The Crypto "Catch 22"

Hello, Oracle blog reader! My name is Joshua Brickman and I run Oracle's Security Evaluations team (SECEVAL). At SECEVAL we are charged with shepherding certain Oracle products through security certifications and accreditations, mostly for government use. Today, in my initial blog, I'd like to talk to you about cryptography [1] (or crypto) as it relates to these government certifications.

First, a little history: crypto goes back at least to classical Rome, where the Caesar Cipher was used by the Roman military to pass messages during battle. In World War II, Germany was very successful encrypting messages with a machine known as "Enigma," but when Enigma's secret codes were famously broken (due mostly to procedural errors), it helped turn the tide of the war. The main idea of crypto hasn't changed: only intended recipients should be able to understand a given message; to everyone else it is unintelligible [2]. Today's crypto is implemented by complex software and hardware modules executing cryptographic algorithms. The funny thing about crypto is that someone is always trying to break it, usually with computers running equally complex programs.

One of the U.S. government's approaches to combating code breaking has been the Federal Information Processing Standard (FIPS) program maintained by the National Institute of Standards and Technology (NIST). Through FIPS, NIST publishes the approved algorithms that software must correctly implement in order to claim conformance to the standard; vendors implement their crypto to meet them. So why would technology vendors need to comply? FIPS 140-2 defines the acceptable algorithms and modules that can be procured by the U.S. government, and NIST keeps a list of approved modules (implementations of algorithms) on its website. To get on that list, vendors like Oracle must have their modules and algorithms validated. Unfortunately for many technology providers, there isn't any alternative to FIPS 140-2 (version 2 is the currently approved release) [3]: it is required by U.S. government law [4] if a vendor wants to do business with the government.

Validation is expensive and time-consuming. Vendors must hire a third-party lab to perform the validation, and most also hire consultants to write the documentation required as part of the process. The FIPS 140-2 validation process requires vendors to prove they have implemented their crypto correctly against the approved algorithms; the labs verify that each module correctly implements the selected algorithms.

FIPS 140-2 has also been adopted by banks and other financial services firms. If you think about it, this makes a lot of sense: there is nothing more sensitive than financial data, and banks want assurance that the crypto they depend on to protect it is as trusted as the safes that protect the cash. Interestingly, smart-card manufacturers also require FIPS: this is not a government mandate, but rather a financial-industry requirement to keep fraud rates down (by making the cost of breaking the crypto in smart cards too high for criminals to bother). Finally, the Canadian government partners with NIST on the FIPS 140-2 program, known as the Cryptographic Module Validation Program (CMVP), and mirrors U.S. procurement requirements.
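To give a flavor of what "proving the crypto is implemented correctly" involves, validation testing leans heavily on published test vectors, so-called known-answer tests. Below is a deliberately tiny sketch of the idea (my own illustration, using Python's standard hashlib as the module under test and the SHA-256 "abc" vector from FIPS 180; real validation testing covers far more vectors, algorithms, and modes):

    import hashlib

    # Known-answer test (KAT): feed a published input to the module under test and
    # compare its output with the expected value from the standard's test vectors.
    KAT_INPUT = b"abc"
    EXPECTED_SHA256 = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

    actual = hashlib.sha256(KAT_INPUT).hexdigest()
    assert actual == EXPECTED_SHA256, f"SHA-256 known-answer test failed: {actual}"
    print("SHA-256 known-answer test passed")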
All of this would simply be the cost of doing business with the US (and Canadian) governments, except that a couple of elements are broken.

1. Government customers who buy crypto technology require a FIPS 140-2 certificate of validation (not a letter from a lab or an "under validation" listing). In our experience, a reasonable timeframe for a validation is a few months. Unfortunately, after all of the work is completed, the final "reports" (which lead to the certificates) go into a black hole otherwise known as the "In Review" status. At this point the vendor, lab (and consultant) have done as much work as possible and are waiting for both the Canadian and US governments to review the case for validation. The queue (as I write this blog) has run as long as nine months over the last couple of years. An Oracle example was the Sun Crypto Accelerator (SCA) 6000 product. On January 14, 2013, our lab submitted the final FIPS 140-2 report for the SCA6000 to NIST. Seven months later, we received the first feedback on our case for validation, and we finally received our certificate on September 11, 2013. NIST is quite aware of the delays [5]; they are considering several options, including but not limited to increasing fees to pay for additional contractors, pushing more work to the labs, and revising their review process, but they haven't announced any changes that might resolve the issue other than minor fee increases. From our point of view, it is difficult to justify the significant investment these validations require when you cannot predict their completion. Given the shelf life of technology these days, a delay of, say, a year in a two-year "life-span" of a product reduces its value to the customer by 50%. As a vendor we have no control over that delay, and unless customers are willing to buy unvalidated products (which they cannot, by law, in the U.S.), they are forced to use older products that may be missing key new features.

2. NIST also refers to outdated requirements [6] under the Common Criteria. The FIPS 140-2 standard references Common Criteria Evaluation Assurance Levels (EALs) directly when discussing eligibility for the various FIPS levels [7]. However, under current policies of the U.S. Scheme of the Common Criteria (the National Information Assurance Partnership, or NIAP), it is no longer possible to be evaluated at EAL3 or higher. This prevents any vendor from obtaining a FIPS 140-2 validation of any software or firmware cryptographic module at any level higher than Level 1. One example of an Oracle product that we would have validated higher than Level 1 is the Solaris Cryptographic Framework (SCF). Originally designed to be validated at FIPS Level 2, Oracle SCF was forced to drop its validation claim down to Level 1. This was because NIST's Implementation Guidance for FIPS 140-2 required that an operating system be evaluated against a (now deprecated) Common Criteria Protection Profile if crypto modules running on that OS were to be validated at Level 2. Unfortunately, only an older version of Solaris had been Common Criteria evaluated against that deprecated Operating System Protection Profile, and the version of SCF that we needed FIPS 140-2 validated didn't run on that old version of Solaris. Oracle had market demands that required timely completion of its FIPS validation, so a business decision was made to lower the bar.
We've pointed out this contradiction (as, I'm sure, have other vendors), but as of this writing NIST has not resolved the problem. Oracle is committed to working with NIAP and NIST to make both Common Criteria and FIPS 140-2 as broadly accepted as possible. U.S. and Canadian government customers want to buy products that meet the high standards defined by FIPS 140-2, standards that many other security-conscious customers also recognize. Along with many other vendors that ship crypto modules in their products, Oracle is very keen to see NIST's queue of work brought down to a level that allows the longest possible shelf life for our validations. We also would like to see NIST and NIAP align their policies and interpretations of standards to provide a greater selection of strong cryptography. Today's obstacles reduce the choice of FIPS 140-2 validated products; removing them would make a larger number of validated crypto modules available for purchase. Increasing the choice of validated products for the US and Canadian governments can only contribute to improved national security.

---

[1] According to Webopedia, "Cryptography is the art of protecting information by transforming it (encrypting it) into an unreadable format, called cipher text. Only those who possess a secret key can decipher (or decrypt) the message into plain text. Encrypted messages can sometimes be broken by cryptanalysis, also called code breaking, although modern cryptography techniques are virtually unbreakable."

[2] "FIPS 140 Demystified, An Introductory Guide for Developers," by Wesley Higaki and Ray Potter, ISBN-13: 978-1460990391, 2011.

[3] FIPS 140-3 has been in draft form for many years. NIST has been pulling key elements of the new standard and adding them to FIPS 140-2 as "Implementation Guidance."

[4] "Federal Information Processing Standards (FIPS) are approved by the Secretary of Commerce and issued by NIST in accordance with the Federal Information Security Management Act of 2002 (FISMA). FIPS are compulsory and binding for federal agencies. FISMA requires that federal agencies comply with these standards, and therefore, agencies may not waive their use."

[5] As noted in the summary of the recent first FIPS Conference, "...The current length of The Queue means that it can take developers many months to get their modules validated and hence available for procurement from federal agencies. In an increasing number of cases, products are obsolete or un-supported by the time the validation is finally documented. We heard how the unpredictability of The Queue is a problem too, since it greatly affects how developers can perform their marketing, sales and project planning."

[6] For example, http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf: "Security Level 3 allows the software and firmware components of a cryptographic module to be executed on a general purpose computing system using an operating system that meets the functional requirements specified in the PPs listed in Annex B with the additional functional requirement of a Trusted Path (FTP_TRP.1) and is evaluated at the CC evaluation assurance level EAL3 (or higher)."

[7] There are four levels of FIPS 140-2: at Level 1 all components must be "production grade," and Level 2 adds requirements for physical tamper-evidence and role-based authentication. Levels 3 and 4 are typically for hardware. Source: http://en.wikipedia.org/wiki/FIPS_140
