
Corporate Security Blog

Industry Insights

Intel Processor L1TF vulnerabilities: CVE-2018-3615, CVE-2018-3620, CVE-2018-3646

Today, Intel disclosed a new set of speculative execution side-channel processor vulnerabilities affecting their processors. These L1 Terminal Fault (L1TF) vulnerabilities affect a number of Intel processors, and they have received three CVE identifiers:

• CVE-2018-3615 impacts Intel Software Guard Extensions (SGX) and has a CVSS Base Score of 7.9.
• CVE-2018-3620 impacts operating systems and System Management Mode (SMM) running on Intel processors and has a CVSS Base Score of 7.1.
• CVE-2018-3646 impacts virtualization software and Virtual Machine Monitors (VMM) running on Intel processors and has a CVSS Base Score of 7.1.

These vulnerabilities derive from a flaw in Intel processors: operations performed by a processor while using speculative execution can result in a compromise of the confidentiality of data between threads executing on a physical CPU core. As with other variants of speculative execution side-channel issues (e.g., Spectre and Meltdown), successful exploitation of the L1TF vulnerabilities requires the attacker to have the ability to run malicious code on the targeted systems. Therefore, the L1TF vulnerabilities are not directly exploitable against servers which do not allow the execution of untrusted code. While Oracle has not yet received reports of successful exploitation of this speculative execution side-channel issue “in the wild,” Oracle has worked with Intel and other industry partners to develop technical mitigations against these issues.

The technical steps Intel recommends to mitigate the L1TF vulnerabilities on affected systems include:

• Ensuring that affected Intel processors are running the latest Intel processor microcode. Intel reports that the microcode update it released for the Spectre v3a (CVE-2018-3640) and Spectre v4 (CVE-2018-3639) vulnerabilities also contains the microcode instructions which can be used to mitigate the L1TF vulnerabilities. Updated microcode by itself is not sufficient to protect against L1TF.
• Applying the necessary OS and virtualization software patches to affected systems. To be effective, OS patches require the presence of the updated Intel processor microcode; conversely, because updated microcode by itself is not sufficient, the corresponding OS and virtualization software updates are also required to mitigate the L1TF vulnerabilities present in Intel processors.
• Disabling Intel Hyper-Threading (HT) technology in some situations. Disabling HT alone is not sufficient to mitigate the L1TF vulnerabilities, and it will result in significant performance degradation.

In response to the L1TF Intel processor vulnerabilities:

Oracle Hardware
• Oracle recommends that administrators of x86-based systems carefully assess the L1TF threat for their systems and implement the appropriate security mitigations. Oracle will provide specific guidance for Oracle Engineered Systems.
• Oracle has determined that Oracle SPARC servers are not affected by the L1TF vulnerabilities.
• Oracle has determined that Oracle Intel x86 servers are not impacted by vulnerability CVE-2018-3615 because the processors in use with these systems do not make use of Intel Software Guard Extensions (SGX).

Oracle Operating Systems (Linux and Solaris) and Virtualization
• Oracle has released security patches for Oracle Linux 7, Oracle Linux 6, and Oracle VM Server for x86. In addition to OS patches, customers should run the current version of the Intel microcode to mitigate these issues. Oracle Linux customers can take advantage of Oracle Ksplice to apply these updates without needing to reboot their systems.
• Oracle has determined that Oracle Solaris on x86 is not affected by vulnerabilities CVE-2018-3615 and CVE-2018-3620 regardless of the underlying Intel processor on these systems. It is, however, affected by vulnerability CVE-2018-3646 when using Kernel Zones; the necessary patches will be provided at a later date.
• Oracle Solaris on SPARC is not affected by the L1TF vulnerabilities.

Oracle Cloud
The Oracle Cloud Security and DevOps teams continue to work in collaboration with our industry partners on implementing the necessary mitigations to protect customer instances and data across all Oracle Cloud offerings: Oracle Cloud (IaaS, PaaS, SaaS), Oracle NetSuite, Oracle GBU Cloud Services, Oracle Data Cloud, and Oracle Managed Cloud Services. Oracle’s first priority is to mitigate the risk of tenant-to-tenant attacks. Oracle will notify and coordinate with affected customers for any required maintenance activities as additional mitigating controls continue to be implemented.

Oracle has determined that a number of Oracle's cloud services are not affected by the L1TF vulnerabilities. They include the Autonomous Data Warehouse service, which provides a fully managed database optimized for running data warehouse workloads, and the Oracle Autonomous Transaction Processing service, which provides a fully managed database service optimized for running online transaction processing and mixed database workloads. No further action is required by customers of these services, as both were found to require no additional mitigating controls based on service design and are not affected by the L1TF vulnerabilities (CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646).

Bare metal instances in Oracle Cloud Infrastructure (OCI) Compute offer full control of a physical server and require no additional Oracle code to run. By design, bare metal instances are isolated from other customer instances on the OCI network, whether they be virtual machines or bare metal. However, for customers running their own virtualization stack on bare metal instances, the L1TF vulnerability could allow a virtual machine to access privileged information from the underlying hypervisor or from other VMs on the same bare metal instance. These customers should review the Intel recommendations about vulnerabilities CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646, and make changes to their configurations as they deem appropriate.

Note that many industry experts anticipate that new techniques leveraging these processor flaws will continue to be disclosed for the foreseeable future. Future speculative execution side-channel processor vulnerabilities are likely to continue to primarily impact operating systems and virtualization platforms, as addressing them will likely require both software and microcode updates. Oracle therefore recommends that customers remain on current security release levels, including firmware and applicable microcode updates (delivered as firmware or OS patches), as well as software upgrades.

For more information:
• The information in this blog entry is also published as MOS Note 2434830.1: “Information about the L1TF Intel processor vulnerabilities (CVE-2018-3615, CVE-2018-3620, CVE-2018-3646)”
• Oracle Linux customers can refer to the bulletins located at https://linux.oracle.com/cve/CVE-2018-3620.html and https://linux.oracle.com/cve/CVE-2018-3646.html
• Solaris customers should refer to MOS Note 2434208.1: “L1 Terminal Fault (CVE-2018-3615, CVE-2018-3620, & CVE-2018-3646) Vulnerabilities” and MOS Note 2434206.1: “Disabling x86 Hyperthreading in Oracle Solaris”
• Oracle x86 hardware customers should refer to MOS Note 2434171.1: “L1 Terminal Fault (CVE-2018-3620, CVE-2018-3646) Vulnerabilities on Oracle x86 Servers”
• For information about the availability of Intel microcode for Oracle hardware, see MOS Note 2406316.1: “CVE-2018-3640 (Spectre v3a), CVE-2018-3639 (Spectre v4) Vulnerabilities: Intel Processor Microcode Availability”
• The “Oracle Cloud Security Response to Intel L1TF Vulnerabilities” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_response.htm
• The “Oracle Cloud Infrastructure Customer Advisory for L1TF Impact on the Compute Service” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_computeimpact.htm
• The “Oracle Cloud Infrastructure Customer Advisory for L1TF Impact on the Database Service” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_databaseimpact.htm
• The document “Protecting your Compute Instance Against the L1TF Vulnerability” is located at https://docs.cloud.oracle.com/iaas/Content/Security/Reference/L1TF_protectinginstance.htm
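Administrators assessing the L1TF threat on Linux can start by querying the kernel's own view of its mitigations. The sketch below reads the sysfs vulnerability entries that recent Linux kernels expose (the `l1tf` entry shipped with the kernels that added the L1TF mitigations); which entries exist varies by kernel version, so treat this as a best-effort check rather than an authoritative audit.

```python
from pathlib import Path

# sysfs directory where recent Linux kernels report speculative execution
# mitigation status; the set of entries varies by kernel version.
SYSFS = Path("/sys/devices/system/cpu/vulnerabilities")
VULNS = ["l1tf", "spectre_v1", "spectre_v2", "spec_store_bypass", "meltdown"]

def mitigation_status(name, base=SYSFS):
    """Return the kernel-reported status string for one vulnerability,
    or None if this kernel does not expose an entry for it."""
    try:
        return (base / name).read_text().strip()
    except OSError:
        return None

if __name__ == "__main__":
    for vuln in VULNS:
        status = mitigation_status(vuln)
        print(f"{vuln:18} {status or 'not reported by this kernel'}")
```

A status line such as "Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable" indicates that the OS-level mitigation is active but, consistent with the guidance above, that SMT (Hyper-Threading) remains a consideration for virtualization hosts.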


Critical Patch Updates

Security Alert CVE-2018-3110 Released

Oracle just released Security Alert CVE-2018-3110. This vulnerability affects Oracle Database versions 11.2.0.4 and 12.2.0.1 on Windows. It has received a CVSS Base Score of 9.9, and it is not remotely exploitable without authentication. Vulnerability CVE-2018-3110 also affects Oracle Database version 12.1.0.2 on Windows as well as Oracle Database on Linux and Unix; however, patches for those versions and platforms were included in the July 2018 Critical Patch Update.

Due to the nature of this vulnerability, Oracle recommends that customers apply these patches as soon as possible. This means that:
• Customers running Oracle Database versions 11.2.0.4 and 12.2.0.1 on Windows should apply the patches provided by the Security Alert.
• Customers running version 12.1.0.2 on Windows or any version of the database on Linux or Unix should apply the July 2018 Critical Patch Update if they have not already done so.

For More Information:
• The Advisory for Security Alert CVE-2018-3110 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2018-3110-5032149.html
• The Advisory for the July 2018 Critical Patch Update is located at http://www.oracle.com/technetwork/security-advisory/cpujul2018-4258247.html


Critical Patch Updates

July 2018 Critical Patch Update Released

Oracle today released the July 2018 Critical Patch Update. This Critical Patch Update provided security updates for a wide range of product families, including: Oracle Database Server, Oracle Global Lifecycle Management, Oracle Fusion Middleware, Oracle E-Business Suite, Oracle PeopleSoft, Oracle Siebel CRM, Oracle Industry Applications (Construction, Communications, Financial Services, Hospitality, Insurance, Retail, Utilities), Oracle Java SE, Oracle Virtualization, Oracle MySQL, and Oracle Sun Systems Products Suite. 37% of the vulnerabilities fixed with this Critical Patch Update are for third-party components included in Oracle product distributions.

The CVSS v3 Standard considers vulnerabilities with a CVSS Base Score between 9.0 and 10.0 to have a qualitative rating of “Critical,” and vulnerabilities with a CVSS Base Score between 7.0 and 8.9 to have a qualitative rating of “High.” While Oracle cautions against performing quantitative analysis of the content of each Critical Patch Update release because such analysis is excessively complex (e.g., the same CVE may be listed multiple times because certain components are widely used across different products), it is fair to note that bugs in third-party components make up a disproportionate share of the severe vulnerabilities in this Critical Patch Update: 90% of the Critical vulnerabilities addressed are for non-Oracle CVEs, and non-Oracle CVEs also make up 56% of the Critical and High vulnerabilities addressed.

Finally, note that many industry experts anticipate that a number of new variants of exploits leveraging known flaws in modern processor designs (currently referred to as “Spectre” variants) will continue to be discovered. Oracle is actively engaged with Intel and other industry partners to develop technical mitigations against these processor vulnerabilities as they are reported.
For more information about this Critical Patch Update, customers should refer to the Critical Patch Update Advisory and the executive summary published on My Oracle Support (Doc ID 2420273.1).  
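The qualitative bands quoted above come from the CVSS v3 specification's severity rating scale, which also defines “None” (0.0), “Low” (0.1-3.9), and “Medium” (4.0-6.9). A minimal sketch of the mapping:

```python
def cvss_v3_rating(base_score: float) -> str:
    """Map a CVSS v3 base score to its qualitative severity rating,
    per the CVSS v3 qualitative severity rating scale."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"

# Scores cited in these posts fall into the expected bands:
print(cvss_v3_rating(9.9))  # Security Alert CVE-2018-3110 -> Critical
print(cvss_v3_rating(7.1))  # L1TF CVE-2018-3620 / CVE-2018-3646 -> High
```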


Security Updates

Updates about the “Spectre” series of processor vulnerabilities and CVE-2018-3693

A new processor vulnerability was announced today. Vulnerability CVE-2018-3693 (“Bounds Check Bypass Store” or BCBS) is closely related to Spectre v1. As with previous iterations of Spectre and Meltdown, Oracle is actively engaged with Intel and other industry partners to develop technical mitigations against this processor vulnerability.

Note that many industry experts anticipate that a number of new variants of exploits leveraging these known flaws in modern processor designs will continue to be disclosed for the foreseeable future. These issues are likely to primarily impact operating systems and virtualization platforms, and may require software updates, microcode updates, or both. Fortunately, the conditions of exploitation for these issues remain similar: malicious exploitation requires attackers to first obtain the privileges required to install and execute malicious code against the targeted systems.

In regard to vulnerabilities CVE-2018-3640 (“Spectre v3a”) and CVE-2018-3639 (“Spectre v4”), Oracle has determined that the SPARC processors manufactured by Oracle (i.e., SPARC M8, T8, M7, T7, S7, M6, M5, T5, T4, T3, T2, T1) are not affected by these variants. In addition, Oracle has delivered microcode patches for the last four generations of Oracle x86 servers. As with previous versions of the Spectre and Meltdown vulnerabilities (see MOS Note ID 2347948.1), Oracle will publish information about these issues on My Oracle Support.


Oracle Security

Updates about processor vulnerabilities CVE-2018-3640 (“Spectre v3a”) and CVE-2018-3639 (“Spectre v4”)

Two new processor vulnerabilities were publicly disclosed on May 21, 2018: CVE-2018-3640 (“Spectre v3a” or “Rogue System Register Read”) and CVE-2018-3639 (“Spectre v4” or “Speculative Store Buffer Bypass”). Both vulnerabilities have received a CVSS Base Score of 4.3.

Successful exploitation of vulnerability CVE-2018-3639 requires local access to the targeted system, and mitigating this vulnerability on affected systems requires both software and microcode updates. Successful exploitation of vulnerability CVE-2018-3640 also requires local access to the targeted system; mitigating this vulnerability on affected Intel processors is performed solely by applying updated processor-specific microcode.

Working with the industry, Oracle has just released the required software updates for Oracle Linux and Oracle VM, along with the microcode recently released by Intel for certain x86 platforms. Oracle will continue to release new microcode updates and firmware patches as production microcode becomes available from Intel. As for previous versions of the Spectre and Meltdown vulnerabilities (see MOS Note ID 2347948.1), Oracle will publish a list of products affected by CVE-2018-3639 and CVE-2018-3640, along with other technical information, on My Oracle Support (MOS Note ID 2399123.1). In addition, the Oracle Cloud teams will work to identify and apply the necessary updates, if warranted, as they become available from Oracle and third-party suppliers, in accordance with applicable change management processes.
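Since CVE-2018-3640 is mitigated entirely through updated microcode, one practical verification step on Linux x86 systems is to confirm which microcode revision the processors are actually running. A hedged sketch: the `microcode` field below is what the Linux kernel prints in /proc/cpuinfo on x86, but the revision value that corresponds to a fixed microcode release must come from your hardware vendor's advisory.

```python
def microcode_revisions(cpuinfo_text: str) -> set:
    """Extract the distinct microcode revision strings from the contents
    of /proc/cpuinfo (one 'microcode : 0x...' line per logical CPU)."""
    revisions = set()
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "microcode":
            revisions.add(value.strip())
    return revisions

if __name__ == "__main__":
    # On a healthy system all logical CPUs report the same revision.
    with open("/proc/cpuinfo") as f:
        print("running microcode revision(s):", microcode_revisions(f.read()))
```

A set containing more than one revision would suggest the microcode update was not applied uniformly across all packages, which is worth investigating before relying on the mitigation.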


Critical Patch Updates

April 2018 Critical Patch Update Released

Oracle today released the April 2018 Critical Patch Update. This Critical Patch Update provided security updates for a wide range of product families, including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Oracle PeopleSoft, Oracle Industry Applications (Construction, Financial Services, Hospitality, Retail, Utilities), Oracle Java SE, and Oracle Systems Products Suite.

Approximately 35% of the security fixes provided by this Critical Patch Update are for non-Oracle Common Vulnerabilities and Exposures (CVEs): that is, security fixes for third-party products (e.g., open source components) that are included in traditional Oracle product distributions. In many instances, the same CVE is listed multiple times in the Critical Patch Update Advisory, because a vulnerable common component (e.g., Apache) may be present in many different Oracle products.

Note that Oracle started releasing security updates in response to the Spectre (CVE-2017-5715 and CVE-2017-5753) and Meltdown (CVE-2017-5754) processor vulnerabilities with the January 2018 Critical Patch Update. Customers should refer to that Advisory and the “Addendum to the January 2018 Critical Patch Update Advisory for Spectre and Meltdown” My Oracle Support note (Doc ID 2347948.1) for information about newly-released updates. At this point in time, Oracle has issued the corresponding security patches for Oracle Linux and Virtualization and for Oracle Solaris on SPARC (SPARC 64-bit systems are not affected by Meltdown), and Oracle is working on producing the necessary updates for Solaris on x86 (the diversity of supported processors complicates the creation of the security patches related to these issues).

For more information about this Critical Patch Update, customers should refer to the Critical Patch Update Advisory and the executive summary published on My Oracle Support (Doc ID 2383583.1).


Critical Patch Updates

Security Alert CVE-2017-9805 Released

Last week, Equifax identified an Apache Struts 2 vulnerability, CVE-2017-5638, as having been exploited in a significant security incident. Oracle distributed the Apache Foundation’s fixes for CVE-2017-5638 several months ago in the April 2017 Critical Patch Update, which should have already been applied to customer systems well before this breach came to light.

Recently, the Apache Foundation released fixes for a number of additional Apache Struts 2 vulnerabilities, including CVE-2017-9805, CVE-2017-7672, CVE-2017-9787, CVE-2017-9791, CVE-2017-9793, CVE-2017-9804, and CVE-2017-12611. Oracle just published Security Alert CVE-2017-9805 in order to distribute these fixes to our customers. Please refer to the Security Alert advisory for the technical details of these bugs as well as the CVSS Base Score information.

Oracle strongly recommends that customers apply the fixes contained in this Security Alert as soon as possible. Furthermore, Oracle reminds customers that they should keep up with security releases and should have applied the July 2017 Critical Patch Update (the most recent Critical Patch Update release). The next Critical Patch Update release is on October 17, 2017.

For More Information:
• The Security Alerts and Critical Patch Updates page is located at https://www.oracle.com/technetwork/topics/security/alerts-086861.html
• A blog entry titled "Take Advantage of Oracle Software Security Assurance" is located at https://blogs.oracle.com/oraclesecurity/take-advantage-of-oracle-software-security-assurance. This blog entry provides a description of the Critical Patch Update and Security Alert programs and general recommendations around security patching.


Oracle Security

Securing the Oracle Cloud

Technology safeguards, fewer risks, and unparalleled security motivate CIOs to embrace cloud computing.

If one thing is constant in the IT world, it's change. Consider the age-old dilemma of security versus innovation. Just a few years ago, concerns about data security and privacy prevented some organizations from adopting cloud-based business models. Today, many of these concerns have been alleviated. IT leaders are migrating their applications and data to the cloud in order to benefit from security features offered by some cloud providers. The key is to choose the right technology: one that is designed to protect users, enhance the safeguarding of data, and better address requirements under privacy laws. Find out why millions of users rely on advanced and complete cloud services to transform fundamental business processes more quickly and confidently than ever before.

The Evolving Security Landscape
• Sophisticated threats: 76 percent of organizations experienced a security incident.1
• Security alert overload: midsize companies average 16,937 alerts per week; only 19 percent are reliable and 4 percent are investigated.2
• Scarcity of talent: 66 percent of cybersecurity jobs cannot be filled by skilled candidates.3
• Porous perimeter: 91 percent of organizations have security concerns about adopting cloud; only 14 percent believe traditional security is enough.4

Mitigating the Risk of Data Loss with Cloud Technology
The IT security practices of many organizations that manage their own systems may not be strong enough to resist complex threats from malware, phishing schemes, and advanced persistent threats unleashed by malicious users, cybercriminal organizations, and state actors. The perimeter-based security controls typically implemented by organizations that manage their own security (firewalls, intrusion detection systems, and antivirus software) are arguably no longer sufficient to prevent these threats. It's time to look further. It's time to look to the cloud.

Thousands of organizations and millions of users obtain a better security position using a tier 1 public cloud provider than they can obtain in their own data centers. A full 78 percent of businesses surveyed say the cloud can improve both their security and their agility.5 Consider the facts: most of today's security budgets are used to protect the network, with less than a third used to protect data and intellectual property that resides inside the organization.6 Network security is important, but it's not enough.

Building Oracle's Defense-in-Depth Strategy
Oracle Cloud is built around multiple layers of security and multiple levels of defense throughout the technology stack. Redundant controls provide exceptional resiliency, so if a vulnerability is discovered and exploited in one layer, the unauthorized user will be confronted with another security control in the next layer. But having some of the world's best security technology is only part of the story. Oracle aligns people, processes, and technology to offer an integrated defense-in-depth platform:
• Preventive controls designed to mitigate unauthorized access to sensitive systems and data
• Detective controls designed to reveal unauthorized system and data changes through auditing, monitoring, and reporting
• Administrative measures to address security policies, practices, and procedures

Gaining an Edge with Cloud Security
In the Digital Age, companies depend on their information systems to connect with customers, sell products, operate equipment, maintain inventory, and carry out a wide range of other business processes. If your data is compromised, IT assets quickly become liabilities. A 2016 Ponemon Institute study found that the average cost of a data breach continues to rise each year, with each lost or stolen record that contains confidential information representing US$158 in costs or penalties.8 In response, more and more organizations are transitioning their information systems to the cloud to achieve better security for sensitive data and critical business processes.

Security used to be an inhibitor to moving to the cloud. Now it's an enabler to get you where you need to go. Oracle helps you embrace the cloud quickly, and with confidence. Learn more about Oracle security cloud services, read the paper "Oracle Infrastructure and Platform Cloud Services Security", and try Oracle Cloud today.

---
1 QuinStreet Enterprise, "2015 Security Outlook: Meeting Today's Evolving Cyber-Threats."
2 Ponemon Institute, "The Cost of Malware Containment," 2015.
3 Leviathan Security Group, "Quantifying the Cost of Cloud Security."
4 Crowd Research Partners, "Cloud Security: 2016 Spotlight Report."
5 Coleman Parkes Research, "A Secure Path to Digital Transformation."
6 CSO Market Pulse, "An Inside-Out Approach to Enterprise Security."
7 Jeff Kauflin, "The Fast-Growing Job with a Huge Skills Gap: Cyber Security," Forbes, March 16, 2017.
8 IBM Security, "2016 Ponemon Cost of Data Breach Study."


Oracle Security

Oracle's Security Fixing Practices

In a previous blog entry, we discussed how Oracle customers should take advantage of Oracle's ongoing security assurance effort in order to help preserve their security posture over time. In today's blog entry, we discuss the highlights of Oracle's security fixing practices and their implications for Oracle customers.

As stated in the previous blog entry, the Critical Patch Update program is Oracle's primary mechanism for the delivery of security fixes in all supported Oracle product releases, and the Security Alert program provides for the release of fixes for severe vulnerabilities outside of the normal Critical Patch Update schedule. Oracle always recommends that customers remain on actively-supported versions and apply the security fixes provided by Critical Patch Updates and Security Alerts as soon as possible.

So, how does Oracle decide to provide security fixes? Where does the company start (i.e., for which product versions are security fixes first generated)? What goes into security releases? What are Oracle's objectives?

The primary objective of Oracle's security fixing policies is to help preserve the security posture of ALL Oracle customers. This means that Oracle tries to fix vulnerabilities in severity order for each Oracle product family. In certain instances, security fixes cannot be backported; in other instances, lower-severity fixes are required because of dependencies among security fixes. Additionally, Oracle treats customers equally by providing all customers with the same vulnerability information and access to fixes across actively-used platform and version combinations at the same time. Oracle does not provide additional information about the specifics of vulnerabilities beyond what is provided in the Critical Patch Update (or Security Alert) advisory and pre-release note, the pre-installation notes, the readme files, and FAQs. The only, narrow exception to this practice is for customers who report a security vulnerability: Oracle will treat such a customer in much the same way the company treats security researchers. The customer gets detailed information about the vulnerability as well as the expected fixing date, and in some instances access to a temporary patch to test the effectiveness of a given fix. However, the scope of the information shared between Oracle and the customer is limited to the original vulnerability reported by the customer.

Another objective of Oracle's security fixing policies is not so much to produce fixes as quickly as possible as it is to make sure that these fixes get applied by customers as quickly as possible. Prior to 2005 and the introduction of the Critical Patch Update program, security fixes were published by Oracle as they were produced by development, without any fixed schedule (much as Oracle would today release a Security Alert). The feedback we received was that this lack of predictability was challenging for customers, and as a result, many customers reported that they no longer applied fixes. Customers said that a predictable schedule would help them ensure that security fixes were picked up more quickly and consistently. As a result, Oracle created the Critical Patch Update program to bring predictability to Oracle customers. Since 2005, and in spite of a growing number of product families, Oracle has never missed a Critical Patch Update release.

It is also worth noting that Critical Patch Update releases for most Oracle products are cumulative. This means that by applying a Critical Patch Update, a customer gets all the security fixes included in that specific Critical Patch Update release as well as all the previously-released fixes for a given product-version combination. This allows customers who may have missed Critical Patch Update releases to quickly "catch up" to current security releases.

Let's now have a look at the order in which Oracle produces fixes for security vulnerabilities. Security fixes are produced by Oracle in the following order:
• Main code line. The main code line is the code line for the next major release version of the product.
• Patch sets for non-terminal release versions. Patch sets are rollup patches for major release versions. (A terminal release version is a version for which no additional patch sets are planned.)
• Critical Patch Updates. These are fixes against initial release versions or their subsequent patch sets.

This means that, in certain instances, security fixes can be backported for inclusion in future patch sets or product releases before their actual inclusion in a future Critical Patch Update release. This also means that systems updated with patch sets or upgraded to a new product release will receive the security fixes previously included in the patch set or release. One consequence of these practices is that newer Oracle product versions tend to provide an improved security posture over previous versions, because they benefit from the inclusion of security fixes that have not been, or cannot be, backported by Oracle.

In conclusion, the best way for Oracle customers to fully leverage Oracle's ongoing security assurance effort is to:
• Remain on actively-supported release versions and their most recent patch set, so that they have continued access to security fixes;
• Move to the most recent release version of a product, so that they benefit from fixes that cannot be backported and from other security enhancements introduced in the code line over time;
• Promptly apply Critical Patch Updates and Security Alert fixes, so that they prevent the exploitation of vulnerabilities patched by Oracle, which are known to malicious attackers and can be quickly weaponized after the release of Oracle fixes.

For more information:
- Oracle Software Security Assurance website
- Security Alerts and Critical Patch Updates


Oracle Security

Take Advantage of Oracle Software Security Assurance

In a previous blog entry (What is Assurance and Why Does It Matter?), Mary Ann Davidson explains the importance of security assurance and introduces Oracle Software Security Assurance, Oracle’s methodology for building security into the design, build, testing, and maintenance of its products. The primary objective of software security assurance is to help ensure that the security controls provided by software are effective, work in a predictable fashion, and are appropriate for that software. The purpose of ongoing security assurance is to make sure that this objective continues to be met over time, throughout the useful life of the software.

The development of enterprise software is a complex matter. Even in mature development organizations, bugs still occur, and the use of automated tools does not completely prevent software defects. One important aspect of ongoing security assurance is therefore to remediate security bugs in released code. Another aspect is to ensure that the security controls provided by software continue to be appropriate when the use cases for the software change. For example, years ago backups were performed mostly on tapes or other devices physically connected to the server being backed up, while today many backups are performed over private or public networks and sometimes stored in a cloud. Finally, ongoing security assurance activities also address changing threats (e.g., new attack methods) and obsolete technologies (e.g., deprecated encryption algorithms).

Oracle customers need to take advantage of Oracle's ongoing security assurance efforts in order to preserve, over time, the security posture associated with their use of Oracle products. To that end, Oracle recommends that customers remain on actively-supported versions and apply security fixes as quickly as possible after they have been published by Oracle.
Introduced in 2005, the Critical Patch Update program is the primary mechanism for the backport of security fixes for all Oracle on-premises products. The Critical Patch Update is Oracle’s program for the distribution of security fixes in previously-released versions of Oracle software. Critical Patch Updates are regularly scheduled: they are issued quarterly on the Tuesday closest to the 17th of the month in January, April, July, and October. This fixed schedule is intended to provide enough predictability to enable customers to apply security fixes in normal maintenance windows. Furthermore, the dates of the Critical Patch Update releases are intended to fall outside of traditional "blackout" periods when no changes to production systems are typically allowed (e.g., end of fiscal years or quarters, or significant holidays).

Note that in addition to this regularly-scheduled program for security releases, Oracle retains the ability to issue out-of-schedule patches or workaround instructions in case of particularly critical vulnerabilities and/or when active exploits are reported "in the wild." This program is known as the Security Alert program.

Critical Patch Update and Security Alert fixes are only provided for product versions that are "covered under the Premier Support or Extended Support phases of the Lifetime Support Policy." This means that Oracle does not backport fixes to product versions that are out of support. Furthermore, unsupported product releases are not tested for the presence of vulnerabilities. It is, however, common for vulnerabilities to be found in legacy code, and vulnerabilities fixed in a given Critical Patch Update release can also affect older product versions that are no longer supported. As a result, organizations choosing to continue to use unsupported systems face increasing risks over time.
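The scheduling rule above ("the Tuesday closest to the 17th" of January, April, July, and October) is mechanical enough to compute. As a minimal sketch for planning maintenance windows (the function name is mine, not an Oracle API), a given quarter's Critical Patch Update date can be derived as follows:

```python
from datetime import date, timedelta

def cpu_date(year: int, month: int) -> date:
    """Return the Critical Patch Update date for a quarter: the Tuesday
    closest to the 17th of January, April, July, or October."""
    assert month in (1, 4, 7, 10), "CPUs are released in Jan, Apr, Jul, Oct"
    anchor = date(year, month, 17)
    # weekday(): Monday == 0 ... Sunday == 6, so Tuesday == 1.
    # Signed offset in [-3, 3] from the 17th to the nearest Tuesday.
    offset = (1 - anchor.weekday() + 3) % 7 - 3
    return anchor + timedelta(days=offset)
```

For example, the 17th of April 2017 fell on a Monday, so that quarter's Critical Patch Update date was Tuesday, April 18, matching the April 2017 advisory cited later in this post.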
Malicious attackers are known to reverse-engineer the content of published security fixes, and it is common for exploit code to be published in hacking frameworks soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert. Continuing to use unsupported systems can therefore have two serious implications: (a) unsupported releases are likely to be affected by vulnerabilities that are not known to the affected software user, because these releases are no longer subject to ongoing security assurance activities; and (b) unsupported releases are likely to be vulnerable to flaws that are known to malicious perpetrators, because these bugs have been fixed (and publicly disclosed) in subsequent releases.

Unfortunately, security studies continue to report that, in addition to human errors and system misconfigurations, the lack of timely security patching constitutes one of the greatest contributors to the compromise of IT systems by malicious attackers. See, for example, the Federal Trade Commission’s paper "Start with Security: A Guide for Business", which recommends that organizations have effective means to keep up with security releases of their software (whether commercial or open source). Delays in security patching and overall lapses in good security hygiene have plagued IT organizations for years. In many instances, organizations will report the "fear of breaking something in a business-critical system" as the reason for not keeping up with security patches. Here lies a fundamental paradox: a given system may be considered too important to fail (or to be temporarily brought offline), and this is the reason why it is not kept up to date with security patches! These organizations are, in effect, betting that the cost of a known availability interruption (taking the system down to patch it) outweighs the potential impact of a security incident that could result from not keeping up with a security release.
This amounts to driving a car with very little gas left in the tank and thinking "I don’t have time to stop at the gas station, because I really need my car and I am too busy to gas up." Obviously, the scarcity of technical personnel and the costs associated with testing complex applications and deploying patches further exacerbate the problem. The larger, more complex, and more operation-critical the IT environment, the greater the "to patch or not to patch" conundrum.

In recent years, Oracle has issued stronger cautions against postponing the application of security fixes or knowingly continuing to use unsupported versions. For example, the April 2017 Critical Patch Update Advisory includes the following warning: "Oracle continues to periodically receive reports of attempts to maliciously exploit vulnerabilities for which Oracle has already released fixes. In some instances, it has been reported that attackers have been successful because targeted customers had failed to apply available Oracle patches. Oracle therefore strongly recommends that customers remain on actively-supported versions and apply Critical Patch Update fixes without delay." Keeping up with security releases is simply a critical requirement for preserving the security posture of an IT environment, regardless of the technologies (or vendors) in use.


Industry Insights

What Is Assurance and Why Does It Matter?

If you are an old security hand, you can skip reading this. If you think "assurance" is something you pay for so your repair bills are covered if someone hits your car, please keep reading. Way back in the pre-Internet days, I used to say that computer security was kind of a lonely job, because hardly any customers seemed to be really interested in talking about it. There were, of course, some keenly interested customers, including defense and intelligence agencies and a few banks, most of which were concerned with our security functionality and—to a lesser degree—how we were building security into everything, a difference I will explain below, and which is known as assurance.

Times change. Now, when I meet someone who complains of a virus, it's better-than-even odds that he is talking about the latest digital plague and not a case of the flu. Information technology (IT) has moved way beyond mission-critical applications to things that are literally in the palm of our hands and is in places we never even thought would (or in some cases should) be computerized ("turn your crock pot on remotely? There's an app for that!") More and more of our world is not only IT-based but Internet accessible. Alas, the growth in Internet-accessible whatchamacallits has also led to a growth in Evil Dudes in Upper Wherever wreaking havoc in systems in Anywheresville. This is one big reason that cybersecurity is something (almost) everybody cares about.

Historically, computer security has often been described as "CIA" (Confidentiality, Integrity and Availability):

Confidentiality means that the data is protected such that people who don't need to access it, can't access it, via restrictions on who can view, delete or change data. For example, at Oracle, I can review my salary online (so can my Human Resource representative), but I cannot look at the salaries of employees who do not report to me.

Integrity means that the data hasn't been corrupted (technical term: "futzed with").
In other words, you know that "A" means "A" and isn't really "B" that has been garbled to look like "A." Corrupted data is often worse than no data, precisely because you can't trust it. (Wire transfers wouldn't work if extra 0s were mysteriously and randomly appended to amounts.)

Availability means that you are able to access data (and systems) you have legitimate access to—when you need to. In other words, someone hasn't prevented access by, say, flooding a machine with so many requests that the system just gives up (the digital equivalent of a persistent three-year-old asking for more candy "now mommy now mommy now mommy" to the point where mommy can't think).

C, I and A are all important attributes of security that may vary in terms of importance from system to system. Assurance is not CIA, but it is the confidence that a system does what it was designed to do, including protecting against specific threats, and also that there aren't sneaky ways around the security controls it provides. It's important because, if you don't have a lot of confidence that the CIA cannot be bypassed by Evil Dude (or Evil Dudette), then the CIA isn't useful. If you have a digital doorlock, but the lock designer goofed by allowing anybody to unlock the door by typing '98765,' then you don't have any security once Evil Dude figures out that 98765 always gets him into your house (and shares that with everybody else on the Internet).

Here's the definition of assurance that the US Department of Defense uses: software assurance relates to "the level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software" (https://acc.dau.mil/CommunityBrowser.aspx?id=25749). When I started working in security, most security people knew a lot about the CIA of security, but fewer of us—fewer of anybody—thought about the "functions as intended" and "free of vulnerabilities" part.
"Functions as intended" is a design aspect of security. That means that a designer not only considered what the software (or hardware) was intended to do, but thought about how someone could try to make the software (or hardware) do what it was not intended to do. Both are important because unless you never deploy a product, it's most likely going to be attacked, somehow, somewhere, by someone. Thinking about how Evil Dude can try to break stuff (and making that hard/unlikely to succeed) is a very important part of "functions as intended."

The "free of vulnerabilities" part is also important; having said that, nobody who knows anything about code would say, "all of our code is absolutely perfect." ("Pride goeth before destruction and a haughty spirit before a fall.") That said, one of the most important aspects of assurance is secure coding. Secure coding practices include training your designers, developers, testers (and yes, even documentation writers) about how code can be broken, so people think about that before starting to code. Having a development process that incorporates security into design, development, testing and maintenance is also important. Security isn't a sort of magic pixie dust you can sprinkle over software or hardware after it's all done to magically make it secure—it is a quality, just as structural integrity is part of a building, not something you slap your head over and think, "dang, I forgot the rebar, I need to add some to this building." It's too late after the concrete has set.

Secure coding practices include actively looking for coding errors that could be exploited by an Evil Dude, triaging those coding errors to determine "how bad is bad," fixing the worst stuff the fastest and making sure that a problem is fixed completely. If Evil Dude can break in by typing '^X,' it's tempting to just redo your code so typing ^X doesn't get Evil Dude anything. But that likely isn't the root of the problem (what about ^Y - what does that do?
Or ^Z, ^A...?) Automated tools designed to help find avoidable, preventable defects in software are a huge help (they didn't really exist when I started in security).

Nobody who buys a house expects the house to be 100% perfect, but you'd like to think that the architect hired structural engineers to ensure the walls wouldn't fall over, the contractor had people checking the work all along ("don't skimp on the rebar"), there was a building inspection, etc. Note that even with a really well-designed and well-built house, there is probably a ding or two in the paint somewhere even before you move in—it's probably not letter perfect. Code is like that, too, although a "ding in your code" is probably more significant than a ding in your paint, so there should be far fewer of them.

Assurance matters not only because people who use IT want to know things work as intended—and cannot be easily broken—but because time, money and people are always limited resources. Most companies would rather hire 50 more salespeople to reach more potential customers than hire 50 more people to patch their systems. (For that matter, I'd rather have such strong, repeatable, ingrained secure development practices that instead of hiring 50 more people to fix bad/insecure code, we can use those 50 people to build new, cool (and secure) stuff.) Assurance is always going to be good and necessary, even as the baseline technology we are trying to "assure" continues to change.

One of the most enjoyable aspects of my job is continuing to "bake security in" as we grow in breadth and as we adapt to changes in the market. Many companies are moving from "buy-build-maintain" their own systems to "rent," by using cloud services. (It makes a lot of sense: companies don't typically build apartment buildings in every city their employees visit: they use "cloud housing," a.k.a. hotels.) The increasing move to cloud services comes with security challenges, but also has a lot of security benefits.
If it's hard to find enough IT people to maintain your systems, it's even harder to find enough security people to defend them. A service provider can secure the same thing, 5000 times, much better than 5000 individual customers can. (Or, alternatively, a service provider can secure one big multi-tenant service offering better than the 5000 customers using it can do themselves.) The assurance practices we have adapted from "home grown" software and hardware have already morphed, and will continue to morph, to fit how we build and deliver cloud services. Click here for more information on Oracle assurance.


Security Trends

The State of Open Source Security

Open source components have played a growing role in software development (commercial and in-house development). The traditional role of a developer has evolved from coding most of everything to re-using known and trustworthy components as much as possible. As a result, a growing aspect of software design and development decisions has become the integration of open-source and third-party components into increasingly large and complex software.

The question as to whether open source software is inherently more secure than commercial (i.e., "closed") software has been ardently debated for a number of years. The purpose of this blog entry is not to definitively settle this argument, though I would argue that software (whether open source or closed source) that is developed by security-aware developers tends to be inherently more secure. Regardless of this controversy, there are important security implications regarding the use of open source components.

The wide use of certain components (open source or not) has captured the attention of both security researchers and malicious actors. Indeed, the discovery of a 0-day in a widely-used component can offer malicious actors the prospect of "hacking once and exploiting anywhere," or of a large financial gain if the bug is sold on the black market. As a result, a growing number of security vulnerabilities have been found and reported in open source components. The positive impact of increased security research in widely-used components will (hopefully) be the improved security-worthiness of these components (and the relative fulfillment of the "million eyes" theory), and the increased awareness of the security implications of the use of open source components within development organizations.

In many instances, the vulnerabilities found in public components have been given cute names: POODLE, Heartbleed, Dirty Cow, Shellshock, Venom, etc. (in no particular order).
These names contributed to a sense of urgency (sometimes panic) within many organizations, often to the detriment of a rational analysis of the actual severity of these issues and their relative exploitability in the affected environments. Less security-sophisticated organizations have been particularly affected by this sense of urgency, and many have attempted to scan their environment to find software containing the vulnerability "du jour." However, it has been Oracle's experience that while many free tools can relatively accurately identify the presence of open source components in IT systems, the majority of these tools have an abysmal track record at accurately identifying the version(s) of these components, much less at determining the exploitability of the issues associated with them. As a result, less security-sophisticated organizations are faced with reports containing a large number of false positives, and are unable to make sense of these findings (Oracle Support has seen an increase in the submission of such inaccurate reports).

From a security assurance perspective, I believe that there are three significant and often under-discussed topics related to the use of open source components in complex software development: (1) How can we assess and enhance security assurance activities in open source projects? (2) How can we ensure that these components are obtained securely? (3) How does the use of open source components affect ongoing assurance activities throughout the useful life of associated products?

How can we assess and enhance security assurance activities in open source projects? Assessing the security maturity of an open source project is not necessarily an easy thing.
There are certain tools and derived methodologies and principles (e.g., the Building Security In Maturity Model (BSIMM), SAFECode) that can be used to assess the relative security maturity of commercial software developers, but their application to open source projects is difficult. For example, how can one determine the amount of security skill available in an open source project, and whether code changes are systematically reviewed by skilled security experts? Furthermore, should the software industry come together to devise means of coordinating the role of commercial vendors in helping enhance the security posture of the most common open source projects, for the benefit of all vendors and the community? Is it enough to commit that security fixes be shared with the community when an issue is discovered while a component is being used in a commercial offering?

How can we ensure that these components are obtained securely? A number of organizations (whether developing commercial software or their own systems) are concerned solely about "toxic" licenses when procuring open source components, while they should be equally concerned about bringing in toxic code. One problem is the potential downloading and use of obsolete software (which contains known security flaws that have been fixed in the most recent releases). This problem can relatively easily be solved by requiring developers to only download the most recent releases from the official project repository. Many developers prefer pulling compiled binaries instead of compiling the source code themselves (and verifying its authenticity). Developers should be aware of the risk of pulling malicious code (it's not because it is labelled as "foo" that it actually is "foo"; it may actually be "foo + a nasty backdoor"). There have been several publicly-reported security incidents resulting from the downloading of maliciously altered programs.
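One practical safeguard when pulling binaries is to verify the downloaded artifact against the digest published on the project's official site before using it. A minimal sketch follows (the file path and digest are hypothetical placeholders; many projects additionally publish GPG signatures, which give stronger authenticity guarantees than a digest alone):

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, published_digest: str) -> bool:
    """True only if the local file matches the digest published by the
    project; a mismatch means the artifact must not be used."""
    return hmac.compare_digest(sha256_of(path), published_digest.strip().lower())
```

A build pipeline would fail the build whenever verify_artifact returns False, rather than silently continuing with an unverified component.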
How can we provide the necessary security care for a solution that includes open source components throughout its useful life? Once an organization has decided to use an external component in its solution, it should also consider how it will maintain that solution. The maintenance and patching implications of third-party components are often overlooked. For example, organizations may be faced with hardware limitations in their products. They may have to deprecate hardware products more quickly because a required open source component is no longer supported on a specific platform, or because the technical requirements of subsequent releases of the component exceed the specifications of the hardware. In hardware environments, there is also the obvious question of whether patching mechanisms are available for updating open source components on the platform. There are also problematic implications of the use of open source components in purely-software solutions. Security fixes for open source components are often released unpredictably. How does this unpredictability affect the availability of production systems, or customers' requirement for a fixed maintenance schedule?

In conclusion, the questions listed in this blog entry are just a few of the questions that one should consider when developing technology-based products (which seems to be almost everything these days). These questions are particularly important as open source components represent a large and increasing chunk of the technology supply chain, not only of commercial technology vendors, but also of cloud providers. Security assurance policies and practices should take these questions into consideration, and highlight the fact that open source, while incredibly useful, is not necessarily "free" but requires specific commitments and due diligence obligations.


Industry Insights

Common Criteria and the Future of Security Evaluations

For years, I (and many others) have recommended that customers demand more of their information technology suppliers in terms of security assurance – that is, proof that security is “built in” and not “bolted on,” that security is “part of” the product or service developed and can be assessed in a meaningful way. While many customers are focused on one kind of assurance – the degree to which a product is free from security vulnerabilities – it is extremely important to know the degree to which a product was designed to meet specific security threats (and how well it does that). These are two distinct – and quite complementary – approaches to security, and a distinction that should increasingly be of value to all customers.

The good news is that many IT customers – whether of on-premises products or cloud services – are asking for more “proof of assurance,” and many vendors are paying more attention. Great! At the same time, sadly, a core international standard for assurance, the Common Criteria (CC) (ISO 15408), is at risk. The Common Criteria allows you to evaluate your IT products via an independent lab (certified by the national “scheme” in which the lab is domiciled). Seven levels of assurance are defined – generally, the higher the evaluation assurance level (EAL), the more “proof” you have to provide that your product 1) addresses specific (named) security threats 2) via specific (named) technical remedies to those threats. Over the past few years, CC experts have packaged technology-specific security threats, objectives, functions and assurance requirements into “Protection Profiles” that have a pre-defined assurance level.

The best part of the CC is the CC Recognition Arrangement (CCRA), the benefit of which is that a CC security evaluation done in one country (subject to some limits) is recognized in multiple other countries (27, at present).
The benefit to customers is that they can have a baseline level of confidence in a product they buy because an independent entity has looked at/validated a set of security claims about that product. Unfortunately, the CC is in danger of losing this key benefit of mutual recognition. The main tension is between countries that want fast, cookie-cutter, “one assurance size fits all” evaluations, and those that want (for at least some classes of products) higher levels of assurance. These tensions threaten to shatter the CCRA, with the risk of an “every country for itself,” “every market sector for itself” or, worse, “every customer for itself” attempt to impose inconsistent assurance requirements on vendors that sell products and services in the global marketplace. Customers will not be well-served if there is no standardized and widely-recognized starting point for a conversation about product assurance.

The uncertainty about the future of the CC creates opportunity for new, potentially expensive and unproven assurance validation approaches. Every Tom, Dick, and Harriet is jumping on the assurance bandwagon, whether by developing a new assurance methodology (that the promoters hope will be adopted as a standard, although it’s hardly a standard if one company “owns” the methodology), or by lobbying for the use of one proprietary scanning tool or another (noting that none of the tools that analyze code are themselves certified for accuracy and cost-efficiency, nor are the operators of these tools). Nature abhors a vacuum: if the CCRA fractures, there are multiple entities ready to promote their assurance solutions – which may or may not work. (Note: I freely admit that a current weakness of the CC is that, while vulnerability analysis is part of a CC evaluation, it’s not all that one would want.
A needed improvement would be a mechanism that ensures that vendors use a combination of tools to more comprehensively attempt to find security vulnerabilities that can weaken security mechanisms, and have a risk-based program for triaging and fixing them. Validating that vendors are doing their own tire-kicking – and fixing holes in the tires before the cars leave the factory – would be a positive change.)

Why does this threat of CC balkanization matter? First of all, testing the exact same product or service 27 times won’t in all likelihood lead to a 27-fold security improvement, especially when the cost of the testing is borne by the same entity over and over (the vendor). Worse, since the resources (time, money, and people) that would be used to improve actual security are assigned to jumping through the same hoop 27 times, we may paradoxically end up with worse security. We may also end up with worse security to the extent that there will be less incentive for the labs that do CC evaluations to pursue excellence and cost efficiency in testing if they have less competition (for example, from labs in other countries, as is the case under the CCRA) and they are handed a captive marketplace via country-specific evaluation schemes.

Second, whatever the shortcomings of the CC, it is a strong, broadly-adopted foundation for security that to date has the support of multiple stakeholders. While it may be improved upon, it is nonetheless better to do one thing in one market that benefits and is accepted in 26 other markets than to do 27 or more expensive testing iterations that will not lead to a 27-fold improvement in security. This is especially true in categories of products that some national schemes have deemed “too complex to evaluate meaningfully.” The alternative clearly isn't per-country or per-customer testing, because it is in nobody's interest and not feasible for vendors to do repeated one-off assurance fire-drills for multiple system integrators.
Even if the CC is “not sufficient” for all types of testing for all products, it is still a reputable and strong baseline to build upon.

Demand for Higher Assurance

In part, the continuing demand for higher assurance CC evaluations is due to the nature of some of the products: smart cards, for example, are often used for payment systems, where there is a well understood need for “higher proof of security-worthiness.” Also, smart cards generally have a smaller code footprint and fewer, well-defined interfaces, and thus lend themselves fairly well to more in-depth, higher assurance validation. Indeed, the smart card industry – in a foreshadowing and/or inspiration of CC community Protection Profiles (cPPs) – was an early adopter of devising common security requirements and “proof of security claims,” doubtless understanding that all smart card manufacturers – and the financial institutions who are heavy issuers of them – have a vested interest in “shared trustworthiness.” This is a great example of understanding that, to quote Ben Franklin, “We must all hang together or assuredly we shall all hang separately.” The demand for higher assurance evaluations continues in part because the CC has been so successful. Customers worldwide became accustomed to “EAL4” as the gold standard for most commercial software.
“EAL-none”—the direction of new-style community Protection Profiles (cPPs)—hasn’t captured the imagination of the global marketplace for evaluated software, in part because the promoters of “no-EAL is the new EAL4” have not made the necessary business case for why “new is better than old.” An honest, realistic assessment of “new-style” cPPs would explain both the benefits and the downsides of the new approach as part of making that case. Consumers do not necessarily upgrade their TV just because they are told “new is better than old;” they upgrade because they can see a larger screen, clearer picture, and better value for money.

Product Complexity and Evaluations

To the extent security evaluation methodology can be made more precise and repeatable, it facilitates more consistent evaluations across the board at a lower evaluation cost. However, there is a big difference between products that were designed to do a small set of core functions, using standard protocols, and products that have a broader swathe of functionality and far more flexibility as to how that functionality is implemented. This means that it will be impossible to standardize testing across products in some product evaluation categories. For example, routers use standard Internet protocols (or well-known proprietary protocols) and are relatively well defined in terms of what they do. Therefore, it is far easier to test their security using standardized tests as part of a CC evaluation to, for example, determine attack resistance, correctness of protocol implementation, and so forth. The Network Device Protection Profile (NDPP) is the perfect template for this type of evaluation.
Relational databases, on the other hand, use structured query language (SQL) but that does not mean all SQL syntax in all commercial databases is identical, or that protocols used to connect to the database are all identical, or that common functionality is completely comparable among databases. For example, Oracle was the first relational database to implement commercial row level access control: specifically, by attaching a security policy to a table that causes a rewrite of SQL to enforce additional security constraints. Since Oracle developed (and patented) row level access control, other vendors have implemented similar (but not identical) functionality. As a result, no set of standard tests can adequately test each vendor’s row level security implementation, any more than you can use the same key on locks made by different manufacturers. Prescriptive (monolithic) testing can work for verifying protocol implementations; it will not work in cases where features are implemented differently. Even worse, prescriptive testing may have the effect of “design by test harness.” Some national CC schemes have expressed concerns that an evaluation of some classes of products (like databases) will not be “meaningful” because of the size and complexity of these products [1], or that these products do not lend themselves to repeatable, cross-product (prescriptive) testing. This is true, to a point: it is much easier to do a building inspection of a 1000-square foot or 100-square meter bungalow than of Buckingham Palace. However, given that some of these large, complex products are the core underpinning of many critical systems, does it make sense to ignore them because it’s not “rapid, repeatable and objective” to evaluate even a core part of their functionality? These classes of products are heavily used in the core market sectors the national schemes serve: all the more reason the schemes should not preclude evaluation of them. 
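The row-level access control mechanism described above works by attaching a policy to a table so that queries against it are transparently rewritten with an additional predicate. The toy registry below is a hypothetical illustration of that concept only (not Oracle's actual implementation), and it handles only a bare SELECT with no existing WHERE clause:

```python
# Hypothetical sketch of policy-driven SQL rewriting, in the spirit of
# row-level access control: table name -> function(user) -> predicate.
policies = {}

def attach_policy(table, predicate_fn):
    """Register a policy; predicate_fn returns a WHERE-clause fragment."""
    policies[table] = predicate_fn

def rewrite(query, table, user):
    """Naive rewrite: AND the policy predicate onto the query.
    Real implementations parse the SQL and use bind variables; this
    sketch assumes a simple 'SELECT ... FROM table' with no WHERE."""
    if table not in policies:
        return query
    return f"{query} WHERE {policies[table](user)}"

# Example policy: managers may only see rows for their own reports.
# (Interpolating the user directly is an injection hazard; a real
# implementation would bind the value instead.)
attach_policy("employees", lambda user: f"manager_id = '{user}'")
```

Because the rewrite happens below the application, every query path is subject to the policy, which is precisely why implementations of this feature differ enough across vendors that no single prescriptive test suite can cover them all.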
Worse, given that customers subject to these CC schemes still want evaluated products, a lack of mutual recognition of these evaluations (thus breaking the CCRA), or negation of the ability to evaluate at all, merely drives costs up. Demand for inefficient and ineffective ad hoc security assurances continues to increase, and it will explode if vendors are precluded from evaluating entire classes of products that are widely used and highly security relevant. No national scheme, despite good intentions, can successfully control its national marketplace, much less the global marketplace for information technology.

Innovation

One downside of rapid, basic, vanilla evaluations is that they stifle the uptake of innovative security features in a customer base that has a lot to protect. Most security-aware customers (like defense and intelligence customers) want new and innovative approaches to security to support their mission. They also want those innovations vetted properly (via a CC evaluation). Typically, a community Protection Profile (cPP) defines the set of minimum security functions that a product in category X performs. Add-ons can in theory be handled via an extended package (EP), if the community agrees to it and the schemes allow it. The vendor and customer community should encourage the ability to evaluate innovative solutions through an EP, as long as the EP does not specify a particular approach to a threat to the exclusion of other ways to address it. This would continue to advance the state of the security art in particular product categories without waiting until absolutely everyone has Security Feature Y. It’s almost always a good thing to build a better mousetrap: there are always more mice to fend off.
Rapid adoption of EPs would enable security-aware customers, many of whom are required to use evaluated products, to adopt new features readily, without waiting for: a) every vendor to have a solution addressing that problem (especially since some vendors may never develop similar functionality), b) the cPP to be modified, and c) all vendors to have evaluated against the new cPP that includes the new security feature. Given the increasing focus of governments on improving security (in some cases by legislation), national schemes should be first in line to support “faster innovation/faster evaluation,” in service of the customer base they are purportedly serving. Last but really first: in the absence of the ability to rapidly evaluate new, innovative security features, the customers who would most benefit from those features may be unable or unwilling to use them, or may use them only at the expense of “one-off” assurance validation. Is it really in anyone’s interest to ask vendors to run repeated one-off assurance fire drills for multiple system integrators?

Conclusion

The Common Criteria—and in particular, Common Criteria recognition—form a valuable, proven foundation for assurance in a digital world that is increasingly in need of it. That strong foundation can nonetheless be strengthened by:

1) recognizing and supporting the legitimate need for higher-assurance evaluations in some classes of product
2) enabling faster innovation in security and the ability to evaluate it
3) continuing to evaluate core products that have historically had and continue to have broad usage and market demand (e.g., databases and operating systems)
4) embracing, where apropos, repeatable testing and validation, while recognizing the limitations that apply in some cases to entire classes of products and ensuring that such testing is not unnecessarily prescriptive

[1] https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/DBMS%20Position%20Statement.pdf


Industry Insights

Unmasking Hackers with User Behavior Analytics

Many people keep sensitive documents in cloud storage services, and the latest breach shows that hackers are targeting online cloud storage services more frequently. This opens the door to huge vulnerabilities if employees are storing sensitive enterprise information in the cloud. From a preventative perspective, security personnel should review their security measures for the following:

- Require multi-factor authentication to access the application
- Enforce password strength and complexity requirements
- Require and enforce frequent password resets for employees

But manual processes and policies are not enough. At minimum, enterprises should look at automating the enforcement of these policies. For example, you may require multi-factor authentication, but how do you ensure that it's required at all times? A cloud access security broker (CASB) continuously monitors configurations to alert security personnel when changes are made, and automatically creates incident tickets to revert security configurations back to the default setting.

How can enterprises prevent further damage if their employees' credentials were compromised in this hack? We recommend utilizing user behavior analytics (UBA) to look for anomalous activity in an account. UBA uses advanced machine learning techniques to create a baseline of normal behavior for each user. If a hacker is accessing an employee's account using stolen credentials, UBA will flag a number of indicators showing that this access deviates from the normal behavior of the legitimate user.

Palerra LORIC is a cloud access security broker (CASB) that supports cloud storage services.
Here are a few indicators LORIC can use to unmask a potential hacker with stolen credentials:

- Flag a login from an unusual IP address or geographic location
- Detect a spike in the number of file downloads compared to normal user activity
- Detect logins outside of the user's normal access hours
- Detect anomalous file sharing or file previewing activities

The ability to gauge legitimate access and activities becomes even more important when you consider that many people use the same password for multiple applications. Instead of just protecting a single online cloud storage service, UBA helps the enterprise protect any cloud environment that could be accessed using the stolen passwords. If you're concerned that hackers may access your cloud storage environment using stolen employee credentials, you should take preventative and remedial action. Adding a cloud security automation tool helps prevent a breach by enforcing password best practices, and helps prevent additional damage after a breach by flagging anomalous activity to unmask hackers posing as legitimate users.
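Two of the indicators above (a login from an unusual IP address, and a login outside normal access hours) can be sketched as a simple per-user baseline. This is an illustrative toy, not LORIC's actual implementation; the field names and the "never seen before" rule are assumptions, and a real UBA engine would use richer statistical models.

```python
# Hypothetical sketch of two UBA indicators: flag logins from an IP address
# the user has never used, or at an hour the user has never logged in.
from collections import defaultdict

class LoginBaseline:
    def __init__(self):
        self.known_ips = defaultdict(set)                 # user -> IPs seen
        self.hour_counts = defaultdict(lambda: [0] * 24)  # user -> logins per hour

    def observe(self, user, ip, hour):
        """Record a legitimate login to build the per-user baseline."""
        self.known_ips[user].add(ip)
        self.hour_counts[user][hour] += 1

    def anomalies(self, user, ip, hour):
        """Return the names of indicators this login deviates from."""
        flags = []
        if ip not in self.known_ips[user]:
            flags.append("unusual_ip")
        if self.hour_counts[user][hour] == 0:
            flags.append("unusual_hour")
        return flags

baseline = LoginBaseline()
for h in (9, 10, 11, 14, 16):                  # alice normally works daytime
    baseline.observe("alice", "10.0.0.5", h)

baseline.anomalies("alice", "10.0.0.5", 10)    # [] -- normal login
baseline.anomalies("alice", "203.0.113.7", 3)  # ["unusual_ip", "unusual_hour"]
```

A production system would combine many such indicators into a risk score rather than alerting on any single deviation.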


Product News

Why Monitoring Alone is Not Enough in Cloud Security

Comprehensive threat intelligence is key to ensuring the accuracy and maximizing the effectiveness of automated security solutions. Monitoring alone is not enough to correctly identify and remediate a breach. And, while human supervision will always be part of the security equation, the overwhelming amount of data accessible from cloud providers makes it impossible for security personnel to identify and remediate all threats manually. Here are three ways organizations can use threat intelligence to enhance their current security measures and go beyond simply monitoring their cloud environment:

Require multi-factor authentication: Multi-factor authentication gives the end user more visibility into potential attacks on their account, so they can change their password before a breach occurs. But how do you ensure that multi-factor authentication is required at all times? A cloud security automation platform continuously monitors security configurations to alert security personnel when changes are made, and automatically creates incident tickets to revert security configurations back to the default setting.

Configure password policies and strength to maintain password integrity: Many people use the same password to log into multiple service providers, and most do not regularly update their passwords. Organizations should configure password policies to ensure passwords expire every 90 days, and cap the number of recycled passwords that can be reused. An automated security system enforces password strength requirements to reduce the likelihood of a breach. These systems can also flag changes to the password settings, which might indicate an insider threat or hacker access to your system.

Utilize comprehensive threat intelligence: In order to focus on the most credible threats, your security team needs clear, actionable information.
Using key security indicators in your automation program can consolidate and correlate data to provide instant insight into the security posture of your cloud services. By setting up custom notifications for likely threat scenarios, security teams can focus on the most immediate threats instead of chasing down potentially useless information. It's not enough to simply monitor cloud services or have a "set it and forget it" mindset about security configurations. Instead, companies must leverage cloud security automation to bring the most immediate and credible threats to the attention of the security team.
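The configuration-monitoring idea described above (detecting when security settings drift from an approved baseline and reporting each deviation) can be sketched as a simple comparison. The setting names and baseline values here are hypothetical, not any product's schema.

```python
# Illustrative sketch of configuration-drift monitoring: compare the current
# security settings of a cloud service against a compliant baseline and
# report every deviation. Setting names and values are made up.

BASELINE = {
    "mfa_required": True,
    "password_min_length": 12,
    "password_expiry_days": 90,
}

def find_drift(current: dict) -> list:
    """Return (setting, expected, actual) for every non-compliant setting."""
    return [
        (key, expected, current.get(key))
        for key, expected in BASELINE.items()
        if current.get(key) != expected
    ]

observed = {"mfa_required": False, "password_min_length": 12, "password_expiry_days": 180}
for setting, expected, actual in find_drift(observed):
    # In a real platform this would raise an alert, open an incident ticket,
    # and trigger an automated revert of the setting.
    print(f"ALERT: {setting} is {actual}, expected {expected}")
```

Running such a check continuously, rather than once at deployment, is what distinguishes automation from a "set it and forget it" posture.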


Product News

Can a CASB Protect You From the Treacherous 12? - Part 4: CASBs and the Treacherous 7 through 12

Welcome to the fourth in a four-part series on how Cloud Access Security Brokers (CASBs) can help protect your organization from the top twelve threats to cloud computing in 2016. If you want to read the first three blogs, their links are provided below. This blog series examines whether a CASB can protect your organization from the top cloud computing threats identified by a Cloud Security Alliance (CSA) working group. The four-part series includes:

- Part 1: CASB 101
- Part 2: CASBs and Threat Detection
- Part 3: CASBs and the Treacherous 1-6
- Part 4: CASBs and the Treacherous 7-12

CASBs and the Treacherous 7 through 12

The final six of the "Treacherous 12" threats that the CSA working group identified are:

7. Advanced Persistent Threats (APTs)
8. Data loss
9. Insufficient due diligence
10. Abuse and nefarious use of cloud services
11. Denial of Service (DoS)
12. Shared technology issues

Here is a definition and an anecdote for each of these threats, along with an assessment of whether a Cloud Access Security Broker (CASB) like Palerra can help protect against it.

7. Advanced Persistent Threats (APTs)

An APT is a parasitical form of cyberattack that infiltrates systems and establishes a foothold in the computing infrastructure. Once the foothold is in place, the perpetrator can smuggle out data and intellectual property. A CASB can help with APT attacks: it can detect anomalies in inbound and outbound data to identify data exfiltration, which in turn enables you to discover that a network is the target of an attack.

8. Data loss

Data loss can be due to malicious attacks, accidental deletion by the cloud service provider, or a physical catastrophe such as a fire or earthquake. A CASB is not the solution in this case. Cloud service providers should take measures to back up data according to best practices in business continuity and disaster recovery, and consumers of these services should review the service provider's data loss provisions.

9. Insufficient due diligence

When a business is under pressure to leverage the benefits of the cloud, the selection process for adopting cloud technologies and choosing cloud service providers can get rushed, and proper due diligence can be skipped. When that occurs, the organization is exposed to a myriad of commercial, financial, technical, legal, and compliance risks. A CASB is not the solution in this case. Executives need to develop a good roadmap and checklist for due diligence when evaluating technologies and cloud service providers. A CASB can help in that process, but the responsibility rests with the executives.

10. Abuse and nefarious use of cloud services

Poorly secured cloud service deployments, free cloud service trials, and account sign-ups that exploit fraudulent payment instruments expose all cloud computing models (including IaaS, PaaS, and SaaS). A CASB can help monitor infrastructure as a service (IaaS) workloads and software as a service (SaaS) access patterns to better detect suspicious activity such as abnormal launches and terminations of compute instances and abnormal user access patterns.

11. Denial of Service (DoS)

A DoS attack is meant to prevent users of a service from being able to access their data or applications. DoS attacks flood the cloud service provider with access requests, with the intent of disrupting the service. A CASB is not the solution in this case. Cloud providers hold the responsibility for taking appropriate precautions to mitigate the impact of DoS attacks.

12. Shared technology issues

Cloud service providers deliver scalable services by sharing infrastructure, platforms, or applications. Because of this shared architecture, one vulnerability or misconfiguration can lead to a compromise across IaaS, PaaS, and SaaS. For example, the VENOM vulnerability in a widely shared virtual floppy disk driver allowed attackers to break out of a guest virtual machine, which opened millions of virtual machines to attack.

A CASB can help with monitoring of compute, storage, network, and application resources, as well as user security enforcement and cloud service configurations, whether the service model is IaaS, PaaS, or SaaS. However, not all CASBs cover all of these areas, so be sure that you are working with one that does. This is the final post in the four-part series. For additional information, check out our white paper, "Can a CASB Protect You from the 2016 Treacherous 12?" Or if you prefer an abbreviated format, check out our infographic on the same topic.
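The exfiltration-detection idea mentioned for APTs (flagging anomalies in outbound data volume) can be sketched as a simple z-score test against a user's historical baseline. The threshold, window, and data here are illustrative; a real CASB would use richer models and more features.

```python
# Hypothetical sketch of one exfiltration indicator: flag a day whose
# outbound data volume far exceeds the user's historical mean.
from statistics import mean, pstdev

def is_exfil_spike(history_mb, today_mb, sigma=3.0):
    """Flag today's volume if it is more than `sigma` standard deviations
    above the historical mean (a simple z-score test)."""
    mu = mean(history_mb)
    sd = pstdev(history_mb) or 1.0  # avoid division by zero on flat history
    return (today_mb - mu) / sd > sigma

history = [120, 95, 110, 130, 105, 100, 115]  # MB downloaded per day
is_exfil_spike(history, 118)    # False -- within normal range
is_exfil_spike(history, 2_400)  # True  -- possible exfiltration
```

On its own this test is noisy; in practice it would be one signal among many (time of day, destination, file sensitivity) feeding a combined risk score.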


Product News

Can a CASB Help Protect You From the Treacherous 12? - Part 3: CASBs and the Treacherous 1 through 6

Welcome to the third in a four-part series on how Cloud Access Security Brokers (CASBs) can help protect your organization from the top twelve threats to cloud computing in 2016. If you want to read the first two blogs, their links are provided below. This blog series examines whether a CASB can help protect your organization from the top cloud computing threats identified by a Cloud Security Alliance (CSA) working group. The four-part series includes:

- Part 1: CASB 101
- Part 2: CASBs and Threat Detection
- Part 3: CASBs and the Treacherous 1-6
- Part 4: CASBs and the Treacherous 7-12

CASBs and the Treacherous 1 through 6

The first six of the "Treacherous 12" threats that the CSA working group identified are:

- Data breach
- Weak identity, credential, and access management
- Insecure APIs
- System and application vulnerabilities
- Account hijacking
- Malicious insiders

Here is a definition and an anecdote for each of these threats, along with an assessment of whether a Cloud Access Security Broker (CASB) like Palerra can help protect against it.

Data breach

A data breach occurs when an unauthorized person releases, views, steals, or otherwise uses sensitive, protected, or confidential information. A CASB can help detect data breaches by monitoring privileged users, encryption policies, and sensitive data movement.

Weak identity, credential, and access management

This refers to data breaches and enabled attacks due to factors such as the lack of scalable identity and access management systems, failure to use multi-factor authentication, and permitting users to have weak passwords. A CASB can help monitor and detect weak authentication policies, user and service account access patterns, and non-compliant use of cryptographic keys.

Insecure APIs

The security and availability of cloud services depend on the security of the APIs that cloud computing providers make available to third-party vendors. A CASB can help monitor API usage in clouds and detect unusual activities originating from API calls. A CASB can also assist in risk scoring of external APIs and applications based on their activity.

System and application vulnerabilities

These are exploitable bugs in programs that attackers can use to infiltrate a computer system for the purpose of stealing data, taking control of the system, or disrupting service operations. Two well-known examples, Heartbleed and Shellshock, proved that open source software is vulnerable to threats. The systems most affected by Heartbleed and Shellshock were running Linux, meaning that 67.7% of all websites were potentially impacted. A CASB can help with security-hardened baseline configurations, continuous monitoring, and alerts when there is a change to the desired configurations or a change in application access patterns.

Account hijacking

Methods of account hijacking include phishing, fraud, and exploitation of software vulnerabilities. Attackers can eavesdrop on activities and transactions, then manipulate data, return falsified information, and redirect users to illegitimate sites. Targeted "CEO fraud" phishing scams are estimated to have cost $2.3B over three years. In these scams, an employee gets an email from what appears to be the CEO asking them to wire money. The FBI estimates that organizations lose $25K to $75K per attack. Could a CASB have helped? A CASB can enhance efforts to monitor users, privileged users, service accounts, and API keys. CASBs use machine learning techniques and behavioral analytics to further efforts to detect account hijacking threats.

Malicious insiders

A malicious insider is a current or former employee, contractor, or other business partner who has or had authorized access to an organization's network, system, or data and intentionally exceeded or misused that access. A CASB can assist efforts to monitor for overly-privileged user accounts, track changes to user profiles, roles, and privileges, and notify you of drift from compliant baselines. A CASB can also help detect malicious user activity using user behavior analytics (UBA).

If you don't want to wait for the final post in the four-part series, check out our white paper, "Can a CASB Protect You from the 2016 Treacherous 12?"
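One common building block behind account-hijacking detection, monitoring for brute-force attempts against user credentials, can be sketched as a sliding-window counter of failed logins. The threshold and window length below are made-up values, not any product's defaults.

```python
# Illustrative sketch of brute-force detection: alert when one account sees
# too many failed logins within a short sliding window.
from collections import deque

class BruteForceDetector:
    def __init__(self, max_failures=5, window_seconds=60):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = {}  # user -> deque of failure timestamps

    def record_failure(self, user, ts):
        """Record a failed login; return True if the account looks under attack."""
        q = self.failures.setdefault(user, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:
            q.popleft()  # drop failures that fell outside the window
        return len(q) > self.max_failures

det = BruteForceDetector()
alerts = [det.record_failure("bob", t) for t in range(10)]  # 10 failures in 10s
# the first five stay below the threshold; the sixth onward trigger an alert
```

A CASB-style engine would typically feed such an alert into automated remediation, for example locking the account and revoking active sessions.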


Industry Insights

Data Breaches in Cloud-Based Enterprises

The Cloud Enterprise Is at Risk

As the chart below shows, every major industry vertical has been targeted, with retail, finance, and healthcare being the most frequently breached. What's also relevant is that the top verticals have been rapidly expanding their adoption of the cloud. For example, a Cloud Security Alliance study in late 2015 reported that the banking and finance vertical is using cloud-based technologies extensively for mission-critical applications including email, customer relationship management (CRM), and content management.

What are the top threats, and what can be done about them? It's no surprise that security trends in 2016 show continuous threats that typical enterprises need to care about. The "California Data Breach Report" identified the top trends over the last four years and indicated that malware and hacking, as well as misuse by insiders, continue to rise and are among the largest threats. (Chart from: California Data Breach Report.)

Additionally, at the RSA Conference this year, the Cloud Security Alliance (CSA) released its latest research on the top 12 threats in cloud computing—what the CSA calls "The Treacherous 12." (The report is available here.) As you can see in the CSA report, top threats include weak credentials, insecure APIs, account hijacking, and malicious insiders. The trends described in the California breach report and the CSA's "Treacherous 12" report are very similar. As these threats to enterprises (including their employees and customers) increase, security monitoring becomes critical. And as enterprises increasingly move mission-critical applications into the cloud, security technologies from service providers such as Cloud Access Security Brokers (CASBs) become potentially vital to the cloud-based enterprise. CASBs can provide deeper visibility into activity within and across the enterprise perimeter.
Plus, as threats continue to permeate cloud enterprises, CASBs can help protect against them with groundbreaking innovations in anomaly detection and behavioral analytics. Palerra is proud to be on the leading edge of this innovation. We recently released a white paper that describes how CASBs can help enterprises mitigate threats and consequently reduce breaches. I'd welcome your thoughts on this document and invite you to talk to our team about how to help you with your security concerns.


Product News

Can a CASB Protect You From the Treacherous 12? - Part 2: CASBs and Threat Protection

Welcome to the second in a four-part series on how Cloud Access Security Brokers (CASBs) can help protect your organization from the top twelve threats to cloud computing in 2016. If you would like to review the first blog in the series, just click here. This blog series examines whether a CASB can protect your organization from the top cloud computing threats identified by a Cloud Security Alliance (CSA) working group. The four-part series includes:

- Part 1: CASB 101
- Part 2: CASBs and Threat Detection
- Part 3: CASBs and the Treacherous 1-6
- Part 4: CASBs and the Treacherous 7-12

CASBs and Threat Detection

In the area of threat protection, a CASB can help detect and prevent malicious behavior (including actions that put data at risk) by monitoring user activity within cloud services and the security configuration of those services. CASBs use various security technologies, such as threat intelligence, malware detection, machine learning, and user and entity behavior analytics (UEBA), to improve the accuracy of threat detection. This diagram shows a typical architecture for a threat analytics engine in a CASB solution. As shown in the diagram, the threat detection engine uses multiple data feeds:

- User and programmatic access and activity. This data is collected from cloud services, as well as network services such as proxies, gateways, and firewalls. The data includes cloud service logins, logouts, and transactions (for example, sensitive files shared outside the organization and modifications of sensitive data).
- Threat intelligence feeds. These are community and commercial intelligence feeds that include known vulnerabilities, malware, blacklisted IP addresses, and suspicious geographical locations.
- Application context. Context includes the time of day and date of a transaction event, employee status, and possible employee travel.
- Security configurations. This includes settings for multi-factor authentication, encryption, and management of logged-in sessions (for example, inactivity timeouts).
- Enterprise baselines. An enterprise can have baselines or standards for indicators of compromise (IOCs), blacklisted IP addresses, sensitive events, and threat models. A threat engine can use this data to detect risks and reduce false positives.

Using activity and contextual data from these sources, the threat detection engine normalizes the data for event correlation, builds user behavior profiles, and applies machine learning techniques to better detect bad events, anomalous behavior, risky users, and immediate threats. Examples of threats that CASBs detect include brute-force attacks against user credentials, high-risk user activity, malicious insider activity, misuse of session keys (for example, SSH keys), overly-privileged administrators, too many privileged administrators, content malware, IP address hopping (traversing apparently large distances in extremely short time frames), data exfiltration, and public sharing of sensitive content.

In addition to threat detection, it is equally important for CASBs to automate incident remediation. If a user's credentials are compromised and a decision is made to lock the user's account, a CASB solution should also make sure that the user's cloud access is denied.

If you don't want to wait for the next two blog posts in the series, check out our white paper, "Can a CASB Protect You from the 2016 Treacherous 12?"
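The "IP address hopping" indicator mentioned above (traversing apparently large distances in extremely short time frames) is commonly implemented as an "impossible travel" check: geolocate consecutive logins and flag a pair whose implied speed no traveler could achieve. A minimal sketch, with illustrative coordinates and an assumed speed cap:

```python
# Hypothetical sketch of an impossible-travel check between two logins,
# each represented as (latitude, longitude, unix_timestamp_seconds).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900):
    """True if moving between the two logins would require exceeding
    `max_kmh` (roughly airliner speed)."""
    (lat1, lon1, t1), (lat2, lon2, t2) = login_a, login_b
    hours = abs(t2 - t1) / 3600 or 1e-9  # guard against zero elapsed time
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

sf = (37.77, -122.42, 0)       # San Francisco at t=0
ny = (40.71, -74.01, 30 * 60)  # New York 30 minutes later
impossible_travel(sf, ny)      # True -- ~4,100 km in half an hour
```

Real engines also account for VPN egress points and coarse IP geolocation error before raising an alert, which is one reason this signal is correlated with others rather than used alone.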


Product News

Can a CASB Protect You From the Treacherous 12? - Part 1: CASB 101

Welcome to the first in a four-part series on how Cloud Access Security Brokers (CASBs) can help protect your organization from the top twelve threats to cloud computing in 2016. Cloud services deliver business-supporting technology more efficiently than ever before. But they also pose significant risk. That's why the Cloud Security Alliance (CSA) published "The Treacherous 12: Cloud Computing Top Threats in 2016" and we published a related white paper. Whether your cloud services are sanctioned or unsanctioned, the door is wide open for the Treacherous 12.

The CSA's report recommends that organizations take security policies, processes, and best practices into account. While that is sound advice, it just isn't enough. In a Gartner press release (10/6/15), "Gartner Reveals Top Predictions for IT Organizations and Users for 2016 and Beyond," Gartner predicted that through 2020, 95% of cloud security failures will be the customer's fault. This does not necessarily mean that customers lack security expertise. It does mean that it's no longer sufficient to know how to make decisions about risk mitigation in the cloud. To reliably address cloud security, automation is the key. That's where Cloud Access Security Brokers (CASBs) come into play. This blog series will examine whether a CASB can help protect your organization from the top cloud computing threats identified by the CSA working group. The specific topics that I will cover in this series are:

- Part 1: CASB 101
- Part 2: CASBs and Threat Detection
- Part 3: CASBs and the Treacherous 1-6
- Part 4: CASBs and the Treacherous 7-12

CASB 101

CASBs provide information security professionals with a critical control point for the secure and compliant use of cloud services across multiple cloud providers. Today, most CASBs focus only on software as a service (SaaS), although they can and should enforce best practices and security policies across all cloud services, including infrastructure (IaaS) and platforms (PaaS). CASBs provide four pillars of support by automating visibility, compliance, data security, and threat protection.

- Visibility: CASBs give administrators increased visibility into an organization's cloud usage. This includes discovery tools to help detect use of unauthorized cloud services (shadow or stealth IT), as well as enhanced visibility into access of sanctioned cloud services by managed and unmanaged end users on managed and unmanaged devices, plus programmatic (cloud-to-cloud) access.
- Compliance: CASBs impose controls on cloud usage to help enforce compliance with industry regulations (for example, HIPAA). They can also assist in efforts to detect when cloud service usage is at risk of falling out of compliance.
- Data Security: CASBs enhance enforcement of corporate security policies to restrict access to sensitive data and to help make sure that data is encrypted or tokenized appropriately, while still allowing application functions such as search to operate. Most CASBs also offer features to help prevent data leaks, for example, by marking data as sensitive, preventing data downloads, or redacting data.
- Threat Protection: This includes threat intelligence, anomaly detection, and malware protection, as well as preventing unauthorized devices and users from accessing corporate cloud services.

In addition to the four pillars, a CASB should integrate with existing or planned security solutions without creating a security technology silo. Key integration areas include:

- Next-Generation Firewall (NGFW) and Secure Web Gateway (SWG): NGFW and SWG integration gives a CASB visibility into which cloud services are being used within the enterprise, including both sanctioned and unsanctioned cloud services.
- Identity as a Service (IDaaS): IDaaS integration includes user authentication and federation of identities into SaaS applications.
- Data Loss Prevention (DLP): DLP integration extends CASB support for data security to prevention of data loss.
- Security Information and Event Management (SIEM): SIEM integration enables CASBs to manage incidents and alerts with an existing SIEM system.

In summary, CASB features work together to better protect cloud services, resulting in increased visibility and control across the enterprise. It is reasonable to assume that a CASB can do a significant amount to protect organizations against the Treacherous 12 top cloud security threats. My next blog will focus on the role of CASBs in threat protection, and the final two blogs will discuss each of the 12 threats and the role that CASBs play. If you don't want to wait for the next three blogs, check out our white paper, "Can a CASB Protect You from the 2016 Treacherous 12?"


Security Updates

Security Alert CVE-2016-0603 Released

Oracle just released Security Alert CVE-2016-0603 to address a vulnerability that can be exploited when installing Java 6, 7, or 8 on the Windows platform. This vulnerability has received a CVSS Base Score of 7.6. To be successfully exploited, this vulnerability requires that an unsuspecting user be tricked into visiting a malicious web site and downloading files to the user's system before installing Java 6, 7, or 8. Though considered relatively complex to exploit, this vulnerability may, if successfully exploited, result in a complete compromise of the unsuspecting user's system. Because the exposure exists only during the installation process, users need not upgrade existing Java installations to address the vulnerability. However, Java users who have downloaded any old version of Java prior to 6u113, 7u97, or 8u73 should discard these old downloads and replace them with 6u113, 7u97, 8u73, or later. As a reminder, Oracle recommends that Java home users visit Java.com to ensure that they are running the most recent version of Java SE and that all older versions of Java SE have been completely removed. Oracle further advises against downloading Java from sites other than Java.com, as these sites may be malicious. For more information, the advisory for Security Alert CVE-2016-0603 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2016-0603-2874360.html


Critical Patch Updates

January 2016 Critical Patch Update Released

Oracle today released the January 2016 Critical Patch Update. With this release, the Critical Patch Update program enters its 11th year of existence (the first Critical Patch Update was released in January 2005). As a reminder, Critical Patch Updates are currently released 4 times a year, on a schedule announced a year in advance. Oracle recommends that customers apply this Critical Patch Update as soon as possible. The January 2016 Critical Patch Update provides fixes for a wide range of product families, including:

- Oracle Database: None of these database vulnerabilities are remotely exploitable without authentication.
- Java SE: Oracle strongly recommends that Java home users visit the Java.com website to ensure that they are using the most recent version of Java, and advises them to remove obsolete Java SE versions from their computers if those versions are not absolutely needed.
- Oracle E-Business Suite: Oracle's ongoing assurance effort with E-Business Suite helps remediate security issues and is intended to enhance the overall security posture provided by E-Business Suite. Oracle takes security seriously, and strongly encourages customers to keep up with newer releases in order to benefit from Oracle's ongoing security assurance effort.

For more information:

- The January 2016 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpujan2016-2367955.html
- The Oracle Software Security Assurance web site is located at https://www.oracle.com/support/assurance/index.html
- The Oracle Applications Lifetime Support Policy is located at http://www.oracle.com/us/support/library/lifetime-support-applications-069216.pdf


Industry Insights

Improving the Speed of Product Evaluations

Hi there, Oracle Security blog readers; Josh Brickman here again. Today I want to share some of our thoughts about Common Criteria (CC) evaluations, specifically those under the US Scheme of the CC run by the National Information Assurance Partnership (NIAP). NIAP is one of the leaders behind the significant evolution of the Common Criteria, which resulted in the ratification of a new Common Criteria Recognition Arrangement last year.

Background

In 2009, NIAP advocated for a radical change in the CC by creating Protection Profiles quickly for many technology types. As described by NIAP [1]: "In this new paradigm, NIAP will only accept products into evaluation claiming exact compliance to a NIAP-approved Protection Profile." These NIAP-approved Protection Profiles (PPs) produce evaluation results that are achievable, repeatable, and testable, allowing for a more consistent and rapid evaluation process. [2] Using this approach, assurance activities are tailored to technologies, eliminating the need for Evaluation Assurance Levels (EALs), the pre-cast assurance activities that were generically used for most CC evaluations. Another key element is the goal of eliminating variables, such as lab experience and evaluator subjectivity, that can influence the results. As a result, these changes help make evaluations less "gameable" — a great idea. Even better: with the new approach, instead of requiring confidential internal documentation and sometimes limited amounts of source code, the labs will now only evaluate what is available to any customer — the finished product and its documentation. New Protection Profiles describe assurance testing activities; in theory, a vendor could take the same product and documentation to different labs and receive consistent results. One other objective of NIAP's vision of a revitalized CC is the goal of 90-day evaluations.
Under the new paradigm, evaluations should take less than six months (the current time limit), and after labs and vendors develop experience with the new process, they should ideally complete in three months. For vendors, this is a terrific development, as it maximizes the amount of time an evaluated product can remain on the market before a new version is released. Customers also benefit greatly from this change: if a product can be certified quickly, a greater number of new products will be available for procurement by customers who require products to be CC-evaluated before acquisition. Under the old system, a product was often outdated — replaced by a new version with new functionality — by the time it finished an evaluation. There is a lot to like in these developments: vendors benefit from a consistent and timely process, customers benefit from greater choice in evaluated products, and labs stand to benefit as more vendors send more products for evaluation using scalable resources.

Entropy Evaluation

Unfortunately, there are a few bumps along this road to assurance nirvana. As security researchers have known for some time, sources of entropy are critical for encryption, as they help ensure random number generation is suitably random. As such, NIAP introduced requirements for entropy assessments for every product going through an evaluation [3]. This is not a bad idea on the surface; however, in the commercial marketplace, the mix of technologies, intellectual property (IP), and approaches to development means it is hard to evaluate entropy (to ensure it follows good practices) without reviewing confidential design documents and, potentially, source code.
As noted in Atsec's insightful blog post from September 2013: The TOE Security Assurance Requirements specified in Table 2 of the NDPP [4] is (roughly) equivalent to Evaluation Assurance Level 1 (EAL1) [5] per CC Part 3...The NDPP does not require design documentation of the TOE itself; nevertheless its Annex D does require design documentation of the entropy source — which is often provided by the underlying Operating System (OS). Suppose that a TOE runs on Windows, Linux, AIX and Solaris and so on, some of which may utilize cryptographic acceleration hardware (e.g., Intel processors supporting the RDRAND instruction). In order to claim NDPP compliance and succeed in the CC evaluation, the vendor is obligated to provide the design documentation of the entropy source from all those various Operating Systems and/or hardware accelerators. This is not only a daunting task, but also mission impossible, because the design of some entropy sources is proprietary to some OS or hardware vendors. At first, NIAP stated that Entropy Assessment Reports (EARs) could be submitted in draft form in order for an evaluation to start. But NIAP found that finalizing the EAR was taking weeks and months, because often the expertise — or documentation — did not exist in the vendor's development organization. Vendors also found that NIAP did not trust the labs to evaluate the EARs; NIAP decided to insert its own experts from the Information Assurance Directorate (IAD) into the process. In practice this means that the EAR must be evaluated by IAD before it can be approved; no other evaluation element in the NIAP scheme requires this oversight except that of entropy. More problematic, NIAP now requires some disclosure of the entropy source code and of the design documentation for entropy generation.
As I noted above, some vendors may have no trouble providing this extra information to complete an entropy assessment; however, if your product relies on an external or third-party entropy source, you may be unable to provide the material (even if you are willing, your source may not wish to cough up their IP).

Speeding Up, or Slowing Down?

NIAP has a six-month evaluation time limit with a public goal of 90 days for an evaluation. An evaluation involves a vendor providing evidence and a third-party lab evaluating it against certain criteria (in this case the Common Criteria). On the one hand, we have a new Protection Profile paradigm that streamlines evaluations and removes requirements for providing source code and confidential data. However, with the introduction of Entropy Assessment Reports (EARs), vendors are increasingly unable to complete an evaluation within the 90-day window. NIAP's solution is to evaluate an EAR BEFORE a CC evaluation can start. We see this happening now within the NIAP scheme and in other schemes trying to evaluate against NIAP PPs. There are good reasons why NIAP is so concerned with entropy, yet we need to examine how best to meet these requirements in the context of the current IT ecosystem. As Atsec noted in the same post: In theory, a thorough analysis of the entropy source coupled with some statistical tests on the raw data is absolutely necessary to gain some assurance of the entropy that plays such a vital role in supporting security functionality of IT products...However, the requirements of entropy analysis stated above impose an enormous burden on the vendors as well as the labs, to an extent that they are out of balance (in regard to effort expended) compared to other requirements; or in some cases, it may not be possible to meet the requirements.
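As a toy illustration of the "statistical tests on the raw data" mentioned above, the Python sketch below estimates the Shannon entropy of samples drawn from an entropy source. This is only a crude sanity check, not a substitute for a full entropy assessment (e.g., the far more rigorous tests described in NIST SP 800-90B):

```python
import math
import os
from collections import Counter

def shannon_entropy_bits_per_byte(data):
    """Estimate the Shannon entropy (bits per byte) of a byte string.

    A crude sanity check only: high entropy here does not prove a source
    is cryptographically sound, but low entropy is a clear red flag.
    """
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = os.urandom(65536)            # raw bytes from the OS entropy source
print(f"{shannon_entropy_bits_per_byte(sample):.3f} bits/byte")   # close to 8.0

biased = bytes([0, 1]) * 32768        # an obviously patterned "source"
print(f"{shannon_entropy_bits_per_byte(biased):.3f} bits/byte")   # exactly 1.0
```

The gap between the two numbers hints at why evaluators want to see raw source data: post-processed output (like `os.urandom`) always looks random, so only the underlying noise source reveals how much entropy is really there.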


Industry Insights

FIPS: The Crypto "Catch 22"

Hello, Oracle blog reader! My name is Joshua Brickman and I run Oracle's Security Evaluations team (SECEVAL). At SECEVAL we are charged with shepherding certain Oracle products through security certifications and accreditations, mostly for government use. Today, in my initial blog, I'd like to talk to you about cryptography [1] (or crypto) as it relates to these government certifications. First, a little history: crypto actually goes back to classical Rome. The Caesar Cipher was used by the Roman military to pass messages during a battle. In World War II, Germany was very successful encrypting messages utilizing a machine known as "Enigma." But when Enigma's secret codes were famously broken (due mostly to procedural errors), it helped to turn the tide of the war. The main idea of crypto hasn't changed though: only allow intended recipients to understand a given message—to everyone else it's unintelligible [2]. Today's crypto is created by complex computer modules utilizing "algorithms." The funny thing about crypto is that someone is always trying to break it, usually using computers (also running complex programs). One of the U.S. government's approaches to combating code breaking has been the Federal Information Processing Standard (FIPS) program maintained by the National Institute of Standards and Technology (NIST). NIST uses FIPS to publish approved algorithms that vendors must correctly implement in order to claim that their products meet the FIPS standard. The FIPS standard defines acceptable algorithms; vendors implement their crypto to meet them. So why would technology vendors need to comply with this standard? Well, FIPS 140-2 defines the acceptable algorithms and modules that can be procured by the US government. In fact, NIST keeps a list of approved modules (implementations of algorithms) on its website. To get on this list of approved modules, vendors like Oracle must get their modules and algorithms validated.
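As a toy illustration of the Caesar Cipher mentioned above, here is a minimal Python sketch (for historical flavor only; modern systems rely on vetted, FIPS-approved algorithms, never on substitution ciphers):

```python
def caesar(text, shift):
    """Shift each letter by `shift` positions, wrapping within the alphabet."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

message = caesar("Attack at dawn", 3)
print(message)                 # "Dwwdfn dw gdzq"
print(caesar(message, -3))     # decrypts back to "Attack at dawn"
```

With only 25 possible shifts, the cipher falls to trivial brute force, which is exactly why modern standards demand algorithms whose security rests on keys far too large to enumerate.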
Unfortunately for many technology providers, there isn't any alternative to FIPS 140-2 (version 2, the currently approved release) [3]. FIPS 140-2 is required by U.S. government law [4] if a vendor wants to do business with the government. Validation is an expensive and time-consuming process. Vendors must hire a third-party lab to do the validation; most also hire consultants to write documentation that is required as part of the process. The FIPS 140-2 validation process requires that vendors prove they have implemented their crypto correctly against the approved algorithms; this proof is accomplished via the labs, which validate that each module correctly implements the selected algorithms. FIPS 140-2 has also been adopted by banks and other financial services firms. If you think about it, this makes a lot of sense: there is nothing more sensitive than financial data, and banks want assurances that the crypto they depend on to protect it is as trusted as the safes that protect the cash. Interestingly, smart-card manufacturers also require FIPS: this is not a government mandate, but rather a financial industry requirement to minimize fraud rates (by making the cost for criminals to break the crypto in smart cards too high for them to bother). Finally, the Canadian government partners with NIST on the FIPS 140-2 program, known as the Cryptographic Module Validation Program (CMVP), and mirrors U.S. procurement requirements. All of this would simply be the cost of doing business with the US (and Canadian) governments, except that a couple of elements are broken.

1. Government customers who buy crypto technology require a FIPS 140-2 certificate of validation (not a letter from a lab or an appearance on an "under validation" list). In our experience, a reasonable timeframe for a validation is a few months.
Unfortunately, after all of the work is completed, the final "reports" (which lead to the certificates) go into a black hole otherwise known as the "In Review" status. This is when the vendor, lab (and consultant) have done as much work as possible but are waiting for both the Canadian and US governments to review the case for validation. The queue (as I write this blog) has been as long as nine months over the last couple of years. An Oracle example was the Sun Crypto Accelerator (SCA) 6000 product. On January 14, 2013, our lab submitted the final FIPS 140-2 report for the SCA 6000 to NIST. Seven months later, we received the first feedback on our case for validation. We finally received our certificate on September 11, 2013. NIST is quite aware of the delays [5]; it is considering several options, including but not limited to increasing fees to pay for additional contractors, pushing more work to labs, and/or revising its review process, but it hasn't announced any changes that might resolve the issue other than minor fee increases. From our point of view, it's really difficult to justify the significant investments required for these validations when you can't predict their completion. If you think about the shelf life of technology these days, a delay of, say, a year in a two-year "life-span" of a product reduces the value to the customer by 50%. As a vendor we have no control over that delay, and unless customers are willing to buy unvalidated products (and by law in the U.S. they can't), they are forced to use older products that may be missing key new features.

2. NIST also refers to outdated requirements [6] under the Common Criteria. The FIPS 140-2 Standard references Common Criteria Evaluation Assurance Levels (EALs) directly when discussing eligibility for the various FIPS levels [7]. However, due to current policies with the U.S.
Scheme of the Common Criteria (run by the National Information Assurance Partnership, or NIAP), it is no longer possible to be evaluated at EAL3 or higher. This prevents any vendor from obtaining a FIPS 140-2 validation (of any software or firmware cryptographic module) at any level higher than Level 1. One example of an Oracle product that we would have validated higher than Level 1 is the Solaris Cryptographic Framework (SCF). Originally coded to be validated at FIPS Level 2, Oracle SCF was forced to drop its validation claim down to Level 1. This was because NIST's Implementation Guidance for FIPS 140-2 required that the Operating System be evaluated against a deprecated Common Criteria Protection Profile if vendors of crypto modules running on that OS wanted the modules to be validated at Level 2. Unfortunately, an older version of Solaris had been Common Criteria-evaluated against that deprecated Operating System Protection Profile, and the version of SCF that we needed FIPS 140-2 validated didn't run on that old version of Solaris. Oracle had market demands that required a timely completion of its FIPS validation, so a business decision was made to lower the bar. We've pointed out this contradiction (as I'm sure other vendors have), but as of this writing the problem has still not been resolved by NIST. Oracle is committed to working with NIAP and NIST to make both Common Criteria and FIPS 140-2 as broadly accepted as possible. U.S. and Canadian government customers want to buy products that meet the high standards defined by FIPS 140-2, standards that many other security-conscious customers also recognize. Along with many other vendors that include crypto modules in their products, Oracle is very keen to see NIST's queue of work brought down to a level that would allow the longest shelf life for our validations. We would also like to see NIST and NIAP work together to align their policies and interpretations of standards to provide a greater selection of strong cryptography.
These obstacles currently reduce the choice of FIPS 140-2 validated products. If they are removed, a larger number of FIPS 140-2 validated crypto modules will be available for purchase. Increasing the choice of validated products for the US and Canadian governments can only contribute to improved national security.

---

[1] According to Webopedia, "Cryptography is the art of protecting information by transforming it (encrypting it) into an unreadable format, called cipher text. Only those who possess a secret key can decipher (or decrypt) the message into plain text. Encrypted messages can sometimes be broken by cryptanalysis, also called code breaking, although modern cryptography techniques are virtually unbreakable."

[2] "FIPS 140 Demystified, An Introductory Guide for Developers," by Wesley Higaki and Ray Potter, ISBN-13: 978-1460990391, 2011.

[3] FIPS 140-3 has been in draft form for many years. NIST has been pulling key elements of the new standard and adding them to FIPS 140-2 as "Implementation Guidance."

[4] "Federal Information Processing Standards (FIPS) are approved by the Secretary of Commerce and issued by NIST in accordance with the Federal Information Security Management Act of 2002 (FISMA). FIPS are compulsory and binding for federal agencies. FISMA requires that federal agencies comply with these standards, and therefore, agencies may not waive their use."

[5] As noted in the summary of the recent first FIPS Conference, "...The current length of The Queue means that it can take developers many months to get their modules validated and hence available for procurement from federal agencies. In an increasing number of cases, products are obsolete or un-supported by the time the validation is finally documented. We heard how the unpredictability of The Queue is a problem too, since it greatly affects how developers can perform their marketing, sales and project planning."

[6] For example, http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf: "Security Level 3 allows the software and firmware components of a cryptographic module to be executed on a general purpose computing system using an operating system that meets the functional requirements specified in the PPs listed in Annex B with the additional functional requirement of a Trusted Path (FTP_TRP.1) and is evaluated at the CC evaluation assurance level EAL3 (or higher)."

[7] There are four levels of FIPS 140-2: at Level 1, all components must be "production grade"; Level 2 adds requirements for physical tamper-evidence and role-based authentication; Levels 3 and 4 are typically for hardware. -- http://en.wikipedia.org/wiki/FIPS_140

