Hi there, Oracle Security blog readers; Josh Brickman here again. Today I want to share some of our thoughts about Common Criteria (CC) evaluations, specifically those under the US scheme of the CC run by the National Information Assurance Partnership (NIAP). NIAP is one of the leaders behind the significant evolution of the Common Criteria, which resulted in the ratification of a new Common Criteria Recognition Arrangement last year.
In 2009, NIAP advocated for a radical change in the CC: creating Protection Profiles quickly for many technology types. As described by NIAP:
In this new paradigm, NIAP will only accept products into evaluation claiming exact compliance to a NIAP-approved Protection Profile. These NIAP-approved Protection Profiles (PP) produce evaluation results that are achievable, repeatable, and testable — allowing for a more consistent and rapid evaluation process.
Using this approach, assurance activities are tailored to specific technologies, eliminating the need for Evaluation Assurance Levels (EALs), the pre-cast assurance activities that were generically applied to most CC evaluations. Another key element is the goal of eliminating variables, such as lab experience and evaluator subjectivity, that can influence the results. These changes help make evaluations less “gameable” — a great idea. Even better: with the new approach, instead of requiring confidential internal documentation and sometimes limited amounts of source code, the labs now evaluate only what is available to any customer — the finished product and its documentation. New Protection Profiles describe assurance testing activities; in theory, a vendor could take the same product and documentation to different labs and receive consistent results.
Another objective of NIAP's vision for a revitalized CC is the goal of 90-day evaluations. Under the new paradigm, evaluations should take less than six months (the current time limit) and, once labs and vendors develop experience with the new process, ideally complete in three months. For vendors this is a terrific development, as it maximizes the amount of time an evaluated product can remain on the market before a new version is released. Customers also benefit greatly from this change: if a product can be certified quickly, more new products will be available for procurement by customers who require products to be CC-evaluated before acquisition. Under the old system, a product was often outdated — replaced by a new version with new functionality — by the time it finished an evaluation.
There is a lot to like in these developments: vendors benefit from a consistent and timely process, customers benefit from greater choice in evaluated products, and labs stand to benefit as more vendors send more products for evaluation using scalable resources.
Unfortunately, there are a few bumps along this road to assurance nirvana. As security researchers have known for some time, sources of entropy are critical for encryption because they help ensure that random number generation is suitably random. Accordingly, NIAP introduced requirements for entropy assessments for every product going through an evaluation. Not a bad idea on the surface; however, in the commercial marketplace the mix of technologies, intellectual property (IP), and approaches to development makes it hard to evaluate entropy (to ensure it follows good practices) without reviewing confidential design documents and, potentially, source code. As noted in Atsec's insightful blog post from September 2013:
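To see why the entropy source so often sits outside the vendor's own product, consider how key material is typically generated in practice. The sketch below (illustrative only; the function name is mine, not from any standard) shows a product delegating to the operating system's entropy source, which is exactly the component an Entropy Assessment Report must document:

```python
import secrets

# A product typically draws its seed material from the OS entropy source
# (e.g., /dev/urandom on Linux, BCryptGenRandom on Windows) rather than
# implementing randomness itself. The quality of every key generated below
# therefore depends on an entropy source the vendor may neither control
# nor be able to document.

def generate_session_key(num_bytes: int = 32) -> bytes:
    """Return key material drawn from the platform's entropy source."""
    return secrets.token_bytes(num_bytes)  # delegates to os.urandom()

key = generate_session_key()
print(len(key))  # prints 32
```

If that underlying source is weak or predictable, every key derived from it is suspect, which is the motivation behind NIAP's entropy requirements.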
The TOE Security Assurance Requirements specified in Table 2 of the NDPP is (roughly) equivalent to Evaluation Assurance Level 1 (EAL1) per CC Part 3...The NDPP does not require design documentation of the TOE itself; nevertheless its Annex D does require design documentation of the entropy source — which is often provided by the underlying Operating System (OS). Suppose that a TOE runs on Windows, Linux, AIX and Solaris and so on, some of which may utilize cryptographic acceleration hardware (e.g., Intel processors supporting RDRAND instruction). In order to claim NDPP compliance and succeed in the CC evaluation, the vendor is obligated to provide the design documentation of the entropy source from all those various Operating Systems and/or hardware accelerators. This is not only a daunting task, but also mission impossible because the design of some entropy sources are proprietary to some OS or hardware vendors.
At first, NIAP stated that Entropy Assessment Reports (EARs) could be submitted in draft form so an evaluation could start. But NIAP found that finalizing the EAR was taking weeks or months, because the expertise — or documentation — often did not exist in the vendor's development organization. Vendors also found that NIAP did not trust the labs to evaluate the EARs and decided to insert its own experts from the Information Assurance Directorate (IAD) into the process. In practice this means that the EAR must be evaluated by IAD before it can be approved; no other evaluation element in the NIAP scheme requires this oversight. More problematic, NIAP now requires some disclosure of the entropy source code and the design documentation for entropy generation. As I noted above, some vendors may have no trouble providing this extra information to complete an entropy assessment; however, if your product relies on an external or third-party entropy source, you may be unable to provide the material (even if you are willing, your supplier may not wish to cough up their IP).
Speeding Up, or Slowing Down?
NIAP has a six-month evaluation time limit with a public goal of 90 days per evaluation. An evaluation involves a vendor providing evidence and a third-party lab evaluating it against certain criteria (in this case, the Common Criteria). On the one hand, we have a new Protection Profile paradigm that streamlines evaluations and removes the requirement to provide source code and confidential data. However, with the introduction of Entropy Assessment Reports (EARs), vendors are increasingly unable to complete an evaluation within the 90-day window. NIAP's solution is to evaluate an EAR BEFORE a CC evaluation can start. We see this happening now within the NIAP scheme and in other schemes trying to evaluate against NIAP PPs. There are good reasons why NIAP is so concerned with entropy, yet we need to examine how best to meet these requirements in the context of the current IT ecosystem. As Atsec noted in the same post:
In theory, a thorough analysis on the entropy source coupled with some statistical tests on the raw data is absolutely necessary to gain some assurance of the entropy that plays such a vital role in supporting security functionality of IT products...However, the requirements of entropy analysis stated above impose an enormous burden on the vendors as well as the labs to an extent that they are out of balance (in regard to effort expended) compared to other requirements; or in some cases, it may not be possible to meet the requirements.
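The "statistical tests on the raw data" that Atsec mentions can be surprisingly simple to sketch, even though real assessments (for example, the min-entropy estimation methods of NIST SP 800-90B) are far more rigorous. The toy check below is a minimal sketch of my own, not part of any NIAP requirement; it estimates Shannon entropy per byte of raw entropy-source output, which tends to overestimate the conservative min-entropy figure an assessor would actually use:

```python
import math
import os
from collections import Counter

def shannon_entropy_per_byte(samples: bytes) -> float:
    """Estimate Shannon entropy (bits per byte) of raw entropy-source output.

    This is only a first-pass sanity check: Shannon entropy is an upper
    bound on the min-entropy that a formal assessment would estimate.
    """
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Stand-in for raw output captured from an entropy source under test.
raw = os.urandom(65536)
print(round(shannon_entropy_per_byte(raw), 2))  # close to 8.0 for good output
```

A healthy byte-oriented source should score near 8 bits per byte; a strongly biased source scores much lower. The gap between running such a test and producing the design documentation Annex D demands is exactly where the burden Atsec describes comes from.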
So how do we meet this need for entropy assessment while aligning it with market realities? Certainly we should talk about how best to leverage previous entropy assessments for products that rely on external sources, both to speed evaluations and to address IP protection depending on who "owns" the entropy. A more immediate next step is to move entropy evaluation into the traditional Evaluation phase of the project. This may affect an evaluation's timeline, but it will allow everyone to work off the same clock. Simply let vendors whose Security Target is compliant with a NIAP PP enter evaluation without a completed EAR; if a final EAR is not provided, they cannot complete the evaluation. Why is this simple change important for vendors? Because customers can procure products that are on the “In Evaluation” list. Because evaluations are time-bound, and because NIAP already does a good job of making sure companies don't list products they never intend to evaluate, this is a good way for buyers to start the long process of acquiring the enterprise technology they need. Buyers can always write contract requirements that products complete their CC evaluations, with appropriate consequences if they don't. Vendors will be incentivized to finish the evaluations by the very customers who are demanding them in the first place.
Finally, once we move the EAR to be completed after the product has entered evaluation, let's do some measuring for a while. Once we have a sense of how long these evaluations really take, we can reset expectations.
https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/scheme-pub-1.pdf, National Information Assurance Partnership/Common Criteria Evaluation and Validation Scheme, Publication #1, February 2014.
The In Evaluation List allows buyers to purchase products that require CC evaluations before the certificate is awarded. To get on the In Evaluation List, a Security Target must be evaluated by the lab. Current rules also require a final EAR.