Friday, November 21, 2014

ICMC: Entropy Sources - Recommendations for a Scalable, Repeatable and Comprehensive Evaluation Process

Sonu Shankar, Software Engineer, Cisco Systems
Alicia Squires, Manager, Global Certifications Team, Cisco
Ashit Vora, Lab Director and Co-Founder, Acumen Security


When you're evaluating entropy, your process has to be scalable, repeatable, and comprehensive... well, comprehensive in a way that doesn't outweigh the assurance level you're going for. Ideally, the method used for the evaluation would be valid for both FIPS 140 and Common Criteria.

Could we have the concept of a "module" certificate for entropy sources?

Let's think about the process for how we'd get here. We'd have to look at the entropy source: covering min-entropy estimation, review of built-in health tests, built-in oversampling, and a high-level design review.

There are several schemes that cover entropy and how to test it. You need to have a well-documented description of the entropy source design, and leverage tools that provide statistical analysis of raw entropy. It would be good to add statistical testing and heuristic analysis - but will vendors have the expertise to do this correctly?

How do you test for this? First, you have to collect raw entropy - disabling all of the conditioners (no hashing, LFSR, etc.) - which is not always possible, as many chips also do the conditioning, so you cannot get at the raw entropy. If you can't get the raw entropy, then it's not worth testing - as long as you've got good conditioning, it will look like good entropy.

In order to run this test, you need to have at least one file of entropy containing 1 million symbols, and the file has to be in binary format.

When it comes time to look at the results, the main metric is min-entropy.
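
For the curious, the simplest min-entropy estimator is the "most common value" estimate: count how often each symbol shows up in the raw sample file and take -log2 of the largest relative frequency. Here's a quick sketch of that idea in C, assuming one-byte symbols - this is my own illustration, not a tool from the talk:

    /* Most-common-value min-entropy estimate over a raw sample file,
       assuming each symbol is one byte.  Sketch only - SP 800-90B uses
       additional estimators beyond this one. */
    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    int main(int argc, char **argv)
    {
        uint64_t count[256] = {0}, total = 0, max = 0;
        FILE *f;
        int c;

        if (argc != 2 || (f = fopen(argv[1], "rb")) == NULL)
            return 1;
        while ((c = fgetc(f)) != EOF) {
            count[c]++;
            total++;
        }
        fclose(f);
        if (total == 0)
            return 1;
        for (int i = 0; i < 256; i++)
            if (count[i] > max)
                max = count[i];
        /* H_min = -log2(p_max), in bits per symbol */
        printf("min-entropy estimate: %.4f bits/symbol\n",
               -log2((double)max / (double)total));
        return 0;
    }

SP 800-90B layers several more estimators on top of this one, but that's the basic shape of the calculation.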

You need to be careful, though, not to oversample your entropy source and drain it. You need to be aware of how much entropy it can provide and use it appropriately. [* Not sure if I caught this correctly, as what I heard and saw didn't quite sync, and the slide moved away too quickly]

When it comes to reviewing the noise source health tests - you need to catch catastrophic errors and reductions in entropy quality. This is your first line of defense against side channel attacks. This may be implemented in software pre-DRBG or built into the source.
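
As a concrete illustration of what a built-in health test can look like, here is a repetition count test in the style of SP 800-90B: it trips when the noise source gets stuck emitting the same value over and over. This is just my sketch, with names of my own choosing; the real cutoff has to be derived from the source's claimed min-entropy and an acceptable false-alarm rate, so the number below is a placeholder.

    /* Repetition count test sketch: flag a catastrophic failure if the
       noise source repeats the same sample RCT_CUTOFF times in a row.
       RCT_CUTOFF is a placeholder; derive the real value from the
       source's claimed min-entropy per SP 800-90B. */
    #define RCT_CUTOFF 31

    static unsigned char rct_last;
    static unsigned int  rct_run;

    int rct_feed(unsigned char sample)
    {
        if (rct_run > 0 && sample == rct_last) {
            if (++rct_run >= RCT_CUTOFF)
                return -1;      /* stuck source: stop outputting entropy */
        } else {
            rct_last = sample;
            rct_run = 1;
        }
        return 0;               /* sample accepted */
    }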

Ideally, these entropy generators could have their own certificate, so that third parties could use someone else's hardware for an entropy source - without having to worry about difficult vendor NDA issues.

ICMC: Entropy: A FIPS and Common Criteria Perspective Including SP 800-90B (G22A)

Gary Granger, AT&E Technical Director, Leidos

Random values are required for applications using cryptography (such as for crypto keys, nonces, etc)

There are two basic strategies for generating random bits - non-deterministic random bit generators (NDRBG) and deterministic random bit generators (DRBG). Both strategies depend on unpredictability.

The entropy source is covered in NIST SP 800-90B (design and testing requirements). The entropy source model: noise source, conditioning component, and health tests.

How do we measure entropy? A noise source sample represents a discrete random variable. There are several measures of entropy based on a random variable's probability distribution, like Shannon entropy or min-entropy. NIST SP 800-90B specifies requirements using min-entropy (a conservative estimate that facilitates entropy estimation).
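
For reference (these are the standard definitions, not something from the slides), for a noise source whose samples take value x_i with probability p_i:

    H_{Shannon} = -\sum_i p_i \log_2 p_i, \qquad  H_{min} = -\log_2\left(\max_i p_i\right)

Min-entropy is never larger than Shannon entropy, which is why it's the conservative choice: it's governed entirely by an attacker's single best guess.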
 
FIPS has additional implications for RNGs in its implementation guidance, specifically IG 7.11. It defines non-deterministic random number generators (NDRNG), identifies FIPS 140 requirements for tests, etc.

IG 7.13 covers cryptographic key strength as modified by an entropy estimate. For example, the entropy has to provide at least 112 bits of security strength, or the associated algorithm and key shall not be used in the approved mode of operation.
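
A worked example of how I read that (my illustration, not from the talk): if a module generates a 256-bit AES key but its entropy source is only credited with 100 bits of entropy, the effective strength is

    \min(256, 100) = 100 \text{ bits} < 112 \text{ bits}

so that key could not be used in the approved mode of operation; credit the source with 112 or more bits and it could.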

But the basic problem - entropy standards and test methods do not yet exist. How can a vendor determine and document an estimate of their entropy? How do we back up our claims?

There are also different concerns to consider if you are using an internal (to your boundary) source of entropy or an external (to your boundary) source for entropy.


ICMC: Validation of Cryptographic Protocol Implementations

Juan Gonzalez Nieto, Technical Manager, BAE Systems Applied Intelligence

FIPS 140-2 and its Annexes do not cover protocol security, but the goal of this standard (and the organizations controlling it) is to provide better crypto implementations. If the protocol around the crypto has issues, your crypto cannot protect you.

Mr. Nieto's problematic protocol example is TLS - he showed us a slide with just the vulns of the last 5 years... it ran off of the page (and the font was not that large....).

One of the issues is the complexity of the protocol. From a cryptographer's point of view, it's simple: RSA key transport or signed Diffie-Hellman, plus encryption. In reality, it's a huge collection of RFCs that is difficult to put together.

TLS/SSL has been around since 1995, with major revisions every few years (TLS 1.3 is currently in draft). The basics of TLS are a handshake protocol and a record layer. Sounds simple, but there are so many moving parts: key exchange + signature + encryption + MAC... and all of those have many possible options (a single cipher suite name like TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 pins down the key exchange, the signature algorithm, the cipher, and the hash all at once). When you combine all of those permutations, you end up with a horrifyingly long and complicated list (an entertainingly cramped slide results). :)

But where are the vulnerabilities showing up? Answer: everywhere (another hilarious slide ensues). Negotiation protocol, applications, libraries, key exchange, etc... all the places.

Many of the TLS/SSL cipher suites contain primitives that are vulnerable to cryptanalytic attacks and are not allowed by FIPS 140-2, like DES, MD5, SHA-1 (for signing), RC2, RC4, GOST, Skipjack...

The RSA key transport is happening with RSA PKCS#1 v1.5 - but that's not allowed by FIPS 140-2, except for key transport. (See Bleichenbacher, 1998.)

There are mitigations for the Bleichenbacher attack, but as of this summer's USENIX Security conference... they're not great anymore. So, really, do not use static RSA key transport (as proposed in the TLS 1.3 draft). Recommendation: FIPS 140 should not allow PKCS#1 v1.5 for key transport. People should use RSA-OAEP for key transport (which is already approved).
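
For illustration, wrapping a symmetric key with RSA-OAEP instead of PKCS#1 v1.5 is essentially a one-flag change in most libraries. A rough sketch with OpenSSL's EVP interface - the function name wrap_key_oaep and the assumption that peer_pubkey already holds the recipient's RSA public key are mine, and error handling is trimmed down:

    #include <openssl/evp.h>
    #include <openssl/rsa.h>

    /* Wrap a symmetric key with RSA-OAEP rather than PKCS#1 v1.5.
       On entry *outlen is the size of out; on success it holds the
       wrapped-key length. */
    int wrap_key_oaep(EVP_PKEY *peer_pubkey,
                      const unsigned char *key, size_t keylen,
                      unsigned char *out, size_t *outlen)
    {
        EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new(peer_pubkey, NULL);
        if (ctx == NULL)
            return -1;
        if (EVP_PKEY_encrypt_init(ctx) <= 0 ||
            EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_OAEP_PADDING) <= 0 ||
            EVP_PKEY_encrypt(ctx, out, outlen, key, keylen) <= 0) {
            EVP_PKEY_CTX_free(ctx);
            return -1;
        }
        EVP_PKEY_CTX_free(ctx);
        return 0;
    }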

Implementation issues, such as a predictable IV in AES-CBC mode, can open the door to plaintext recovery attacks. When the protocol is updated to mitigate one attack, such as the fix in TLS 1.1/1.2 for Vaudenay's (2002) padding oracle attack, often something else comes along to take advantage of the fix (Lucky 13, a timing-based attack).
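
The mitigation for the predictable-IV problem is mechanical: generate a fresh, unpredictable IV for every message instead of chaining or reusing one. A minimal sketch with OpenSSL (the function name encrypt_cbc is mine; it assumes a 256-bit key and a caller-provided 16-byte IV buffer, with error handling mostly omitted):

    #include <openssl/evp.h>
    #include <openssl/rand.h>

    /* AES-256-CBC with a fresh random IV per message (TLS 1.1+ behavior).
       The caller sends/stores the IV alongside the ciphertext. */
    int encrypt_cbc(const unsigned char *key,
                    const unsigned char *pt, int ptlen,
                    unsigned char *iv, unsigned char *ct, int *ctlen)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len = 0, fin = 0;

        if (ctx == NULL)
            return -1;
        if (RAND_bytes(iv, 16) != 1) {      /* never a fixed or predictable IV */
            EVP_CIPHER_CTX_free(ctx);
            return -1;
        }
        EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, ct, &len, pt, ptlen);
        EVP_EncryptFinal_ex(ctx, ct + len, &fin);   /* adds PKCS#7 padding */
        EVP_CIPHER_CTX_free(ctx);
        *ctlen = len + fin;
        return 0;
    }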

Sometimes FIPS 140-2 just can't help us - for example, with the POODLE (2014) attack on SSL 3.0 (mitigation: disable SSL 3.0), FIPS 140-2 wouldn't have helped. Authenticated encryption protocols are out of scope. Compression attacks like CRIME (2012)? Out of scope for FIPS 140-2.

Since Heartbleed, the CMVP has started asking labs to test for known vulnerabilities. But perhaps CMVP should address other well-known vulns, too?

Alas, most vulnerabilities occur outside of the cryptographic boundary of the module, so they are out of scope. The bigger the boundary, the more complex testing becomes. FIPS 140-2's implicit assumption - that if the crypto primitives are correct, then the protocols will likely be correct - is flawed.

Perhaps we need a new approach for validation of cryptography that includes approved protocols and protocol testing?

In my personal opinion, I would like to see some of that expanded - but WITHOUT including the protocols in the boundary. As FIPS 140-2 does not have any concept of flaw remediation, if something like Heartbleed had been inside the boundary (and missed by the testers), vendors would have found it - but would have had to break their validation in order to fix it.

Thursday, November 20, 2014

ICMC: Validating Sub-Chip Modules and Partial Cryptographic Accelerators

Carolyn French, Manager, CMVP, CSEC
Randall Easter, NIST, Security Testing Validation and Management Group

Partial Cryptographic Accelerators

Draft IG 1.9: a Hybrid Module is a crypto software module that takes advantage of "Security Relevant Components" on a chip.

But, that doesn't cover modern processors like Oracle's SPARC T4 and Intel's AES-NI - so there is a new IG (1.X): Processor Algorithm Accelerators (PAA). If the software module relies on the instructions provided by the PAA (a mathematical construct, not the complete algorithm as defined in NIST standards) and cannot act independently - it's still a hybrid. If there are issues with the hardware and the software could work on its own (or on other platforms), then it is NOT a hybrid. (YAY for clarification!)

Sub-Chip Modules

What is this? A complete implementation of a defined cryptographic module is implemented on part of a chip substrate. This is different from when a partial implementation of a defined cryptographic module is implemented on part of a chip substrate (see above).

A sub-chip module has a logical soft core. The cryptographic module has a contiguous and defined logical boundary with all crypto contained within. During physical placement, the crypto gates are scattered. Testing at the logical soft core boundary does not verify correct operation after synthesis and placement.

There are a lot of requirements in play here for these sub-chip modules. There is a physical boundary and a logical boundary. The physical boundary is around a single chip. The logical boundary will represent the collection of physical circuitry that was synthesized from the high level VHDL soft core cryptographic models.

Porting is a bit more difficult here - the soft core can be re-used, unchanged, and embedded in other single-chip constructs - this requires operational regression testing. This can be done at all levels, as long as other requirements are met.

If you have multiple disjoint sub-chip crypto... you can still do this, but it will result in two separate cryptographic modules/boundaries.

What if there are several soft cores, and they want to talk to each other? If I have several different disjoint software modules that are each validated and on the same physical device, we allow them to exchange keys in the clear. So, why not here? As long as the keys are transferred directly, and don't take a trip outside the chip through an intermediary.

As chip densities increase, we're going to see more of these cores on one chip.




ICMC: FIPS 140-2 Implementation Guidance 9.10: What is a Software Library and How to Engineer It for Compliance?

Apostol Vassilev, Cybersecurity Expert, Computer Security Division, NIST, Staff Member, CMVP

Why did we come up with IG 9.10 [Power-On Self Tests]? There were many open questions about how software libraries fit into the standard. In particular, CMVP did not allow static libraries - but they existed. We needed to come up with reasons to rationalize our decision, so we could spend time doing things other than debating.

Related to this are IG 1.7 (Multiple Approved Modes of Operation) and IG 9.5 (Module Initialization during Power-Up).

The standard is clear in this case - the power-up self tests SHALL be initiated automatically and SHALL not require operator intervention.  For a software module implemented as a library, an operator action/intervention is any action taken on the library by an application linking to it.

Let's look at the execution control flow to understand this problem. When the library is loaded by the OS loader, execution control is not with the library UNLESS special provisions are taken. Static libraries are embedded into the object code and behave differently.

How do we instrument a library? Default entry points are a well-known mechanism for operator-independent transfer of execution control to the library. They have been available for over 30 years, and exist for all types of libraries: static, shared, dynamic.

There is alternative instrumentation - in languages like C++, C#, and Java you can leverage things like static constructors, which are executed automatically when the library containing them is loaded.

What if the OS does not provide a DEP mechanism and the module is in a procedural language like C? You can consider switching to C++ or using a C++ wrapper, so that you can get this functionality. Lucky for my team, Solaris supports _init() functions. :)
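
To make that concrete, here is roughly what a default entry point looks like in C with GCC/Clang's constructor attribute; on Solaris, an _init() function or #pragma init gives the same effect. This is my own sketch - the names fips_default_entry_point, post_passed, and run_power_up_self_tests are made up, with the last one standing in for the module's actual KATs and integrity check:

    /* Run the FIPS power-up self-tests as soon as the library is loaded,
       with no action required from the application linking against it. */
    static int post_passed = 0;

    /* Stand-in for the module's real known-answer tests and integrity check. */
    static int run_power_up_self_tests(void)
    {
        return 1;   /* pretend everything passed */
    }

    __attribute__((constructor))
    static void fips_default_entry_point(void)
    {
        post_passed = run_power_up_self_tests();
    }

    /* Every API entry point then checks post_passed before doing any crypto. */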

Implementation Guidance 9.5 and 9.10 live in harmony - you need to understand and implement both correctly.

Static libraries can now be validated with the new guidance.

ICMC: Roadmap to Testing of New Algorithms

Sharon Keller, Director CAVP, NIST
Steve (?), CAVP, NIST

After NIST picks a new algorithm, the CAVP takes over and figures out how to test it. They need to evaluate the algorithm from top to bottom - identify the mathematical formulas, components, etc.

The CAVP develops and implements the algorithm validation test suite. Which requirements are addressable at this level? They develop the test metrics for the algorithm and exercise all mathematical elements of the algorithm. If something fails - why? Is there an error in the algorithm, an intentional failure - or is there an error in the test?

The next step is to develop user documentation and guidance, called the validation system (VS) document, which documents the test suite and provides instructions on implementing the validation tests. There is cross validation to make sure that both teams come up with the same answers - a good way to check their own work.

The basic tests are Known Answer Tests (KAT), Multi-block Message Tests (MMT), and Monte Carlo Tests. KATs are designed to verify the components of algorithms. MMTs test algorithms where there may be chaining of information from one block to the next, and make sure it still works. The Monte Carlo Tests are exhaustive, checking for flaws in the implementation under test or race conditions.
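
As a toy example of the KAT idea (my own sketch, using the AES-128 example vector from FIPS 197 Appendix C.1, with OpenSSL standing in as the implementation under test; the function name aes128_kat is mine):

    #include <string.h>
    #include <openssl/evp.h>

    /* Known Answer Test: encrypt a fixed block with a fixed key and compare
       against the expected ciphertext from FIPS 197, Appendix C.1. */
    int aes128_kat(void)
    {
        static const unsigned char key[16] = {
            0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,
            0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f };
        static const unsigned char pt[16] = {
            0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,
            0x88,0x99,0xaa,0xbb,0xcc,0xdd,0xee,0xff };
        static const unsigned char expect[16] = {
            0x69,0xc4,0xe0,0xd8,0x6a,0x7b,0x04,0x30,
            0xd8,0xcd,0xb7,0x80,0x70,0xb4,0xc5,0x5a };
        unsigned char ct[16];
        int len = 0, fin = 0;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

        EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL);
        EVP_CIPHER_CTX_set_padding(ctx, 0);     /* exactly one block, no padding */
        EVP_EncryptUpdate(ctx, ct, &len, pt, (int)sizeof pt);
        EVP_EncryptFinal_ex(ctx, ct + len, &fin);
        EVP_CIPHER_CTX_free(ctx);
        return memcmp(ct, expect, sizeof expect) == 0 ? 0 : -1;
    }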

Additionally, they need to test the boundaries - what happens if you encrypt the empty string? What if we send in negative inputs?

There are many documents for validation testing - one for each algorithm or algorithm mode.

The goals of all these tests? Cover all the nooks and crannies - prevent hackers from taking advantage of poorly written code.

Currently, the CAVP is working on tests for SP 800-56C, SP 800-132, and SP 800-56A (Rev 2).

In the future, there will be tests for SP 800-56B (Rev 1), SP 800-106, and SP 800-38A. Which of these is most important for you to get completed?

Upcoming algorithms that are still in draft: FIPS 202 (Draft) for SHA-3, SP 800-90A (Rev 2) for DRBGs, SP 800-90B for entropy sources, and SP 800-90C for construction of RBGs. Ms. Keller has learned the hard way - her team cannot write tests for algorithms until they are a published standard.


ICMC: Is Anybody Listening? Business Issues in Cryptographic Implementations?

Mary Ann Davidson, Chief Security Officer, Oracle Corporation

A tongue-in-cheek title... of course we're hoping nobody is listening! While Ms. Davidson is not a lobbyist, she does spend time reading a lot of legislation - and tries not to pull out all of her hair.

There are business concerns around this legislation - we have to worry about how we comply, doing it right, etc. Getting it right is very important at Oracle - that's why we don't let our engineers write their own crypto [1] - we leverage known good cryptographic libraries. Related to that, validations are critical to show we're doing this right. There should not be exceptions.

Security vulnerabilities... the last 6 months have been exhausting. What is going on? We are all leveraging open source we think is safe.

We would've loved if we could've said that we knew where all of our OpenSSL libraries were when we heard about Heartbleed. But, we didn't - it took us about 3 weeks to find them all! We all need to do better: better at tracking, better at awareness, better at getting the fixes out.

It could be worse - old source code doesn't go away, it just becomes unsupportable.  Nobody's customer wants to hear, "Sorry, we can't patch your system because that software is so old."

Most frustrating?  Everyone is too excited to tell the world about the vulnerability they found - it doesn't give vendors time to address this before EVERYONE knows how to attack the vulnerability. Please use responsible disclosure.

This isn't religion - this is a business problem! We need reliable and responsible disclosures. We need to have good patching processes in place in advance so we are prepared. We need our open source code analyzed - don't assume there's "a thousand eyes" looking at it.

Ms. Davidson joked about her ethical hacking team. What does that mean? When they hack into our payroll system, they can only change her title - not her pay scale. How do you think she got to be CSO? ;-)

Customers are too hesitant to upgrade - but newer really is better! We are smarter now than we used to be, and sorry, we just cannot patch your thousand-year-old system. We can't - you need to upgrade! The algorithms are better, the software is more secure - we've learned, and you need to upgrade to reap those benefits.

But we need everyone to work with us - we cannot have software sitting in someone's queue for 6 months (or more) to get our validation done. That diminishes the value of our return - 6 months is a large chunk of a product's life cycle. Customers are stuck on these old versions of software, waiting for our new software to get its gold star. Six weeks? Sure - we can do that. Six months? No.

Ms. Davidson is not a lobbyist, but she's willing to go to Capitol Hill to get more money for NIST. Time has real money value. How do we fix this?

What's a moral hazard? Think about the housing market - people were making bad investments, buying houses they couldn't afford to try to flip, and it didn't work out. We rewarded those people, but not those who bought what they could afford (or didn't buy at all) - we rewarded their bad risk taking.

Can we talk with each other? NIST says "poTAYto", NIAP says "poTAHto" - why aren't they talking? FIPS 140-2 requires Common Criteria validations for the underlying OS for higher levels of validation - but NIAP said they don't want to do those validations.

We need consistency in order to do our jobs. Running around trying to satisfy the Knights Who Say Ni is not a good use of time.

And... the entropy of... entropy requirements. These are not specific; this is not "I know it when I see it". And why is NIAP getting into the entropy business? That's the realm of NIST/FIPS.

Ms. Davidson ends with a modest proposal: Don't outsource your core mission.  Consultants are not neutral - and she's disturbed by all of the consultants she's seeing on The Hill.  They are not neutral - they will act in their own economic interest. How many times can they charge you for coming back and asking for clarification? Be aware of that.

She also requests that we promote the private-public partnership. We need to figure out what the government is actually worried about - how does telling them the names of every individual who worked on code help with their mission? It's a great onus on business, and we're international companies - other countries won't like us sharing data about their citizens. Think about what we're trying to accomplish, and what is feasible for business to handle.

Finally, let's have "one security world order" - this is so much better than the Balkanization of security.  This ISO standard (ISO 19790) is a step in the right direction. Let's work together on the right solutions.

[1] Unless you're one of the teams at Oracle, like mine, whose job it is to write the cryptographic libraries for use by the rest of the organization. But even then, we do NOT invent our own algorithms. That would just be plain silly.