Friday, November 21, 2014

ICMC: Entropy Sources - Recommendations for a Scalable, Repeatable and Comprehensive Evaluation Process

Sonu Shankar, Software Engineer, Cisco Systems
Alicia Squires, Manager, Global Certifications Team, Cisco
Ashit Vora, Lab Director and Co-Founder, Acumen Security

When you're evaluating entropy, your process has to be scalable, repeatable and comprehensive... well, comprehensive in a way that doesn't outweigh the assurance level you're going for. Ideally, the method used for the evaluation would be valid for both FIPS 140 and Common Criteria.

Could we have the concept of a "module" certificate for entropy sources?

Let's think about the process for how we'd get there. We'd have to look at the entropy source: covering min-entropy estimation, review of built-in health tests, built-in oversampling, and a high-level design review.

There are several schemes that cover entropy and how to test it. You need a well-documented description of the entropy source design, and you can leverage tools for statistical analysis of the raw entropy. It would be good to add statistical testing and heuristic analysis - but will vendors have the expertise to do this correctly?

How do you test for this?  First, you have to collect raw entropy - disabling all of the conditioners (no hashing, LFSR, etc.). That's not always possible, as many chips also do the conditioning, so you cannot get at the raw entropy. And if you can't get the raw entropy, then it's not worth testing - as long as you've got good conditioning, the output will look like good entropy.

In order to run this test, you need at least one file of entropy containing 1 million symbols, and the file has to be in binary format.

When it comes time to look at the results, the main metric is min-entropy.
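
The talk didn't walk through the math, but here's a minimal sketch of the most-common-value style of min-entropy estimate over a raw sample, treating each byte as one symbol (the sample data here is constructed in memory for illustration; a real evaluation would read the 1-million-symbol binary file mentioned above):

```python
import math
from collections import Counter

def min_entropy_per_symbol(symbols):
    """Most-common-value estimate: H_min = -log2(p_max), where p_max is
    the observed frequency of the likeliest symbol."""
    counts = Counter(symbols)
    p_max = max(counts.values()) / len(symbols)
    return -math.log2(p_max)

# A perfectly uniform byte source scores 8 bits/byte; a biased one far less.
uniform = bytes(range(256)) * 4096          # every byte value equally often
biased = bytes([0] * 900 + [1] * 100)       # 90% zeros
print(min_entropy_per_symbol(uniform))      # 8.0
print(min_entropy_per_symbol(biased))       # ~0.152 bits/byte
```

This is why conditioned output is useless for the estimate: after hashing, even the biased stream would look close to 8 bits/byte.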

You need to be careful, though, not to oversample from your entropy source and drain it. You need to be aware of how much entropy it can provide and use it appropriately. [* Not sure if I caught this correctly, as what I heard and saw didn't quite sync, and the slide moved away too quickly]

When it comes to reviewing noise source health tests - you need to catch catastrophic errors and reductions in entropy quality. This is your first line of defense against side channel attacks. This may be implemented in software pre-DRBG or built into the source.
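
The talk didn't give specifics, but SP 800-90B's repetition count test is one example of such a built-in health test: it trips when one raw value repeats more times in a row than a healthy source could plausibly produce. A rough sketch (the class name and cutoff are mine, for illustration):

```python
class RepetitionCountTest:
    """Sketch of an SP 800-90B-style repetition count test: fail when one
    value repeats `cutoff` or more times in a row, which signals a stuck
    or catastrophically degraded noise source."""
    def __init__(self, cutoff):
        self.cutoff = cutoff
        self.last = None
        self.run = 0

    def check(self, sample):
        if sample == self.last:
            self.run += 1
        else:
            self.last, self.run = sample, 1
        return self.run < self.cutoff   # False means: fail, stop output

rct = RepetitionCountTest(cutoff=5)
healthy = all(rct.check(s) for s in [3, 1, 4, 1, 5, 9, 2, 6])
stuck = all(rct.check(s) for s in [7] * 10)
print(healthy, stuck)  # True False
```

A real cutoff would be derived from the source's claimed min-entropy and an acceptable false-alarm rate, not picked by hand.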

Ideally, these entropy generators could have their own certificate, so that third parties could use someone else's hardware as an entropy source - without having to worry about difficult vendor NDA issues.

ICMC: Entropy: A FIPS and Common Criteria Perspective Including SP 800-90B (G22A)

Gary Granger, AT&E Technical Director, Leidos

Random values are required for applications using cryptography (such as for crypto keys, nonces, etc)

There are two basic strategies for generating random bits - a non-deterministic random bit generator (NDRBG) and a deterministic random bit generator (DRBG). Both strategies depend on unpredictability.
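
A DRBG is fully determined by its seed, so all of its output unpredictability comes from the entropy fed in at seeding time. A toy hash-based generator illustrates the principle (this is not the SP 800-90A Hash_DRBG construction, just a sketch of determinism):

```python
import hashlib

class ToyDRBG:
    """Toy deterministic generator: output depends only on the seed.
    (Illustration only -- not the SP 800-90A Hash_DRBG construction.)"""
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.state = hashlib.sha256(self.state).digest()
            out += self.state
        return out[:n]

# Same seed -> identical output stream: the seed's entropy is everything.
a, b = ToyDRBG(b"seed"), ToyDRBG(b"seed")
print(a.generate(16) == b.generate(16))  # True
```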

The entropy source is covered in NIST SP 800-90B (design and testing requirements). The entropy source model: noise source, conditioning component, and health tests.

How do we measure entropy? A noise source sample represents a discrete random variable. There are several measures of entropy based on a random variable's probability distribution, like Shannon entropy or min-entropy. NIST SP 800-90B specifies requirements using min-entropy (a conservative estimate that facilitates entropy estimation).
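
The "conservative" point is easy to see numerically: min-entropy never exceeds Shannon entropy, because it looks only at the attacker's best single guess. A sketch, using an example distribution of my own choosing (one symbol at 50%, fifteen more sharing the rest):

```python
import math

def shannon_entropy(probs):
    """Average surprise: -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """Worst case: -log2 of the single most likely outcome."""
    return -math.log2(max(probs))

probs = [0.5] + [0.5 / 15] * 15
print(round(shannon_entropy(probs), 3))  # ~2.953 bits
print(round(min_entropy(probs), 3))      # 1.0 bit
```

An attacker guessing the top symbol succeeds half the time, so crediting this source with ~3 bits per sample would be badly optimistic - min-entropy's 1 bit is the honest figure.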
FIPS has additional implications for RNGs in its implementation guidance, specifically IG 7.11. It defines non-deterministic random number generators (NDRNG), identifies FIPS 140 requirements for tests, etc.

IG 7.13 covers cryptographic key strength modified by an entropy estimate. For example, the entropy has to support at least 112 bits of security strength, or the associated algorithm and key shall not be used in the approved mode of operation.
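
The rule amounts to capping a key's claimed strength at the entropy actually behind it, then gating on 112 bits. A sketch of that bookkeeping (the function names are mine, not from the IG):

```python
def effective_strength(algorithm_strength_bits, entropy_estimate_bits):
    """A key is no stronger than the entropy that generated it."""
    return min(algorithm_strength_bits, entropy_estimate_bits)

def allowed_in_approved_mode(algorithm_strength_bits, entropy_estimate_bits,
                             floor=112):
    """Gate on the 112-bit minimum security strength."""
    return effective_strength(algorithm_strength_bits,
                              entropy_estimate_bits) >= floor

# An AES-256 key seeded from a source with only 100 bits of entropy
# fails the gate, despite the 256-bit algorithm.
print(allowed_in_approved_mode(256, 100))  # False
print(allowed_in_approved_mode(128, 128))  # True
```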

But the basic problem: entropy standards and test methods do not yet exist. How can a vendor determine and document an estimate of their entropy? How do we back up our claims?

There are also different concerns to consider if you are using an internal (to your boundary) source of entropy or an external (to your boundary) source for entropy.

ICMC: Validation of Cryptographic Protocol Implementations

Juan Gonzalez Nieto, Technical Manager, BAE Systems Applied Intelligence

FIPS 140-2 and its Annexes do not cover protocol security, but the goal of this standard (and the organizations controlling it) is to provide better crypto implementations.  If the protocol around the crypto has issues, your crypto cannot protect you.

Mr. Nieto's problematic protocol example is TLS - he showed us a slide with just the vulns of the last 5 years... it ran off the page (and the font was not that large...).

One of the issues is the complexity of the protocol. From a cryptographer's point of view, it's simple: RSA key transport or signed Diffie-Hellman + encryption. In reality, it's a huge collection of RFCs that is difficult to put together.

TLS/SSL has been around since 1995, with major revisions every few years (TLS 1.3 is currently in draft). The basics of TLS are a handshake protocol and a record layer. Sounds simple, but there are so many moving parts: key exchange + signature + encryption + MAC... and all of those have many possible options. When you combine all of those permutations, you end up with a horrifyingly long and complicated list (an entertainingly cramped slide results). :)
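
The combinatorial explosion is easy to reproduce: even a small menu of options per slot multiplies out quickly. A sketch with illustrative option lists (these are deliberately short; the real TLS registries are much longer):

```python
from itertools import product

# Illustrative option lists only -- the real IANA registries are longer.
key_exchange = ["RSA", "DHE", "ECDHE", "DH", "ECDH"]
signature    = ["RSA", "DSA", "ECDSA"]
encryption   = ["AES-128-CBC", "AES-256-CBC", "AES-128-GCM", "3DES", "RC4"]
mac          = ["MD5", "SHA1", "SHA256", "SHA384"]

suites = list(product(key_exchange, signature, encryption, mac))
print(len(suites))  # 5 * 3 * 5 * 4 = 300 combinations from tiny lists
```

Three hundred combinations from four short lists - hence the cramped slide.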

But where are the vulnerabilities showing up?  Answer: everywhere (another hilarious slide ensues). The negotiation protocol, applications, libraries, key exchange, etc... all the places.

Many of the TLS/SSL cipher suites contain primitives that are vulnerable to cryptanalytic attacks and are not allowed by FIPS 140-2, like DES, MD5, SHA-1 (for signing), RC2, RC4, GOST, SKIPJACK...

The RSA key transport is happening with RSA PKCS#1 v1.5 - but that's not allowed by FIPS 140-2, except for key transport. (See Bleichenbacher 1998.)

There are mitigations for the Bleichenbacher attack, but as of this summer's USENIX Security conference... they're not great anymore. So, really, do not use static RSA key transport (its removal is already proposed in the TLS 1.3 draft). Recommendation: FIPS 140 should not allow PKCS#1 v1.5 for key transport. People should use RSA-OAEP for key transport (which is already approved).

Implementation issues, such as a predictable IV in AES-CBC mode, can expose plaintext recovery attacks. When the protocol is updated to mitigate, such as the fix in TLS 1.1/1.2 for Vaudenay's (2002) padding oracle attack, often something else comes along to take advantage of the fix (Lucky 13, a timing-based attack).
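
Why a predictable CBC IV is dangerous: an attacker who knows the next IV can confirm a guess at an earlier plaintext block by submitting a chosen plaintext that cancels the IVs out. A sketch with a toy one-way "block cipher" standing in for AES (everything here - the stand-in cipher, keys and IVs - is illustrative, not real TLS):

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    """Stand-in for a single AES block encryption (toy, for illustration)."""
    return hashlib.sha256(key + block).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"k" * 16
# Victim CBC-encrypts a secret block P under IV1: C = E(K, P XOR IV1).
iv1, secret = b"\x01" * 16, b"attack at dawn!!"
c_victim = toy_block_encrypt(key, xor(secret, iv1))

# With a PREDICTABLE next IV (iv2), the attacker submits
# P' = guess XOR iv1 XOR iv2, so the cipher sees P' XOR iv2 = guess XOR iv1.
iv2 = b"\x02" * 16
guess = b"attack at dawn!!"
p_prime = xor(xor(guess, iv1), iv2)
c_probe = toy_block_encrypt(key, xor(p_prime, iv2))
print(c_probe == c_victim)  # True -- the guess is confirmed
```

With an unpredictable IV (the TLS 1.1/1.2 fix), the attacker cannot build P' ahead of time and the check falls apart.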

Sometimes FIPS 140-2 just can't help us - for example, with the POODLE (2014) attack on SSL 3.0 (mitigation: disable SSL 3.0), FIPS 140-2 wouldn't have helped. Authenticated encryption protocols are out of scope.  Compression attacks like CRIME (2012)? Out of scope for FIPS 140-2.

Since Heartbleed, the CMVP has started asking labs to test known vulnerabilities. But, perhaps CMVP should address other well-known vulns?

Alas, most vulnerabilities occur outside of the cryptographic boundary of the module, so they are out of scope.  The bigger the boundary, the more complex testing becomes.  FIPS 140-2's implicit assumption - that if the crypto primitives are correct, then the protocols will likely be correct - is flawed.

Perhaps we need a new approach for validation of cryptography that includes approved protocols and protocol testing?

In my personal opinion, I would like to see some of that expanded - but WITHOUT including the protocols in the boundary. As FIPS 140-2 does not have any concept of flaw remediation, if something like Heartbleed had been inside the boundary (and missed by the testers), vendors would have found it - but would have had to break their validation in order to fix it.

Thursday, November 20, 2014

ICMC: Validating Sub-Chip Modules and Partial Cryptographic Accelerators

Carolyn French, Manager, CMVP, CSEC
Randall Easter, NIST, Security Testing Validation and Management Group

Partial Cryptographic Accelerators

Draft IG 1.9: Hybrid Module is crypto software module that takes advantage of "Security Relevant Components" on a chip.

But, that doesn't cover modern processors like Oracle's SPARC T4 and Intel's AES-NI - so there is a new IG (1.X): Processor Algorithm Accelerators (PAA).  If the software module relies on the instructions provided by the PAA (a mathematical construct and not the complete algorithm as defined in NIST standards) and cannot act independently - it's still a hybrid.  If there are issues with the hardware and the software could work on its own (or on other platforms), then it is NOT a hybrid. (YAY for clarification!)

Sub-Chip Modules

What is this? A complete implementation of a defined cryptographic module is implemented on part of a chip substrate.  This is different than when a partial implementation of a defined cryptographic module is implemented on part of a chip substrate (see above).

A sub-chip has a logical soft core. The cryptographic module has a contiguous and defined logical boundary with all crypto contained within. During physical placement, the crypto gates are scattered. Testing at the logical soft core boundary does not verify correct operation after synthesis and placement.

There are a lot of requirements in play here for these sub-chip modules. There is a physical boundary and a logical boundary. The physical boundary is around a single chip. The logical boundary will represent the collection of physical circuitry that was synthesized from the high level VHDL soft core cryptographic models.

Porting is a bit more difficult here - the soft core can be re-used, unchanged, and embedded in other single-chip constructs - this requires operational regression testing.  This can be done at all levels, as long as other requirements are met.

If you have multiple disjoint sub-chip crypto... you can still do this, but it will result in two separate cryptographic modules/boundaries.

What if there are several soft cores, and they want to talk to each other? If I have several different disjoint software modules that are both validated and on the same physical device, we allow them to exchange keys in the clear. So, why not? As long as the keys are being directly transferred, and not routed outside of the chip through an intermediary.

As chip densities increase, we're going to see more of these cores on one chip.

ICMC: FIPS 140-2 Implementation Guidance 9.10: What is a Software Library and How to Engineer It for Compliance?

Apostol Vassilev, Cybersecurity Expert, Computer Security Division, NIST, Staff Member, CMVP

Why did we come up with IG 9.10 [Power-On Self-Tests]? There were many open questions about how software libraries fit into the standard.  In particular, CMVP did not allow static libraries - but they existed. We needed to come up with reasons to rationalize our decision, so we could spend time doing things other than debating.

Related to this are IG 1.7 (Multiple Approved Modes of Operation) and IG 9.5 (Module Initialization during Power-Up).

The standard is clear in this case - the power-up self tests SHALL be initiated automatically and SHALL not require operator intervention.  For a software module implemented as a library, an operator action/intervention is any action taken on the library by an application linking to it.

Let's look at the execution control flow to understand this problem. When the library is loaded by the OS loader, execution control is not with the library UNLESS special provisions are taken. Static libraries are embedded into the object code and behave differently.

How do we instrument a library? Default entry points are a well-known mechanism for operator-independent transfer of execution control to the library. They have been available for over 30 years, and exist for all types of libraries: static, shared, dynamic.

There are alternative instrumentations - in languages like C++, C# and Java, you can leverage things like static constructors that are executed automatically when the library containing them is loaded.

What if the OS does not provide a default entry point mechanism and the module is in a procedural language like C?  You can consider switching to C++ or using a C++ wrapper, so that you can get this functionality.  Lucky for my team, Solaris supports _init() functions. :)
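
The talk's context is C libraries, but the principle - self-tests fire automatically at load, before any caller can reach the crypto, with no operator action - can be sketched in Python, where module-level code plays the role of a default entry point or static constructor (an analogy of mine, not IG 9.10's actual mechanism; `fipsy_lib` is a hypothetical module name):

```python
# fipsy_lib.py -- hypothetical module: the power-up self-test runs at
# import time, the way a default entry point or static constructor
# fires at library load, before any API function can be called.
import hashlib

def _power_on_self_test():
    # Known-answer test: SHA-256 of b"abc", the FIPS 180-4 test vector.
    expected = ("ba7816bf8f01cfea414140de5dae2223"
                "b00361a396177a9cb410ff61f20015ad")
    if hashlib.sha256(b"abc").hexdigest() != expected:
        raise RuntimeError("power-up self-test failed: entering error state")

_power_on_self_test()   # executes on import -- no operator action required

def digest(data: bytes) -> str:
    """The 'approved service', reachable only if the self-test passed."""
    return hashlib.sha256(data).hexdigest()
```

If the self-test fails, the import itself raises, so the library never hands out services from an error state - which is exactly the property the SHALLs in the standard are after.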

Implementation Guidance 9.5 and 9.10 live in harmony - you need to understand and implement both correctly.

Static libraries can now be validated with the new guidance.

ICMC: Roadmap to Testing of New Algorithms

Sharon Keller, Director CAVP, NIST
Steve (?), CAVP, NIST

After NIST picks a new algorithm, the CAVP takes over and figures out how to test it.  They need to evaluate the algorithm from top to bottom - identify the mathematical formulas, components, etc.

The CAVP develops and implements the algorithm validation test suite. Which requirements are addressable at this level? They develop the test metrics for the algorithm and exercise all mathematical elements of the algorithm. If something fails - why?  Is there an error in the algorithm, or an intentional failure - or is there an error in the test?

The next step is to develop user documentation and guidance, called the validation system (VS) document, which documents the test suite and provides instructions on implementing the validation tests.  There is cross-validation to make sure that both teams come up with the same answers - a good way to check their own work.

The basic tests are Known Answer Tests (KAT), Multi-block Message Tests (MMT), and Monte Carlo Tests.  KATs are designed to verify the components of algorithms. MMTs test algorithms where there may be chaining of information from one block to the next, and make sure it still works. The Monte Carlo Tests are exhaustive, checking for flaws in the implementation or race conditions.
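
For a hash, the three styles look roughly like this. The KAT vector is the published FIPS 180-4 SHA-256 value for "abc"; the multi-block and Monte Carlo parts are simplified sketches of the idea, not the exact CAVP procedures:

```python
import hashlib

# Known Answer Test: one input, one published expected output.
assert hashlib.sha256(b"abc").hexdigest() == (
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")

# Multi-block Message Test idea: feed data in chunks that force chaining
# across internal 64-byte blocks, and compare against the one-shot answer.
h = hashlib.sha256()
for chunk in (b"a" * 64, b"a" * 64, b"a" * 72):
    h.update(chunk)
assert h.hexdigest() == hashlib.sha256(b"a" * 200).hexdigest()

# Monte Carlo idea: feed each output back in for many iterations, so
# rare state-handling bugs get a chance to surface.
d = hashlib.sha256(b"seed").digest()
for _ in range(10000):
    d = hashlib.sha256(d).digest()
print(len(d))  # 32
```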

Additionally, they need to test the boundaries - what happens if you encrypt the empty string?  What if we send in negative inputs?

There are many documents for validation testing - one for each algorithm or algorithm mode.

The goals of all these tests? Cover all the nooks and crannies - prevent hackers from taking advantage of poorly written code.

Currently, the CAVP is working on tests for SP 800-56C, SP 800-132 and SP800-56A (Rev2).

In the future, there will be tests for SP 800-56B (Rev1), SP 800-106 and SP 800-38A.  Which of these is most important for you to have tests completed for?

Upcoming algorithms that are still in draft: FIPS 202 (Draft) for SHA-3, SP 800-90A (Rev2) for DRBGs, SP 800-90B for entropy sources and SP 800-90C for construction of RBGs.  Ms. Keller has learned the hard way - her team cannot write tests for algorithms until they are a published standard.

ICMC: Is Anybody Listening? Business Issues in Cryptographic Implementations?

Mary Ann Davidson, Chief Security Officer, Oracle Corporation

A tongue in cheek title... of course we're hoping nobody is listening!  While Ms. Davidson is not a lobbyist, she does spend time reading a lot of legislation - and tries not to pull out all of her hair.

There are business concerns around this legislation - we have to worry about how we comply, doing it right, etc.  Getting it right is very important at Oracle - that's why we don't let our engineers write their own crypto [1] - we leverage known-good cryptographic libraries.  Related to that, validations are critical to show we're doing this right. There should not be exceptions.

Security vulnerabilities... the last 6 months have been exhausting. What is going on?  We are all leveraging opensource we think is safe.

We would've loved if we could've said that we knew where all of our OpenSSL libraries were when we heard about Heartbleed. But, we didn't - it took us about 3 weeks to find them all! We all need to do better: better at tracking, better at awareness, better at getting the fixes out.

It could be worse - old source code doesn't go away, it just becomes unsupportable.  Nobody's customer wants to hear, "Sorry, we can't patch your system because that software is so old."

Most frustrating?  Everyone is too excited to tell the world about the vulnerability they found - it doesn't give vendors time to address this before EVERYONE knows how to attack the vulnerability. Please use responsible disclosure.

This isn't religion - this is a business problem! We need reliable and responsible disclosures. We need to have good patching processes in place in advance so we are prepared. We need our opensource code analyzed - don't assume there's "a thousand eyes" looking at it.

Ms. Davidson joked about her ethical hacking team. What does that mean? When they hack into our payroll system, they can only change her title - not her pay scale. How do you think she got to be CSO? ;-)

Customers are too hesitant to upgrade - but newer really is better! We are smarter now than we used to be, and sorry, we just cannot patch your thousand-year-old system. We can't - you need to upgrade! The algorithms are better, the software is more secure - we've learned, and you need to upgrade to reap those benefits.

But we need everyone to work with us - we cannot have software sitting in someone's queue for 6 months (or more) to get our validation done.  That diminishes our value of return - 6 months is a large chunk of a product's life cycle. Customers are stuck on these old versions of software, waiting for our new software to get its gold star. Six weeks? Sure - we can do that. Six months? No.

Ms. Davidson is not a lobbyist, but she's willing to go to Capitol Hill to get more money for NIST. Time has real money value. How do we fix this?

What's a moral hazard? Think about the housing market - people were making bad investments, buying houses they couldn't afford in hopes of flipping them, and it didn't work out. We rewarded those people, but not those who bought what they could afford (or didn't buy at all) - we rewarded their bad risk-taking.

Can we talk with each other?  NIST says "poTAYto", NIAP says "poTAHto" - why aren't they talking?  FIPS 140-2 requires Common Criteria validations of the underlying OS for higher levels of validation - but NIAP said they don't want to do those validations.

We need consistency in order to do our jobs. Running around trying to satisfy the Knights Who Say Ni is not a good use of time.

And... the entropy of... entropy requirements.  These are not specific; this should not be "I know it when I see it". And why is NIAP getting into the entropy business? That's the realm of NIST/FIPS.

Ms. Davidson ends with a modest proposal: Don't outsource your core mission.  Consultants are not neutral - and she's disturbed by all of the consultants she's seeing on The Hill.  They will act in their own economic interest. How many times can they charge you for coming back and asking for clarification? Be aware of that.

She also requests that we promote the private-public partnership.  We need to figure out what the government is actually worried about - how does telling them the names of every individual that worked on code help with their mission? It's a great onus on business, and we're international companies - other countries won't like us sharing data about their citizens. Think about what we're trying to accomplish, and what is feasible for business to handle.

Finally, let's have "one security world order" - this is so much better than the Balkanization of security.  This ISO standard (ISO 19790) is a step in the right direction. Let's work together on the right solutions.

[1] Unless you're one of the teams at Oracle, like mine, whose job it is to write the cryptographic libraries for use by the rest of the organization. But even then, we do NOT invent our own algorithms. That would just be plain silly.

ICMC: Random Thoughts - Is True Randomness an Illusion?

Helmut Kurth, Chief Scientist, atsec information security

Illusions can be fun ... if we know they are an illusion.  They can make us feel good... even if they are bad.  They can make us think something is okay... when it's really broken.

There are some illusions we like in technology, like virtualization! The illusion is that we have more resources than we really have.  We can save money - that's good, right?  Some people think that virtualization provides more security - but does it really?  Vendors claim their virtualized systems are just like a real hardware box... but are they? Often not exactly. Something has to give, we should understand this.

How is entropy impacted once you virtualize the system?  Virtualization can change the timing; it may just behave differently. Either way, are we getting the same entropy on these systems?

Often, we make incorrect assumptions about things like timing and the similarity of a virtualized system to its true hardware counterpart.

For example, if you're using time as your entropy source - you may assume the lowest order bits are changing most frequently and will provide more actual entropy. But, what if this is not the true timer? What if a hypervisor is intercepting and interpreting the concept of "time" - what if the hypervisor should not be trusted?

Shouldn't you be able to trust your hypervisor?  Once someone has breached the hypervisor, they can do all kinds of evil things underneath your VM and you won't be able to easily detect it (as your OS will be unchanged).

For example, the RDRAND instruction can be intercepted by a hypervisor. This is a "feature" documented by Intel.  So, as a user of the VM, you think you're getting some pretty good entropy from RDRAND - but you're really getting poor entropy from your hypervisor. How could you detect this?  Intel's RDRAND is often used as the sole source of randomness with no DRNG postprocessing (like in OpenSSL), regardless of whether the "randomness" is being used for generating nonces or for generating cryptographic key material.

Assuming a compromised hypervisor, the bad guy can have the key used to generate the "random" sequences used in the RDRAND emulation.  He can use this to generate the different random streams.  He is able to get the nonce, which is transmitted in the clear.

Launching the attack requires installation of a hypervisor or the modification of a running hypervisor. Just one flaw that allows the execution of privileged code is all it takes.  The hacker, in this case, may have the session key even before the communicating parties have it! At this point, it doesn't matter what algorithm you're using - communications are essentially in the clear.

This attack is not that easy - it requires taking over the hypervisor (not easy), and once you have the hypervisor, you can do anything you want!  But, this isn't about taking down the machine, this is about eavesdropping undetected for any length of time.

This is a sneaky attack - virus scanners and host-based IDS will tell you everything is okey dokey!  It is independent of the OS or any applications (as many rely on RDRAND).

How can you protect yourself?  The basic solution is diversification - do not use a single source of entropy.  Use a different RNG for nonce generation and generation of key material, and use RDRAND for seeding (with other sources) rather than directly for generating critical security parameters.  Read the hardware documentation carefully - make sure you understand what you're getting.
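
Diversification can be as simple as conditioning several independent sources together, so no single source (RDRAND included) fully determines the seed. A sketch of the idea - `os.urandom` and the timing-jitter helper below are stand-ins of mine for whatever independent sources a real design would document:

```python
import hashlib, os, time

def gather_jitter(n=32):
    """Crude timing-jitter stand-in for a second, independent source."""
    samples = bytearray()
    for _ in range(n):
        t0 = time.perf_counter_ns()
        hashlib.sha256(b"x").digest()          # some work to time
        samples.append((time.perf_counter_ns() - t0) & 0xFF)
    return bytes(samples)

def mixed_seed():
    """Hash several sources together: an attacker who controls only one
    of them (e.g. a hypervisor-emulated RDRAND) can't predict the seed."""
    h = hashlib.sha256()
    h.update(os.urandom(32))                   # OS entropy pool
    h.update(gather_jitter())                  # timing jitter
    h.update(time.time_ns().to_bytes(8, "big"))
    return h.digest()

print(len(mixed_seed()))  # 32 bytes of conditioned seed material
```

This is the structure Darren Moffat's Solaris write-up (mentioned below) describes: hardware RNGs feed a mixing pool, they are never the sole, direct source of key material.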

Intel isn't going to fix this, as this isn't a bug... it's a feature!

Entropy analysis isn't just about the mathematical entropy analysis - be aware of how you procure the numbers and how they are used in your system.


As a side note, we (in Solaris land) are aware that this is a tough problem and hard to get right and we've already implemented counter measures. Darren Moffat did a great write up on Solaris randomness and how we use it in the Solaris Cryptographic Framework, describing the counter measures we already have in place.

Wednesday, November 19, 2014

ICMC: Status of the Transition to New Algorithms and Stronger Keys

Allen Roginsky, Mathematician, NIST

FIPS 140-2 doesn't talk much about the algorithms themselves, they are covered in the Annexes.  There were minor changes back in 2002/2003, however the algorithms have changed. New algorithms have come in, old ones have been deprecated.

Under the ISO rules, every country can choose their own algorithms. In the US, we've already chosen our algorithms for FIPS 140-2. We'll likely continue to use the same ones in FIPS 140-4 (or whatever we call them).

The current major algorithm documents are SP 800-131A and FIPS 186-4.  The stronger key requirements went into effect last year and there is a major hit coming in at the end of 2015.

Why are we doing this transition?  Security strength of 80 bits is insufficient (56-bit DES was broken long ago; attacks on the SHA-1 collision resistance property; advances in integer factorization; etc).  Some of the currently approved algorithms aren't strong regardless of the key length (the non-SP-800-90A RNGs).  Transition plans were first announced in SP 800-57, Part 1 in 2005.  We've delayed this from going into effect from 2010 to 2015, but cannot delay it further or we'll be hurting the consumers.

Approved algorithms are the best ones. Deprecated algorithms are not recommended, but can be used. This is different than restricted, which you should not use.  Legacy-use algorithms have no guarantee and really should not be used, except, for example, to verify previously generated signatures.  Some algorithms are simply not allowed.

For example, SKIPJACK decryption was allowed at the end of 2010 for legacy use only, but SKIPJACK encryption is disallowed.  Only 8 certificates were ever issued, so there were not any complaints about this change.

At the end of 2010, two-key 3DES encryption became restricted (100 bits of strength for two-key 3DES with no more than 2^20 (plaintext, ciphertext) pairs), and two-key 3DES decryption became legacy-use only.
At the end of 2015, two-key 3DES encryption is disallowed.  AES and three-key 3DES are acceptable.  We allowed this for so long because it was in wide use and the attacks were not straightforward.

Digital Signatures

As of the end of 2010, signature generation algorithms with less than 112 bits of cryptographic strength became deprecated. As of the end of 2013, there was a transition from FIPS 186-2 to FIPS 186-4, and signature generation algorithms with less than 112 bits of cryptographic strength became disallowed.

Signature verification with less than 112 bits of strength is legacy-use, beginning in 2011.

Deterministic Random Number Generators

This is the BIG problem! As of the end of 2010, the non-SP-800-90A compliant RNGs became deprecated. As of the end of 2015, the non-SP-800-90A compliant RNGs will become disallowed - RETROACTIVELY!  This will be a big expense, as previously purchased software can no longer be used.

Note from Randy Easter: What this means is that for every validation that was done over the last 15 years and is using one of these RNGs, that item will be moved to the non-approved line.  If the keying algorithm is using such an RNG, ALL of those functions become non-approved.

Key Agreement and Key Transport

As of the end of 2013, Key Agreement and Key Transport algorithms stay acceptable if the key strength is at least 112 bits AND the algorithms are compliant with the appropriate NIST standards: SP 800-56A, SP 800-56B and SP 800-38F. As of the end of 2013, the non-compliant Key Agreement and Key Transport (Key Encapsulation) algorithms have become deprecated if the key strength is at least 112 bits.  Key wrapping must be compliant with one of the provisions of SP 800-38F. Everything else is disallowed.


Hash and MAC functions will be impacted, as well as some key derivation algorithms. See SP 800-131A for details.

FIPS 186-2 to 186-4 Transition

Beginning in 2014, new implementations shall be tested for their compliance with FIPS 186-4. This applies to domain parameter generation, key pair generation and digital signature generation.  Signature verification per FIPS 186-2 is Legacy-use. Beginning this year (2014), RSA digital signature keys must be generated as shown in FIPS 186-4.

Future Transition Plans

You bet - we're already looking forward to the future.  We want to transition away from non-Approved implementations of key agreement (DLC-based) and key transport (RSA-based) schemes.  Unfortunately, there are too many modules in existence that are non-compliant with SP 800-56A and 56B. We need a well-thought-out strategy for transition.

ICMC: Questions to CMVP (NIST/CSEC) on ISO 19790 Standard, 140-4 or Other

Carolyn French, Manager, CMVP, CSEC
Randall Easter, NIST, Security Testing, Validation and Management Group
Allen Roginsky, Mathematician, NIST
Sharon Keller, Director, Cryptographic Algorithm Validation Program (CAVP), NIST

This panel is entirely questions and answers. I'll do my best to capture them.

Q: About bug fixing and patches. Is there an expedited way, once we get our validation, if there's a non security patch, is there a non-painful/easy way for us to update the validation?

A: Randy: Yes, we have a process, but whether it's painless or not... Even if the changes are not security relevant, you will have to go back to the lab. They will decide if it is security relevant or not, and they can do an electronic submission to NIST. It may require re-doing your algorithm testing and updating your certificate.  If there are no issues, we can usually do the update within a week - once we get the paperwork from the lab.

Q: What happens with a datacenter that is always doing critical functions, it can never give up. How can they do more tests?

A: Randy: If there is critical work being done, this can be deferred until the next time period. It doesn't say "after 42 deferrals you have to interrupt work".  There may be times when the processor is doing non-crypto work, and then the checks can be run. The processor, in my experience, is not 100% busy on crypto.

Q: So, it can be indefinite?

A: Randy: Yes.

Q: Will CAVP now be testing and verifying all of the SHALL statements? Or is that a documentation thing?

A: Sharon: CAVP tests all the things that are testable.  Back in the day, they just gave tests for the algorithm - but then things got more complicated, like making sure vulnerabilities aren't introduced. For example, it's a good thing if the IV is never repeated, but that is not testable.

Randy: One good example is 800-90A. There are quite a few SHALLs in there, but some things are difficult to test, so things fall through the cracks.

Q: In CC, we have the concept of Flaw Remediation. There are some fixes where we can make judgment calls ourselves. The time and money it costs to go back to the lab for FIPS makes it prohibitive for us to maintain validation.

A: Randy: I can't speak to CC, but once you open the box to make changes, then we need the lab to validate that you only changed what you said you changed.  Not every change has to be revalidated; there can be judgment there.  This has been the policy in CMVP since 1995: every change has to be verified by the lab.  In my experience in development, it's possible to think you've only made a small change, only to later discover that it had unforeseen consequences.

A: Moderator: Working at a lab that does CC, there are vendors that abuse Flaw Remediation.

[VAF: Note: who better to judge how relevant a change is than the developers who are intimately familiar with the codebase?]

Q: Is there a rough estimate when ISO 19790 will be adopted?

A: Randy: I honestly do not know. Hopefully mid next year, but I just don't know. There will be a transition date from FIPS 140-2, but we don't know how long that will be. In the past, it was a 6 month transition period, but depending how this goes - we may need more time.

Q: Given the last public review of FIPS 140-3 was more than 5 years ago, are you ready to go forward with this standard?

A: Carolyn: This is a different process. The ISO process is different.  They don't have that type of review.

Randy: We could have a review of sorts about whether or not we want to move to the ISO standard. We won't pick up the old FIPS 140-3, as there are no DTRs and there won't be. The only path forward is with ISO 19790.

Q: Randy mentioned earlier that we'll still have Implementation Guidance. The current IGs are getting very large and difficult to navigate. How will this work with an international standard?

A: Randy: We work with Canada already and have a non-binding agreement on guidance with the Japanese. We circulate the guidance with the labs before we post. The guidance is big because we've been using FIPS 140-2 for nearly 13 years - it's only large because time has gone on for so long. As the program has grown, the vendors and the users have gotten more sophisticated and therefore require particular guidance to address their needs. Hopefully we'll refresh more often and be able to manage this better.

Q: If I went down the ISO road right now, and 18 months from now I'm ready to validate but the new standard hasn't been adopted yet, I won't pass FIPS 140-2, will I?

A: Randy: That's right. You could get this validated by JCMVP, but the US doesn't recognize this standard at this time. We would like a decision to get made. Yes, I said that last year, too.

Q: Elliptic Curve Cryptography has come up significantly over the last year, particularly around NSA Suite B. Do you have plans to standardize any other curves that did not come from the NSA?

A: Allen: I am not aware of any work in this area. I am not aware of any weaknesses in the current curves. In general, one is good enough; the strength is not in the curve (as long as it meets the requirements) - the strength is in the key. Some are over binary fields, some are over prime fields. You can choose one or the other. One problem with ECC is with the Dual EC DRBG algorithm. The problem is not with EC, but with this particular algorithm. There was a safeguard in EC to protect against this, but you can't stop a problematic implementation or use. There are no known issues with the existing curves.

Q: Our hardware now has partial implementations of crypto algorithms. We have software that can do the rest and work on the older processors, so it doesn't require the hardware but will use it if it's there.

A: Carolyn: If it can do everything in software, it's not a hybrid module.

Q: We had anticipated RSA 4096 being approved, but now it isn't.

A: Allen: It was in FIPS 186-2, but it is no longer there. It is not allowed now for signature generation. The reason is to facilitate the transition to EC. It's very difficult to be sure that all of the floating point arithmetic is done correctly when you're dealing with numbers this large. It can be done, but it's error prone. The decision was made to move to technology where the keys are shorter, like EC. Some people have complained about this, but this is the decision. I think it's the correct one.

Q: Do you know of anything in the ISO standard that will impact current validations?

A: Randy: There is nothing that is retroactive. Existing validations should stand. New modules, though, will have to meet the new requirements once it's in place.

Q: ISO is international, but you haven't mentioned CAVS. There's no requirement that those tests are international.

A: Randy: It's an international standard, but CAVP will still be a US program. We'll be using an international standard. We're working on an ISO standard on how to do algorithm testing.

ICMC: Comparing ISO/IEC 19790 and FIPS PUB 140-2

William Tung, Leidos, Laboratory Manager

ISO/IEC 19790:2012 is the planned replacement for FIPS 140-2, but there's been no official announcement or timeframe, yet. This means no labs are accredited to perform these validations.

But... it's coming.

Some of this will be a rehash of our earlier sessions, but with more deep diving per section.

Degraded Operation

This mode can only be entered after exiting an error state, and the module must be able to provide information about that state. Whatever mechanism or function is causing the failure shall be isolated. While in degraded mode, all conditional algorithm self-tests shall be performed prior to the first operational use of each algorithm, and before the degradation can be removed, all tests must pass.

Cryptographic Module Interface

ISO 19790 defines a fifth logical interface, which cannot be used whenever the module is in an error state.

Roles, Services and Authentication

The user role is optional; the minimum requirement is a crypto-officer role. The minimum required service is showing the module's versioning information, which must match the certificate on record.

There is also a new requirement for self-initiated cryptographic output capability.

Authentication strength requirements must be met by the module's implementation, not through policy controls or security rules - password size restrictions, for example. ISO 19790 does not yet define exactly what those strength requirements are. Level 4 modules will have to implement multi-factor authentication.
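To illustrate the distinction, here's a minimal hypothetical sketch (the function name and minimum length are my own, not from the standard): the password rule is enforced in the module's code, so no security-policy document needs to be trusted to uphold it.

```python
# Hypothetical sketch: an authentication-strength rule (here, a minimum
# password length) enforced by the module implementation itself, rather
# than documented in a security policy and left to the operator.

MIN_PASSWORD_LENGTH = 8  # illustrative vendor-chosen value


def set_operator_password(password: str) -> None:
    """Reject weak passwords in code, so the rule cannot be skipped by policy."""
    if len(password) < MIN_PASSWORD_LENGTH:
        raise ValueError(
            f"password must be at least {MIN_PASSWORD_LENGTH} characters"
        )
    # ... a real module would store a salted hash of the password here ...
```

The point is simply that the check lives inside the module boundary, where a lab can test it.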

Software/Firmware Security

This section of the document applies only to software/firmware or hybrid modules. Level 2 modules must implement an approved digital signature or keyed MAC for the integrity test; Levels 3 and 4 have higher requirements.

Operational Environment

Software modules no longer need to operate in a Common Criteria (CC) evaluated OS or "trusted operating system" in order to meet Level 2 requirements. There are still specific OS requirements to meet Levels 2-4 (that will look similar to what used to be covered by CC).

Physical Security

Explicitly allows translucent enclosures/cases within the visible spectrum, in addition to the opaque enclosures/cases allowed by FIPS 140-2. Level 3 modules must either implement environmental failure protection (EFP) or undergo environmental failure testing (EFT). Level 4 MUST implement EFP.

Non-Invasive Security

This section currently doesn't specify requirements, but they will come. Hardware and firmware must comply, and it will be optional for software. For Levels 1-2, the module must protect against these attacks; Levels 3-4 will have to prove protection.

Sensitive Security Parameter (SSP) Management

SSPs consist of Critical Security Parameters (CSPs) and Public Security Parameters (PSPs). For Level 2 modules and up, procedural zeroization is not allowed.

Self Tests

There are two categories of self-test: pre-operational and conditional. Pre-operational includes things like the integrity test and critical functions test. Conditional covers the other standard conditional tests, plus the other items covered in the old POST guidance.

All self-tests need to be run regardless of whether the module is operating in approved or non-approved mode. Level 3 & 4 modules must include an error log that is accessible by an authorized operator of the module. The integrity test needs to be run over all software/firmware components of the module. At a minimum, the vendor must implement one cryptographic algorithm self-test as a pre-operational test.
[Clarification from Randall Easter on this topic: If the module is installed and configured as a FIPS 140 module, then it must do all of these tests/checks.  If it was installed and configured otherwise, it's not required. This is not different than what is currently required by FIPS 140-2.]
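As a rough sketch of the integrity-test idea (all names and the key handling here are illustrative, not from the standard): a keyed MAC is computed over each software component and compared against values recorded at build time.

```python
import hashlib
import hmac

# Hypothetical sketch of a pre-operational software integrity test: a keyed
# MAC over every software component, compared to build-time values. A real
# module would protect the key and cover every component in its boundary.

INTEGRITY_KEY = b"build-time-key"  # illustrative only


def component_mac(data: bytes) -> str:
    """HMAC-SHA-256 of one component's bytes."""
    return hmac.new(INTEGRITY_KEY, data, hashlib.sha256).hexdigest()


def pre_operational_check(components: dict, expected: dict) -> bool:
    """Pass only if every component's MAC matches its recorded value."""
    return all(
        hmac.compare_digest(component_mac(data), expected[name])
        for name, data in components.items()
    )
```

A module would refuse to provide output until this check (and at least one algorithm self-test) passes.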

The FIPS CRNGT (continuous random number generator test) is not currently defined in ISO.

ISO 19790 requires Level 3 & 4 modules to do automatic pre-operational self-tests at a predefined (by the vendor) interval.

Life-Cycle Assurance

This seems to be more of a documentation and QE section. It covers vendor testing and the finite state model. The required states are: General Initialization State, User State, and Approved State. Changing to crypto-officer from any other role is prohibited.

Testing Requirements for Cryptographic Modules

The next part of the talk came from Zhiqiang (Richard) Wang, Leidos, Senior Security Engineer.

The testing requirements were derived from the FIPS 140-2 Derived Test Requirements. This covers self-tests, life-cycle assurance, and mitigation of other attacks.

ISO/IEC 24759:2014 specifies the methods to be used by testing laboratories to test whether a cryptographic module conforms to the requirements specified in ISO/IEC 19790:2012. It was developed to provide a high degree of objectivity during the testing process and to ensure consistency across the testing laboratories. It clearly specifies what the vendor needs to provide to the laboratory.

Richard then walked through, section by section, many of the same requirements discussed earlier, but with a twist on how each would be tested and why.

ICMC: Explaining the ISO 19790 Standard, Part 2

Randall Easter, NIST, Security Testing, Validation and Management Group

ISO 19790 is available for purchase from ISO (in Swiss Francs) or ANSI (in US Dollars); you'll also need the derived test requirements (ISO/IEC 24759).

In this section, Randy walked us through a deep dive of the sections of the document.

There is a new Terms and Definitions section, which will hopefully help to clear up ambiguity and help answer a lot of the questions they've gotten over the years.

The new document has all of the SHALL statements highlighted in red and tagged with [xx.yy], where xx indicates the clause and yy is a numeric index within the clause. This will hopefully make it easier for two people to have a conversation about the standard. The plan is that when errors are found and fixed with addenda, complete new revisions of the document will be made available - i.e., everything in one place.

Errors were found during translation to Korean and Japanese, when the translators just could not figure out what something was supposed to mean (turns out, it wasn't clear in English, either). We should expect more changes as people start to implement against this standard. Errors will be corrected in future revisions. Mr. Easter was not clear on what ANSI/ISO charge for revised versions of documents.

There will again be four levels for validation. The physical security requirements vary greatly between the levels, from "production grade components" to "tamper detection and response envelope, EFP and fault injection mitigation". You will still need environmental controls for protecting key access, etc.

There are algorithms, like Camellia, that the US Federal Government is not allowed to use, but vendors are designing software for international markets. So federal users can only use certain algorithms - the vendors do NOT have to restrict this; it is up to the end user to implement and use the policy correctly. How do you audit that, though?

The ISO standard states that every service has to have a "bit" that indicates whether or not it's an approved service. This should enable better auditing.
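A minimal hypothetical sketch of such a service indicator (the service names and the `service_is_approved` function are mine, purely for illustration): each service the module exposes carries an approved/non-approved flag an operator or auditor can query.

```python
# Hypothetical sketch of a per-service "approved" indicator: each service
# exposed by the module carries a flag that can be queried for auditing.

SERVICES = {
    "aes-gcm-encrypt": True,   # approved security function
    "rc4-encrypt": False,      # non-approved, present for interoperability
}


def service_is_approved(name: str) -> bool:
    """Query the indicator bit for a named service."""
    return SERVICES[name]
```

An auditor could then enumerate the table rather than reverse-engineering which calls are approved.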

This is a great policy, but what happens when you have to work with what you get? For example, someone could send the IRS a tax return encrypted with RC4. The IRS can decrypt using RC4, but would have to store using AES. It would be good to know what has happened here.

The new ISO standard has requirements around roles, services and authentication. The minimum role is the crypto officer - there has to be logical separation of required and optional roles and services. The higher the levels go, the more restrictions: from role-based or identity-based authentication all the way up to multi-factor authentication.

There have been a lot of questions about FIPS 140-2 and what we mean by "list all the services". Does that mean only the approved services? No, all services - even non-approved security functions. Non-security-relevant services also have to be listed, but that can refer to a separate document. [VAF: still curious what exactly that means - an OS provides a LOT of services!]

ISO 19790 has new directives for managing sensitive security parameters - for example, you have to provide a method to zeroize keys. This could be done procedurally by an operator, or as a result of tamper. Other examples this area covers: random bit generation; SSP generation; and entry, output, and storage of keys.
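The zeroization requirement can be sketched in a few lines (this is an illustrative toy, not how any real module does it): keep the key in a mutable buffer so it can be overwritten in place when no longer needed.

```python
# Hypothetical zeroization sketch: key material kept in a mutable buffer so
# it can be overwritten in place. A real module also has to worry about
# copies the language runtime, compiler, or OS may have made elsewhere.

def zeroize(key: bytearray) -> None:
    """Overwrite every byte of the key buffer with zeros."""
    for i in range(len(key)):
        key[i] = 0


key = bytearray(b"\x13" * 16)  # stand-in for a 128-bit key
zeroize(key)
# key now contains only zero bytes
```

Languages with immutable strings or garbage collection make genuine zeroization much harder than this sketch suggests, which is part of why the standard cares about it.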

Self-tests have changed. Pre-operational tests cover software/firmware integrity, bypass, and critical functions tests. Crypto operations can begin while these tests are running, but output cannot be provided until the pre-operational tests have completed.

Known answer tests are now all conditional tests, as are pair-wise consistency checks. Vendors can continue to do all of the tests up front, or run them as needed - lots of flexibility here. Mr. Easter made a note I couldn't quite follow about module updates being signed by the crypto officer - it's not clear why it wouldn't be the vendor. [VAF: I may have missed something here.]

The new standard still allows simulation environments for testing, and special interfaces for the labs to test against that may not be needed by the consumer.

Surprisingly to NIST, some consumers don't care about FIPS 140 validations, and the vendors want to provide for them too. For example, some of the tamper evident seals may need to be installed by the cryptographic operator, as may some initialization procedures. Some customers may not even care about power-on self-tests EVER being run, so that configuration information has to be part of the integrity check.

As a note: some of the current FIPS 140-2 implementation guidance will be retained with this new standard, as it is newer than the ISO standard or too vendor specific to be included in a general document.

The vendor may have mitigations against some attacks for which there are no currently testable requirements, and that is allowable through Level 3. Once you get to Level 4, you have to prove your mitigations.

The new standard allows for degraded modes of operation - for example, if one algorithm stopped functioning properly the rest of the algorithms could still be used.

Something new: there has to be a way to validate that you're running a validated version, and what the version number is. This is interesting and tricky because, of course, you get your validation done on released software - so when you ship, you don't know that you're validated. And if you store validation status in another file, it could easily get out of date (i.e., updates are applied to the system, but the software still reports it's validated). There are ways to solve this, but vendors should tread carefully.
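A hypothetical sketch of the version-query side of this (names and values are mine): the module reports only what it can know at build time - its name and version - and leaves "is this version validated?" as a lookup the consumer performs against the certificate on record, precisely because validation happens after the software ships.

```python
# Hypothetical sketch of the required version-query service. The module
# reports build-time facts only; validation status is deliberately NOT
# baked in, since the module is validated after release and a stored
# status flag would go stale across updates.

MODULE_NAME = "ExampleCryptoModule"  # illustrative
MODULE_VERSION = "2.0.1"             # illustrative


def get_versioning_info() -> dict:
    """Return the versioning info an operator compares to the certificate."""
    return {"name": MODULE_NAME, "version": MODULE_VERSION}
```

The operator matches this output against the validation certificate, rather than trusting the module's own claim of being validated.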

Also, Annex C (and only Annex C) which covers approved security functions (ie algorithms) can be replaced by other countries, as needed.


Q: "How does software have an 'opaque' coating?" A: "Physical security requirements do not have to be met by software modules".

Q: "Lots of services could be going on - what do we need to be concerned with?" A: "Services that are exposed by the module, via command line or API. Security services need to be queryable".

Q: "Why should the crypto officer be signing new modules? They may not be able to modify the trust anchors." A: He was using the crypto officer as an example; policies could vary - but it is the crypto officer's decision on what to load.

ICMC: Explaining the ISO 19790 Standard, Part 1

Randall Easter, NIST, Security Testing, Security Management & Assurance Group

FIPS 140-1 was published in January 1994, and a year later came the Derived Test Requirements and guidelines for how to test to those DTRs. The standard has continued to evolve over the last 20 years. FIPS 140-2 was published in May 2001. The DTRs cannot be written until after the standard has stopped fluctuating, which is why there's always a delay.

The goal of the Cryptographic Module Validation Program (CMVP) is to show a level of compliance to given standards. They also desire to finish the testing and validation while the software/hardware is still current and shipping (ed note: not a goal that has been consistently met in recent years - goals and budget do not always align).

NIST's goal is to reevaluate standards every 5 years to make sure they are still current.

A request for feedback and comments on FIPS 140-2 came out in 2005. At the same time, the international community wanted a rough international equivalent of FIPS 140. ISO/IEC 19790:2006, a rough equivalent to FIPS 140-2, was published in March 2006. It included editorial changes to clarify items that people were interpreting differently - perhaps roughly equivalent to the Implementation Guidance we've received for FIPS 140-2?

The original goal was to do the ISO/IEC standard in parallel with FIPS 140-3, so we could get the best of both worlds: international participation and an international standard, and a new FIPS 140 document at the same time. The problem with ISO standards is that they are worked on privately, and even the "public" final version must be purchased. The FIPS 140 standards are downloadable for free from NIST's website.

ISO/IEC 19790:2012 was published on August 15, 2012, and the DTRs came out in January 2014.

The draft for FIPS 140-3 came out in January 2005, but never became a formal standard.  What happened?

Mr. Easter noted that there was a change of ownership of the document, and FIPS 140-3 quickly diverged from ISO. ISO has very strict rules about meeting twice a year, and insists on progress or your work item will be dropped. Work on FIPS 140-3 stalled in NIST... but work had to forge ahead in ISO.

Mr. Easter, editor of the ISO document, is happy with how it's come out.  :-)

While ISO/IEC 19790:2012 has been out since... 2012, NIST has not formally announced that its intention is to move to this standard. That lack of announcement seems to be a political one, as the person who should be making it hasn't been hired yet...

One interesting change in the ISO document is that it covers algorithms that are not approved by the US Government but are used regularly by the international community. There is a concept of a wrapper document where more algorithms could be added and clauses modified - but the more someone does that, the more it will cause the standard to diverge.

The ISO document was worked on by the international community and circulated to all of the labs that do FIPS 140-2 validations for comments. Mr. Easter believes it was better circulated than a NIST review would have been. I would disagree here, as ISO is a closed process, so developers could not provide feedback from their side (and believe me - developers and labs speak a different language, and I am certain our feedback would've been unique and valuable).

ISO/IEC 19790:2012 contains new sections on Software Security and Non-Invasive Security, and removes the Finite State Model. FIPS 140-2 was very hardware specific - built around hardware DES modules. The new standard acknowledges that software exists. ;-)

NIST did hold a Software Security Workshop in 2008, to try to learn more about how software works. At the time, software could only be validated to higher levels when tied to Common Criteria. Based on input from the workshop, the levels software could validate against were changed and the relationship to Common Criteria was severed - that made it into the 2009 draft of FIPS 140-3. Unfortunately, that was the last draft of FIPS 140-3, and the standard never became final.

Looking at an appendix comparing FIPS 140-2 to ISO/IEC 19790:2012 - they are very similar, with notable differences like the Finite State Model being dropped and a new software/firmware section added. There's now guidance on non-invasive security testing and sensitive security parameter management.

The Annexes of ISO/IEC 19790:2012 sound more helpful - there's an entire annex telling you what the documentation requirements are. In FIPS 140-2, you just had to trawl your way through the entire document and hope you caught all of the "thou shalt document this" clauses.

One of the annexes is about approved authentication mechanisms... unfortunately, at this time, there are no formally approved authentication mechanisms. Hopefully that can be covered in Implementation Guidance.

The big goals for ISO/IEC 19790:2012 were: harmonization with FIPS 140-3 (140-3 will now be a wrapper document referring to the ISO document), editorial clarifications from lessons learned (Randy noted that he wasn't even sure exactly what he was trying to say when he re-read things years later ;), incorporation of all of the FIPS 140-2 implementation guidance, and most of all, entropy guidance (ah, entropy...).

Additionally, the standard needed to address new technologies - we're putting entire algorithms on chips, and sometimes just pieces of algorithms. How does that fit into the boundary? That is where technology is going, and a modern standard needs to understand that and take it into consideration, in ways that FIPS 140-2 simply couldn't have conceived.

Software will take advantage of this - it only makes sense - but that starts to put us into hybrid territory. This is covered in the new ISO standard itself, not just in implementation guidance.

The new ISO standard addresses the trusted channel without reference to Common Criteria. ISO/IEC 19790:2012 also added a new requirement: a service to query version information. This is interesting to us in OS land, as our FIPS version number and OS version number are not the same. We see this with other software vendors as well.

The integrity check is simpler for Level 1 in this new standard, but more difficult for Levels 2 and higher.

The new software security requirements section is definitely an improvement over FIPS 140-2, but still not as good as it could be. ISO did not get very much feedback on this section in the time frame where they made the request.

The ISO standard had to remove the reliance on Common Criteria, as NIAP is moving away from those evaluations and the protection profiles are up in the air. The ISO group didn't want to tie itself to a moving target, and instead added specific requirements, for things like auditing, that the operating system should provide if you want to get Level 2. In general, software won't be able to get above Level 2 in this new standard.

An interesting side note: you can actually "mix & match" your levels. You could have Level 2 for certain sections, and an overall Level 1 validation (based on your lowest section). For example, the United States Post Office wants Level 3 Physical Security for postage meters, but overall Level 2 for everything else. It's important that people cannot break into the machine, but things like RBAC (role based access control) are not as important.

ISO/IEC 19790 added new temperature requirements. That is, you have to test at the extremes at which the vendor claims the module operates. If the vendor only wants to claim ambient temperature, that would be noted in the security policy. Modules should be tested at the "shipping range" as well. The reason? Imagine leaving your epoxy-covered Level 4 crypto device on the dashboard of a car in the summer... well, that epoxy melts. Would it really still be a Level 4 device after that? No, so temperature range is important.

We no longer require that all of the self-tests be completed before the module runs. For example, if you're going to use AES, you only need to run the known answer tests for AES, not for ALL of the algorithms. NIST understood that for devices like contactless smartcards, you can't wait forever for the door to your office to open. Power-on self-tests are now conditional, as opposed to required all the time.

The new standard adds periodic self-tests, run at a vendor-defined interval (with the option to defer when critical operations are in progress).

There is a new section on life-cycle assurance and coverage for End of Life - what do you do with this crypto box when you're upgrading to the shiny new hardware?

Keep in mind that this is still a revision of FIPS 140-2; it is not a completely new document. The document will seem familiar, but it should overall be more readable and provide greater assurance. We didn't add things just because they sounded like fun ways to torture vendors, even if some vendors may think that ;-)

Questions from the audience: "Will we be getting rid of Implementation Guidance?" No - we can't possibly guarantee to get everything correct in the first standard, and technology changes faster than the main document.

"Can we stop calling it Guidance? If vendors can't refuse to do it, then it's not guidance." It's always been called that; you can choose to not follow it - but then you won't pass your validation. Maybe we should change the name - a contest, perhaps? (Suggestion from the audience: "call it requirements, as that's what they are - you are required to follow them.") My suggestion: call them "Implementation Clarifications". The word "guidance" is too associated with advice, and this is not soft advice.