Thursday, November 20, 2014

ICMC: FIPS 140-2 Implementation Guidance 9.10: What is a Software Library and How to Engineer It for Compliance?

Apostol Vassilev, Cybersecurity Expert, Computer Security Division, NIST, Staff Member, CMVP

Why did we come up with IG 9.10 [Power On Self Tests]? There were many open questions about how software libraries fit into the standard. In particular, CMVP did not allow static libraries - but they existed. We needed to come up with reasons to rationalize our decision, so we could spend time doing things other than debating.

Related to this are IG 1.7 (Multiple Approved Modes of Operation) and IG 9.5 (Module Initialization during Power-Up).

The standard is clear in this case - the power-up self tests SHALL be initiated automatically and SHALL not require operator intervention.  For a software module implemented as a library, an operator action/intervention is any action taken on the library by an application linking to it.

Let's look at the execution control flow to understand this problem. When the library is loaded by the OS loader, execution control is not with the library UNLESS special provisions are taken. Static libraries are embedded into the object code and behave differently.

How do we instrument a library? Default entry points are a well-known mechanism for operator-independent transfer of execution control to the library. This has been available for over 30 years, and exists for all types of libraries: static, shared, dynamic.

There are alternative instrumentation approaches - in languages like C++, C# and Java you can leverage things like static constructors, which are executed automatically when the library containing them is loaded.

What if the OS does not provide a DEP mechanism and the module is in a procedural language like C? You can consider switching to C++ or using a C++ wrapper, so that you can get this functionality. Lucky for my team, Solaris supports _init() functions. :)

Implementation Guidance 9.5 and 9.10 live in harmony - you need to understand and implement both correctly.

Static libraries can now be validated with the new guidance.

ICMC: Roadmap to Testing of New Algorithms

Sharon Keller, Director CAVP, NIST
Steve (?), CAVP, NIST

After NIST picks a new algorithm, the CAVP takes over and figures out how to test it. They need to evaluate the algorithm from top to bottom - identify the mathematical formulas, components, etc.

The CAVP develops and implements the algorithm validation test suite. Which requirements are addressable at this level? They develop the test metrics for the algorithm and exercise all mathematical elements of the algorithm. If something fails - why? Is there an error in the algorithm, or an intentional failure - or is there an error in the test?

The next step is to develop user documentation and guidance, called the validation system document (VS), which documents the test suite and provides instructions on implementing the validation tests. There is cross validation to make sure that both teams come up with the same answers - a good way to check their own work.

The basic tests are Known Answer Tests (KAT), Multi-block Message Tests (MMT), and Monte Carlo Tests. KATs are designed to verify the components of algorithms. MMTs test algorithms where information may be chained from one block to the next, and make sure it still works. The Monte Carlo Tests are exhaustive, checking for flaws in the implementation under test or race conditions.

Additionally, they need to test the boundaries - what happens if you encrypt the empty string? What if we send in negative inputs?
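As a rough illustration of these test shapes (not the actual CAVS test specifications), here is a boundary known-answer check plus a chained Monte Carlo-style loop, using SHA-256 since its test vectors are public:

```python
import hashlib

def kat_empty_string():
    # Boundary case: the digest of the empty message is a fixed,
    # publicly known answer -- an easy place for implementations to slip.
    return hashlib.sha256(b"").hexdigest() == (
        "e3b0c44298fc1c149afbf4c8996fb924"
        "27ae41e4649b934ca495991b7852b855")

def monte_carlo(seed: bytes, iterations: int = 1000) -> bytes:
    # Monte Carlo idea: each output feeds the next input, so a flaw
    # anywhere in the implementation propagates into the final value.
    md = seed
    for _ in range(iterations):
        md = hashlib.sha256(md).digest()
    return md
```

A validation suite compares the final `monte_carlo` value against a reference implementation's answer; any single-bit disagreement anywhere in the chain shows up at the end.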

There are many documents for validation testing - one for each algorithm or algorithm mode.

The goals of all these tests? Cover all the nooks and crannies - prevent hackers from taking advantage of poorly written code.

Currently, the CAVP is working on tests for SP 800-56C, SP 800-132 and SP800-56A (Rev2).

In the future, there will be tests for SP 800-56B (Rev1), SP 800-106 and SP 800-38A. Which of these is most important for you to have completed?

Upcoming algorithms that are still in draft: FIPS 202 (Draft) for SHA-3, SP 800-90A (Rev2) for DRBGs, SP 800-90B for Entropy Sources and SP 800-90C for construction of RBGs. Ms. Keller has learned the hard way - her team cannot write tests for algorithms until they are a published standard.


ICMC: Is Anybody Listening? Business Issues in Cryptographic Implementations?

Mary Ann Davidson, Chief Security Officer, Oracle Corporation

A tongue in cheek title... of course we're hoping nobody is listening!  While Ms. Davidson is not a lobbyist, she does spend time reading a lot of legislation - and tries not to pull out all of her hair.

There are business concerns around this legislation - we have to worry about how we comply, doing it right, etc. Getting it right is very important at Oracle - that's why we don't let our engineers write their own crypto [1] - we leverage known good cryptographic libraries. Related to that, validations are critical to show we're doing this right. There should not be exceptions.

Security vulnerabilities... the last 6 months have been exhausting. What is going on?  We all are leveraging opensource we think is safe.

We would've loved if we could've said that we knew where all of our OpenSSL libraries were when we heard about Heartbleed. But, we didn't - it took us about 3 weeks to find them all! We all need to do better: better at tracking, better at awareness, better at getting the fixes out.

It could be worse - old source code doesn't go away, it just becomes unsupportable.  Nobody's customer wants to hear, "Sorry, we can't patch your system because that software is so old."

Most frustrating?  Everyone is too excited to tell the world about the vulnerability they found - it doesn't give vendors time to address this before EVERYONE knows how to attack the vulnerability. Please use responsible disclosure.

This isn't religion - this is a business problem! We need reliable and responsible disclosures. We need to have good patching processes in place in advance so we are prepared. We need our opensource code analyzed - don't assume there's "a thousand eyes" looking at it.

Ms. Davidson joked about her ethical hacking team. What does that mean? When they hack into our payroll system, they can only change her title - not her pay scale. How do you think she got to be CSO? ;-)

Customers are too hesitant to upgrade - but newer really is better! We are smarter now than we used to be, and sorry, we just cannot patch your thousand-year-old system. We can't - you need to upgrade! The algorithms are better, the software is more secure - we've learned, and you need to upgrade to reap those benefits.

But we need everyone to work with us - we cannot have software sitting in someone's queue for 6 months (or more) to get our validation done. That diminishes our return on investment - 6 months is a large chunk of a product's life cycle. Customers are stuck on these old versions of software, waiting for our new software to get its gold star. Six weeks? Sure - we can do that. Six months? No.

Ms. Davidson is not a lobbyist, but she's willing to go to Capitol Hill to get more money for NIST. Time has real money value. How do we fix this?

What's a moral hazard? Think about the housing market - people were making bad investments, buying houses they couldn't afford to try to flip houses and it didn't work out. We rewarded those people, but not those who bought what they could afford (or didn't buy at all) - we rewarded their bad risk taking.

Can we talk with each other? NIST says "poTAYto", NIAP says "poTAHto" - why aren't they talking? FIPS 140-2 requires Common Criteria validations for the underlying OS for higher levels of validation - but NIAP said they don't want to do those validations.

We need consistency in order to do our jobs. Running around trying to satisfy the Knights Who Say Ni is not a good use of time.

And... the entropy of... entropy requirements. These are not specific; this is not "I know it when I see it". And why is NIAP getting into the entropy business? That's the realm of NIST/FIPS.

Ms. Davidson ends with a modest proposal: Don't outsource your core mission.  Consultants are not neutral - and she's disturbed by all of the consultants she's seeing on The Hill.  They are not neutral - they will act in their own economic interest. How many times can they charge you for coming back and asking for clarification? Be aware of that.

She also requests that we promote the private-public partnership. We need to figure out what the government is actually worried about - how does telling them the names of every individual who worked on the code help with their mission? It's a great onus on business, and we're international companies - other countries won't like us sharing data about their citizens. Think about what we're trying to accomplish, and what is feasible for business to handle.

Finally, let's have "one security world order" - this is so much better than the Balkanization of security.  This ISO standard (ISO 19790) is a step in the right direction. Let's work together on the right solutions.

[1] Unless you're one of the teams at Oracle, like mine, whose job it is to write the cryptographic libraries for use by the rest of the organization. But even then, we do NOT invent our own algorithms. That would just be plain silly.

ICMC: Random Thoughts - Is True Randomness an Illusion?

Helmut Kurth, Chief Scientist, atsec information security

Illusions can be fun ... if we know they are an illusion.  They can make us feel good... even if they are bad.  They can make us think something is okay... when it's really broken.

There are some illusions we like in technology, like virtualization! The illusion is that we have more resources than we really have.  We can save money - that's good, right?  Some people think that virtualization provides more security - but does it really?  Vendors claim their virtualized systems are just like a real hardware box... but are they? Often not exactly. Something has to give, we should understand this.

How is entropy impacted once you virtualize the system? Virtualization can change the timing; it may just behave differently. Either way, are we getting the same entropy on these systems?

Often, we make incorrect assumptions about things like timing and the similarity of a virtualized system to its true hardware counterpart.

For example, if you're using time as your entropy source - you may assume the lowest order bits are changing most frequently and will provide more actual entropy. But, what if this is not the true timer? What if a hypervisor is intercepting and interpreting the concept of "time" - what if the hypervisor should not be trusted?

Shouldn't you be able to trust your hypervisor?  Once someone has breached the hypervisor, they can do all kinds of evil things underneath your VM and you won't be able to easily detect it (as your OS will be unchanged).

For example, the RDRAND instruction can be intercepted by a hypervisor. This is a "feature" documented by Intel. So, as a user of the VM, you think you're getting some pretty good entropy from RDRAND - but you're really getting poor entropy from your hypervisor. How could you detect this? Intel's RDRAND is often used as the sole source of randomness with no DRNG postprocessing (like in OpenSSL), regardless of whether the "randomness" is being used for generating nonces or for generating cryptographic key material.

Assuming a compromised hypervisor, the bad guy can have the key used to generate the "random" sequences used in the RDRAND emulation.  He can use this to generate the different random streams.  He is able to get the nonce, which is transmitted in the clear.

Launching the attack requires installation of a hypervisor or the modification of a running hypervisor. Just one flaw that allows the execution of privileged code is all it takes. The hacker, in this case, may have the session key even before the communicating parties have it! At this point, it doesn't matter what algorithm you're using - communications are essentially in the clear.

This attack is not that easy - it requires taking over the hypervisor (not easy), and once you have the hypervisor, you can do anything you want!  But, this isn't about taking down the machine, this is about eavesdropping undetected for any length of time.

This is a sneaky attack - virus scanners and host-based IDS will tell you everything is okey dokey! It is independent of the OS or any applications (as many rely on RDRAND).

How can you protect yourself?  The basic solution is diversification - do not use a single source of entropy.  Use a different RNG for nonce generation and generation of key material, and use RDRAND for seeding (with other sources) rather than directly for generating critical security parameters.  Read the hardware documentation carefully - make sure you understand what you're getting.
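A minimal sketch of that diversification advice (the function name and the particular sources are my choices, not from the talk): condition several independent sources through a hash, so the seed stays unpredictable as long as any one input has real entropy.

```python
import hashlib
import os
import time

def mixed_seed(hw_bytes: bytes) -> bytes:
    # Diversification: never rely on a single entropy source. Hash
    # together the (possibly hypervisor-intercepted) hardware output
    # with independent sources; an attacker must control ALL of them
    # to predict the result.
    h = hashlib.sha256()
    h.update(hw_bytes)                                 # e.g. RDRAND output
    h.update(os.urandom(32))                           # OS entropy pool
    h.update(time.monotonic_ns().to_bytes(8, "big"))   # timing jitter
    return h.digest()
```

This follows the talk's recommendation of using the hardware source for seeding alongside other inputs, rather than consuming it directly as key material.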

Intel isn't going to fix this, as this isn't a bug... it's a feature!

Entropy analysis isn't just about the mathematical entropy analysis - be aware of how you procure the numbers and how they are used in your system.

...

As a side note, we (in Solaris land) are aware that this is a tough problem and hard to get right, and we've already implemented countermeasures. Darren Moffat did a great write-up on Solaris randomness and how we use it in the Solaris Cryptographic Framework, describing the countermeasures we already have in place.

Wednesday, November 19, 2014

ICMC: Status of the Transition to New Algorithms and Stronger Keys

Allen Roginsky, Mathematician, NIST

FIPS 140-2 doesn't talk much about the algorithms themselves; they are covered in the Annexes. There were minor changes back in 2002/2003; however, the algorithms have changed. New algorithms have come in, old ones have been deprecated.

Under the ISO rules, every country can choose their own algorithms. In the US, we've already chosen our algorithms for FIPS 140-2. We'll likely continue to use the same ones in FIPS 140-4 (or whatever we call them).

The current major algorithm documents are SP 800-131A and FIPS 186-4.  The stronger key requirements went into effect last year and there is a major hit coming in at the end of 2015.

Why are we doing this transition? Security strength of 80 bits is insufficient (the 56-bit-strong DES was broken long ago; attacks on the SHA-1 collision resistance property; advances in integer factorization; etc.). Some of the currently approved algorithms aren't strong regardless of the key length (the non-SP-800-90A RNGs). Transition plans were first announced in SP 800-57, Part 1 in 2005. We've delayed this from going into effect from 2010 to 2015, but cannot delay it further or we'll be hurting the consumers.

Approved algorithms are the best ones. Deprecated algorithms are not recommended, but can be used. This is different from restricted, which you should not use. Legacy-use algorithms have no guarantee, and really should not be used except, for example, to verify previously generated signatures. Some algorithms are simply not allowed.

For example, SKIPJACK decryption was allowed at the end of 2010 for legacy use only, but SKIPJACK encryption is disallowed. Only 8 certificates were ever issued, so there were not any complaints about this change.

At the end of 2010, two-key 3DES encryption became restricted (100 bits of strength for two-key 3DES with no more than 2^20 (plaintext, ciphertext) pairs), and two-key 3DES decryption became legacy-use only.
At the end of 2015, two-key 3DES encryption becomes disallowed. AES and three-key 3DES are acceptable. We allowed this for so long because it was in wide use and the attacks were not straightforward.

Digital Signatures

As of the end of 2010, signature generation algorithms with less than 112 bits of security strength became deprecated. As of the end of 2013, there was a transition from FIPS 186-2 to FIPS 186-4, and signature generation algorithms with less than 112 bits of cryptographic strength became disallowed.

Signature verification with less than 112 bits of strength is legacy-use, beginning in 2011.

Deterministic Random Number Generators

This is the BIG problem! As of the end of 2010, the non-SP-800-90A compliant RNGs became deprecated. As of the end of 2015, the non-SP-800-90A compliant RNGs will become disallowed - RETROACTIVELY! This will be a big expense, as previously purchased software can no longer be used.

Note from Randy Easter: What this means is that for every validation done over the last 15 years that is using a non-compliant RNG, that item will be moved to the non-approved line. If the keying algorithm is using that RNG, ALL of those functions become non-approved.

Key Agreement and Key Transport

As of the end of 2013, Key Agreement and Key Transport algorithms stay acceptable if: key strength is at least 112 bits AND the algorithms are compliant with the appropriate NIST standards: SP 800-56A, SP 800-56B and SP 800-38F. As of the end of 2013, the non-compliant Key Agreement and Key Transport (Key Encapsulation) algorithms have become deprecated if key strength is at least 112 bits. Key wrapping must be compliant with one of the provisions of SP 800-38F. Everything else is disallowed.

Others

Hash and MAC functions will be impacted, as well as some key derivation algorithms. See SP 800-131A for details.

FIPS 186-2 to 186-4 Transition

Beginning in 2014, new implementations shall be tested for their compliance with FIPS 186-4. This applies to domain parameter generation, key pair generation and digital signature generation.  Signature verification per FIPS 186-2 is Legacy-use. Beginning this year (2014), RSA digital signature keys must be generated as shown in FIPS 186-4.

Future Transition Plans

You bet - already looking forward to the future. We want to transition away from non-Approved implementations of key agreement (DLC-based) and key transport (RSA-based) schemes. Unfortunately, there are too many modules in existence that are non-compliant with SP 800-56A and 56B. We need a well-thought-out strategy for the transition.

ICMC: Questions to CMVP (NIST/CSEC) on ISO 19790 Standard, 140-4 or Other

Carolyn Finch, Manager, CMVP, CSEC
Randall Easter, NIST, Security Testing, Validation and Management Group
Allen Roginsky, Mathematician, NIST
Sharon Keller, Director, Cryptographic Algorithm Validation Program (CAVP), NIST

This panel is entirely questions and answers. I'll do my best to capture them.

Q: About bug fixing and patches. Is there an expedited way, once we get our validation, if there's a non security patch, is there a non-painful/easy way for us to update the validation?

A: Randy: Yes, we have a process, but whether it's painless or not... Even if the changes are not security relevant, you will have to go back to the lab. They will decide if it is security relevant or not, and they can do an electronic submission to NIST. It may require re-doing your algorithm testing and updating your certificate.  If there are no issues, we can usually do the update within a week - once we get the paperwork from the lab.

Q: What happens with a datacenter that is always doing critical functions and can never give up control? How can they run more tests?

A: Randy: If there is critical work being done, this can be deferred until the next time period. It doesn't say "after 42 deferrals you have to interrupt work". There may be times when the processor is doing non-crypto work; then the checks can be run. The processor, in my experience, is not 100% busy on crypto.

Q: So, it can be indefinite?

A: Randy: Yes.

Q: Will CAVP now be testing and verifying all of the SHALL statements? Or is that a documentation thing?

A:  Sharon: CAVP tests all the things that are testable.  Back in the day, they all just gave tests for the algorithm - but then things got more complicated, like making sure vulnerabilities aren't introduced. For example, it's a good thing if the IV is never repeated, but that thing is not testable.

Randy: One good example is 800-90A. There are quite a few SHALLs in there, but some things are difficult to test, so things fall through the cracks.

Q: In CC, we have the concept of Flaw Remediation. There are some fixes where we can make judgement calls ourselves. The time and money it costs to go back to the lab for FIPS makes it prohibitive for us to maintain validation.

A: Randy: I can't speak to CC, but once you open the box to make changes, then we need the lab to validate that you only changed what you said you changed. Not every change has to be revalidated; there can be judgment there. This has been the policy in CMVP since 1995: every change has to be verified by the lab. In my experience in development, it's possible to think you've only made a small change, only to later discover that it had unforeseen consequences.

A: Moderator: Working at a lab that does CC, there are vendors that abuse Flaw Remediation.

[VAF: Note: who better to judge how relevant a change is, than the developers who are intimately familiar with the codebase?]

Q: Is there a rough estimate when ISO 19790 will be adopted?

A: Randy: I honestly do not know. Hopefully mid next year, but we just don't know. There will be a transition date from FIPS 140-2, but we don't know how long that will be. In the past, it was a 6 month transition period, but depending on how this goes - we may need more time?

Q: Given the last public review of FIPS 140-3 was more than 5 years ago, are you ready to go forward with this standard?

A: Carolyn: This is a different process. The ISO process is different.  They don't have that type of review.

Randy: We could have a review of sorts about whether or not we want to move to the ISO standard. We won't pick up the old FIPS 140-3, as there are no DTRs and there won't be. The only path forward is with ISO 19790.

Q: Randy mentioned earlier that we'll still have Implementation Guidance. The current IGs are getting very large and difficult to navigate. How will this work with an international standard?

A: Randy: We work with Canada already and have a non-binding agreement with the Japanese guidance. We circulate the guidance with the labs before we post. The guidance is big because we've been using FIPS 140-2 for nearly 13 years. It's only large because time has gone on for so long. As the program has grown, the vendors and the users have gotten more sophisticated and therefore require particular guidance to address. Hopefully we'll refresh more often and be able to better manage this.

Q: If I went down the ISO road right now, and in 18 months from now I'm ready to validate - but the new standard hasn't been adopted, yet, I won't pass FIPS 140-2, will I?

A: Randy: That's right. You could get this validated by JCVP, but the US doesn't recognize this standard at this time.  We would like a decision to get made. Yes, I said that last year, too.

Q: Elliptic Curve Cryptography has come up significantly over the last year, particularly around NSA Suite B. Do you have plans to standardize any other curves that did not come from the NSA?

A: Allen: I am not aware of any work in this area. I am not aware of any weaknesses in the current curves. In general, one is good enough; the strength is not in the curve (as long as it meets the requirements) - the strength is in the key. Some are over the binary fields, some are over the prime fields. You can choose one or the other. One problem with ECC is with the Dual EC DRBG algorithm. The problem is not with EC, but with this particular algorithm. There was a guard in EC to protect against this, but you can't stop a problematic implementation or use. There are no known issues with the existing curves.

Q: Our hardware now has partial implementations of crypto algorithms. We have software that can do the rest and work on the older processors, so it doesn't require the hardware but will use it if it's there.

A: Carolyn: If it can do everything in software, it's not a hybrid module.

Q: We had anticipated RSA 4096 being approved, but now it isn't.

A: Allen: It has been in FIPS 186-2, but it is no longer there. It is not allowed now for signature generation. The reason is to facilitate the transition to EC. It's very difficult to be sure that all of the floating point arithmetic is done correctly when you're dealing with these numbers. It can be done, but it's error prone. The decision was made to move to technology where the keys are shorter, like in EC. Some people have complained about this, but this is the decision. I think it's the correct one.

Q: Do you know of anything in the ISO standard that will impact current validations?

A: Randy: There is nothing that is retroactive. Existing validations should stand. New modules, though, will have to meet the new requirements once it's in place.

Q: ISO is international, but you haven't mentioned CAVS. There's no requirement that those tests are international.

A: Randy: It's an international standard, but CAVP will still be a US program. We'll be using an international standard. We're working on an ISO standard on how to do algorithm testing.

ICMC: Comparing ISO/IEC 19790 and FIPS PUB 140-2

William Tung, Leidos, Laboratory Manager

ISO/IEC 19790:2012 is the planned replacement for FIPS 140-2, but there's been no official announcement or timeframe, yet. This means no labs are accredited to perform these validations.

But... it's coming.

Some of this will be a rehash of our earlier sessions, but with more deep diving per section.

Degraded Operation

This mode can only be used after exiting an error state, and the module must be able to provide information about this state. Whatever mechanism or function is causing the failure shall be isolated. While in degraded mode, all conditional algorithm self-tests shall be performed prior to the first operational use of the algorithm, and all tests must pass before the degradation can be removed.
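As a conceptual sketch of that gating rule (the class and method names here are invented for illustration, not taken from ISO 19790):

```python
class Module:
    """Toy model of degraded operation: in degraded mode, each algorithm's
    conditional self-test must pass before its first operational use."""

    def __init__(self):
        self.degraded = False
        self.tested = set()  # algorithms whose self-test has passed

    def use_algorithm(self, name, self_test, operation):
        if self.degraded and name not in self.tested:
            # First use in degraded mode: run the conditional self-test.
            if not self_test():
                raise RuntimeError(f"{name}: conditional self-test failed")
            self.tested.add(name)
        return operation()
```

The real requirement is of course richer (error-state reporting, isolating the failed mechanism), but the "test before first use" gate is the core of it.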

Cryptographic Module Interface

ISO 19790 adds a fifth logical interface to the cryptographic module, which cannot be used whenever the module is in an error state.

Roles, Services and Authentication

The user role is optional; there is a minimum requirement of a crypto-officer role. The minimal service requirement is showing the module's versioning info, which must match the certificate on record.

There is also a new requirement for self-initiated cryptographic output capability.

Authentication strength requirements must be met by the module's implementation, not through policy controls or security rules - for example, password size restrictions. ISO 19790 does not yet define exactly what those strength requirements are. Level 4 modules will have to implement multi-factor authentication.
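A tiny sketch of the distinction (the minimum length and function names are hypothetical): the restriction lives in the code path itself, so no written security rule can be ignored to bypass it.

```python
MIN_PASSWORD_LENGTH = 8  # hypothetical module-enforced minimum

def authenticate(password: str, check_credential) -> bool:
    # Strength enforced by the implementation, not by policy documents:
    # short passwords are rejected unconditionally, before any
    # credential comparison happens.
    if len(password) < MIN_PASSWORD_LENGTH:
        return False
    return check_credential(password)
```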

Software/Firmware Security

This section of the document applies only to software/firmware or hardware modules. Level 2 modules must implement an approved digital signature or keyed MAC for the integrity test; Levels 3 & 4 have higher requirements.

Operational Environment

Software modules no longer need to operate on a Common Criteria (CC) evaluated OS or "trusted operating system" in order to meet Level 2 requirements. There are still specific OS requirements to meet Levels 2-4 (that will look similar to what used to be covered by CC).

Physical Security

Explicitly allows translucent enclosures/cases within the visible spectrum, in addition to the opaque enclosures/cases allowed by FIPS 140-2. Level 3 modules must either implement environmental failure protection (EFP) or undergo environmental failure testing (EFT). Level 4 MUST implement EFP.

Non-Invasive Security

This section currently doesn't specify requirements, but they will come. Hardware and firmware must comply, and it will be optional for software. For Levels 1-2, the module must protect against these attacks; Levels 3-4 will have to prove protection.

Sensitive Security Parameter (SSP) Management

SSPs consist of Critical Security Parameters (CSPs) and Public Security Parameters (PSPs). For Level 2 modules and up, procedural zeroization is not allowed.

Self Tests

There are two categories of self-test: Pre-operational and conditional.  Pre-operational includes things like integrity test and critical functions test. Conditional covers the other standard conditional tests plus the other items covered in the old POST guidance.

All self-tests need to be run regardless of whether the module is operating in approved or non-approved mode. Level 3 & 4 modules must include an error log that is accessible by an authorized operator of the module. The integrity test needs to be run over all software/firmware components of the module. At a minimum, the vendor must implement one cryptographic algorithm self-test as a pre-operational test.
[Clarification from Randall Easter on this topic: If the module is installed and configured as a FIPS 140 module, then it must do all of these tests/checks.  If it was installed and configured otherwise, it's not required. This is not different than what is currently required by FIPS 140-2.]
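For the keyed-MAC flavor of the integrity test, a minimal sketch (the function name and key handling are illustrative; a real module stores the reference MAC at build time):

```python
import hashlib
import hmac

def integrity_check(module_bytes: bytes, key: bytes,
                    expected_mac: bytes) -> bool:
    # Pre-operational integrity test: a keyed MAC computed over all
    # software/firmware components must match the value recorded when
    # the module was built.
    mac = hmac.new(key, module_bytes, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking where the mismatch is.
    return hmac.compare_digest(mac, expected_mac)
```

Any modification to the module bytes, however small, changes the MAC and fails the check before the module enters the operational state.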

FIPS CRNGT is not currently defined in ISO.

ISO 19790 requires Level 3 & 4 modules to do automatic pre-operational self-tests at a predefined (by the vendor) interval.
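One way to sketch that interval requirement (class and method names are mine, not from the standard): check the elapsed time on each service request and re-run the self-tests automatically once the vendor-defined interval has passed.

```python
import time

class PeriodicSelfTest:
    """Re-run pre-operational self-tests at a vendor-defined interval,
    with no operator intervention required."""

    def __init__(self, run_tests, interval_seconds):
        self.run_tests = run_tests
        self.interval = interval_seconds
        self.last_run = None

    def on_service_request(self, now=None):
        # Called before each service; `now` is injectable for testing.
        now = time.monotonic() if now is None else now
        if self.last_run is None or now - self.last_run >= self.interval:
            if not self.run_tests():
                raise RuntimeError("periodic self-test failed")
            self.last_run = now
```

A production module would more likely use a timer or scheduler, but the invariant is the same: the tests recur automatically, not at the operator's discretion.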

Life-Cycle Assurance

This seems to be more of a documentation and QE section. It covers vendor testing and the finite state model. The states required are: General Initialization State, User State, and Approved State. Changing to crypto-officer from any other role is prohibited.

Testing Requirements for Cryptographic Modules

The next part of the talk came from Zhiqiang (Richard) Wang, Leidos, Senior Security Engineer.

The testing requirements were derived from the FIPS 140-2 Derived Test Requirements (DTRs). This covers self-tests, life-cycle assurance and mitigation of other attacks.

ISO/IEC 24759:2014 specifies the methods to be used by testing laboratories to test whether the cryptographic module conforms to the requirements specified in ISO/IEC 19790:2012. This was developed to provide a high degree of objectivity during the testing process and to ensure consistency across the testing laboratories. It clearly specifies what the vendor needs to provide to the laboratory.

Richard spent time walking section by section through many of the same requirements discussed earlier, but with a twist on how each would be tested and why.