Wednesday, November 4, 2015

ICMC15: Introduction to FIPS 140-2

Steve Weingart, Manager of Public Sector Certifications, Aruba Networks; Chris Keenan, Evaluator, Gossamer Security Solutions

FIPS are a series of U.S. Federal Information Processing Standards. They are required for federal agencies and recommended for state agencies. Financial companies are also requiring them. This is a joint scheme with Canada.

FISMA, put forth under George W. Bush in 2002, now makes it required. The standard precludes the use of unvalidated cryptography in the federal government - unvalidated crypto is considered "plain text" (i.e., no protection). If an agency violates this, it could lose its funding (though that hasn't happened, yet).

The CMVP, the Cryptographic Module Validation Program, is a joint program between the U.S. NIST (National Institute of Standards and Technology) and Canada's CSEC (Communications Security Establishment Canada). The logo for a validation has the 50 stars for the US with a Canadian maple leaf on the front. :-)

The US Government is not staffed to do all of this testing, so they let accredited labs do it (under NVLAP, the National Voluntary Laboratory Accreditation Program). Then NIST can review the results. The labs are re-checked every 2 years, including verbal tests.

The labs are not allowed to do any consulting, to prevent conflicts of interest. This is different from Common Criteria. You can use a lab as a consultant, but then you need to use a different lab for your testing.

FIPS 140-2 is a conformance test, not an evaluation.

It is a back and forth process - never linear.

A cryptographic module is defined by its security functionality and its boundary. Choosing your boundary is extremely critical, particularly when it comes to making changes after you go into the process or have your certificate. A crypto module can be embodied in hardware, software, firmware, or a hybrid.

Your goal is to protect against unauthorized use and unauthorized disclosure of secrets. We also want to make sure that the code has not been modified (integrity check) and prevent insertion and deletion of keys (critical security parameters, or CSPs).

History - we had Federal Standard 1027 in 1982, focusing on making secret phone calls. Then, as they started thinking about crypto more broadly, they turned 1027 into FIPS 140. Nothing was validated against it. Eleven years later, we got FIPS 140-1 in 1994. It is meant to be reviewed and renewed every 5 years. In 1999, the review of FIPS 140-1 started, and we got FIPS 140-2 in 2001 - just a bit behind schedule. There was a draft FIPS 140-3, but it seemed to get buried in the comments.

Then in 2006, work started on ISO/IEC 19790; the second edition came out in 2012. It's come up that ISO/IEC 19790 may become FIPS 140-3.

As you can see, we're behind on updates. [Note: and boy has crypto technology and CPU technology changed a LOT since 2001!]

There is the base document, FIPS 140-2. Read it once and keep it handy - but it won't be your end-all guidance. Focus right now is on Annex A (approved security functions), Annex B (approved protection profiles), Annex C (approved RNGs) and Annex D (approved key establishment techniques).

Annex B is pretty empty, as FIPS 140-2 has become detached from Common Criteria.

The DTRs (Derived Test Requirements) are your daily document - that and the Implementation Guidance (IG). Some people think the IG is more important than the DTRs - not true; both are important.

If we do switch to ISO/IEC 19790, then the IG should revert to a blank document.

Each area in a module is validated at a security level from 1 to 4. The Overall Security Level is the lowest level awarded in any area. Most people just validate all components at the same level, but some customers (like the US Postal Service) will specify higher levels for some functions.
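
To make that rule concrete, here's a trivial C sketch (purely illustrative, not any official tooling) of how the overall level falls out of the per-area levels:

    #include <stddef.h>

    /* Illustrative only: the Overall Security Level is simply the
       minimum of the levels awarded across all functional areas. */
    int overall_security_level(const int area_levels[], size_t n_areas)
    {
        int overall = 4;                     /* highest possible level */
        for (size_t i = 0; i < n_areas; i++)
            if (area_levels[i] < overall)
                overall = area_levels[i];
        return overall;
    }

So a single level-1 area drags an otherwise level-3 module down to an overall level 1.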

There are eleven major functional areas. Some are very broad; others, like EMI/EMC, are very hardware-specific. All of these requirements are hard to "paste on" later.

You can claim "bonus" things - like doing RSA operations in constant time. In order to claim it, your lab has to prove it. Once approved, it can be listed in your certificate.
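
For flavor, here's what that kind of claim looks like in code - a generic constant-time comparison sketch (my example, not anything RSA-specific from the talk), where the running time doesn't depend on where the buffers differ:

    #include <stddef.h>

    /* Sketch: compare two buffers in time independent of where (or
       whether) they differ, so timing doesn't leak secret bytes. */
    int ct_equal(const unsigned char *a, const unsigned char *b, size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= (unsigned char)(a[i] ^ b[i]);
        return diff == 0;    /* 1 if equal, 0 if not */
    }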

You must define all entry and exit points: data input, data output, control input, and status output. At levels 3 & 4, physical ports must be physically separate.

A cryptographic module shall support authorized roles for operators and corresponding services within each role. Multiple roles may be assumed by a single operator. If a crypto module supports concurrent operators, then the module has to maintain and track this.

Authentication mechanisms may be required within a cryptographic module to authenticate an operator accessing the module, and to verify that the operator is authorized to assume the requested role and perform the services within that role.

If you're at level 1, no access control is required. Level 2 is role-based; levels 3 & 4 have identity-based requirements.
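
A hypothetical sketch of the role idea (role and service names invented for illustration) - every service is authorized per role, and an operator must have assumed a role before calling anything:

    /* Illustrative sketch: role-based authorization of module services. */
    typedef enum { ROLE_USER, ROLE_CRYPTO_OFFICER, ROLE_COUNT } role_t;
    typedef enum { SVC_ENCRYPT, SVC_ZEROIZE_KEYS, SVC_COUNT } service_t;

    static const int allowed[ROLE_COUNT][SVC_COUNT] = {
        [ROLE_USER]           = { [SVC_ENCRYPT] = 1, [SVC_ZEROIZE_KEYS] = 0 },
        [ROLE_CRYPTO_OFFICER] = { [SVC_ENCRYPT] = 1, [SVC_ZEROIZE_KEYS] = 1 },
    };

    /* A service call is rejected unless the operator's current role
       explicitly permits it. */
    int service_authorized(role_t role, service_t svc)
    {
        return allowed[role][svc];
    }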

You have to actually produce a finite state model. This is easier for old hardware engineers. You can't be in multiple states at one time. Think about the fatal error state: pretty much all states should fail into it if something goes wrong. This is a bit different with ISO/IEC 19790, where you only have to shut down the stuff having problems.
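
A minimal sketch of such a state model in C (state names invented for illustration): a single state variable means you can't be in two states at once, and the fatal error state is a trap with no exit short of a power cycle:

    /* Illustrative finite state model; the states are invented examples. */
    typedef enum {
        ST_POWER_ON, ST_SELF_TEST, ST_OPERATIONAL, ST_FATAL_ERROR
    } module_state_t;

    static module_state_t state = ST_POWER_ON;

    void enter_fatal_error(void)
    {
        /* zeroize CSPs and disable all data output here */
        state = ST_FATAL_ERROR;
    }

    int set_state(module_state_t next)
    {
        if (state == ST_FATAL_ERROR)    /* no transition leaves fatal error */
            return -1;
        state = next;
        return 0;
    }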

For physical security, thanks to the credit card breaches from last year, we all have a single-chip module in our pockets (the smart card chips embedded in our credit cards)! For level 1, you can get away with using commonly available, well-known chips. A very common approach at level 2 is a label that is torn if the system is opened or tampered with. Of course, this means hackers, etc., have been focusing on how to remove labels and reapply them with no evidence - so sticker makers have had to step up.

At level 3, you could dip the module in epoxy or use a pick-resistant lockable case.

For level 4, the lab can use ANY method they can imagine to attack your module. This is really, really hard. Do you have a good reason to get this?

There are requirements around the operational environment - the OS your module runs on. The most common environment is a regular OS. Anything over level 1 is hard; you'll need signed updates and integrity checks. At level 2 and above, folks tend to use a non-modifiable operational environment.
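
As one example of what an integrity check can look like, a sketch using OpenSSL's one-shot HMAC (where the key and reference MAC are stored is module-specific and left out here):

    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    /* Sketch: verify an HMAC-SHA-256 over the module image at load time.
       Returns 1 if the computed MAC matches the stored reference MAC. */
    int integrity_check(const unsigned char *image, size_t image_len,
                        const unsigned char *key, size_t key_len,
                        const unsigned char expected[32])
    {
        unsigned char mac[32];
        unsigned int mac_len = 0;
        if (!HMAC(EVP_sha256(), key, (int)key_len,
                  image, image_len, mac, &mac_len))
            return 0;
        return mac_len == sizeof mac && memcmp(mac, expected, sizeof mac) == 0;
    }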

Key management is a big one! It covers the entire life cycle of the keys - how are they created? Where do they go? Can they be moved? How are you wrapping them?

Key management covers RNG and key generation, key establishment, key distribution, key entry/output, storage and zeroization. You can use the key management of another module.

Zeroization can be a problem with compiler optimizations - they will "help" you and leave your key in memory.
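
The classic case: a memset() of a key buffer right before it goes out of scope is a "dead store" the compiler is allowed to delete. A common workaround is to write through a volatile pointer (C11's memset_s, where available, is another option); a minimal sketch:

    #include <stddef.h>

    /* Sketch: zeroize a CSP buffer in a way the optimizer cannot elide.
       Each write goes through a volatile pointer, so the compiler must
       assume it is observable and cannot remove it as a dead store. */
    void secure_zero(void *p, size_t n)
    {
        volatile unsigned char *v = (volatile unsigned char *)p;
        while (n--)
            *v++ = 0;
    }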

There are EMI (Electromagnetic Interference) and EMC (Electromagnetic Compatibility) requirements. EMI is about making sure the module doesn't interfere with other equipment. For EMC, in general, an FCC Part 15 Class A or B certificate is needed. WARNING: there are lots of fake FCC certificates and counterfeit labs out there.

This can be important as we're now seeing people analyze EMF signals to recover key material.

Power-up self-tests require that you test everything when you start up. If a test fails, the module must enter an error state. All of the operations must be tested "one way" - that is, you cannot encrypt data and then decrypt it and claim success; that only proves you created a reversible operation, not that you're actually doing AES, for example.
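
So a power-up known-answer test encrypts a fixed plaintext with a fixed key and compares against a stored, pre-computed ciphertext. A sketch using the AES-128 example vector from FIPS 197 Appendix C.1 and OpenSSL's EVP API (my illustration, not any particular module's code):

    #include <string.h>
    #include <openssl/evp.h>

    /* Sketch: AES-128 known-answer test using the example vector from
       FIPS 197 Appendix C.1. Comparing against stored ciphertext proves
       more than an encrypt-then-decrypt round trip ever could. */
    int aes128_kat(void)
    {
        static const unsigned char key[16] = {
            0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,
            0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f };
        static const unsigned char pt[16] = {
            0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,
            0x88,0x99,0xaa,0xbb,0xcc,0xdd,0xee,0xff };
        static const unsigned char expected[16] = {
            0x69,0xc4,0xe0,0xd8,0x6a,0x7b,0x04,0x30,
            0xd8,0xcd,0xb7,0x80,0x70,0xb4,0xc5,0x5a };
        unsigned char ct[16];
        int len = 0, ok = 0;

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        if (ctx &&
            EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL) &&
            EVP_CIPHER_CTX_set_padding(ctx, 0) &&
            EVP_EncryptUpdate(ctx, ct, &len, pt, sizeof pt) &&
            len == (int)sizeof ct)
            ok = (memcmp(ct, expected, sizeof ct) == 0);
        EVP_CIPHER_CTX_free(ctx);
        return ok;    /* 0 means: enter the error state */
    }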

There are also conditional tests - run when a specific operation that requires them is invoked, like key generation or using random numbers. Again, if a test fails, the module must enter a failure state and perform no further operations.

This will be a little "lazier" in ISO/IEC 19790. Currently, all tests must pass before you do anything. In ISO/IEC 19790 you'll be able to start using SHA-2, for example, as soon as its testing is done - even if other tests are still running.

To prove you've done it correctly, you have to show the lab that you can trigger the failure. Some vendors add a special flag that will always fail, or make a special version of their code that always fails.
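
One hypothetical way to wire that up, reusing the KAT and error-state sketches above - a compile-time flag (name invented here) that forces the failure path so the lab can witness it:

    /* Sketch: build with -DFIPS_FORCE_SELFTEST_FAIL to demonstrate that
       a self-test failure really drives the module into its error state. */
    int run_power_up_self_tests(void)
    {
        int ok = aes128_kat();    /* ...plus all the other KATs */
    #ifdef FIPS_FORCE_SELFTEST_FAIL
        ok = 0;                   /* deliberately force the failure path */
    #endif
        if (!ok)
            enter_fatal_error();  /* from the state model sketch above */
        return ok;
    }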

For design assurance, your developers need to use best practices during the design, deployment and operation of a cryptographic module - like coding standards, code reviews, etc.
 
You can claim mitigation of other attacks - this is optional - like shielding against emanations, resistance to timing attacks, etc. While optional, the lab has to verify it/witness it.

Okay - that's all done, what next?  Laboratory testing!

This is where the Derived Test Requirements (DTR) come in - repeatable and testable conformance. If you gave this module to any one of the 23 labs, it should pass or fail the same way at all of them.

Each entry in a DTR has 3 parts: the assertion (a direct quote from the FIPS 140-2 standard), a set of requirements on the vendor to prove it, and finally a set of requirements levied on the tester of the module (i.e., the lab).

When you go through Cryptographic Algorithm Validation, it proves you are using an approved algorithm in its approved mode of operation. It can be a longer process, as you have to write a test harness, but the interaction with the CAVP (Cryptographic Algorithm Validation Program) tends to be faster.

There is a NIST CAVS tool - but only the labs can see/use this tool, not you as a vendor. The lab generates test samples and sends them to the customer. The customer uses the samples as input, generates the cryptographic output, and returns the results to the lab. The lab checks the results and, if they match, submits them to the CAVP for the certificate. You should have your own harness that takes the arcane CAVS input so you can verify your module quickly as you're doing your work. There are sometimes small variations you need to handle.
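
A sketch of such a vendor-side harness (the request format here is heavily simplified - real CAVS .req files carry [ENCRYPT]/[DECRYPT] sections, COUNT lines, multiple key sizes, and so on):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    /* Sketch: read simplified "KEY = <hex>" / "PT = <hex>" request lines
       from stdin, encrypt each PT with AES-128-ECB under the latest KEY,
       and emit "CT = <hex>" response lines. */
    static int hex2bin(const char *hex, unsigned char *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (sscanf(hex + 2 * i, "%2hhx", &out[i]) != 1)
                return 0;
        return 1;
    }

    int main(void)
    {
        char line[256], hex[64];
        unsigned char key[16], pt[16], ct[16];
        int have_key = 0, len = 0;

        while (fgets(line, sizeof line, stdin)) {
            if (sscanf(line, "KEY = %32s", hex) == 1)
                have_key = hex2bin(hex, key, sizeof key);
            else if (have_key && sscanf(line, "PT = %32s", hex) == 1 &&
                     hex2bin(hex, pt, sizeof pt)) {
                EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
                EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL);
                EVP_CIPHER_CTX_set_padding(ctx, 0);
                EVP_EncryptUpdate(ctx, ct, &len, pt, sizeof pt);
                EVP_CIPHER_CTX_free(ctx);
                printf("CT = ");
                for (size_t i = 0; i < sizeof ct; i++)
                    printf("%02x", ct[i]);
                printf("\n");
            }
        }
        return 0;
    }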

Important things to remember: encryption and decryption are tested separately. In counter mode, the test is performed on ECB mode and then the counter design is examined separately. It must be shown that a particular count can only be used once with a given key.
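
A sketch of that uniqueness argument: build each counter block from a per-key fixed field plus a monotonically increasing invocation counter, and refuse to continue (rather than wrap) once the counter is exhausted:

    #include <stdint.h>
    #include <string.h>

    /* Sketch: produce a unique 16-byte counter block per encryption under
       one key; when the 32-bit counter would wrap, demand a rekey instead
       of ever reusing a count. */
    typedef struct {
        unsigned char fixed[12];   /* per-key fixed field (e.g., a nonce) */
        uint32_t invocations;
    } ctr_state_t;

    int next_counter_block(ctr_state_t *s, unsigned char block[16])
    {
        if (s->invocations == UINT32_MAX)
            return -1;             /* key exhausted: rekey required */
        uint32_t c = ++s->invocations;
        memcpy(block, s->fixed, sizeof s->fixed);
        block[12] = (unsigned char)(c >> 24);
        block[13] = (unsigned char)(c >> 16);
        block[14] = (unsigned char)(c >> 8);
        block[15] = (unsigned char)(c);
        return 0;
    }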

If you want to limit the testing and churn, keep your boundary very small so you can reuse it.

If you have non-cryptographically-relevant changes, you can do a change letter (1SUB) - those can go through CMVP very quickly, but your lab will have to review your code diffs and do some regression tests (so not free). A partial revalidation (30% or less of your module has changed) will require a partial retest. If you've changed more than 30%, then a full revalidation is required.

Correctly and optimally defining your crypto boundary is the single most important decision you can make. Make it as SMALL as possible - but you can't leave things out!

This post by Valerie Fenwick, syndicated from Security, Beers, Theater and Biking!
