Thursday, September 26, 2013

ICMC: Reuse of Validation of Other Third Party Components in a 140-2 Cryptographic Module

Presented: Jonathan Smith, Senior Cryptographic Equipment Assessment Laboratory (CEAL) Tester, CygnaCom Solutions

What is a component in this context?  An algorithm, a 140-2 module, a third-party library, etc - not a hardware device.  There is more interest in this area as more validations occur.  Requirements here are not obvious, and there isn't a lot of guidance to follow.

Let's say you want to reuse an algorithm that has its CAVP certificates - if you want to leverage that validation, you have to make sure you are talking about the same Operational Environment (OS/processor for software) and that there is no change within the algorithm boundary when you embed it within a module.  CAVP boundaries are not as well defined as CMVP's, but for all intents and purposes it is the compiled binary executable that contains the algorithm implementation.

When you're reusing someone else's algorithm, you will have a hard time making sure all of the CMVP self-tests are run at the right time. You may not be able to reuse it without rebuilding it.

Now you may want to use an entire validated module - first make sure you have the correct validated version.  If you can use it completely unchanged, you will have to reference the other module's certificate.  One note: if the embedded module is Level 2 but your code only meets Level 1 criteria, the composite module cannot be validated higher than Level 1. The inverse is not necessarily true - you might be embedding a Level 1 module, but your different use cases may allow you to get a higher level for the composite module.

To reuse this module you again need an unchanged operational environment, just as when reusing an algorithm.  The new module boundary must include the entire boundary of the included module. You'll also need a consistent error state - you cannot allow one part of the composite module to enter an error state while the rest of the system continues serving crypto.
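The consistent-error-state requirement can be illustrated with a small sketch. All class and method names here are hypothetical - this is not any real module's API, just an illustration of the rule that one component's failure must halt crypto services for the whole composite module:

```python
# Sketch of a composite module with a single shared error state.
# Hypothetical names; the point is that once ANY embedded component
# fails, every crypto service of the composite module must refuse work.

class ModuleError(Exception):
    pass

class CompositeModule:
    def __init__(self):
        self._error = False

    def _check(self):
        if self._error:
            raise ModuleError("module is in error state; no crypto services")

    def enter_error_state(self, reason: str):
        # Called when any embedded component fails a self-test, etc.
        self._error = True

    def encrypt(self, data: bytes) -> bytes:
        self._check()
        return data[::-1]  # placeholder for a real cipher call

    def sign(self, data: bytes) -> bytes:
        self._check()
        return data[:8]    # placeholder for a real signature call

m = CompositeModule()
m.encrypt(b"ok")           # works while healthy
m.enter_error_state("embedded module self-test failed")
try:
    m.sign(b"data")        # every service must now refuse
except ModuleError:
    pass
```

The design choice to centralize the check in one `_check()` gate is what makes the error state consistent: no service path can keep running after any part of the composite fails.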

Your documentation can quite frequently reference the embedded module's documentation, leaving certain tasks up to the embedded module.  Make sure the new capabilities of the composite module are documented.

A question came up about using multiple vendor's modules together, where they each have their own validation certificate.  Mr. Easter (CMVP) recommended we read Implementation Guidance (IG) 7.7 for detailed advice on this concept.

There was a question about what happens if the embedded module was validated before new IG came out.  As long as the embedded module meets SP 800-131A, the old certificate fully applies and you will not have to bring it up to the new IG.

This post syndicated from: Thoughts on security, beer, theater and biking!

Wednesday, September 25, 2013

ICMC: FIPS and FUD


Ray Potter, CEO, SafeLogic.

Mr. Potter has seen lots of vendors jumping on the FIPS-140 bandwagon when they see another vendor claiming FIPS-140 certification, without understanding what that claim meant.  That other vendor may have simply gotten an algorithm certificate for just one algorithm, for example.

FIPS-140 is important - it provides a measure of confidence and a measure of trust.  A third party is validating your claims.  FIPS-140 is open and everyone in the world can read the standard and the IG and implement based on these and understand what it means to be validated. Most importantly, FIPS-140 is required by the US Government and desired by other industries (medical/financial/etc).

Having this validation shows that you've correctly implemented your algorithms and you are offering secure algorithms.

You'll see claims like the module has been "designed for compliance to FIPS 140-2", or "it will be submitted", or "a third party has verified this", or "we have an algorithm certificate", or "we have implemented all of the requirements for FIPS-140" - none of these means the product is truly FIPS 140-2 validated.

Once you have a certificate in hand from the CMVP, then you're validated.

But even when the vendor has done the right thing for some products, sales can just get this wrong - too eager to tell the customer that everything is then validated.

So, Mr. Potter has encountered honest mistakes, but he's also seen sales/vendors outright lie about this, because validation simply takes too long and is too expensive.  Why lie? Make the sale - and hope your in-process evaluation completes before the sale closes.

Issues that exacerbate the situation: unpredictable CMVP queues, new Implementation Guidance (IG) that is sometimes retroactive, and uneven application across the government.  Some government agencies may accept different phases (in validation) - where others require a certificate in hand.

Vendors get frustrated when they are in the queue for months, then get some retroactive IG that requires code changes - they don't see this as worth the effort.

We can help: educate your customers on the value of FIPS-140 validations and what it really means to be validated, only use validated modules and follow strict guidelines for claims.

There are some people that will take another FIPS-140 validated implementation, repackage it and get their own certificate for the same underlying module but with their name as the vendor on the cert.

I asked why some vendors are doing validations of their crypto consumers when they've already got a validation for the underlying module.  Mr. Potter noted that some people might do this because they need to cover the key management aspects the consumer handles that weren't covered in the other evaluation, or the consumer may actually have some of its own internal crypto in addition to what it gets from the underlying module, or they simply are trying to make a very important customer happy.


ICMC: Understanding the FIPS Government Crypto Regulations for 2014

Presented by Edward Morris, co-founder, Gossamer Security Solutions.

Looking forward to what new regulations are going to mean for older algorithms and the protocols that use them, it seems there may be work for us to do right away.

The security strength of a given algorithm is not straightforward to calculate, as it depends on how the algorithm is used, how big the key is, etc.

On May 19, 2005, NIST announced that DES was being sunset, because it has less than 80 bits of security.  Because this sunset applies to bits of security, DH and RSA with key sizes smaller than 1024 bits are included.
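As a rough illustration, the security-strength equivalences from SP 800-57 Part 1 can be sketched as a lookup. The numbers below follow the published comparable-strength table, but treat this sketch as illustrative, not normative - always check the current revision of SP 800-57:

```python
# Sketch of SP 800-57 Part 1 comparable security strengths.
# Verify against the current revision before relying on these numbers.

# symmetric-equivalent bits of security -> example algorithms/key sizes
STRENGTH_EQUIVALENTS = {
    56:  {"symmetric": "DES",        "rsa_dh_modulus": 512},
    80:  {"symmetric": "2-key 3DES", "rsa_dh_modulus": 1024},
    112: {"symmetric": "3-key 3DES", "rsa_dh_modulus": 2048},
    128: {"symmetric": "AES-128",    "rsa_dh_modulus": 3072},
}

def rsa_dh_strength(modulus_bits: int) -> int:
    """Approximate security strength (in bits) for an RSA/DH modulus."""
    best = 0
    for bits, row in STRENGTH_EQUIVALENTS.items():
        if modulus_bits >= row["rsa_dh_modulus"]:
            best = max(best, bits)
    return best

# DES and RSA/DH below 1024 bits fall under 80 bits of security,
# which is why the sunset sweeps them all in together.
assert rsa_dh_strength(512) == 56
assert rsa_dh_strength(1024) == 80
assert rsa_dh_strength(2048) == 112
```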

Originally the DES transition gave people two years to migrate to AES or 3DES.

As a part of this, NIST released SP 800-57, which included recommendations for key management.

This was harder than anticipated, due to a lack of standards, the availability of 140-2 approved products, and the sheer size of deployments - so NIST SP 800-131A was born.

NIST 800-131A includes more details on the transitions, terminology and dates.  Some 80-bit crypto will be deprecated in 2013 and others in 2015.

The devil is in the details, of course: SP 800-131A refers to SP 800-131B, which is in draft and ultimately became FIPS-140-2 Implementation Guidance.  Digging into all of these requirements, how can you tell if TLS can still be used?

Mr. Morris dug into  various protocols to help us interpret this standard.

IPsec

IPsec is made up of so many RFCs that getting the big picture is not an easy task, and it has many possible options.  SP 800-57 Part 3 details this guidance.  Three IPsec protocols allow a choice of algorithms: IKE (for key exchange), ESP and AH.

You can avoid IKE entirely with manual keying (acceptable 2014+, but what a pain to configure/deploy); otherwise, IKEv1 is unacceptable in 2014, while IKEv2 is acceptable 2014+ if configured correctly.

For encryption in IPsec, you'll be okay with ENCR_3DES (in CBC mode), ENCR_AES_CBC (CBC mode only), and a few others.  You'll be able to use SHA1 as an HMAC, but not alone for signatures (wow, does this get twisted).

Mr. Morris continued through what would be acceptable for Pseudo-random functions, Integrity, Diffie-Hellman group and Peer Authentication - too quickly for me to type all of those algorithms, but I will try to update this if I can get access to the slide deck.

It's not as simple as which algorithm you use, but again how you use it, which sized keys, etc.

IPsec is well positioned for this - as long as you configure it correctly.

TLS

TLS is not the same thing as SSL v3.0; TLS 1.0 is equivalent to SSL v3.1.  TLS 1.0 is acceptable. SSL, though, is not allowed by CMVP.  Since 1.0 is acceptable, that essentially means TLS 1.1 and 1.2 are also acceptable, as long as they're configured correctly (seeing a theme here?).  For example, if you use pre-shared keys in lieu of certificates, ensure the key is greater than or equal to 112 bits.
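As a practical sketch of what "configured correctly" can look like, Python's standard ssl module can rule out SSL entirely and drop anonymous key exchange. This is only an illustration of the idea, not an authoritative FIPS configuration:

```python
import ssl

# Sketch: restrict a client context to TLS (no SSLv2/v3) and exclude
# anonymous/null-auth cipher suites. Illustrative only - not an
# official FIPS 140-2 configuration.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3  # SSL is not allowed by CMVP
ctx.set_ciphers("DEFAULT:!aNULL:!eNULL")          # drops *_anon-style suites
# SRP and KRB5 key exchange would also need to be excluded where the
# underlying TLS stack supports them.
```

The same idea applies to any TLS stack: the protocol versions the standard permits are only acceptable if the cipher suite selection is also constrained.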

You won't be able to use the following key exchanges: *_anon, SRP_*, KRB5 (though Mr. Morris hasn't dug through the Kerberos standards yet to see where they will be with these 2014 requirements).

SSH

SSH is not covered by SP 800-57, so it's harder to figure out whether it's okay. There are problems when the RFCs require things that are no longer considered secure: single DES is required to be implemented, and merely having it available in your implementation will be a problem.

Recommendations

Look for documentation and configuration guides that can help, or get independent evaluations of the software you're deploying (companies like Gossamer will look at how you're using even open source software like Apache, OpenSSL, OpenSSH, etc.).


ICMC: Building a Corporate-Wide Certification Program - Techniques that (might) work

Tammy Green, Security and Certification Architect at Blue Coat, came into what seemed like a simple task. She had some Common Criteria experience when she joined Blue Coat, their product had previously been validated at FIPS-140-2 Level 2, and they had already signed contracts with a consulting firm (Corsec) and a lab.  Should've been easy, right?  Her new boss said it should take about a year and only 20% of her time.  Two and a half full-time years later... yikes.

One of the biggest problems was getting internal teams to work with her - even people involved in the previous validations didn't want to talk to Ms. Green about it.

Nobody wanted to do this - they want to work on shiny new features they can sell. How does a process that takes two years (often not complete until after a product is EOL) help them?

It's hard to see the long term picture - you want to sell to the government, you need FIPS-140-2 validations.

Ms. Green didn't want to do this herself again afterwards. Instead of running away, she worked on setting up a certification team locally (her boss hiring a program manager helped to encourage it).

In addition to having the program manager, you need a certification architect.  You can't use the same architect as the product architect, because that person is busy designing shiny new features.

You need to work with the development team well in advance - fit your FIPS-140-2/Common Criteria schedule into their development schedule. You can't bolt on the necessary tests and requirements as an afterthought, and you don't want to delay a schedule because requirements are dropped in at the end.

Target the right release: because FIPS-140-2 takes so long, you need to pick a release you plan on supporting for a long time.

Ms. Green found that after time... engineers stopped replying to her emails and answering her phone calls.  You need to identify key engineering resources to work with and their management needs to commit to those engineers dedicating 10-20% of their time to these validations.

Once you get this set up and have educated engineering, you'll find they'll reach out to you in advance - better timing!

Her team keeps track of what needs to be done: file bugs and track them.  You'd think the project manager for the product team would do this, but what she's found is that the bugs get reprioritized and reassigned to a future release.  Someone who understands validations needs to track these issues.

Ms. Green recommends that you create the security policy from existing documents: don't rely on engineers doing this. They simply don't understand what goes into this document or why it's important.  Instead, use engineering and QA to validate content.

It's important to convince QA to continue testing FIPS mode and related features, as some customers may still want to run in FIPS mode (even though it wasn't validated), and the release would be ready for validation if something went horribly wrong with the older release in validation.

Schedule time to prep. Ms. Green has 4-8 hour-long meetings to make sure everyone understands what's important. Take time to prepare, make sure everyone knows what will be expected from the lab visit, and have a test plan formalized in advance.  It's actually a lot of work to set up failures (the lab evaluators require that you demonstrate what happens when a failure occurs, even though you have to inject the failure to force it): debug builds, builds you know will fail, multiple test machines, platforms, etc.

To keep your team from killing you... or damaging morale, celebrate the milestones. Mention progress in every status report, and do corporate-wide announcements when you finish.

Do a post mortem to understand how this can be improved: give your engineering team a voice! Listen and take action based on what worked and didn't.

Update tools and features to make this easier next time: add keywords to bugs and features, modify the product life-cycle, and add FIPS-related questions to templates.

Suggestions/questions from the audience:
Make sales your best friend.  Validations/certifications are not fun, nobody does them for fun - you do this to make money.
Get the certification team involved as early as possible: from the very beginning - marketing design meetings.
Why don't you run your FIPS-140 mode tests all the time?  Time consuming, slower, not seen as a priority when there are no plans to validate.



ICMC: Panel: How Can the Validation Queue for the CMVP be Improved

Fiona Pattinson, atsec information security, moderated a panel on CMVP queue lengths. Panelists: Michael Cooper, NIST; Steve Weymann, Infoguard; James McLaughlin, Gemalto; and Nithya Rahamadugu, Cygnacom.

Mr. McLaughlin said queue length is the major issue for vendors, because it impacts time to market. If you don't get your validation until after your product is obsolete, that can be incredibly frustrating. It's important to take schedule and time to market into consideration, which is impossible when you cannot anticipate how long you'll be sitting in the queue.

Mr. Weymann expressed how frustrating it is for a lab to try to communicate expectations to their customers, and what to do when the implementation guidance is changed (or, as CMVP would say, clarified) while you're in the queue - do you have to start over?

Mr. Cooper, CMVP, explained that there is a resource limitation (this gets back to the 8 reviewers). But is simply adding resources enough?  They have grown in this area - they used to have only 1 or 2 reviewers.  But does adding people resolve all of the issues?  Finding people with the correct training is difficult as well.  Hiring in the US Government can take 6-8 months, even when they do find the right person with the right background and interest.

Mr. Weymann posits that perhaps there are ways to better use existing resources.  Labs could help out a lot more here, making sure things are in such good shape at submission that CMVP's review work is easier.

Ms. Rahamadugu and Mr. Weymann suggested adding more automation into the CMVP process. Mr. Cooper noted that he has just now gotten the budget to hire someone to do some automation tasks, so hopefully that will result in improvements to pace.

A comment from the audience from a lab suggested automation of the generation of the certificate, once the vendor has passed validation.  Apparently this can sometimes be error prone, resulting in 1-2 more cycles of review.  Automation could help here.

Mr. Easter said they like to complete things in 1-2 rounds of questions to the vendor, and there are penalties (not specified) for more than 2 rounds.  Sometimes the answer to a seemingly benign question can actually bring up more questions, which will result in further rounds.  Though, CMVP is quick to set up a phone call to resolve "question loops".

Mr. Easter noted that he tries to give related reviews or technologies to one person. This often helps to speed up reviews, reducing the questions. On the other hand, that puts the area expertise in the hands of one person, so when they go on vacation - there can be delays.

Mr. Weymann noted that each lab seems to gather different information and present it in different ways - for example, different ways of presenting which algorithms are included, key methods, etc.

Concern from the audience about loss of expertise if anyone moves on - could validators shadow each other to learn each other's areas of expertise?  Or would that effectively limit the number of reviewers?
Vendors feel like they have to continually "train" the validators, and retrain every time - frustrating for vendors to do this seemingly over and over.

A suggestion from the audience: could labs review another lab's submissions before it went to CMVP?  This is difficult, due to all of the NDAs involved.  Also, a vendor may not want a lab working with a competitor to review their software.

Complicated indeed!


ICMC: Impact of FIPS 140-2 Implementation Guidance

Presenters: Kim Schaffer, Apostol Vassilev, and Jim Fox - all from NIST (CMVP).  The talk walked us through some of the most confusing/contentious recent updates to the Implementation Guidance document.

Implementation Guidance 9.10

Default Entry Points are well-known mechanisms for operator-independent access to a crypto module. When an application loads, these access points (static default constructors, dynamically loaded libraries, etc.) are linked into the application.  When this default entry point is called, power-on self-tests must run without application or operator intervention.

This guidance exists in harmony with IG 9.5, which essentially makes sure there isn't a way for the module to skip power-on self-tests.

Implementation Guidance 3.5

All services need to be documented in your security policy, even the non-approved ones.  Each service needs to be listed individually.  Some services can be documented externally, but those need to be publicly accessible (via URL or similar).

Operational Environment

This is defined as the OS and platform [Ed note: emphasis mine].

Today, processors are becoming nonstandard, so NIST can no longer just consider the OS alone and needs to reinvigorate the intent seen in the original document.  This gets more complicated with virtual machines (VMs) in the mix.

What they used to call GPCs (general purpose processors) they are now referring to as having Special Processing Capability (SPC), because of the cryptography being added to the processors.

What do they mean by SPC?  Extended instructions that enhance cryptographic algorithms without being a full implementation.  Such processors include Intel, ARM and SPARC.

If the module can demonstrate independence of SPC, then it won't necessarily be considered a hybrid implementation.  If the lab can show this, even though an implementation can leverage an SPC, they may still be considered a software or firmware implementation.

Questions

An audience member showed irritation that the new IG gives no time to implement (i.e., no grace period).  Mr. Randy Easter reiterated that this is not a new requirement, but something that was formerly missed by the testing the labs did. They want to nip these lapses in the bud and get people back on track - essentially correcting assumptions CMVP was making about what the labs were testing. Some changes/clarifications on this lack of grace period were sent to the labs last night (not sure what's in that, but the labs all seem happy and are anxiously checking their mail).

Another question asked why the very specific platform is required on the certificate. Mr. Easter noted that this is due to how dependent entropy is on the underlying platform - this becomes important to test and verify. This is frustrating for Android developers.  Mr. Easter noted that most consumers, though, are happy to use the same module on slight variations of platforms (for example, different releases of Windows on slightly different laptop hardware - too many variations to actually test).

Mr. Schaffer noted that certificates do not need to go down to the patch level, unless the vendor requests it.



ICMC: CAVP/CMVP Status - What's Next?

We were lucky to have three panelists from NIST: Sharon Keller, Randall Easter, and Carolyn French, so we could hear what they are doing straight from the horse's mouth, as it were.  This is a rare opportunity for vendors to interact directly with members of these validation programs.

Ms. Keller, CAVP - NIST, explained a bit of the history of this body. CMVP and CAVP used to be one group, but they were separated in 2003, as their tasks could easily be broken up.  As noted yesterday, the validations done by CAVP are much more black and white - you've either passed the test suite or failed.  The CAVP has automated this as much as possible, which also contributes to the speed.

In addition to these validations, CAVP writes the tests, provides documentation and guidance and provides clarification, as needed.  They also keep an up to date webpage that lists algorithm implementations.

The rate of incoming validations is growing fast - nearly exponential!  This year, they've already issued more algorithm certificates than in all of 2012, and they added new tests for algorithms like SHA-512/t.

Mr. Easter said that NIST had originally planned to do a follow-on conference to their 2003 conference in 2006, once FIPS-140-3 was finalized.... oops!

The original ISO/IEC 19790 (March 2006) was a copy of FIPS-140-2; the latest draft contains improvements that were hoped to be in FIPS-140-3.

Because FISMA requires that the US Government use validated crypto, these standards have become very important.

Vendors should make sure they reference the Implementation Guidance (IG). Mr. Easter noted that the IG, in their opinion, does not contain any new "shalls", merely clarification of existing requirements. Though, asking many vendors here, apparently the original document was so murky that these IGs seem like completely new requirements.  Now that FIPS-140-2 is so old (12 years now), the IG document is larger than the standard itself!

There are actually only 8 full time reviewers for the CMVP, and those same reviewers also have to do bi-annual audits of the 22 international labs, write Implementation Guidance, etc - you can see why they are busy!

Reports from labs show the greatest areas of difficulty for conformance are key management, physical security, self-tests and RNGs.  Nearly 50% of the vendor implementations have non-conformance issues (8% are algorithm implementation conformance issues).

Mr. Easter apologized for the  queue and noted that this is not what they want either: they want the US Government to be able to get the latest software and hardware, too!

Currently, there are 200 reports in the queue that are actively being tracked and worked - again, 8 reviewers.  Is adding more reviewers the answer? How many people can they steal from the labs ;-)

What can you do to help?  Labs should make sure the reports are complete and in good shape, with all necessary elements.  Vendors should try to answer any questions as fast as possible.  Help close the loop; make the jobs of the reviewers simple.

Unfortunately, work on FIPS-140-3 seems to have stalled in the group that evaluates new algorithms, and Mr. Easter would like NIST to instead adopt ISO/IEC 19790 as FIPS-140-3 (it's ready to go: fleshed out, with DTRs). He asked us to help pressure NIST to get this standard adopted.

