Wednesday, September 25, 2013

ICMC: Understanding the FIPS Government Crypto Regulations for 2014

Presented by Edwards Morris, co-founder of Gossamer Security Solutions.

Looking ahead to what the new regulations will mean for older algorithms and the protocols that use them, it seems right away that there may be work for us to do.

The security strength of a given algorithm is not straightforward to calculate, as it depends on how the algorithm is used, how big the key is, etc.

On May 19, 2005, DES was sunset because it provides less than 80 bits of security.  Because the sunset applies to bits of security rather than to one algorithm, DH and RSA with key sizes smaller than 1024 bits are included as well.
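As a rough illustration, the key-size-to-strength equivalences can be sketched in Python. The figures come from the comparable-strength tables in NIST SP 800-57 Part 1; this is a sketch only - the real tables also cover symmetric algorithms, ECC sizes, and usage-specific caveats.

```python
# Comparable security strengths (bits) for RSA/DH modulus sizes,
# per the tables in NIST SP 800-57 Part 1 (sketch, not the full tables).
RSA_DH_STRENGTH = {1024: 80, 2048: 112, 3072: 128, 7680: 192, 15360: 256}

def security_strength(modulus_bits: int) -> int:
    """Largest tabulated strength this modulus size reaches."""
    reached = [s for bits, s in RSA_DH_STRENGTH.items() if modulus_bits >= bits]
    return max(reached, default=0)

def acceptable(modulus_bits: int, required_strength: int) -> bool:
    """True if the key size meets the required bits of security."""
    return security_strength(modulus_bits) >= required_strength

# The 2005 DES sunset set the floor at 80 bits; the SP 800-131A
# transition raises it to 112 - which is why RSA/DH at 1024 bits
# (exactly 80 bits of strength) ages out as well.
```

This also shows why the transition dates matter per key size rather than per algorithm: `acceptable(1024, 80)` holds while `acceptable(1024, 112)` does not.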

Originally the DES transition gave people two years to migrate to AES or 3DES.

As a part of this, NIST released SP 800-57, which included recommendations for key management.

This was harder than anticipated, due to a lack of standards, a shortage of available FIPS 140-2 approved products, and the sheer size of deployments - so NIST SP 800-131A was born.

NIST SP 800-131A includes more details on the transitions, terminology and dates.  Some 80-bit crypto is deprecated starting in 2013, the rest in 2015.

The devil is in the details, of course: SP 800-131A refers to SP 800-131B, which is still in draft and ultimately became FIPS 140-2 Implementation Guidance.  Digging into all of these requirements, how can you tell if TLS can still be used?

Mr. Morris dug into  various protocols to help us interpret this standard.

IPsec

IPsec is made up of so many RFCs that getting the big picture is not an easy task, and it has many possible options.  SP 800-57 Part 3 details this guidance.  Three IPsec protocols allow a choice of algorithms: IKE (for key exchange), ESP and AH.

You can avoid IKE entirely by using manual keying (acceptable 2014+, but what a pain to configure/deploy); otherwise the choice is IKEv1 (unacceptable in 2014) or IKEv2 (acceptable 2014+, if configured correctly).

For encryption in IPsec, you'll be okay with ENCR_3DES (in CBC mode), ENCR_AES_CBC (CBC mode only), and a few others.  You'll be able to use SHA1 as an HMAC, but not alone for signatures (wow, does this get twisted).
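A sketch of how these rules could be encoded as data. The transform names follow the IKEv2 parameter registry; the sets below are illustrative examples drawn from the talk, not a complete restatement of the guidance.

```python
# Acceptability rules from the talk, encoded as data (illustrative sketch).
ACCEPTABLE_ENCRYPTION = {"ENCR_3DES", "ENCR_AES_CBC"}  # CBC modes, per the talk
ACCEPTABLE_INTEGRITY = {"AUTH_HMAC_SHA1_96"}  # SHA-1 as an HMAC is still fine
ACCEPTABLE_SIGNATURE_HASHES = {"SHA-256", "SHA-384", "SHA-512"}  # not SHA-1 alone

def esp_proposal_acceptable(encryption: str, integrity: str) -> bool:
    """Check an ESP proposal against the (sketched) 2014 rules."""
    return (encryption in ACCEPTABLE_ENCRYPTION
            and integrity in ACCEPTABLE_INTEGRITY)
```

The split between the integrity and signature sets captures the twisted part: SHA-1 survives as an HMAC but not as a standalone signature hash.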

Mr. Morris continued through what would be acceptable for Pseudo-random functions, Integrity, Diffie-Hellman group and Peer Authentication - too quickly for me to type all of those algorithms, but I will try to update this if I can get access to the slide deck.

It's not as simple as which algorithm you use, but again how you use it, which sized keys, etc.

IPsec is well positioned for this - as long as you configure it correctly.

TLS

TLS is not the same thing as SSL v3.0; TLS 1.0 is equivalent to SSL v3.1.  TLS 1.0 is acceptable. SSL, though, is not allowed by CMVP.  Since 1.0 is acceptable, that essentially means that TLS 1.1 and 1.2 are also acceptable, as long as it's configured correctly (seeing a theme here?).  For example, if you use pre-shared keys in lieu of certificates, ensure the key is greater than or equal to 112 bits.

You won't be able to use the following key exchanges: *_anon, SRP_*, KRB5 (though Mr. Morris hasn't dug through the Kerberos standards yet to see where they will be with these 2014 requirements).
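As a sketch of what "configured correctly" might look like in practice, here is Python's ssl module (a modern API that postdates this talk) pinning the protocol floor at TLS 1.0, excluding anonymous cipher suites, and checking pre-shared key sizes. The cipher-string names follow OpenSSL conventions.

```python
import ssl

# Per the talk: TLS 1.0+ is acceptable, SSL is not.  Python's default
# context already disables SSLv2/SSLv3; we pin the floor explicitly.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1

# Anonymous key exchanges (*_anon) correspond to OpenSSL's aNULL ciphers;
# excluding them keeps those suites out of any handshake.
ctx.set_ciphers("DEFAULT:!aNULL:!eNULL")

def psk_acceptable(psk: bytes) -> bool:
    """Pre-shared keys used in lieu of certificates need >= 112 bits."""
    return len(psk) * 8 >= 112
```

A 14-byte key is exactly 112 bits, so `psk_acceptable(bytes(14))` passes while a 13-byte key does not.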

SSH

SSH is not covered by SP 800-57, so it's harder to figure out whether it's okay or not. There are problems where the RFCs require things that are no longer considered secure: single DES is required to be implemented, and merely having it available in your implementation will be a problem.

Recommendations

Look for documentation and configuration guides that can help there, or get independent evaluations of the software that you're deploying (companies like Gossamer will look at how you're using even open source software like Apache, OpenSSL, OpenSSH, etc.).

This post syndicated from: Thoughts on security, beer, theater and biking!

ICMC: Building a Corporate-Wide Certification Program - Techniques that (might) work

Tammy Green, Security and Certification Architect at Blue Coat, came into what seemed like a simple task. She had had some Common Criteria experience when she joined Blue Coat, their product had previously been evaluated at FIPS 140-2 Level 2, and they had already signed contracts with a consulting firm (Corsec) and a lab.  Should've been easy, right?  Her new boss said it should take about a year and only 20% of her time.  Two and a half full-time years later... yikes.

One of the biggest problems was getting internal teams to work with her - even people involved in the previous validations didn't want to talk to Ms. Green about it.

Nobody wanted to do this - they want to work on the new shiny features that they can sell, how does a process that takes 2 years (often not complete until after a product is EOL) help them?

It's hard to see the long-term picture: if you want to sell to the government, you need FIPS 140-2 validations.

Ms. Green didn't want to do this herself again afterwards. Instead of running away, she worked on setting up a certification team locally (her boss hiring a program manager helped to encourage it).

In addition to having the program manager, you need a certification architect.  You can't use the same architect as the product architect, because that person is busy designing shiny new features.

You need to work with the development team well in advance - fit your FIPS-140-2/Common Criteria schedule into their development schedule. You can't bolt on the necessary tests and requirements as an afterthought, and you don't want to delay a schedule because requirements are dropped in at the end.

Target the right release: because FIPS-140-2 takes so long, you need to pick a release you plan on supporting for a long time.

Ms. Green found that after time... engineers stopped replying to her emails and answering her phone calls.  You need to identify key engineering resources to work with and their management needs to commit to those engineers dedicating 10-20% of their time to these validations.

Once you get this set up and have educated engineering, you'll find they'll reach out to you in advance - better timing!

Her team keeps track of what needs to be done: file bugs and track them.  You'd think the project manager for the product team would do this, but what she's found is that the bugs get reprioritized and reassigned to a future release.  Someone who understands validations needs to track these issues.

Ms. Green recommends that you create the security policy from existing documents: don't rely on engineers doing this. They simply don't understand what goes into this document or why it's important.  Instead, use engineering and QA to validate content.

It's important to convince QA to continue testing FIPS mode and related features, as some customers may still want to run in FIPS mode (even though that release wasn't validated), and so the release would be ready for validation if something went horribly wrong with the older release in validation.

Schedule time to prep. Ms. Green has 4-8 hour long meetings to make sure everyone understands what's important. Take time to prepare, make sure everyone knows what will be expected from the lab visit and have a test plan formalized in advance.  It's actually a lot of work to set up failures (the lab evaluators require that you demonstrate what happens when  a failure happens, even though you have to inject the failure to force it).  Debug builds, builds you know will fail, multiple test machines, platforms, etc.

To keep your team from killing you... or damaging morale, celebrate the milestones. Mention the progress in every status report, and do corporate-wide announcements when you finish.

Do a post mortem to understand how this can be improved: give your engineering team a voice! Listen and take action based on what worked and didn't.

Update tools and features to make this easier next time: keywords to bugs and features, modifying product life-cycle, add questions related to FIPS to templates.

Suggestions/questions from the audience:
Make sales your best friend.  Validations/certifications are not fun, nobody does them for fun - you do this to make money.
Get the certification team involved as early as possible: from the very beginning - marketing design meetings.
Why don't you run your FIPS-140 mode tests all the time?  Time consuming, slower, not seen as a priority when there are no plans to validate.



ICMC: Panel: How Can the Validation Queue for the CMVP be Improved

Fiona Pattinson, atsec information security, moderated a panel on CMVP queue lengths. Panelists: Michael Cooper, NIST; Steve Weymann, Infoguard; James McLaughlin, Gemalto; and Nithya Rahamadugu, Cygnacom.

Mr. McLaughlin said queue length is the major issue for vendors, because it impacts time to market. If you don't get your validation until after your product is obsolete, that can be incredibly frustrating. It's important to take schedule and time to market into consideration, which is impossible when you cannot anticipate how long you'll be sitting in the queue.

Mr. Weymann expressed how frustrating it is for a lab to communicate expectations to their customers, and to advise them what to do when the Implementation Guidance is changed (or, as CMVP would say, clarified) while they're in the queue - do you have to start over?

Mr. Cooper, CMVP, explained that there is a resource limitation (this gets back to the 8 reviewers). They have grown in this area - they used to have only 1 or 2 reviewers - but does simply adding people resolve all of the issues?  Finding people with the correct training is difficult as well.  Hiring in the US Government can take 6-8 months, even if they do find the right person with the right background and right interest.

Mr. Weymann posits that perhaps there are ways to better use existing resources.  Labs could help out a lot more here, making sure things are in such good shape at submission that the work CMVP has to do is easier.

Ms. Rahamadugu and Mr. Weymann suggested adding more automation into the CMVP process. Mr. Cooper noted that he has just now gotten the budget to hire someone to do some automation tasks, so hopefully that will result in improvements to pace.

A comment from the audience from a lab suggested automation of the generation of the certificate, once the vendor has passed validation.  Apparently this can sometimes be error prone, resulting in 1-2 more cycles of review.  Automation could help here.

Mr. Easter said they like to complete things in 1-2 rounds of questions to the vendor, and there are penalties (not specified) for more than 2 rounds.  Sometimes the answer to a seemingly benign question can actually bring up more questions, which will result in further rounds.  Though, CMVP is quick to set up a phone call to resolve "question loops".

Mr. Easter noted that he tries to give related reviews or technologies to one person. This often helps to speed up reviews, reducing the questions. On the other hand, that puts the area expertise in the hands of one person, so when they go on vacation - there can be delays.

Mr. Weymann noted that each lab seems to gather different information and present it in different ways - for example, different ways of presenting which algorithms are supported, keying methods, etc.

Concern from the audience about loss of expertise if anyone moves on - could validators shadow each other to learn each other's areas of expertise?  Or would that effectively limit the number of reviewers?
Vendors feel like they have to continually "train" the validators, and retrain every time - frustrating for vendors to do this seemingly over and over.

A suggestion from the audience: could labs review another lab's submissions before it went to CMVP?  This is difficult, due to all of the NDAs involved.  Also, a vendor may not want a lab working with a competitor to review their software.

Complicated indeed!


ICMC: Impact FIPS 140-2: Implementation Guidance

Presenters: Kim Schaffer, Apostol Vassilev, and Jim Fox - all from NIST (CMVP).  The talk walked us through some of the most confusing/contentious recent updates to the Implementation Guidance document.

Implementation Guidance 9.10

Default Entry Points are well-known mechanisms for operator-independent access to a crypto module - static default constructors, dynamically loaded libraries, etc. - that are linked into the application when it loads.  When the default entry point is called, power-on self-tests must run without application or operator intervention.

This guidance exists in harmony with IG 9.5, which is essentially about making sure there isn't a way for the module to skip power-on self-tests.

Implementation Guidance 3.5

All services need to be documented in your security policy, even the non-approved ones.  Each service needs to be listed individually.  Some services can be documented externally, but those need to be publicly accessible (via URL or similar).

Operational Environment

This is defined as the OS and platform [Ed note: emphasis mine].

Today, processors are becoming nonstandard, so NIST can no longer just consider the OS alone and needs to reinvigorate the intent seen in the original document.  This gets more complicated with virtual machines (VMs) in the mix.

What they used to call a GPC (general purpose processor), they now refer to as an SPC (Special Processing Capability), because of the cryptography being added to processors.

What do they mean by SPC?  Extended instructions that enhance cryptographic algorithms without providing a full implementation.  Such processors include Intel, ARM and SPARC.

If the module can demonstrate independence of SPC, then it won't necessarily be considered a hybrid implementation.  If the lab can show this, even though an implementation can leverage an SPC, they may still be considered a software or firmware implementation.

Questions

An audience member showed irritation that the new IG does not give time to implement (i.e., no grace period).  Mr. Randy Easter reiterated that this is not a new requirement, but something that was formerly missed by the testing that the labs did. They want to nip these lapses in the bud and get people back on track - essentially correcting assumptions that CMVP was making about what the labs were testing. There are some changes/clarifications on this lack of grace period that were sent to the labs last night (not sure what's in that, but the labs all seem happy and are anxiously checking their mail).

Another question asked why the very specific platform is required on the certificate. Mr. Easter noted that this is due to how dependent entropy is on the underlying platform - this becomes important to test and verify. This is frustrating for Android developers.  Mr. Easter noted that most consumers, though, are happy to use the same module on slight variations of platforms (for example, different releases of Windows on slightly different laptop hardware - too many variations to actually test).

Mr. Schaffer noted that certificates do not need to go down to the patch level, unless the vendor requests it.



ICMC: CAVP/CMVP Status - What's Next?

We were lucky to have three panelists from NIST: Sharon Keller, Randall Easter, and Carolyn French, so we could hear about what they are doing straight from the horse's mouth, as it were.  This is a rare opportunity for vendors to interact directly with members of these validation programs.

Ms. Keller, CAVP - NIST, explained a bit of the history of this body. CMVP and CAVP used to be one group, but they were separated in 2003, as their tasks could easily be broken up.  As noted yesterday, the validations done by CAVP are much more black and white - you've either passed the test suite, or failed.  The CAVP has automated this as much as possible, which also contributes to the speed.

In addition to these validations, CAVP writes the tests, provides documentation and guidance and provides clarification, as needed.  They also keep an up to date webpage that lists algorithm implementations.

The rate of incoming validations shows nearly exponential growth!  Already this year, they've issued more algorithm certificates than in all of 2012.  This year, they also added new tests for algorithms like SHA-512/t.

Mr. Easter said that NIST had originally planned to do a follow-on conference to their 2003 conference in 2006, once FIPS-140-3 was finalized.... oops!

The original ISO/IEC 19790 (March 2006) was a copy of FIPS-140-2, the latest draft contains improvements that were hoped to be in FIPS-140-3.

Because FISMA requires that the US Government use validated crypto, these standards have become very important.

Vendors should make sure they reference the Implementation Guidance (IG). Mr. Easter noted that the IG, in their opinion, does not contain any new "shalls", but merely clarification of existing requirements. Though, asking many vendors here, apparently the original document was so murky that these IGs seem like completely new requirements.  Now that FIPS-140-2 is so old (12 years now), the IG document is larger than the standard itself!

There are actually only 8 full time reviewers for the CMVP, and those same reviewers also have to do bi-annual audits of the 22 international labs, write Implementation Guidance, etc - you can see why they are busy!

Reports from labs show the greatest areas of difficulty for conformance are key management, physical security, self-tests and RNGs.  Nearly 50% of the vendor implementations have non-conformance issues (8% are algorithm implementation conformance issues).

Mr. Easter apologized for the  queue and noted that this is not what they want either: they want the US Government to be able to get the latest software and hardware, too!

Currently, there are 200 reports in the queue that are actively being tracked and worked - again, 8 reviewers.  Is adding more reviewers the answer? How many people can they steal from the labs ;-)

What can you do to help?  Labs should make sure the reports are complete, in good shape, with all necessary elements.  Vendors should try to answer any questions as quickly as possible.  Help close the loop; make the jobs of the reviewers simple.

Unfortunately, work on FIPS-140-3 seems to have stalled in the group that evaluates new algorithms, and Mr. Easter would like NIST instead to adopt ISO/IEC 19790 as FIPS-140-3 (it's ready to go: fleshed out, with DTRs) - he asked us to help pressure NIST to get this standard adopted.



ICMC: Welcome and Plenary Session

Congratulations to Program Chair Fiona Pattinson, atsec information security, for putting together such an interesting program for this inaugural International Cryptographic Module Conference.  She has brought together an amazingly diverse mix of vendors, consultants, labs and NIST.  People from all over the world are here.

Our first keynote speaker was Charles H. Romine, Director, Information Technology Laboratory, NIST. Mr. Romine started off his talk by noting how important integrity and security are to his organization.  Because of this, based on community feedback, they've reopened review of algorithms and other documents.  Being open and transparent is critical to his organization.

NIST is reliant on their industry partners for things like coming up with SHA3.  They are interested in knowing what is working for their testing programs and what is not working, hopefully they will get a lot of great feedback at this conference.

Because these validations are increasingly more valuable in the industry - demand for reviews have gone up significantly. How can NIST keep the quality of reviews up, while still meeting the demand? Mr. Romine is open for feedback from us all.

Our next keynote was from Dr. Bertrand du Castel, Schlumberger Fellow and Java Card Pioneer, titled "Do Cryptographic Modules Have a Moat?"

Dr. du Castel asks: is the problem with cryptography simply key exchange? Or is it really a trust issue?  With things like Bitcoin (now taxed in Germany) and PayPal everywhere on the Internet, it should be obvious how important trust is.  He walked us through many usage examples, with humorous anecdotes, to demonstrate that the importance of trust is growing and growing.



Tuesday, September 24, 2013

ICMC: Introduction to FIPS-140-2

Presented by Steve Weingart, Cryptographic & Security Testing Laboratory Manager, at atsec information security.

While I'm not new to FIPS-140-2, as I'm here in Gaithersburg, MD for the inaugural International Cryptographic Module Conference, it's always good to get a refresher from an expert.  Please note: these are my notes from this talk and should not be taken as gospel on the subject. If you need a FIPS-140 validation - you should probably engage a professional :-)

Since the passage of the Federal Information Security Management Act (FISMA) of 2002, the US Federal Government can no longer waive FIPS requirements. Vendors simply need to comply with various FIPS standards, including FIPS-140-2 (the FIPS standard that is relevant to cryptography and security modules).  Financial institutions and others that care about third-party evaluations also want to see this standard implemented.

Technically, a non-compliant agency could lose their funding.

FIPS-140 is a joint program between US NIST and Canadian CSEC (aka CSE).

All testing for FIPS-140 is done by accredited labs (accredited under the National Voluntary Laboratory Accreditation Program).  Labs, though, cannot perform consulting on the design of the cryptographic module - it could be seen as a conflict of interest if they were designing and testing the same module.  They can use content you provide to create your Security Policy and Finite State Machine (FSM), as those documents have to be in a very specific format that individual vendors will likely have trouble creating their first time out.

A cryptographic module is defined by its security functionality, has a well-defined boundary, and contains at least one approved security function.  A module can be implemented in hardware, software, firmware, software-hybrid or firmware-hybrid form.

Security functionality can be: symmetric/asymmetric key cryptography, hashing, message authentication, RNG, or key management.

The testing lab makes sure that you've implemented the approved security functions correctly to protect sensitive information, makes sure you cannot change the module after it's been validated (integrity requirement), and that you prevent unauthorized use of the module.

Your users need to be able to query the module to see if it is performing correctly and to see the operational state it is in.

FIPS-140 has been around, originally as Federal Standard 1027, since about 1982.  Of course, as technology changes, the standard gets out of date.  FIPS-140-2 came out in 2001 with some change notices in 2002.  FIPS-140-3 has had many false starts.  A large quantity of implementation guidance has come  out (the IG is approaching, if not overtaking, the size of the initial FIPS-140-2 document).

Some brief clarifications: the -<number> on the standard refers to the version.  The first was FIPS-140, next FIPS-140-1, and the final currently adopted one is FIPS-140-2.

Each of those versions has levels that you can be evaluated at. Level 1 is the "easiest"; Level 4 is the hardest (available to hardware only).

FIPS-140-3 has had two rounds of public drafts, there were over 1200 comments, but it seems there is just one person still working on this draft.  In addition, there are not any Derived Test Requirements (DTR) so the labs cannot even consider writing tests for the standard.

There is a newer version of "FIPS-140-3" from ISO (ISO/IEC 19790, released in 2012), essentially competing with the NIST draft.  Though, the original goal was to have the ISO standard be the same document, just as an international standard.

Right now, you can validate against the ISO standard in other countries - not in the US or Canada, though.  If you used an international body to validate your module against the ISO standard, it would not get you through the door to US or Canadian Government customers.

It's up to NIST and CSEC to pick one of these, create transition guidance and testing information to let vendors and labs move forward.

ISO 19790 has many improvements, but if you implement them, you will not pass FIPS-140-2 testing. For example, ISO 19790 allows lazy POST (only testing when needed), but FIPS-140-2 requires POST of the entire boundary any time any part of the boundary is used.

FIPS-140-2 has four levels, and it doesn't matter if 99% of your module meets all of the items required for a higher level (like Level 2) - if the other 1% only meets Level 1, you cannot be validated at the higher level.
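In other words, the overall level is the minimum across the requirement areas. A small sketch (area names abbreviated and illustrative):

```python
def overall_level(area_levels: dict) -> int:
    """FIPS 140-2 overall level = lowest level achieved in any area."""
    return min(area_levels.values())

# Ten areas at Level 2 and one at Level 1 still yield an overall Level 1.
areas = {"spec": 2, "interfaces": 2, "roles": 2, "fsm": 2,
         "physical": 1, "op_env": 2, "key_mgmt": 2, "emi_emc": 2,
         "self_tests": 2, "design_assurance": 2, "mitigation": 2}
```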

The ISO document doesn't point to specific Common Criteria EAL levels, helping to alleviate a chicken-and-egg circular dependency for FIPS-140 levels. For example, FIPS-140-2 Level 2 requires an EAL-validated OS underneath, while the EAL validation requires a FIPS-140-2 validated crypto module.

The finite state model means that you can only be in one state at one time. That means you cannot be generating key material at the same time you're performing cryptographic operations in another thread. This can be very difficult to accomplish with modern day multi-threaded programs - the labs that do the validation review source code, too, so no sneaking around them!

Mr. Weingart keeps reminding us that FIPS-140 is a validation, not an evaluation.

There were some good questions about how things like OASIS KMIP interact with the FIPS-140-2 cryptographic key management requirements.  The general thought is that they should harmonize, but KMIP doesn't seem to be referenced.

Around key generation, RNG and entropy generation is very important, and with the current news - this is being heavily scrutinized by NIST right now.  Simply using /dev/random (without knowing anything about its entropy sources) is not sufficient. Of course, when you're also the provider of /dev/random, you have a bit more knowledge. We should expect further guidance in this area.

Cryptographic modules have to complete power-on self-tests (POSTs) to ensure that the module is functioning properly - no shortcuts allowed! (Again, your code will be reviewed - shortcuts will be seen!)  There are also conditional self-tests - tests run when a certain condition occurs, for example, generating a key.

If any of these tests fail, you must not perform *any* cryptographic operations until the failure state has been cleared and all POSTs are rerun.  That is, even if only your POST for SHA1 fails, you cannot provide AES or ECC either.
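A sketch of that rule, using hash known-answer tests as stand-ins for a full POST suite: a SHA-1 failure would disable the SHA-256 service too, until the POSTs are rerun successfully. The class and method names are hypothetical.

```python
import hashlib

class Module:
    """Sketch: any POST failure puts the whole module in an error state."""

    def __init__(self):
        self.error_state = True
        self.run_posts()

    def run_posts(self):
        """Rerun every POST; clear the error state only if all pass."""
        sha1_ok = hashlib.sha1(b"abc").hexdigest() == (
            "a9993e364706816aba3e25717850c26c9cd0d89d")
        sha256_ok = hashlib.sha256(b"abc").hexdigest().startswith("ba7816bf")
        self.error_state = not (sha1_ok and sha256_ok)

    def sha256(self, data: bytes) -> str:
        # Even if only the SHA-1 POST had failed, no service may run.
        if self.error_state:
            raise RuntimeError("error state: all crypto services disabled")
        return hashlib.sha256(data).hexdigest()
```

Clearing the failure means calling `run_posts()` again and having every test pass, which mirrors the "rerun all POSTs" requirement.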

If you make any additional "extra credit" security claims, like "we protect against timing attacks", that either needs to be  verified by the lab, or a disclaimer needs to be placed in your security policy.

Implementation Guidance

There is a lot of new implementation guidance coming up, fast and furiously.

The most contentious one is the requirement to run power-on self-tests all the time (whether in FIPS mode or not). This can be problematic, particularly for something like a general purpose OS or smartcards. For things that may not have been designed for this, or that just don't have great performance capabilities (like smartcards), this can make your device or system unusable for customers that do not need FIPS-140 validated hardware/software.

IG G.14, for example, has some odd things on algorithms, like RSA 4096 being removed from approval while RSA 2048 is not. This seems to be related to performance issues, according to discussions in the room, but that seems a harsh punishment for perf issues.  Check SP 800-131A for more details about what will and will not be allowable going forward.

Cryptographic Algorithm Validation Program

Any algorithms used in approved mode need to be validated to make sure they are operating correctly.  This step is required before you can submit to the CMVP (Cryptographic Module Validation Program).  This is a very mechanical process - you have either passed the algorithm tests, or you haven't.  CAVP turnaround to issue certificates for your algorithms is typically very quick, because there isn't wiggle room or room for interpretation.
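The mechanical, binary flavor of algorithm testing can be illustrated with a known-answer test: fixed inputs, expected outputs, pass or fail. The vectors below are the published SHA-256 values for the empty string and "abc"; the harness itself is just a sketch, not the CAVP request/response format.

```python
import hashlib

# (message, expected SHA-256 digest) known-answer vectors
VECTORS = [
    (b"", "e3b0c44298fc1c149afbf4c8996fb924"
          "27ae41e4649b934ca495991b7852b855"),
    (b"abc", "ba7816bf8f01cfea414140de5dae2223"
             "b00361a396177a9cb410ff61f20015ad"),
]

def run_kats() -> bool:
    """Pass only if every vector matches - no wiggle room."""
    return all(hashlib.sha256(msg).hexdigest() == want
               for msg, want in VECTORS)
```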

You will work with one of the labs (22 accredited currently) and a consulting firm that will help you work with the lab and work on your documentation, design, architecture, etc.  You can hire another lab as a consultant; just your own lab cannot consult for you (back to the rule that they cannot test what they designed).

Preparing yourself for validation

Read FIPS-140-2, implementation guidance, SP-800s documentations, etc.

Take training, where available and possible.

Enlist help on your design and architecture as early as possible, get a readiness assessment.

You can do your algorithm testing early, find and fix problems early in your development.

Iterate as needed (if this is your first time, you'll almost certainly have to iterate to get this right).

