
Wednesday, May 15, 2019

ICMC19: FIPS 140-2 and the Cloud


Alan Halachmi, Sr. Manager, Solutions Architecture, Amazon, United States

FIPS 140-2 came out in May 2001... think about that, that was before Facebook, Gmail, etc - and way before cloud computing.

Right now validating on the cloud is impossible, as level 1 requires single operator mode - not how you will find things set up in the cloud. In fact, an IG on Operational Environment specifically notes that you cannot use things like AWS, MS Azure or Google Cloud.

But - someone like Amazon can validate one of their services, as they are the sole operator.

The security landscape is in constant flux - making it difficult to keep a module validated. Performance is often impacted in validated modules - which is not tenable for Amazon.

Amazon wanted a framework that would allow real time advancement from validated environment to validated environment. We want to make it clear that it's a multi-party environment, and with that comes shared responsibility, but it would require minimal coordination and be applied consistently between different application models. As much as possible, they want to leverage existing investments.
There needs to be a focus on automation and on defining relationships. Vendors need the ability to run their own FIPS 140 testing, so they can be assured that any changes they are making have not caused issues - then they can also test performance, etc. Fortunately, ACVP is creating a method for doing this automated testing! NIST approved!

We should look to another model for validation. Think about our history - humans used to come up with hypotheses and then prove them. After the 1970s, humans came up with the hypotheses and machines proved them. Could machines do both?

Think about the surface area of your code - the most critical code is in small areas (hypervisor, kernel, etc). Attackers have more time (OSes and machines are deployed for years) and have learned from what worked in the past. Can we use formal methods for verification? Amazon has done one for TLS - it's on GitHub.

Friday, May 11, 2018

ICMC18: Update from the "Security Policy" Working Group

Update from the “Security Policy” Working Group (U32a) Ryan Thomas, Acumen Security, United States

This was a large group effort, lots of people coming together. The security policy is used by the product vendor, CST laboratory (to validate), the CMVP, and user and auditor (was it configured in the approved fashion?)

The working group started in 2016, to set up a template with the big goals of efficiency and consistency. When they started, they were focused on tables of allowed and disallowed (non-approved) algorithms, creation of a keys and CSPs table, approved and non-approved services and mapping the difference.

But, the tables were getting unwieldy, and we were told there were changes coming. Then folks started getting ideas on adding it to the automation work. So, the group took a break after helping with the update of IG 9.5.

Fast forwarding to today, many modules leverage previously validated libraries (OpenSSL, Bouncy Castle, NSS) so the documents should be very similar... but not always. Often still very inconsistent. New goal is to target 80% of the validation types, not all.

Creating templates and example security policies will have people coming from a common baseline. This will be less work for everyone, and hopefully get us to the certificate faster!

By Summer 2018, hope to have a level 1 and level 2 security policy. This is not a new requirement, but a guideline. It will point you to the relevant IGs / section and just help you streamline your work and CMVP's work.

Need to harmonize the current tables to the template. It will be distributed on the CMUF website when complete.

While doing this work, discovered a few things were not well understood across the group (like requirements for listing the low level CPU versions).

Got feedback from CMVP about what are the most common comments they send back on security policy - like how does the module meet IG A.5 shall statements? How does XTS AES meet IG A.9? Etc.

Thursday, May 10, 2018

ICMC18: OpenSSL FIPS Module Validation Project: An Update

OpenSSL FIPS Module Validation Project (S23a) Tim Hudson, CTO and Technical Director, Cryptsoft Pty, Australia; Ashit Vora, Acumen Security, United States

Tim was part of OpenSSL before it was called OpenSSL. He's co-founder and CTO at Cryptsoft and a member of the OpenSSL Management Committee (OMC). Co-editor and contributor across OASIS KMIP and PKCS#11 technical committees.

Ashit is the co-founder and lab director at Acumen Security, acquired by Intertek Group last December. He has 15 years of certification and security experience. He is providing advice to OMC on their FIPS validations.

OpenSSL has been through validation 8 times; 3 certificates are still valid. Note the version numbers of the validated modules do not correlate directly to actual OpenSSL version numbers. None of these modules work with OpenSSL v1.1. The current modules cannot be updated, as they don't meet new IGs or power-on self-test requirements. If you want to do your own revalidation, you have to fix those.

Haven't been able to do FIPS 140, yet, due to being so busy with all of the other features and work that needed to be done first (TLS 1.3, and all of the things in the previous discussion). Needed a good stable base to move forward for TLS 1.3.  The fact that IETF is still finishing TLS 1.3 gave them lots of time to polish the release and add lots of great features and algorithms.

The team is finishing off adding support for TLS v1.3, but it's taking longer than expected. This has delayed the kick off of the FIPS validation effort. However, commitment to FIPS validation is unwavering. It is important to the committee. We are doing it in a way that we feel is long term supportable, and not an add on.

The idea is to keep the FIPS 140 code in a usable state for future updates. There will be a limited number of operational environments (OEs) tested (previously there were over 140! Doing that many is expensive and time consuming). Will also be validating only a limited number of algorithms. They plan to do source distribution as per previous validations.

At this point in time, there are no plans to add additional platforms to base validation, get it right and make it usable. As new platforms need to be added, other parties can do it independently.  Want to get this solution in place as soon as possible.

FIPS algorithm selection is planned to be functionally equivalent to the previous FIPS module. There won't be a separate canister anymore. It will be a shared library (or DLL) as the module boundary. Static linking will not be supported. It will be an external module loaded at runtime. It will look more like other validations, which should help overall. 

The interface will NOT be an OpenSSL Engine module. Don't want to be constrained by the engine limitations - it will be something else (TBA).

Have already fixed the entropy gathering and RNG.

This won't be a standalone release like the old FIPS module; it will be aligned with a future standard OpenSSL release (once solidified, we will tell you - it will be after the OpenSSL 1.1.0 release).

Will move to FIPS 186-4 key generation and NIST SP 800-56A, add SHA-3, and build in more efficient POSTs.

Current sponsors: Akamai, NetApp and Oracle. They are contributing to making OpenSSL FIPS a reality. Any other sponsors interested? You need to contact the OMC within the next 90 days; you can contact the alias or an individual on the OMC to start.

Next steps: Finalize planning! What functionality is in and what's out? Which platforms? Then we have to begin the development process. We expect to publish design documents and have public pull requests for review. We will be doing incremental development and we aim to minimize the impact on developers. We want feedback early from experienced FIPS 140 people and OpenSSL developers.

Adding more sponsors should help speed up the process, as long as they understand this will be an open process and they are willing to work within those constraints.


ICMC18: Avoiding Burning at Sunset - Future Certification Planning in Bouncy Castle

Avoiding Burning at Sunset – Future Certification Planning in Bouncy Castle (S22b) David Hook, Director/Consultant, Crypto Workshop, Australia

If you end up on the FIPS 140-2 historical list, you cannot be used for procurement by any Federal Agencies. Agencies wanting to do so must go through a risk management decision. It is also getting harder to rebrand or relabel someone else's certification.

If you've only done basic maintenance, that won't 'reset the clock' on your validated module - you have to do at least a 3 SUB, which is not a full validation, but still a lot of work. The key is the module must comply with current requirements.

Java has moved to "6 month" release cycles with periodic LTS releases. Some of the new algorithms, such as format-preserving encryption and new expandable output functions, do require revamping the API. And... post quantum... need to consider.

We had to split the Java effort into separate streams - a 1.0.X stream representing the current API and a 1.1.X stream representing the newer API.

The plan is that we can do updates to the 1.0.X stream with minimal retesting. The 1.1.X stream will require recompilation and more work.

All of the updates have to comply with the current Implementation Guidance, which is changing at unspecified rates.  A wise product manager will want to keep this in mind when doing long term planning.

Premier support for Java 9 finished in March 2018 and premier support for Java 10 finishes in September 2018.  Java 11 will be supported until September 2023, extended support until September 2026.

There is now an add-on for Java FIPS to allow use of post-quantum key exchange mechanisms with KAS OtherInfo via SuppPrivInfo.



Bouncy Castle 1.0.2 will still be targeting Java 7, 8 and 11. The older versions are still very popular. Will be doing a correction to X9.31 SHA-512/256 (8 plus 1 is 10?). Who uses this? Banks... Will also be adding SHA-3 HMAC and SHA-3 signature algorithms.

BC-FJA 1.1.0 will have updates for format preserving encryption (SP 800-38G), cSHAKE, KMAC, TupleHash and ParallelHash (SP 800-185), ARIA, GOST and ChaCha20-Poly1305. Avoiding some algorithms due to patents - trying to chase down someone to talk to who can speak for the patent holders. (Acquisitions make this hard...)

We now have a bunch of Android ports - Stripy Castle. Could not use any other name, because of Google's use of org.bouncycastle as well as the existing org.spongycastle.

C# 1.0.1 is closer to Java 1.0.2 in some respects. We are still concentrating on the general use API. Assuming enough interest, we will do a 1.0.2 release to fix X9.31, complete SHA-3 support and complete ephemeral KAS support.



ICMC18: Panel Discussion: Technology Challenges in CM Validation

Panel Discussion: Technology Challenges in CM Validation (G21b) Moderator: Nithya Rachamadugu, Director, CygnaCom, United States Panelists: Tomas Mraz, Senior SW Engineer, Red Hat, Czech Republic; Steven Schmalz, Principal Systems Engineer, RSA—the Security Division of EMC, United States; Fangyu Zheng, Institute of Information Engineering, CAS, China

All three panelists have been through their share of validations, and Fangyu has also had to deal with the Chinese CMVP process.

As to their biggest challenges, everyone agrees that time is the issue. Tomas noted that it's very difficult to get the open source community excited about this and doing work to support the validations. For Fangyu, they often have to maintain 2 versions of several algorithms, one for US validations and one for Chinese.

In general, it's hard to find out what the requirements are from most customers here, particularly across various geos.

Several panelists agree this is seen as an expensive checklist. Steven also worries about the impact on business - it goes beyond what you pay the labs and the engineers to write the code. It's hard to get this done and get it to all the customers. Tomas noted that there are conflicting requirements between FIPS and other standards (like AES GCM, though that has been recently addressed).

On the value of the certification: have you found anything during a validation that made your product more secure? Steven notes you can talk about methodologies for preventing software vulnerabilities, and the devs will come back and say why didn't it work for "so-and-so"? But if you look specifically at the testing of the algorithm, it gives you value that you've implemented the cryptography correctly. It's not clear we get as much benefit out of the module verification. Agreement across the panel that CAVP is valuable, technically.

Steven really wishes there was a lot more guidance on timing of validations and how to handle vulnerabilities. Tomas notes it's hard to limit changes to the boundary in the kernel, because they need to add new hardware support and other things. Fangyu noted that even rolling out fixes for a hardware module is challenging.

All panelists are excited about the automation that is happening, though Steven is wondering if it will really be possible for the module (algorithm testing seems very automatable, and that will still help).  Steven talked about industry trend to continuously check status of machines, make sure they are up to date with patches, etc - getting this automation in can help people continuously check their work, even on development modules.

All panelists noted that the validation process could be improved, but it won't help the overall security of the system.

Customers want Common Criteria and FIPS 140-2, but don't really understand what they mean; they just want to make sure they're there. Trying to do them both at the same time makes it difficult to line up the validations and make sure all teams understand when they need them. And... they both still take too long to get.

On the topic of out of date or sunset modules - it's unclear how many customers may be running these, but Steven has heard support requests come in for out of date modules. They use that as an opportunity to get them to upgrade. Tomas noted they won't likely be able to "revive" the sunset module, due to how quickly the IG and standards change.

ICMC18: 10 Years of FIPS 140-2 Certifications at Red Hat

 10 Years of FIPS 140-2 Certifications at Red Hat (G21a) Tomas Mraz, Red Hat, Czech Republic


Red Hat was founded in 1993, received first FIPS 140 validation in 2007 with Sun Microsystems with NSS (Network Security Services), which was designed to be FIPS 140 compliant from the start.
We spent a lot of effort in Red Hat Linux to get everything to use NSS (cURL, RPM, OpenLDAP, OpenSWAN), but could not convert everything. Too many differences in APIs.

So, changed to “validate everything” mode!  OpenSSL based on the original FIPS module from OpenSSL, but partially evolved for Red Hat.  For OpenSSH, did their own independent FIPS work. Libgcrypt, hired a community developer to integrate FIPS support upstream.  For OpenSWAN and later libreswan, hired the community developer to port to NSS and then get FIPS support.  DM-crypt first had its own crypto, but later switched to use libgcrypt. GnuTLS was done after Red Hat hired the main developer of the project.

Highlights – we were able to do this at all. We could do it quite quickly with existing modules, and we never included Dual-EC DRBG so avoided big issues there. Some small implementation bugs were found by CAVS testing.

Lowlights – the process is still too slow and expensive to be able to revalidate everything we release, creating a conflict between fixing bugs and security issues and the need to have the software validated. Sometimes new crypto has to be disabled in FIPS mode (even though its security is well established, like ChaCha20-Poly1305 and Curve25519 DH). Some of the requirements really are for hardware, don't make sense for software, and implementing them does not improve the software.
Lowlights – more! The restrictions on the operational environment are too tight. HW requirements are ignored by customers, and other products built upon RHEL are marketed under a different name – confusing! The open source community does not care about government customers, so they call it nonsense, silliness and garbage.
We needed to make the process of turning on FIPS mode unobtrusive, so it would not interfere with regular customers that don't care about FIPS mode at all. This is all more restricted in containers as well (both host and container must be RHEL, for example).
In libgcrypt, non-approved algorithms like MD5 are blocked, which is annoying to customers.
In the future, want to continue to work with NIST to improve the process and continue to work on the ACVP project, to speed up revalidations. We may have more or fewer crypto modules - fewer if we can get more utilities to use our validated libraries (like moving SSH from using OpenSSH crypto to OpenSSL).
 

Wednesday, May 9, 2018

ICMC18: FIPS 140-3 Update

FIPS 140-3 Update (C13c) Michael Cooper, IT Specialist, NIST, United States

Mr. Cooper would love to give us a signature date, but... he can't (it's out of his control). There is a general set of documents that point to ISO 19790 and ISO 24759; it's gone through the NIST processes (legal reviews, etc) and now we are at the last stage: waiting for the Secretary of Commerce to sign. This is a timing thing - wheels are in motion.

The document that's going in for signing is just a wrapper document, basically pointing only to those other documents and no modifications.

Hoping that by leveraging an international standard, then this will simplify testing requirements for vendors. Already going to CC meetings to see who else is interested in this, and looking into automation for this as well.

Standardizing testing, especially across NIAP and CC, will help extend the adoption of the standard.

The algorithm automated testing will give us a start on automating module testing. We want to leverage ideas from around the world, academia and industry.

Question from the audience - what other country has signed up for this? So far, none, but there is interest.

Q: Does FIPS 140-3 point to a specific version of the other documents? Yes, but worded to make it easier to update to newer versions, as needed. This gives more flexibility.

Q: What's going to be the sunset of FIPS 140-2? Will likely follow something similar to what we had before; there will be documentation to guide folks. Likely a year to submit against the old scheme.

Q: What about the old IGs (Implementation Guidance) documents? Will they go away? About 50% of them, the rest will need to be updated.

Q: Why are we starting with the 2012 draft, and not the 2015 draft? With the mandate to update standards every 5 years? FIPS 140-3 won't have to change, we can update what it points to. We will forever be FIPS 140-3, pointing to the 'latest' ISO standard.

Q: How often do the ISO standards get updated? Every 5 years?

ICMC18: Mandating CMVP for NIAP Evaluations Panel

Mandating CMVP for NIAP Evaluations Panel Presentation (C13a) Moderator: Dianne Hale, NIAP, United States, Panelists: Michael Cooper, IT Specialist, NIST, United States; Terrie Diaz, Product Certification Engineer, Cisco Systems, United States; Matt Keller, Corsec, United States; Edward Morris, Co-founder, Gossamer Security Solutions, United States; Nithya Rachamadugu, Director Cygnacom United States

Mandating this would reduce duplicate work.

Mike Cooper - of course we'd like to see more people leveraging our programs, but we know NIAP has to worry about the timeline. We currently can't do things in parallel. But, there are slight differences, depending on which PP you're looking at. FIPS 140-2 is a 1 size fits all. We'd have to do work to see what the differences are and figure out how to go forward.

Ed Morris - we see this from our customer perspective. When they are doing a common criteria project, we have to figure out if FIPS will be required or not - if it's for DoD, etc, then they will need both. Managing that timeline is very difficult.

Matt Keller - been doing this for more than 20 years as a consultant (not a lab). It's an interesting idea. There are a lot of things FIPS does very well: no unbounded surfaces, well defined interfaces, but it's not perfect. Can we leverage each? Products do not always equal modules. Most people were not just buying crypto modules - they are buying a product that contains a module. NIAP gives more assurance at the product level. We can't just look at crypto - we need to consider how the keys are stored. We want to look at the entire product, making sure the module we tested is being used correctly in the product.

Nithya Rachamadugu - remembers doing this when crypto was treated by CC as a black box.  CMVP in some places has more tests and requirements and vice versa.

Terrie Diaz - been working in this space for 20 years. Cisco has a large profile of products that are certified and sold through many countries. Don't really want to do country specific certifications, something that is more general would be more appealing.

Ed - it would be nice if they could point to each other's certificates. Not sure this would work in practice. NIAP covers the entire product, and modules are just the crypto bits. Vendors want to shrink their crypto boundaries for FIPS as small as possible to avoid revalidation thrash.  But what happens is we miss out on how keys are handled and stored, for example. FIPS is a one size fits all - smartcard to server.  And it's grown from older standards, which brings a lot of extra things. CC has updated PPs that are more relevant for your product.

Matt K. - there are labs doing "FIPS inside" testing, so we are solving this problem - it's just not standardized (and there's no regulation or oversight).

Nithya - CC is compliance and FIPS is a validation. They have different goals. Maybe not mandated, but leveraged?

Ed - we hear sometimes from customers that we don't need to worry about their crypto, because they're already going through a CMVP validation, but then we find they are not using it correctly based on CC.

Question: there's already a divergence for various countries. What can we do better here?

Matt K - we could pull the most valuable pieces from FIPS and put it in CC PP. But, if the law still requires you buy FIPS validated crypto... then this won't meet your requirement.

Q: It would be good to keep these standards in sync.  Ed - yes, that would help us to avoid thrash. But linking them together can be quite hard. Nithya - I get questions all the time about which one to start first; my advice is start at the same time. You will be able to leverage the work of each other. What if you find an issue along the way? Lots of things to consider.

Mike C. - as a government purchaser, I do want to see how these assurance cases tie together.

Ed - maybe we should decrease the scope of FIPS so we can better work together.

This was followed by a good discussion on timing - vendors really want this all to happen as fast as possible for the most benefit.

Friday, May 19, 2017

ICMC17: Crypto: You're Doing it Wrong

Jon Green, Sr. Director, Security Architecture and Federal CTO, Aruba Networks/HPE

Flaws can be varied and sad - like forgetting to use crypto (like calling a function that was never completed for your DRBG! Jon showed us an example of validated code that was an empty function whose comment contained TODO). Other issues arise in large multi-module products that may contain code written in C, Java, Python, PHP, JavaScript, Go, Bash... claiming to get FIPS from OpenSSL. Most of those languages aren't going to be using OpenSSL, so they won't be using FIPS validated crypto.
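
As a purely hypothetical illustration of that first flaw (not the code Jon showed), a "DRBG" that compiles, ships, and never produces any random output can be as simple as this:

    /* Hypothetical stub, for illustration only: a DRBG "generate" routine
     * that was never finished but still ships.  Callers get back an
     * untouched buffer and treat it as random data. */
    #include <stddef.h>

    int drbg_generate(unsigned char *out, size_t out_len)
    {
        /* TODO: implement DRBG output generation */
        (void)out;
        (void)out_len;
        return 0;   /* reports "success" without writing a single byte */
    }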

Developers often don't know where the crypto is happening. They may forget to complete certain code segments, rely on 3rd party and open source code, and rely on multiple frameworks. Or even if they do know, they may not want to dive in because of the amount of work required to make things work correctly, particularly from a FIPS perspective.

What about the FIPS code review done by the lab - would it catch this? Almost certainly not, as they are typically not looking at the application code - just the low level crypto and RNG functions. Even with the old German scheme for EAL4 deeper code review, issues still get missed (like the above TODO code that went through EAL2 and EAL4 review). Testing misses these nuances as well.

Security audits of your code are very fruitful, but very expensive. He's seen success with bug bounty programs, even if the code is closed.

He's also seen problems with FIPS deployments that are leveraging "FIPS inside" where they leverage another module, like OpenSSL, but forgot to turn on FIPS mode and forgot to update the applications so they would not try to use non-FIPS algorithms.
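
For the OpenSSL 1.0.x FIPS Object Module, for example, entering FIPS mode is an explicit call the application has to make at startup; a minimal sketch (error handling simplified):

    /* Minimal sketch: explicitly enable FIPS mode in an application that
     * links the OpenSSL 1.0.x FIPS Object Module.  If this call is never
     * made, the validated module is present but unused. */
    #include <stdio.h>
    #include <openssl/crypto.h>
    #include <openssl/err.h>

    int enable_fips(void)
    {
        if (!FIPS_mode_set(1)) {          /* request FIPS approved mode */
            ERR_print_errors_fp(stderr);  /* e.g. a power-up self-test failed */
            return 0;
        }
        /* Non-approved algorithms will now be rejected by the module. */
        return 1;
    }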

Another problematic approach - the dev follows all the steps to deploy CentOS in FIPS mode by following the Red Hat documentation... except that documentation only applies to Red Hat and *not* CentOS. Yes, it's the same source code, but validations are not transitive. A Red Hat validation only applies to Red Hat deployments.

To get this right, identify the services that really need to be FIPS validated and focus your efforts there.


ICMC17: Keynote: From Heartbleed to Juniper and Beyond


Matthew Green, Johns Hopkins University.

Kleptography - the study of stealing cryptographic secrets. Most people did not think the government was really doing this. But, we do know there was a company, Crypto AG, that worked with the NSA on their cipher machines available between 1950s and 1980s.

Snowden's leak contained documents referring to SIGINT - a plan to insert back doors into commercial encryption devices and add vulnerabilities into standards.

How can the government do this?  We can't really change existing protocols, but you can mandate use of specific algorithms. This brings us to the 'Achilles heel' of crypto - (P)RNG. If that's broken, everything is broken.

There are two ways to subvert an RNG - attack the lower level TRNG or the PRNG. The TRNG is probabilistic and hardware specific, with too much variance. The PRNG/DRBG is software and everyone has to use a specific one to comply with US Government standards - a more appealing target.

Young and Yung predicted an attack against the DRBG and how it might work in the 1990s - where you could use internal state, a master key and a trap door will let you decrypt the data.  This sounds a lot like Dual EC DRBG. It was almost immediately identified as having this weakness - if the values were not chosen correctly. NSA chose the values - and we trusted them.

Snowden leaks found documents referring to "challenge of finesse" regarding pushing this backdoor through a standards body.  Most of us don't have to worry about the government snooping on us, but ... what if someone else could leverage this back door?

This is what happened when Juniper discovered unauthorized code in their ScreenOS that allowed an attacker to passively decrypt VPN traffic. Analysis of the code changes discovered that Juniper changed the parameters for Dual EC DRBG. But, Juniper said they didn't use Dual EC DRBG, according to their security policy, other than as input into a 3DES-based PRNG (which should cover anything bad from the DRBG). The problem came up when a global variable was used in a for loop (bad idea), which in effect means the for loop that was supposed to do the 3DES mixing never runs (as the Dual EC DRBG subroutine also uses the same global variable).
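
A simplified illustration of that failure mode (this is not the ScreenOS source, just the shape of the bug):

    /* Simplified illustration only.  The subroutine that fills the buffer
     * with raw Dual EC output shares its loop index with the caller's
     * post-processing loop, so the "mixing" loop condition is already
     * false by the time it is reached. */
    #define BLOCK_LEN 32

    static int idx;                          /* shared global index - bad idea */
    static unsigned char buf[BLOCK_LEN];

    static void fill_from_dual_ec(void)
    {
        for (idx = 0; idx < BLOCK_LEN; idx++)
            buf[idx] = 0xAA;                 /* stand-in for raw DRBG output */
        /* idx == BLOCK_LEN when this function returns */
    }

    void prng_generate(unsigned char *out)
    {
        fill_from_dual_ec();
        for (; idx < BLOCK_LEN; idx++)       /* never runs: idx is already at the bound */
            buf[idx] ^= 0x55;                /* stand-in for the intended 3DES mixing */
        for (int i = 0; i < BLOCK_LEN; i++)
            out[i] = buf[i];                 /* raw, unmixed output escapes */
    }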

More specifically, there are issues with how IKE was implemented.  The impacted version, ScreenOS 6.2 (the version that adds Dual EC DRBG) adds a pre-generation nonce.

Timeline: 1996 Young and Yung propose the attack, 1998 Dual EC DRBG developed at NSA, 2007 it became a final NIST standard, 2008 Juniper added Dual EC DRBG; it was exploited in 2012 and not discovered until 2015.

Before Dual EC DRBG, people used ANSI X9.31 - which had a problem if you used a fixed K, someone can recover the state and subvert the systems.

How do we protect ourselves? We should build protocols that are more resilient to bad RNG (though that's not what is happening). But maybe protocols are not the issue - maybe it's how we're doing the validation.  Look at FortiOS, who had a hard coded key in their devices, which was used for testing FIPS requirements, and documented it in their security policy.

Friday, May 20, 2016

ICMC16: Entropy As a Service: Unlocking the Full Potential of Cryptography

Apostol Vassilev, Research Lead–STVM, Computer Security Division, NIST

Crypto is going smaller and more lightweight: lightweight protocols, APIs, etc.

In modern cryptography, the algorithms are known. Key generation and management govern the strength of the keys. If this isn't right, the keys are not actually strong.

In 2013, researchers could find keys from a smart card, due to use of low-quality hardware RNG, which was stuck in a short cycle.  Why was this design used? Didn't want to pay for a higher quality piece of hardware or licensing of patents.

Look at the famous "Mining your Ps and Qs: Detection of Widespread Weak Keys in Network Devices", which found that 0.75% of TLS certificates share keys, due to insufficient entropy during key generation.
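
The core check behind that paper is easy to reproduce; here is a sketch using OpenSSL bignums, assuming the two RSA moduli have already been extracted from certificates elsewhere:

    /* Sketch of the shared-factor check from "Mining your Ps and Qs":
     * if gcd(n1, n2) > 1, the two RSA moduli share a prime and both
     * private keys can be recovered. */
    #include <openssl/bn.h>

    int moduli_share_a_factor(BIGNUM *n1, BIGNUM *n2)
    {
        int shared = 0;
        BN_CTX *ctx = BN_CTX_new();
        BIGNUM *g = BN_new();

        if (ctx != NULL && g != NULL && BN_gcd(g, n1, n2, ctx))
            shared = !BN_is_one(g);          /* gcd > 1 means a common prime */

        BN_free(g);
        BN_CTX_free(ctx);
        return shared;
    }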

One of the problems is that there is a lot of demand for entropy when a system boots... when the least amount of entropy is available.

Estimating randomness is hard. Take a well-known irrational number, e.g. Pi, and test the output bit sequence for randomness - it will be reported as random (NIST has verified this is true).

Check out the presentation by Viktor Fischer, Univ Lyon, UJM-Saint-Etienne, Laboratoire Hubert Curien, at the NIST DRBG Workshop 2016.

He noted that using the statistical test approach of SP 800-90B makes it hard to automate the estimation of entropy. But automation is critically important for the new CMVP!

The solution - an entropy service!  NOT a key generation service (would you trust the government on this!?). Not similar to the NIST beacon.

Entropy as a Service (EaaS).   Followed by cool pictures :-)

Key generation still happens locally. You have to be careful how you mix data from a remote entropy server.
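
One possible way to do that mixing, as a minimal sketch (assuming the remote EaaS bytes have already been fetched and authenticated elsewhere): hash the remote bytes together with locally gathered entropy, so a bad or dishonest remote source can never make the seed worse than local-only.

    /* Sketch only: combine local and remote entropy by hashing them
     * together before seeding, and credit only the locally collected
     * entropy estimate. */
    #include <string.h>
    #include <openssl/sha.h>
    #include <openssl/rand.h>

    void mix_and_seed(const unsigned char *local, size_t local_len,
                      const unsigned char *remote, size_t remote_len)
    {
        unsigned char pool[1024];
        unsigned char seed[SHA256_DIGEST_LENGTH];

        if (local_len + remote_len > sizeof(pool))
            return;                          /* keep the sketch simple */

        memcpy(pool, local, local_len);
        memcpy(pool + local_len, remote, remote_len);
        SHA256(pool, local_len + remote_len, seed);

        /* Credit only the local entropy, not the remote bytes. */
        RAND_add(seed, (int)sizeof(seed), (double)local_len);
    }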

While analyzing Linux, they discovered the process scheduling algorithm was collecting 128 bits of entropy every few seconds. Why? Who knows.

EaaS needs to worry about standard attacks on web services and protocols, like message replay, man in the middle and DNS poisoning.  But there are other attack vectors too - like dishonest EaaS instances. You will need to rely on multiple servers.

EaaS servers themselves will have to protect against malicious clients, too.

Project page: http://csrc.nist.gov/projects/eaas





Thursday, May 19, 2016

ICMC16: Entropy Requirements Comparison between FIPS 140-2, Common Criteria and ISO 19790 Standards

 Richard Wang, FIPS Laboratory Manager, Gossamer Security Solutions, Tony Apted, CCTL Technical Director, Leidos

Entropy is a measure of the disorder, randomness or uncertainty in a closed system.  Entropy underpins cryptography, and if it's bad, things can go wrong.  Entropy sources should have a noise source, post processing and conditioning.

There is a new component in the latest draft of SP 800-90B that discusses post processing.  There are regular health tests, so any problems can be caught quickly.

There are 3 approved methods for post-processing: Von Neumann's method, Linear filtering method, Length of runs method.
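
Of those, Von Neumann's method is the simplest to picture; a minimal sketch (one input bit per byte for clarity, output bit-packing omitted):

    /* Minimal sketch of Von Neumann debiasing: read non-overlapping bit
     * pairs, emit 0 for "01", 1 for "10", and discard "00" and "11". */
    #include <stddef.h>

    size_t von_neumann_debias(const unsigned char *in_bits, size_t n_bits,
                              unsigned char *out_bits)
    {
        size_t n_out = 0;

        for (size_t i = 0; i + 1 < n_bits; i += 2) {
            unsigned char a = in_bits[i], b = in_bits[i + 1];
            if (a != b)
                out_bits[n_out++] = a;       /* "01" -> 0, "10" -> 1 */
            /* equal pairs are discarded */
        }
        return n_out;                        /* number of debiased bits */
    }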

The labs have to justify how they arrived at their entropy estimates. There should be a detailed logical diagram to illustrate all of the components, sources and mechanisms that constitute an entropy source. Also do statistical analysis.

When examining ISO 19790, its clauses on entropy seemed to line up with the FIPS 140-2 IGs - so if you meet CMVP requirements, you should be ready for ISO 19790 (for entropy assessment).

Common Criteria has its own entropy requirements in the protection profiles. The Network Device PP, released in 2010, defined an extensive requirement for RNG and entropy. You have to have a hardware based noise source, a minimum of 128 bits of entropy and 256 bits of equivalent strength.

The update in 2012 allowed software and/or hardware entropy sources. It was derived from SP 800-90B, so very similar requirements.

Entropy documentation has to be reviewed and approved before the evaluation can formally commence.

Some vendors are having trouble documenting third party sources, especially hardware.  Lots of misuse of Intel's RDRAND.








ICMC16: Improving Module's Performance When Executing the Power-up Tests

Allen Roginsky, CMVP NIST

There are power-up tests and conditional tests. Power-up includes integrity test, approved algorithm test, critical functions tests.  Conditional are things like generating key pairs, a bypass test.

Implementations have become more robust, and integrity tests no longer take just a second or so - they may take minutes.

Imagine a smartcard used to enter a building.  If it is too slow, it may take 5-10 seconds... then the next person... then the next. Quite a traffic jam.

We don't want to drop the tests altogether, so what do we do?  What are other industries doing?

Suppose the software/firmware image is represented as a bit-string. The module breaks the string into n substrings; n is no greater than 1024. The length, k bits, of each of the first (n-1) substrings is the same; the length of the last substring is no greater than k.  Then, maybe you can just look at some of the strings.
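
A rough sketch of what "just look at some of the strings" could mean in practice, assuming each chunk's reference SHA-256 digest was computed ahead of time (the reference table and sampled indices are illustrative assumptions, not part of the IG):

    /* Rough sketch of a sampled integrity test: the image is split into
     * chunks of k bytes (the last chunk may be shorter), each chunk has a
     * precomputed reference digest, and only a randomly selected subset
     * of chunks is re-hashed at power-up. */
    #include <string.h>
    #include <openssl/sha.h>

    int check_sampled_chunks(const unsigned char *image, size_t image_len,
                             size_t k,
                             const unsigned char ref[][SHA256_DIGEST_LENGTH],
                             const size_t *sample, size_t n_samples)
    {
        unsigned char d[SHA256_DIGEST_LENGTH];

        for (size_t s = 0; s < n_samples; s++) {
            size_t off = sample[s] * k;
            size_t len = (off + k <= image_len) ? k : image_len - off;

            SHA256(image + off, len, d);
            if (memcmp(d, ref[sample[s]], sizeof(d)) != 0)
                return 0;                    /* integrity failure */
        }
        return 1;                            /* all sampled chunks match */
    }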

A bloom filter optimization can significantly improve the efficiency of this method.

You could use a deterministic approach. You would have to keep track of the location where you ended, and hope that location doesn't get corrupted.

CMUF is working on an update to Implementation Guidance on doing integrity check from random sampling.

Vendors do not want to perform each algorithm's known answer test at all - not just delay it until the first use of the algorithm. Why are we doing this test in the first place?  Something to think about.

Another option offered by a questioner: maybe replace this with on-demand testing? That is a good option.

ICMC16: Finding Random Bits for OpenSSL

Denis Gauthier, Oracle

OpenSSL provides entropy for the non FIPS case, but not for the FIPS case.

BYOE - Bring your own entropy, if you're running in FIPS mode.

OpenSSL gets its entropy from /dev/urandom, /dev/random, /dev/srandom, but since /dev/urandom doesn't block, that's effectively where OpenSSL gets most of its entropy.  And it never reseeds. We likely didn't have enough when we started, and certainly not later.

Reseeding: not everyone will get it right... there are too many deadlines and competing priorities.  Looking at an Ubuntu example, based only on time(0)... not good entropy.

There is no universal solution - the amount varies with each source and its operating environment. Use hardware, if it's available. Software will rely on timers - because, hopefully, events don't happen at predictable times (not to the nanosecond).

There is a misconception that keyboard strokes and mouse movements will give you a lot of entropy... unless the machine is a headless server... :)

You could use Turbid and Randomsound to gather entropy from your soundcard.

There are a lot of software sources, like CPU, diskstats, HAVEGED, interrupts, IO, memory, net-dev, time, timer-list and TSC. Denis had a cool graph about how much entropy he was getting from these various places.

You need to do due diligence, doing careful evaluation of your sources. Things are not always as advertised.  Not just convincing yourself, but making sure the folks you're selling to trust your choices as well.

The best thing - diversify!  You need multiple independent sources. Use a scheduler to handle reseeding for forward secrecy.  But, be careful not to starve sources, either.

When using OpenSSL, bring your own entropy and reseed.


ICMC16: OpenSSL Update

Tim Hudson, Cryptsoft

The OpenSSL team had their first face to face meeting, ever! 11 of the 15 members got together digging into documentation and fixing bugs - and POODLE broke, so... they got to know each other quite well.

The team thinks of time as "before" April 2014 and after... Before, there were only 2 main developers, entirely on a volunteer basis.  No formal decision making process, extremely resource limited.  After April 2014, there are now 2 full time developers and 5-6 regular developers.  This really helps the project.

After a wake-up call, you have to be more focused on what you should be doing. Everyone is now analyzing your code-base, looking for the next Heartbleed.  Now there is more focus on fuzz testing and increased automated testing. Static code analysis tools are rapidly being updated to detect Heartbleed and flaws like it.

New: mandatory code review for every single changeset.  [wow!]

The OpenSSL team now has a roadmap, and they are reporting updates against it.  They want to make sure they continue to be "cryptography for the real world" and not ignore the "annoying" parts of the user base with legitimate concerns or feature needs.

Version 1.0.2 will be supported until 2019.  No longer will all releases be supported for essentially eternity.

OpenSSL is now changing defaults for applications, with larger key sizes, and removing platform code for platforms that are long gone from the field.  Adding new algorithms, like ChaCha20 and Poly1305. The entire TLS state machine has been rewritten.  Big change: internal data structures are going to be opaque, which will allow maintainers to make fixes more easily.
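
As a small illustration of what the opaque-structure change means for application code (RSA_get0_key() is the 1.1.0-era accessor; direct field access like rsa->n no longer compiles):

    /* Illustration of the opaque-structure change in OpenSSL 1.1.0:
     * reaching into the RSA struct directly no longer compiles, so
     * callers use accessor functions instead. */
    #include <openssl/rsa.h>

    const BIGNUM *rsa_modulus(const RSA *rsa)
    {
        const BIGNUM *n = NULL;

        /* 1.0.x style (no longer possible): n = rsa->n; */
        RSA_get0_key(rsa, &n, NULL, NULL);   /* 1.1.0 accessor */
        return n;
    }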

FIPS 140 related work paid for OpenSSL development through 2014.  It is hard to go through a validation with one specific vendor, who will have their own agenda. 

There are 244 other modules that reference OpenSSL in their validations, and another 50 that leverage it but do not list it on their boundary.

The OpenSSL FIPS 2.0 module works with OpenSSL-1.0.x. There is no funding or plans for an OpenSSL FIPS module to work with OpenSSL-1.1.x.

The hopes are that the FIPS 3.0 module will be less intrusive.

If you look at all the OpenSSL related validations, you'll see they cover 174 different operating environments.

How can you help? Download the pre-release versions and build your applications. Join the openssl-dev and/or openssl-users mailing lists. Report bugs and submit features.

If FIPS is essential to have in the code base, why hasn't anyone stepped forward with funding?


Wednesday, May 18, 2016

ICMC16: Secure Access with Open Source Authentication

Donald Malloy, LSExperts

For years, Mastercard and Visa said the cost to implement chips in cards was more than the cost of the fraud, but that has all changed in recent years.  EMV (Chip and PIN) is being rolled out in the US, which will push US fraud online (where the chip can't be used).

Fingerprints are non-revocable. Someone can get them from a picture or hacking into a database.

OATH is an industry consortium, the algorithms are free to use.
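
For a sense of how small those algorithms are, here is a minimal sketch of OATH HOTP (RFC 4226), using OpenSSL's one-shot HMAC for the SHA-1 step:

    /* Minimal sketch of OATH HOTP (RFC 4226): HMAC-SHA-1 over a 64-bit
     * big-endian counter, dynamic truncation, then reduction to six digits. */
    #include <stdint.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    uint32_t hotp6(const unsigned char *key, int key_len, uint64_t counter)
    {
        unsigned char msg[8], mac[EVP_MAX_MD_SIZE];
        unsigned int mac_len = 0;

        for (int i = 7; i >= 0; i--) {        /* counter as big-endian bytes */
            msg[i] = (unsigned char)(counter & 0xff);
            counter >>= 8;
        }
        HMAC(EVP_sha1(), key, key_len, msg, sizeof(msg), mac, &mac_len);

        unsigned int off = mac[mac_len - 1] & 0x0f;       /* dynamic truncation */
        uint32_t bin = ((uint32_t)(mac[off] & 0x7f) << 24)
                     | ((uint32_t)mac[off + 1] << 16)
                     | ((uint32_t)mac[off + 2] << 8)
                     |  (uint32_t)mac[off + 3];
        return bin % 1000000;                 /* 6-digit OTP value */
    }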

They have started a certification program, so we can verify that vendor tokens work together.

Why is OTP still expensive? It comes in soft tokens, hard tokens, USB tokens, etc.  Cost per user has consistently been too high; manufacturers continue to have a business model that overcharges the users. OATH is giving away the protocols - so why still so much?

Working with LinOTP - fast and free.

Since biometrics are irrevocable, how do we get stronger passwords? Could we use behaviour analytics? Type a phrase and the computer will know it's you.

 




ICMC16: The Current Status and Entropy Estimation Methodology in Korean CMVP

Yongjin Yeom, Kookmin University; Seog Chung Seo, National Security Research Institute

NSR is the only company testing in Korea.

We have vendors, a testing authority and a certification authority. NSR tests according to Korean standards. NSR develops and standardizes testing methodology with vendors.

KCMVP started in 2005, using their own standard. Starting in 2007, they leveraged international standards.

116 modules have been validated. There are more software validations than hardware validations. Most of the software validations have been in C and Java.

The process overview of KCMVP looks very similar to CAVP combined with CMVP in the US/Canadian schemes. There are algorithm verification, tests of the self tests, etc ;)

They have a CMVP tool to automatically verify algorithm correctness.

Just like here, there are big concerns about entropy sources. It's hard to get entropy to scale, so they want to discover the limits of entropy source.

ICMC16: Introduction on the Commercial Cryptography Scheme in China

Di Li, Senior Consultant, atsec information security corporation

Please note: Di Li works for atsec, and is not speaking for the Chinese Government.

Their focus is on certifying both hardware and software, including HSMs and smart cards.  In China, only certified products can be sold or used. They should be sold commercially. By law, no foreign encryption products can be sold or used in China.

Additionally, they use their own algorithms. Some are classified; others leverage international algorithms like ECC; others would be considered competitors to algorithms like SHA-2 or AES-GCM.

They have their own security requirements for cryptographic modules, but there are no IGs, DTRs, etc., so it is different. No such concept of "hybrid", either.

There are two roles in the scheme: OSCCA and vendor. OSCCA issues the certificates and is also the testing lab.  The vendor designs and develops the product, sells it, and promotes it.  They report their sales figures to OSCCA every year.

There are requirements to be a vendor as well! To get sales permission, you need to demonstrate you are knowledgeable in cryptography among other things. 

Validations are updated every 5 years. As a part of this process, you additionally have to pass design review.

You cannot implement cryptography within a common chip, as there is not enough security in that chip.

Banking must use certified products. The biggest market is USB tokens and smart cards.

ICMC16: Automated Run-time Validation for Cryptographic Modules

Apostol Vassilev, Technical Director, Research Lead–STVM, Computer Security Division, NIST
Barry Fussel, Senior Software Engineer, Cisco Systems

Take a look at Verizon's Breach report. It's been going on for 9 years, and we see things are not getting better. It takes attackers only minutes to subvert your security and they are financially motivated.  Most of the industry doesn't even have mature patching process.

We hope validation can help here, but vendors complain it takes too long, the testing depth is too shallow, and it is too costly and rigid.

In the past, we leveraged code reviews to make sure code was correctly implemented. Data shows that code reviews, though,  only find 50% of the bugs.  Better than 0, but not complete. Can we do better?

Speeding up the process to validate, allows people to validate more products.

Patching is important - but the dilemma for the industry is "do I fix my CVE or maintain my validation?" This is not a choice we want our vendors to make.  We should have some ability for people to patch and still maintain validation.

The conformance program needs to be objective, yet we have different labs, companies and validators relying on these written test reports - essentially a very complex essay!  Reading the results becomes dizzying.  We want to improve our consistency and objectivity - how can we do this?  So, we asked the industry for feedback on how the program looks and how we could improve it. We started the CMVP working group.

The problem is not just technical, but also economical. To make changes, you have to address all of the problems.

Additionally, we need to reconcile our (NIST's) needs with NIAP's (for common criteria). 

And a new problem - how to validate in the cloud? :)

The largest membership is in the Software Module sub group - whether that is due to the current FIPS 140-2 handling software so poorly, or reflective of a changing marketplace is unclear.

Fussell took us into a deep dive of the automated run-time validation protocol. 

The tool started out as an internal Cisco project under the oversight of David McGrew, but didn't get a lot of traction. It found a home in the CMVP working group.

One of the primary requirements of this protocol is that it be an open standard.

Case example: Cisco has a crypto library that is used in many different products. First they verify that the library is implemented correctly, but then they need a way to verify that when products integrate it they do it correctly.

The protocol is a lightweight, standards-based protocol (HTTPS/TLS/JSON). No need to reinvent the wheel.

Authentication will be configurable - multi factor authentication should be an option, but for some deployments you won't need a heavy authentication model.

And Barry had a video! (hope to have linked soon)

If things are going to be as cool as the video promises, you'll be able to get a certificate quickly -from a computer. :)  And if you find a new

ICMC16: FIPS Inside

Carolyn French, Manager, Cryptographic Module Validation Program, Communications Security Establishment

Canada doesn't have the same legal requirements to use FIPS 140-2 validated cryptography, but it's handled by risk management and is strongly desired. They put verbiage in their contracts around this, and also require that the modules are run in FIPS mode.

If you are not using a FIPS 140-2 validated module, we consider it plaintext.  So, if cryptography is required, then you need to use an approved algorithm, in an approved manner with sufficient strength.

Vendors can choose to validate the entire box or just the cryptography itself.  The smaller the boundary, the longer you can go between revalidating.

A vendor may incorporate a validated module from another vendor (e.g. a software library) into their product. CMVP recommends that you get a letter from them confirming exactly what you're getting. Writing crypto is hard - so reuse makes a lot of sense.

When you are considering leveraging someone else's validated module, look at what is actually "inside". For example, what module is generating the keys?

You can rely on procurement lists like the DoD UC APL.