
Tuesday, October 9, 2018

BH18: How I Learned to Stop Worrying and Love the SBOM

Allan Friedman  | Director of Cybersecurity, NTIA / US Department of Commerce

Vendors need to understand what they are shipping to the customer, and the risks in what is going out the door. You cannot defend what you don't know. Think about the ingredients list on a box – if you know you have an allergy, you can simply check the ingredients and make a decision. Why should the software and hardware we ship be any different?

There had been a bill before Congress requesting that there always be an SBOM (Software Bill of Materials) for anything the US Government buys – so they know what they are getting and how to take care of it. The bill was DOA, but things are changing…

The healthcare sector has started getting behind this, and people at the FDA and in Washington are now concerned about the supply chain. There should not be a health care way of doing this, an automotive way of doing this, a DoD way of doing this… there should be one way. That's where the US Department of Commerce comes in. We don't want this coming from a single sector.

Committees are the best way to do this – they are consensus based. That means the effort is stakeholder driven, and no single person can derail it. Think about it like "I push, but I don't steer".

We need software component transparency. We need to compile the data, share it and use it. The committee kicked off on July 19 in DC. Some folks believe this is a solved problem, but how do we make sure the existing data is machine readable? We can't just say 'use grep'. Ideally it could hook into tools we are already using.

First working group is tackling defining the problem. Another is working on case studies and state of practice. Others on standards and formats, healthcare proof of concept, and others.

We need more people to understand and poke at the idea of software transparency – it has real potential to improve resiliency across different sectors.

BH18: Keynote! Parisa Tabriz

Jeff Moss, founder of Black Hat, opened the first session at the top of the conference, noting several countries have only one attendee here – Angola, Guadalupe, Greece, and several others. About half of the world's countries are represented this year! Black Hat continues to offer scholarships to encourage a younger audience, who may not otherwise be able to afford to attend. Over 200 scholarships were awarded this year!

To Jeff, it feels like the adversaries have strategies while we have tactics – and that's creating a gap. Think about address spoofing – it's allowed and enabled by default on popular mobile devices, though most consumers don't know what it is or why they should turn it off.

With Adobe Flash going away, some believe this will increase spam and change that landscape. We need to think about that.

Parisa Tabriz, Director of Engineering, Google.
Parisa has worked as a pen tester, an engineer and, more recently, a manager. She has often felt she was playing a game of "whack-a-mole", where the same vuln (or a trivial variation of another vuln) pops up over and over. How do we get away from this? We have to be more strategic in our defense.
Blockchain is not going to solve our security problems (no matter what the vendors in the expo hall tell you…).

It is up to us to fix these issues. We can make great strides here – but we have to realize our current approach is insufficient.

We have to tackle the root cause, pick milestones to celebrate, and build out a coalition. We need to invest in bold programs – building that coalition with people outside of the security landscape.

We cannot be satisfied with just fixing vulnerabilities. We need to explore the cause and effect – what causes these issues.

Imagine a remote code execution (RCE) bug is found in your code – yes, fix it, but also figure out why it was introduced (the 5 Whys).

Google started Project Zero in 2014 with the mission "Make 0-Day Hard". The team treats Google products like third-party software and has found thousands of vulnerabilities – but they want to achieve the most defensive impact from any vulnerability they find.

The team found that vendor response varied wildly across the industry – and it never really aligned with consumer needs. There is a power imbalance between a security researcher and the big companies making the software. Project Zero set a 90-day disclosure timeline, which removed the negotiation between a researcher and the big company. A deadline-driven approach causes pain for the larger organizations that need to make big changes – but it is leading to positive change at these companies. They are rallying and making the necessary fixes internally.

One vendor improved their patch response time by as much as 40%! 98% of issues are now fixed within the 90-day disclosure period – a huge change! It is unclear what all of those changes are, but the guess is improved processes, new security response teams, etc.

If you care about end user security, you need to be more open. More transparency in Project Zero has allowed for more collaboration.

We all need to increase collaboration – but this is hard with corporate legal, process and policies. It’s important that we work to change this culture.

The defenders are our unsung heroes – they don’t win awards, often are not even recognized at their office. If they do their job well, nobody notices.

We lose steam in distraction driven work environments. We have to project manage, and keep driving towards this goal.

We need to change the status quo – if you’re not upsetting anyone, then you’re not going to change the status quo.

One project Google is using to change the world is moving people away from HTTP to HTTPS on the web platform – not just Google services, but the entire world wide web. We wanted to see a web that was secure by default, not opt-in secure. Older versions of Chrome didn't make it obvious to users which site was more secure – something to work on.

Browser standards come from many standards bodies – IETF, W3C, ISO, etc. – and then people build browsers on top of those using their own designs. Going to HTTPS is not as simple as flipping a switch – you need to worry about getting certificates, performance, managing the security, and so on.

They did not want to create warning fatigue, or have security reported inconsistently (that is, a site reported as insecure on Chrome but secure on another browser).

They needed to roll out these changes gradually, with specific milestones they could celebrate. It started with a TLS haiku poetry competition, which led to brainstorming. They shared ideas publicly, got feedback from all over, and built support internally at Google to drive this. They published a paper on how to best warn users, and papers on who was and was not using HTTPS.

They started a grassroots effort to help people migrate to HTTPS, and celebrated big conversions publicly, recognizing good actors. Vendors were given a transition deadline, with clear milestones to work against. They also had to work with certificate vendors to make it easier and cheaper to get certificates.

The team ate homemade HTTPS cake and pie! It is important to celebrate accomplishments and acknowledge difficult work. People need purpose – it will drive and unify them.

Chrome set out with an architecture that would prevent a malicious site from attacking your physical machine. But now, with lots of data in the cloud, cross-site data attacks have grown. Google's Chrome team started the Site Isolation project in 2012 to prevent data from moving that way.

We need to continue to invest in ambitious proactive defensive projects.

Projects can fail for a variety of reasons – management can kill the project, for example. The Site Isolation project was originally estimated at a year, but it actually took six… a schedule delay at that level puts a bulls-eye on you. Another issue can be lack of peer support – be a good team player and don't be a jerk!

Wednesday, August 8, 2018

BH18: There Will Be Glitches: Extracting and Analyzing Automotive Firmware Efficiently

Alyssa Milburn & Niek Timmers, Riscure.

The standard approach for breaking into embedded systems: understand the target, identify a vulnerability, exploit the vulnerability. Note – here, the embedded systems in question are the ECUs found in cars.

To understand the embedded system, you need to understand the firmware. To do that, you need to get hold of a car! A good source for cheap cars with common components: recalled Volkswagens :-)

Today's talk targets the instrument cluster – why? Because it has visual indicators, so you can see what is happening – it has blinking lights! :-)

Inside the instrument panel you will find the microcontroller, the EEPROM, the display and a UART for debugging (but it's been secured). So we have just inputs and outputs we don't understand. After much analysis, they discovered most instrument panels speak UDS (ISO 14229) over the CAN bus. This covers diagnostics, data transmission (read/write), the security access check, and loads more!

The team identified the read/write memory functions, but also discovered they were well protected.
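For concreteness, here is a minimal sketch of what a UDS ReadMemoryByAddress request over CAN might look like, using the python-can library. The diagnostic IDs, the ISO-TP framing byte, and the address format here are hypothetical placeholders – real ECUs define their own.

```python
import can  # pip install python-can

# Hypothetical diagnostic CAN IDs - real ECUs use their own pair.
REQ_ID, RESP_ID = 0x7E0, 0x7E8

bus = can.interface.Bus(channel="can0", bustype="socketcan")

def read_memory(address, length):
    """Send a UDS ReadMemoryByAddress (0x23) request as one ISO-TP frame.

    0x07 = ISO-TP single frame carrying 7 payload bytes; 0x14 = format
    byte for a 4-byte address and a 1-byte length (illustrative choice).
    """
    payload = bytes([0x07, 0x23, 0x14]) + address.to_bytes(4, "big") + bytes([length])
    bus.send(can.Message(arbitration_id=REQ_ID, data=payload, is_extended_id=False))
    resp = bus.recv(timeout=0.1)
    if resp and resp.arbitration_id == RESP_ID and len(resp.data) > 2 and resp.data[1] == 0x63:
        return bytes(resp.data[2:])  # 0x63 = positive response to 0x23
    return None  # negative response, wrong frame, or timeout
```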

They discovered that the MCU has voltage boundaries, and going out of bounds can stop it. But... what if you go out of bounds for only a very short time? Will the chip keep running?

They had to get fault injection tooling – a ChipWhisperer or Inspector FI – all available to the masses.

Fault injectors are great for breaking things. Once a glitch is introduced, nothing can be trusted. You can even change the executed instructions - opens a lot more doors! If you can modify instructions, you can also skip instructions!

They investigated adding a glitch to the security access check. Part of the check involves a challenge, and if the expected response is received, access is granted. The team tried glitching here but was not successful, due to a 10 minute timeout after 3 failed attempts. As they were looking for something easy... they moved on!

So they moved on to glitching ReadMemoryByAddress – no timeout here! They were successful on several different ECUs, designed around different MCUs. Depending on the target, they could read N bytes from an arbitrary address, and were able to extract the complete firmware in a few days.

There are parameters you can tweak for this glitch – delay, duration and voltage. Lots of pretty graphs followed.
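The search over those parameters is essentially a brute-force sweep. A sketch of what that loop might look like, reusing the read_memory helper above and a purely illustrative fault-injection interface (a stand-in for whatever tool, e.g. a ChipWhisperer, drives the target):

```python
import itertools

class Glitcher:
    """Illustrative stand-in for a real FI tool's control API."""
    def arm(self, delay, duration, voltage):
        raise NotImplementedError("wire this up to your FI hardware")

glitcher  = Glitcher()
delays    = range(0, 10_000, 50)   # ns after the CAN trigger
durations = range(20, 200, 10)     # ns spent below the voltage boundary
voltages  = [0.4, 0.6, 0.8]        # dip level, as a fraction of Vcc

for delay, duration, voltage in itertools.product(delays, durations, voltages):
    glitcher.arm(delay=delay, duration=duration, voltage=voltage)
    data = read_memory(0x00008000, 0x10)   # normally rejected by the ECU
    if data is not None:                   # the glitch skipped the check
        print(f"hit: delay={delay}ns duration={duration}ns v={voltage}")
        break
```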

It's hard to just do static analysis, as there is often generated code.

So, they wrote an emulator – it allowed them to hook into a real CAN network, add debug stop points, and track execution more closely.

Using taint tracking in the emulator, they were able to find the CalculateKey function.
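The talk didn't name their emulator's internals, but the Unicorn engine is a common base for this kind of harness. A minimal sketch, with a hypothetical load address and breakpoint standing in for the real firmware layout (a real harness also needs peripheral stubs and CAN hooks on top):

```python
from unicorn import Uc, UC_ARCH_ARM, UC_MODE_THUMB, UC_HOOK_CODE

FLASH_BASE = 0x08000000   # hypothetical flash base for the MCU
KEY_FUNC   = 0x08001234   # hypothetical CalculateKey candidate

fw = open("cluster_fw.bin", "rb").read()   # e.g. dumped via the glitch above

mu = Uc(UC_ARCH_ARM, UC_MODE_THUMB)
mu.mem_map(FLASH_BASE, 0x00100000)         # flash
mu.mem_map(0x20000000, 0x00020000)         # RAM
mu.mem_write(FLASH_BASE, fw)

def trace(uc, address, size, user_data):
    # a "debug stop point": halt when the suspected key routine executes
    if address == KEY_FUNC:
        print("reached CalculateKey candidate")
        uc.emu_stop()

mu.hook_add(UC_HOOK_CODE, trace)
mu.emu_start(FLASH_BASE | 1, FLASH_BASE + len(fw))  # |1 selects Thumb mode
```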

New tools are coming for electromagnetic fault injection – expensive right now, but getting cheaper.

ECU hardware still needs to be hardened – things like memory integrity and processing integrity. Unfortunately, these are currently designed only for safety (not security).

There should be redundancy, and designers should be more paranoid. ECUs should not expose keys – they need to leverage HSMs (hardened cryptographic engines). The speakers highly recommend asymmetric crypto – so the ECU only holds a public key.

Do better :-)



Wednesday, November 19, 2014

ICMC: Explaining the ISO 19790 Standard, Part 2

Randall Easter, NIST, Security Testing, Validation and Management Group

ISO 19790 is available for purchase from ISO (in Swiss francs) or ANSI (in US dollars), and you'll also need the derived test requirements (ISO/IEC 24759).

In this section, Randy walked us through a deep dive of the sections of the document.

There is a new Terms and Definitions section, which will hopefully help to clear up ambiguity and help answer a lot of the questions they've gotten over the years.

The new document has all of the SHALL statements highlighted in red, tagged [xx.yy] – where xx indicates the clause and yy is a numeric index within the clause. This will, hopefully, make it easier for two people to have a conversation about the standard. The plan is that when errors are found and fixed with addenda, complete new revisions of the document will be made available – i.e., everything in one place.

Errors were found during translations to Korean and Japanese, when the translators just could not figure out what something was supposed to mean (turns out, it wasn't clear in English, either). We should expect more changes as people start to implement against this standard; errors will again be corrected in revisions. Mr. Easter was not clear on what ANSI/ISO will charge for revised documents.

There will again be four levels for validation. The physical security requirements vary greatly between the levels, from "production grade components" to "tamper detection and response envelope, EFP and fault injection mitigation". You will still need environmental controls for protecting key access, etc.

There are algorithms, like Camellia, that the US Federal Government is not allowed to use, but vendors design software for international markets. So federal users can only use certain algorithms – the vendors do NOT have to restrict this; it is up to the end user to implement and use the policy correctly. How do you audit that, though?

The ISO standard states that every service has to have a "bit" that tells whether or not it's an approved service. This should enable better auditing.
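The standard doesn't prescribe an implementation, but conceptually the "bit" is a per-service indicator the caller (or an auditor) can query. A hypothetical sketch – all names here are invented:

```python
APPROVED_SERVICES = {"aes_gcm_encrypt": True, "rc4_encrypt": False}

_last_service_approved = None

def run_service(name, func, *args, **kwargs):
    """Run a module service and record its approved/non-approved 'bit'."""
    global _last_service_approved
    _last_service_approved = APPROVED_SERVICES.get(name, False)
    return func(*args, **kwargs)

def last_service_was_approved():
    """The queryable indicator an auditor or caller can check."""
    return _last_service_approved
```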

This is a great policy, but what happens when you have to work with what you get? For example, someone could send the IRS a tax return encrypted with RC4. The IRS can decrypt using RC4, but would have to store it using AES. It would be good to know what has happened here.

The new ISO standard has requirements around roles, services and authentication. The minimum role is the crypto officer – there has to be logical separation of required and optional roles and services. The higher the levels go, the more restrictions: from role-based or identity-based authentication all the way up to multi-factor authentication.

There have been a lot of questions about what FIPS 140-2 means by "list all the services". Does that mean only the approved services? No – all services, even non-approved security functions. Non-security-relevant services also have to be listed, but that can refer to a separate document. [VAF: still curious what exactly that means – an OS provides a LOT of services!]

ISO 19790 has new directives for managing sensitive security parameters – for example, you have to provide a method to zeroize keys. This could be done procedurally by an operator, or as a result of tamper. Other examples this area covers: random bit generation; SSP generation; and entry, output and storage of keys.

Self-tests have changed. Pre-operational tests cover software/firmware integrity, bypass and critical function tests. Crypto operations can begin while these tests are running, but output cannot be provided until the pre-operational tests have completed.
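In other words, the tests may run concurrently with crypto operations but act as a gate on output. A minimal sketch of that rule, with the actual tests and cipher stubbed out (names illustrative):

```python
import threading

_post_passed = threading.Event()

def _pre_operational_tests():
    integrity_ok = True          # stand-in for the real integrity/bypass tests
    if not integrity_ok:
        raise RuntimeError("pre-operational test failure: enter error state")
    _post_passed.set()

threading.Thread(target=_pre_operational_tests, daemon=True).start()

def _do_encrypt(data):           # stand-in for the real cipher
    return data

def encrypt(data):
    result = _do_encrypt(data)   # work may proceed while tests run...
    _post_passed.wait()          # ...but output is held until they pass
    return result
```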

Known answer tests are now all conditional tests, like pair-wise consistency checks. Vendors can continue to do all of the tests up front, or as needed – lots of flexibility here. Mr. Easter made a note I couldn't quite follow about module updates being signed by the crypto officer – not clear why it wouldn't be the vendor. [VAF: I may have missed something here.]

The new standard still allows simulation environments for testing, and special interfaces for the labs to test against that may not be needed by the consumer.

Surprisingly to NIST, some consumers don't care about FIPS 140 validations, yet vendors still want to provide validated modules to them. For example, some tamper evident seals may need to be installed by the cryptographic operator, or some initialization procedures run. Some customers may not even care about power-on self-tests EVER being run, so that configuration information has to be part of the integrity check.

As a note: some of the current FIPS 140-2 implementation guidance will be retained with this new standard, as they are newer than the ISO standard, or too vendor specific to be included in a general document.

The vendor may have mitigations against attacks for which there are currently no testable requirements, and that is allowable through Level 3. Once you get to Level 4, you have to prove your mitigations.

The new standard allows for degraded modes of operation - for example, if one algorithm stopped functioning properly the rest of the algorithms could still be used.

Something new: there has to be a way to validate that you're running a validated version, and what the version number is. This is interesting and tricky, because of course you get your validation done on released software, so when you ship you don't know that you're validated. And if you store validation status in another file, it could easily get out of date (i.e., updates done to the system, but the software still reporting it's validated). There are ways to solve this, but vendors should tread carefully.
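One hedged sketch of the idea: bake the version string into the module itself, so it is covered by the same integrity check as the code, rather than living in a side file that can drift. Everything here is hypothetical:

```python
# Hypothetical: this constant is set at build time, inside the module
# itself, so the module's own integrity check covers it.
MODULE_VERSION = "2.1.0-fips"

def get_version():
    """The 'show version' service: reports what is actually loaded,
    not what a separate (possibly stale) status file claims."""
    return MODULE_VERSION
```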

Also, Annex C (and only Annex C) which covers approved security functions (ie algorithms) can be replaced by other countries, as needed.

Q&A:

Q: "How does software have an 'opaque' coating?" A: "Physical security requirements do not have to be met by software modules".

Q: "Lots of services could be going on – what do we need to be concerned with?" A: "Services that are exposed by the module, via command line or API. Security services need to be queryable."

Q: "Why should the crypto officer be signing new modules? They may not be able to modify the trust anchors". A: Was using crypto officer as an example, policies could vary - but it is the crypto officer's decision on what to load.

ICMC: Explaining the ISO 19790 Standard, Part 1

Randall Easter, NIST, Security Testing,  Security Management  & Assurance Group

FIPS 140-1 was published in January 1994, and a year later there came the Derived Test Requirements and guidelines for how to test to these DTRs.  The standard has continued to evolve over the last 20 years.  FIPS 140-2 was published in May 2001.  The DTRs cannot be written until after the standard has stopped fluctuating, which is why there's always a delay.

The goal of the Cryptographic Module Validation Program (CMVP) is to show a level of compliance to given standards. They also desire to finish the testing and validation while the software/hardware is still current and shipping (ed note: not a goal that has been consistently met in recent years - goals and budget do not always align).

NIST's goal is to reevaluate standards every 5 years to make sure they are still current.

A request for feedback and comments on FIPS 140-2 came out in 2005. At the same time, the international community wanted a rough international equivalent of FIPS 140. ISO/IEC 19790:2006, published in March 2006, was that rough equivalent of FIPS 140-2. It included editorial changes to clarify items that people were interpreting differently – perhaps roughly equivalent to the Implementation Guidance we've received for FIPS 140-2?

The original goal was to develop the ISO/IEC standard in parallel with FIPS 140-3, so that we could get the best of both worlds: international participation and an international standard, plus a new FIPS 140 document at the same time. The problem with ISO standards: they are worked on privately, and even the "public" final version must be purchased. The FIPS 140 standards are downloadable for free from NIST's website.

ISO/IEC 19790:2012 was published on August 15, 2012, and the DTRs came out in January 2014.

The draft for FIPS 140-3 came out in January 2005, but never became a formal standard.  What happened?

Mr. Easter noted that there was a change of ownership of the document, and FIPS 140-3 quickly diverged from ISO. ISO has very strict rules about meeting twice a year, and insists on progress or your work will be dropped. Work on FIPS 140-3 stalled in NIST... but work had to forge ahead in ISO.

Mr. Easter, editor of the ISO document, is happy with how it's come out.  :-)

While ISO/IEC 19790:2012 has been out since... 2012, NIST has not formally announced that their intention is to move to this standard.  That lack of announcement seems to be a political one, as the person that should be making that announcement hasn't been hired, yet....

One interesting change in the ISO document is that it covers algorithms that are not approved by the US Government but are used regularly by the international community. There is a concept of a wrapper document where more algorithms could be added and clauses modified – but the more someone does that, the more the standards diverge.

The ISO document was worked on by the international community and circulated to all of the labs that do FIPS 140-2 validations for comments. Mr. Easter believes it was better circulated than a NIST review would have been. I would disagree here: ISO is a closed process, so developers could not provide feedback from their side (and believe me – developers and labs speak a different language, and I am certain our feedback would have been unique and valuable).

ISO/IEC 19790:2012 contains new sections on Software Security and Non-Invasive Security, and removes the Finite State Model. FIPS 140-2 was very hardware specific – built around hardware DES modules. The new standard acknowledges that software exists. ;-)

NIST held a Software Security Workshop in 2008 to learn more about how software works. At the time, software could be validated to higher levels by being tied to Common Criteria. Based on input from this workshop, the levels software could validate against were changed and the relationship to Common Criteria was severed – that made it into the 2009 draft of FIPS 140-3. Unfortunately, that was the last draft of FIPS 140-3, and the standard never became final.

Looking at an appendix comparing FIPS 140-2 and ISO/IEC 19790:2012 – very similar, with notable differences like the Finite State Model being dropped and a new software/firmware section added. There's now guidance on non-invasive security testing and sensitive security parameter management.

The annexes of ISO/IEC 19790:2012 sound more helpful – there's an entire annex telling you what the documentation requirements are. In FIPS 140-2, you just had to troll your way through the entire document and hope you caught all of the "thou shalt document this" clauses.

One of the annexes is about approved authentication mechanisms... unfortunately, at this time, there are no formal authentication mechanisms.  Hopefully that can be covered in Implementation Guidance.

The big goals for ISO/IEC 19790:2012 were to harmonize with FIPS 140-3 (140-3 will now be a wrapper document referring to the ISO document), editorial clarifications from lessons learned (Randy noted that he wasn't even sure exactly what he was trying to say when he re-read things years later ;), incorporation of all of the FIPS 140-2 Implementation Guidance, and most of all entropy guidance (ah, entropy....).

Additionally, new technologies need to be addressed – we're putting entire algorithms on chips, and sometimes just pieces of algorithms. How does that fit into the boundary? That is where technology is going, and a modern standard needs to understand it and take it into consideration, in ways that FIPS 140-2 simply couldn't have conceived.

Software will take advantage of this - it only makes sense, but that starts to put us into hybrid mode. This is covered in the new ISO standard, not just as implementation guidance.

The new ISO standard addresses the trusted channel without reference to Common Criteria. ISO/IEC 19790:2012 also added a new requirement: a service to query version information. This is interesting to us in OS land, as our FIPS version number and OS version number are not the same. We see this with other software vendors as well.

Integrity check is simpler for level 1 in this new standard, but more difficult for levels 2 and higher.

The new software security requirements section is definitely an improvement over FIPS 140-2, but still not as good as it could be. ISO did not get much feedback on this section in the time frame when they requested it.

The ISO standard had to remove the reliance on Common Criteria, as NIAP is moving away from those evaluations and the protection profiles are up in the air. The ISO group didn't want to tie themselves to a moving target; instead they added specific requirements for things like the auditing the operating system should provide if you want to get Level 2. In general, software won't be able to get above Level 2 in this new standard.

An interesting side note: you can actually "mix & match" your levels. You could have Level 2 for certain sections, and an overall Level 1 validation (based on your lowest section). For example, for United States Post Office postage meters, they want Level 3 physical security, but overall Level 2 for everything else. It's important that people cannot break into the machine, but things like RBAC (role based access control) are not as important.

ISO/IEC 19790 added new temperature requirements. That is, you have to test at the extremes the vendor claims the module operates at – though if the vendor only wants to claim ambient temperature, that would be noted in the security policy. Modules should be tested at the "shipping range" as well. The reason? Imagine leaving your epoxy-covered Level 4 crypto device on the dashboard of a car in the summer.... well, that epoxy melts. Would it really still be a Level 4 device after that? No – so temperature range is important.

We no longer require that all of the self-tests complete before the module runs. For example, if you're going to use AES, you only need to run the known answer tests on AES, not on ALL of the algorithms. NIST understood that for devices like contactless smartcards, you can't wait forever for the door to your office to open. Power-on self-tests are now conditional, as opposed to required all the time.

The new standard adds periodic self-tests, at a defined interval (with an option to defer when critical operations are already in use).

There is a new section on life-cycle assurance and coverage for End of Life - what do you do with this crypto box when you're upgrading to the shiny new hardware?

Keep in mind that this is still a revision of FIPS 140-2, not a completely new document. The document will seem familiar, but it should overall be more readable and provide greater assurance. We didn't add things just because they sounded like fun ways to torture the vendor, even if some vendors may think that ;-)

Questions from the audience: "Will we be getting rid of Implementation Guidance?" No – we can't possibly guarantee to get everything correct in the first standard, and technology changes faster than the main document. "Can we stop calling it Guidance? If vendors can't refuse to do it, then it's not guidance." It's always been called that; you can choose not to follow it – but you won't pass your validation. Maybe we should change the name – a contest, perhaps? (Suggestion from the audience: "call it requirements, as that's what they are – you are required to follow them.") My suggestion: call them "Implementation Clarifications". The word "guidance" is too associated with advice, and this is not soft advice.








Thursday, September 26, 2013

ICMC: ISO/IEC 19790 Status and Supporting Documents

Presented by Randall Easter, NIST; and Miguel Bagnon, Epoche & Espri.

Mr. Bagnon started out by explaining the structure of ISO, the IEC and the SC27 working group. The ISO standards body looks at creating market-driven standards, getting input from all of the relevant stakeholders. SC27 focuses on security and privacy topics across 5 working groups. SC27 has 48 voting member countries – from Algeria to Uruguay! – plus 19 observing countries. You can see a very wide representation of continents, countries, cultures and languages.

The WG3 Mission is security evaluations, testing and specification. This covers how to apply the criteria, testing criteria, and administrative procedures in this area.

The process is open to the world (as you can see), drafts are sent out for review by the public before becoming a final international standard.  Please participate if you can, it's the only way to have your opinion counted.

Mr. Easter then dove into ISO 19790, and the related standards: ISO 24759 (test requirements for cryptographic modules), 18367 (algorithm and security mechanisms conformance testing), 17825 (testing methods for the mitigation of non-invasive attack classes against crypto modules) and 30104 (physical security attacks, mitigation techniques and security requirements).

ISO 19790 was first published in 2006, and it was technically equivalent to FIPS 140-2, plus additional requirements for mitigation of attacks at Level 4. The standard has been adopted internationally and is being used around the world.

What Mr. Easter had hoped would happen was that ISO 19790 and FIPS 140-3 would closely track each other, with ISO 19790 picking up all of the changes from FIPS 140-3. FIPS 140-3 was so delayed, though, that ISO 19790 began to develop independently.

Mr. Easter noticed that there were no validation labs participating in the ISO standard, so he got permission to circulate the draft amongst the labs and to incorporate their comments, as he's the editor of the document.

This document has been adopted by ANSI as a US standard now as well.

At this time, it is not officially recognized by NIST and the US Government.

This is very frustrating to many vendors and labs, because FIPS 140-2 was published in 2001 and it is quite stale (hence the 170 page Implementation Guidance). Technology is changing, the original language in FIPS 140-2 wasn't clear to all, and there seems to be a way out - if only NIST would adopt it.

Until that happens, vendors are stuck implementing to FIPS 140-2.

How can you change this? Call up your NIST representative or friendly CSEC contact and ask for this.


ICMC: Key Management Overview

Presenters: Allen Roginsky and Kim Schaffer, NIST.

Key Establishment Methods



Key establishment methods in FIPS 140-2 cover key agreement, key transport (including encapsulation and wrapping), key generation, key entry (manual, electronic, unprotected media), and key derivation.

The best method to make sure you do this right is to comply with SP 800-56A (CAVP KAS certificate required).

You can also use SP 800-56B, which is vendor affirmed right now. SP 800-56B is IFC based, and key confirmation is highly desirable.

Or you can use non-approved methods that rely on approved, validated algorithm components. The shared secret is still computed per SP 800-56A with a CVL certificate. The KDF (key derivation function) would then be approved (with a CVL certificate) per SP 800-56B and SP 800-56C. There was a new version of SP 800-56A released in May 2013 that should help alleviate some of this convoluted cross referencing, and clarify many questions people have had over the last few years.

OR... you can even use non-approved but allowed implementations – that is, if your key strengths are consistent with SP 800-131A transition requirements.
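To make the "shared secret, then KDF" pattern concrete, here is a minimal sketch using Python's `cryptography` package. Note that using this library does not by itself make anything validated, and the peer key here is generated locally purely for illustration:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

ours = ec.generate_private_key(ec.SECP256R1())
peer = ec.generate_private_key(ec.SECP256R1())   # stand-in for the other party

# ECDH shared secret (the SP 800-56A-style primitive)...
shared_secret = ours.exchange(ec.ECDH(), peer.public_key())

# ...fed through a KDF to produce the actual working key.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"illustrative context").derive(shared_secret)
```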

Key Transport Modes

Key transport modes can be confusing as well. Key encapsulation is where keying material is encrypted using an asymmetric (public key) algorithm. Key wrapping, though, is where the keying material is encrypted using symmetric algorithms. Both commonly provide integrity checks.

Approved methods would be: an approved IFC-based key encapsulation scheme as in SP 800-56B; key wrapping schemes (AES or 3DES based) per SP 800-38F; AES-based authenticated encryption modes permitted in SP 800-38F; or, per SP 800-56A, a DLC-based key agreement scheme together with a key wrapping algorithm.

Also allowed: any key encapsulation scheme employing an IFC-based methodology that uses key lengths specified in SP 800-131A as acceptable or (through 2013) deprecated. When AES or 3DES is used for wrapping, a CAVP validation of the algorithm is required.
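A short sketch of the AES key-wrapping mode (SP 800-38F / RFC 3394) with the same `cryptography` package; the keys here are random placeholders:

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)   # key-encryption key
dek = os.urandom(32)   # key being transported

wrapped = aes_key_wrap(kek, dek)
assert aes_key_unwrap(kek, wrapped) == dek   # unwrap raises on tampering
```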

Key Generation Methods

People often mistakenly believe that because they are using a good RNG, they must be doing the right thing for key generation... not always the case! You still need to follow SP 800-133 and IG 7.8 (Implementation Guidance).

The vendor needs to identify the method used and account for the resulting length and strength of the generated keys. This covers the generation of a symmetric algorithm key or a seed for generating an asymmetric algorithm key, and the generation of asymmetric algorithm domain parameters and RSA keys. See IG 7.8 and the future version of SP 800-90A.

You can use SP 800-132 for password-based key generation for storage applications only.
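SP 800-132 is essentially PBKDF2. A sketch with the `cryptography` package – the parameters are illustrative, and again, this is for storage applications only:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

salt = os.urandom(16)
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                 salt=salt, iterations=600_000)
storage_key = kdf.derive(b"correct horse battery staple")
```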

Key Entry

Implementation Guidance (IG) 7.7 provides examples explaining the FIPS 140-2 requirements. Key entry/output via a GPC's internal path is generally N/A. Key establishment over unprotected media requires protection. Split knowledge entry is required for manually distributed keys at Levels 3 and 4.

Key Derivation

When you're deriving a key, it's coming from something else. If you're deriving from a shared secret (per SP 800-135rev1), the following protocols and their key derivation functions are included: IKE (versions 1 and 2), TLS (1.0–1.2), ANSI X9.42 and X9.63, SSH, SRTP, SNMP and TPM. You can also derive from other keys, which is covered by SP 800-108 – which also includes the IEEE 802.11i key derivation functions (IG 7.2).
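For the "derive from other keys" case, SP 800-108's counter-mode KDF is available in the `cryptography` package as KBKDFHMAC. A minimal sketch with illustrative label/context values and random input keying material:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.kbkdf import (
    KBKDFHMAC, CounterLocation, Mode)

kdf = KBKDFHMAC(algorithm=hashes.SHA256(), mode=Mode.CounterMode,
                length=32, rlen=4, llen=4,
                location=CounterLocation.BeforeFixed,
                label=b"example label", context=b"example context",
                fixed=None)
derived_key = kdf.derive(os.urandom(32))   # input keying material
```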




ICMC: The Upcoming Transition to New Algorithms and Key Sizes

Presented by Allen Roginsky, Kim Schaffer, NIST.

There are major things we need to be concerned about – we need to move from old, less secure algorithms to new ones. This includes the transition to 112-bit-strong crypto and closing certain loopholes in old standards.

The algorithms will fall into the following classes:
  • Acceptable (no known risks of use)
  • Deprecated (you can use it, but you are accepting risk by doing so)
    • This is a temporary state
  • Restricted (deprecated and some additional restrictions apply) 
  • Legacy-Use (may only be used to process already-protected information) 
  • Disallowed (may not be used at all)
And of course, these classifications can change at any time. As you all know, the crypto algorithm arena is ever changing.  I asked a question about the distinction between Legacy-Use and Disallowed. It seems to me that you might find some old data lying around that you'll need to decrypt at a later date. Mr. Roginsky noted that they didn't really cover this when they did the DES transition, and you might be okay because decrypting is not really "protecting" data.

When we get to January 1, 2014, 112-bit strength is required. Two-key 3DES is restricted through 2015. Digital signatures are deprecated through 2013 if they aren't strong enough. This is an example where you could continue to use them for verification under "Legacy-Use" when we reach 2014.

Non SP-800-90A RNGs are disallowed for use after 2015 – you won’t even be able to submit a test report after December 31, 2013 if you don’t have an SP-800-90A RNG.

There is a new document everyone will want to review: SP 800-38F – it explains the use of AES and 3DES for key wrapping.

SHA-224, -256, -384 and -512 are all approved for all algorithms. SHA-1 is okay, except for digital signature generation. There are other changes around MACs and key derivation.

We'll also be transitioning from FIPS 186-2 to FIPS 186-3/4. Conformance to 186-2 can be tested through 2013. Already-validated implementations will remain valid, subject to the key strength requirements. Only certain functions (such as parameter validation, public key validation and signature verification) will be tested for 186-2 compliance after 2013. What this really means is that some key sizes are gone after 2013: RSA can only use 2048- and 3072-bit keys.

Make sure you also read Implementation Guidance (IG) 7.12: RSA signature keys need to be generated as in FIPS 186-3 or X9.31.

The deadlines are coming up – don’t delay!


ICMC: Reuse of Validation of Other Third Party Components in a 140-2 Cryptographic Module

Presented: Jonathan Smith, Senior Cryptographic Equipment Assessment Laboratory (CEAL) Tester, CygnaCom Solutions

What is a component in this context? An algorithm, a 140-2 module, a third party library, etc. – not a hardware device. There is more interest in this area as more validations occur. Requirements are not obvious here, and there isn't a lot of guidance to follow.

Let's say you want to reuse an algorithm that has its CAVP certificates – if you want to leverage that validation, you have to make sure you are talking about the same operational environment (OS/processor for software) and that there is no change within the algorithm boundary when you embed it within a module. CAVP boundaries are not as well defined as CMVP ones, but for all intents and purposes the boundary is the compiled binary executable that contains the algorithm implementation.

When you're reusing someone else's algorithm, you will have a hard time making sure all of the CMVP self-tests are run at the right time. You may not be able to reuse it without rebuilding it.

Now, you may want to use an entire validated module. First, make sure you have the correct validated version. If you can use it completely unchanged, you will have to reference the other module's certificate. One note: if the embedded module is Level 2 but your code only meets Level 1 criteria, the composite module cannot be evaluated higher than Level 1. The inverse is not necessarily true – you might embed a Level 1 module, but your different use cases may allow the composite module to reach a higher level.

To reuse a module you again need an unchanged operational environment, the same as when reusing an algorithm. The new module boundary must include the entire boundary of the included module. You'll also need a consistent error state – you cannot allow one part of the composite module to enter an error state while the rest of the system continues serving crypto.

Your documentation can quite frequently reference the embedded module's documentation, leaving certain tasks up to the embedded module.  Make sure the new capabilities of the composite module are documented.

A question came up about using multiple vendors' modules together, where each has its own validation certificate. Mr. Easter (CMVP) recommended we read Implementation Guidance (IG) 7.7 for detailed advice on this concept.

There was a question about what happens if the embedded module was validated before new IG came out. As long as the embedded module meets SP 800-131A, the old certificate fully applies and you will not have to bring it up to the new IG.


Wednesday, September 25, 2013

ICMC: FIPS and FUD


Ray Potter, CEO, Safelogic.

Mr. Potter has seen lots of vendors jumping on the FIPS 140 bandwagon when they see another vendor claiming FIPS 140 certification, without understanding what that means. That other vendor may have simply gotten an algorithm certificate for just one algorithm, for example.

FIPS 140 is important – it provides a measure of confidence and a measure of trust, with a third party validating your claims. FIPS 140 is open: everyone in the world can read the standard and the IG, implement against them, and understand what it means to be validated. Most importantly, FIPS 140 is required by the US Government and desired by other industries (medical, financial, etc.).

Having this validation shows that you've correctly implemented your algorithms and you are offering secure algorithms.

You see claims like the module has been "designed for compliance to FIPS 140-2", or "it will be submitted", or "a third party has verified this", or "we have an algorithm certificate", or "we have implemented all of the requirements of FIPS 140" – none of these means truly FIPS 140-2 validated.

Once you have a certificate in hand from the CMVP, then you're validated.

But even when the vendor has done the right thing for some products, sales can just get this wrong - too eager to tell the customer that everything is then validated.

So, Mr. Potter has encountered honest mistakes, but he's also seen sales/vendors outright lie about this, because it simply takes too long and is too expensive. Why do this? Make the sale – and hope your in-process evaluation completes before the sale does.

Issues that exacerbate the situation: unpredictable CMVP queues, new Implementation Guidance (IG) that is sometimes retroactive, and uneven application across the government. Some government agencies may accept different phases (in validation), where others require a certificate in hand.

Vendors get frustrated when they are in the queue for months, then get some retroactive IG that requires code changes - they don't see this as worth the effort.

We can help: educate your customers on the value of FIPS-140 validations and what it really means to be validated, only use validated modules and follow strict guidelines for claims.

There are some people that will take another FIPS-140 validated implementation, repackage it and get their own certificate for the same underlying module but with their name as the vendor on the cert.

I asked why some vendors do validations of their crypto consumers when there's already a validation for the underlying module. Mr. Potter noted that some people might do this because they need to cover the key management the consumer is doing that wasn't covered in the other evaluation, or the consumer may have some of its own internal crypto in addition to what it gets from the underlying module, or they simply are trying to make a very important customer happy.


ICMC: Understanding the FIPS Government Crypto Regulations for 2014

 Presented by Edwards Morris, co-founder Gossamer Security Solutions.

Looking forward to what the new regulations will mean for older algorithms and the protocols that use them, right away it seems like there may be work for us to do.

The security strength of various algorithms is not straightforward to calculate, as it's based on how the algorithm is used, how big the key is, etc.

On May 19, 2005, DES was sunset, because it provides less than 80 bits of security. Because the sunset applies to bits of security, DH and RSA with key sizes smaller than 1024 bits are included.

Originally the DES transition gave people two years to migrate to AES or 3DES.

As a part of this, NIST released SP 800-57, which included recommendations for key management.

This was harder than anticipated, due to lack of standards, available 140-2 approved products and the sheer size of deployments - so then NIST 800-131A was born.

NIST 800-131A includes more details on the transitions, terminology and dates.  Some 80-bit crypto will be deprecated in 2013 and others in 2015.

The devil is in the details, of course. SP 800-131A refers to SP 800-131B, which is in draft and ultimately became FIPS 140-2 Implementation Guidance. Digging into all of these requirements, how can you tell if TLS can still be used?

Mr. Morris dug into  various protocols to help us interpret this standard.

IPsec

IPsec is made up of so many RFCs that getting the big picture is not an easy task. It also has many possible options. SP 800-57 Part 3 details this guidance. Three IPsec protocols allow a choice of algorithms: IKE (for key exchange), ESP and AH.

You can avoid IKE by using manual keying (acceptable 2014+, but what a pain to configure/deploy); IKEv1 is unacceptable in 2014, and IKEv2 is acceptable 2014+, if configured correctly.

For encryption in IPsec, you'll be okay with ENCR_3DES (in CBC mode), ENCR_AES_CBC (CBC mode only), and a few others.  You'll be able to use SHA1 as an HMAC, but not alone for signatures (wow, does this get twisted).

Mr. Morris continued through what would be acceptable for Pseudo-random functions, Integrity, Diffie-Hellman group and Peer Authentication - too quickly for me to type all of those algorithms, but I will try to update this if I can get access to the slide deck.

It's not as simple as which algorithm you use, but again how you use it, which sized keys, etc.

IPsec is well positioned for this - as long as you configure it correctly.

TLS

TLS is not the same thing as SSL v3.0; TLS is equivalent to SSL v3.1. TLS 1.0 is acceptable. SSL, though, is not allowed by CMVP. Since 1.0 is acceptable, that essentially means TLS 1.1 and 1.2 are also acceptable, as long as they're configured correctly (seeing a theme here?). For example, if you use pre-shared keys in lieu of certificates, ensure the key is greater than or equal to 112 bits.

You won't be able to use the following key exchanges: *_anon, SRP_*, KRB5 (though Mr. Morris hasn't dug through the Kerberos standards yet to see where they will be with these 2014 requirements).
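As a present-day illustration (this API postdates the talk), excluding those key exchanges in a Python ssl context looks roughly like this; the OpenSSL cipher string does the filtering:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# Drop anonymous, SRP, and Kerberos key exchanges, per the guidance above.
ctx.set_ciphers("DEFAULT:!aNULL:!eNULL:!SRP:!KRB5")
```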

SSH

SSH is not covered by SP 800-57, so it is harder to figure out whether it's okay. There are problems when the RFCs require things that are no longer considered secure: single DES is required to be implemented, but having it available in your implementation will be a problem.

Recommendations

Look for documentation and configuration guides that can help, or get independent evaluations of the software you're deploying (companies like Gossamer will look at how you're using even open source software like Apache, OpenSSL, OpenSSH, etc.).


ICMC: Building a Corporate-Wide Certification Program - Techniques that (might) work

Tammy Green, Security and Certification Architect at Blue Coat, came into what seemed like a simple task. She had some Common Criteria experience when she joined Blue Coat, their product had previously been evaluated at FIPS 140-2 Level 2, and they had already signed contracts with a consulting firm (Corsec) and a lab. Should've been easy, right? Her new boss said it should take about a year and only 20% of her time. Two and a half full-time years later... yikes.

One of the biggest problems was getting internal teams to work with her – people involved in the previous validations didn't even want to talk to Ms. Green about it.

Nobody wanted to do this – they want to work on the shiny new features that they can sell. How does a process that takes 2 years (often not complete until after a product is EOL) help them?

It's hard to see the long term picture – but if you want to sell to the government, you need FIPS 140-2 validations.

Ms. Green didn't want to do this by herself again. Instead of running away, she worked on setting up a local certification team (her boss hiring a program manager helped to encourage it).

In addition to having the program manager, you need a certification architect.  You can't use the same architect as the product architect, because that person is busy designing shiny new features.

You need to work with the development team well in advance - fit your FIPS-140-2/Common Criteria schedule into their development schedule. You can't screw on the necessary tests and requirements as an afterthought, and you don't want to delay a schedule because requirements are dropped in at the end.

Target the right release: because FIPS-140-2 takes so long, you need to pick a release you plan on supporting for a long time.

Ms. Green found that, after a while... engineers stopped replying to her emails and answering her phone calls. You need to identify key engineering resources to work with, and their management needs to commit those engineers to dedicating 10-20% of their time to these validations.

Once you get this set up and have educated engineering, you'll find they'll reach out to you in advance - better timing!

Her team keeps track of what needs to be done: file bugs and track them. You'd think the project manager for the product team would do this, but what she found is that the bugs get reprioritized and reassigned to a future release. Someone who understands validations needs to track these issues.

Ms. Green recommends that you create the security policy from existing documents – don't rely on engineers to do this. They simply don't understand what goes into this document or why it's important. Instead, use engineering and QA to validate content.

It's important to convince QA to continue testing FIPS mode and related features, as some customers may still want to run in FIPS mode on a newer, non-validated release, and so that a release would be ready for validation if something went horribly wrong with the older release in validation.

Schedule time to prep. Ms. Green holds 4-8 hour long meetings to make sure everyone understands what's important. Take time to prepare, make sure everyone knows what will be expected from the lab visit, and have a test plan formalized in advance. It's actually a lot of work to set up failures (the lab evaluators require that you demonstrate what happens when a failure occurs, even though you have to inject the failure to force it): debug builds, builds you know will fail, multiple test machines, platforms, etc.

To keep your team from killing you... or damaging morale, celebrate the milestones. Mention progress in every status report, celebrate the milestones, and do corporate wide announcements when you finish.

Do a post mortem to understand how this can be improved: give your engineering team a voice! Listen and take action based on what worked and didn't.

Update tools and features to make this easier next time: add keywords to bugs and features, modify the product life-cycle, and add FIPS-related questions to templates.

Suggestions/questions from the audience:
  • Make sales your best friend. Validations/certifications are not fun; nobody does them for fun – you do this to make money.
  • Get the certification team involved as early as possible: from the very beginning – marketing design meetings.
  • Why don't you run your FIPS 140 mode tests all the time? Time consuming, slower, and not seen as a priority when there are no plans to validate.



ICMC: Panel: How Can the Validation Queue for the CMVP be Improved

Fiona Pattinson, atsec information security, moderated a panel on CMVP queue lengths. Panelists: Michael Cooper, NIST; Steve Weymann, Infoguard; James McLaughlin, Gemalto; and Nithya Rahamadugu, Cygnacom.

Mr. McLaughlin said queue length is the major issue for vendors, because it impacts time to market. If you don't get your validation until after your product is obsolete, that can be incredibly frustrating. It's important to take schedule and time to market into consideration, which is impossible when you cannot anticipate how long you'll be sitting in the queue.

Mr. Weymann expressed how frustrating it is for a lab to communicate expectations to their customers, and what to do when the Implementation Guidance is changed (or, as CMVP would say, clarified) while you're in the queue – do you have to start over?

Mr. Cooper, CMVP, explained that there is a resource limitation (this gets back to the 8 reviewers). But is simply adding resources enough? They have grown in this area – they used to have only 1 or 2 reviewers. But does adding people resolve all of the issues? Finding people with the correct training is difficult as well. Hiring in the US Government can take 6-8 months, even if they do find the right person with the right background and right interest.

Mr. Weymann posits that perhaps there are ways to better use existing resources. Labs could help out a lot more here, making sure things are in such good shape at submission that CMVP's review is easier.

Ms. Rahamadugu and Mr. Weymann suggested adding more automation into the CMVP process. Mr. Cooper noted that he has just now gotten the budget to hire someone to do some automation tasks, so hopefully that will result in improvements to pace.

A comment from the audience from a lab suggested automation of the generation of the certificate, once the vendor has passed validation.  Apparently this can sometimes be error prone, resulting in 1-2 more cycles of review.  Automation could help here.

Mr. Easter said they like to complete things in 1-2 rounds of questions to the vendor, and there are penalties (not specified) for more than 2 rounds.  Sometimes the answers to a seemingly benign question can actually bring up more questions, which will result further rounds.  Though, CMVP is quick to want to have a phone call to resolve "question loops".

Mr. Easter noted that he tries to give related reviews or technologies to one person. This often helps to speed up reviews, reducing the questions. On the other hand, that puts the area expertise in the hands of one person, so when they go on vacation - there can be delays.

Mr. Weymann noted that each lab seems to gather different information and present it in different ways – for example, different ways of presenting which algorithms are included, key methods, etc.

Concern from the audience about loss of expertise if anyone moves on – could validators shadow each other to learn each other's areas of expertise? Or would that effectively limit the number of reviewers? Vendors feel like they have to continually "train" the validators, and retrain every time – frustrating for vendors to do this seemingly over and over.

A suggestion from the audience: could labs review another lab's submissions before they go to CMVP? This is difficult, due to all of the NDAs involved. Also, a vendor may not want a lab that works with a competitor reviewing their software.

Complicated indeed!


ICMC: Impact of FIPS 140-2 Implementation Guidance

Presenters: Kim Schaffer, Apostol Vassilev, and Jim Fox - all from NIST (CMVP).  The talk walked us through some of the most confusing/contentious recent updates to the Implementation Guidance document.

Implementation Guidance 9.10

Default entry points are well-known mechanisms for operator-independent access to a crypto module – static default constructors, dynamically loaded libraries, etc., linked into the application when it loads. When the default entry point is called, power-on self-tests must run without application or operator intervention.

This guidance exists in harmony with IG 9.5, which essentially makes sure there isn't a way for the module to skip power-on self-tests.
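In C this is typically an `__attribute__((constructor))`; a Python sketch of the same idea uses module top-level code, so the tests run as a side effect of import, with no operator action (names illustrative):

```python
def _power_on_self_tests():
    known_answers_ok = True     # stand-in for the real known-answer tests
    if not known_answers_ok:
        raise ImportError("power-on self-tests failed; module not loaded")

# Module-level call: runs whenever the library is loaded, so the
# operator cannot reach any service without the tests having run.
_power_on_self_tests()
```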

Implementation Guidance 3.5

All services need to be documented in your security policy, even the non-approved ones, and each service needs to be listed individually. Some services can be documented externally, but those documents need to be publicly accessible (via a URL or similar).

Operational Environment

This is defined as the OS and platform [Ed note: emphasis mine].

Today, processors are becoming less standard, so NIST can no longer consider the OS alone and needs to reinvigorate the intent of the original document. This gets more complicated with virtual machines (VMs) in the mix.

What they used to call a GPC (general purpose processor) they now refer to as an SPC (Special Processing Capability), because of the cryptography being added to the processors.

What do they mean by SPC? Extended instructions that enhance a cryptographic algorithm without fully implementing it. Such processors include Intel, ARM and SPARC.

If the module can demonstrate independence from the SPC, then it won't necessarily be considered a hybrid implementation. If the lab can show this, an implementation that can leverage an SPC may still be considered a software or firmware implementation.

Questions

An audience member showed irritation that the new IG does not give time to implement (ie no grace period).  Mr. Randy Easter reiterated  that this is not a new requirement, but something that was formerly missed by the testing that the labs did. They want to nip these lapses in the bud and get people back on track.  Essentially correcting assumptions that CMVP was making about what the labs were testing. There are some changes/clarifications on this lack of grace period that were sent to the labs last night (not sure what's in that, but the labs all seem happy and are anxiously checking their mail).

Another question asked why the very specific platform is required on the certificate. Mr. Easter noted that this is due to how dependent entropy is on the underlying platform - this becomes important to test and verify. This is frustrating for Android developers.  Mr. Easter noted that most consumers, though, are happy to use the same module on slight variations of a platform (for example, different releases of Windows on slightly different laptop hardware - too many variations to actually test).

Mr. Schaffer noted that certificates do not need to go down to the patch level, unless the vendor requests it.



ICMC: CAVP/CMVP Status - What's Next?

We were lucky to have three panelists from NIST: Sharon Keller, Randall Easter, and Carolyn French, so we could hear about what they are doing straight from the horse's mouth, as it were.  This is a rare opportunity for vendors to interact directly with members of these validation programs.

Ms. Keller, CAVP - NIST, explained a bit of history about this body. CMVP and CAVP used to be one group, but they were separated in 2003, as their tasks could easily be broken up.  As noted yesterday, the validations done by the CAVP are much more black and white - you've either passed the test suite, or you've failed.  The CAVP has automated this as much as possible, which also contributes to their speed.

In addition to these validations, the CAVP writes the tests, provides documentation and guidance, and provides clarification as needed.  They also keep an up-to-date webpage that lists algorithm implementations.

The rate of incoming validations seems to be growing almost exponentially!  This year, they've already issued more algorithm certificates than in all of 2012.  They also added new tests for algorithms like SHA-512/t.

Mr. Easter said that NIST had originally planned to do a follow-on conference to their 2003 conference in 2006, once FIPS-140-3 was finalized.... oops!

The original ISO/IEC 19790 (March 2006) was a copy of FIPS-140-2; the latest draft contains improvements that were hoped to be in FIPS-140-3.

Because FISMA requires that the US Government use validated crypto, these standards have become very important.

Vendors should make sure they reference the Implementation Guidance (IG). Mr. Easter noted that the IG, in their opinion, does not contain any new "shalls", merely clarifications of existing requirements. Though, talking to many vendors here, the original document was apparently so murky that these IGs seem like completely new requirements.  Now that FIPS-140-2 is so old (12 years now), the IG document is larger than the standard itself!  This can make keeping up with the requirements a real challenge.

There are actually only 8 full-time reviewers for the CMVP, and those same reviewers also have to do bi-annual audits of the 22 international labs, write Implementation Guidance, etc. - you can see why they are busy!

Reports from labs show the greatest areas of difficulty for conformance are key management, physical security, self-tests and RNGs.  Nearly 50% of the vendor implementations have non-conformance issues (8% are algorithm implementation conformance issues).

Mr. Easter apologized for the  queue and noted that this is not what they want either: they want the US Government to be able to get the latest software and hardware, too!

Currently, there are 200 reports in the queue that are actively being tracked and worked - and again, only 8 reviewers.  Is adding more reviewers the answer? How many people can they steal from the labs? ;-)

What can you do to help?  Labs should make sure the reports are complete, in good shape, with all necessary elements.  Vendors should try to answer any questions as fast as possible.  Help close the loop; make the reviewers' jobs simple.

Unfortunately, work on FIPS-140-3 seems to have stalled in the group that evaluates new algorithms, and Mr. Easter would like NIST to instead adopt ISO 19790 as FIPS-140-3 (it's ready to go: fleshed out, with DTRs) - he asked us to help pressure NIST to get this standard adopted.



ICMC: Welcome and Plenary Session

Congratulations to Program Chair Fiona Pattinson of atsec information security for putting together such an interesting program for this inaugural International Cryptographic Module Conference.   She has brought together an amazingly diverse mix of vendors, consultants, labs and NIST.  People from all over the world are here.

Our first keynote speaker is Charles H. Romine, Director, Information Technology Laboratory, NIST. Mr. Romine started off his talk by noting how important integrity and security are to his organization.  Because of this, based on community feedback, they've reopened review of algorithms and other documents.  Being open and transparent is critical to his organization.

NIST is reliant on their industry partners for things like coming up with SHA3.  They are interested in knowing what is working for their testing programs and what is not; hopefully they will get a lot of great feedback at this conference.

Because these validations are increasingly valuable in the industry, demand for reviews has gone up significantly. How can NIST keep the quality of reviews up while still meeting the demand? Mr. Romine is open to feedback from us all.

Our next keynote was from Dr. Bertrand du Castel, Schlumberger Fellow and Java Card Pioneer, titled "Do Cryptographic Modules Have a Moat?"

Dr. du Castel asks: is the problem with cryptography simply key exchange? Or is it really a trust issue?  With things like Bitcoin (now taxed in Germany) and PayPal everywhere on the Internet, it should be obvious how important trust is.  He walked us through many examples, using humorous anecdotes to demonstrate that the importance of trust is growing and growing.



Tuesday, September 24, 2013

ICMC: Introduction to FIPS-140-2

Presented by Steve Weingart, Cryptographic & Security Testing Laboratory Manager, at atsec information security.

While I'm not new to FIPS-140-2, as I'm here in Gaithersburg, MD for the inaugural International Cryptographic Module Conference, it's always good to get a refresher from an expert.  Please note: these are my notes from this talk and should not be taken as gospel on the subject. If you need a FIPS-140 validation - you should probably engage a professional :-)

Since the passage of the Federal Information Security Management Act (FISMA) of 2002, the US Federal Government can no longer waive FIPS requirements. Vendors simply need to comply with the various FIPS standards, including FIPS-140-2 (the FIPS standard relevant to cryptography and security modules).  Financial institutions and others that care about third party evaluations also want to see this standard implemented.

Technically, a non-compliant agency could lose their funding.

FIPS-140 is a joint program between US NIST and Canadian CSEC (aka CSE).

All testing for FIPS-140 is done by accredited labs (accredited under the National Voluntary Laboratory Accreditation Program, NVLAP).  Labs, though, cannot perform consulting on the design of the cryptographic module - it could be seen as a conflict of interest if they designed and tested the same module.  They can use content you provide to make your Security Policy and Finite State Machine (FSM), as those documents have to be in a very specific format that individual vendors will likely have trouble creating their first time out.

A cryptographic module is defined by its security functionality and a well-defined boundary, and must have at least one approved security function.  A module can be implemented in hardware, software, firmware, software-hybrid, or firmware-hybrid.

Security functionality can be: symmetric/asymmetric key cryptography, hashing, message authentication, RNG, or key management.

The testing lab makes sure that you've implemented the approved security functions correctly to protect sensitive information, that the module cannot be changed after it's been validated (the integrity requirement), and that unauthorized use of the module is prevented.

Your users need to be able to query the module to see if it is performing correctly and to see the operational state it is in.

FIPS-140 has been around since about 1982, originally as Federal Standard 1027.  Of course, as technology changes, the standard gets out of date.  FIPS-140-2 came out in 2001 with some change notices in 2002.  FIPS-140-3 has had many false starts.  A large quantity of implementation guidance has come out (the IG is approaching, if not overtaking, the size of the initial FIPS-140-2 document).

Some brief clarifications: the -<number> on the standard refers to the version.  The first was FIPS-140, next FIPS-140-1, and the final currently adopted one is FIPS-140-2.

Each of those versions has levels that you can be evaluated at. Level one is the "easiest"; level four is the hardest (available to hardware only).

FIPS-140-3 has had two rounds of public drafts with over 1200 comments, but it seems there is just one person still working on the draft.  In addition, there are no Derived Test Requirements (DTRs), so the labs cannot even consider writing tests for the standard.

There is a newer version of "FIPS-140-3" from ISO (ISO/IEC 19790, released in 2012), essentially competing with the NIST draft - though the original goal was for the ISO standard to be the same document, just published as an international standard.

Right now, you can validate against the ISO standard in other countries, though not in the US or Canada.  If you used an international body to validate your module against the ISO standard, it would not get you in the door with US or Canadian Government customers.

It's up to NIST and CSEC to pick one of these, create transition guidance and testing information to let vendors and labs move forward.

ISO 19790 has many improvements, but if you implement them, you will not pass FIPS-140-2 testing. For example, ISO 19790 allows lazy POST (only testing when needed), but FIPS-140-2 requires POST of the entire boundary any time any part of the boundary is used.

FIPS-140-2 has four levels, and it doesn't matter if 99% of your module meets all of the items required for a higher level (like level 2) - if the remaining 1% only meets level 1, you cannot be validated at the higher level.

The ISO document doesn't point to specific Common Criteria EAL levels, helping to alleviate the chicken-and-egg circular dependency in FIPS-140 levels. For example, FIPS-140-2 Level 2 requires an EAL-validated OS underneath, while the EAL validation requires a FIPS-140-2 validated crypto module.

The finite state model means that the module can only be in one state at a time. That means you cannot be generating key material at the same time you're performing cryptographic operations in another thread. This can be very difficult to accomplish with modern multi-threaded programs - and the labs that do the validation review source code, too, so no sneaking around them!
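
A toy sketch of what that one-state-at-a-time discipline might look like in code (all names are mine, purely illustrative):

```c
/* Toy sketch of a finite state model (all names are illustrative):
 * the module occupies exactly one state at a time, guarded by a lock,
 * so one thread cannot generate keys while another runs crypto ops. */
#include <pthread.h>

typedef enum {
    STATE_SELF_TEST,
    STATE_IDLE,
    STATE_KEY_GENERATION,
    STATE_CRYPTO_OPERATION,
    STATE_ERROR
} module_state_t;

static module_state_t current_state = STATE_SELF_TEST;
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

/* A service may only proceed if it can claim the module from IDLE. */
static int enter_state(module_state_t next)
{
    int ok = 0;
    pthread_mutex_lock(&state_lock);
    if (current_state == STATE_IDLE) {
        current_state = next;
        ok = 1;
    }
    pthread_mutex_unlock(&state_lock);
    return ok;
}

/* Called when the service completes; the error state is sticky. */
static void return_to_idle(void)
{
    pthread_mutex_lock(&state_lock);
    if (current_state != STATE_ERROR)
        current_state = STATE_IDLE;
    pthread_mutex_unlock(&state_lock);
}
```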

Mr. Weingart keeps reminding us that FIPS-140 is a validation, not an evaluation.

There were some good questions about how things like OASIS KMIP interact with FIPS-140-2's cryptographic key management requirements.  The general thought is that they should harmonize, but KMIP doesn't seem to be referenced.

Around key generation, RNG and entropy generation are very important, and with the current news, this is being heavily scrutinized by NIST right now.  Simply using /dev/random (without knowing anything about its entropy sources) is not sufficient. Of course, when you're also the provider of /dev/random, you have a bit more knowledge. We should expect further guidance in this area.
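
Purely as illustration (not guidance from the talk): even the "obvious" approach of pulling seed bytes from /dev/random takes some care to get right, and it still tells you nothing about the quality of the underlying entropy sources:

```c
/* Sketch only: read seed material from /dev/random for a DRBG.
 * Doing this correctly (short reads, blocking) is the easy part;
 * the validation question is what you know about the entropy behind it. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    unsigned char seed[48];   /* e.g., enough to instantiate a DRBG */
    int fd = open("/dev/random", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    ssize_t got = 0;
    while (got < (ssize_t)sizeof seed) {
        ssize_t n = read(fd, seed + got, sizeof seed - got);
        if (n <= 0) { perror("read"); close(fd); return 1; }
        got += n;   /* /dev/random may return fewer bytes than asked */
    }
    close(fd);
    printf("collected %zd bytes of seed material\n", got);
    return 0;
}
```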

Cryptographic modules have to complete power-on self-tests (POSTs) to ensure that the module is functioning properly - no shortcuts allowed! (Again, your code will be reviewed - shortcuts will be seen!)  There are also some conditional self-tests - tests run when a certain condition occurs, for example, generating a key.

If any of these tests fail, you must not perform *any* cryptographic operations until the failure state has been cleared and all POSTs are rerun.  That is, even if only your POST for SHA1 fails, you cannot provide even AES or ECC.
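
A small sketch of that all-or-nothing rule (stub test functions, entirely my own illustration):

```c
/* Sketch of the "all or nothing" POST rule: if any single algorithm's
 * self-test fails, the whole module enters an error state and refuses
 * every service, not just the failed algorithm. Test names are stubs. */
static int post_sha1_kat(void) { return 0; }  /* 0 = pass */
static int post_aes_kat(void)  { return 0; }
static int post_ecc_pct(void)  { return 0; }

static int in_error_state = 0;

static void run_all_posts(void)
{
    if (post_sha1_kat() || post_aes_kat() || post_ecc_pct())
        in_error_state = 1;   /* one failure blocks AES, ECC, everything */
}

int aes_encrypt_service(void)
{
    if (in_error_state)
        return -1;            /* no output until POSTs are rerun cleanly */
    /* ... perform AES ... */
    return 0;
}
```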

If you make any additional "extra credit" security claims, like "we protect against timing attacks", that claim either needs to be verified by the lab, or a disclaimer needs to be placed in your security policy.

Implementation Guidance

There is a lot of new implementation guidance coming out, fast and furious.

The most contentious one is the requirement to run power-on self-tests all the time (whether in FIPS mode or not). This can be problematic, particularly for something like a general purpose OS or a smartcard. For things that may not have been designed for this, or that just don't have great performance capabilities (like smartcards), it can make your device or system unusable for customers that do not need FIPS-140 validated hardware/software.

IG G.14, for example, has some odd things on algorithms: RSA 4096 will be removed from approval, but RSA 2048 won't be. This seems to be related to performance issues, according to discussions in the room, but that seems a harsh punishment for perf issues.  Check SP 800-131A for more details about what will and will not be allowable going forward.

Cryptographic Algorithm Validation Program

Any algorithms used in the approved mode need to be validated to make sure they are operating correctly.  This step is required before you can submit to the CMVP (Cryptographic Module Validation Program).  It's a very mechanical process - you have either passed the algorithm tests, or you haven't.  CAVP turnaround to issue certificates for your algorithms is typically very quick, because there isn't any wiggle room or room for interpretation.
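
To give a feel for how mechanical this is, here's a toy known-answer test in the spirit of what the CAVP testing automates. It uses OpenSSL's SHA256() purely for illustration; real CAVP testing is driven by official request/response files exchanged with your lab:

```c
/* Toy known-answer test: hash a fixed input and compare the output
 * byte-for-byte against the expected digest. Pass/fail, no judgment. */
#include <openssl/sha.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    /* FIPS 180 test vector: SHA-256("abc") */
    static const unsigned char expected[SHA256_DIGEST_LENGTH] = {
        0xba,0x78,0x16,0xbf,0x8f,0x01,0xcf,0xea,0x41,0x41,0x40,0xde,
        0x5d,0xae,0x22,0x23,0xb0,0x03,0x61,0xa3,0x96,0x17,0x7a,0x9c,
        0xb4,0x10,0xff,0x61,0xf2,0x00,0x15,0xad
    };
    unsigned char out[SHA256_DIGEST_LENGTH];

    SHA256((const unsigned char *)"abc", 3, out);
    puts(memcmp(out, expected, sizeof out) == 0 ? "KAT: PASS" : "KAT: FAIL");
    return 0;
}
```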

You will work with one of the labs (there are currently 22 accredited) on this, and possibly a consulting firm that will help you work with the lab on your documentation, design, architecture, etc.  You can hire another lab as a consultant; your own lab just cannot consult for you (back to the rule that they cannot test what they designed).

Preparing yourself for validation

Read FIPS-140-2, the implementation guidance, the SP 800-series documents, etc.

Take training, where available and possible.

Enlist help on your design and architecture as early as possible, get a readiness assessment.

You can do your algorithm testing early, to find and fix problems early in your development.

Iterate as needed (if this is your first time, you'll almost certainly have to iterate to get this right).



Friday, April 26, 2013

Security Attacks: From the Lab to the Streets: Automobiles

I dated a guy in high school who drove a Ford Escort.  I know, not that amazing. It gets more interesting, I promise. His father drove a Mercury Lynx. Those of you familiar with the American automotive industry will know that those were essentially the same car, just with different badges on them.

I know, you're getting jealous of the exclusive circles I hung out in [1], but the point was, these cars came from different dealers and were purchased at different times.  What's interesting is that the keys for the Escort could unlock the Lynx.  The keys for the Lynx could unlock and start the Escort.  No, this family hadn't paid outrageous sums of money to get their cars rekeyed so this would work.  It just did.  These were 1980s model cars, and at the time, the American automotive industry just didn't make that many key combinations.

This became well known, and break-ins with no evidence of forced entry started happening at the mall where I worked.

Well, car manufacturers learned their lesson and came up with secure electronic keys.

At USENIX Security 2011, I attended a great set of talks on Analysis of Deployed Systems.  One talk, Comprehensive Experimental Analysis of Automotive Attack Surfaces (scroll down in my previous post; it was the 3rd talk), covered how it was possible, with some effort, not only to remotely unlock someone else's car, but also to start it and control it while in motion.  They found a car that had a live IRC channel on it.  You know, in case you need to chat with your car.  Heck, the researchers even reprogrammed the dashboard to display their website URL.

Really, the problem here is trying to cut costs and use as much vanilla software as possible.

Now, ABC is reporting that police are perplexed by a rash of automobile break-ins where the perpetrators are not physically attacking the vehicle.  Clearly, neither ABC nor the police attended the same USENIX Security talk that I did.

What do you think about modern cars and physical security?

[1] They did have an immaculate '57 Chevy in the garage. Yeah, but still.