Thursday, March 26, 2015

PKCS#11 Webinar Friday (That's Tomorrow!)

Bob Griffin, EMC, and I will be presenting the history of PKCS#11 and where we are going with the standard in our OASIS Technical Committee on Friday, March 27, 2015 at 8AM PT. This is in preparation for our OASIS-wide vote for PKCS#11 2.40 to become an official OASIS standard (boy, this process has taken longer than I imagined possible!).

Come along and hear all about it, and ask me and my co-chair questions!

You can register here at the OASIS site.

"See" you there!


Monday, March 16, 2015

Vote for Me!: Open Crypto Standards Talk at RSA

I would like to give a talk on PKCS#11 and KMIP and how you can escape vendor lock-in by using open standards at this year's upcoming RSA conference, but I can only do it if I can get your vote! This year, RSA is "crowdsourcing" a few talks - the most popular will be sent to their program committee. I only have a chance if I get your vote.

Voting closes on April 2, so please don't delay!

Attendees' votes count double, but even non-attendees can vote. Please check out my talk and vote for me. Thank you!

Wednesday, March 11, 2015

International Women's Day Breakfast at Google

I was so honored to be asked to join a breakfast for a small group of women at Google this past Saturday in celebration of International Women's Day hosted by the Women Tech Makers group.  It was a great place to get to know other senior women in the industry, with loads of time for networking.

I was so impressed by the story of how Natalie Villalobos (our host) asked her boss, Megan Smith (now US CTO), about turning her 20% passion project - inspiring women in tech and improving diversity in the industry - into a full-time job. Ms. Smith agreed and talked to others at Google to get funding for that as a full-time position! Now Ms. Villalobos gets to work on Women Tech Makers full time - what an awesome job!

I loved our keynote speaker, Suzanne Frey, Director of Policy, Trust & Security, Google Apps. She was full of energy, inspiring, and super smart. She had the following advice for the women leaders in the room:

  1. Ms. Frey had learned about the cingulate gyrus - the part of the brain that does the "look back" (i.e., how could this conversation have gone better, what if I had gone to that event instead of this other one, etc.). Basically, she learned that women's cingulate gyrus fires much more frequently than men's. The summary of the research was that men are less focused on potential mistakes of the past and more focused on moving forward, looking ahead. Her advice? Stop ruminating on the past!

    As someone who has spent many a night tossing and turning thinking how I could've better handled a situation, how I wish I would've called my grandmother one more time at the holidays (I did not know she was sick and dying until it was too late and she had already passed), what if I had picked up my grandma's cat sooner - could I have prevented his FIP from flaring out of control and killing him? Was I too short in that text message to my friend? What if, what if, what if...

    I remember years ago complaining to my mother that my husband fell asleep instantly and she responded, "Men do that, they don't worry about the day behind them - they want their rest for the day ahead. Their minds are quiet at night."

    Of course, there is a certain amount of contemplation both men and women should always do, so that you can improve your future performance - but you cannot change the past or the actions of others, and it's better to learn the lesson and move on.
  2. Have your own personal board of directors: other women leaders you can bounce problems and ideas off of. Meet regularly. She meets with hers every quarter - an international group, so meeting times can be very irregular for the US members.

    This is hard for me - when I go to an event like this breakfast, I get so much out of talking to other female managers. We face similar challenges but have very diverse ways of looking at things. It's the keeping it going afterwards that's hard.

    Do you have a personal board of directors? If so, how did you set it up? How do you keep it going?
  3. Your intentions and how you are perceived are not always the same. Be aware of that - it will impact how you take action. It's important to believe in yourself - that will change how you are perceived!
     
  4. Start to reinvest in yourself. Do NOT do things that deplete you. Use services like TaskRabbit - you are worth the few dollars a month, and your time is worth so much more. Your energy and efforts are better spent on yourself and elsewhere than on housecleaning, etc.

There was lots of great advice from around the table as well - including learning how to pick our battles, how to manage children/family and work, and calendar management.

What tips do you have for emerging women leaders?

Friday, November 21, 2014

ICMC: Entropy Sources - Recommendations for a Scalable, Repeatable and Comprehensive Evaluation Process

Sonu Shankar, Software Engineer, Cisco Systems
Alicia Squires, Manager, Global Certifications Team, Cisco
Ashit Vora, Lab Director and Co-Founder, Acumen Security


When you're evaluating entropy, your process has to be scalable, repeatable and comprehensive... well, comprehensive in a way that doesn't outweigh the assurance level you're going for. Ideally, the method used for the evaluation would be valid for both FIPS 140 and Common Criteria.

Could we have the concept of a "module" certificate for entropy sources?

Let's think about the process for how we'd get there. We'd have to look at the entropy source, covering min-entropy estimation, review of built-in health tests, built-in oversampling, and a high-level design review.

There are several schemes that cover entropy and how to test it. You need to have a well-documented description of the entropy source design, and leverage tools for providing statistical analysis of raw entropy. It would be good to add statistical testing and heuristic analysis - but will vendors have the expertise to do this correctly?

How do you test for this? First, you have to collect raw entropy - disabling all of the conditioners (no hashing, LFSR, etc.). That's not always possible, as many chips also do the conditioning, so you cannot get at the raw entropy. And if you can't get the raw entropy, it's not worth testing the output - as long as you've got good conditioning, even poor entropy will look like good entropy.

In order to run this test, you need to have at least one file of entropy containing 1 million symbols, and the file has to be in binary format.

When it comes time to look at the results, the main metric is min-entropy.
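
To make that concrete, here's a minimal sketch (mine, not from the talk) of a most-common-value style min-entropy estimate, run over a raw binary capture like the one described above. The file name is made up, and SP 800-90B's real estimator additionally applies an upper confidence bound to the observed probability:

```python
import math
from collections import Counter

def mcv_min_entropy(samples: bytes) -> float:
    """Most-common-value estimate: min-entropy is driven entirely by
    the probability of the single likeliest symbol."""
    p_max = max(Counter(samples).values()) / len(samples)
    return -math.log2(p_max)

# Hypothetical raw-noise capture: 1,000,000 8-bit symbols, binary format.
with open("raw_entropy.bin", "rb") as f:
    data = f.read()

print(f"Estimated min-entropy: {mcv_min_entropy(data):.3f} bits per byte")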

You need to be careful, though, to not over sample from your entropy source and drain it. You need to be aware of how much entropy it can provide and use it appropriately. [* Not sure if I caught this correctly, as what I heard and saw didn't quite sync, and the slide moved away too quickly]

When it comes to reviewing the noise source health tests - they need to catch catastrophic errors and reductions in entropy quality. This is your first line of defense against side channel attacks. The tests may be implemented in software pre-DRBG or built in to the source.
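
As an illustration of the "catch catastrophic errors" idea, here's a sketch in the spirit of SP 800-90B's Repetition Count Test (my simplification, not the normative algorithm): given a claimed min-entropy per sample, there's a run length so improbable that seeing it almost certainly means the source is stuck:

```python
import math

def repetition_count_ok(samples: bytes, h_min: float, alpha_exp: int = 20) -> bool:
    """Flag a stuck noise source: a run of identical symbols reaching
    the cutoff C = 1 + ceil(alpha_exp / h_min) has probability <= 2**-alpha_exp
    if the source really delivers h_min bits of min-entropy per sample."""
    cutoff = 1 + math.ceil(alpha_exp / h_min)
    prev, run = None, 0
    for s in samples:
        run = run + 1 if s == prev else 1
        prev = s
        if run >= cutoff:
            return False  # catastrophic failure detected
    return True

# A source claiming 7.5 bits/byte gets a cutoff of 1 + ceil(20 / 7.5) = 4.
assert repetition_count_ok(b"\x01\x01\x01\x02", h_min=7.5)
assert not repetition_count_ok(b"\x01" * 4, h_min=7.5)
```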

Ideally, these entropy generators could have their own certificate, so that 3rd parties could use someone else's hardware for an entropy source - without having to worry about difficult vendor NDA issues.

ICMC: Entropy: A FIPS and Common Criteria Perspective Including SP 800-90B (G22A)

Gary Granger, AT&E Technical Director, Leidos

Random values are required for applications using cryptography (such as for crypto keys, nonces, etc.).

There are two basic strategies for generating random bits - a non-deterministic random bit generator (NDRBG) and a deterministic random bit generator (DRBG). Both strategies depend on unpredictability.
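
To illustrate the deterministic side, here's a toy HMAC-DRBG-style sketch (mine, and NOT a conformant SP 800-90A implementation): after seeding, it is pure computation, so every bit of unpredictability has to come in through the seed - which an NDRBG (here, the OS pool via os.urandom) supplies:

```python
import hashlib
import hmac
import os

class TinyHmacDrbg:
    """Toy HMAC-DRBG (illustrative only): deterministic after seeding."""

    def __init__(self, seed: bytes):
        self.k = b"\x00" * 32
        self.v = b"\x01" * 32
        self._update(seed)

    def _mac(self, key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(self, data: bytes) -> None:
        self.k = self._mac(self.k, self.v + b"\x00" + data)
        self.v = self._mac(self.k, self.v)
        if data:
            self.k = self._mac(self.k, self.v + b"\x01" + data)
            self.v = self._mac(self.k, self.v)

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.v = self._mac(self.k, self.v)
            out += self.v
        self._update(b"")
        return out[:n]

# Same seed -> same output stream; unpredictability comes from the seed.
drbg = TinyHmacDrbg(seed=os.urandom(32))
key_material = drbg.generate(16)
```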

The entropy source is covered in NIST SP 800-90B (design and testing requirements). Entropy source model: noise source, conditioning component, and health tests.

How do we measure entropy? A noise source sample represents a discrete random variable. There are several measures of entropy based on a random variable's probability distribution, like Shannon entropy or min-entropy. NIST SP 800-90B specifies requirements using min-entropy (a conservative estimate that facilitates entropy estimation).
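
For a source emitting symbol i with probability p_i, the two measures mentioned compare like this (standard definitions, not from the slides):

```latex
H_{\mathrm{Shannon}} = -\sum_i p_i \log_2 p_i
\qquad
H_{\min} = -\log_2\!\left(\max_i p_i\right)
```

Because min-entropy scores the source by an attacker's single best guess, H_min <= H_Shannon always holds - which is exactly what makes it the conservative choice.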
 
FIPS has additional implications for RNGs in its implementation guidance, specifically IG 7.11. It defines non-deterministic random number generators (NDRNG), identifies FIPS 140 requirements for tests, etc.

IG 7.13 covers cryptographic key strength modified by an entropy estimate. For example, the entropy has to have at least 112 bits of security strength, or the associated algorithm and key shall not be used in the approved mode of operation.
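
To make the arithmetic concrete (my numbers, not the speaker's): if a noise source is assessed at 0.5 bits of min-entropy per raw sample bit, then crediting a key with the 112-bit floor means collecting at least

```latex
\left\lceil \frac{112\ \text{bits of strength}}{0.5\ \text{bits of min-entropy per raw bit}} \right\rceil = 224\ \text{raw bits}
```

before the derived key may be used in the approved mode.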

But the basic problem remains - entropy standards and test methods do not yet exist. How can a vendor determine and document an estimate of their entropy? How do we back up our claims?

There are also different concerns to consider if you are using an internal (to your boundary) source of entropy or an external (to your boundary) source for entropy.


ICMC: Validation of Cryptographic Protocol Implementations

Juan Gonzalez Nieto, Technical Manager, BAE Systems Applied Intelligence

FIPS 140-2 and its Annexes do not cover protocol security, but the goal of this standard (and the organizations controlling it) is to provide better crypto implementations. If the protocol around the crypto has issues, your crypto cannot protect you.

Mr. Nieto's problematic protocol example is TLS - he showed us a slide with just the vulns of the last 5 years... it ran off of the page (and the font was not that large....).

One of the issues is the complexity of the protocol. From a cryptographer's point of view, it's simple: RSA key transport or signed Diffie-Hellman + encryption. In reality, it's a huge collection of RFCs that is difficult to put together.

TLS/SSL has been around since 1995, with major revisions every few years (TLS 1.3 is currently in draft). The basics of TLS are a handshake protocol and a record layer. Sounds simple, but there are so many moving parts: key exchange + signature + encryption + MAC... and all of those have many possible options. When you combine all of those permutations, you end up with a horrifyingly long and complicated list (an entertainingly cramped slide results). :)
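
The explosion is easy to reproduce - a toy enumeration with abbreviated (not exhaustive) option lists per slot already produces hundreds of combinations:

```python
from itertools import product

# Abbreviated, illustrative option lists - the real registries are far longer.
key_exchange = ["RSA", "DHE", "ECDHE", "PSK"]
signature = ["RSA", "DSA", "ECDSA"]
encryption = ["AES-128-CBC", "AES-256-CBC", "AES-128-GCM", "3DES", "RC4"]
mac = ["MD5", "SHA1", "SHA256", "SHA384"]

suites = ["-".join(parts) for parts in product(key_exchange, signature, encryption, mac)]
print(len(suites))  # 4 * 3 * 5 * 4 = 240 combinations, from tiny lists
```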

But where are the vulnerabilities showing up? Answer: everywhere (another hilarious slide ensues). Negotiation protocol, applications, libraries, key exchange, etc... all the places.

Many of the TLS/SSL cipher suites contain primitives that are vulnerable to cryptanalytic attacks and are not allowed by FIPS 140-2, like DES, MD5, SHA1 (for signing), RC2, RC4, GOST, SkipJack...

RSA key transport happens with RSA PKCS#1 v1.5 - but that's not allowed by FIPS 140-2, except for key transport. (See Bleichenbacher 1998.)

There are mitigations for the Bleichenbacher attack, but as of this summer's USENIX Security conference... they're not great anymore. So, really, do not use static RSA key transport (the TLS 1.3 draft proposes removing it). Recommendation: FIPS 140 should not allow PKCS#1 v1.5 for key transport. People should use RSA-OAEP for key transport (which is already approved).
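
Switching is not hard - here's a minimal sketch of RSA-OAEP key transport using the Python cryptography package (the freshly generated key pair is just for illustration; in real key transport the public key comes from the peer):

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Illustrative key pair; in practice you'd use the recipient's public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

session_key = os.urandom(32)                     # key to be transported
wrapped = public_key.encrypt(session_key, oaep)  # OAEP, not PKCS#1 v1.5
assert private_key.decrypt(wrapped, oaep) == session_key
```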

Implementation issues, such as a predictable IV in AES-CBC mode, can expose plaintext recovery attacks. When the protocol is updated to mitigate one, such as the fix in TLS 1.1/1.2 for Vaudenay's (2002) padding oracle attack, often something else comes along to take advantage of the fix (Lucky 13, a timing-based attack).
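
The predictable-IV pitfall itself is cheap to avoid: generate a fresh random IV per message and send it along with the ciphertext. A sketch with the Python cryptography package (block-aligned input assumed so I can skip padding):

```python
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)

def encrypt_block_aligned(plaintext: bytes) -> bytes:
    """CBC with a fresh, unpredictable IV per message; the IV is public
    and travels with the ciphertext. Assumes len(plaintext) % 16 == 0
    (real code needs PKCS#7 padding)."""
    iv = os.urandom(16)  # never reuse, never derive predictably
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(plaintext) + encryptor.finalize()

ct = encrypt_block_aligned(b"sixteen byte msg")  # exactly 16 bytes
```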

Sometimes FIPS 140-2 just can't help us - for example, with the POODLE (2014) attack on SSL 3.0 (mitigation: disable SSL 3.0), FIPS 140-2 wouldn't have helped. Authenticated encryption protocols are out of scope. Compression attacks like CRIME (2012)? Out of scope for FIPS 140-2.

Since Heartbleed, the CMVP has started asking labs to test known vulnerabilities. But, perhaps CMVP should address other well-known vulns?

Alas, most vulnerabilities occur outside of the cryptographic boundary of the module, so they are out of scope. The bigger the boundary, the more complex testing becomes. FIPS 140-2's implicit assumption - that if the crypto primitives are correct, then the protocols will likely be correct - is flawed.

Perhaps we need a new approach for validation of cryptography that includes approved protocols and protocol testing?

In my personal opinion, I would like to see some of that expanded - but WITHOUT including the protocols in the boundary. As FIPS 140-2 does not have any concept of flaw remediation, if something like Heartbleed had been inside the boundary (and missed by the testers), vendors would have found it but had to break their validation in order to fix it.

Thursday, November 20, 2014

ICMC: Validating Sub-Chip Modules and Partial Cryptographic Accelerators

Carolyn French, Manager, CMVP, CSEC
Randall Easter, NIST, Security Testing Validation and Management Group

Partial Cryptographic Accelerators

Draft IG 1.9: A Hybrid Module is a crypto software module that takes advantage of "Security Relevant Components" on a chip.

But, that doesn't cover modern processors like Oracle's SPARC T4 and Intel's AES-NI - so there is a new IG (1.X): Processor Algorithm Accelerators (PAA). If the software module relies on the instructions provided by the PAA (a mathematical construct, not the complete algorithm as defined in NIST standards) and cannot act independently - it's still a hybrid. If there are issues with the hardware and the software could work on its own (or on other platforms), then it is NOT a hybrid. (YAY for clarification!)

Sub-Chip Modules

What is this? A complete implementation of a defined cryptographic module is implemented on part of a chip substrate. This is different than when a partial implementation of a defined cryptographic module is implemented on part of a chip substrate (see above).

A sub-chip has a logical soft core. The cryptographic module has a contiguous and defined logical boundary with all crypto contained within. During physical placement, the crypto gates are scattered. Testing at the logical soft core boundary does not verify correct operation after synthesis and placement.

There are a lot of requirements in play here for these sub-chip modules. There is a physical boundary and a logical boundary. The physical boundary is around a single chip. The logical boundary will represent the collection of physical circuitry that was synthesized from the high level VHDL soft core cryptographic models.

Porting is a bit more difficult here - the soft core can be re-used, unchanged, and embedded in other single-chip constructs - this requires operational regression testing. This can be done at all levels, as long as other requirements are met.

If you have multiple disjoint sub-chip crypto implementations... you can still do this, but it will result in two separate cryptographic modules/boundaries.

What if there are several soft cores, and they want to talk to each other? If I have several different disjoint software modules that are both validated and on the same physical device, we allow them to exchange keys in the clear. So, why not here? As long as the keys are being transferred directly, and not routed outside the chip through an intermediary.

As chip densities increase, we're going to see more of these cores on one chip.