Friday, November 21, 2014

ICMC: Entropy Sources - Recommendations for a Scalable, Repeatable and Comprehensive Evaluation Process

Sonu Shankar, Software Engineer, Cisco Systems
Alicia Squires, Manager, Global Certifications Team, Cisco
Ashit Vora, Lab Director and Co-Founder, Acumen Security

When you're evaluating entropy, your process has to be scalable, repeatable and comprehensive... well, comprehensive in a way that doesn't outweigh the assurance level you're going for. Ideally, the method used for the evaluation would be valid for FIPS 140 and Common Criteria.

Could we have the concept of a "module" certificate for entropy sources?

Let's think about the process for how we'd get here. We'd have to look at the Entropy Source: covering min-entropy estimation, review of built-in health tests, built-in oversampling, and a high-level design review.

There are several schemes that cover entropy and how to test it. You need to have a well documented description of the entropy source design, and leverage tools for providing statistical analysis of raw entropy.  It would be good to add statistical testing and heuristic analysis - but will vendors have the expertise to do this correctly?

How do you test for this?  First, you have to collect raw entropy - disabling all of the conditioners (no hashing, LFSR, etc.) - which is not always possible, as many chips also do the conditioning, so you cannot get the raw entropy. If you can't get the raw entropy, then it's not worth testing - as long as you've got good conditioning, it will look like good entropy.

In order to run this test, you need to have at least one file of entropy containing 1 million symbols, and the file has to be in binary format.

When it comes time to look at the results, the main metric is min-entropy.

You need to be careful, though, not to oversample from your entropy source and drain it. You need to be aware of how much entropy it can provide and use it appropriately. [* Not sure if I caught this correctly, as what I heard and saw didn't quite sync, and the slide moved away too quickly]

When it comes to reviewing noise source health tests - you need to catch catastrophic errors and reductions in entropy quality. This is your first line of defense against side channel attacks. This may be implemented in software pre-DRBG or built into the source.

Ideally, these entropy generators could have their own certificate, so that 3rd parties could use someone else's hardware for an entropy source - without having to worry about difficult vendor NDA issues.

ICMC: Entropy: A FIPS and Common Criteria Perspective Including SP 800-90B (G22A)

Gary Granger, AT&E Technical Director, Leidos

Random values are required for applications using cryptography (such as for crypto keys, nonces, etc)

There are two basic strategies for generating random bits - a non-deterministic random bit generator (NDRBG) and a deterministic random bit generator (DRBG).  Both strategies depend on unpredictability.

Entropy source is covered in NIST SP 800-90B (design and testing requirements).  Entropy source model: Noise source, conditioning component, and health tests.

How do we measure entropy? A noise source sample represents a discrete random variable. There are several measures of entropy based on a random variable's probability distribution, like Shannon Entropy or Min-Entropy.  NIST SP 800-90B specifies requirements using min-entropy (a conservative estimate that facilitates entropy estimation).
FIPS has additional implications for RNGs in its implementation guidance, specifically IG 7.11. It defines non-deterministic random number generators (NDRNG), identifies FIPS 140 requirements for tests, etc.

IG 7.13 covers cryptographic key strength modified by an entropy estimate.  For example, the entropy has to support at least 112 bits of security strength, or the associated algorithm and key shall not be used in the approved mode of operation.

But the basic problem - entropy standards and test methods do not yet exist. How can a vendor determine and document an estimate of their entropy? How do we back up our claims?

There are also different concerns to consider if you are using an internal (to your boundary) source of entropy or an external (to your boundary) source for entropy.

ICMC: Validation of Cryptographic Protocol Implementations

Juan Gonzalez Nieto, Technical Manager, BAE Systems Applied Intelligence

FIPS 140-2 and its Annexes do not cover protocol security, but the goal of this standard (and the organizations controlling it) is to provide better crypto implementations.  If the protocol around the crypto has issues, your crypto cannot protect you.

Mr. Nieto's problematic protocol example is TLS - he showed us a slide with just the vulns of the last 5 years... it ran off of the page (and the font was not that large....).

One of the issues is the complexity of the protocol. From a cryptographer's point of view, it's simple: RSA key transport or signed Diffie-Hellman + encryption. In reality, it's a huge collection of RFCs that is difficult to put together.

TLS/SSL has been around since 1995, with major revisions every few years (TLS 1.3 is currently in draft).  The basics of TLS are a handshake protocol and a record layer.  Sounds simple, but there are so many moving parts. Key exchange + Signature + Encryption + MAC... and all of those have many possible options.  When you combine all of those permutations, you end up with a horrifyingly long and complicated list (an entertainingly cramped slide results). :)

But where are the vulnerabilities showing up?  Answer: everywhere (another hilarious slide ensues). Negotiation protocol, applications, libraries, key exchange, etc... all the places.

Many of the TLS/SSL cipher suites contain primitives that are vulnerable to cryptanalytic attacks and are not allowed by FIPS 140-2, like DES, MD5, SHA-1 (for signing), RC2, RC4, GOST, Skipjack...

The RSA key transport is happening with RSA PKCS#1 v1.5 - but that's not allowed by FIPS 140-2, except for key transport. (See Bleichenbacher 1998.)

There are mitigations for the Bleichenbacher attack, but as of this summer's USENIX Security conference... they're not great anymore. So, really, do not use static RSA key transport (as proposed in the TLS 1.3 draft). Recommendation: FIPS 140 should not allow PKCS#1 v1.5 for key transport.  People should use RSA-OAEP for key transport (which is already approved).

Implementation issues, such as a predictable IV in AES-CBC mode, can enable plaintext recovery attacks. When the protocol is updated to mitigate, such as the fix in TLS 1.1/1.2 for Vaudenay's (2002) padding oracle attack, often something else comes along to take advantage of the fix (Lucky 13, a timing based attack).

Sometimes FIPS 140-2 just can't help us - for example, with the POODLE (2014) attack on SSL 3.0 (mitigation: disable SSL 3.0), FIPS 140-2 wouldn't have helped. Authenticated encryption protocols are out of scope.  Compression attacks like CRIME (2012)? Out of scope for FIPS 140-2.

Since Heartbleed, the CMVP has started asking labs to test known vulnerabilities. But, perhaps CMVP should address other well-known vulns?

Alas, most vulnerabilities occur outside of the cryptographic boundary of the module, so they are out of scope.  The bigger the boundary, the more complex testing becomes.  FIPS 140-2's implicit assumption - that if the crypto primitives are correct, then the protocols will likely be correct - is flawed.

Perhaps we need a new approach for validation of cryptography that includes approved protocols and protocol testing?

In my personal opinion, I would like to see some of that expanded - but WITHOUT including the protocols in the boundary. As FIPS 140-2 does not have any concept of flaw remediation, if something like Heartbleed had been inside the boundary (and missed by the testers) - vendors would have found it, but would have had to break their validation in order to fix it.

Thursday, November 20, 2014

ICMC: Validating Sub-Chip Modules and Partial Cryptographic Accelerators

Carolyn French, Manager, CMVP, CSEC
Randall Easter, NIST, Security Testing Validation and Management Group

Partial Cryptographic Accelerators

Draft IG 1.9: Hybrid Module is crypto software module that takes advantage of "Security Relevant Components" on a chip.

But, that doesn't cover modern processors like Oracle's SPARC T4 and Intel's AES-NI - so there is a new IG (1.X): Processor Algorithm Accelerators (PAA).  If the software module relies on the instructions provided by the PAA (a mathematical construct and not the complete algorithm as defined in NIST standards), and cannot act independently - it's still a hybrid.  If there are issues with the hardware and the software could work on its own (or on other platforms), then it is NOT a hybrid. (YAY for clarification!)

Sub-Chip Modules

What is this? A complete implementation of a defined cryptographic module is implemented on part of a chip substrate.  This is different from when a partial implementation of a defined cryptographic module is implemented on part of a chip substrate (see above).

A sub-chip has a logical soft core. The cryptographic module has a contiguous and defined logical boundary with all crypto contained within. During physical placement, the crypto gates are scattered. Testing at the logical soft core boundary does not verify correct operation after synthesis and placement.

There are a lot of requirements in play here for these sub-chip modules. There is a physical boundary and a logical boundary. The physical boundary is around a single chip. The logical boundary will represent the collection of physical circuitry that was synthesized from the high level VHDL soft core cryptographic models.

Porting is a bit more difficult here - the soft core can be re-used, unchanged, and embedded in other single-chip constructs - this requires Operational Regression testing.  This can be done at all levels, as long as other requirements are met.

If you have multiple disjoint sub-chip crypto... you can still do this, but it will result in two separate cryptographic modules/boundaries.

What if there are several soft cores, and they want to talk to each other? If I have several different disjoint software modules that are both validated and on the same physical device, we allow them to exchange keys in the clear. So, why not? As long as they are being directly transferred, and not routed outside of the chip through an intermediary.

As chip densities increase, we're going to see more of these cores on one chip.

ICMC: FIPS 140-2 Implementation Guidance 9.10: What is a Software Library and How to Engineer It for Compliance?

Apostol Vassilev, Cybersecurity Expert, Computer Security Division, NIST, Staff Member, CMVP

Why did we come up with IG 9.10 [Power On Self Tests]? There were many open questions about how software libraries fit into the standard.  In particular, CMVP did not allow static libraries - but they existed. We needed to come up with reasons to rationalize our decision, so we could spend time doing things other than debating.

Related to this are IG 1.7 (Multiple Approved Modes of Operation) and IG 9.5 (Module Initialization during Power-Up).

The standard is clear in this case - the power-up self tests SHALL be initiated automatically and SHALL not require operator intervention.  For a software module implemented as a library, an operator action/intervention is any action taken on the library by an application linking to it.

Let's look at the execution control flow to understand this problem. When the library is loaded by the OS loader, execution control is not with the library UNLESS special provisions are taken. Static libraries are embedded into the object code and behave differently.

How do we instrument a library? Default entry points are a well-known mechanism for operator-independent transfer of execution control to the library.  These have been available for over 30 years, and exist for all types of libraries: static, shared, dynamic.

There are alternative instrumentations - in languages like C++, C# and Java you can leverage things like static constructors that are executed automatically when the library containing them is loaded.

What if the OS does not provide a default entry point mechanism and the module is in a procedural language like C?  You can consider switching to C++ or using a C++ wrapper, so that you can get this functionality.  Lucky for my team, Solaris supports _init() functions. :)

Implementation Guidance 9.5 and 9.10 live in harmony - you need to understand and implement both correctly.

Static libraries can now be validated with the new guidance.

ICMC: Roadmap to Testing of New Algorithms

Sharon Keller, Director CAVP, NIST
Steve (?), CAVP, NIST

After NIST picks a new algorithm, the CAVP takes over and figures out how to test it.  They need to evaluate the algorithm from top to bottom - identify the mathematical formulas, components, etc.

The CAVP develops and implements the algorithm validation test suite. Which requirements are addressable at this level? They develop the test metrics for the algorithm and exercise all mathematical elements of the algorithm. If something fails - why?  Is there an error in the algorithm, or an intentional failure - or is there an error in the test?

The next step is to develop user documentation and guidance, called the validation system document (VS), which documents the test suite and provides instructions on implementing validation tests.  There is cross validation to make sure that both teams come up with the same answers - a good way to check their own work.

The basic tests are Known Answer Tests (KAT), Multi-block Message Tests (MMT), and Monte Carlo Tests.  KATs are designed to verify the components of algorithms. MMTs test algorithms where there may be chaining of information from one block to the next and make sure it still works. The Monte Carlo Tests are exhaustive, checking for flaws in the UI or race conditions.

Additionally, they need to test the boundaries - what happens if you encrypt the empty string?  What if we send in negative inputs?

There are many documents for validation testing - one for each algorithm or algorithm mode.

The goals of all these tests? Cover all the nooks and crannies - prevent hackers from taking advantage of poorly written code.

Currently, the CAVP is working on tests for SP 800-56C, SP 800-132 and SP800-56A (Rev2).

In the future, there will be tests for SP 800-56B (Rev1), SP 800-106 and SP 800-38A.  Which of these is most important for you to get completed?

Upcoming algorithms that are still in draft: FIPS 202 (Draft) for SHA-3, SP 800-90A (Rev2) for DRBGs, SP 800-90B for Entropy Sources and SP 800-90C for construction of RBGs. Ms. Keller has learned the hard way - her team cannot write tests for algorithms until they are a published standard.

ICMC: Is Anybody Listening? Business Issues in Cryptographic Implementations?

Mary Ann Davidson, Chief Security Officer, Oracle Corporation

A tongue in cheek title... of course we're hoping nobody is listening!  While Ms. Davidson is not a lobbyist, she does spend time reading a lot of legislation - and tries not to pull out all of her hair.

There are business concerns around this legislation - we have to worry about how we comply, doing it right, etc.  Getting it right is very important at Oracle - that's why we don't let our engineers write their own crypto [1] - we leverage known good cryptographic libraries.  Related to that, validations are critical to show we're doing this right. There should not be exceptions.

Security vulnerabilities... the last 6 months have been exhausting. What is going on?  We all are leveraging opensource we think is safe.

We would've loved if we could've said that we knew where all of our OpenSSL libraries were when we heard about Heartbleed. But, we didn't - it took us about 3 weeks to find them all! We all need to do better: better at tracking, better at awareness, better at getting the fixes out.

It could be worse - old source code doesn't go away, it just becomes unsupportable.  Nobody's customer wants to hear, "Sorry, we can't patch your system because that software is so old."

Most frustrating?  Everyone is too excited to tell the world about the vulnerability they found - it doesn't give vendors time to address this before EVERYONE knows how to attack the vulnerability. Please use responsible disclosure.

This isn't religion - this is a business problem! We need reliable and responsible disclosures. We need to have good patching processes in place in advance so we are prepared. We need our opensource code analyzed - don't assume there's "a thousand eyes" looking at it.

Ms. Davidson joked about her ethical hacking team. What does that mean? When they hack into our payroll system, they can only change her title - not her pay scale. How do you think she got to be CSO? ;-)

Customers are too hesitant to upgrade - but newer really is better! We are smarter now than we used to be, and sorry, we just cannot patch your thousand-year-old system. We can't - you need to upgrade! The algorithms are better, the software is more secure - we've learned and you need to upgrade to reap those benefits.

But we need everyone to work with us - we cannot have software sitting in someone's queue for 6 months (or more) to get our validation done.  That diminishes the value we get in return - 6 months is a large chunk of a product's life cycle. Customers are stuck on these old versions of software, waiting for our new software to get its gold star. Six weeks? Sure - we can do that. Six months? No.

Ms. Davidson is not a lobbyist, but she's willing to go to Capitol Hill to get more money for NIST. Time has real money value. How do we fix this?

What's a moral hazard? Think about the housing market - people were making bad investments, buying houses they couldn't afford to try to flip, and it didn't work out. We rewarded those people, but not those who bought what they could afford (or didn't buy at all) - we rewarded their bad risk taking.

Can we talk with each other?  NIST says "poTAYto", NIAP says "poTAHto" - why aren't they talking?  FIPS 140-2 requires Common Criteria validations for the underlying OS for higher levels of validation - but NIAP said they don't want to do those validations.

We need consistency in order to do our jobs. Running around trying to satisfy the Knights Who Say Ni is not a good use of time.

And... the entropy of... entropy requirements.  These are not specific; this is not "I know it when I see it". And why is NIAP getting into the entropy business? That's the realm of NIST/FIPS.

Ms. Davidson ends with a modest proposal: Don't outsource your core mission.  Consultants are not neutral - and she's disturbed by all of the consultants she's seeing on The Hill.  They are not neutral - they will act in their own economic interest. How many times can they charge you for coming back and asking for clarification? Be aware of that.

She also requests that we promote the private-public partnership.  We need to figure out what the government is actually worried about - how does telling them the names of every individual that worked on code help with their mission? It's a great onus on business, and we're international companies - other countries won't like us sharing data about their citizens. Think about what we're trying to accomplish, and what is feasible for business to handle.

Finally, let's have "one security world order" - this is so much better than the Balkanization of security.  This ISO standard (ISO 19790) is a step in the right direction. Let's work together on the right solutions.

[1] Unless you're one of the teams at Oracle, like mine, whose job it is to write the cryptographic libraries for use by the rest of the organization. But even then, we do NOT invent our own algorithms. That would just be plain silly.

ICMC: Random Thoughts - Is True Randomness an Illusion?

Helmut Kurth, Chief Scientist, atsec information security

Illusions can be fun ... if we know they are an illusion.  They can make us feel good... even if they are bad.  They can make us think something is okay... when it's really broken.

There are some illusions we like in technology, like virtualization! The illusion is that we have more resources than we really have.  We can save money - that's good, right?  Some people think that virtualization provides more security - but does it really?  Vendors claim their virtualized systems are just like a real hardware box... but are they? Often not exactly. Something has to give, we should understand this.

How is entropy impacted once you virtualize the system?  Virtualization can change the timing; it may just behave differently. Either way, are we getting the same entropy on these systems?

Often, we make incorrect assumptions about things like timing and the similarity of a virtualized system to its true hardware counterpart.

For example, if you're using time as your entropy source - you may assume the lowest order bits are changing most frequently and will provide more actual entropy. But, what if this is not the true timer? What if a hypervisor is intercepting and interpreting the concept of "time" - what if the hypervisor should not be trusted?

Shouldn't you be able to trust your hypervisor?  Once someone has breached the hypervisor, they can do all kinds of evil things underneath your VM and you won't be able to easily detect it (as your OS will be unchanged).

For example, the RDRAND instruction can be intercepted by a hypervisor. This is a "feature" documented by Intel.  So, as a user of the VM, you think you're getting some pretty good entropy from RDRAND - but you're really getting poor entropy from your hypervisor. How could you detect this?   Intel's RDRAND is often used as the sole source of randomness with no DRNG postprocessing (like in OpenSSL), regardless of whether the "randomness" is being used for generating nonces or for generating cryptographic key material.

Assuming a compromised hypervisor, the bad guy can have the key used to generate the "random" sequences used in the RDRAND emulation.  He can use this to generate the different random streams.  He is able to get the nonce, which is transmitted in the clear.

Launching the attack requires installation of a hypervisor or the modification of a running hypervisor. Just one flaw that allows the execution of privileged code is all it takes.  The hacker, in this case, may have the session key even before the communicating parties have it! At this point, it doesn't matter what algorithm you're using - communications are essentially in the clear.

This attack is not that easy - it requires taking over the hypervisor (not easy), and once you have the hypervisor, you can do anything you want!  But, this isn't about taking down the machine, this is about eavesdropping undetected for any length of time.

This is a sneaky attack - virus scanners and host-based IDS will tell you everything is okey dokey!  It is independent of the OS or any applications (as many rely on RDRAND).

How can you protect yourself?  The basic solution is diversification - do not use a single source of entropy.  Use a different RNG for nonce generation and generation of key material, and use RDRAND for seeding (with other sources) rather than directly for generating critical security parameters.  Read the hardware documentation carefully - make sure you understand what you're getting.

Intel isn't going to fix this, as this isn't a bug... it's a feature!

Entropy analysis isn't just about the mathematical entropy analysis - be aware of how you procure the numbers and how they are used in your system.


As a side note, we (in Solaris land) are aware that this is a tough problem and hard to get right and we've already implemented counter measures. Darren Moffat did a great write up on Solaris randomness and how we use it in the Solaris Cryptographic Framework, describing the counter measures we already have in place.

Wednesday, November 19, 2014

ICMC: Status of the Transition to New Algorithms and Stronger Keys

Allen Roginsky, Mathematician, NIST

FIPS 140-2 doesn't talk much about the algorithms themselves; they are covered in the Annexes.  There were minor changes back in 2002/2003; however, the algorithms have changed. New algorithms have come in, old ones have been deprecated.

Under the ISO rules, every country can choose their own algorithms. In the US, we've already chosen our algorithms for FIPS 140-2. We'll likely continue to use the same ones in FIPS 140-4 (or whatever we call them).

The current major algorithm documents are SP 800-131A and FIPS 186-4.  The stronger key requirements went into effect last year and there is a major hit coming in at the end of 2015.

Why are we doing this transition?  A security strength of 80 bits is insufficient (the 56-bit DES was broken long ago; attacks on the SHA-1 collision resistance property; advances in integer factorization; etc.).  Some of the currently approved algorithms aren't strong regardless of the key length (the non-SP-800-90A RNGs).  Transition plans were first announced in SP 800-57, Part 1 in 2005.  We've delayed this from going into effect from 2010 to 2015, but cannot delay it further or we'll be hurting the consumers.

Approved algorithms are the best ones. Deprecated algorithms are not recommended, but can be used. This is different from restricted, which you should not use.  Legacy-use algorithms have no guarantee, and really should not be used, except to verify previously generated signatures, for example.  Some algorithms are just simply not allowed.

For example, SKIPJACK decryption was allowed at the end of 2010 for legacy use only, but SKIPJACK encryption is disallowed.  Only 8 certificates were ever issued, so there were not any complaints about this change.

At the end of 2010, two-key 3DES encryption became restricted (100 bits of strength for two-key 3DES with no more than 2^20 (plaintext, ciphertext) pairs), and two-key 3DES decryption became legacy-use only.
At the end of 2015, two-key 3DES encryption becomes disallowed.  AES and three-key 3DES are acceptable.  We allowed this for so long because it was in wide use and the attacks were not straightforward.

Digital Signatures

As of the end of 2010, signature generation algorithms with less than 112 bits of security strength became deprecated. As of the end of 2013, there was a transition from FIPS 186-2 to FIPS 186-4, and signature generation algorithms with less than 112 bits of security strength became disallowed.

Signature verification with less than 112 bits of strength is legacy-use, beginning in 2011.

Deterministic Random Number Generators

This is the BIG problem! As of the end of 2010, the non-SP-800-90A compliant RNGs became deprecated. As of the end of 2015, the non-SP-800-90A compliant RNGs will become disallowed - RETROACTIVELY!  This will be a big expense, as previously purchased software can no longer be used.

Note from Randy Easter: What this means is that for every validation that was done over the last 15 years, every validation that is using one of these RNGs will have that item moved to the non-approved line.  If the keying algorithm is using this RNG, ALL of those functions become non-approved.

Key Agreement and Key Transport

As of the end of 2013, Key Agreement and Key Transport algorithms stay acceptable if: key strength is at least 112 bits AND the algorithms are compliant with the appropriate NIST standards: SP 800-56A, SP 800-56B and SP 800-38F. As of the end of 2013, the non-compliant Key Agreement and Key Transport (Key Encapsulation) algorithms have become deprecated if key strength is at least 112 bits.  Key wrapping must be compliant with one of the provisions of SP 800-38F. Everything else is disallowed.


Hash and MAC functions will be impacted, as well as some key derivation algorithms. See SP 800-131A for details.

FIPS 186-2 to 186-4 Transition

Beginning in 2014, new implementations shall be tested for their compliance with FIPS 186-4. This applies to domain parameter generation, key pair generation and digital signature generation.  Signature verification per FIPS 186-2 is legacy-use. Beginning this year (2014), RSA digital signature keys must be generated as shown in FIPS 186-4.

Future Transition Plans

You bet - we're already looking forward to the future.  We want to transition away from non-Approved implementations of key agreement (DLC-based) and key transport (RSA-based) schemes.  Unfortunately, there are too many modules in existence that are non-compliant with SP 800-56A and 56B. We need a well thought out strategy for transition.

ICMC: Questions to CMVP (NIST/CSEC) on ISO 19790 Standard, 140-4 or Other

Carolyn French, Manager, CMVP, CSEC
Randall Easter, NIST, Security Testing, Validation and Management Group
Allen Roginsky, Mathematician, NIST
Sharon Keller, Director, Cryptographic Algorithm Validation Program (CAVP), NIST

This panel is entirely questions and answers. I'll do my best to capture them.

Q: About bug fixing and patches. Is there an expedited way, once we get our validation, if there's a non security patch, is there a non-painful/easy way for us to update the validation?

A: Randy: Yes, we have a process, but whether it's painless or not... Even if the changes are not security relevant, you will have to go back to the lab. They will decide if it is security relevant or not, and they can do an electronic submission to NIST. It may require re-doing your algorithm testing and updating your certificate.  If there are no issues, we can usually do the update within a week - once we get the paperwork from the lab.

Q: What happens with a datacenter that is always doing critical functions, it can never give up. How can they do more tests?

A: Randy: If there is critical work being done, this can be deferred until the next time period. It doesn't say "after 42 deferrals you have to interrupt work".  There may be times when the processor is doing non-crypto work, and then the checks can be run. The processor, in my experience, is not 100% busy on crypto.

Q: So, it can be indefinite?

A: Randy: Yes.

Q: Will CAVP now be testing and verifying all of the SHALL statements? Or is that a documentation thing?

A:  Sharon: CAVP tests all the things that are testable.  Back in the day, they all just gave tests for the algorithm - but then things got more complicated, like making sure vulnerabilities aren't introduced. For example, it's a good thing if the IV is never repeated, but that thing is not testable.

Randy: One good example is 800-90A. There are quite a few SHALLs in there, but there are some things that are difficult to test, so things fall through the cracks.

Q: In CC, we have the concept of Flaw Remediation. There are some fixes for which we can make judgement calls ourselves. The time and money it costs to go back to the lab for FIPS makes it prohibitive for us to maintain validation.

A: Randy: I can't speak to CC, but once you open the box to make changes, then we need the lab to validate that you only changed what you said you changed.  Not every change has to be revalidated; there can be judgment there.  This has been the policy in CMVP since 1995: every change has to be verified by the lab.  In my experience in development, it's possible to think you've only made a small change, only to later discover that it had unforeseen consequences.

A: Moderator: Working at a lab that does CC, there are vendors that abuse Flaw Remediation.

[VAF: Note: who better to judge how relevant a change is, than the developers who are intimately familiar with the codebase?]

Q: Is there a rough estimate when ISO 19790 will be adopted?

A: Randy: I honestly do not know. Hopefully mid next year, but I just don't know. There will be a transition date from FIPS 140-2, but we don't know how long that will be. In the past, it was a 6 month transition period, but depending on how this goes - we may need more time.

Q: Given the last public review of FIPS 140-3 was more than 5 years ago, are you ready to go forward with this standard?

A: Carolyn: This is a different process. The ISO process is different.  They don't have that type of review.

Randy: We could have a review of sorts about whether or not we want to move to the ISO standard. We won't pick up the old FIPS 140-3, as there are no DTRs and there won't be. The only path forward is with ISO 19790.

Q: Randy mentioned earlier that we'll still have Implementation Guidance. The current IGs are getting very large and difficult to navigate. How will this work with an international standard?

A: Randy: We work with Canada already and have a non-binding agreement with the Japanese guidance. We circulate the guidance with the labs before we post. The guidance is big because we've been using FIPS 140-2 for nearly 13 years - it's only large because so much time has gone by. As the program has grown, the vendors and the users have gotten more sophisticated and therefore require particular guidance. Hopefully we'll refresh more often and be able to better manage this.

Q: If I went down the ISO road right now, and in 18 months from now I'm ready to validate - but the new standard hasn't been adopted, yet, I won't pass FIPS 140-2, will I?

A: Randy: That's right. You could get this validated by JCVP, but the US doesn't recognize this standard at this time.  We would like a decision to get made. Yes, I said that last year, too.

Q: Elliptic Curve Cryptography has  come up significantly over the last year, particularly around NSA Suite B. Do you have plans to standardize any other curves that did not come from the NSA?

A: Allen: I am not aware of any work in this area. I am not aware of any weaknesses in the current curves. In general, one is good enough; the strength is not in the curve (as long as it meets the requirements) - the strength is in the key. Some are over binary fields, some are over prime fields. You can choose one or the other. One problem with ECC is the Dual EC DRBG algorithm. The problem is not with EC, but with that particular algorithm. There was a safeguard in EC to protect against this, but you can't stop a problematic implementation or use. There are no known issues with the existing curves.

Q: Our hardware now has partial implementations of crypto algorithms. We have software that can do the rest and work on the older processors, so it doesn't require the hardware but will use it if it's there.

A: Carolyn: If it can do everything in software, it's not a hybrid module.

Q: We had anticipated RSA 4096 being approved, but now it isn't.

A: Allen: It has been in FIPS 186-2, but it is no longer there. It is not allowed now for signature generation. The reason is to facilitate the transition to EC. It's very difficult to be sure that all of the floating point arithmetic is done correctly when you're dealing with these numbers. It can be done, but it's error prone. The decision was made to move to technology where the keys are shorter, like in EC. Some people have complained about this, but this is the decision. I think it's the correct one.

Q: Do you know of anything in the ISO standard that will impact current validations?

A: Randy: There is nothing that is retroactive. Existing validations should stand. New modules, though, will have to meet the new requirements once it's in place.

Q: ISO being international, but you haven't mentioned CAVS. There's no requirement that they are international.

A: Randy: It's an international standard, but CAVP will still be a US program. We'll be using an international standard. We're working on an ISO standard on how to do algorithm testing.

ICMC: Comparing ISO/IEC 19790 and FIPS PUB 140-2

William Tung, Leidos, Laboratory Manager

ISO/IEC 19790:2012 is the planned replacement for FIPS 140-2, but there's been no official announcement or timeframe, yet. This means no labs are accredited to perform these validations.

But... it's coming.

Some of this will be a rehash of our earlier sessions, but with more deep diving per section.

Degraded Operation

This mode can only be used after exiting an error state and must be able to provide information about this state. Whatever mechanism or function is causing the failure shall be isolated. While in degraded mode, all conditional algorithm self-tests shall be performed prior to the first operational use of the algorithm, and before degradation can be removed, all tests must pass.

Cryptographic Module Interface

The cryptographic module interface is a fifth logical interface that cannot be used whenever you're in an error state.

Roles, Services and Authentication

The user role is optional; there is a minimum requirement of a crypto-officer role. A minimal required service is showing the module's versioning information, which must match the certificate on record.

There is also a new requirement for self-initiated cryptographic output capability.

Authentication strength requirements must be met by the module's implementation, not through policy controls or security rules - for example, password size restrictions must be enforced by the module itself. ISO 19790 does not yet define what exactly those strength requirements are. Level 4 modules will have to implement multi-factor authentication.

Software/Firmware Security

This section of the document applies only to software/firmware or hardware modules. Level 2 modules must implement an approved digital signature or keyed MAC for integrity test, Levels 3 & 4 have higher requirements.
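The Level 2 integrity test could look something like the following sketch, which recomputes a keyed MAC over the module image and compares it to a reference value stored at build time. The key handling and names here are illustrative assumptions, not the standard's prescribed mechanism:

```python
import hmac
import hashlib

def integrity_check(module_bytes: bytes, key: bytes, expected_mac: bytes) -> bool:
    """Recompute an HMAC-SHA-256 over the module image and compare it,
    in constant time, against the MAC stored at build time."""
    mac = hmac.new(key, module_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected_mac)

# At build time the vendor computes and stores a reference MAC
# (the key and image below are placeholders, not real provisioning):
key = b"module-integrity-key"
image = b"...module code and data..."
reference_mac = hmac.new(key, image, hashlib.sha256).digest()
```

At load time, `integrity_check(image, key, reference_mac)` would have to return True before the module enters operation; Levels 3 & 4 layer stronger requirements on top of this.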

Operational Environment

Software modules no longer need to operate in a Common Criteria (CC) evaluated OS or "trusted operating system" in order to meet Level 2 requirements. There are still specific OS requirements to meet Levels 2-4 (that will look similar to what used to be covered by CC).

Physical Security

Explicitly allows translucent enclosures/cases within the visible spectrum, in addition to the opaque enclosures/cases FIPS 140-2 allows. Level 3 modules must either implement environmental failure protection (EFP) or undergo environmental failure testing (EFT). Level 4 MUST implement EFP.

Non-Invasive Security

This section currently doesn't specify requirements, but they will come. Hardware and firmware must comply, and it will be optional for software. For Levels 1-2, the module must protect against these attacks. Levels 3-4 will have to prove protection.

Sensitive Security Parameter (SSP) Management

SSPs consist of Critical Security Parameters (CSPs) and Public Security Parameters (PSPs). For Level 2 modules and up, procedural zeroization is not allowed.

Self Tests

There are two categories of self-test: Pre-operational and conditional.  Pre-operational includes things like integrity test and critical functions test. Conditional covers the other standard conditional tests plus the other items covered in the old POST guidance.

All self-tests need to be run regardless of whether the module is operating in approved or non-approved mode. Level 3 & 4 modules must include an error log that is accessible by an authorized operator of the module. The integrity test needs to be run over all software/firmware components of the module. At a minimum, the vendor must implement one cryptographic algorithm self-test as a pre-operational test.
[Clarification from Randall Easter on this topic: If the module is installed and configured as a FIPS 140 module, then it must do all of these tests/checks.  If it was installed and configured otherwise, it's not required. This is not different than what is currently required by FIPS 140-2.]

FIPS CRNGT is not currently defined in ISO.

ISO 19790 requires Level 3 & 4 modules to do automatic pre-operational self-tests at a predefined (by the vendor) interval.

Life-Cycle Assurance

This seems to be more of a documentation and QE section. It covers vendor testing and the finite state model. The states required are: General Initialization State, User State, and Approved State. Changing to crypto-officer from any other role is prohibited.

Testing Requirements for Cryptographic Modules

The next part of the talk came from Zhiqiang (Richard) Wang, Leidos, Senior Security Engineer.

The testing requirements were derived from the FIPS 140-2 Derived Test Requirements. This covers self-tests, life-cycle assurance and mitigation of other attacks.

ISO/IEC 24759:2014 specifies the methods to be used by testing laboratories to test whether a cryptographic module conforms to the requirements specified in ISO/IEC 19790:2012. This was developed to provide a high degree of objectivity during the testing process and to ensure consistency across the testing laboratories. It clearly specifies what the vendor needs to provide to the laboratory.

Richard walked through, section by section, many of the same requirements discussed earlier, but with a twist on how each would be tested and why.

ICMC: Explaining the ISO 19790 Standard, Part 2

Randall Easter, NIST, Security Testing, Validation and Management Group

ISO 19790 is available for purchase from ISO (in Swiss Francs) or ANSI (US Dollars), and you'll also need the derived test requirements (ISO/IEC 24759).

In this section, Randy walked us through a deep dive of the sections of the document.

There is a new Terms and Definitions section, which will hopefully help to clear up ambiguity and help answer a lot of the questions they've gotten over the years.

The new document has all of the SHALL statements highlighted in red and tagged with [xx.yy] - where xx indicates the clause and yy is a numeric index within the clause. This will hopefully make it easier for two people to have a conversation about the standard. The plan is, when errors are found and fixed with addenda, complete new revisions of the document will be made available - i.e., everything in one place.

Errors were found during translations to Korean and Japanese, when the translators just could not figure out what something was supposed to mean (turns out, it wasn't clear in English, either). We should expect more changes as people start to implement against this standard. Errors will again be corrected in revisions. Mr. Easter was not clear on what ANSI/ISO charge for revisions of documents.

There will again be four levels for validation. The physical security requirements vary greatly between the levels, from "production grade components" to "tamper detection and response envelope, EFP and fault injection mitigation". You will still need environmental controls for protecting key access, etc.

There are algorithms, like Camellia, that the US Federal Government is not allowed to use, but vendors are designing software for international markets. So, federal users can only use certain algorithms - the vendors do NOT have to restrict this; it is up to the end user to implement and use the policy correctly. How do you audit that, though?

The ISO standard states that every service has to have a "bit" that tells whether or not it's an approved service. This should enable better auditing.
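As a rough illustration of the idea (the data layout and service names here are invented, not from the standard), such a per-service indicator might be exposed like this:

```python
# Hypothetical sketch: each service exposed by a module carries an
# "approved" indicator that callers (and auditors) can query.
SERVICES = {
    "aes-256-gcm-encrypt": {"approved": True},
    "sha-256-digest":      {"approved": True},
    "rc4-encrypt":         {"approved": False},  # legacy, non-approved service
}

def is_approved(service_name: str) -> bool:
    """Return the approved-service indicator bit for a named service."""
    return SERVICES[service_name]["approved"]
```

An auditor could then flag every invocation of a service whose indicator is False.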

This is a great policy, but what happens when you have to work with what you get? For example, someone could send the IRS a tax return encrypted with RC4. The IRS can decrypt using RC4, but would have to store using AES. It would be good to know what has happened here.

The new ISO standard has requirements around roles, services and authentication. The minimum role is the crypto officer - there has to be logical separation of required and optional roles and services. The higher the levels go, the more restrictions: from role-based or identity-based authentication all the way up to multi-factor authentication.

There have been a lot of questions about what FIPS 140-2 means by "list all the services". Does that mean only the approved services? No, all services - even non-approved security functions. Non-security-relevant services also have to be listed, but that can refer to a separate document. [VAF: curious still, what exactly that means - an OS provides a LOT of services!]

ISO 19790 has new directives for managing sensitive security parameters - for example, you have to provide a method to zeroize keys. This could be done procedurally by an operator, or as a result of tamper. Other examples this area covers: random bit generation; SSP generation; and entry, output and storage of keys.

Self-tests have changed. Pre-operational tests cover software/firmware integrity, bypass and critical functions tests.  Crypto operations can begin to proceed while these tests are running, but the output cannot be provided until the pre-operational tests have completed.

Known answer tests are now all conditional tests, like pair-wise consistency checks.  Vendors can continue to do all of the tests up front, or as needed. Lots of flexibility here.  Mr. Easter made a note I couldn't quite follow about module updates being signed by the crypto officer - not clear why it wouldn't be the vendor. [VAF: I may have missed something here.]

The new standard still allows simulation environments for testing, and special interfaces for the labs to test against that may not be needed by the consumer.

Surprising to NIST, some consumers don't care about FIPS 140 validations and the vendors want to provide that to them.  For example, some of the tamper evident seals may need to be installed by the cryptographic operator, or some initialization procedures. Some customers may not even care about power-on-self-tests EVER being run, so that configuration information has to be part of the integrity check.

As a note: some of the current FIPS 140-2 implementation guidance will be retained with this new standard, as they are newer than the ISO standard, or too vendor specific to be included in a general document.

The vendor may have mitigations against some attacks for which there are currently no testable requirements available, and that would be allowable through Level 3. Once you get to Level 4, you have to prove your mitigations.

The new standard allows for degraded modes of operation - for example, if one algorithm stopped functioning properly the rest of the algorithms could still be used.

Something new: there has to be a way to validate that you're running a validated version and what the version number is. This is interesting and tricky, because of course you get your validation done on released software, so when you ship you don't know that you're validated. And if you store validation status in another file, it could easily get out of date (i.e., updates are done to the system, but the software still reports it's validated). There are ways to solve this, but vendors should tread carefully.

Also, Annex C (and only Annex C) which covers approved security functions (ie algorithms) can be replaced by other countries, as needed.


Q: "How does software have an 'opaque' coating?" A: "Physical security requirements do not have to be met by software modules".

Q: "Lots of services could be going on; what do we need to be concerned with?" A: "Services that are exposed by the module, via command line or API. Security services need to be queryable".

Q: "Why should the crypto officer be signing new modules? They may not be able to modify the trust anchors". A: Was using crypto officer as an example, policies could vary - but it is the crypto officer's decision on what to load.

ICMC: Explaining the ISO 19790 Standard, Part 1

Randall Easter, NIST, Security Testing,  Security Management  & Assurance Group

FIPS 140-1 was published in January 1994, and a year later there came the Derived Test Requirements and guidelines for how to test to these DTRs.  The standard has continued to evolve over the last 20 years.  FIPS 140-2 was published in May 2001.  The DTRs cannot be written until after the standard has stopped fluctuating, which is why there's always a delay.

The goal of the Cryptographic Module Validation Program (CMVP) is to show a level of compliance to given standards. They also desire to finish the testing and validation while the software/hardware is still current and shipping (ed note: not a goal that has been consistently met in recent years - goals and budget do not always align).

NISTs goal is to reevaluate standards every 5 years to make sure they are still current.

A request for feedback and comments on FIPS 140-2 came out in 2005.  At the same time, the international community wanted to have a rough international equivalent of FIPS 140.  ISO/IEC 19790: 2006 was published in March 2006, which was a rough equivalent to FIPS 140-2.  This included editorial changes to clarify items that people were interpreting differently - perhaps roughly equivalent to Implementation Guidance we've received for FIPS 140-2?

The original goal was to do the ISO/IEC standard in parallel with FIPS 140-3, so that we could get the best of both worlds: international participation and an international standard, and a new FIPS 140 document at the same time. The problem with ISO standards: they are worked on privately, and even the "public" final version must be purchased. The FIPS 140 standards are downloadable for free from NIST's website.

ISO/IEC 19790:2012 was published on August 15, 2012 and the DTRs came out in January 2014.

The draft for FIPS 140-3 came out in January 2005, but never became a formal standard.  What happened?

Mr. Easter noted that there was a change of ownership of the document and FIPS 140-3 quickly diverged from ISO.  ISO has very strict standards about meeting twice a year, and insists on progress or your work will be dropped.  Work on FIPS 140-3 stalled in NIST... but work had to forge ahead with ISO.

Mr. Easter, editor of the ISO document, is happy with how it's come out.  :-)

While ISO/IEC 19790:2012 has been out since... 2012, NIST has not formally announced that their intention is to move to this standard.  That lack of announcement seems to be a political one, as the person that should be making that announcement hasn't been hired, yet....

One interesting change in the ISO document is that it covers algorithms that are not approved by the US Government, but are used regularly by the international community.  There is a concept of a wrapper document where more algorithms could be added and clauses modified - but the more someone does that, it will cause standard divergence.

The ISO document was worked on by the international community and circulated to all of the labs that do FIPS 140-2 validations to provide comments. Mr. Easter believes it was better circulated than a NIST review would have been. I would disagree here, as ISO is a closed process, so developers could not provide feedback from their side (and believe me - developers and labs speak a different language, and I am certain our feedback would've been unique and valuable).

ISO/IEC 19790:2012 contains new sections on Software Security and Non-Invasive Security, and removes the Finite State Model. FIPS 140-2 was very hardware specific - built around hardware DES modules. The new standard acknowledges that software exists. ;-)

NIST held a Software Security Workshop in 2008 to try to learn more about how software works. At the time, software could be validated to higher levels, but those levels were tied to Common Criteria. Based on input from this workshop, the levels software could validate against were changed and the relationship to Common Criteria was severed - that made it into the 2009 draft of FIPS 140-3. Unfortunately, that was the last draft of FIPS 140-3, and the standard never became final.

Looking at an appendix of FIPS 140-2 vs ISO/IEC 19790:2012 - very similar, with the notable differences, like Finite State Model being dropped and a new software/firmware section added.  There's now guidance on non-invasive security testing and sensitive security parameter management.

The Annexes of ISO/IEC 19790:2012 sound more helpful - there's an entire appendix telling you what the documentation requirements are. In FIPS 140-2, you just had to trawl your way through the entire document and hope you caught all of the "thou shalt document this" clauses.

One of the annexes is about approved authentication mechanisms... unfortunately, at this time, there are no formal authentication mechanisms.  Hopefully that can be covered in Implementation Guidance.

The big goals for ISO/IEC 19790:2012 were to harmonize with FIPS 140-3 (140-3 will now be a wrapper document referring to the ISO document), make editorial clarifications from lessons learned (Randy noted that he wasn't even sure exactly what he was trying to say when he re-read things years later ;), and incorporate all of the FIPS 140-2 implementation guidance and, most of all, entropy guidance (ah, entropy....).

Additionally, the standard needs to address new technologies - we're putting entire algorithms on chips, and sometimes just pieces of algorithms. How does that fit into the boundary? That is where technology is going, and a modern standard needs to understand that and take it into consideration, in ways that FIPS 140-2 simply couldn't have conceived.

Software will take advantage of this - it only makes sense, but that starts to put us into hybrid mode. This is covered in the new ISO standard, not just as implementation guidance.

The new ISO standard addresses the trusted channel without reference to Common Criteria. ISO/IEC 19790:2012 added a new requirement: a service to query version information. This is interesting to us in OS land, as our FIPS version number and OS version number are not the same. We see this with other software vendors as well.

Integrity check is simpler for level 1 in this new standard, but more difficult for levels 2 and higher.

The new software security requirements section is definitely an improvement over FIPS 140-2, but still not as good as it could be. ISO did not get very much feedback on this section in the time frame where they made the request.

The ISO standard had to remove the reliance on Common Criteria, as NIAP is moving away from those evaluations and the protection profiles are up in the air. The ISO group didn't want to tie themselves to a moving target, instead added specific requirements for things like auditing that the operating system should provide, if you want to get Level 2.  In general, software won't be able to get over Level 2 in this new standard.

An interesting side note: you can actually have a "mix & match" of your levels. You could have Level 2 for certain sections, and an overall Level 1 validation (based on your lowest section). For example, United States Post Office postage meters want Level 3 Physical Security, but overall Level 2 for everything else. It's important that people cannot break into the machine, but things like RBAC (role-based access control) are not as important.

ISO/IEC 19790 added new temperature requirements. That is, you have to test at the extremes that the vendor claims the module operates at.  Though, if the vendor only wants to claim at ambient temperature - that would be noted in the security policy.  They should be tested at the "shipping range" as well.  Reason?  Imagine leaving your epoxy covered Level 4 crypto device on a dashboard of a car in the summer.... well, that epoxy melts. Would it really still be a Level 4 device after that? No, so temperature range is important.

We no longer require that all of the self-tests be completed before the module is run. For example, if you're going to use AES, you only need to run the known answer tests on AES, not on ALL of the algorithms. NIST understood this matters for devices like contactless smartcards - you can't wait forever for the door to your office to open. Power-on self-tests are now conditional, as opposed to required all the time.

The new standard adds periodic self tests, at a defined interval (with option to defer when critical operations are already in use).

There is a new section on life-cycle assurance and coverage for End of Life - what do you do with this crypto box when you're upgrading to the shiny new hardware?

Keep in mind that this is still a revision of FIPS 140-2. It is not a completely new document. The document will seem familiar, but it should overall be more readable and provide greater assurance.  We didn't add things just because they sounded like fun ways to torture the vendor, even if some vendors may think that ;-)

Questions from the audience: "Will we be getting rid of Implementation Guidance?" No, we can't possibly guarantee to get everything correct in the first standard, and technology changes faster than the main document. "Can we stop calling it Guidance? If vendors can't refuse to do it, then it's not guidance." It's always been called that; you can choose to not follow it - but you won't pass your validation. Maybe we should change the name - a contest, perhaps? (Suggestion from the audience: "call it requirements, as that's what they are - you are required to follow them.") My suggestion: call them "Implementation Clarifications". The word "guidance" is too associated with advice, and this is not soft advice.

Friday, October 10, 2014

GHC14: Security: Multiple Presentations - Another Perspective

Finding Doppelgangers: Taking Stylometry to the Underground

Sadia Afroz (UC Berkeley) is using stylometry to find out who is interacting on underground forums (cybercrime forums). You want to figure out what a given user is doing there in the first place and who is really doing the work.

Current research around deanonymizing users in social networks is focused on similar usernames - but if you really care about being anonymous, you won't fall for that trap. The next thing to look at is similar activities or social networks. For most people, you can see that they will write a Facebook post and a tweet on the same event/activity, so it's easy to find the match. This doesn't work for underground forums, though. So, instead, they are using stylometry to analyze writing style.

Stylometry is based on the idea that everyone has a unique writing style - unique in ways you are not aware of, so it's hard to modify. To do this, you analyze the frequency of punctuation and connector words, n-grams, etc. You need a fairly large writing sample to analyze - the larger the better - but you can still get some accuracy on small samples.
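The kinds of features mentioned - punctuation frequency and character n-grams - can be extracted in a few lines of Python. This is a minimal illustration, not the researchers' actual feature set:

```python
from collections import Counter

def style_features(text: str, n: int = 3) -> dict:
    """Extract simple stylometric features: per-character punctuation
    frequencies and the most common character n-grams."""
    length = max(len(text), 1)
    punct = Counter(c for c in text if c in ".,;:!?'\"-()")
    ngrams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return {
        "punct_freq": {c: count / length for c, count in punct.items()},
        "top_ngrams": ngrams.most_common(10),
    }
```

Two documents by the same author should yield nearby feature vectors; training a classifier on these (plus function-word frequencies) is the usual next step.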

They looked at four forums: one in Russian (Antichat), one in English (BlackhatWorld) and two in German (Carders/L33tCrew). People move from one forum to another, but it's not always easy for researchers to get the full data sample.

Problems? These forums are not in English... they're often in l33tsp3ak (pwn3d). Also, people aren't speaking with their natural voice; they are making sales pitches (more likely to overlap with other accounts that aren't actually the same person).

They parsed l33tsp3ak using regular expressions, and additional parsing for "pitches" vs "conversation" (if there are no verbs and repeated things in lists, it's most likely a sales pitch and was eliminated).
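A regular-expression pass over l33tsp3ak might look roughly like this. The substitution table and the only-words-with-letters heuristic are simplifying assumptions, not the team's actual parser:

```python
import re

# Illustrative substitution table; real leetspeak parsing is fuzzier.
LEET_MAP = {"0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"}

def normalize_leet(text: str) -> str:
    """Map common leetspeak digits/symbols back to letters, but only
    inside words that contain at least one letter (so standalone
    numbers, like prices, are left alone)."""
    def fix_word(m: re.Match) -> str:
        return "".join(LEET_MAP.get(c, c) for c in m.group(0))
    return re.sub(r"\b(?=\w*[a-zA-Z])[\w@$]+\b", fix_word, text)
```

So "pwn3d" becomes "pwned", while "100" in a sales pitch stays a number.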

Then it seems to be all about probability - what is the likelihood that these are the same person? Lots of analysis followed, like: do these accounts talk to each other or about each other? Are there similar usernames, ICQ numbers, signatures, contact information, account information, or topics? Did they ever get banned? (Moderators do not like multiple accounts for one person.)

People can sell their accounts - accounts that have been established with a higher rank can be sold for more. Some people also want to "brand", so they can sell different things with each account (like CC numbers with one, marijuana with another).

You could avoid detection by writing less (lowering rank), or you could use their tool, Anonymouth :-)

From Phish to Phraud

Presented by Kat Seymour, Bank of America, senior security analyst. The talk started out great with a reference to Yoda. Every talk should have a reference to Yoda!

Phishing used to be about silly things like weight loss pills and male enhancement pills. But it's grown up - there's real money to be made here. $4.9 billion was lost to phishers last year.

Attacks come from all over the place now - mobile, voicemail, emails, websites... and they've matured. No longer plain text filled with spelling errors, they now steal corporate branding and send well-written emails. They are taking over websites that aren't well watched/maintained.

Ms. Seymour can look at things in the URL to find out more about the phisher (and to help learn suspicious patterns). She can also find the IP address to do further research. Additionally, she can leverage the Internet Archive (aka the Wayback Machine) to see if the website has changed a lot recently (which shows evidence of takeover).

She pays attention to referrers to their website - if a new referrer shows up quickly in their logs and then disappears?  It's likely a phishing site - so then she has to watch the accounts that logged in through there for suspicious activity (in addition to doing further research on the referring site).

It's not as simple as blocking IPs - she can't control your personal machines... and all of the places you might be coming from.

She needs to work with ISPs to block known phishing websites, but ISPs are spread all over the world. She can watch logs, traffic analysis and referrers - but the phishers are constantly coming up with new ways of doing this. It would be great to work with email providers to get them to watch out for this, but they're too diverse (some email providers are trying to address this, but it's difficult to coordinate).

Advice? Watch your statements, watch your statements, watch your statements!

GHC14: Passwords with Lorrie Faith Cranor

Lorrie Faith Cranor is a professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University.

Who knew that Carnegie Mellon had a passwords research TEAM!? (looked to be about 10 people).

Lorrie Faith Cranor noted that everyone hates passwords, but no matter how much we hate them, text passwords are here to stay.  These types of passwords have a lot of attack vectors: shoulder-surfing, online attacks and offline attacks.

Offline attacks are difficult to protect against, are very effective, and are the cause of many publicized breaches. Passwords are leaked hashed or encrypted, and computers can make BILLIONS of guesses per second, comparing hashes to find matches. Additionally, attackers exploit the common usage of the same password at multiple sites.
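The reason offline attacks are so effective is that the attacker only needs the leaked hash file - no interaction with the live system, so rate limiting and lockouts never trigger. A toy version of the attack looks like this (SHA-256 stands in for whatever unsalted hash the site used):

```python
import hashlib

def offline_attack(leaked_hashes: set, wordlist: list) -> dict:
    """Hash each candidate password and look it up in the leaked hash
    set; returns a map of cracked hash -> recovered password."""
    cracked = {}
    for candidate in wordlist:
        digest = hashlib.sha256(candidate.encode()).hexdigest()
        if digest in leaked_hashes:
            cracked[digest] = candidate
    return cracked
```

Salting and deliberately slow hashes (bcrypt, scrypt) exist precisely to make this loop expensive; fast unsalted hashes are what make billions of guesses per second possible.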

CMU had rolled out a new password policy (number of digits required, upper/lower case, allowed symbols, etc.). Everyone hated the rules and blamed her (alas, the IT department had not consulted her). She asked them, though, where they got the rules from: NIST. Sounds good - so she looked into where NIST came up with their recommendations. It seems they came up with their rules based on what they thought would be a good idea, but had not done any tests on actual passwords.

System administrators don't want to get in trouble - they are going to use "best practices". If Dr. Cranor wants them to use something "better" - she has to prove it and get it published in a respected source.

How can you get passwords to study?

One of the easiest ways to get passwords is to ask users to come into your lab and create passwords for you - but not everyone wants to walk into your lab to do this. You can expand the reach by going online and getting thousands of passwords. The problem? You're asking people to NOT give you their real password, so this is not real data.

Another approach? Steal passwords. Of course, CMU cannot steal passwords - it's not ethical.  But, hackers like to post hacked password lists, so they can do research on some real passwords.

You can ask users to tell you about their passwords (where they put the special symbol, where do they put the number and capital letter, etc).

Or you can ask sysadmins for passwords, but they usually don't want to give these out. [VAF note: the sysadmin should NOT actually have access to the raw password?]

The passwords you get from leaked systems are often from throw-away sites, so not high quality.

Her lab was able to convince CMU to give them 25,000 real, high-value passwords. They could compare these passwords to leaked and previous study data to see how relevant it was. These CMU passwords have the CMU password restrictions.  They also got the error logs: how often people logged in using the password, the error rate for wrong passwords, and how often they changed - along with information about gender, age, ethnicity, etc.

Getting this information took a LONG time.  They had to have two computers - one off of the Internet, locked in a room and not accessible by the researchers.  Researchers would write their tests and analysis scripts on a separate machine - then hand them over to the IT staff to run.  Black box testing.

How did they get these passwords that should've been hashed?  Many enterprises don't actually use hashes; they encrypt passwords with a system they can reverse so they can more easily deploy new systems. [VAF: ARGH!?!?!? what?!] So, at CMU they could decrypt the passwords (in the locked environment that the researchers did not have access to).

CMU Real Password Study

Dr. Cranor's team looked at things like how guessable each password was. Simple ones, like 1234, would be guessed in 4 tries. More complicated ones may be 'impossible'.

Since they had clear text passwords, they could run a guessability meter on them, as opposed to actually guessing them.  They could see that CS students created the strongest passwords; business students did not create passwords that were as good.
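A guessability meter can be sketched as a lookup: with the plaintext in hand, you score each password by its position in an attacker's guessing order instead of actually running the attack (the ordering below is invented):

```python
# Invented attacker guessing order, most likely candidates first.
guessing_order = ["123456", "password", "iloveyou", "monkey", "dragon"]

def guess_number(password, ordered_candidates):
    """Return the 1-based number of guesses needed, or None if never guessed."""
    try:
        return ordered_candidates.index(password) + 1
    except ValueError:
        return None  # past this attacker's cutoff - effectively 'impossible'

print(guess_number("monkey", guessing_order))        # 4
print(guess_number("correcthorse", guessing_order))  # None
```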

They could not find an effect for faculty vs. student, and ethnicity made no difference in password strength, but men did make passwords that were 1.1x stronger than women's.

You can make your password stronger by doing simple things - like adding a digit. If you put the digit at the beginning of your password, it was better than no digit - but not as good as having a digit in the middle. If you have multiple numbers in your password and you spread them out, it's harder to guess.

Password creation was annoying - if you're annoyed while doing it, though, you'll create a weaker password. :-)

They additionally took a look at leaked hashed/cracked passwords - those were weaker than the ones created at sites, like CMU, that have an "annoying" password policy.

But, they could then compare the spread and diversity of their passwords collected in studies against real CMU passwords and found they were similar enough that her team could do further research with study passwords.

Large-Scale Online Experiment

Used Mechanical Turk - a site where you can pay users to participate in your study (10 cents, a dollar, etc). Found this is a great way to do online studies, as Amazon has to manage the credit cards, etc.

Asked participants to create passwords under randomly assigned constraints. They could see entropy estimates and guessability estimates. They could also see that people would drop out of the study the more difficult and onerous the password rules were.

NIST research has offered various password entropy estimates.  NIST notes that adding dictionary checks raises entropy, and that a policy with more rules ("comprehensive 8") would yield an estimated 24 bits of entropy.  "Basic 16" (16 characters, no dictionary check) gets the highest entropy estimate.
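The NIST estimate referenced here is the SP 800-63-1 Appendix A heuristic; a simplified sketch (the real appendix has more nuance around the bonuses, e.g. the dictionary-check bonus tapers off for long passwords):

```python
def nist_entropy_bits(length, composition_rules=False, dictionary_check=False):
    """Simplified NIST SP 800-63-1 Appendix A estimate for user-chosen
    passwords: 4 bits for the first character, 2 bits each for characters
    2-8, 1.5 bits each for characters 9-20, 1 bit thereafter, plus 6-bit
    bonuses for composition rules and an extensive dictionary check."""
    bits = 0.0
    for i in range(1, length + 1):
        if i == 1:
            bits += 4
        elif i <= 8:
            bits += 2
        elif i <= 20:
            bits += 1.5
        else:
            bits += 1
    if composition_rules:
        bits += 6
    if dictionary_check:
        bits += 6
    return bits

print(nist_entropy_bits(8, composition_rules=True))  # 24.0 - "comprehensive 8"
print(nist_entropy_bits(16))                         # 30.0 - "basic 16"
```

Note the estimate rewards length heavily - which is why basic 16 comes out ahead on paper, even before asking whether the heuristic matches real guessing resistance.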

Users seem to use only a very few symbols (the @ sign and ! are the most popular), even though many are available to them.

Found that in general, basic 16 could be pretty good - except for dumb users. Found these passwords quite easily: baseballbaseball, 123456789012345 and xxxxxxxxxxxxxxxx.  Oops!

Some minor restrictions will make basic 16 (which is less annoying to set and easier to remember) stronger than a comprehensive 8 password.

Longer passwords though take longer to type... so that is annoying in a different way.

Recommended Policy?

Not sure - our password cracking algorithms are fine-tuned to 8 character passwords, so the fact that they are having a hard time cracking 16 character passwords may not really be because it's harder, but rather because they have the wrong tools.

So... more research on N-grams (Google, book quotes, IMDB, song lyrics, etc) - now 16 character passwords become much easier to crack (Mybonnieliesovertheocean, ImsexyandIknowit#01).  Her students used this to win the DefCon password cracking contest this year with their new tools.
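The n-gram idea can be sketched as a tiny word-chain generator: instead of enumerating random 16-character strings, the cracker guesses likely sequences of common words (the bigram model below is invented):

```python
# Invented bigram model: which words tend to follow which.
bigrams = {
    "my": ["bonnie", "love"],
    "bonnie": ["lies"],
    "lies": ["over"],
    "over": ["the"],
    "the": ["ocean"],
}

def chains(words, depth):
    """Yield word chains grown by following the bigram model."""
    yield words
    if depth:
        for nxt in bigrams.get(words[-1], []):
            yield from chains(words + [nxt], depth - 1)

candidates = ["".join(c) for c in chains(["my"], 5)]
print("mybonnieliesovertheocean" in candidates)  # True
```

A real cracker weights the chains by corpus frequency and tries the most probable first, which is why familiar phrases fall so quickly.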

Found that password meters can be frustrating - the same password gets different ratings on different meters - but they do make people create better passwords.


Did XKCD solve this all already?

So, Dr. Cranor's team studied this! Found that the passphrases were not easier to remember, and people didn't like the random word passwords (but didn't like them any less than other password rules).  They tried a method of adding "auto correct" to the random word passwords, which helped people log in faster.

Research uncovered one of the most common words that appears in passwords: MONKEY! Why? They updated their password survey and asked any user that included "monkey" in their password WHY!? A: a lot of people have pets named Monkey or a friend nicknamed Monkey or... well, they just like monkeys.

As much as they've tried, they have not found a way to make users be random. More research... :-)

Interesting thing about Dr. Cranor? She made her dress, and it's covered with a discovered-password graph (iloveyou in giant letters along the side).

Her team is starting to do more research on Mobile vs Desktop: users seem to avoid anything that involves the shift key on mobile.

Interested in going to grad school and studying this? Join her team!

Question from audience: does changing passwords make them better?  No, her research shows that if you change your password more frequently, you end up with a BAD password.  People do simple incremental changes to their passwords that make them easier to guess, particularly if the attacker has an "old" password.  The only time sysadmins should make users change their password is in response to a breach.
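The incremental-change attack is easy to sketch: given an old password, generate the small edits users typically make when forced to change it (a toy, invented rule set):

```python
import re

def increment_guesses(old):
    """Guesses derived from an old password via common user edits."""
    guesses = [old + "1", old + "!", old.capitalize()]
    match = re.match(r"^(.*?)(\d+)$", old)
    if match:
        stem, num = match.groups()
        guesses.append(stem + str(int(num) + 1))  # bump a trailing number
    return guesses

print(increment_guesses("monkey7"))
# ['monkey71', 'monkey7!', 'Monkey7', 'monkey8']
```

A handful of rules like these covers a large fraction of "changed" passwords, which is why an old password in an attacker's hands undermines the new one.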

Password reuse: sure, for junk websites (newspapers, etc), but do NOT use that for work, bank, personal email. It's better to write them down (requires someone breaking into your house, as opposed to attacking a news site and then having access to your bank account).

This blog is syndicated from Security, Beer, Theater and Biking!

Thursday, October 9, 2014

GHC14: Security: Multiple Talks

With: Morgan Eisler, Shelly Bird, Runa A. Sandvik

Visualizing Privacy: Using (Usable) Short Form Privacy Policies

Morgan Eisler, @mogasaur, works at Lookout, a mobile security company.  This year over 2 billion people worldwide use the Internet - more are added every minute. Many (most?) of these are on mobile devices.  Many companies have privacy policies, but only 12% of Internet users read privacy policies all the time - and only 20% of those that read them (even occasionally) understand them. Simply too long!

Facebook's privacy policy is longer than the constitution of the United States!

If nobody reads them, can we really say that the customers are making a choice? Certainly not an informed one. Users trust their providers - but, expectations rarely match, and can cause negative surprises that lead to loss of trust and loss of revenue.

Consider "Yo".  An app that you can share all of your contacts with, and it will send them push notifications - a literal "Yo".  The app became very popular, and was hacked overnight - suddenly people's phone numbers were no longer private.  This wasn't even an app created by a company - just a few friends having fun.

At Lookout, they made a really short form policy that you could view on just one page on a mobile device - but was it helpful?

It is important, but if people are not reading it - it's not really helpful. The NTIA does give guidelines here to help anyone create a privacy policy.

Lookout created a new short form policy that was quite simple - greyed-out icons for things like "Government" to show that they were not sharing their data with the Government.  For people they did share with, like "Carriers", you could click on the icon and get more information.

Did usability studies and found that customers liked it - but did they understand it?  People, for example, weren't sure what the icon for "user files" meant - it looked like pictures. Did that mean it only applied to pictures?  Used usability studies to clear up some of the icons.

The Flattening of the Security Landscape and Implications for Privacy

Customers are like sheep (which is not at all like they are portrayed in movies). Sheep are stubborn and if you try to push them too hard, they scatter (enter picture of sheep dog working hard :).  Even though Shelly Bird isn't "in" security, when a security breach happens - customers come to her.  She has to pay attention to everything before deployment, like making sure the BIOS is up to date.

Shelly thinks of security as a bowl - a container to store and protect your data/applications/etc. Also, like a castle - defense in depth.

Ten years ago, during a deployment, a customer said she had to remove IPsec from all of the machines. Huh?  The router/switch engineers said: That's our job!  What about the "last mile"?  Same customer didn't want IPv6 - convinced their firewall would be confused and not able to process it.

Once Shelly got through all of this - then the Intrusion Detection folks were unhappy! They could no longer read the packets.

Essentially, fear of change.

Shelly could see that the more she could push the work down the stack - the faster things worked.  For example, high level app encrypting a disk took four hours, but letting the OS do it - two!

There are other bigger problems here - credentials! The government likes authorized users to have something physical to prove their affiliations.  Shelly ended up with a dozen of these cards. Ugh.  Now they are moving them onto the mobile device, using TPMs as a trust anchor.  This is claims-based authentication, allowing business to move faster.

This is still very complicated, though, as the US Government doesn't even have trust across branches.

 People want to have multiple identities, people travel/move around and have different reasons for doing different transactions - lots of work to get this right.

The Data Brokers: Collecting, Analyzing and Selling Your Personal Information

Runa Sandvik works for the Freedom of the Press Foundation - they protect the press and help to inform the press of their rights - like those that have been arrested in Ferguson for not moving fast enough.

While she often talks about the NSA, today she's talking more about consumer privacy.

It's surprising how much companies know about you by just watching your patterns.  You are volunteering this information in exchange for a discount. Like the father that found out from Target that his daughter was pregnant. She wasn't even buying diapers or anything that obvious, but changed the products she was using in a way that indicated pregnancy to Target.

But this stuff happens online, too, and we don't even know about it.

And this information isn't just kept by the one company you are shopping at - it's getting collected by data brokers.  For example, OfficeMax addressed a letter to a man with the title "Daughter Died in Car Crash". Where did they get that data? Why did they have that?

Data brokers sell lists of rape victims, alcoholics and erectile dysfunction sufferers.  Where are they getting this? Why are they collecting it?

When asked directly, data brokers talk about caring about privacy, but don't want to share things like: how to see what information they have about you? How to remove/correct information? How to decline to share?

How many people have read the privacy policy for GHC? No hands went up... Runa did read it for us, and wasn't happy with what she found.  Things like your resume could be shared with non-recruiters.  The privacy policy also notes that they will not use encryption, unless required by law, to protect your information.  She also used a tool, Disconnect, to see what sites were gathering information from users of the GHC website - there was a data broker there (New Relic, which does help you analyze your site traffic, but what is *their* privacy policy? Will they share GHC stats with other orgs and corps?).

You can use Tor to protect yourself from these data brokers. The only way the site will know it's you is if you log in. There's no way for them, otherwise, to know who you are so they won't have anything to track against.  Runa only uses Chrome for cat photos. :-)

wow - this really goes beyond the annoying targeted banner ads!

GHC14: Accountability and Metrics for Gender Diversity

Panelists: Laszlo Bock, SVP People Operations, Google; Danielle Brown, Chief of Staff, Intel; Theresa Kushner, VP of Enterprise Information Management, VMware; Denise Menelly, Shared Service Operations Executive for Global Technology and Operations, BofA; Jeanne Hultquist, Sr Director of Strategic Initiatives, ABI.

At BofA, they talk about metrics at all levels in all positions - this stuff is important.

Laszlo Bock noted that they didn't release their diversity data for business reasons - they released it because this was just the right thing to do. Google needed to be open and honest about this.  Diverse teams are better, we know that. We were hesitant to release the numbers for the same reason as everyone else: we were afraid of getting sued! Simply the right thing to do.

Danielle Brown noted that releasing these numbers is an important part of the conversation. Intel has been releasing the data for 10 years, but perhaps a bit quietly in the past. By measuring we know where we stand and where we need to go.

Denise Menelly noted the importance of these numbers - they aren't just numbers, you have to have actions behind them.  Every senior level manager is expected to have a score card: budget, project schedule and how they are performing against gender diversity numbers.  Not only do the managers have to report, but they need to say what they are doing to continue to increment in the right direction. It's very important to see what is happening, even small changes are important and need to be watched.

VMware is data driven - so it was an easy, short conversation. Theresa Kushner showed her new CEO the numbers and he instantly said, "Yep, there's a problem - we need to do something about this". Then senior management has a new responsibility - measure, track. What you measure is what you look at - so make sure you're measuring the right things.  It's not just a number, you also have to change the culture - but how do you measure that?

 At Intel, we often found that women were working in isolation.  Trying to address this by creating networks for women that start when they start.  Making sure they immediately have a network of support.

At Google we're looking at unconscious bias - for example, if a man leaves early to pick up his kids, everyone thinks "what a great dad!" When a woman does the same thing? "Figures".  "Our tech population is 83% men - they have to behave differently." The unconscious bias training is starting to make an anecdotal difference - people are now aware they are doing this. 94% of Googlers surveyed said they will now step up and say something if they see someone demonstrating unconscious bias.

Denise Menelly noted this is taking too long. While they won the ABI award, she was surprised, as she sees there is so much work to do.  Your leaders - NOT just your HR/Diversity people, but your technical leaders - need to support you to come to the Grace Hopper Celebration.  There is an issue that women will look at a job description and not see themselves as qualified (where a man will, even though they have the same qualifications).

Sergey, when he first started Google wanted 50% of his interns to be women (they hired 4 total interns in their first year).  Sergey also has his door open to women - he realizes that they have a perspective that he just doesn't have.

At VMware the executives don't just need to mentor women, but rather sponsor them. The execs need to have a plan for doing this and have accountability for their actions.

At Intel, EVERY employee will get a bigger paycheck this year if Intel improves their gender diversity.

At Google, every manager with more than 100 people in their org gets a diversity report and a visit to discuss. There are company goals here, and when people fail to make progress it can cause reduction in pay.  Some people are convinced of this issue, some people are just wrong. We'll need to work on forcing them out or keeping their wrong opinions to themselves.  The rest, they're in the middle and we want them to have the epiphany.

At VMware, we are working on diversity because it's good for business, it's good for innovation and it's good for our product line.

Denise  from BofA noted, yes, there's a pipeline problem, but that's not the biggest issue. We need to focus on fixing the culture and retention and making this a better industry.

At Intel, it was believed that the "issue" was women were leaving mid career - but when they looked at data, that actually wasn't the issue! Focus was in the wrong place, Intel now working on promotion and advancements.  The lack of senior women wasn't caused by women leaving - it was caused by them getting stuck at mid level.

Lots of good questions about pay data, when we will see more breakdown of what these companies mean by "technical woman", if women aren't leaving - why are they stuck, etc.  Panelists answered them all honestly - unfortunately, I was in line to ask a question (but we ran out of time) so could not take notes.

Great talk - very inspiring! What are you doing in your org?  Do you think this could be handled bottom up?

This post is syndicated from Security, Beer, Theater and Biking!

Wednesday, October 8, 2014

GHC14: Leadership Strategies for High Impact Women

Presented by JJ DiGeronimo

JJ started her career over 20 years ago - not for the love of code, but because she was tired of working dead end minimum wage jobs.  She found a great place in her school and got great grades that landed her a consulting job, allowing her to travel all over the world.

We have to continue to carry the baton, but we have to find a different way of doing this - we can't take care of all things all the time and still have any energy left.  The more things she picked up in her career, in addition to taking care of her family, the less she could see how she could maintain the current state of things - she was simply doing so much.  JJ spent a lot of her time over the years interviewing more senior, more successful women to figure out how they were doing it.

Women want more influence and impact - do something slightly different to have their voice heard.

Keep in mind - your career is a game, no doubt about it. How do you keep moving yourself forward and position yourself strategically?

And you never know where you'll find these opportunities!  JJ and her husband wanted to start a family, so traveling all over the world alone was not conducive to getting pregnant. So, she took a lateral move to a new position which she ended up loving and had a lot more influence than she had expected.

But, you shouldn't leave this up to chance. What do you need to do to get to where you want to be in the next 24 months?  You need to plan ahead of time. Do you need new connections, new knowledge, new customers, new partnerships and/or new opportunities?

JJ wanted a new job at VMware and was surprised when she got a call from the hiring manager saying they were not even going to include her in the interview cycle, because she didn't have the right skills.  JJ then analyzed the people that were in the position she wanted to be in and discovered her gaps, and worked on filling those over the next few months.

How do the successful women do it?  First of all, they are master schedulers - they have to be in control of their calendar.  JJ looked at this and made a list of all the things she was committed to - she came up with FIVE pages of things. That's just not sustainable.   She had to look at two things: who  was asking her to do this?  Did they align with where she wanted to go?

People are notorious for putting things on your list - especially if you're a doer.  If you're in this session or reading these notes - that's probably you!  Be careful and make sure that what you and your teams are doing are aligned with what you should be doing and how much enjoyment do you get out of this?  You have to protect your time - do not expect others to do so!

For example, somebody asked her to do an "easy" task of mentoring 24 women over the year.  When JJ analyzed what this meant, she realized it was a 75 hour commitment! She thought about it for 24 hours... and realized she just couldn't fit it in.  JJ asked what they were really trying to get - inspire women in their organization.  She suggested instead that she come to their quarterly meetings, talk to the group and spend time afterwards talking to interested women.  The org asking was thrilled and JJ changed the commitment from a 75 hour commitment to a 6 hour commitment that was going to be rewarding.

Your goal here should be to get things off of  your list. Delegation is your best friend, especially if you've already learned the lesson from the task - give someone else an opportunity to learn! Yes, it will take longer at first, but it will save you time long term and make you happier and free up your time to do the things you want to do.

It doesn't make sense for a well-paid woman to do all of her household chores - no matter what the world and your mother have told you.  Can you barter and trade for services?  For example, help your neighbor with their wifi network and see if they can do some cooking/baking for you.

If you aren't excited about something - don't sign up. Your help won't be appreciated if you aren't bringing energy to it.  By actually letting someone else take something over for you, you're giving them opportunities to shine.

To get control of your calendar, you need to introduce yourself to the word "No."

Don't let other people give you tasks (they are ALL urgent) that are going to cause you to fall behind on your "real" job (which is, really, just a collection of tasks).

Make sure you have a list of what tasks you're working on and their priorities - put them on your white board.  When your boss comes in with a super new important task, you can ask, visually - where does it fit with all of these things?  Sometimes that can even let you remove tasks, when your boss has forgotten to tell you that something isn't important anymore.

How do you get rid of the guilt?  You will have to drop things that you're going to wish you could do (like you might not be able to make *every* soccer game) - but let your family and co-workers have a voice, that can help alleviate the guilt. For example, "which of your upcoming games are the most important and you really want me there?"

Before you say yes, think about:
  • Do you understand the work ahead of you?
  • What other commitments would interfere?
  • Is this project in line with your goals?
  • If I do this, would it be for the right reasons?
  • How will it impact my other responsibilities and commitments?
  • Who else needs to be involved to ensure success?
  • What would success look like?
  • Am I the best person to be doing this?
How did JJ prepare for her job change from sales to cloud? She blocked out time on her calendar 3 times a week to read new articles on the cloud. But how could she let folks know what she now knew?  She started posting comments on the articles she read, started writing her own articles and started sharing more on LinkedIn.

JJ also started getting into new circles - both online and in real life.  From there, she could help other groups she hadn't previously worked with, helping her to build her credibility.

You need to seek clarity, guidance and perspective.  JJ's had a surprising number of people come to her for "mentorship", but it turns out that they hate their current job. That's not a job for a mentor - that's a job for a career coach to help you to find your right direction - THEN find a mentor.

Once you have a plan, make sure your desires are known.  Don't be afraid to apply for the job - even if you're not perfectly qualified.

To get more exposure and skills, join a non-profit board and improve your leadership skills.  JJ said this is something EVERYONE needs to do.

My takeaway?

Next week - I'm resetting my calendar, and starting over. My calendar is SOLID, I have no time to get to tasks that I need to.  This is hard, as I am a first line manager, so I need to have 1:1s with people on my team - but I can control that schedule more than I currently do.  In addition, since becoming a manager I find I am driven by my calendar - all sorts of appointments end up there, often back to back to back to back... I need to start blocking off time to do email, strategy, etc - in big enough clumps where I can get things accomplished.

Also, I've been working on putting priority lists on my whiteboard - but it's never completely up to date, and as it's a white board it's not that hard to do this - and I will! (part of that time block for prioritization).

I loved listening to the other women's take-aways: one woman is going to get someone else to mow her lawn. Another is going to take some new risks. Another woman was excited that she is not "alone" in having chosen CS as a degree for the money - there is no shame in wanting to provide for your family, and you can still find the passion. Yeses need to be curtailed. Do things you enjoy that make you excited wherever possible.

What do you think you can do to better streamline your life and professional career?

This post is syndicated from Security, beer, theater....

Friday, August 29, 2014

GHC14: Seeking Online Community Volunteers!

First and foremost: I will be attending the Grace Hopper Celebration of Women in Computing again this year! I missed last year, so I am excited to be coming back!

Second: I am co-chairing the Online Community Committee again with Gail Carmichal, and we need volunteer bloggers, note-takers (on the wiki), wiki managers, tweeters, LinkedIn group managers, video bloggers, etc - so we can capture all of the content for those who were unable to get a pass to the conference this year.

Also - we could use remote volunteers to help manage questions from other volunteers, manage the online groups and keep the wiki free of spam.  So, even if you can't attend, if you are available to help before and during the conference - we want you!

We are accepting applications through September 8, 2014 - please sign up sooner than later, so we can get you set up well ahead of the conference start!  Apply now!


Valerie Fenwick
Online Communities Committee Co-Chair

This post is syndicated from Security, Beer, Theater and Biking!

Thursday, August 7, 2014

Success! Team Salty Dawgs Do Marin!

We did it!  Mark, Mike and I completed the 100K Marin Century Route on Saturday, August 2.  Mike noticed at the last rest stop that the route was actually only 57 miles, so Mark and I added a 20 minute loop at the end of the ride to make sure we got our full 62 miles in.

Stats: Since April, I rode 900 miles, burned 36,204 calories and rode for 75 hours and 35 minutes to train for this 100K ride to raise money for the American Lung Association.

Results: I rode 62 miles, climbed 3830 feet, and burned 2360 calories in 6 hours and 15 minutes. (that includes time at rest stops).

Best results: I raised nearly $4500 for the American Lung Association of California, and Oracle will be chipping in about another $1500 in matching donation.  I have been overwhelmed with everyone's generosity.

Not bad for a woman who thought she'd never ride a bicycle again just 3 years ago!

The ride was fabulous, and the rest stops had the best food! They had all the standards: m&ms, nuts, chips, cookies, PB&J and Gatorade.  But then they had even more: focaccia bread, brie, strawberries, figs, beef jerky, peaches, grapes, cherries, coffee cake and more.

That really helped me avoid stomach cramps while I rode (more fruit, less heavy/fatty stuff).

The day started out cool and nice (and missing Mike, who started 38 minutes after us...)

Mark beat me to the Big Rock (he appears to be being very silly)

but I got there eventually...

We did see a little bit of sun and Mark warmed up enough to take off his arm warmers, though they came back on for some of the descents.

Mike did find us and ride with us for a lot of the ride - completing Team Salty Dawgs!

We were still grinning at the finish!

Photos courtesy of Captivating Sports and Event Photos!

I couldn't have done this without the support of my friends and family, and without Mike and Mark.  Mark even pushed me a bit up the steepest climb - I think he was getting bored.

I felt like I could've easily done another 10 miles... with more training, maybe next year I can try 100 miles...

THANK YOU!!  Valerie