Juan Gonzalez Nieto, Technical Manager, BAE Systems Applied Intelligence
FIPS 140-2 and its Annexes do not cover protocol security, but the goal of this standard (and the organizations controlling it) is to provide better crypto implementations. If the protocol around the crypto has issues, your crypto cannot protect you.
Mr. Nieto's problematic protocol example is TLS - he showed us a slide with just the vulns of the last 5 years... it ran off the page (and the font was not that large...).
One of the issues is the complexity of the protocol. From a cryptographer's point of view, it's simple: RSA key transport or signed Diffie-Hellman + encryption. In reality, it's a huge collection of RFCs that is difficult to put together.
TLS/SSL has been around since 1995, with major revisions every few years (TLS 1.3 is currently in draft). The basics of TLS are a handshake protocol and a record layer. Sounds simple, but there are so many moving parts. Key exchange + Signature + Encryption + MAC... and all of those have many possible options. When you combine all of those permutations, you end up with a horrifyingly long and complicated list (entertainingly cramped slide results). :)
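To get a feel for that combinatorial explosion, here is a toy enumeration. The component lists below are illustrative picks of mine, not the actual IANA cipher-suite registry:

```python
from itertools import product

# Illustrative (not exhaustive) component choices for a TLS cipher suite.
kex = ["RSA", "DHE", "ECDHE", "PSK"]                            # key exchange
sig = ["RSA", "ECDSA", "DSS"]                                   # signature
enc = ["AES128-CBC", "AES256-GCM", "3DES", "RC4", "CHACHA20"]   # encryption
mac = ["MD5", "SHA1", "SHA256", "SHA384"]                       # MAC / PRF hash

suites = [f"{k}-{s}-{e}-{m}" for k, s, e, m in product(kex, sig, enc, mac)]
print(len(suites))  # 240 nominal combinations from just four short lists
```

Four short lists already yield 240 nominal combinations - and the real registry has many more components and constraints between them.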
But where are the vulnerabilities showing up? Answer: everywhere (another hilarious slide ensues). Negotiation protocol, applications, libraries, key exchange, etc... all the places.
Many of the TLS/SSL cipher suites contain primitives that are vulnerable to cryptanalytic attacks and are not allowed by FIPS 140-2, like DES, MD5, SHA-1 (for signing), RC2, RC4, GOST, Skipjack...
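A quick way to see what your own OpenSSL build still offers is to scan the default cipher list for those names. This is a crude substring check sketched with Python's stdlib `ssl` module (note "DES" will also match 3DES suite names):

```python
import ssl

# Flag offered cipher suites whose OpenSSL names mention primitives
# disallowed by FIPS 140-2. Substring matching is deliberately crude:
# "DES" also catches 3DES ("DES-CBC3") names, for example.
WEAK = ("DES", "MD5", "RC2", "RC4", "GOST")

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
offered = [c["name"] for c in ctx.get_ciphers()]
flagged = [name for name in offered if any(w in name for w in WEAK)]
print(flagged)  # hopefully empty on a modern OpenSSL build
```

On recent OpenSSL builds this list is usually empty, but older or permissively configured systems can surprise you.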
RSA key transport in TLS uses RSA PKCS#1 v1.5 padding - which FIPS 140-2 does not allow, except for key transport. (See Bleichenbacher, 1998.)
There are mitigations for the Bleichenbacher attack, but as of this summer's USENIX Security conf... they're not looking great anymore. So, really, do not use static RSA key transport (as the TLS 1.3 draft proposes). Recommendation: FIPS 140 should not allow PKCS#1 v1.5 for key transport. People should use RSA-OAEP for key transport (which is already approved).
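For illustration, RSA-OAEP key wrapping with the third-party pyca/`cryptography` package looks like this. This is a sketch; the key size and hash choices here are my own, not part of the talk:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Sketch of RSA-OAEP key transport (preferred over PKCS#1 v1.5).
# Parameters (2048-bit key, SHA-256 for OAEP/MGF1) are illustrative.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

session_key = b"\x00" * 32  # stand-in for a symmetric key to transport
wrapped = private_key.public_key().encrypt(session_key, oaep)
unwrapped = private_key.decrypt(wrapped, oaep)
assert unwrapped == session_key
```

Unlike PKCS#1 v1.5, OAEP's randomized padding is designed so that decryption failures don't leak the kind of yes/no oracle Bleichenbacher exploited.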
Implementation issues, such as a predictable IV in AES-CBC mode, can expose plaintext recovery attacks. When the protocol is updated to mitigate, such as the fix in TLS 1.1/1.2 for Vaudenay's (2002) padding oracle attack, often something else comes along to take advantage of the fix (Lucky 13, a timing-based attack).
Sometimes FIPS 140-2 just can't help us - for example, with the POODLE (2014) attack on SSL 3.0 (mitigation: disable SSL 3.0), FIPS 140-2 wouldn't have helped. Authenticated encryption protocols are out of scope. Compression attacks like CRIME (2012)? Out of scope for FIPS 140-2.
Since Heartbleed, the CMVP has started asking labs to test known vulnerabilities. But, perhaps CMVP should address other well-known vulns?
Alas, most vulnerabilities occur outside of the cryptographic boundary of the module, so they are out of scope. The bigger the boundary, the more complex testing becomes. FIPS 140-2's implicit assumption - that if the crypto primitives are correct, then the protocols will likely be correct - is flawed.
Perhaps we need a new approach for validation of cryptography that includes approved protocols and protocol testing?
In my personal opinion, I would like to see some of that expanded - but WITHOUT including the protocols in the boundary. As FIPS 140-2 does not have any concept of flaw remediation, if something like Heartbleed had been inside the boundary (and missed by the testers), vendors would eventually have found it - but would have had to break their validation in order to fix it.