Thursday, May 16, 2019

ICMC2019: Does Open-Source Cryptographic Software Work Correctly?


Daniel J. Bernstein, Research Professor, University of Illinois at Chicago, United States


Discussion of CVE-2018-0733 - an error in the CRYPTO_memcmp function where only the least significant bit of each byte is compared. It lets an attacker forge authenticated messages in far fewer attempts than the security claims guarantee. Yes, 2^16 is lower than 2^128... Only impacts PA-RISC.
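The actual defect was in hand-written PA-RISC assembly; the C below is only a sketch of the same class of mistake (hypothetical code, not the OpenSSL source) to show why a 16-byte tag collapses to roughly 2^16 work:

#include <stddef.h>

/* Correct constant-time comparison: fold the XOR of every bit into diff. */
int memcmp_ct(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff != 0;
}

/* Buggy variant, the CVE-2018-0733 class of error: only the least
 * significant bit of each byte survives, so a 16-byte MAC effectively
 * carries 16 bits of security instead of 128. */
int memcmp_ct_buggy(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (a[i] ^ b[i]) & 1;   /* mask drops bits 1..7 of the difference */
    return diff != 0;
}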

Take a look at CVE-2017-3738... It impacts the Intel AVX2 Montgomery multiplication procedure, but how likely is it to be exploited? According to the CVE - not likely, but where is the proof?

Eric Raymond noted, in 1999, that given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone. Or, less formally, 'given enough eyeballs, all bugs are shallow'.

But who are the beta-testers? Unhappy users? That's the model used by most social media companies nowadays...

And "almost every problem" is not "all bugs" ... what about exceptions? can't those be very devastating? How do we really know that we are really looking? Who is looking for the hard bugs?

This seems to assume that developers like reviewing code - but in reality they like to write new code. The theory encourages people to release their code as fast as possible - but isn't that just releasing more bugs more quickly?

So then, does closed source stop attackers from finding bugs? Some certifications seem to award bonus points for not opening your code - but why? How long does it really take an attacker to extract, disassemble, and decompile the code? Sure, they're missing your comments, but they don't care.

Closed source will scare away some "lazy" academics, but not attackers... it just takes longer for you, as the vendor, to find out about the issue.

There is also a belief that closed source makes you more money, but is that still true? Aren't there a lot of companies making money off of support?

Dan sees open source as the only path forward - it will build confidence in what we do. Cryptography is notoriously hard to review. Math makes for subtle bugs... so do side-channel countermeasures. Don't even get started on post-quantum...

A big reason crypto code is hard to review is the pursuit of speed, since crypto is often applied to large volumes of data. This leads to many implementation variants. The Keccak Code Package has more than 20 implementations for different platforms - hard to review!

Google added hand-written Cortex-A7 assembly to the Linux kernel for Speck... even though people said Speck is not safe. They eventually switched to ChaCha... but that created more hand-written assembly.

You can apply formal verification - the code reviewer has to prove correctness. It's tedious, but not impossible - EverCrypt is starting to do this, though only for the simpler crypto operations (and you still have to worry about what the compiler might do...).
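As an illustration (not EverCrypt or F* code), here is the kind of property such a proof has to establish for all inputs rather than just sampled ones, written as a plain C check against the memcmp_ct sketch above:

#include <assert.h>
#include <string.h>

int memcmp_ct(const unsigned char *a, const unsigned char *b, size_t n);

/* Functional correctness property: the constant-time routine must agree
 * with the specification (phrased here via memcmp) for every a, b, and n.
 * A prover discharges this universally; an assert only checks it for the
 * inputs you happen to run. */
static void check_spec(const unsigned char *a, const unsigned char *b, size_t n)
{
    int spec = (memcmp(a, b, n) != 0);  /* specification */
    int impl = memcmp_ct(a, b, n);      /* implementation under review */
    assert(spec == impl);
}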

Testing is great - definitely test everything! How do we auto-generate inputs and push lots of random inputs through the code? But you may still miss the "right" input that trips a bug. There are symbolic execution tools out there (angr.io, for example).
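A minimal sketch of that kind of random-input testing (illustration only, assuming the memcmp_ct sketch above is linked in), with memcmp as the reference oracle:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int memcmp_ct(const unsigned char *a, const unsigned char *b, size_t n);

/* Random-input (differential) testing: compare the implementation against
 * memcmp as a reference oracle on random and near-equal inputs. A million
 * passing trials still does not rule out the one "right" input that trips
 * a bug - which is where symbolic execution tools come in. */
int main(void)
{
    unsigned char a[16], b[16];
    srand(1);  /* fixed seed so any failure is reproducible */

    for (int trial = 0; trial < 1000000; trial++) {
        for (size_t i = 0; i < sizeof(a); i++)
            a[i] = (unsigned char)rand();
        memcpy(b, a, sizeof(a));
        if (rand() & 1)  /* half the time, flip one random bit of one byte */
            b[rand() % sizeof(b)] ^= (unsigned char)(1u << (rand() % 8));

        int expect = (memcmp(a, b, sizeof(a)) != 0);  /* reference oracle */
        int got    = memcmp_ct(a, b, sizeof(a));      /* implementation under review */
        if (expect != got) {
            printf("mismatch at trial %d\n", trial);
            return 1;
        }
    }
    printf("1000000 random trials passed\n");
    return 0;
}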




