Thursday, November 5, 2015

ICMC15: Cryptography, Moore’s Law, and Hardware Foundations for Security

Paul Kocher, President, Chief Scientist, Cryptography Research

While crypto has continued to get better and protocols have improved, we are still having massive security breaches. The more complex our code gets and the larger the code base grows, the greater our odds of more bugs. If I have twice as much code but the same amount of time to look at it, the odds of missing bugs go up much faster than linearly.
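A toy back-of-the-envelope model (not from the talk; all numbers are hypothetical) makes the point concrete: if the review budget stays fixed while the code doubles, both the bug count and the per-bug miss rate rise, so the expected number of escaped bugs grows faster than linearly.

```python
def expected_escaped_bugs(n_bugs: int, p_catch: float) -> float:
    """Expected number of bugs that survive review, assuming each
    bug is independently caught with probability p_catch."""
    return n_bugs * (1.0 - p_catch)

# Baseline: 10 bugs, reviewers catch each one 90% of the time.
base = expected_escaped_bugs(10, 0.90)      # 1.0 escaped bug

# Double the code with the same review time: twice the bugs, and
# per-bug scrutiny drops (say, a 90% -> 80% catch rate).
doubled = expected_escaped_bugs(20, 0.80)   # 4.0 escaped bugs

print(doubled / base)  # doubling the code quadrupled the escapes
```

The exact catch rates are invented; the shape of the result is the point.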

The Silver Bridge on US 35 in Ohio, built in 1924, collapsed in 1967 due to an engineering issue - it gave us the term "fracture-critical components". How many fracture-critical elements are in a typical IoT device? DRAM, flash, storage, CPU logic, support logic. We're talking about billions of possible issues.

Our ability to understand simple elements often creates a false impression that we understand the complex system. Just because you understand transistors does not mean you understand a processor, let alone machine language.

We all make a bad assumption: that software will be bug-free. Almost every device you have is one to three bugs away from total compromise. Defect densities are higher than we think.

For side channels, we mistakenly think that attackers only see the binary input/output data - not true! Power and RF measurements show tiny correlations to the activity of individual gates.
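A minimal simulation (not from the talk; the leakage model, noise level, and trace count are all invented for illustration) shows how such tiny correlations give up a secret: model each power measurement as the Hamming weight of a secret-dependent intermediate value plus noise, then let the attacker correlate the predictions for each key guess against the measurements.

```python
import random

def hw(x: int) -> int:
    """Hamming weight: number of 1 bits in x."""
    return bin(x).count("1")

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

random.seed(1)
SECRET = 0x3C
plaintexts = [random.randrange(256) for _ in range(2000)]

# Simulated power trace: each sample tracks the Hamming weight of the
# secret-dependent value (plaintext XOR key) plus measurement noise.
traces = [hw(p ^ SECRET) + random.gauss(0, 2.0) for p in plaintexts]

# The attacker never sees SECRET - only plaintexts and noisy power
# samples. Correlating each guess's predicted leakage with the traces
# makes the correct key byte stand out.
best = max(range(256),
           key=lambda g: pearson([hw(p ^ g) for p in plaintexts], traces))
print(hex(best))  # with enough traces, recovers SECRET
```

Real differential power analysis works on far noisier data against real hardware, but the core idea is the same: average away the noise until the gate-level correlation emerges.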

Four properties for solutions to succeed. Hardware-based: it's the only layer where we know how to build reliable security boundaries. Deployable additively: legacy designs can't be abandoned, but are too complex to fix. Address your infrastructure: solutions must address both in-device capabilities and manufacturing/lifecycle. Best case: must have a very positive ROI. All stakeholders must benefit, and benefits must not depend on ubiquity.

We need to think about our security perimeters. If you put too much complexity inside one boundary, catastrophic failure is more likely. We need to think like the military does when storing ordnance - if one layer fails, nobody gets all of the munitions.

Software security is not scalable. There is no hope of eliminating bugs in existing software. CPU modes (like TrustZone and Ring 0) haven't helped, despite decades of trying. Separate chips/modules only work for a small subset of use cases - they are costly and distant from where security is needed. He has the most hope for on-die security modules.

Why? High performance, low power. When something is inside the die, it's cheaper to manufacture, lower latency, better performance.

The cost of putting in extra transistors to do security is largely immaterial today - and if you're bothered by it now, just wait 18 months. :-)

Chipmakers require solutions for in-device security and also for enabling infrastructure. Cryptography Research's approach is its CryptoManager solution, which protects the chip.

If you get twice the code, you aren't getting twice the most desirable features - you added the core features first! But by doubling the number of lines of code, you're increasing complexity and adding bugs.

After all of these security breaches, some government agencies in other countries have switched back to the typewriter - it's more secure.

We have to come up with solutions, otherwise security risks will erase the net benefits of new technology. For example, what is the risk vs. reward of having your air conditioner connected to the Internet? It's great to have the repair automatically called when there is trouble, but does it create an attack vector into your house or office?

All of these security issues mean job security for the people in this room, but we should still try to do better. :-)

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!