Friday, May 17, 2019

ICMC19: At the Root of It All: The Cryptographic Underpinnings of Security

Karen Reinhardt, Director, Security Tools, Entrust Datacard
What is security without cryptography? That's how it used to be - we secured our computers with physical access control. That advanced to password files and access control lists, and then once we got on the network we had to advance to things like LDAP. We still relied heavily on routers and firewalls. But now that we are in the cloud... are those still effective?

We know we will have issues - so we must do monitoring and detection (IDS, IPS, logging, log analysis, etc) - but that's only if things have gone wrong.  But, wouldn't it be better to prevent the incident?

We used to secure devices by being in a physically secure environment, then we introduced VPNs - which allowed us to pretend we were in the physically secure environment.... but now we have so many connected devices in our house filled with personal and professional identity information.

Those identities are hot commodities! Ms. Reinhardt has worked many breaches and she notes the attackers are always going after the MS Active Directory.

Now even ambulances are connected to the Internet - but please don't attack them, you could put someone's life at risk.

Think about comparing crypto keys to nuts and bolts in construction. You need to use good quality nuts and bolts, and you need redundancy - or you could have a catastrophic failure (think about the recent crane collapse in Seattle).

If we have a few bad keys here and there, we might still be okay, depending on what is being protected. But what if we lose an entire algorithm? What if it happens before quantum computers arrive? We have nothing to replace RSA and ECC right now - what if something happens to them? Should we be looking at older algorithms and ideas?

You need to assume your algorithms are going to fail and that you will need to get new keys and new algorithms out there. Think of this as plumbing - you need to be able to replace the pipes.
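That "replaceable pipes" idea is often called crypto agility. A minimal Python sketch, assuming a hypothetical envelope format of my own invention (the `alg`/`payload`/`tag` field names and the algorithm registry are illustrative, not from any standard): every protected blob names its own algorithm, so a retired algorithm can be swapped out without guessing how old data was protected.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical registry mapping algorithm IDs to MAC implementations.
# Swapping algorithms later means adding a registry entry and, for
# retired algorithms, removing one.
ALGORITHMS = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha512": lambda key, msg: hmac.new(key, msg, hashlib.sha512).digest(),
}

def protect(key: bytes, payload: bytes, alg: str = "hmac-sha256") -> str:
    """Wrap a payload in an envelope that records which algorithm tagged it."""
    tag = ALGORITHMS[alg](key, payload)
    return json.dumps({
        "alg": alg,  # the envelope names its own algorithm
        "payload": base64.b64encode(payload).decode(),
        "tag": base64.b64encode(tag).decode(),
    })

def verify(key: bytes, envelope: str) -> bytes:
    """Check the tag under the algorithm the envelope claims, then return the payload."""
    obj = json.loads(envelope)
    if obj["alg"] not in ALGORITHMS:  # retired or unknown algorithms are rejected
        raise ValueError("unsupported algorithm: " + obj["alg"])
    payload = base64.b64decode(obj["payload"])
    tag = base64.b64decode(obj["tag"])
    if not hmac.compare_digest(ALGORITHMS[obj["alg"]](key, payload), tag):
        raise ValueError("bad tag")
    return payload
```

The point of the sketch is only the shape: data carries an algorithm identifier, and verification dispatches on it, so replacing the pipes is a registry change rather than a data migration.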

If we lose RSA, we lose our entire chain of trust. We can't reasonably replace every device out there - all the thermostats, traffic signals, cars, etc. It's impossible.

Good crypto alone is still not good enough - the attackers are still going to go after your users, your user directories, the insecure machines on your network, your Kerberos golden ticket... We have to slow them down.

Thursday, May 16, 2019

ICMC19: HW Equivalence Working Group

Carolyn French, Manager Cryptographic Module Validation Program, Canadian Centre for Cyber Security, Canada; Renaudt Nunez, IT Security Consultant, atsec, United States
The working group will work towards a recommendation in the form of a draft Implementation Guidance (IG) to the CMVP.

Vendors often want to submit multiple hardware modules in the same report, and therefore on the same certificate. Under what conditions can the lab perform limited operational testing on the group of modules and still provide assurance that the right testing has happened?

The basic assumption is that IG 1.22 is already met (same crypto), but the modules may have a different number of cards, chips, memory configurations, etc. For example, if you changed from a solid state drive to a classic hard disk... did you really need to do more testing? The same goes for things like field replaceable and stationary accessories.

The draft IG is out and they are looking for reviewers.

ICMC19: KMIP vs PKCS#11: There is no Contest!

Tony Cox, VP Partners, Alliances and Standards, OASIS, Australia

Tony got a question at ICMC 2018 about "which of these two standards will win?" - the answer is BOTH.

The two standards have different scopes and areas of usefulness, but both are standards based, which should mean they are vendor independent. Both standards have informative and normative documents updated by their technical committees.

Tony gave a good overview of the specifications, including goals and documents - like what are profiles and what do they mean? Profiles help prove interoperability and provide some baseline testing.

KMIP 2.0 is full of new features - hashed passwords, OTP, delegated login, Re-Encrypt (looking forward to post-quantum crypto) and PKCS#11 operation... In addition to new features, there are lots of improvements as well.

PKCS#11 3.0 - out for public review any day now - also has loads of new things! New algorithms, support for user login and AEAD, and better functionality for interaction with KMIP (like Unique Identifiers). This started from v2.40 errata 1.

Key managers use KMIP and HSMs leverage PKCS#11... they work together. A key manager does higher-volume key management and key sharing; an HSM wants to keep the keys locked in.

PKCS#11 over KMIP essentially gives a standardized way to do PKCS#11 over a network.

The two standards are quite complementary and have many of the same individuals or companies working on both. In the end, by following the standards we are giving the market freedom of choice.


ICMC2019: Intel SGX's Open Source Approach to 3rd Party Attestation

Dan Zimmerman, Security Technologist, Intel, United States

SGX is a set of CPU instructions that enable the creation of memory regions with security features, called 'enclaves'. It provides encrypted memory with strong access controls and an updatable trusted computing base (TCB). Developers can leverage this to relocate sensitive code and data to the enclave, which provides a per-process trusted execution environment (TEE).

Common use cases are key protection, confidential computing, and crypto module isolation.

SGX Remote Attestation is a demonstration that software has been properly instantiated on a platform in good standing - fully patched and indeed in the enclave. Attestation evidence conveys the identity of the software being attested, associated report data and details of the unmeasured state.

The attestation service is truly verification as a service; it is privacy preserving, based on enhanced privacy ID (EPID). This approach does require that you're online and connected to a service.
The newer approach is Datacenter Attestation Primitives (Intel SGX DCAP). It is datacenter and cloud service provider focused, with flexible provisioning, and is based on ECDSA signatures, a well-known verification algorithm. These primitives allow for construction of on-prem attestation services. This will leverage flexible launch control on the new Intel SGX enabled platforms. And best of all, it's open source! (That's how it got into the open source track :-) )

Platform Certification Key (PCK) Retrieval: Intel issues a PCK certificate for each of its processors at various TCBs. The retrieval tool will extract platform provisioning ID info for Intel PCS service requests. There are also a provisioning certification service and a caching service.

There is a quote generation library that has an API for generating attestation evidence for an Intel SGX based enclave, and of course a quote verification library.

SGX Remote Attestation is important as a successful attestation provides increased confidence to Relying Parties prior to deploying secrets to application enclaves. It also allows for policy based decisions based on quote verification outcomes.

ICMC19: IoT TLS: Why is it Hard?

David Brown, Senior SW Engineer, Linaro, United States

For reasons we can't really explain, we now have things like our lightbulbs, toasters and fridges on the Internet... now those devices are vulnerable to attack.

The five worst examples: the Jeep hack, the Mirai botnet, hackable cardiac devices, the Owlet WiFi baby heart monitor and the Trendnet webcam hack. In the Jeep example, they had a lot of great controls in place, but not on who could update the firmware...

James Mickens was quoted as saying "IoT Security is not Interesting". It's not interesting because it's not different. We already know how to secure devices... so we should do it! TLS is great - so let's just use that!

But we have some really tiny devices out there - smaller than a Raspberry Pi. They have maybe tens of KB of memory and tens of MHz of CPU... how can we do TLS there?

TLS has a way of specifying which cipher suites can be used during the handshake. It's hard to change what an IoT device is using, so how can a service just start rejecting something?
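For illustration, Python's `ssl` module shows how one side pins down the negotiable suites; an IoT client shipped with a single baked-in suite has no fallback if the server's policy later drops it. The cipher string here is just an example choice, not a recommendation from the talk:

```python
import ssl

# Server context restricted to ECDHE key exchange with AES-GCM for
# TLS 1.2. (TLS 1.3 suites are configured separately and are not
# affected by set_ciphers, so they may still appear in get_ciphers().)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM")  # OpenSSL cipher-string syntax

# List what this endpoint would actually offer in the handshake.
for suite in ctx.get_ciphers():
    print(suite["name"])
```

If a constrained client only ever implemented one of the suites this server just excluded, the handshake fails and there is no way to "update" the negotiation from the server side.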

One of the problems is that lots of folks do not implement TLS correctly - TLS done incorrectly is worse than not doing it at all.
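A common way to "do TLS incorrectly" is to switch off peer verification to make certificate errors go away. A quick Python sketch of the pitfall versus the sane default:

```python
import ssl

# WRONG: encryption without authentication. The handshake will succeed
# against *any* peer, including a man in the middle - which is why TLS
# done like this can be worse than it looks.
bad = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
bad.check_hostname = False        # must be disabled before verify_mode
bad.verify_mode = ssl.CERT_NONE

# RIGHT: the default client context verifies the certificate chain
# against the system trust store and checks the hostname.
good = ssl.create_default_context()
print(good.verify_mode == ssl.CERT_REQUIRED, good.check_hostname)
```

The dangerous part is that both contexts "work" in testing; only the second one actually authenticates the peer.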

TLS requires memory, time and randomness - all things that are in short supply on IoT devices!

Some suggestions are to pursue stream abstraction or to put TLS under the socket API, but those don't really work.

Looking at Sockets + TLS now, Zephyr network API changes, JWT, time, MQTT...

ICMC2019: Does Open-Source Cryptographic Software Work Correctly?


Daniel J. Bernstein, Research Professor, University of Illinois at Chicago, United States


Discussion on CVE-2018-0733 - an error in the CRYPTO_memcmp function, where only the least significant bit of each byte is compared. It allows an attacker to forge messages with far less effort than the security claims guarantee. Yes, 2^16 is lower than 2^128... it only impacts PA-RISC.
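A toy model of the flaw (not the actual OpenSSL PA-RISC assembly) makes the forgery risk concrete: if only the low bit of each byte is compared, an attacker has to match one bit per byte instead of eight.

```python
import hmac

def broken_memcmp(a: bytes, b: bytes) -> bool:
    """Toy model of the bug: equality decided by the LSB of each byte only."""
    return len(a) == len(b) and all(((x ^ y) & 1) == 0 for x, y in zip(a, b))

tag = bytes.fromhex("00ff00ff")
forgery = bytes.fromhex("fe01fe01")  # differs in the high 7 bits of every byte

broken_accepts = broken_memcmp(tag, forgery)          # True: forgery accepted
correct_accepts = hmac.compare_digest(tag, forgery)   # False: constant-time
print(broken_accepts, correct_accepts)                # compare rejects it
```

Matching one bit per byte of a 16-byte tag is about 2^16 work, which is where the number in the advisory comes from.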

Take a look at CVE-2017-3738... It impacts Intel's AVX2 Montgomery multiplication procedure, but how likely is it to be exploited? According to the CVE - not likely, but where is the proof?

Eric Raymond noted, in 1999, that given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone. Or, less formally, "given enough eyeballs, all bugs are shallow".

But, who are the beta-testers? Unhappy users? That's the model used by most social media companies nowadays....

And "almost every problem" is not "all bugs"... what about the exceptions? Can't those be devastating? How do we know that anyone is really looking? Who is looking for the hard bugs?

This seems to assume that developers like reviewing code - but in reality they like to write new code. The theory encourages people to release their code as fast as possible - but isn't that just releasing more bugs more quickly?

So then, does closed source stop attackers from finding bugs? Some certifications seem to award bonus points for not opening your code - but, why? How long does it really take an attacker to extract, disassemble and decompile the code? Sure, they're missing your comments, but they don't care.

Closed source will scare away some "lazy" academics, but not attackers... it just takes longer for you as a vendor to find out about the issue.

There is also a belief that closed source makes you more money, but is that still true? Aren't there a lot of companies making money off of support?

Dan sees the only path forward through open source - it will build confidence in what we do. Cryptography is notoriously hard to review. Math makes for subtle bugs.... so do side-channel countermeasures. Don't even get started on post quantum...

A big reason it's hard to review is our pursuit of speed, since crypto is often applied to large volumes of data. This leads to variations in crypto designs. The Keccak code package has more than 20 implementations for different platforms - hard to review!

Google added hand-written Cortex-A7 assembly to the Linux kernel for Speck... even though people said Speck is not safe. They eventually switched to ChaCha... but that created more hand-written assembly.

You can apply formal verification - the code reviewer has to prove correctness. It's tedious, but not impossible - EverCrypt is starting to do this, but only for the simplest crypto operations (and you still have to worry about what the compiler might do...).

Testing is great - definitely test everything! How can we auto-generate inputs and get lots of random inputs going through? Even then, you still may miss the "right" input that trips a bug. There are symbolic execution tools out there (angr.io, for example).
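As a sketch of what random-input testing against a reference looks like, here is a toy (big-integer, non-constant-time) Montgomery multiplication checked against Python's built-in arithmetic. Bugs like CVE-2017-3738 hide in exactly this kind of carry and reduction handling, which is why differential testing against a slow-but-obvious reference is so useful:

```python
import random

def mont_mul(a: int, b: int, n: int, r: int, n_prime: int) -> int:
    """Return a*b*r^-1 mod n using Montgomery (REDC) reduction."""
    t = a * b
    m = ((t % r) * n_prime) % r   # m chosen so that t + m*n ≡ 0 (mod r)
    u = (t + m * n) // r          # exact division by construction
    return u - n if u >= n else u

# Randomized differential test against the reference implementation.
random.seed(1)
bits = 64
r = 1 << bits
n = random.getrandbits(bits) | 1          # random odd modulus, n < r
n_prime = (-pow(n, -1, r)) % r            # n * n_prime ≡ -1 (mod r)
r_inv = pow(r, -1, n)                     # reference: slow but obviously right
for _ in range(1000):
    a, b = random.randrange(n), random.randrange(n)
    assert mont_mul(a, b, n, r, n_prime) == (a * b * r_inv) % n
```

Here Python does the carries for us, so the harness passes; the point is that a hand-written assembly version would be run through the same harness, and symbolic execution tools can then hunt for the inputs that random sampling misses.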





Wednesday, May 15, 2019

ICMC2019: Random Numbers, Entropy Sources, and You

John Kelsey, Computer Scientist, NIST, United States

SP 800-90B - think about how random bits should be generated.

A DRBG should always sit between the entropy source and the attackers. The entropy source just gives you bits... with entropy (as per the source's promise).
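As a sketch of what sits between the noise source and the consumer, here is a stripped-down HMAC_DRBG in Python. It follows the shape of SP 800-90A's HMAC_DRBG (section 10.1.2) but omits reseed counters, personalization strings, and prediction resistance, so it is illustrative only, not a compliant implementation:

```python
import hashlib
import hmac

class SimpleHmacDrbg:
    """Simplified HMAC_DRBG over SHA-256 - illustration only."""

    def __init__(self, entropy_input: bytes):
        self.K = b"\x00" * 32
        self.V = b"\x01" * 32
        self._update(entropy_input)  # instantiate: fold in the seed

    def _hmac(self, key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(self, data: bytes) -> None:
        # The 90A update function: stir provided data into (K, V).
        self.K = self._hmac(self.K, self.V + b"\x00" + data)
        self.V = self._hmac(self.K, self.V)
        if data:  # second round only when data was provided
            self.K = self._hmac(self.K, self.V + b"\x01" + data)
            self.V = self._hmac(self.K, self.V)

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.V = self._hmac(self.K, self.V)
            out += self.V
        self._update(b"")  # advance state so outputs don't repeat
        return out[:n]
```

Even in this toy form, the architectural point from the talk holds: attackers see only DRBG output, never the raw noise, and the output is deterministic given the seed, so all the actual unpredictability comes from the entropy input.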

SP 800-90B is not AIS31... though the two groups are talking.

Noise sources are where the entropy comes from; health tests verify the noise source; and conditioning improves the output.

Noise sources must be non-deterministic, often using ring oscillators. You have to be able to describe this in detail. This is complicated, as many vendors rely on someone else's noise source. Either they don't know the details or there is an NDA around them - that won't get validated. The submitter also has to provide an entropy estimate and a justification for that estimate.

Health tests need to keep running to verify the entropy source continues to work after it is deployed in the field.

Conditioning is there to improve the output distribution. Conditioning functions are deterministic, so they cannot add entropy...

IID = Independent and Identically Distributed - each sample is independent of all the others and independent of its position in the sequence of samples. NIST will run statistical tests to try to disprove the claim. If they can't disprove it, they assume it is true.

If you don't claim to be IID, NIST will apply many different entropy estimators against sequential datasets. They will look for things like bias after restart. This may get you rejected, or a lower estimate of entropy, if issues are found. They would rather underestimate than overestimate!
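One of the simplest estimators in the 800-90B family is the Most Common Value estimate. This Python sketch follows the shape of that estimator (observed frequency of the most common value, padded with a 99% upper confidence bound, then -log2) but is a simplified illustration, not the validated test suite:

```python
import math
from collections import Counter

def mcv_min_entropy(samples) -> float:
    """Most Common Value estimate of min-entropy per sample.

    Uses an upper confidence bound on the most common value's
    probability, which deliberately biases toward underestimating
    the entropy - the conservative direction.
    """
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    p_upper = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / n))
    return -math.log2(p_upper)
```

Feeding it heavily biased samples drives the estimate toward zero, while near-uniform bytes score close to 8 bits per sample - and, as the talk notes, even a battery of such black-box tests can only bound entropy from observed data, not prove it.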

But... black box statistical tests can't reliably measure entropy... Ideally you need to design it right, document it, and share the design with NIST (where available).

Currently for conditioning you can choose: hash, HMAC, CMAC, CBC-MAC, or the DFs, all from SP 800-90A. You can also roll your own. Or just don't use conditioning at all.
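As a toy illustration of the vetted-function option, here is HMAC-SHA-256 used as a conditioning component in Python. The all-zero key and sample sizes are my own illustrative choices, not values from any validation:

```python
import hashlib
import hmac

def condition(raw_noise: bytes, key: bytes = b"\x00" * 32) -> bytes:
    """Compress raw noise samples into a fixed 256-bit output.

    Deterministic, so the output can never contain more entropy than
    went in - conditioning only smooths the distribution of whatever
    entropy the noise source actually delivered.
    """
    return hmac.new(key, raw_noise, hashlib.sha256).digest()

# Biased raw samples: only the low nibble of each byte ever varies,
# yet the conditioned output looks uniform. Looking uniform is not
# the same as containing 256 bits of entropy.
raw = bytes(b & 0x0F for b in range(64))
print(condition(raw).hex())
```

This is exactly why the estimate-and-justify step comes before conditioning: the output of a good conditioner always looks random, regardless of how weak the input was.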

The problems are: we can't impact performance too much, and we can't expect this level of expertise at the labs...