Friday, May 17, 2019

ICMC19: At the Root of It All: The Cryptographic Underpinnings of Security

Karen Reinhardt, Director, Security Tools, Entrust Datacard
What is security without cryptography? That's how it used to be - we secured our computers with physical access control. That advanced to password files and access control lists, and then once we got on the network we had to advance to things like LDAP. We still relied heavily on routers and firewalls. But now that we are in the cloud... are those still effective?

We know we will have issues - so we must do monitoring and detection (IDS, IPS, logging, log analysis, etc.) - but that only helps after things have gone wrong. Wouldn't it be better to prevent the incident?

We used to secure devices by being in a physically secure environment, then we introduced VPNs - which allowed us to pretend we were in the physically secure environment.... but now we have so many connected devices in our house filled with personal and professional identity information.

Those identities are hot commodities! Ms. Reinhardt has worked many breaches and she notes the attackers are always going after the MS Active Directory.

Now even ambulances are connected to the Internet - but please don't attack them, you could put someone's life at risk.

Think about comparing crypto keys to nuts & bolts in construction. You need to use good quality nuts and bolts, and you need redundancy - or you could have a catastrophic failure (think about the recent crane collapse in Seattle).

If we have a few bad keys here and there - we might still be okay, depending on what is being protected. But, what if we lose an entire algorithm?  What if it happens before quantum computers?   We have nothing to replace RSA and ECC right now - what if something happens to them?  Should we be looking at older algorithms and ideas?

You need to assume your algorithms are going to fail and that you will need to get new keys and new algorithms out there. Think about this as plumbing - you need to be able to replace the pipes.

If we lose RSA, we lose the entire chain of trust. We can't reasonably replace every device out there - all the thermostats, traffic signals, cars, etc. It's impossible.

Good crypto alone is still not good enough - the attackers are still going to go after your users, your user directories, your insecure machines on your network, your Kerberos golden ticket... We have to slow them down.

Thursday, May 16, 2019

ICMC19: HW Equivalence Working Group

Carolyn French, Manager Cryptographic Module Validation Program, Canadian Centre for Cyber Security, Canada; Renaudt Nunez, IT Security Consultant, atsec, United States
The working group will work towards a recommendation in the form of a draft Implementation Guidance (IG) to the CMVP.

Vendors often want to submit multiple hardware modules in the same report, and therefore on the same certificate. Under what conditions can the lab perform limited operational testing on the group of modules and still provide assurance that the right testing has happened?

The basic assumption is that IG 1.22 is already met (same crypto), but the modules may have a different number of cards, chips, memory configurations, etc. For example, if you changed from a solid state drive to a classic hard disk, did you really need to do more testing? The same goes for things like field replaceable and stationary accessories.

The draft IG is out and they are looking for reviewers.

ICMC19: KMIP vs PKCS#11: There is no Contest!

Tony Cox, VP Partners, Alliances and Standards, OASIS, Australia

Tony got a question in ICMC 2018 about "which of these two standards will win?" - the answer is BOTH.

The two standards have different scopes and different areas where they are useful, but both are open standards, which should mean they are vendor independent. Both standards have informative and normative documents updated by their technical committees.

Tony gave a good overview of the specifications, including goals and documents, explaining it all - like what are profiles and what do they mean? Profiles help prove interoperability and do some baseline testing.

KMIP 2.0 is full of loads of new features - hashed passwords, OTP, delegated login, Re-Encrypt (looking forward to post quantum crypto) and PKCS#11 operation... In addition to the new features, there are lots of improvements as well.

PKCS#11 3.0 - out for public review any day now... also has loads of new things! New algorithms, support for user login and AEAD, and better functionality for interaction with KMIP (like Unique Identifiers). This started from v2.40 errata 1.

A key manager uses KMIP and HSMs leverage PKCS#11... they work together. A key manager does higher volume key management and key sharing; an HSM wants to keep the keys locked in.

PKCS#11 over KMIP is essentially giving a standardized way to do PKCS#11 over a network.

The two standards are quite complementary and have many of the same individuals or companies working on both. In the end, by following the standards we are giving the market freedom of choice.


ICMC2019: Intel SGX's Open Source Approach to 3rd Party Attestation

Dan Zimmerman, Security Technologist, Intel, United States

SGX is a set of CPU instructions that enable the creation of memory regions with security features called 'enclaves'. It has encrypted memory with strong access controls and an updatable trusted computing base (TCB). Developers can leverage this to relocate sensitive code and data into an enclave, which provides a per-process trusted execution environment (TEE).

Common use cases are key protection, confidential computing, and crypto module isolation.

SGX Remote Attestation is a demonstration that software has been properly instantiated on a platform in good standing, fully patched, and indeed running in an enclave. Attestation evidence conveys the identity of the software being attested, associated report data, and details of the unmeasured state.

The original attestation service is truly verification as a service; it is privacy preserving and based on Enhanced Privacy ID (EPID). This approach does require that you're online and can connect to a service.
The newer approach is Datacenter Attestation Primitives (Intel SGX DCAP). It is datacenter and cloud service provider focused. It offers flexible provisioning and is based on ECDSA signatures, a well-known verification algorithm. These primitives allow for construction of on-prem attestation services. This will leverage flexible launch control on the new Intel SGX enabled platforms. And best of all, it's open source! (That's how it got into the open source track :-) )
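
Since a DCAP quote is signed with ECDSA rather than EPID, the heart of an on-prem verification service is ordinary ECDSA signature verification. Below is a minimal sketch using the Python cryptography package; the attestation_key and quote_body names are placeholders for illustration, not the real SGX/DCAP data structures or APIs.

```python
# Minimal sketch: ECDSA P-256 sign/verify, the primitive that DCAP-style
# quote verification builds on. The "quote_body" here is a placeholder,
# not a real SGX quote structure.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

attestation_key = ec.generate_private_key(ec.SECP256R1())  # stands in for the enclave's attestation key
quote_body = b"enclave measurement || report data"         # placeholder for the serialized quote

signature = attestation_key.sign(quote_body, ec.ECDSA(hashes.SHA256()))

# A relying party verifies with the corresponding public key
# (in real DCAP, that key is certified by the PCK certificate chain).
try:
    attestation_key.public_key().verify(signature, quote_body, ec.ECDSA(hashes.SHA256()))
    print("quote signature verifies")
except InvalidSignature:
    print("quote signature is invalid")
```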

Provisioning Certification Key (PCK) Retrieval. Intel issues a PCK Certificate for each of its processors at various TCB levels. The retrieval tool will extract platform provisioning ID info for Intel PCS service requests. There is also a provisioning certification service and a caching service.

There is a quote generation library that has an API for generating attestation evidence for an Intel SGX based enclave, and of course a quote verification library.

SGX Remote Attestation is important as a successful attestation provides increased confidence to Relying Parties prior to deploying secrets to application enclaves. It also allows for policy based decisions based on quote verification outcomes.

ICMC19: IoT TLS: Why is it Hard?

David Brown, Senior SW Engineer, Linaro, United States

For reasons we can't really explain, we now have things like our lightbulbs, toasters and fridges on the Internet... now those devices are vulnerable to attack.

5 worst examples: Jeep Hack, Mirai Botnet, Hackable Cardiac Devices, Owlet WiFi Baby Heart Monitor and Trendnet webcam hack. In the Jeep example, they had a lot of great controls in place, but not on who could update the firmware...

James Mickens was quoted as saying "IoT Security is not Interesting". It's not interesting, because it's not different. We already know how to secure devices... so we should do it! TLS is great - so let's just use that!

But, we have some really tiny devices out there - smaller than a Raspberry Pi. They may have only tens of KB of memory and tens of MHz of CPU... how can we do TLS there?

TLS has a way of specifying which cipher suites can be used during the handshake. It's hard to change what an IoT device is using, so how can a service just start rejecting something?
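
To make that mismatch concrete, here's a hedged sketch using Python's standard ssl module (not an embedded TLS stack): if a service operator tightens the allowed cipher suites and minimum protocol version like this, any fielded IoT client that was built with only an older suite simply fails the handshake. The certificate file names are hypothetical.

```python
# Sketch: a service operator restricting its TLS configuration.
# An IoT client hard-coded to an older cipher suite or TLS version
# can no longer complete the handshake against this context.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # drop TLS 1.0/1.1 clients
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!eNULL")  # only ECDHE key exchange with AES-GCM

# ctx.load_cert_chain("server.pem", "server.key")  # hypothetical file names
print([c["name"] for c in ctx.get_ciphers()])      # what's still on offer
```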

One of the problems is that lots of folks do not implement TLS correctly - TLS done incorrectly is worse than not doing it at all.

TLS requires memory, time and randomness - all things that are in short supply on IoT devices!

Some suggestions are to pursue stream abstraction or to put TLS under the socket API, but those don't really work.

Looking at Sockets + TLS now, Zephyr network API changes, JWT, time, MQTT...

ICMC2019: Does Open-Source Cryptographic Software Work Correctly?


Daniel J. Bernstein, Research Professor, University of Illinois at Chicago, United States


Discussion on CVE-2018-0733 - an error in the CRYPTO_memcmp function, where only the least significant bit of each byte is compared. It allows an attacker to forge messages with far less effort than the security claims guarantee. Yes, 2^16 is lower than 2^128... It only impacts PA-RISC.
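
To see why comparing only the least significant bit of each byte is so damaging, here's a small Python illustration (not the OpenSSL code itself) contrasting a correct constant-time comparison with one that mimics the PA-RISC bug:

```python
# Illustration (not OpenSSL code): a correct constant-time compare vs.
# one that, like the buggy PA-RISC CRYPTO_memcmp, only looks at the
# least significant bit of each byte.
def memcmp_correct(a: bytes, b: bytes) -> bool:
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y              # accumulate every differing bit
    return len(a) == len(b) and diff == 0

def memcmp_buggy(a: bytes, b: bytes) -> bool:
    diff = 0
    for x, y in zip(a, b):
        diff |= (x ^ y) & 1        # only the least significant bit survives
    return len(a) == len(b) and diff == 0

expected_tag = bytes.fromhex("00112233445566778899aabbccddeeff")
forged_tag   = bytes(b | 0xfe for b in expected_tag)  # same low bits, different bytes

print(memcmp_correct(expected_tag, forged_tag))  # False
print(memcmp_buggy(expected_tag, forged_tag))    # True - a 16-byte tag now needs only ~2^16 guesses
```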

Take a look at CVE-2017-3738... It impacts the Intel AVX2 Montgomery multiplication procedure, but how likely is it to be exploited? According to the CVE - not likely, but where is the proof?

Eric Raymond noted, in 1999, that given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone. Or, less formally, 'given enough eyeballs, all bugs are shallow'.

But, who are the beta-testers? Unhappy users? That's the model used by most social media companies nowadays....

And "almost every problem" is not "all bugs" ... what about exceptions? can't those be very devastating? How do we really know that we are really looking? Who is looking for the hard bugs?

This seems to assume that developers like reviewing code - but in reality they like to write new code. The theory encourages people to release their code as fast as possible - but isn't that just releasing more bugs more quickly?

So then, does closed source stop attackers from finding bugs? Some certifications seem to award bonus points for not opening your code - but, why? How long does it really take an attacker to extract, disassemble and decompile the code? Sure, they're missing your comments, but they don't care.

Closed source will scare away some "lazy" academics, but not attackers... just takes longer for you as a vendor to find out about the issue.

There is also a belief that closed source makes you more money, but is that still true? Aren't there a lot of companies making money off of support?

Dan sees the only path forward through open source - it will build confidence in what we do. Cryptography is notoriously hard to review. Math makes for subtle bugs.... so do side-channel countermeasures. Don't even get started on post quantum...

A big reason it's hard to review is our pursuit of speed, since crypto is often applied to large volumes of data. This leads to variations in crypto designs. The Keccak code package has more than 20 implementations for different platforms - hard to review!

Google added hand-written Cortex-A7 assembly to the Linux kernel for Speck... even though people said Speck is not safe. They eventually switched to ChaCha... but that created more hand-written assembly.

You can apply formal verification - the code reviewer has to prove correctness. It's tedious, but not impossible - EverCrypt is starting to do this, though only for the simplest crypto operations (and you still have to worry about what the compiler might do...).

Testing is great - definitely test everything! We can auto-generate inputs and push lots of random inputs through, but you still may miss the "right" input that trips a bug. There are also symbolic execution tools out there (angr.io, for example).
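
As a tiny illustration of both the power and the limits of random testing, here's a hedged sketch of differential testing in Python: a reference implementation and a routine under test are fed random inputs and compared. A bug that only triggers on a rare, structured input (like the tag-comparison bug above) can easily survive a huge number of random trials.

```python
# Sketch: differential testing with random inputs. A reference
# implementation and an "optimized" routine under test are compared on
# random data; disagreements point to a bug, but rare trigger inputs
# may never be generated.
import os
import hmac

def compare_reference(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)     # stdlib constant-time comparison

def compare_under_test(a: bytes, b: bytes) -> bool:
    # stand-in for a hand-optimized routine; bug: only the least
    # significant bit of each byte is compared
    diff = 0
    for x, y in zip(a, b):
        diff |= (x ^ y) & 1
    return len(a) == len(b) and diff == 0

mismatches = 0
for _ in range(100_000):
    a, b = os.urandom(32), os.urandom(32)
    if compare_reference(a, b) != compare_under_test(a, b):
        mismatches += 1

# The bug only triggers when two different inputs agree in every low bit
# (probability ~2^-32 for random 32-byte pairs), so this usually prints 0.
print("mismatches found:", mismatches)
```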





Wednesday, May 15, 2019

ICMC2019: Random Numbers, Entropy Sources, and You

John Kelsey, Computer Scientist, NIST, United States

SP 800-90B - think about how random bits should be generated.

A DRBG should always sit between the entropy source and the attackers. The entropy source just gives you bits... with entropy (as per the source's promise).

SP 800-90B is not AIS31... though the two groups are talking.

Noise sources are where the entropy comes from, health tests verify the noise source, and then there is conditioning.

Noise sources must be non-deterministic, and often use ring oscillators. You have to be able to describe this in detail. This is complicated, as many vendors are relying on someone else's noise source - they either don't know the details or there is an NDA around it, and that won't get validated. The submitter also has to provide an entropy estimate and a justification for that estimate.

Health tests need to keep running to verify that the entropy source continues to work after it is deployed in the field.
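
For a flavor of what a continuous health test looks like, here's a hedged sketch of a simplified repetition count test in the spirit of SP 800-90B: it raises an alarm if the noise source gets stuck and emits the same sample too many times in a row. The cutoff formula and parameters here are illustrative, not a substitute for the spec.

```python
# Sketch of a simplified repetition count test (in the spirit of
# SP 800-90B): alarm if the same raw sample repeats too many times in
# a row, which would indicate a stuck noise source.
import math

def repetition_count_cutoff(h_min: float, alpha_exp: int = 20) -> int:
    # Cutoff chosen so a healthy source with h_min bits of entropy per
    # sample trips the alarm with probability around 2^-alpha_exp.
    return 1 + math.ceil(alpha_exp / h_min)

def monitor(samples, h_min=2.0):
    cutoff = repetition_count_cutoff(h_min)
    last, run = None, 0
    for s in samples:
        run = run + 1 if s == last else 1
        last = s
        if run >= cutoff:
            raise RuntimeError(f"health test failure: {run} identical samples in a row")

try:
    monitor([3, 1, 4, 1, 5] + [7] * 50)   # source gets stuck on the value 7
except RuntimeError as e:
    print(e)                              # raised once the run reaches the cutoff
```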

Conditioning is there to improve the quality of the output. Conditioning functions are deterministic, so they cannot add entropy...

IID = Independent and Identically Distributed - each sample is independent of all the others and independent of its position in the sequence of samples. NIST will run statistical tests to try to disprove the claim. If they can't disprove it, they assume it is true.

If you don't claim to be IID, NIST will apply many different entropy estimators against sequential datasets. They will look for things like bias after restart. If issues are found, you may get rejected or receive a lower estimate of entropy. They would rather underestimate than overestimate!
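
One of the simplest of those estimators is the most common value estimate: look at how often the most frequent sample occurs, pad that frequency with a confidence margin, and take -log2 of the result as the min-entropy per sample. Here's a hedged Python sketch of the idea (the 2.576 constant is the usual 99% confidence factor; treat the details as illustrative rather than a faithful copy of the spec):

```python
# Sketch of the "most common value" min-entropy estimate, one of the
# estimators SP 800-90B applies to non-IID data. Deliberately
# conservative: it uses an upper confidence bound on the probability
# of the most common sample.
import math
from collections import Counter

def most_common_value_estimate(samples) -> float:
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    # upper confidence bound on p_hat (2.576 ~ 99% confidence)
    p_upper = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -math.log2(p_upper)

# A heavily biased source gets a low per-sample estimate
biased = [0] * 900 + [1] * 100
print(most_common_value_estimate(biased))   # well under 1 bit per sample
```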

But... black box statistical tests can't reliably measure entropy.... Ideally you need to design it right and document it and share with NIST (where available).

Currently, for conditioning you can choose hash, HMAC, CMAC, CBC-MAC, or the DFs, all from SP 800-90A. You can also roll your own. Or just not use conditioning at all.
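
A hash-based conditioner is conceptually just this: collect many raw, low-entropy samples and compress them through an approved hash. Here's a hedged sketch using SHA-256 from the Python standard library; as noted above, the output can never contain more entropy than went in (and never more than the digest size).

```python
# Sketch: hash-based conditioning. Many raw noise samples are compressed
# into a fixed-size output. The conditioner is deterministic, so it can
# concentrate entropy but never add any; the output entropy is capped by
# both the input entropy and the digest length.
import hashlib

def condition(raw_samples: bytes, out_len: int = 32) -> bytes:
    return hashlib.sha256(raw_samples).digest()[:out_len]

# e.g. 512 noisy bytes assessed at ~0.5 bits/byte -> at most ~256 bits in,
# so claiming 256 bits out is plausible; claiming 512 bits would not be.
raw = bytes(range(256)) * 2          # placeholder for raw noise samples
seed = condition(raw)
print(seed.hex())
```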

The problems are that we can't impact performance too much, and we can't expect this level of expertise at the labs...

ICMC19: Emerging Cryptography Trends in the Internet of Things


Charles White, CTO, Fornetix, United States

In a connected world, we need to think about security in new ways. There are a lot of IoT devices out there sensing... reading... waiting....   Cryptography is very similar to IoT! In the IoT landscape, we're starting to hear about Root of Trust, Data-in-Motion Encryption and Data-at-Rest Encryption.

Sensing vs Acting - acting has more requirements for encryption and authentication. Cryptography is Identity, Authentication and Authorization. There aren't users, logins, passwords... these are small devices that have little or no human interaction.  Crypto has to be that user, per se.
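
In practice, "crypto as the user" often means a key provisioned at manufacture that the device uses to authenticate everything it sends. Here's a hedged sketch using a symmetric HMAC device key; the device ID and key are placeholders, and real deployments might use per-device certificates instead.

```python
# Sketch: a device with no user login authenticates itself with a
# pre-provisioned key. Here a symmetric HMAC key stands in for the
# device identity; the device id and key are placeholders.
import hmac, hashlib, os

DEVICE_ID = b"sensor-0042"            # hypothetical device identity
DEVICE_KEY = os.urandom(32)           # provisioned at manufacture, shared with the backend

def sign_reading(reading: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, DEVICE_ID + reading, hashlib.sha256).digest()

def backend_verify(reading: bytes, tag: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, DEVICE_ID + reading, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"temp=21.5C"
print(backend_verify(msg, sign_reading(msg)))   # True - the key *is* the identity
```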

It's all about the root of trust. When a device is going from the factory to someone's living room, the consumers need to know it hasn't been tampered with. But crypto can also be used to establish sessions, exchange information and data securely, etc.

When we talk about IoT, there is a lot of data in motion. Hard drive encryption and radio encryption both use symmetric keys - this is something we should understand how to do. Protection needs to be balanced with other requirements, such as bandwidth and battery consumption.
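
For data in motion on constrained links, authenticated symmetric encryption such as AES-GCM is the usual workhorse: one pass over the data, with a nonce and a 16-byte tag as the only overhead on the payload. Here's a hedged sketch with the Python cryptography package:

```python
# Sketch: protecting a small sensor payload in motion with AES-GCM.
# Overhead is a 12-byte nonce plus a 16-byte tag, which matters when
# bandwidth and battery are tight.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # provisioned/derived ahead of time
aesgcm = AESGCM(key)

payload = b"temp=21.5C"
nonce = os.urandom(12)                      # must never repeat for a given key
header = b"sensor-0042"                     # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, payload, header)
print(len(payload), "->", len(nonce) + len(ciphertext), "bytes on the wire")

assert aesgcm.decrypt(nonce, ciphertext, header) == payload
```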

We need to protect data at rest - but we also need to allow access. Think about a mechanic trying to access data from the CAN bus.

We can look at turning our challenges into opportunities. Can we align disparate technologies? Could we orchestrate utilization and product strategy?  What if we could do device attestation at scale?   And make the orchestration of root of trust widely available?
The next trend is how cryptography can orchestrate control and management. We need to rely on standards and interoperability, automating and simplifying.

We need a way to do key distribution and association for narrowband IoT sensors, communications infrastructure and device management.

We already have the fundamentals and the knowledge - we need to apply them to IoT in a way that makes sense.

ICMC19: FIPS 140-2 and the Cloud


Alan Halachmi, Sr. Manager, Solutions Architecture, Amazon, United States

FIPS 140-2 came out in May 2001... think about that, that was before Facebook, Gmail, etc - and way before cloud computing.

Right now validating on the cloud is impossible, as level 1 requires single operator mode - not how you will find things set up in the cloud. In fact, an IG on Operational Environment specifically notes that you cannot use things like AWS, MS Azure or Google Cloud.

But - someone like Amazon can validate one of their services, as they are the sole operator.

The security landscape is in constant flux - making it difficult to keep a module validated. Performance is often impacted in validated modules - which is not tenable for Amazon.

Amazon wanted a framework that would allow real time advancement from validated environment to validated environment. They want to make it clear that it's a multi-party environment, and with that comes shared responsibility, but it should require minimal coordination and be applied consistently across different application models. As much as possible, they want to leverage existing investments.
There needs to be a focus on automation and on defining relationships. Vendors need the ability to run their own FIPS 140 testing, so they can be assured that any changes they are making have not caused issues - then they can also test performance, etc. Fortunately, ACVP is creating a method for doing this automated testing! NIST approved!
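
The core idea of ACVP-style testing is simple even if the protocol has many details: the server sends test vectors, the module under test computes answers, and the answers are checked. Here's a much-simplified sketch of that loop; the SHA-256 value for "abc" is the standard known answer, but the dictionary layout is illustrative, not the real ACVP JSON schema.

```python
# Much-simplified sketch of automated known-answer testing in the spirit
# of ACVP: vectors come in, the implementation under test computes
# responses, and they are checked against expected values. The dict
# layout here is illustrative, not the real ACVP JSON schema.
import hashlib

test_vectors = [
    {"algorithm": "SHA2-256",
     "msg": b"abc",
     "expected": "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"},
]

def run_vectors(vectors):
    results = []
    for v in vectors:
        answer = hashlib.sha256(v["msg"]).hexdigest()   # the module under test
        results.append({"algorithm": v["algorithm"],
                        "pass": answer == v["expected"]})
    return results

print(run_vectors(test_vectors))   # [{'algorithm': 'SHA2-256', 'pass': True}]
```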

We should look to another model for validation. Think about our history - humans used to come up with hypotheses and then prove them. After the 1970s, humans came up with the hypotheses and machines proved them. Could machines do both?

Think about the surface area of your code - the most critical code is in small areas (hypervisor, kernel, etc). Attackers have more time (OSes and machines are deployed for years) and have learned from what worked in the past. Can we use formal methods for verification? Amazon has done this for TLS - it's on GitHub.

ICMC19: Keynote: Mary Ann Davidson

The opening remarks included another great cartoon from atsec, a tribute to the new ACVP (Automated Cryptographic Validation Protocol) - very funny!

Matt Keller gave us an update on the CMUF (Crypto Module User Forum). They have several working groups that are contributing to new implementation guidance from NIST. Their goals are to share information and help move the standards forward. NIST also comes and gives updates, and the forum provides a way to share ideas and suggestions on navigating a validation.

Mary Ann Davidson, Chief Security Officer, Oracle Corp., "Keeping Up with the Joneses". When Ms. Davidson started in security, nobody cared (except PTLGA - Paranoid Three Letter Government Agencies). There was hardly any 3rd party software, except for crypto. Nobody cared, so it was a quiet job - like the Maytag repair man....

But things are changing. SW and HW are ubiquitous - you can even have an Internet connected fridge. 66% of application code is now open source... we need to keep up and understand the landscape.

We need to keep up with new threats, market expectations, latest regulatory FDJ (framework du jour) and changes in the industry.

Hackers are moving towards hardware, so her ethical hacking team is now focusing on HW in addition to their SW work. HW hacking combined with IoT has greatly increased our attack surface.
Regulatory frameworks should not be tied to a specific technology or vendor. ("regulatory capture" - not a good thing).

Looking at market expectations - 3rd party code enables scarce resources to be used on innovation, not "building blocks" (ie cutting down trees to build your own house...). But, this creates a target for a hacker. Everyone loves free code, but nobody wants to invest in making it better.

Vendors need to know what is in their code - beware of 3rd party code that pulls in other code... you need to understand it all. Vendors should have fewer instances of 3rd party libraries in their code to minimize attack surfaces and to simplify and lower the cost of upgrades. That is, don't have 48 copies of one 3rd party library - have a central copy.

The Department of Commerce (DoC) is working on a Software Bill of Materials - you will have to know, as a vendor, what is in your SW. But what does that buy you? Customers typically cannot replace third party libraries in code - they have a binary, or the license forbids it. Also, just having the vulnerable code doesn't mean you are using it in a vulnerable way. Lots of resources are spent upgrading, even when it is irrelevant. Veracode noted that 95% of Java vulns in 3rd party code are NOT exploitable in the context of the application...

What could we do instead? Be honest with your customers. Describe how you use the code. Fix the worst issues the fastest. We need a way to teach the scanners about usage - i.e., not vulnerable as used.

Changes in industry can be distracting: "On prem/waterfall is so last year...". Need to keep the meaningful aspects as we move on, timelines still matter. We have to think about how long things like validations take - can't do all SW at the same time. Need to do the most relevant and do it as efficiently as possible.

We need cloud agility in certifications. NIST has 2 working groups looking at doing FIPS for crypto in the cloud, but we need to do it faster.

A perfect storm of increased regulatory scrutiny and increased use of technology has led to greater risk management inquiries. We need to assess relevant risk management concerns. You wouldn't want or need to inspect a day care provider's vacation home... not relevant.

People have asked Ms. Davidson for things like: "we have the right to pen test any system in your network", "we need the patching status of every system in your network", etc.

This is problematic because it's not germane to her particular risk management concerns. For example, she's often asked about "3 day patching" - even though the person asking knows it's not possible, but they still want it in a contract...

Mary Ann apparently makes a really good rhubarb crisp, but she's not going to force it as a standard... so don't ask her to do a non-standard certification, either. (though you may want to have some of her rhubarb crisp....)

Vendors need to be more public with what they are doing, otherwise customers will assume they're not doing something. Set up clear rules of engagement - it makes the questions more relevant and the discussions more fruitful. Keep in mind that anything vague will be misinterpreted - and needs to be challenged.

Remember - change is inevitable, embrace it and OWN it. Don't let others own the change agenda, or you won't like the result. Use only globally accepted standards where feasible instead of one-off "wants". Economics rule the world - know it, use it, own it!