Friday, May 20, 2016

ICMC16: Unboxing the White-Box: Practical Attacks Against Obfuscated Ciphers

Jasper van Woudenberg, CTO North America, Riscure

Jasper has been doing white-box hacking for a long time - as a kid, he hacked assembly in a video game to get the passwords for higher levels :-)

It's important to protect the keys. Is it possible to do it with just software? White-box cryptography -> secure software crypto in an untrusted environment. This is used today in Pay TV DRMs, mobile payments... How to apply this to software environments?

Protection against key extraction in the white-box security model. A technique that allows merging a key into a given crypto algorithm: described for the first time in 2002 by S. Chow, et al. Available for DES and AES. Lookup tables are used for applying mathematical transforms to data. A known weakness is cloning/lifting.
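To make the lookup-table idea concrete, here is a minimal sketch (my illustration, not the actual Chow et al. construction): the key is folded into a precomputed table at build time, so it never appears as a separate value at run time. Real white-box designs also wrap every table in secret input/output encodings, which this toy 4-bit example omits.

```c
/*
 * Minimal sketch of the core white-box idea: the key is folded into a
 * precomputed lookup table, so it never appears as a distinct value at
 * run time.  Real designs (Chow et al.) additionally wrap every table in
 * secret input/output encodings; this toy 4-bit example omits them.
 */
#include <stdio.h>
#include <stdint.h>

/* Toy 4-bit S-box (illustrative only, not a real cipher component). */
static const uint8_t SBOX[16] = {
    0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
    0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7
};

int main(void)
{
    const uint8_t key_nibble = 0x6;   /* secret key material */
    uint8_t tbox[16];

    /* At "compile time", merge the key addition and the S-box into one table. */
    for (int x = 0; x < 16; x++)
        tbox[x] = SBOX[x ^ key_nibble];

    /* At run time only the table is used; the key itself is never loaded. */
    uint8_t input = 0x9;
    printf("T[%X] = %X\n", (unsigned)input, (unsigned)tbox[input]);
    return 0;
}
```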

Once you start applying these, you will have a huge number of lookup tables. Attacks on all academic WBC proposals focus on key extraction: the types of transformations are assumed known, while the concrete transformation and the key are unknown. In real life, we do not know much about the design.

You can do an attack on DES using fault injection. There is a challenge online for you to try yourself at whiteboxcrypto.com.

Then we got a demo of the tool retrieving a DES key by using the fault injection.

They have been able to break everything they've tried with fewer than 100 faults, except one implementation that uses output encoding.

If you can perform measurement of the crypto target, you have a good chance of getting the key.

For side channel attacks, no detailed knowledge is required. The only protection is a secret random input/output encoding.
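As a rough illustration of the statistical idea behind such a software side-channel (DCA-style) attack - my sketch, not Riscure's tooling - each key guess is used to predict one bit of an intermediate value, and the guess whose predictions best match the observed trace values wins. The "trace" below is simulated; a real attack records memory accesses or register values from the white-box binary.

```c
/*
 * Hedged sketch of a DCA-style statistical attack: for every key guess,
 * predict one bit of an intermediate value and check how well the
 * prediction matches the values observed in the execution trace.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

static const uint8_t SBOX[16] = {      /* toy 4-bit S-box */
    0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
    0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7
};

#define TRACES 200

int main(void)
{
    const uint8_t secret_key = 0xB;
    uint8_t pt[TRACES], observed[TRACES];

    /* Simulate traces: the observed sample leaks the S-box output. */
    for (int i = 0; i < TRACES; i++) {
        pt[i] = rand() & 0xF;
        observed[i] = SBOX[pt[i] ^ secret_key];
    }

    /* Difference-of-means test for each key guess on one predicted bit. */
    int best_guess = -1;
    double best_score = -1.0;
    for (uint8_t guess = 0; guess < 16; guess++) {
        double sum1 = 0, n1 = 0, sum0 = 0, n0 = 0;
        for (int i = 0; i < TRACES; i++) {
            int predicted_bit = SBOX[pt[i] ^ guess] & 1;
            if (predicted_bit) { sum1 += observed[i] & 1; n1++; }
            else               { sum0 += observed[i] & 1; n0++; }
        }
        double score = (n1 ? sum1 / n1 : 0) - (n0 ? sum0 / n0 : 0);
        if (score < 0) score = -score;
        if (score > best_score) { best_score = score; best_guess = guess; }
    }
    printf("best key guess: 0x%X (actual 0x%X)\n",
           (unsigned)best_guess, (unsigned)secret_key);
    return 0;
}
```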

To protect against side-channel attacks, you must prevent statistical dependence between intermediate values and the key. Typical countermeasures based on randomness are difficult in a white-box scenario.

Make sure you obfuscate control flow and data, and add anti-analysis and anti-tamper countermeasures.

ICMC16: Cryptography as a Service (CaaS) for Embedded Security Infrastructure

Matt Landrock, CEO, Cryptomathic

What can we expect from embedded systems? The Internet of Things... "Things": PCs, phones, smart meters, dishwashers, cars, apps.

Often we want to validate code running on the "thing" and enable the thing to carry out basic cryptographic functions, understanding that "things" in the IoT can mean pretty much anything security-wise (from high-end to low-end). If security adds too much inconvenience or cost, it will be skipped or skimped on.

HSMs are under-utilized in the IoT space. Crypto APIs tend to be complicated, auditing individual projects is expensive, and key management is often overlooked.

If we think about crypto as a service, then we only have one place to deploy the HSMs, and can get it right. In one deployment, the customer went from securing 3 applications with HSMs to over 180 with this model.

Need to make sure that all applications that need cryptography can receive service, but at the same time only provide service to legitimate users.

Cryptomathic has built a Crypto Service Gateway (CSG). CSG shares HSMs between applications, helping us get away from silos. This improves utilization of very expensive resources. In this configuration, HSMs can be added and removed while the service stays up.

CSG has a crypto firewall that only allows specified commands, issued by approved card holders, as defined by the security team. The product also focuses on making audit easy: it's in one place and easy to read.

They have created a Crypto Query Language (CQL), with commands like "DO CODESIGN FROM Dev WITH DATA 01234". This makes it easier for developers to use, encouraging them to use cryptography.
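Purely as a hypothetical illustration of what a client might do with such a command (the actual CSG transport and authentication are product-specific and not covered in the talk):

```c
/*
 * Hypothetical illustration only: the CQL command from the talk, built as a
 * string on the client side.  How the command is actually transported to and
 * authenticated by the CSG is product-specific and not described here.
 */
#include <stdio.h>

int main(void)
{
    char cmd[128];
    snprintf(cmd, sizeof(cmd), "DO %s FROM %s WITH DATA %s",
             "CODESIGN", "Dev", "01234");
    printf("CQL request: %s\n", cmd);   /* would be sent to the gateway */
    return 0;
}
```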

It is possible here to give crypto keys an expiry.  The CSG provides all key management and handles key usage policy.

Use key labels, so they are easy to find using CQL. They are implicit.


Overall, there are many more devices coming online and the easier we can make it for developers to do security, the more likely it is to happen.

ICMC16: Entropy As a Service: Unlocking the Full Potential of Cryptography

Apostol Vassilev, Research Lead–STVM, Computer Security Division, NIST

Crypto is going smaller and more lightweight: lightweight protocols, APIs, etc.

In modern cryptography, the algorithms are known. Key generation and management govern the strength of the keys. If this isn't right, the keys are not actually strong.

In 2013, researchers were able to recover keys from a smart card due to its use of a low-quality hardware RNG that was stuck in a short cycle. Why was this design used? The vendor didn't want to pay for a higher-quality piece of hardware or for patent licensing.

Look at the famous "Mining your Ps and Qs: Detection of Widespread Weak Keys in Network Devices", which found that 0.75% of TLS certificates share keys, due to insufficient entropy during key generation.
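To see why this is so damaging, here is a toy illustration (tiny numbers, purely for exposition - the actual study ran a batch GCD over millions of real certificates): if two RSA moduli share a prime because of poor entropy, a simple GCD recovers the factors, and with them the private keys.

```c
/*
 * Toy illustration of the "Mining your Ps and Qs" observation: if two RSA
 * moduli were generated with so little entropy that they share a prime,
 * a simple GCD recovers that prime and both keys are broken.  Real keys are
 * 2048+ bits; the tiny numbers below are purely illustrative.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t gcd(uint64_t a, uint64_t b)
{
    while (b != 0) {
        uint64_t t = a % b;
        a = b;
        b = t;
    }
    return a;
}

int main(void)
{
    uint64_t n1 = 61 * 53;   /* first "public modulus"  (p = 61, q = 53) */
    uint64_t n2 = 61 * 59;   /* second "public modulus" (p = 61, q = 59) */

    uint64_t p = gcd(n1, n2);    /* the shared prime falls out immediately */
    printf("shared factor: %llu -> n1 = %llu * %llu, n2 = %llu * %llu\n",
           (unsigned long long)p,
           (unsigned long long)p, (unsigned long long)(n1 / p),
           (unsigned long long)p, (unsigned long long)(n2 / p));
    return 0;
}
```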

One of the problems is that there is a lot of demand for entropy when a system boots... when the least amount of entropy is available.

Estimating randomness is hard. Take a well-known irrational number, e.g. Pi, and test the output bit sequence for randomness - it will be reported as random (NIST has verified this is true).

Check out the presentation by Viktor Fischer, Univ Lyon, UJM-Saint-Etienne, Laboratoire Hubert Curien: NIST DRBG Workshop 2016.

He noted that using the statistical test approach of SP 800-90B makes it hard to automate the estimation of entropy. But automation is critically important for the new CMVP!

The solution - an entropy service! NOT a key generation service (would you trust the government on this!?). Not similar to the NIST beacon.

Entropy as a Service (EaaS).   Followed by cool pictures :-)

Key generation still happens locally. You have to be careful how you mix data from a remote entropy server.
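One conservative way to do that mixing - a sketch of a common approach, not necessarily the exact scheme NIST proposes - is to hash local seed material together with the bytes received from the remote server, so a dishonest or compromised server can never make the combined seed weaker than the local entropy alone.

```c
/*
 * Sketch of one conservative way to mix remote EaaS output with local
 * entropy (not necessarily the scheme NIST proposes): hash the local seed
 * material together with the bytes received from each remote server.
 * Compile with: cc mix.c -lcrypto
 */
#include <stdio.h>
#include <openssl/sha.h>

int main(void)
{
    /* Placeholders: in practice these come from the local noise source and
     * from one or more independent EaaS servers. */
    unsigned char local[32]  = { /* local entropy */ 0 };
    unsigned char remote[32] = { /* bytes from remote EaaS server */ 0 };
    unsigned char seed[SHA256_DIGEST_LENGTH];

    SHA256_CTX ctx;
    SHA256_Init(&ctx);
    SHA256_Update(&ctx, local, sizeof(local));
    SHA256_Update(&ctx, remote, sizeof(remote));
    SHA256_Final(seed, &ctx);

    /* 'seed' would now be fed to the local DRBG; keys are still generated
     * locally, as the talk emphasizes. */
    printf("derived %d-byte seed\n", (int)sizeof(seed));
    return 0;
}
```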

While analyzing Linux, they discovered the process scheduling algorithm was collecting 128 bits of entropy every few seconds. Why? Who knows.

EaaS needs to worry about standard attacks on web services and protocols, like message replay, man-in-the-middle, and DNS poisoning. But there are other attack vectors too - like dishonest EaaS instances. You will need to rely on multiple servers.

EaaS servers themselves will have to protect against malicious clients, too.

Project page: http://csrc.nist.gov/projects/eaas





Thursday, May 19, 2016

ICMC16: Entropy Requirements Comparison between FIPS 140-2, Common Criteria and ISO 19790 Standards

 Richard Wang, FIPS Laboratory Manager, Gossamer Security Solutions, Tony Apted, CCTL Technical Director, Leidos

Entropy is a measure of the disorder, randomness or uncertainty in a closed system. Entropy underpins cryptography, and if it's bad, things can go wrong. Entropy sources should have a noise source, post-processing and conditioning.
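For reference, SP 800-90B assesses an entropy source in terms of min-entropy: for a source that emits sample $i$ with probability $p_i$, this is $H_{\min} = -\log_2(\max_i p_i)$, so the assessment is driven by the most likely output rather than the average case.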

There is a new component in the latest draft of SP 800-90B that discusses post-processing. There are regular health tests, so any problems can be caught quickly.

There are 3 approved methods for post-processing: Von Neumann's method, the linear filtering method, and the length-of-runs method.
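Of the three, Von Neumann's method is the simplest to show. Below is a minimal sketch (my own, assuming the raw bits are independent but possibly biased): read non-overlapping pairs, emit the first bit when the pair is 01 or 10, and discard 00 and 11, which removes the bias at the cost of throughput.

```c
/*
 * Sketch of Von Neumann's de-biasing method.  Assumes raw bits are
 * independent but possibly biased: "01" emits 0, "10" emits 1, and
 * equal pairs are discarded.
 */
#include <stdio.h>
#include <stddef.h>

/* Consume raw bits in pairs; write de-biased bits to 'out'.
 * Returns the number of output bits produced. */
static size_t von_neumann(const unsigned char *raw_bits, size_t n_bits,
                          unsigned char *out)
{
    size_t produced = 0;
    for (size_t i = 0; i + 1 < n_bits; i += 2) {
        unsigned char a = raw_bits[i], b = raw_bits[i + 1];
        if (a != b)
            out[produced++] = a;   /* 01 -> 0, 10 -> 1 */
        /* equal pairs (00, 11) are simply discarded */
    }
    return produced;
}

int main(void)
{
    /* A small biased-looking raw sample (one bit per byte for clarity). */
    unsigned char raw[] = { 1,1,1,0,1,1,0,1,1,1,1,0,0,1,1,1 };
    unsigned char out[sizeof(raw) / 2];

    size_t n = von_neumann(raw, sizeof(raw), out);
    printf("kept %zu of %zu bits:", n, sizeof(raw));
    for (size_t i = 0; i < n; i++)
        printf(" %d", out[i]);
    printf("\n");
    return 0;
}
```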

The labs have to justify how they arrived at their entropy estimates. There should be a detailed logical diagram to illustrate all of the components, sources and mechanisms that constitute an entropy source. Also do statistical analysis.

When examining ISO 19790, its clauses on entropy seemed to line up with the FIPS 140-2 IGs - so if you meet CMVP requirements, you should be ready for ISO 19790 (for entropy assessment).

Common Criteria has its own entropy requirements in the protection profiles. The Network Device PP, released in 2010, defined an extensive requirement for RNG and entropy. You have to have a hardware-based noise source, a minimum of 128 bits of entropy, and 256 bits of equivalent strength.

The update in 2012 allowed software and/or hardware entropy sources. It was derived from SP 800-90B, so very similar requirements.

Entropy documentation has to be reviewed and approved before the evaluation can formally commence.

Some vendors are having trouble documenting third-party sources, especially hardware. Lots of misuse of Intel's RDRAND.








ICMC16: Improving Module's Performance When Executing the Power-up Tests

Allen Roginsky, CMVP NIST

There are power-up tests and conditional tests. Power-up tests include the integrity test, approved algorithm tests, and critical functions tests. Conditional tests are things like tests when generating key pairs, or the bypass test.

Implementations have become more robust, and integrity tests no longer take just a second or so - they may take minutes.

Imagine a smartcard used to enter a building. If it is too slow, it may take 5-10 seconds... then the next person... then the next. Quite a traffic jam.

We don't want to drop the tests altogether, so what do we do?  What are other industries doing?

Suppose the software/firmware image is represented as a bit-string. The module breaks the string into n substrings; n is no greater than 1024. The length, k bits, of each of the first (n-1) substrings is the same; the length of the last substring is no greater than k. Then, maybe you can just look at some of the substrings.
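A minimal sketch of how such a sampled integrity check might look (my illustration, assuming per-substring SHA-256 reference digests computed at build time; the actual guidance is still being worked out):

```c
/*
 * Sketch of the random-sampling idea for the power-up integrity test:
 * the image is split into n substrings, a reference digest is stored for
 * each substring at build time, and at power-up only a randomly chosen
 * subset is re-hashed and compared.
 * Compile with: cc sample.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <openssl/sha.h>

#define N_CHUNKS   16          /* n, no greater than 1024 per the proposal */
#define SAMPLES     4          /* how many substrings to verify this boot  */

int main(void)
{
    /* Stand-in for the firmware image; the reference digests would normally
     * be computed at build time and stored with the module. */
    unsigned char image[4096];
    memset(image, 0xA5, sizeof(image));
    size_t k = sizeof(image) / N_CHUNKS;

    unsigned char ref[N_CHUNKS][SHA256_DIGEST_LENGTH];
    for (int i = 0; i < N_CHUNKS; i++)
        SHA256(image + i * k, k, ref[i]);

    /* Power-up: verify only a random sample of the substrings.  In practice
     * the sample selection itself should be unpredictable to an attacker. */
    int failed = 0;
    for (int s = 0; s < SAMPLES; s++) {
        int i = rand() % N_CHUNKS;
        unsigned char d[SHA256_DIGEST_LENGTH];
        SHA256(image + i * k, k, d);
        if (memcmp(d, ref[i], sizeof(d)) != 0)
            failed = 1;
    }
    printf("sampled integrity test %s\n", failed ? "FAILED" : "passed");
    return 0;
}
```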

A bloom filter optimization can significantly improve the efficiency of this method.

You could use a deterministic approach. You would have to keep track of the location where you ended, and hope that location doesn't get corrupted.

The CMUF is working on an update to the Implementation Guidance on doing the integrity check via random sampling.

Vendors do not want to perform each algorithm's known answer test at all, not just delay it until the first use of the algorithm. Why are we doing this test in the first place? Something to think about.

Another option offered by a questioner: maybe replace these with on-demand testing? That's a good option.

ICMC16: Finding Random Bits for OpenSSL

Denis Gauthier, Oracle

OpenSSL provides entropy for the non-FIPS case, but not for the FIPS case.

BYOE - Bring your own entropy, if you're running in FIPS mode.

OpenSSL gets its entropy from /dev/urandom, /dev/random, or /dev/srandom, but since /dev/urandom doesn't block, that's effectively where OpenSSL gets most of its entropy. And it never reseeds. We likely didn't have enough entropy when we started, and it certainly doesn't improve later.
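If you are bringing your own entropy, one way to reseed - a sketch of the general approach, not a drop-in FIPS solution - is to periodically push fresh seed material into OpenSSL's PRNG with RAND_add(), claiming only as much entropy as your own assessment of the source supports.

```c
/*
 * Sketch of "bring your own entropy": periodically feed fresh seed material
 * into OpenSSL's PRNG yourself, since the library will not reseed for you.
 * The entropy estimate passed to RAND_add() must come from your own
 * assessment of the source, not from wishful thinking.
 * Compile with: cc reseed.c -lcrypto
 */
#include <stdio.h>
#include <openssl/rand.h>

static int reseed_from_device(const char *path, size_t nbytes)
{
    unsigned char buf[64];
    if (nbytes > sizeof(buf))
        nbytes = sizeof(buf);

    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return 0;
    size_t got = fread(buf, 1, nbytes, f);
    fclose(f);

    /* Claim one byte of entropy per byte read only if you trust the source. */
    RAND_add(buf, (int)got, (double)got);
    return got == nbytes;
}

int main(void)
{
    unsigned char key[32];

    /* e.g. call this on a timer, or before generating long-lived keys */
    if (!reseed_from_device("/dev/random", 32))
        fprintf(stderr, "warning: reseed failed\n");

    if (RAND_bytes(key, sizeof(key)) != 1) {
        fprintf(stderr, "RAND_bytes failed\n");
        return 1;
    }
    printf("generated %zu random bytes\n", sizeof(key));
    return 0;
}
```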

Reseeding: not everyone will get it right... there are too many deadlines and competing priorities. Looking at an Ubuntu example that seeded based only on time(0)... not good entropy.

There is no universal solution - the amount varies with each source and its operating environment. Use hardware, if it's available. Software will rely on timers - because, hopefully, events don't happen at predictable times (not to the nanosecond).

There is a misconception that keyboard strokes and mouse movements will give you a lot of entropy... unless the machine is a headless server... :)

You could use Turbid and Randomsound to gather entropy from your soundcard.

There are a lot of software sources, like CPU, diskstats, HAVEGED, interrupts, IO, memory, net-dev, time, timer-list and TSC. Denis had a cool graph about how much entropy he was getting from these various places.

You need to do due diligence, doing careful evaluation of your sources. Things are not always as advertised.  Not just convincing yourself, but making sure the folks you're selling to trust your choices as well.

The best thing - diversify!  You need multiple independent sources. Use a scheduler to handle reseeding for forward secrecy.  But, be careful not to starve sources, either.

When using OpenSSL, bring your own entropy and reseed.


ICMC16: LibreSSL

Giovanni Bechis, Owner, System Administrator and Developer, SnB, Developer, OpenBSD

The LibreSSL project started in April 2014, after Heartbleed was discovered in OpenSSL. Many vendors and systems were impacted by this, and there is no way to tell if you have been attacked or not. Why did this happen? The OpenSSL code was too complex. Thought - should we try to fix OpenSSL or fork?

A fork was decided on because the code was too complex and intricate. This has changed more recently, but in April 2014 the OpenSSL developers were only interested in new features, not in bug fixing. Heartbleed wasn't the only reason we decided to fork; it was just that the code was too complex. For example, OpenSSL doesn't use the system malloc directly, and the allocator it does use doesn't free memory - it uses LIFO recycling. The debugging features in its malloc are useful for debugging but could be used in an attack.

At the time, pretty much all OpenSSL API headers were public. Many application developers were using interfaces they should not have been exposed to. It uses its own functions instead of things provided by libc, etc.

There is a lot of #ifdef preprocessing code to work around bugs in compilers or on specific systems.

Forked from OpenSSL 1.0.1g. Have been backporting bug fixes from that tree.

OpenSSL is the "de facto" standard and widely used. It is difficult to get patches applied upstream. They wanted to preserve the API to maintain compatibility with OpenSSL.

They want to make sure they use good coding practices and fix bugs as fast as possible.

No FIPS support, mainly because their developers are not in the US. They have removed some old ciphers (MD2, etc.) and added ChaCha20 and Poly1305.

Removed SSLv3 support. Removed dynamic engine support, mostly because there were no engines for OpenBSD so they could not test.

OpenSSL is portable, but at the expense of needing to reimplement things that are found in most implementations of libc and lots of #ifdef and #ifndef.

Some of the OpenBSD software has been switched to use LibreSSL, like the FTP client software.