Friday, May 20, 2016

ICMC16: Unboxing the White-Box: Practical Attacks Against Obfuscated Ciphers

Jasper van Woudenberg, CTO North America, Riscure

Jasper has been doing white-box hacking for a long time - as a kid, he hacked assembly in a video game to get passwords for the higher levels :-)

It's important to protect the keys. Is it possible to do it with just software? White-box cryptography -> secure software crypto in an untrusted environment. This is used today in Pay TV DRMs, mobile payments... How to apply this to software environments?

Protection against key extraction in the white-box security model: a technique that allows merging a key into a given crypto algorithm, described for the first time in 2002 by S. Chow et al. Available for DES and AES. Lookup tables are used for applying mathematical transforms to data. A known weakness is cloning/lifting.

Once you start applying these, you end up with a huge number of lookup tables. Attacks on all academic WBC proposals focus on key extraction; the types of transformations are assumed known, while the concrete transformation and key are unknown. In real life, we do not know much about the design.
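As a toy illustration of how a key can be baked into a lookup table with secret input/output encodings (every constant here is made up, and real white-box designs use networks of many such tables, not one):

```python
import random

random.seed(1)
KEY = 0x3A
SBOX = [(167 * x + 29) % 256 for x in range(256)]   # toy substitution, stand-in for a real S-box

# Secret random byte encodings (bijections on 0..255)
f_in = list(range(256)); random.shuffle(f_in)
f_out = list(range(256)); random.shuffle(f_out)
inv_in = [0] * 256
for i, v in enumerate(f_in):
    inv_in[v] = i

# White-box table: the key and both encodings are baked into one lookup table,
# so the table alone reveals neither KEY nor SBOX directly.
T = [f_out[SBOX[inv_in[x] ^ KEY]] for x in range(256)]

# Evaluating the protected step is a single table lookup on encoded data
x = 0x5C
encoded_out = T[f_in[x]]

# Only someone holding the (secret) output decoding recovers the real value
inv_out = [0] * 256
for i, v in enumerate(f_out):
    inv_out[v] = i
assert inv_out[encoded_out] == SBOX[x ^ KEY]
```

Scale this up to every round operation and you get the "huge amount of lookup tables" mentioned above.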

You can do an attack on DES using fault injection. There is a challenge online for you to try yourself.

Then we got a demo of the tool retrieving a DES key by using the fault injection.

They have been able to break everything they've tried with fewer than 100 faults, except one target that uses output encoding.

If you can perform measurement of the crypto target, you have a good chance of getting the key.

For side channel attacks, no detailed knowledge is required. The only protection is a secret random input/output encoding.

To protect against side channel attacks, you must prevent statistical dependence between intermediates and the key. Typical countermeasures based on randomness are difficult in the white-box scenario.
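A minimal sketch of why that statistical dependence matters - a simulated correlation (DPA-style) key recovery, with an assumed Hamming-weight leakage model and made-up noise, nothing specific to any real target:

```python
import random
import statistics

random.seed(7)
SECRET_KEY = 0xB7
N = 400

def hw(b):
    return bin(b).count("1")

# Simulated traces: leakage is the Hamming weight of (plaintext XOR key),
# plus Gaussian noise -- a standard first-order leakage model (assumed here).
plaintexts = [random.randrange(256) for _ in range(N)]
traces = [hw(p ^ SECRET_KEY) + random.gauss(0, 0.8) for p in plaintexts]

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Rank key guesses by correlation between predicted and observed leakage;
# the statistical dependence singles out the correct key byte.
best = max(range(256), key=lambda k: pearson([hw(p ^ k) for p in plaintexts], traces))
assert best == SECRET_KEY
```

Break the intermediate-to-key dependence (masking, encodings) and this ranking stops working.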

Make sure you obfuscate control-flow and data, add anti-analysis and anti-tamper countermeasures. 

ICMC16: Cryptography as a Service (CaaS) for Embedded Security Infrastructure

Matt Landrock, CEO, Cryptomathic

What can we expect from embedded systems? Internet of Things... Things: PCs, phones, smart meters, dishwashers, cars, apps.

Often we want to validate code running on the "thing" and enable the thing to carry out basic cryptographic functions. Understand that "things" in the IoT can mean pretty much anything security-wise (from high-end to low-end). If security adds too much inconvenience or cost, it will be skipped or skimped on.

HSMs are under-utilized in the IoT space. Crypto APIs tend to be complicated, auditing individual projects is expensive, and key management is often overlooked.

If we think about crypto as a service, then we only have one place to deploy the HSMs, and can get it right. In one deployment, the customer went from securing 3 applications with HSMs to over 180 with this model.

Need to make sure that all applications that need cryptography can receive service, but at the same time only provide service to legitimate users.

Cryptomathic has built a Crypto Service Gateway (CSG). CSG shares HSMs between applications, helping us get away from silos. This improves utilization of very expensive resources. In this configuration, HSMs can be added and removed while the service stays up.

CSG has a crypto firewall that only allows specified commands and by approved card holders, as defined by the security team. The product also focuses on making audit easy. It's in one place and easy to read.

They have created a Crypto Query Language (CQL), with commands like "DO CODESIGN FROM Dev WITH DATA 01234". This makes it easier for developers to use, encouraging them to use cryptography.

It is possible here to give crypto keys an expiry.  The CSG provides all key management and handles key usage policy.

Use key labels, so keys are easy to find using CQL. The labels are implicit.

Overall, there are many more devices coming online and the easier we can make it for developers to do security, the more likely it is to happen.

ICMC16: Entropy As a Service: Unlocking the Full Potential of Cryptography

Apostol Vassilev, Research Lead–STVM, Computer Security Division, NIST

Crypto is going smaller and lightweight: lightweight protocols, APIs, etc.

In modern cryptography, the algorithms are known. Key generation and management govern the strength of the keys. If this isn't right, the keys are not actually strong.

In 2013, researchers could find keys from a smart card, due to use of low-quality hardware RNG, which was stuck in a short cycle.  Why was this design used? Didn't want to pay for a higher quality piece of hardware or licensing of patents.
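A sketch of that failure mode with a made-up toy generator - a badly parameterized LCG collapses into a short cycle, so the "random" keys it yields come from a tiny set:

```python
# A toy LCG with poorly chosen parameters (the constants here are invented
# for illustration) quickly falls into a very short cycle.
def lcg(seed, a=12, c=6, m=64):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=3)
outputs = [next(gen) for _ in range(20)]
distinct = sorted(set(outputs))
# Out of 64 possible values, only a handful ever appear -- any key material
# derived from this generator comes from a tiny, guessable set.
assert len(distinct) < 8
```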

Look at the famous "Mining your Ps and Qs: Detection of Widespread Weak Keys in Network Devices", which found that 0.75% of TLS certificates share keys, due to insufficient entropy during key generation.
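The attack behind that paper is just a gcd over public moduli - when two keys share a prime factor, one gcd recovers it. A toy sketch with tiny, purely illustrative numbers:

```python
import math

# Two "devices" that generated RSA moduli with correlated randomness end up
# sharing a prime factor (tiny numbers here, purely for illustration).
p = 1000003                     # the shared factor
q1, q2 = 1000033, 1000037       # per-device second factors
n1, n2 = p * q1, p * q2         # the two public moduli

# One gcd over the public moduli recovers the shared prime -- no factoring needed
shared = math.gcd(n1, n2)
assert shared == p
assert (n1 // shared, n2 // shared) == (q1, q2)
```

The paper did this pairwise (via batch gcd) across millions of scanned TLS keys.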

One of the problems is that there is a lot of demand for entropy when a system boots... when the least amount of entropy is available.

Estimating randomness is hard. Take a well-known irrational number, e.g. Pi, and test the output bit sequence for randomness - it will be reported as random (NIST has verified this is true).
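A quick way to see this: run a monobit-style frequency test against a stream that is completely deterministic. SHA-256 of a counter stands in for Pi here, since both are predictable yet statistically featureless:

```python
import hashlib
import math

# A fully deterministic stream: SHA-256 of a counter. Like the digits of Pi,
# it is completely predictable, yet statistically it looks random.
stream = b"".join(hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(64))
bits = [(byte >> j) & 1 for byte in stream for j in range(8)]

# Monobit (frequency) test in the style of SP 800-22
n = len(bits)
s_obs = abs(sum(2 * b - 1 for b in bits)) / math.sqrt(n)
p_value = math.erfc(s_obs / math.sqrt(2))
# A p-value above 0.01 means "no evidence of non-randomness" -- which a
# predictable stream like this will typically achieve.
print(n, round(p_value, 4))
```

Statistical tests measure uniformity, not unpredictability - which is exactly why entropy estimation is hard.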

Check out the presentation by Viktor Fischer (Univ Lyon, UJM-Saint-Etienne, Laboratoire Hubert Curien) at the NIST DRBG Workshop 2016.

He noted that using the statistical test approach of SP 800-90B makes it hard to automate the estimation of entropy. But automation is critically important for the new CMVP!

The solution: an entropy service! NOT a key generation service (would you trust the government on this!?). Not similar to the NIST beacon.

Entropy as a Service (EaaS).   Followed by cool pictures :-)

Key generation still happens locally. You have to be careful how you mix data from a remote entropy server.

While analyzing Linux, they discovered the process scheduling algorithm was collecting 128 bits of entropy every few seconds. Why? Who knows.

EaaS needs to worry about standard attacks on web services and protocols, like message replay, man-in-the-middle, and DNS poisoning. But there are other attack vectors, too - like dishonest EaaS instances. You will need to rely on multiple servers.

EaaS servers themselves will have to protect against malicious clients, too.

Project page:

Thursday, May 19, 2016

ICMC16: Entropy Requirements Comparison between FIPS 140-2, Common Criteria and ISO 19790 Standards

Richard Wang, FIPS Laboratory Manager, Gossamer Security Solutions; Tony Apted, CCTL Technical Director, Leidos

Entropy is a measure of the disorder, randomness, or uncertainty in a closed system. Entropy underpins cryptography, and if it's bad, things can go wrong. Entropy sources should have a noise source, post-processing, and conditioning.

There is a new component in the latest draft of SP 800-90B that discusses post-processing. There are regular health tests, so any problems can be caught quickly.

There are three approved methods for post-processing: Von Neumann's method, the linear filtering method, and the length-of-runs method.
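Von Neumann's method is simple enough to sketch in a few lines (the bias value below is a made-up example):

```python
import random

def von_neumann(bits):
    """Von Neumann post-processing: examine non-overlapping pairs,
    drop 00 and 11, output the first bit of 01 or 10."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

random.seed(2)
biased = [1 if random.random() < 0.8 else 0 for _ in range(10000)]   # 80% ones
debiased = von_neumann(biased)
print(sum(biased) / len(biased), sum(debiased) / len(debiased))
# the debiased mean lands near 0.5, at the cost of discarding most bits
```

The trade-off is throughput: the more biased the source, the more pairs get thrown away.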

The labs have to justify how they arrived at their entropy estimates. There should be a detailed logical diagram to illustrate all of the components, sources and mechanisms that constitute an entropy source. Also do statistical analysis.

When examining ISO 19790, its clauses on entropy seemed to line up with the FIPS 140-2 IGs - so if you meet CMVP requirements, you should be ready for ISO 19790 (for entropy assessment).

Common Criteria has its own entropy requirements in the protection profiles. The Network Device PP, released in 2010, defined an extensive requirement for RNG and entropy. You have to have a hardware-based noise source, minimum 128 bits of entropy, and 256 bits of equivalent strength.

The update in 2012 allowed software and/or hardware entropy sources. It was derived from SP 800-90B, so very similar requirements.

Entropy documentation has to be reviewed and approved before the evaluation can formally commence.

Some vendors are having trouble documenting third-party sources, especially hardware. Lots of misuse of Intel's RDRAND.

ICMC16: Improving Module's Performance When Executing the Power-up Tests

Allen Roginsky, CMVP NIST

There are power-up tests and conditional tests. Power-up includes the integrity test, approved algorithm tests, and critical functions tests. Conditional tests are things like pairwise tests when generating key pairs, and the bypass test.

Implementations have become more robust, and integrity tests no longer take just a second or so - they may take minutes.

Imagine a smartcard used to enter a building. If it is too slow, it may take 5-10 seconds... then the next person... and the next. Quite a traffic jam.

We don't want to drop the tests altogether, so what do we do?  What are other industries doing?

Suppose the software/firmware image is represented as a bit-string. The module breaks the string into n substrings; n is no greater than 1024. The length, k bits, of each of the first (n-1) substrings is the same; the length of the last substring is no greater than k. Then, maybe you can just look at some of the strings.

A bloom filter optimization can significantly improve the efficiency of this method.
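A rough sketch of the sampled-integrity idea (block size and sample count are arbitrary choices here, and a plain list of digests stands in for the more space-efficient Bloom filter):

```python
import hashlib
import random

BLOCK = 4096   # bytes per substring (k), an illustrative choice

def make_reference(image: bytes):
    """Per-block digests computed once, at build/signing time."""
    blocks = [image[i:i + BLOCK] for i in range(0, len(image), BLOCK)]
    return [hashlib.sha256(b).digest() for b in blocks]

def sampled_check(image: bytes, reference, sample=8):
    """Verify a random subset of blocks instead of the whole image."""
    blocks = [image[i:i + BLOCK] for i in range(0, len(image), BLOCK)]
    for i in random.sample(range(len(blocks)), min(sample, len(blocks))):
        if hashlib.sha256(blocks[i]).digest() != reference[i]:
            return False
    return True

image = bytes(range(256)) * 1024          # stand-in firmware image (64 blocks)
ref = make_reference(image)
assert sampled_check(image, ref)          # intact image passes quickly

corrupted = image[:5000] + b"\x00" + image[5001:]
# sampling every block is the exhaustive check and always catches the flip;
# a small sample only catches it with probability sample/64 per power-up
assert not sampled_check(corrupted, ref, sample=64)
```

The trade-off is clear: each power-up is fast, and corruption is still caught with high probability across repeated power-ups.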

You could use a deterministic approach. You would have to keep track of the location where you ended, and hope that location doesn't get corrupted.

CMUF is working on an update to the Implementation Guidance on doing the integrity check via random sampling.

Vendors do not want to perform each algorithm's known answer test at all, not just delay it until first use of the algorithm. Why are we doing this test in the first place? Something to think about.

Another option offered by a questioner: maybe replace it with on-demand testing? That's a good option.

ICMC16: Finding Random Bits for OpenSSL

Denis Gauthier, Oracle

OpenSSL provides entropy for the non-FIPS case, but not for the FIPS case.

BYOE - Bring your own entropy, if you're running in FIPS mode.

OpenSSL gets its entropy from /dev/urandom, /dev/random, /dev/srandom, but since /dev/urandom doesn't block, that's effectively where OpenSSL gets most of its entropy.  And it never reseeds. We likely didn't have enough when we started, and certainly not later.
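If you do bring your own entropy, Python's ssl module shows the shape of the reseeding call into OpenSSL. This is only a sketch of the mechanism; a FIPS-mode build would feed the module's own seeding interface instead:

```python
import os
import ssl

# Python's ssl module exposes OpenSSL's RAND interface; this sketch mixes
# fresh OS entropy into OpenSSL's pool, reseeding it explicitly.
def reseed_openssl(nbytes: int = 32) -> None:
    seed = os.urandom(nbytes)
    # second argument is the caller's entropy estimate for the supplied bytes
    ssl.RAND_add(seed, float(nbytes))

reseed_openssl()
assert ssl.RAND_status()              # PRNG reports it is sufficiently seeded
key_material = ssl.RAND_bytes(16)
assert len(key_material) == 16
```

Calling something like this periodically, rather than only at startup, is the reseeding discipline the talk is asking for.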

Reseeding: not everyone will get it right... there are too many deadlines and competing priorities. Looking at an Ubuntu example, seeding based only on time(0)... not good entropy.

There is no universal solution - the amount varies with each source and its operating environment. Use hardware, if it's available. Software will rely on timers - because, hopefully, events don't happen at predictable times (not to the nanosecond).

There is a misconception that keyboard strokes and mouse movements will give you a lot of entropy... unless the machine is a headless server... :)

You could use Turbid and Randomsound to gather entropy from your soundcard.

There are a lot of software sources, like CPU, diskstats, HAVEGED, interrupts, IO, memory, net-dev, time, timer-list and TSC. Denis had a cool graph about how much entropy he was getting from these various places.

You need to do due diligence, doing careful evaluation of your sources. Things are not always as advertised.  Not just convincing yourself, but making sure the folks you're selling to trust your choices as well.

The best thing - diversify!  You need multiple independent sources. Use a scheduler to handle reseeding for forward secrecy.  But, be careful not to starve sources, either.
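One way to sketch that mixing of multiple independent sources (the source list here is illustrative; real deployments would add hardware and jitter sources):

```python
import hashlib
import os
import time

def gather(sources) -> bytes:
    """Hash-mix samples from several independent sources into one 32-byte seed."""
    h = hashlib.sha256()
    for name, sample_fn in sources:
        sample = sample_fn()
        h.update(name.encode())                      # domain-separate each source
        h.update(len(sample).to_bytes(4, "big"))     # length-prefix to avoid ambiguity
        h.update(sample)
    return h.digest()

sources = [
    ("os", lambda: os.urandom(32)),
    ("clock", lambda: time.perf_counter_ns().to_bytes(8, "big")),
    # add hardware or jitter-based sources here when available
]
seed = gather(sources)
assert len(seed) == 32
```

Hashing the concatenation means one good source is enough for the output to be strong, even if the others turn out weak.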

When using OpenSSL, bring your own entropy and reseed.

ICMC16: LibreSSL

Giovanni Bechis, Owner, System Administrator and Developer, SnB, Developer, OpenBSD

The LibreSSL project started in April 2014, after heartbleed was discovered in OpenSSL. Many vendors and systems were impacted by this, and there is no way to tell if you have been attacked or not. Why did this happen? The OpenSSL code was too complex. The question: should we try to fix OpenSSL or fork?

They decided to fork because the code was too complex and intricate. This has changed more recently, but in April 2014 the OpenSSL developers were only interested in new features, not in bug fixing. Heartbleed wasn't the only reason they decided to fork; the deeper problem was that the code was too complex. For example, OpenSSL doesn't use the system malloc directly, and the allocator it does use doesn't free memory - it uses LIFO recycling. The debugging features in their malloc are useful for debugging but could be used in an attack.

At the time, pretty much all OpenSSL API headers were public. Many application developers were using interfaces they should not have been exposed to. It uses its own functions instead of things provided by libc, etc.

There is a lot of #ifdef preprocessing code to work around bugs in compilers or on specific systems.

Forked from OpenSSL 1.0.1g. Have been backporting bug fixes from that tree.

OpenSSL is the "de facto" standard and widely used. It is difficult to get patches applied upstream. They wanted to preserve the API to maintain compatibility with OpenSSL.

They want to make sure they use good coding practices and fix bugs as fast as possible.

No FIPS support, mainly because their developers are not in the US. They have removed some old ciphers (MD2, etc.) and added ChaCha20 and Poly1305.

Removed SSLv3 support. Removed dynamic engine support, mostly because there were no engines for OpenBSD so they could not test.

OpenSSL is portable, but at the expense of needing to reimplement things that are found in most implementations of libc and lots of #ifdef and #ifndef.

Some of the OpenBSD software has been switched to use libressl, like the FTP client software. 

ICMC16: The OpenSSL 1.1 Audit

Kenneth White

There is an Open Crypto Audit Project, originally formed to do an audit of TrueCrypt. Currently seeking non-profit status. More recently looking at OpenSSL. Why? It's everywhere!

OpenSSL 1.0.2 FIPS is in over 100 validations.

Enterprise people often say they don't care about FOSS, not realizing it's deployed very widely in their enterprise - like the Cisco VPN client.

The audit of OpenSSL was commissioned by the Linux Foundation. A pretty ambitious scope.

Most of the code in OpenSSL is written in C (70%), and currently has about 8 million lines of code. (that's a lot to audit!)

First look at BigNum, BIO (focus on composition and file functions), ASN.1 and X.509 with a 93M cert corpus, and "frankencert" fuzzing.

Next phase will cover the TLS state machine, EVP, protocol flows and core engine implementation, memory management, and the crypto core (RSA, SHA2, DH/ECDH, CBC, GCM).

Need to focus on most relevant platforms and algorithms and protocols.

Preliminary findings:
Complexity led to some potential bugs being invalidated due to pre- or post-target parsing. PEM parsing contained unexpected formats, including access to ASN.1 decoding facilities, HMAC and CMAC algorithms. A memory leak and an integer overflow were identified, but these are very likely invalid or low-severity issues. RSA uses blinding and constant-time operations by default.

From the fuzzing work, they found 280 certificates with very bizarre dependencies that resulted in diverse paths being taken. The fuzz testing for X.509 parsing did not result in any crashes. They did find bugs with some DER fuzzing, related to performance, but the right things seemed to happen.

Still looking at low impact, low likelihood, low severity potential vulnerabilities, but overall the code is looking very solid.

As Poly1305 and ChaCha20 were added recently, we'd like to take another look.

ICMC16: OpenSSL Update

Tim Hudson, Cryptsoft

The OpenSSL team had their first face-to-face meeting, ever! 11 of the 15 members got together digging into documentation and fixing bugs - and POODLE broke, so... they got to know each other quite well.

The team thinks of time as "before" April 2014 and after... Before, there were only 2 main developers, entirely on a volunteer basis; no formal decision-making process, extremely resource limited. After April 2014, they now have 2 full-time developers and 5-6 regular developers. This really helps the project.

After a wake-up call, you have to be more focused on what you should be doing. Everyone is now analyzing your code-base, looking for the next heartbleed.  Now there is more focus on fuzz testing, increased automated testing. Static code analysis tools are rapidly being updated to detect heartbleed and things like heartbleed.

New: mandatory code review for every single changeset.  [wow!]

The OpenSSL team now has a roadmap, and they are reporting updates against it. They want to make sure they continue to be "cryptography for the real world" and not ignore the "annoying" user base's legitimate concerns or feature needs.

Version 1.0.2 will be supported until 2019.  No longer will all releases be supported for essentially eternity.

OpenSSL is now changing defaults for applications, with larger key sizes, and removing platform code for platforms that are long gone from the field. Adding new algorithms, like ChaCha20 and Poly1305. The entire TLS state machine has been rewritten. Big change: internal data structures are going to be opaque, which will allow maintainers to make fixes more easily.

FIPS 140 related work paid for OpenSSL development through 2014.  It is hard to go through a validation with one specific vendor, who will have their own agenda. 

There are 244 other modules that reference OpenSSL in their validations, and another 50 that leverage it but do not list it on their boundary.

The OpenSSL FIPS 2.0 module works with OpenSSL-1.0.x. There is no funding or plans for an OpenSSL FIPS module to work with OpenSSL-1.1.x.

The hopes are that the FIPS 3.0 module will be less intrusive.

If you look at all the OpenSSL related validations, you'll see they cover 174 different operating environments.

How can you help? Download the pre-release versions and build your applications. Join the openssl-dev and/or openssl-users mailing lists. Report bugs and submit features.

If FIPS is essential to have in the code base, why hasn't anyone stepped forward with funding?

Wednesday, May 18, 2016

ICMC16: Secure Access with Open Source Authentication

Donald Malloy, LSExperts

For years, Mastercard and Visa said that implementing chips in cards cost more than the fraud, but that has all changed in recent years. EMV (Chip and PIN) is being rolled out in the US, which will push US fraud online (where the chip can't be used).

Fingerprints are non-revocable. Someone can get them from a picture or hacking into a database.

OATH is an industry consortium, the algorithms are free to use.
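The flagship OATH algorithm, HOTP (RFC 4226), fits in a few lines, checked here against the RFC's own test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA-1 over the 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Test vectors from RFC 4226, Appendix D
secret = b"12345678901234567890"
assert hotp(secret, 0) == "755224"
assert hotp(secret, 1) == "287082"
```

The token and the server each keep the counter and compute the same code - which is part of why the per-user cost question below is so pointed.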

They have started a certification program, so we can verify that vendor tokens work together.

Why is OTP still expensive? It comes in soft tokens, hard tokens, USB tokens, etc. Cost per user has consistently been too high; manufacturers continue to have a business model that overcharges users. OATH is giving away the protocols - so why is it still so expensive?

Working with LinOTP - fast and free.

Since biometrics are irrevocable, how do we get stronger passwords? Could we use behaviour analytics? Type a phrase and the computer will know it's you.


ICMC16: The Current Status and Entropy Estimation Methodology in Korean CMVP

Yongjin Yeom, Kookmin University; Seog Chung Seo, National Security Research Institute

NSR is the only testing body in Korea.

We have vendors, a testing authority, and a certification authority. NSR tests according to Korean standards. NSR develops and standardizes testing methodology with vendors.

KCMVP started in 2005, using their own standard. Starting in 2007, it leveraged international standards.

116 modules have been validated. There are more software validations than hardware validations. Most of the software validations have been in C and Java.

The process overview of KCMVP looks very similar to CAVP combined with CMVP in the US/Canadian schemes. There is algorithm verification, tests of the self-tests, etc. ;)

They have a CMVP tool to automatically verify algorithm correctness.

Just like here, there are big concerns about entropy sources. It's hard to get entropy to scale, so they want to discover the limits of entropy sources.

ICMC16: Introduction on the Commercial Cryptography Scheme in China

Di Li, Senior Consultant, atsec information security corporation

Please note: Di Li works for atsec, and is not speaking for the Chinese Government.

Their focus is on certifying both hardware and software, including HSMs and smart cards. In China, only certified products can be sold or used, and they must be sold commercially. By law, no foreign encryption products can be sold or used in China.

Additionally, they use their own algorithms. Some are classified; others leverage international algorithms like ECC; others would be considered competitors to algorithms like SHA2 or AES-GCM.

They have their own security requirements for cryptographic modules, but there are no IGs, DTRs, etc., so it is different. No such concept of "hybrid", either.

There are two roles in the scheme: OSCCA and vendor. OSCCA issues the certificates and is also the testing lab. The vendor designs and develops the product, sells it, and promotes it. Vendors report their sales figures to OSCCA every year.

There are requirements to be a vendor as well! To get sales permission, you need to demonstrate you are knowledgeable in cryptography, among other things.

Validations are updated every 5 years. As a part of this process, you additionally have to pass design review.

You cannot implement cryptography within a common chip, as there is not enough security in that chip.

Banking must use certified products. The biggest market is USB tokens and smart cards.

ICMC16: Automated Run-time Validation for Cryptographic Modules

Apostol Vassilev, Technical Director, Research Lead–STVM, Computer Security Division, NIST
Barry Fussell, Senior Software Engineer, Cisco Systems

Take a look at Verizon's breach report. It's been going for 9 years, and we see things are not getting better. It takes attackers only minutes to subvert your security, and they are financially motivated. Most of the industry doesn't even have a mature patching process.

We hope validation can help here, but vendors complain it takes too long, the testing depth is too shallow, and it is too costly and rigid.

In the past, we leveraged code reviews to make sure code was correctly implemented. Data shows, though, that code reviews only find 50% of bugs. Better than 0, but not complete. Can we do better?

Speeding up the validation process allows people to validate more products.

Patching is important - but the dilemma for the industry is "do I fix my CVE or maintain my validation?" This is not a choice we want our vendors to make.  We should have some ability for people to patch and still maintain validation.

The conformance program needs to be objective, yet we have different labs, companies, and validators relying on these written test reports. A report is a very complex essay! Reading the results becomes dizzying. We want to improve our consistency and objectivity - how can we do this? So, we asked the industry for feedback on what the industry looks like and how we could improve the program. We started the CMVP working group.

The problem is not just technical, but also economic. To make changes, you have to address all of the problems.

Additionally, we need to reconcile our (NIST's) needs with NIAP's (for common criteria). 

And a new problem - how to validate in the cloud? :)

The largest membership is in the Software Module sub group - whether that is due to the current FIPS 140-2 handling software so poorly, or reflective of a changing marketplace is unclear.

Fussell took us into a deep dive of the automated run-time validation protocol. 

The tool started out as an internal Cisco project under the oversight of David McGrew, but didn't get a lot of traction. It found a home in the CMVP working group.

One of the primary requirements of this protocol is that it be an open standard.

Case example: Cisco has a crypto library that is used in many different products. First they verify that the library is implemented correctly, but then they need a way to verify that when products integrate it they do it correctly.

The protocol is a lightweight standards-based protocol (HTTPS/TLS/JSON). No need to reinvent the wheel.
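What a request over such a protocol might look like - the field names below are invented for illustration, not taken from the actual specification:

```python
import json

# Hypothetical message shapes for an automated algorithm-testing exchange.
# All field names here are illustrative assumptions, not the real protocol.
request = {
    "operation": "getTestVectors",
    "algorithm": "AES-GCM",
    "keyLen": 256,
}
wire = json.dumps(request)                 # what would travel over HTTPS/TLS

reply_wire = '{"vectorSetId": 42, "status": "ready"}'
reply = json.loads(reply_wire)
assert json.loads(wire)["algorithm"] == "AES-GCM"
assert reply["status"] == "ready"
```

Plain JSON over TLS is exactly the "don't reinvent the wheel" point: every language and platform can already speak it.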

Authentication will be configurable - multi-factor authentication should be an option, but for some deployments you won't need a heavy authentication model.

And Barry had a video! (hope to have linked soon)

If things are going to be as cool as the video promises, you'll be able to get a certificate quickly - from a computer. :) And if you find a new...

ICMC16: FIPS Inside

Carolyn French, Manager, Cryptographic Module Validation Program, Communications Security Establishment

Canada doesn't have the same legal requirements to use FIPS 140-2 validated cryptography, but it's handled by risk management and is strongly desired. They put verbiage in their contracts around this, and also require that the modules are run in FIPS mode.

If you are not using a FIPS 140-2 validated module, we consider it plaintext.  So, if cryptography is required, then you need to use an approved algorithm, in an approved manner with sufficient strength.

Vendors can choose to validate the entire box or just the cryptography itself. The smaller the boundary, the longer you can go between revalidations.

A vendor may incorporate a validated module from another vendor (e.g., a software library) into their product. CMVP recommends that you get a letter from them confirming exactly what you're getting. Writing crypto is hard - so reuse makes a lot of sense.

When you are considering leveraging someone else's validated module, look at what is actually "inside". For example, which module is generating the keys?

You can rely on procurement lists like the DoD UC APL.

ICMC16: CAVP—Inside the World of Cryptographic Algorithm Validation Testing

Sharon Keller, Computer Scientist, NIST, CAVP.

CAVP used to be within the CMVP, because there were only 3 validated algorithms. They split in 2003, and now CAVP is a prerequisite to doing a CMVP validation with NIST. NIST's Cryptographic Technology Group determines which algorithms should be included and how they should be implemented in order to be secure. CAVP then writes test suites to validate that an implementation meets these requirements.

It's better to test something than nothing at all. In 2011, they introduced component testing, for when a component of an algorithm isn't contained in a single boundary. It tests a component of a standard, not a mathematical function.

For example, the ECDSA signature generation function has two steps: hash the message and sign the hash. For the PIV card, hashing of the message is done off the card and signing of the hash is done on the card. So, they created an ECDSA signature generation component test that takes a hash-length input and signs it.

CAVP now tests SHA3!

Need to make sure everything works correctly, and if it deviates even a little bit, the test needs to fail. The Monte Carlo test goes through about 4 million iterations, designed to exercise the entire implementation. Their tests include positive testing - given known inputs, get the right output. They additionally do some negative testing to make sure the implementation recognizes invalid and valid values. For example, change the format of the data before submitting and make sure it fails.
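A loose sketch of the chained-iteration idea behind a Monte Carlo test (much shorter than 4 million rounds, and not the exact CAVP procedure, which concatenates the three previous digests as each new message):

```python
import hashlib

def chained_digests(seed: bytes, iterations: int = 1000) -> bytes:
    """Chained hashing in the spirit of the Monte Carlo test: each output
    feeds the next input, so one internal fault derails every later digest."""
    md = seed
    for _ in range(iterations):
        md = hashlib.sha256(md).digest()
    return md

out = chained_digests(b"\x00" * 32)
assert len(out) == 32
# a single-bit difference early on produces a completely different final result,
# which is what lets the test detect "even a little bit" of deviation
assert chained_digests(b"\x01" + b"\x00" * 31) != out
```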

ICMC16: Keynote: Modern Crypto Systems and Practical Attacks

Najwa Aaraj, Vice President, Special Projects, DarkMatter

In the past, attacks came from a single user. Today, we have complex and coordinated attacks that target heads of state and world leaders. This can enable terrorism as well.

We need to worry about encryption, key management, and keeping data secure in all manners of transport. On one system, you can't just worry about the communication layer, but also the operating system and how you manage all of this.

First and foremost, we need secure protocols.  We may need non-repudiation, anonymity, etc - need to link it all together.  We need to make sure it's all there.

Of course, if the kernel and hardware have security issues, you'll be in big trouble. Need to worry about data at rest, real time integrity monitor, hardened cryptographic library, key management and hardened OS and kernel.

Encryption should be intractable by theoretical cryptanalysis, but it also needs to be implemented correctly. 

Common side-channel attacks: power analysis, EM analysis, and timing. For example, when you are generating keys, power usage should look the same for every key. The most common targets are smart cards, smart phones, and FPGAs/microcontrollers.

Countermeasures are most often implemented at the algorithm level: for example, masking/blinding with randomness, constant-time implementations, pre-computation, and leak reduction techniques.

We additionally need protocol level countermeasures to reduce the amount of leakage to less than the minimum required for key recovery using SPA/DPA/EM-based leakage and to reduce interim states that could lead to leakage.

At the hardware level, you can choose NAND gates that don't leak information through power consumption.

You need to consider all of these factors, and additionally make sure you write your software securely as well!

ICMC16: Plenary Keynote Sessions

Welcome and Introduction Ryan Hill, Community Outreach Manager, atsec information security

Changed the time of year for the conference, and location (had to make it International, after all) - and have the largest attendance to date!  Even though it's only 6 months after the last conference.  Seems the new time of year is working out.

Cryptographic Module User Forum (CMUF) Overview, Matt Keller, Vice President, Corsec 

CMUF was founded during the first ICMC, with the goal of getting government and industry to meet and discuss issues.  An open dialogue benefits all.  It's an open group.  Working on improving security policies, to make them more useful for actual users.  A new working group is spinning up to look at power on self tests. Goal is to get a lot of people, each putting in a small bit of time. Join now, and you may win a free registration to next year's ICMC!

Conference Keynote: Building our Collective Cryptographic Community
Joe Waddington, Director General, Cyber Protection, Government of Canada

How many cryptographic instances are in this room?  Given there are 270 people here, and each person has a phone (which includes several different cryptographic instances), credit cards, ID cards, car keys.... there are thousands. In one room. And everyone expects these to just work. Nobody gives much thought to whether or not they are effective, we are just trusting that these transactions will be secure.

Think about how many social media accounts in this room - think about the petabytes of information that a company like Facebook is processing every day. We all trust that they will do this in a secure manner.

Now, with IoT, we are putting cameras in our refrigerators.  We don't want other people to be able to look into our refrigerator, so that has to be encrypted as well.

When Waddington joined the Canadian government, he was not surprised to see there were 100 different departments, but was surprised that there were 100 different CIOs, and dozens of HR databases.  This is a big problem and Canada is working on resolving this and consolidating.

Cryptography is hard and takes time to get right - time well spent.  The standards are the 'simple' part here. Complex implementations and software are making this harder to get right. Often with this cloud software development, folks are thinking about supporting their application for ... months. But, we need to protect data for years (30-40 or more!).

Need to partner with government, industry and academia to make sure we are doing the right things. No single organization has the answer.

Conference Keynote: Assuring the Faithfulness of Crypto Modules
David McGrew, Cisco Fellow, Cisco Systems

A faithful module does what is expected and nothing more.  An unfaithful one might have a side channel where it could leak information.

We start out with standards. Those become open source implementations (which seem like a prerequisite to get traction for a standard), vendor implementations, etc. The encryption could become unfaithful at several stages - in the design or implementation phase. Open source seems to be a big target, with so many contributors [VAF NOTE: though most seem to have a relatively small core development group]. Companies are at risk as well; they need to worry about people being bribed or malicious.

Need to worry about just plain ol' mistakes as well (heartbleed and goto fail;).

You even have to worry about code injection attacks, like changing hard-coded values in a binary.

All sorts of areas to attack: key generation, encryption, etc.

How do we detect this? Black box testing and implementation review.   Can they catch everything? No, but at least a step in the right direction.

Reference: Mass-surveillance without the State: Strongly Undetectable Algorithm-Substitution Attacks. Why bother attacking the cipher itself, if you can undermine the randomness or change the cipher? Much easier. 

We have to worry about protocol side channels as well, like randomized padding, timing channels and variability in options, formatting, and headers.

Still, what can we do? 

Better oversight of our standards, better vetting and formal tracking of reviews for open source [VAF NOTE: quite frankly, industry should do this, too, if they aren't already!]. We need to do security reviews and track them, plus independent validation. Even better - run-time validation!

Fortunately, work is being discussed in this area.  See the CMVP working groups that have recently formed.