Friday, May 11, 2018

ICMC18: Update from the "Security Policy" Working Group

Update from the “Security Policy” Working Group (U32a) Ryan Thomas, Acumen Security, United States

This was a large group effort, with lots of people coming together. The security policy is used by the product vendor, the CST laboratory (to validate), the CMVP, and users and auditors (was it configured in the approved fashion?).

The working group started in 2016 to set up a template, with the big goals of efficiency and consistency. When they started, they focused on tables of allowed and disallowed (non-approved) algorithms, creation of a keys and CSPs table, and approved and non-approved services and mapping the difference.

But, the tables were getting unwieldy, and we were told there were changes coming. Then folks started getting ideas on adding it to the automation work. So, the group took a break after helping with the update of IG 9.5.

Fast forwarding to today, many modules leverage previously validated libraries (OpenSSL, Bouncy Castle, NSS) so the documents should be very similar... but not always. Often still very inconsistent. New goal is to target 80% of the validation types, not all.

Creating templates and example security policies will have people coming from a common baseline. This will be less work for everyone, and hopefully get us to the certificate faster!

By Summer 2018, hope to have a level 1 and level 2 security policy. This is not a new requirement, but a guideline. It will point you to the relevant IGs / section and just help you streamline your work and CMVP's work.

Need to harmonize the current tables to the template. It will be distributed on the CMUF website when complete.

While doing this work, they discovered a few things were not well understood across the group (like the requirements for listing the low-level CPU versions).

Got feedback from the CMVP about the most common comments they send back on security policies - like how does the module meet IG A.5 shall statements? How does XTS AES meet IG A.9? Etc.

ICMC18: Keys, Hollywood and History: The Truth About ICANN and the DNSSEC Root Key

Keys, Hollywood, and History: The Truth About ICANN and the DNSSEC Root Key (U31c) Richard Lamb, Self-Employed, United States

Started with a segment about the Internet phone book from a television show. Richard notes they got a lot of things right, though the code isn't literally broken up across 7 cards - there are indeed 7 smart cards and 7 people all over the world that help ICANN.

Did a quick demonstration of how DNS works in the room, and learned about how important it truly is. Dan Kaminsky's DNSSEC exploit at DefCon 2008 at least drew attention to how important DNS is.

The other source of trust on the Internet is CA certificate roots, and he encouraged all web traffic to be encrypted.

Four times a year, people really do get together to do a public key ceremony. You can come watch if you want - just like they said on TV!  There are at least 12 people involved in the key ceremony, due to the threshold schemes used by HSM vendors. The members must be from all over the world, and cannot be all (or even mostly) Americans. They are Trusted Community Representatives (TCRs).
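
As an aside, the m-of-n threshold idea behind the ceremony can be illustrated with a toy Shamir secret-sharing sketch (hypothetical demo parameters, nothing like ICANN's actual setup):

```python
import random

PRIME = 2**127 - 1  # demo field size; real deployments use larger parameters

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # modular inverse via Fermat's little theorem (PRIME is prime)
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = split(secret=20180511, n=7, k=5)
assert reconstruct(shares[:5]) == 20180511   # any 5 of the 7 cards suffice
assert reconstruct(shares[2:7]) == 20180511
```

With fewer than k shares, the polynomial is underdetermined and every candidate secret remains equally likely - which is why losing a few cards is survivable but stealing a few is useless.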

The Smart Cards are stored in a credential safe. The HSM is in a separate safe, there are iris scans available.  It is all live recorded and in a secure room. Process is certified. Shielded spaces, protected tamper evident bags (changed bags after someone was able to get into the bag w/out evidence).

The presentation moved very fast and lots of interesting things in there - can't wait to get access to the slides.

ICMC18: Panel Discussion: The Future of HSMs and New Technology for Hardware Based Security

Panel Discussion: The Future of HSMs and New Technology for Hardware Based Security Solutions (A31a) Tony Cox, Cryptsoft, Australia; Thorsten Groetker CTO, Utimaco; Tim Hudson, Cryptsoft, Australia; Todd Moore, Gemalto, United States; Robert Burns, Thales, United States

All of the panelists have a strong background in cryptography and HSMs. Starting out by defining HSMs, a secure container for secure information. Needs extra protection, may have acceleration, may be rack mounted, smart card, USB, PCMCIA card, appliance, etc. - maybe even software based.

Or, is it a meaningless term? It could be virtual, it could be a phone, it could be in the cloud - anything you feel is better than things that aren't HSMs, Tim posited.

Thorsten disagrees - there has to be a wall and have only one door and strong authentication.

Bob noted that overloading the term does cause confusion, but should not dilute what are good hardware based HSMs.

Tim notes that people buy their HSMs by brand, not always for their features or a deep evaluation of the underlying product. Thorsten agrees that may happen in some cases, or they may be looking for a particular protection profile or FIPS 140 level.  Bob notes that branding and loyalty play a part, but does think people look at features. Tim said he's been in customer conversations where people are influenced by the color of the lights or the box.

Bob mentioned that it's not easy to install an HSM, so you're only doing it because you need it or are required to have it.

The entire panel seems to agree (minus some tongue-in-cheek humor) that being easier to configure is important - an easier module is more likely to be installed correctly.  But there's still a way to go - customers are always asking how to do this faster and more easily.  This may be leading to more cloud-based HSMs.

Bob - There are trade offs - we can't tell them what their risk profile is and what configuration is right for them.

On security, Thorsten notes that some customers may be required to use older algorithms, and he recommends doing risk assessments; just because you are writing a compliant (say, PKCS#11) application does not mean it is secure.  Being standards-based makes migration and interoperability a lot easier, but it does not always meet all of your business needs. This is why most HSM vendors make their own SDKs as well.

Bob agrees that a universal API means punting on some tough problems. As a vendor you can choose between being fully compliant or locking your customer into your API.  PKCS#11 is great, but it is a C API - where is the future going? We need more language choices.

Tony asks - given the leadership of the PKCS#11 team in the room, what could we do better? Tim makes a comment on KMIP; Bob agrees it's important but still not fully portable. Bob thinks there's an opportunity to look at the problem in a different way for PKCS#11 - the implementation is locked into C, which is no longer on the growth curve for our customers. People want more managed languages, so they are creating shims over PKCS#11.

Thorsten likes the aggregate commands in KMIP, but they're still not perfect.

Todd noted we need RESTful based APIs, and there are gaps in what the standards are offering.

Tim notes that he doesn't think the vendors are always clear with their customers that they are going down a path of getting locked in. Bob disagrees that vendors are doing this on purpose.

Valerie couldn't help but note that standards are only as good as the people that contribute to them, and if the vendors are finding gaps in PKCS#11 or KMIP, please bring those gaps to the committees and join them and help to improve them.

Tim notes that there are more hardware protections available to software developers (ARM TrustZone and Intel's SGX). Bob notes that they are interesting technologies, but not a true HSM - not as strong a container. Additionally, key provenance and ownership is an issue, as those keys are owned by specific companies. It would be good to expand this, particularly in the cloud space.

Thorsten believes the jury is still out - interesting approaches, but not quite there for the level of trust you would put into a level 3 HSM. If a US vendor has a kill switch that could stop your whole system from running, it's much less appealing for those of us outside of the US.  Worry about what other things could be exposed in that way - it's like a good new cipher; you need to look at it and how it is implemented.

Todd notes these technologies are starting to get very interesting because they can go into edge devices and cloud services. We are excited to see how these are going to grow. Vendors still need to provide key life cycle guidance and standards compliance, and to make sure CIA (confidentiality, integrity, availability) is in place.

Thorsten notes it is a good building block for an embedded HSM, but he'd still be nervous about sharing the CPU.

Tim says it sounds like it's better than software alone, but not up to these vendors' HSMs. Bob remembers when HSMs used to be needed to get any decent performance; they are already very different just 15 years later, and he expects another incarnation in 15 years.

Todd notes that Google just launched a silo that would leverage these technologies and managed SDKs. Bob agrees that middleware can benefit from technologies like SGX. Tim notes standards are still very important, and wants users to communicate this to vendors.

Lots more of excellent conversation.

ICMC18: TLS Panel Discussion

TLS Panel Discussion (S30b) Moderator: Tim Hudson, CTO and Technical Director, Cryptsoft Pty, Australia; Panelists: Brent Cook, OpenBSD, United States; David Hook, Director/Consultant, Crypto Workshop, Australia; Rich Salz, Senior Architect, Akamai Technologies & Member, OpenSSL Dev Team, United States

There are quite a few TLS implementations out there, in a variety of languages. David thinks this is generally a good thing, gets more people looking at the specification and working out the ambiguities. Brent agrees, it gets more people looking at it, lowers the chance of one security issue impacting all implementations. Rich noted that in the past that the way OpenSSL did it was the "Right Way" and people would write their code to interoperate with them, as opposed to against the specification, but he thinks it's better to have more as they fit different areas (like IoT).

There are a lot of implementations out there using the same crypto implementations, ASN.1 or X.509. That can be good, like the Russian gentleman who writes low level assembly to accelerate the algorithms - so everyone can be fast, but it's still good to see alternative implementations.

All of the panelists hear from their customers, getting interesting questions.  They generally have to be careful about turning things off, because you never know who is using an option or for what.

Bob Relyea noted users should be cautioned if they think they should write their own TLS library, when there are several very good ones out there. Forking is not always the answer, because it reduces the number of people looking at each implementation.  Let's make sure the ones we really care about have the right folks looking at them.

Brent notes that he (OpenBSD) is more focused on TLS for an operating environment, and they are glad they forked. If OpenSSL hadn't wrapped their memory management like they did, folks and tools at OpenBSD would've found Heartbleed sooner.

Rich discussed the debate the IETF had been having with financial institutions who wanted a way to observe traffic in the clear. The IETF did not want this and said no - they are a paranoid bunch. This means some companies won't be able to do things with TLS 1.3 that they may have been able to do before. Encrypted will be encrypted.

Brent makes some deep debugging and injection tools, and also agrees, you don't want there to be an easy way to decrypt banking traffic.

Lots of great questions with quick answers that were hard to capture here, but a very enjoyable presentation.

ICMC18: TLS 1.3 and NSS

TLS 1.3 and NSS (S30a) Robert Relyea, Red Hat, United States

PKCS#11 is the FIPS boundary for NSS. AES/GCM presents a difficulty in the PKCS#11 v2.X API, but will be addressed in v3.X. While in FIPS mode, keys are locked to the token and cannot be removed in the clear. This means their SSL implementation doesn't have actual access to the keys - so MACing, etc., needs to happen within NSS's softtoken.

In NSS's FIPS mode, only allowed FIPS algorithms were on. This caused problems for accessing things like ChaCha, so now they are only restricted by the security policy.

The TLS 1.3 engine in NSS is very different from 1.2. We rewrote the handshake handling state machine. We have finally dropped support for SSL 2.0 altogether and have notified customers that SSL 3.0 is next (currently turned off). TLS 1.3 uses a different KDF as well; we already had support for HKDF through a private PKCS#11 NSS mechanism.  Essentially, everything (but the record format) has changed.
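
For reference, HKDF (RFC 5869) is a small construction over HMAC. A minimal sketch using only the Python standard library (illustrative only - not NSS's actual implementation or its PKCS#11 mechanism):

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """Extract step: condense input keying material into a pseudorandom key."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """Expand step: stretch the PRK into `length` bytes of output keying material."""
    okm, block = b"", b""
    counter = 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

prk = hkdf_extract(salt=b"\x00" * 32, ikm=b"input keying material")
key = hkdf_expand(prk, info=b"tls13 demo", length=42)
assert len(key) == 42
```

TLS 1.3 derives its whole key schedule from chains of these two steps, with per-context `info` labels keeping each derived key independent.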

The implementation was done by Mozilla, primarily by Eric Rescorla and Martin Thomson. They had to rewrite the state machine. We wanted customers to start playing with the software, but due to the way it's configured, they sometimes got it by accident (by applications choosing the highest available version of TLS).

When will you see this? It's fairly complete in the NSS upstream code, but nobody has released it yet. Draft 28 of TLS 1.3 was posted on March 30, 2018. We doubt there will be any further technical changes. The current PKCS#11 is sufficient, other than the KDF. The PKCS#11 v3.0 spec should be out by the end of 2018; still gathering final proposals and review comments into the draft. HKDF missed the cutoff... Bob will work on taking HKDF through the PKCS#11 process as the 3.0 review moves forward, to hit the next version of the specification.

How do you influence NSS? The more you contribute, the bigger say you can get to influence the direction.

Thursday, May 10, 2018

ICMC18: KMIP 2.0 vs Crypto in a Cybersecurity Context

KMIP 2.0 vs Crypto in a Cybersecurity Context (G23c) Tony Cox, Cryptsoft, Australia; Chuck White, Fornetix, United States

They are both co-editors of the KMIP v 2.0 version of the specification. They are big fans of standards.

Wrapped up KMIP v1.4 in March 2017 (published in November 2017) and scoped KMIP 2.0. In January 2018, KMIP 2.0 working draft is out and had another face to face in April 2018.

Restructured the documents, removed legacy 1.x artifacts. Problems we create today will impact us for the next 10-15 years, so working with that in mind. Want a way to be able to make changes easily as needed in the future. The focus is on data in motion. We want to lower barrier for adoption, and make KMIP more accessible as a service with flow control and signaling for transaction of Encryption Keys and Cryptographic Operations.

Passwords are now sent as hashed passwords, including a nonce to prevent replay. We have an effective double hash, so no longer need to store passwords in the clear. We've had the concept of OTP (One Time Password) for a long time, but it's better defined to make it easier to use and interoperate.
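
The idea can be sketched as follows (an illustration of the double-hash-plus-nonce pattern only, not the exact KMIP 2.0 wire construction): the server stores only a hash, and each login hashes that value again with a fresh nonce, so a captured response can't be replayed.

```python
import hashlib
import hmac
import os

def enroll(password: bytes) -> bytes:
    """Server stores only H(password), never the password in the clear."""
    return hashlib.sha256(password).digest()

def client_response(password: bytes, nonce: bytes) -> bytes:
    """Client sends H(H(password) || nonce) - the effective double hash."""
    return hashlib.sha256(hashlib.sha256(password).digest() + nonce).digest()

def server_verify(stored: bytes, nonce: bytes, response: bytes) -> bool:
    expected = hashlib.sha256(stored + nonce).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

stored = enroll(b"hunter2")
nonce = os.urandom(16)                  # server issues a fresh nonce per attempt
resp = client_response(b"hunter2", nonce)
assert server_verify(stored, nonce, resp)
assert not server_verify(stored, os.urandom(16), resp)  # replayed response fails
```
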

We've also addressed login and delegated rights. Login is a simple mechanism to reduce authentications. Leveraging tickets to improve performance. Allows for delegation and supports 2FA. This will broaden KMIP applicability.

Flow control allows server initiated commands to clients with the client initiated connections. Allows the server to be a trust node managing encryption keys on other devices that are inside or outside a system perimeter.  Best of all - does not break the existing method of establishing a KMIP session (does not break clients).

Multiple ID placeholders allow for simpler execution of compound key management operations. Provides a path to combine traditional key management operations and HSM operations into a single KMIP operation. Addresses the broader concerns of IoT and cloud. Also added some changes to make dealing with offline devices easier.

Digest values usable between client and server - deterministic. Clients can rely on servers! Addresses a major source of non-interoperability.

There is now a concept of re-encrypt. What happens when you have an object that's been encrypted, and you want to rotate the keys? Don't want to expose keys while doing transitions. This new method allows the keys to stay in the server (w/in the FIPS boundary). Enabling rekeying more often. This is future proofing for post quantum crypto when we know people will need to rekey.
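
A sketch of the idea in Python - here a toy SHA-256 counter-mode stream cipher stands in for the server's real cipher (demo only, not KMIP's actual mechanism); the point is that `re_encrypt` keeps the plaintext and both keys inside one boundary:

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Toy stream cipher keystream: SHA-256 in counter mode (demo only)."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: encryption and decryption are the same operation
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def re_encrypt(ciphertext: bytes, old_key: bytes, new_key: bytes) -> bytes:
    # Everything here happens inside the server / FIPS boundary:
    # the plaintext and both keys are never returned to the client.
    return xor_crypt(new_key, xor_crypt(old_key, ciphertext))

old_key, new_key = os.urandom(32), os.urandom(32)
ct = xor_crypt(old_key, b"customer record")
ct2 = re_encrypt(ct, old_key, new_key)
assert xor_crypt(new_key, ct2) == b"customer record"   # decrypts under the new key
assert xor_crypt(old_key, ct2) != b"customer record"   # old key no longer works
```
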

There are some default crypto parameters, allows parameter agnostic clients. Cryptography can change on the server side and not change the method in which it is requested by the client. Server can provide defaults if the client does not.

All of these features were to improve crypto agility and resilience, easier to use, and allows KMIP to be more impactful for Data in Motion, IoT, Distributed Compute and Cloud.

Implementation work is starting next, along with final reviews and hopefully close out any final issues in the next few months. Hope to publish in 2018!

This work is the culmination of our efforts and learnings in the key management space over the last 9 years as a standards body. But if you have requirements the standard is not handling, come and join us or let us know what the issues are.

ICMC18: OpenSSL FIPS Module Validation Project: An Update

OpenSSL FIPS Module Validation Project (S23a) Tim Hudson, CTO and Technical Director, Cryptsoft Pty, Australia; Ashit Vora, Acumen Security, United States

Tim was part of OpenSSL before it was called OpenSSL. He's co-founder and CTO at Cryptsoft and a member of the OpenSSL Management Committee (OMC). Co-editor and contributor across OASIS KMIP and PKCS#11 technical committees.

Ashit is the co-founder and lab director at Acumen Security, acquired by Intertek Group last December. He has 15 years of certification and security experience. He is providing advice to OMC on their FIPS validations.

OpenSSL has been through validation 8 times; 3 certificates are still valid. Note the version number of validated modules is not a direct correlation to the actual OpenSSL version number. None of these modules work with OpenSSL v1.1. They cannot update the current modules, as those don't meet new IGs or power-on self-test requirements. If you want to do your own revalidation, you have to fix those.

Haven't been able to do FIPS 140, yet, due to being so busy with all of the other features and work that needed to be done first (TLS 1.3, and all of the things in the previous discussion). Needed a good stable base to move forward for TLS 1.3.  The fact that IETF is still finishing TLS 1.3 gave them lots of time to polish the release and add lots of great features and algorithms.

The team is finishing off adding support for TLS v1.3, but it's taking longer than expected. This has delayed the kick-off of the FIPS validation effort. However, commitment to FIPS validation is unwavering - it is important to the committee. We are doing it in a way that we feel is long-term supportable, and not an add-on.

The idea is to keep the FIPS 140 code in a usable state for future updates. There will be a limited number of operational environments (OEs) tested (previously there were over 140! Doing that many is expensive and time consuming). They will also validate only a limited number of algorithms.  They plan to do source distribution as per previous validations.

At this point in time, there are no plans to add additional platforms to base validation, get it right and make it usable. As new platforms need to be added, other parties can do it independently.  Want to get this solution in place as soon as possible.

FIPS algorithm selection is planned to be functionally equivalent to the previous FIPS module. There won't be a separate canister anymore. It will be a shared library (or DLL) as the module boundary. Static linking will not be supported. It will be an external module loaded at runtime. It will look more like other validations, which should help overall. 

The interface will NOT be an OpenSSL Engine module. Don't want to be constrained by the engine limitations - it will be something else (TBA).

Have already fixed the entropy gathering and RNG.

This won't be a standalone release; it will be aligned with a future standard OpenSSL release (once solidified, we will tell you - it will be after the OpenSSL 1.1.0 release).

Will move to FIPS 186-4 key generation, NIST SP 800-56A, added SHA-3 and built in efficiency of POSTs.

Current sponsors: Akamai, NetApp and Oracle. They are contributing to making OpenSSL FIPS a reality. Any other sponsors interested? You need to contact the OMC within the next 90 days; you can contact the alias or an individual on the OMC to start.

Next steps: Finalize planning! What functionality is in and what's out? Which platforms?  Then we have to begin the development process. We expect to publish design documents and have public pull requests for review.  We will be doing incremental development and we aim to minimize the impact on developers. We want feedback early from experienced FIPS 140 people and OpenSSL developers.

Adding more sponsors should help speed up the process, as long as they understand this will be an open process and they are willing to work within those constraints.

ICMC18: OpenSSL Project Overview

OpenSSL Project Overview (S22c) Rich Salz, Senior Architect Akamai Technologies & Member, OpenSSL Dev Team, United States

Covering what's new since last year's update at ICMC17.  Post heartbleed, the project started a recovery effort. LibreSSL forked, and several older releases were EOLed. Started 1.1.0 in 2014 (depending on who you ask), and working on hiding all the structures. Google then started their own fork (BoringSSL). Then the team released 1.1.0!

OpenSSL 1.0.2 is supported through the end of 2019; the last year is security fixes only. It was extended by a year as the next LTS release wasn't ready. 1.1.1 will be the next LTS release, and 1.1.0 will only be supported for 1 year after that (security fixes only).

Very close to reaching exit criteria for 1.1.1 - want a final beta period after IETF RFC for TLS 1.3 is published (soon!). It's in editorial review, hoping nobody finds a major technical flaw at this point.  1.1.1 should be source and binary compatible with 1.1.0. Focus of the next release is FIPS.

Current CMVP certificate 1747 expires in 2022 and we're not touching the 1747 code anymore. It's not on the historical list. 1747 is based off of OpenSSL 1.0.2, so there will be a gap.

Start porting your applications to the master branch. 1.1.1 has the same API/ABI as 1.1.0, and therefore the big "opaque" changes. FIPS will be moving forward, not backward.  You will interop on TLS 1.3.

Last HIGH CVE was in February 2017, found by fuzzing; it was a crash.  Before that, it was November 2016 (also fuzzing, also a crash). Got a grant from Amazon to create a fuzzing database. The prior CVE was in September 2016 (found by a 3rd party, a memory growth leading to a probable crash).  We call them CVEs so downstream will know to pick them up.

Everything for OpenSSL is now done on GitHub. They've added features to make it easier to do things. Every pull request is built 7 different ways with different options and various OSes.  Every pull request has to go through this CI process and must have a clean pass.

We have an active global community now - people from Amazon, Facebook, Google, Intel, Oracle, China (Ribose, Baishan), Russia (GOST ciphers).  It's good to be open source and to have great open source contributors.

The OMC meets annually face-to-face. Most folks don't believe we can fill 2 days... but always fill the time. With the exception of private finance items and release level stuff, everything is posted to the openssl-project mailing list.  Added video conferencing this year, and remote team members stayed online for the full 8 hours.


Protocol handling uses a safe API - no more of this: len = (p[0] << 8) | p[1]; read(ssl, buff, len); - now there is a safe API which understands the TLS protocol! No more open-coding of protocol messages.

New infrastructure - native threads support, DRBG-based CSPRNG, ASYNC support, auto-init and cleanup, a unified build system, system-wide config files (able to turn off algorithms and specific features), and a new test framework (no new API without a unit test).

New cryptography! X25519, Ed25519, Ed448, ChaCha20-Poly1305 (DJB & co.), SHA-3, SM2/3/4, ARIA, OCB; many old/weak algorithms disabled by default (still in source). New policy: only the EVP layer is supported, and only standardized crypto.

New network support. IPv6 revised and now complete, for example.

Did an external audit and addressed the code quality issues that came up. Getting better at responding to reported issues and bugs. More and better documentation - all things in the main section should be documented. Lots of old code (ifdef options) removed. Will only take new crypto that has been approved by a standards body.

And.. TLS 1.3. It works, people are using it in production.  It interoperates! It is different, so new issues and configs to think about. We know people are using it... but nobody is complaining (yet). Can't say at this point where the traffic is coming from (customer confidentiality), but it is coming in.

In the 1.1.0 release, there won't be FIPS. See Tim's next session for more details:-)

Still working on changing the license, can't commit to when / which release it might be.

Code is not noticeably smaller, and there are ports for embedded devices.

ICMC18: Avoiding Burning at Sunset - Future Certification Planning in Bouncy Castle

Avoiding Burning at Sunset – Future Certification Planning in Bouncy Castle (S22b) David Hook, Director/Consultant, Crypto Workshop, Australia

If you end up on the FIPS 140-2 historical list, you cannot be used for procurement by any Federal Agencies.  Agencies trying to do so must go through a risk management decision. It is also getting harder to rebrand or relabel someone else's certification.

If you've only done basic maintenance, that won't 'reset the clock' on your validated module - you have to do at least a 3 SUB, which is not a full validation, but still a lot of work. The key is the module must comply with current requirements.

Java has moved to "6 month" release cycles with periodic LTS releases. Some of the new algorithms, such as format-preserving encryption and new extendable-output functions, do require revamping the API.  And... post-quantum... needs to be considered.

We had to split the Java effort into 3 streams, including a 1.0.X stream representing the current API and a 1.1.X stream representing the newer API.

The plan is that we can do updates to the 1.0.X stream with minimal retesting. The 1.1.X stream will require recompilation and more work.

All of the updates have to comply with the current Implementation Guidance, which is changing at unspecified rates.  A wise product manager will want to keep this in mind when doing long term planning.

Premier support for Java 9 finished in March 2018 and premier support for Java 10 finishes in September 2018.  Java 11 will be supported until September 2023, extended support until September 2026.

There is now an add-on for Java FIPS to allow use of post-quantum key exchange mechanisms with KAS OtherInfo via SuppPrivInfo.

Bouncy Castle 1.0.2 will still be targeting Java 7, 8 and 11. The older versions are still very popular. Will be doing a correction to X9.31 SHA-512/256 (8 plus 1 is 10?). Who uses this? Banks... Will also be adding SHA-3 HMAC and SHA-3 signature algorithms.

BC-FJA 1.1.0 will have updates for format-preserving encryption (SP 800-38G); CSHAKE, KMAC, TupleHash and ParallelHash (SP 800-185); ARIA; GOST; and ChaCha20-Poly1305. Avoiding some algorithms due to patents - trying to chase down someone to talk to who can speak for the patent holders (acquisitions make this hard...).

We now have a bunch of Android ports - Stripy Castle. Could not use any other name, because of Google's use of org.bouncycastle as well as org.spongycastle.

C# 1.0.1 is closer to Java 1.0.2 in some respects. We are still concentrating on the general use API. Assuming enough interest, we will do a 1.0.2 release to fix X9.31, complete SHA-3 support and complete ephemeral KAS support.

ICMC18: Keynote: Challenges in Implementing Usable Advanced Crypto

OS Crypto Track Keynote: Challenges in Implementing Usable Advanced Crypto (S22a) Shai Halevi, Principal Research Staff Member, IBM T. J. Watson Research Center

Advanced crypto goes beyond cryptography - includes proofs and things that complement use. We need it to be fast enough to be useful.

Your privacy is for sale - we give up privacy for services (directions, discounts on groceries, restaurant recommendations), we give up health data to look up personal medical solutions.

Data abuse is the new normal - the entire IT industry is making it easier to abuse. Larger collections of data, better ways to process them. It will get worse! If the opportunity is there to abuse, it will be abused.

Advanced cryptography promises blindfold computation - the ability to process data without ever seeing it - getting personalized services without giving access to your private information. It's useful for more traditional purposes as well, like key management, but that's not the focus of this talk.

Zero-knowledge proofs have been around for a long time (mid 80s?). The concept: I have a secret that I don't want to tell you, but I can convince you of properties of my secret without revealing the secret itself.

You can use this for grocery history - I can prove that I bought 10 gallons of milk this month, so I can get a coupon, without revealing everything else that I bought.
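
A toy non-interactive Schnorr-style proof gives the flavor (hypothetical demo parameters; real systems use standardized groups and much more care):

```python
import hashlib
import random

P = 2**127 - 1   # demo prime modulus
G = 5            # demo base

def challenge(t: int, y: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the commitment and public key."""
    return int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big")

def prove(x: int):
    """Prove knowledge of x with y = G^x mod P, without revealing x."""
    y = pow(G, x, P)
    r = random.randrange(1, P - 1)
    t = pow(G, r, P)                          # commitment
    s = (r + challenge(t, y) * x) % (P - 1)   # response blinds x with r
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    # G^s == t * y^c holds exactly when the prover knew x
    return pow(G, s, P) == (t * pow(y, challenge(t, y), P)) % P

y, t, s = prove(x=123456789)
assert verify(y, t, s)    # verifier is convinced, never learns x
```

The same principle, with far more machinery, underlies the grocery-coupon example: proving a statement about hidden data while revealing nothing else.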

The next concept is secure multi-party computation. We all have our individual secrets. We can compute a function of these secrets w/out revealing them to each other (or anyone else!). Has been around since the 1980s.

You could use this with medical data to determine the effectiveness of some treatment.  Data for different patients are held at each clinic, but the effectiveness can be shared.
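
Additive secret sharing is the simplest building block of such protocols; a sketch (hypothetical clinic counts) where each party holds only random-looking shares, yet the total comes out exactly:

```python
import random

Q = 2**61 - 1   # modulus for share arithmetic (demo size)

def share(value: int, n: int = 3) -> list:
    """Split `value` into n random shares that sum to it mod Q."""
    parts = [random.randrange(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

# Each clinic secret-shares its patient count; party i receives only the
# i-th share of every input, which alone reveals nothing about the counts.
counts = [120, 85, 97]
all_shares = [share(c) for c in counts]
partial_sums = [sum(s[i] for s in all_shares) % Q for i in range(3)]

total = sum(partial_sums) % Q
assert total == sum(counts)   # computed without pooling the raw data
```

Any single party's view is uniformly random; only combining all the partial sums reveals the aggregate, never the individual inputs.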

The other concept is homomorphic encryption. Data can be processed in encrypted form, and the result is also encrypted - but inside is the result of the function. Fully homomorphic schemes have been described in papers going back to 2009.

I could encrypt my location and send it to Yelp, Yelp computes an encrypted table lookup and gives me ads for nearby coffee shops. I could then get back encrypted results and then get coffee. :-)
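
The additive flavor can be sketched with a toy Paillier keypair (tiny demo primes, completely insecure; the Yelp scenario and FHE proper involve far more), where multiplying ciphertexts adds the hidden plaintexts:

```python
import math
import random

# toy Paillier keypair - demo-sized primes, for illustration only
p, q = 1789, 1861
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)   # since g = n + 1, L(g^lam mod n^2) = lam mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n   # the L function: L(u) = (u - 1) / n
    return (L * mu) % n

a = encrypt(17)
b = encrypt(25)
# homomorphic property: multiplying ciphertexts adds the plaintexts
assert decrypt((a * b) % n2) == 42
```

A server holding only `a` and `b` can produce the encrypted sum without ever learning 17, 25, or 42 - the essence of the blindfold computation described above.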

Improving performance has been a major research topic for the last 30 years - we've made progress, but it will take a lot of very knowledgeable engineers to implement it.

Digital currencies  need to prove that you have sufficient unspent coins on the ledger, constructing the proof in less than 1 min and verify in a few microseconds - this needed the performance improvements to get it to perform that well.

You can use these encryption techniques and the speed improvements to find similar patients in a database in less than 30 seconds, or compute private set intersections.

By speeding up homomorphic encryption, you can compute the similarity of two 1M-marker sequences in minutes, or inference of simple neural-nets on encrypted data. 

But - all of these are complex, so not generally available.

There are a lot of software libraries that implement ZKP / MPC / FHE - most are open source, but it's very hard to compare them or decide which to use for what.  They have different computation models, performance profiles, and security guarantees, and there are hardly any accepted benchmarks.

Distributed computing is already very complex by itself. Adding advanced cryptography into it makes it that much more complicated (it needs oblivious computation). Good performance needs extreme optimization - a straightforward implementation will be terribly non-performant. You need to be familiar with the techniques to optimize for what you're trying to do.

Communication between parties is the bottleneck in many protocols for secure multi-party computation. To optimize, many libraries work with sockets - they expect to be "in charge" of IP-address:port.  Retrofitting existing libraries is also very complicated.

How can you tame the complexity? You need frameworks and compiler support, tool boxes for common tasks and to shift our focus to usability.

We need to engage cryptographers and system builders to make this happen.

ICMC18: Panel Discussion: Technology Challenges in CM Validation

Panel Discussion: Technology Challenges in CM Validation (G21b) Moderator: Nithya Rachamadugu, Director, CygnaCom, United States Panelists: Tomas Mraz, Senior SW Engineer, Red Hat, Czech Republic; Steven Schmalz, Principal Systems Engineer, RSA—the Security Division of EMC, United States; Fangyu Zheng, Institute of Information Engineering, CAS, China

All three panelists have been through their share of validations, and Fangyu has also had to deal with the Chinese CMVP process.

As to their biggest challenges, everyone agrees that time is the issue. Tomas noted that it's very difficult to get the open source community excited about this and doing work to support the validations. For Fangyu, they often have to maintain 2 versions of several algorithms, one for US validations and one for Chinese.

In general, it's hard to find out what the requirements are from most customers here, particularly across various geos.

Several panelists agree this is seen as an expensive checklist. Steven also worries about the impact on business - it goes beyond what you pay the labs and the engineers to write the code. It's hard to get this done and get it to all the customers. Tomas noted that there are conflicting requirements between FIPS and other standards (like AES GCM, though that has been recently addressed).

On the value of the certification - have you found anything during a validation that made your product more secure? Steven notes you can talk about methodologies for preventing software vulnerabilities, and the devs will come back and ask why it didn't work for "so-and-so". But if you look specifically at the testing of the algorithm, it gives you the assurance that you've implemented the cryptography correctly. It's not clear we get as much benefit out of the module verification. There was agreement across the panel that the CAVP is valuable, technically.

Steve really wishes there was a lot more guidance on timing of validations and how to handle vulnerabilities. Tomas notes it's hard to limit changes to the boundary in the kernel, because we need to add new hardware support and other things. Fangyu noted that even rolling out fixes for a hardware module is challenging.

All panelists are excited about the automation that is happening, though Steven wonders if it will really be possible for module testing (algorithm testing seems very automatable, and that will still help). Steven talked about the industry trend to continuously check the status of machines, make sure they are up to date with patches, etc. - getting this automation in can help people continuously check their work, even on development modules.

All panelists noted that the validation process could be improved, but that alone won't help the overall security of the system.

Customers want Common Criteria and FIPS 140-2, but don't really understand what they mean; they just want to make sure they're there. Trying to do both at the same time makes it difficult to line up the validations and to make sure all teams understand when they need them. And... they both still take too long to get.

On the topic of out of date or sunset modules - it's unclear how many customers may be running these, but Steven has heard support requests come in for out of date modules. They use that as an opportunity to get them to upgrade. Tomas noted they won't likely be able to "revive" the sunset module, due to how quickly the IG and standards change.

ICMC18: 10 Years of FIPS 140-2 Certifications at Red Hat

10 Years of FIPS 140-2 Certifications at Red Hat (G21a) Tomas Mraz, Red Hat, Czech Republic

Red Hat was founded in 1993, received first FIPS 140 validation in 2007 with Sun Microsystems with NSS (Network Security Services), which was designed to be FIPS 140 compliant from the start.
We spent a lot of effort in Red Hat Linux to get everything to use NSS (cURL, RPM, OpenLDAP, OpenSWAN), but could not convert everything. Too many differences in APIs.

So, changed to “validate everything” mode!  OpenSSL based on the original FIPS module from OpenSSL, but partially evolved for Red Hat.  For OpenSSH, did their own independent FIPS work. Libgcrypt, hired a community developer to integrate FIPS support upstream.  For OpenSWAN and later libreswan, hired the community developer to port to NSS and then get FIPS support.  DM-crypt first had its own crypto, but later switched to use libgcrypt. GnuTLS was done after Red Hat hired the main developer of the project.

Highlights – we were able to do this at all. We could do it quite quickly with existing modules, and we never included Dual-EC DRBG, so we avoided big issues there. Some small implementation bugs were found by CAVS testing.

Lowlights – the process is still too slow and expensive to be able to revalidate everything we release, creating a conflict between fixing bugs and security issues and the need to have the software validated. Sometimes new crypto has to be disabled in FIPS mode (even though its security is well established, like ChaCha20-Poly1305 and Curve25519 DH). Some of the requirements really are for hardware, don't make sense for software, and implementing them does not improve the software.
Lowlights – more! The restrictions on the operating environment are too tight. HW requirements are ignored by customers, and other products built upon RHEL are marketed under a different name – confusing! The open source community does not care about government customers, so they call it nonsense, silliness and garbage.
We needed to streamline the process of turning on FIPS mode so it would not interfere with regular customers that don't care about FIPS mode at all. This is all more restricted in containers as well (both host and container must be RHEL, for example).
In libgcrypt, non-approved algorithms like MD5 are blocked, which is annoying to customers.
In the future, want to continue to work with NIST to improve the process and continue to work on the ACVP project, to speed up revalidations.  We may have more or less crypto modules, less if we can get more utilities to use our validated libraries (like move SSH from using OpenSSH crypto to OpenSSL).

Wednesday, May 9, 2018

ICMC18: FIPS 140-3 Update

FIPS 140-3 Update (C13c) Michael Cooper, IT Specialist, NIST, United States

Mr. Cooper would love to give us a signature date, but... he can't (it's out of his control). There is a general set of documents that point to ISO 19790 and ISO 24759; they have gone through the NIST processes (legal reviews, etc.) and now we are at the last stage: waiting for the Secretary of Commerce to sign. This is a timing thing - wheels are in motion.

The document that's going in for signing is just a wrapper document, basically pointing only to those other documents and no modifications.

The hope is that by leveraging an international standard, testing requirements for vendors will be simplified. NIST is already going to CC meetings to see who else is interested in this, and is looking into automation for this as well.

Standardizing testing, especially across NIAP and CC, will help extend the adoption of the standard.

The algorithm automated testing will give us a start on automating module testing. We want to leverage ideas from around the world, academia and industry.

Question from the audience - what other country has signed up for this? So far, none, but there is interest.

Q: Does FIPS 140-3 point to a specific version of the other documents? Yes, but worded to make it easier to update to newer versions, as needed. Given more flexibility.

Q: What's going to be the sunset of FIPS 140-2? It will likely follow something similar to what we had before; there will be documentation to guide folks. Likely a year to submit against the old scheme.

Q: What about the old IGs (Implementation Guidance) documents? Will they go away? About 50% of them, the rest will need to be updated.

Q: Why are we starting with the 2012 draft and not the 2015 draft, given the mandate to update standards every 5 years? FIPS 140-3 won't have to change; we can update what it points to. We will forever be FIPS 140-3, pointing to the 'latest' ISO standard.

Q: How often do the ISO standards get updated? Every 5 years?

ICMC18: Mandating CMVP for NIAP Evaluations Panel

Mandating CMVP for NIAP Evaluations Panel Presentation (C13a) Moderator: Dianne Hale, NIAP, United States, Panelists: Michael Cooper, IT Specialist, NIST, United States; Terrie Diaz, Product Certification Engineer, Cisco Systems, United States; Matt Keller, Corsec, United States; Edward Morris, Co-founder, Gossamer Security Solutions, United States; Nithya Rachamadugu, Director Cygnacom United States

By mandating this, duplicate work would be reduced.

Mike Cooper - of course we'd like to see more people leveraging our programs, but we know NIAP has to worry about the timeline. We currently can't do things in parallel. And there are slight differences depending on which PP you're looking at, while FIPS 140-2 is one size fits all. We'd have to do work to see what the differences are and figure out how to go forward.

Ed Morris - we see this from our customer perspective. When they are doing a common criteria project, we have to figure out if FIPS will be required or not - if it's for DoD, etc, then they will need both. Managing that timeline is very difficult.

Matt Keller - been doing this for more than 20 years as a consultant (not a lab). It's an interesting idea. There are a lot of things FIPS does very well: no unbounded surfaces, well defined interfaces, but it's not perfect. Can we leverage each? Products do not always equal modules. Most people were not just buying crypto modules - they are buying a product that contains a module. NIAP gives more assurance at the product level. We can't just look at crypto - we need to consider how the keys are stored. We want to look at the entire product, making sure the module we tested is being used correctly in the product.

Nithya Rachamadugu - remembers doing this when crypto was treated by CC as a black box.  CMVP in some places has more tests and requirements and vice versa.

Terrie Diaz - been working in this space for 20 years. Cisco has a large profile of products that are certified and sold through many countries. Don't really want to do country specific certifications, something that is more general would be more appealing.

Ed - it would be nice if they could point to each other's certificates. Not sure this would work in practice. NIAP covers the entire product, and modules are just the crypto bits. Vendors want to shrink their crypto boundaries for FIPS as small as possible to avoid revalidation thrash. But what happens is we miss out on how keys are handled and stored, for example. FIPS is one size fits all - smartcard to server. And it has grown from older standards, which brings a lot of extra things. CC has updated PPs that are more relevant to your product.

Matt K. - there are labs doing "FIPS inside" testing, so we are solving this problem - it's just not standardized (and there's no regulation or oversight).

Nithya - CC is compliance and FIPS is a validation. They have different goals. Maybe not mandated, but leveraged?

Ed - we hear sometimes from customers that we don't need to worry about their crypto, because they're already going through a CMVP validation, but then we find they are not using it correctly based on CC.

Question: there's already a divergence for various countries. What can we do better here?

Matt K - we could pull the most valuable pieces from FIPS and put it in CC PP. But, if the law still requires you buy FIPS validated crypto... then this won't meet your requirement.

Q: It would be good to keep these standards in sync.  Ed - yes, that would help us to avoid thrash.  But linking them together can be quite hard. Nithya - I get questions all the time about which one to start first, my advice is start at the same time. You will be able to leverage the work of each other. What if you find an issue along the way? lots of things to consider.

Mike C. - as a government purchaser, I do want to see how these assurance cases tie together.

Ed - maybe we should decrease the scope of FIPS so we can better work together.

This was followed by a good discussion on timing - vendors really want this all to happen as fast as possible for the most benefit.

ICMC18: Update on the Automated Cryptographic Validation Program (ACVP)

Update on the Automated Cryptographic Validation Program (ACVP) (C12a) Apostol Vassilev, NIST, United States; Tim Anderson, Amazon, United States; Harold Booth, NIST, United States; Shawn Geddis, Apple, United States; Barry Fussell, Cisco, United States; Bradley Moore, NIST, United States; Robert Relyea, Red Hat, United States

[Note: I came in 25 minutes late and missed the demo]

Previously, crypto vendors would lock down their releases against major OS releases, but with ACVP we can do more frequent validations. This just wasn't possible before at all.

Thinking back to earlier talks - we need to think about the cost of doing nothing. It should not be compliance vs. security (recall the airplane example); this way we can have both.

Given the pace of innovation, we are constantly doing production releases, but it's contrary to the FIPS 140-2 validation. Even if people wait for the validation, they often do not deploy it as described in the security policy, or allow restricted algorithms.

Faster and easier validations means more choices for governments and those that require validated products. That will bring the price down.

Question on anticipated cost - no answer yet, but it can't be free. They have to be able to pay for hosting services at Amazon, plus developers and maintenance work on the tool. Bigger companies may prefer a subscription model; smaller ones may want to do one-offs. They want to make it workable and affordable in both models.

The demo server is available now on GitHub - instructions on how to get access are online. Now is the time to come play and find issues. This is going to be locked down this summer, so sooner rather than later would be good. This will be "shipped" in October.

Right now, only authorized vendors will be able to participate, will need some site visits, etc., to get set up. Processes still being defined.

When will the working group be open to the labs? NIST started out by talking directly to vendors, because they had accidentally outsourced that job to the labs. They want to continue to keep the dialogue open with vendors. Labs will have a role, but not necessarily in this working group (possibly via the CMUF, etc.). It's a lot of work and energy to handle what is currently happening; they will figure out the right way to engage.

Will the NVLAP certification for vendors be the same as the labs have today? If they want to be certified as a lab - then, yes. Do you have to do this if you want to keep working with the CAVP? Only if you want to use the automated system; otherwise you can continue to work with a lab.

There will always be a demo environment to try against before your final testing, and you can even receive test vectors. You can integrate this into your CI (Continuous Integration) framework. You should be doing this early and often. There is lots of documentation out there. It currently works in macOS, Windows and Linux environments.

Right now, labs answer our questions - who will do that in the future? That could still be the labs, no reason they could not help you do this.

Good session!

ICMC18: Using FPGAs in the Cloud for Decentralized Trusted Execution

Using FPGAs in the Cloud for Decentralized Trusted Execution (G12a) Ahmed Ferozpuri, George Mason University, United States

Started with an overview of TPM (Trusted Platform Modules), widely available in servers and laptops.

[reminder: these are the notes from the presentation and do not reflect the views or opinions of myself or my employer]

Intel has Software Guard Extensions (SGX) to create a trusted environment within the chip. Its boundary is at the CPU; data outside the core is encrypted. It helps you avoid snooping and has better application state measurement (attestation), but there are concerns as well.

Data about the code, data, stack, heap is stored in the MRENCLAVE. They can be identified by their EPID processor group ID. Right now, remote attestation requires connecting to Intel's attestation servers.

You can use TEEs (Trusted Execution Environments) for secure cloud and multi-party computing, HSMs (see paper on Barbican integration from Intel Labs), Public Blockchains (PoET, etc).

Resource Efficient Mining (REM) is a goal to reduce energy waste in Bitcoin's Proof of Work. It uses Proof of USEFUL Work instead. This leverages Blockchain agents on the network that you can trust. There is a lot to explore here for new avenues in secure computing technologies.

FPGAs are becoming more common; you can use Amazon's HDK to develop your own custom logic and leverage Physical Unclonable Functions (PUFs).

The slides went by a bit too fast to take accurate notes, lots of proofs (Proof of Secret, Proof of Instantiation, Proof of Execution) and diagrams :-)

ICMC18: SP800-90B Testing Process, Result Bounds and Current Issues

SP800-90B: Testing Process, Result Bounds, and Current Issues (G11c) Joshua Hill, Information Security Scientist, UL, United States

We started out with just hand waving for evaluating entropy - now, after several iterations, we have NIST's SP800-90B final - YAY! But, I had to submit 20 pages of comments.

Major comments: high-entropy noise sources fail the Restart Sanity Check much more than expected. Entropy and noise sources are required to have the same entropy rate across all process characteristics and all environmental conditions, and are required to be stationary!
By more than expected... 60x more frequently than expected! There's a statistical test problem.

No noise/entropy source behaves the same way across variations or temperature or voltage changes - nobody could comply!

We need to characterize what the entropy-relevant parameters are and assess them appropriately.

We also have issues with the noise source definition - the output can be the XOR of the output of multiple copies of the same physical noise source. Think of deterministic ring oscillators with a fixed period: XOR enough of them together and you get output that is nice and flat, but low entropy. It looks good statistically, but not in reality. (Don't do that!)
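To see how a deterministic source can fool a simple statistical check, here is a simplified analogue (a plain counter rather than XORed oscillators; the sizes are illustrative):

```python
# A deterministic source can ace a frequency test while having zero real
# entropy: the distribution looks perfectly flat, yet every sample is
# predictable. Simplified analogue of the XORed fixed-period oscillator trap.
from collections import Counter
from math import log2

samples = [t % 256 for t in range(4096)]    # a pure 8-bit counter

counts = Counter(samples)
p_max = max(counts.values()) / len(samples)
mcv_estimate = -log2(p_max)                 # most-common-value style estimate

print(f"per-sample entropy estimate: {mcv_estimate:.1f} bits")  # 8.0 bits
# ...yet knowing any one sample predicts all the rest:
assert all(samples[i + 1] == (samples[i] + 1) % 256 for i in range(4095))
```

A frequency count alone says the source delivers a full 8 bits per sample; only modeling the source's actual generation process reveals that it delivers none.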

We've constructed various simulated noise sources and models - the work is a larger-scale version of DJ Johnston's 2017 work using NIST's reference Python implementation. This testing occurred using only the full set of non-IID tests. First pass - looks much better!

Set up a bunch of other models, including one where you might have bad grounding, etc. (Narrow Gaussian noise source, 8-bit ADC, Sinusoidal Bias), and an idealized ring oscillator.

The models are somewhat complicated, and can return a range of entropy values for each parameter set. The lower end of the modeled range is the value that ought to be used in our assessments.

It's vital to test only raw data, and to filter out extraneous signals. Don't perform statistical testing on conditioned data!
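The point about conditioned data can be demonstrated directly: pushing a badly biased raw stream through a hash makes the output look balanced, hiding the defect. A sketch (the bias level and block sizes are arbitrary choices for illustration):

```python
# Statistical tests on conditioned output say nothing about the raw source:
# hashing heavily biased data yields output that looks uniform.
import hashlib
import random

random.seed(1)
# Raw source: bits are 1 with probability 0.9 - badly biased, low entropy.
raw_bits = [1 if random.random() < 0.9 else 0 for _ in range(8192)]
raw_ones = sum(raw_bits) / len(raw_bits)

# "Condition" the raw data through SHA-256 and examine the output bits.
raw_bytes = bytes(
    sum(b << i for i, b in enumerate(raw_bits[j:j + 8]))
    for j in range(0, len(raw_bits), 8)
)
digest_bits = []
for i in range(0, len(raw_bytes), 32):
    d = hashlib.sha256(raw_bytes[i:i + 32]).digest()
    digest_bits.extend((byte >> k) & 1 for byte in d for k in range(8))
cond_ones = sum(digest_bits) / len(digest_bits)

print(f"raw ones: {raw_ones:.2f}, conditioned ones: {cond_ones:.2f}")
assert raw_ones > 0.8            # raw bias is obvious
assert 0.45 < cond_ones < 0.55   # conditioned output looks balanced
```

A monobit-style check passes on the hashed stream even though the underlying source is badly defective, which is exactly why the assessment must be done on raw data.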

ICMC18: Keynote: Hardware Security Modules: Past, Present and Future

General Technology Track Keynote: Hardware Security Modules (HSM), Past, Present and Future (G11a) Bruno Couillard, Crypto4A, Canada

Don't be offended if a product you worked on is not mentioned, this is not a complete history, but a start - we want to focus on where we are going!

If you go back in time before the 1970s, encryption was just for government - treated like weaponry. Not until DES (IBM) did cryptography come into the public space, followed closely by Diffie-Hellman and RSA. ECC has actually been around since the early 1980s!

When the early 1990s came about, the rest of the world found out about this thing called the Internet. Suddenly we needed to solve problems of commerce leveraging PKI and SSL - we suddenly needed HSMs.  The rate of change has accelerated with things like Cloud Computing, IoT and Blockchain.

HSMs and FIPS 140-1 all popped up around the same time, with a quick succession of product releases from Entrust, Verisign, nCipher, and Chrysalis-ITS. RSA 1995 was the year of the HSM - it kick-started the industry.

IBM had an HSM that was one of the first to go through the FIPS 140-1 validation.  Then RSA started issuing certificates, but needed a secret keeper - SafeKeeper (rumor has it that it ran off of a car battery).

There were Chrysalis-ITS PCMCIA cards, and others made dongles, but then Smart Cards started coming into fashion (lighter and cheaper).

Around that time, nCipher saw another niche to enter - not just to keep the secrets safe, but to also accelerate. Faster vs higher security.

At one point there were desktop HSMs, they started going for tougher FIPS 140-1 levels.

Then nCipher/Luna/Utimaco/others started moving into the network-attached HSM - since 2000, these have been deployed in large volumes.

Many people working in HSMs were coming from a military background - they were considered weapons, hands on devices.  We need to shift away from that high touch model - we can't expect people to go and hand configure 1000s of devices.

We need to look at the challenges of insider threats, security zoning and patch management. As we move into quantum computing as a reality, we need to think about baking in cryptographic agility now - prepare for over the air updates.  These new algorithms will not likely look like what we have now - they may be bigger, have different attributes, etc.

Can we get to the point where we can have unattended or hostile deployments? How will this work with complex and sensitive application deployments?

Looking forward - we are shifting from a privacy challenge to an integrity challenge. I want to know that the software on my car came from the expected vendor and hasn't been modified. That the software running the elevator hasn't been tampered with.  We must have a trusted supply chain.

We need things to be easier to deploy, think about a home security system - there will not usually be highly qualified IT experts in the home to deploy.

Can we do to the HSM what Apple has done for the cell phone? I think we can!

ICMC18: Plenary Keynote Sessions

Yi Mao, atsec, welcome

This year's conference has more than 400 attendees from 26 countries and 9 tracks! The conference focus is Security First! We started with a very cute video.

Plenary Keynote Address: Digital Disruption and the Implications for Cybersecurity and Cryptography (P10a) Jason Hart, CTO Data Protection, Gemalto, United Kingdom

It took radio 38 years to reach an audience of 50 million people. Television only took 13 years to reach an audience of 50 million.

It only took 4 years for the world wide web to reach 50 million. We all started with modems - remember Hayes modems? US Robotics overtook them with their easier-to-use modems.

Facebook took only 2 years to reach 50 million subscribers. 1 in 7 divorces are blamed on Facebook - a new sales channel for divorce attorneys!

Pokémon GO took only 19 days to reach 50 million users. 19 days!

Nobody goes to the library to search for information anymore, even search websites are getting pushed out by higher order services like Alexa.

Digital Disruption - 10x innovation, 1/10th the cost and 100x the power.

In this time, you need to look for problems to solve. For example, look at Tipsy Robot - a drink-making robot that makes the experience easier and simpler for users (and consistent, and eliminates standing in long lines).

Is our industry easier and simpler to use?

Amazon Web Services (S3) was easier to use than others - completely disrupted the market.

Look at some market leaders - Uber, Facebook, Alibaba and Airbnb - they don't hold inventory, own cars, or create content. They are changing the market by being simple, filling a need, and being habit forming.

What are we doing to make cryptography easy? There is an opportunity here.

Data is being created at an astronomical rate - 90% of the data was created in the last 2 years.

All businesses have secrets, it is our job to help them keep their secrets safe.

Out of all of the breaches last year, he believes only 1% had the proper cryptographic controls in place. Why? Everyone knows the importance of using cryptography - but it's too hard to use. We have a huge opportunity as a community here - everything needs what we're doing.

Traditional approaches have to change. What does the user need? Will we evolve or not?

We have the tools to solve the problems for trust and data privacy. We need to reset our expectations of users that use our security solutions.  We need to make it exciting and fresh. Adoption will happen.

User is worried about data integrity - they don't connect it to what we do. We need to be working at that level and worry about the implementation details ourselves.

IoT will be driving cryptography adoption in 2018 - we need to be ready as an industry to provide the right algorithms and options the industry needs.

We are moving away from Platform as a Service, etc., and moving into Functions as a Service.

We know quantum is coming - are we crypto agile? Are we enabling our customers to be crypto agile?

The future will be decentralized - can we meet that need?  Can we do it simply?

Plenary Keynote Address: What’s Next for Cryptography? How CSE Balances Privacy and Innovation in the Public and Private Sectors (P10b) Scott Jones, Assistant Deputy Minister, Information Technology Security, Communications Security Establishment, Canada

CSE is Canada's cryptography leader, and need to protect the most important information, watch for threats and stay ahead of the industry.

CSE had in recent times been very focused on cyber threats and lost their focus on cryptography, which is the backbone of security. There will be a renewed focus on cryptography.

Cryptography is more widely deployed than the average user is aware - and that's okay, it should just work.

There are proposed changes to authorities and capabilities for CSE, including increased accountability measures.

People say that privacy is dead, but he believes that cryptography needs to play here to give people the option of privacy. In fact, it's our only option to maintain our privacy.

Breaches will happen - you can't protect against them all, so you need cryptography.

Unfortunately there is a lot of misinformation out there - that little lock on your browser is marketed as 'protecting your data' - true for in transit, but what happens on the other end?

Good cryptography implemented poorly is worse than none - it creates a false sense of security.

CSE will become (again) a proactive agent for research and standards, validation programs, secure tailored solutions program and cloud computing.

Government can't match the pace of innovation and speed of delivery of what is happening in industry, need to leverage that work for all except the most specific needs. Need to partner with industry here - share our knowledge and learnings.

Sitting in a building and being locked to a desk is no longer a method of securing data.

We need a variety of validated commercial products to choose from to meet different needs. We need to keep pace with new security vulnerabilities in commercial products. We need to evolve quickly.

Take the aircraft industry - they have to use only validated modules. When a security vulnerability comes out and is patched, applying the patch invalidates their validation. We should not be making the industry make these choices.

Our technology is too hard to use - too many breaches are related to misconfigurations. If a misconfiguration allows a cloud deployment to be breached, we need the data to be securely encrypted.

We must avoid the arrogance problem - we don't have all the answers. We need to work together to solve the tough problems. We need to make our technology accessible - people don't even know what to ask for.

We want to start publishing our research questions - to get your input and hopefully have you share your problems as well, and hopefully create partnerships to solve them. Looking to partner both inside and outside of the government - we all have pieces of the solution. We can't solve these big problems without industry.

We are creating the Canadian Center of Cybersecurity - the Cybercenter will bring together many different research fields, and cryptography and cryptology will remain at the central focus, it ties everything together.

This conference will strengthen the world's Internet, it will strengthen commerce, it will make the world a better place.

We should not be content with the status quo or ever believe we've solved all of the problems.

Creating a quarterly cyber journal to try to bring security topics to the general masses, looking for submissions for cryptography.

Consumers are looking for features, we're looking 10-20 years ahead on how to keep the Internet secure.