Tuesday, June 5, 2018

Learning Ally: Books I've Narrated

Working with Learning Ally, I record textbooks and novels for people who are blind or dyslexic, along with others who learn differently.

I've been keeping this list on LinkedIn, but hit the LinkedIn character maximum. I didn't always keep track, so there may be a few more books. I started volunteering at Learning Ally in Palo Alto in August 2012, followed them to Menlo Park and am preparing to start volunteering from home.

When I started, we read from physical books; we've since moved to VoiceText (scanned texts) and PDF books. This makes it easier to start recording at home!

Here are the books that I've narrated over the years. I'll continue to add to this post as I complete more books. The hours listed are the total length of the finished narration. It usually takes about three times that long in recording and correcting to get that finished product.

Recorded in 2018
Recorded in 2017
  • Tales from a Not-So-Friendly Frenemy (Dork Diaries #11) (Rachel Renee Russell) (248 pages, 2.17 hours)
  • Tales from a Not-So-Fabulous Life (Dork Diaries #1) (Rachel Renee Russell) (282 pages, 3.08 hours)
  • The San Francisco Earthquake (I Survived #5) (Lauren Tarshis) (98 pages, 1.27 hours)
  • Shadows of Sherwood (Robyn Hoodlum #1) (Kekla Magoon) (356 pages, 7.48 hours)
  • Mythology (Edith Hamilton) (475 pages, 11.35 hours)
  • Carve the Mark (Veronica Roth) (467 pages, 12.14 hours)
  • Goosebumps Book 8: The Girl Who Cried Monster (R. L. Stine) (138 pages, 2.50 hours)
  • Goosebumps Book 3: Monster Blood (R. L. Stine)
Recorded in 2016
  • Ink and Bone (Rachel Caine) (354 pages, 10.97 hours)
  • Dragons of Winter (James A. Owen) (389 pages, 9.38 hours)
  • Tru & Nelle (G. Neri) (328 pages, 4.70 hours)
  • City of Ice (Laurence Yep) (362 pages, 8:47 hours)
  • Winter: The Lunar Chronicles (Marissa Meyer) (828 pages, 20 hours)
Recorded in 2015
  • If You Could Be Mine (Sara Farizan) (248 pages, 4:59 hours)
  • The Vanishing Game (Kate Kae Myers) (356 pages, 7:45 hours)
  • A Northern Light (Jennifer Donnelly) (396 pages, 8:57 hours)
  • Liar Temptress Soldier Spy: Four Women Undercover in the Civil War (Karen Abbott) (513 pages, 12:20 hours)
  • The Spiritglass Charade (Colleen Gleason) (360 pages)
  • Wicked Girls (Stephanie Hemphill) (389 pages)
Recorded in 2014
  • The Wicked and the Just (J. Anderson Coats) (342 pages, 7:30 hours)
  • The Spy Catchers of Maple Hill (311 pages)
  • California Driver Manual (106 pages, 4:15 hours) (Yes, DRIVER, not Driver's ... )
  • Unbroken: A Ruined Novel (Paula Morris) (295 pages)
  • Froi of the Exiles (Melina Marchetta) (598 pages, 16:53 hours)
  • The Amazing Monty (Johanna Hurwitz)
Recorded in 2013
  • Every Other Day (Jennifer Lynn Barnes)
  • The Last Dragonslayer (Jasper Fforde)
  • The Red Convertible
  • Michael's Mystery
  • Inkheart



Friday, May 11, 2018

ICMC18: Update from the "Security Policy" Working Group

Update from the “Security Policy” Working Group (U32a) Ryan Thomas, Acumen Security, United States

This was a large group effort, with lots of people coming together. The security policy is used by the product vendor, the CST laboratory (to validate), the CMVP, and the user and auditor (was it configured in the approved fashion?).

The working group started in 2016 to set up a template, with the big goals of efficiency and consistency. When they started, they were focused on tables of allowed and disallowed (non-approved) algorithms, creation of a keys and CSPs table, and approved and non-approved services and the mapping between them.

But the tables were getting unwieldy, and we were told there were changes coming. Then folks started getting ideas about folding this into the automation work. So, the group took a break after helping with the update of IG 9.5.

Fast forwarding to today, many modules leverage previously validated libraries (OpenSSL, Bouncy Castle, NSS), so the documents should be very similar... but not always; they are often still very inconsistent. The new goal is to target 80% of the validation types, not all of them.

Creating templates and example security policies will give everyone a common baseline. This will be less work for everyone, and hopefully get us to the certificate faster!

By Summer 2018, they hope to have a Level 1 and Level 2 security policy template. This is not a new requirement, but a guideline. It will point you to the relevant IGs / sections and just help you streamline your work and CMVP's work.

Need to harmonize the current tables to the template. It will be distributed on the CMUF website when complete.

While doing this work, they discovered a few things were not well understood across the group (like the requirements for listing the low-level CPU versions).

They got feedback from CMVP about the most common comments they send back on security policies - like, how does the module meet the IG A.5 shall statements? How does XTS-AES meet IG A.9? Etc.

ICMC18: Keys, Hollywood and History: The Truth About ICANN and the DNSSEC Root Key

Keys, Hollywood, and History: The Truth About ICANN and the DNSSEC Root Key (U31c) Richard Lamb, Self-Employed, United States

He started with a segment about the Internet phone book from a television show. Richard notes they got a lot of things right; the key isn't literally broken up across 7 cards, but there are indeed 7 smart cards and 7 people from all over the world who help ICANN.

He did a quick demonstration of how DNS works with the room, and we learned how important it truly is. Dan Kaminsky's DNS cache-poisoning exploit at DefCon 2008 at least drew attention to how important DNS is.

The other source of trust on the Internet is the CA certificate roots, and he encourages all web traffic to be encrypted.

Four times a year, people really do get together to do a public key ceremony. You can come watch if you want - just like they said on TV! There are at least 12 people involved in the key ceremony, due to the thresholding schemes used by the HSM vendors. The members must be from all over the world, and cannot be all (or even mostly) Americans. They are Trusted Community Representatives (TCRs).
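
The "thresholding" the HSM vendors provide is a k-of-n quorum: no single card holder can activate the key material alone. As a rough illustration of the concept only (a toy Shamir-style split with made-up numbers, not ICANN's actual mechanism), any 3 of 7 shares can reconstruct a secret while fewer reveal nothing:

```python
# Toy k-of-n secret sharing sketch (illustration only, not ICANN's actual scheme).
import random

PRIME = 2**127 - 1  # prime field large enough for a toy secret

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, k=3, n=7)      # e.g. 3 of 7 card holders
assert reconstruct(random.sample(shares, 3)) == 123456789
```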

The smart cards are stored in a credential safe. The HSM is in a separate safe, and iris scans are available. It is all recorded live, in a secure room, and the process is certified. There are shielded spaces and protected tamper-evident bags (they changed bags after someone was able to get into a bag without leaving evidence).

The presentation moved very fast and lots of interesting things in there - can't wait to get access to the slides.





ICMC18: Panel Discussion: The Future of HSMs and New Technology for Hardware Based Security

Panel Discussion: The Future of HSMs and New Technology for Hardware Based Security Solutions (A31a) Tony Cox, Cryptsoft, Australia; Thorsten Groetker CTO, Utimaco; Tim Hudson, Cryptsoft, Australia; Todd Moore, Gemalto, United States; Robert Burns, Thales, United States

All of the panelists have a strong background in cryptography and HSMs. They started by defining an HSM: a secure container for secure information. It needs extra protection, may have acceleration, and may be rack mounted, a smart card, USB, a PCMCIA card, an appliance, etc. - maybe even software based.

Or, is it a meaningless term? It could be virtual, it could be a phone, it could be in the cloud - anything you feel is better than things that aren't HSMs, Tim posited.

Thorsten disagrees - there has to be a wall with only one door, and strong authentication.

Bob noted that overloading the term does cause confusion, but it should not dilute what good hardware-based HSMs are.

Tim notes that people buy their HSMs by brand, not always for their features or a deep evaluation of the underlying project. Thorsten agrees that may happen in some cases, or they may be looking for a particular protection profile or FIPS 140 level. Bob notes that branding and loyalty play a part, but does think people look at features. Tim said he's been in customer conversations where people are influenced by the color of the lights or the box.

Bob mentioned that it's not easy to install an HSM, so you're only doing it because you need it or are required to have it.

The entire panel seems to agree (minus some tongue-in-cheek humor) that being easier to configure is important - an easier-to-configure HSM is more likely to be installed correctly. But there's still a way to go - customers are always asking how to do this faster and more easily. This may be leading to more cloud-based HSMs.

Bob - There are trade offs - we can't tell them what their risk profile is and what configuration is right for them.

On security, Thorsten notes that some customers may be required to use older algorithms, and he recommends doing risk assessments; just because you are writing a compliant (say, PKCS#11) application does not mean it is secure. Being standards based makes migration and interoperability a lot easier, but it does not always meet all of your business needs. This is why most HSM vendors make their own SDK as well.

Bob agrees that a universal API means punting on some tough problems. As a vendor you can choose between being fully compliant or locking your customer into your API. PKCS#11 is great, but it is a C API - where is the future going? We need more language choices.

Tony asks - given the leadership of the PKCS#11 team in the room, what could we do better? Tim makes a comment on KMIP; Bob agrees it's important but still not fully portable. Bob thinks there's an opportunity to look at the problem in a different way for PKCS#11 - the implementation is locked into C, which is no longer on the growth curve for our customers. People want the more managed languages, so people are creating shims over PKCS#11.
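
As a sketch of what those shims look like in practice (the module path below is hypothetical; C_Initialize and C_Finalize are standard PKCS#11 entry points), a managed-language wrapper typically just loads a vendor's PKCS#11 library and calls through the C ABI:

```python
# Minimal sketch of a managed-language shim over a PKCS#11 module (path is hypothetical).
import ctypes

pkcs11 = ctypes.CDLL("/usr/lib/softhsm/libsofthsm2.so")  # hypothetical vendor module

rv = pkcs11.C_Initialize(None)          # CKR_OK == 0
if rv != 0:
    raise RuntimeError(f"C_Initialize failed: CKR 0x{rv:08x}")

# ... a real shim would go on to wrap C_OpenSession, C_Login, C_Sign, etc. ...

pkcs11.C_Finalize(None)
```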

Thorsten likes the aggregate commands in KMIP, but they're still not perfect.

Todd noted we need RESTful APIs, and there are gaps in what the standards are offering.

Tim notes that he doesn't think the vendors are always clear with their customers that they are going down a path of getting locked in. Bob disagrees that vendors are doing this on purpose.

Valerie couldn't help but note that standards are only as good as the people who contribute to them; if the vendors are finding gaps in PKCS#11 or KMIP, please bring those gaps to the committees, join them, and help improve them.

Tim notes that there are more hardware protections available to software developers (ARM TrustZone and Intel's SGX). Bob notes that they are interesting technologies, but not a true HSM, and not as strong of a container. Additionally, key provenance and ownership is an issue, as those keys are owned by specific companies. It would be good to expand this, particularly in the cloud space.

Thorsten believes the jury is still out - interesting approaches, but not quite there for the level of trust you would put into a Level 3 HSM. If a US vendor has a kill switch that could stop your whole system from running, it's much less appealing for those of us outside of the US. Worry about what other things could be exposed in that way - it's like a good new cipher; you need to look at it and how it is implemented.

Todd notes these technologies are starting to get very interesting because they can go into edge devices and cloud services. We are excited to see how these are going to grow. Vendors still need to provide key lifecycle guidance and standards compliance, and make sure confidentiality, integrity, and availability (CIA) are in place.

Thorsten notes it is a good building block for an embedded HSM, but he'd still be nervous about sharing the CPU.

Tim says it sounds like it's better than software alone, but not up to these vendors' HSMs. Bob remembers the time when HSMs were needed to get any decent performance; things are already very different just 15 years later, and he expects another incarnation in 15 more.

Todd notes that Google just launched Asylo, which leverages these technologies and managed SDKs. Bob agrees that middleware can benefit from technologies like SGX. Tim notes standards are still very important, and wants users to communicate this to vendors.

Lots more excellent conversation.



ICMC18: TLS Panel Discussion

TLS Panel Discussion (S30b) Moderator: Tim Hudson, CTO and Technical Director, Cryptsoft Pty, Australia; Panelists: Brent Cook, OpenBSD, United States; David Hook, Director/Consultant, Crypto Workshop, Australia; Rich Salz, Senior Architect, Akamai Technologies & Member, OpenSSL Dev Team, United States

There are quite a few TLS implementations out there, in a variety of languages. David thinks this is generally a good thing; it gets more people looking at the specification and working out the ambiguities. Brent agrees - it gets more people looking at it and lowers the chance of one security issue impacting all implementations. Rich noted that in the past, the way OpenSSL did it was the "Right Way" and people would write their code to interoperate with OpenSSL, as opposed to against the specification, but he thinks it's better to have more implementations as they fit different areas (like IoT).

There are a lot of implementations out there using the same crypto, ASN.1, or X.509 implementations. That can be good - like the Russian gentleman who writes low-level assembly to accelerate the algorithms, so everyone can be fast - but it's still good to see alternative implementations.

All of the panelists hear from their customers, getting interesting questions.  They generally have to be careful about turning things off, because you never know who is using an option or for what.

Bob Relyea noted users should be cautioned if they think they should write their own TLS library, when there are several very good ones out there. Forking is not always the answer, because it reduces the number of people looking at each implementation.  Let's make sure the ones we really care about have the right folks looking at them.

Brent notes that they (OpenBSD) are more focused on TLS for an operating environment, and they are glad they forked (LibreSSL). If OpenSSL hadn't wrapped its memory management the way it did, the folks and tools at OpenBSD would've found Heartbleed sooner.

Rich discussed the debate the IETF had been having with financial institutions who wanted a way to observe traffic in the clear. The IETF did not want this and said no; they are a paranoid bunch. This means some companies won't be able to do things with TLS 1.3 that they may have been able to do before. Encrypted will be encrypted.

Brent makes some deep debugging and injection tools, and also agrees: you don't want there to be an easy way to decrypt banking traffic.

Lots of great questions with quick answers that were hard to capture here, but a very enjoyable presentation.

ICMC18: TLS 1.3 and NSS

TLS 1.3 and NSS (S30a) Robert Relyea, Red Hat, United States

PKCS#11 is the FIPS boundary for NSS. AES/GCM presents a difficulty in the PKCS#11 v2.x API, but will be addressed in v3.x. While in FIPS mode, keys are locked to the token and cannot be removed in the clear. This means their SSL implementation doesn't have actual access to the keys - so MACing, etc., needs to happen within NSS's softoken.
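
A conceptual sketch of what "keys locked to the token" means (hypothetical names, not NSS's actual interfaces): the TLS layer only ever holds an opaque handle, and keyed operations like MACing happen inside the token object.

```python
# Conceptual sketch only (hypothetical names, not NSS's real API).
import hashlib, hmac, os, secrets

class SoftToken:
    def __init__(self):
        self._keys = {}                       # raw key material never leaves this object

    def generate_hmac_key(self):
        handle = secrets.token_hex(8)
        self._keys[handle] = os.urandom(32)
        return handle                         # caller gets a handle, never the key bytes

    def hmac_sha256(self, handle, data):
        return hmac.new(self._keys[handle], data, hashlib.sha256).digest()

token = SoftToken()
key_handle = token.generate_hmac_key()
tag = token.hmac_sha256(key_handle, b"TLS record")   # MACing happens inside the token
```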

In NSS's FIPS mode, only FIPS-allowed algorithms were on. This caused problems for accessing things like ChaCha, so now they are only locked by the security policy.

The TLS 1.3 engine in NSS is very different than 1.2. We rewrote the handshake handling state machine. We have finally dropped SSL 2.0 support altogether and have notified customers that SSL 3.0 is next (currently turned off). TLS 1.3 uses a different KDF as well; NSS already had support for HKDF through a private PKCS#11 NSS mechanism. Essentially, everything (but the record format) has changed.
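
For reference, the HKDF that the TLS 1.3 key schedule is built on (RFC 5869) is just two HMAC steps, extract and expand; a minimal sketch:

```python
# Minimal HKDF-SHA256 per RFC 5869: extract, then expand.
import hashlib, hmac

def hkdf_extract(salt, ikm):
    return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()

def hkdf_expand(prk, info, length):
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

prk = hkdf_extract(b"salt", b"input keying material")
key = hkdf_expand(prk, b"example info", 32)     # 32-byte output key
```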

The implementation was done by Mozilla, primarily by Eric Rescorla and Martin Thomson. They had to rewrite the state machine. We wanted customers to start playing with the software, but due to the way it's configured, they sometimes got it by accident (by applications choosing the highest available version of TLS).

When will you see this? It's fairly complete in the NSS upstream code, but nobody has released it yet. Draft 28 of TLS 1.3 was posted on March 30, 2018. We doubt there will be any further technical changes. The current PKCS#11 is sufficient, other than the KDF. The PKCS#11 v3.0 spec should be out by the end of 2018; they are still gathering final proposals and review comments into the draft. HKDF missed the cutoff... Bob will work on taking HKDF through the PKCS#11 process as the 3.0 review moves forward, to hit the next version of the specification.

How do you influence NSS? The more you contribute, the bigger say you get in its direction.

Thursday, May 10, 2018

ICMC18: KMIP 2.0 vs Crypto in a Cybersecurity Context

KMIP 2.0 vs Crypto in a Cybersecurity Context (G23c) Tony Cox, Cryptsoft, Australia; Chuck White, Fornetix, United States

They are both co-editors of the KMIP 2.0 specification. They are big fans of standards.

They wrapped up KMIP v1.4 in March 2017 (published in November 2017) and scoped KMIP 2.0. In January 2018, the KMIP 2.0 working draft came out, and they had another face-to-face in April 2018.

They restructured the documents and removed legacy 1.x artifacts. Problems we create today will impact us for the next 10-15 years, so they are working with that in mind. They want a way to make changes easily as needed in the future. The focus is on data in motion. They want to lower the barrier to adoption, and make KMIP more accessible as a service, with flow control and signaling for transacting encryption keys and cryptographic operations.

Passwords are now sent as hashed passwords, including a nonce to prevent replay. There is an effective double hash, so servers no longer need to store passwords in the clear. KMIP has had the concept of OTP (One Time Password) for a long time, but it's now better defined to make it easier to use and interoperate.
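
A rough sketch of the double-hash-plus-nonce idea (my own illustration, not the KMIP 2.0 wire encoding): the server stores only a hash of the password, and the client proves knowledge of it by hashing that stored value again with a fresh server-supplied nonce, so a captured proof can't be replayed.

```python
# Illustration of the "double hash + nonce" idea (not the actual KMIP 2.0 encoding).
import hashlib, hmac, os

def h(data):
    return hashlib.sha256(data).digest()

# Enrollment: the server stores only H(password), never the cleartext.
password = b"correct horse battery staple"
stored = h(password)

# Authentication: the server issues a fresh nonce; the client sends H(H(password) || nonce).
nonce = os.urandom(16)
client_proof = h(h(password) + nonce)

# The server computes the same value from its stored hash; the nonce prevents replay.
assert hmac.compare_digest(client_proof, h(stored + nonce))
```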

They've also addressed login and delegated rights. Login is a simple mechanism to reduce authentications, leveraging tickets to improve performance. It allows for delegation and supports 2FA. This will broaden KMIP's applicability.

Flow control allows server-initiated commands to clients over client-initiated connections. It allows the server to be a trust node managing encryption keys on other devices that are inside or outside a system perimeter. Best of all, it does not break the existing method of establishing a KMIP session (does not break clients).

Multiple ID placeholders allow for simpler execution of compound key management operations. This provides a path to combine traditional key management operations and HSM operations into a single KMIP operation, addressing the broader concerns of IoT and cloud. They also added some changes to make dealing with offline devices easier.

Digest values are now deterministic and usable between client and server, so clients can rely on servers! This addresses a major source of non-interoperability.

There is now a concept of re-encrypt. What happens when you have an object that's been encrypted and you want to rotate the keys? You don't want to expose keys while doing the transition. This new operation allows the keys to stay in the server (within the FIPS boundary), enabling rekeying more often. This is future proofing for post-quantum crypto, when we know people will need to rekey.
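
The shape of the re-encrypt idea, sketched with a generic symmetric cipher (Python's Fernet, purely for illustration; the class and method names are made up): the wrapped object is sent to the server, which decrypts under the old key and re-encrypts under the new key without either key ever leaving the server.

```python
# Illustration of server-side re-encryption during key rotation (not KMIP's actual API).
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

class KeyServer:
    """Keys live only inside this object, standing in for the FIPS boundary."""
    def __init__(self):
        self._keys = {"key-1": Fernet.generate_key(), "key-2": Fernet.generate_key()}

    def encrypt(self, key_id, plaintext):
        return Fernet(self._keys[key_id]).encrypt(plaintext)

    def re_encrypt(self, old_key_id, new_key_id, ciphertext):
        # Decrypt and re-encrypt entirely server-side; key material never leaves.
        plaintext = Fernet(self._keys[old_key_id]).decrypt(ciphertext)
        return Fernet(self._keys[new_key_id]).encrypt(plaintext)

server = KeyServer()
blob = server.encrypt("key-1", b"customer record")
blob = server.re_encrypt("key-1", "key-2", blob)    # rotate without exposing keys
```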

There are some default crypto parameters, which allow parameter-agnostic clients. The cryptography can change on the server side without changing the method in which it is requested by the client; the server can provide defaults if the client does not.

All of these features are meant to improve crypto agility and resilience, make KMIP easier to use, and allow it to be more impactful for data in motion, IoT, distributed compute, and cloud.

Implementation work is starting next, along with final reviews; they hope to close out any final issues in the next few months and publish in 2018!

This work is the culmination of our efforts and learnings in the key management space over the last 9 years as a standards body. But, if you have requirements the standard is not handling, come join us or let us know what the issues are.