Wednesday, August 5, 2020

BH20: Keynote: Stress-Testing Democracy: Election Integrity During a Global Pandemic

Great intro from Dark Tangent (as per usual) - there are people attending from 117 different countries!  Lots of great scholarships this year as well. 

It's strange attending from home - no laser show!

Keynote Speaker: Matt Blaze, Georgetown University

Early elections in the US used little technology - they were literally just in a room and raising hands, but that doesn't scale and it is also not secret.  The earliest technology was simple paper ballots that were hand counted.  As long as the ballot box wasn't tampered with, you could have high confidence your ballot was counted. It was also easy to observe/audit. 

We moved on to machine-counted ballots and direct-recording voting machines, and finally computers. The technology matters less than whether the voters trust the technology and the outcome.

It can be hard to get right, due to some conflicting requirements: secrecy and transparency. How do you audit and make sure everyone's vote was counted the way they wanted it counted, but without disclosing how they voted?

It is impossible to re-do an election. Results need to be certified by a certain date, and you cannot really run the election again - there's not enough time before the transition of power is supposed to occur.

The federal government doesn't have as much oversight over each state for a federal election as you might think - they are mostly run by counties, with guidance and standards set federally.  There is no place to change everything nationwide. 

The ballots can (and usually do) vary even within a county - think about school board, city council, local ordinances, etc.  In 2016, there were 178,217 distinct ballots in the US. Sixty percent of eligible voters participated in the election; 17% voted in person early and 24% voted by mail, but the majority still voted in person.

In the US, we spend more money campaigning than on running the election itself.

Traditional threats to voting: vote selling, ballot stuffing, and mis-counting.  Foreign state adversaries are also a threat, but they may not care who wins - just about disrupting the process and casting doubt on the legitimacy of the election.

Taking a walk down memory lane: hanging chads!  Florida was using a punch card system (aside: we used the same system in Santa Clara County when I moved here, except we didn't have the "assistance" of the physical ballot - I had to bring in my sample ballot so I'd know which holes to punch).  In that case, since the Supreme Court stopped the count, we ended up with a certified election that nobody (but the winner) was satisfied with - people did not feel their votes were counted.

This debacle did lead to HAVA (Help America Vote Act), which mandated that everyone change their voting equipment and provided funding to purchase it.  Unfortunately, improved tech wasn't widely available.  Most common were DRE (Direct-Recording Electronic) voting machines - fully computerized.  This is different from the older model, where we used offline computers to tally the votes.  These new machines are networked, and much more reliant on software.

As you are aware, software is hard to secure.  There are no general techniques to determine whether it is correct and secure.  SW is designed to be easily changed - maybe too easily, if you're not authorized and still able to make a change.  This is a problem for these voting machines.

E-voting, in practice, has a huge attack surface: firmware, software, networking protocols, USB drives floating around, non-technical poll workers, accidental deletion of records, viruses....

Every current system that is out there now is terrible in at least one way, if not several.   There is an exception from the DMCA to do security research on voting machines.  This makes the DefCon voting village a lot of fun (and will be available this year as well). 

Some people are suggesting hand counting everything - but there are just too many items per ballot.  The amount of work to do a complete hand count is infeasible.

The other extreme: the blockchain!  But, it makes us much more dependent on the SW and the client (and what it puts in the blockchain). This does address tamper detection, but not prevention/recovery.  Also, civil elections aren't a decentralized consensus process.

There have been two important breakthroughs.  First, from Ron Rivest, Software Independence: a voting system is software independent if an undetected change or error in its software cannot cause an undetectable change or error in an election outcome... but that doesn't say how to accomplish it.  Then Stark came up with Risk-Limiting Audits: a statistical method to sample a subset of voting machines for a post-election hand audit to ensure they reported correct results.  If that fails, hand count the rest.
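
The audit idea can be shown in miniature.  This is only a toy illustration of the concept, not Stark's actual risk-limiting statistics (real audits choose sample sizes from the reported margin; the names here are mine): sample ballots at random, hand-check each against the machine record, and escalate to a full hand count on any discrepancy.

```python
import random

# Toy illustration of the idea behind a risk-limiting audit; a real RLA
# sizes the sample statistically based on the reported victory margin.
def toy_audit(machine_records, hand_read, sample_size, rng=None):
    rng = rng or random.Random(0)
    indices = rng.sample(range(len(machine_records)), sample_size)
    mismatches = sum(machine_records[i] != hand_read(i) for i in indices)
    # Any mismatch in the sample triggers escalation in this toy version.
    return "certify" if mismatches == 0 else "full hand count"

records = ["A"] * 600 + ["B"] * 400   # machine-reported interpretations
print(toy_audit(records, lambda i: records[i], sample_size=50))  # → certify
```

If the hand reads disagree with the machine records, the same call returns "full hand count" - the escalation Blaze described.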

You can learn more in the report "Securing the Vote" from the National Academies.

Everything pointed to 2020 going well... until... March.  Who would've expected a global pandemic?

When we think about voting disruption: you might not be able to get to the polling place due to travel or disability, and you can get an absentee ballot (including a "no excuse" ballot) - but, with the exception of states like Oregon, those are a small percentage.

If there are local or regional emergencies, like an earthquake or hurricane, that may prevent polling places from opening.  There was an election in NYC on September 11, 2001 - it was definitely disrupted and then highly contested. 

Postponing an election is a very disruptive thing - we'd have to figure out what that means for the US.  Who becomes president while we wait for the election?  Are there other options?

In an emergency, people may not be able to vote in their normal way: there may not be enough poll workers, they may be in the hospital, recently moved, etc. We are seeing increased pressure on the counties for this, in a time of decreased funding.

Matt then did a great walkthrough of vote-by-mail: how signatures are verified and how ballots are processed. How do we scale this up?  Exception handling can be very labor intensive, and there is high pressure on the chain of custody.   It's hard to know how many people will ask for absentee ballots - counties may not have enough, and they can't just copy ballots - so there is a necessary lead time.

How can you help? Volunteer as a poll worker, election judge, or wherever your county needs assistance with this election.

Friday, May 17, 2019

ICMC19: At the Root of It All: The Cryptographic Underpinnings of Security

Karen Reinhardt, Director, Security Tools, Entrust Datacard
What is security without cryptography? That's how it used to be - we secured our computers with physical access control. That advanced to password files and access control lists, and then once we got on the network we had to advance to things like LDAP. We still relied heavily on routers and firewalls. But now that we are in the cloud... are those still effective?

We know we will have issues - so we must do monitoring and detection (IDS, IPS, logging, log analysis, etc.) - but that only helps after things have gone wrong.  Wouldn't it be better to prevent the incident?

We used to secure devices by being in a physically secure environment, then we introduced VPNs - which allowed us to pretend we were in the physically secure environment.... but now we have so many connected devices in our house filled with personal and professional identity information.

Those identities are hot commodities! Ms. Reinhardt has worked many breaches and she notes the attackers are always going after the MS Active Directory.

Now even ambulances are connected to the Internet - but please don't attack them, you could put someone's life at risk.

Think about comparing crypto keys to nuts & bolts in construction. You need to use good quality nuts and bolts, and you need redundancy - or you could have a catastrophic failure (think about the recent crane collapse in Seattle).

If we have a few bad keys here and there - we might still be okay, depending on what is being protected. But, what if we lose an entire algorithm?  What if it happens before quantum computers?   We have nothing to replace RSA and ECC right now - what if something happens to them?  Should we be looking at older algorithms and ideas?

You need to assume your algorithms are going to fail and that you will need to get new keys and new algorithms out there. Think about this as plumbing - you need to be able to replace the pipes.
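
The "replaceable pipes" idea is often called algorithm agility.  A minimal sketch (my own example using hash digests, not anything from the talk): tag every stored value with the algorithm that produced it, so retiring a broken algorithm is a configuration change, not archaeology.

```python
import hashlib

# Minimal crypto-agility sketch: every stored digest carries the name of
# the algorithm that produced it, so old records remain verifiable and a
# broken algorithm can be retired by changing CURRENT. (Illustrative only;
# real agility also covers keys, signatures, and protocol negotiation.)
ALGORITHMS = {"sha256": hashlib.sha256, "sha3_256": hashlib.sha3_256}
CURRENT = "sha256"

def fingerprint(data: bytes, alg: str = CURRENT) -> str:
    return f"{alg}:{ALGORITHMS[alg](data).hexdigest()}"

def verify(data: bytes, stored: str) -> bool:
    alg, digest = stored.split(":", 1)
    return ALGORITHMS[alg](data).hexdigest() == digest

fp = fingerprint(b"hello")
assert verify(b"hello", fp)
# Migrating to a new algorithm is just re-fingerprinting with a new tag:
assert verify(b"hello", fingerprint(b"hello", "sha3_256"))
```

The design choice is simply to never store a bare digest or ciphertext without recording what produced it.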

If we lose RSA, we lose the entire chain of trust.  We can't reasonably replace every device out there - all the thermostats, traffic signals, cars, etc. Impossible.

Good crypto alone is still not good enough - the attackers are still going to go after your users, your user directories, your insecure machines on your network, your Kerberos golden ticket... We have to slow them down.

Thursday, May 16, 2019

ICMC19: HW Equivalence Working Group

Carolyn French, Manager Cryptographic Module Validation Program, Canadian Centre for Cyber Security, Canada; Renaudt Nunez, IT Security Consultant, atsec, United States
The working group will work towards a recommendation in the form of a draft Implementation Guidance (IG) to the CMVP.

Vendors often want to submit multiple hardware modules in the same report, and therefore on the same certificate. Under what conditions can the lab perform limited operational testing on the group of modules and still provide assurance that the right testing has happened?

The basic assumption is that IG 1.22 is already met (same crypto), but the modules may have different numbers of cards, chips, memory configs, etc.    For example, if you changed from a solid state drive to a classic hard disk... did you really need to do more testing?  Same for things like field-replaceable and stationary accessories.

The draft IG is out and they are looking for reviewers.

ICMC19: KMIP vs PKCS#11: There is no Contest!

Tony Cox, VP Partners, Alliances and Standards, OASIS, Australia

Tony got a question at ICMC 2018 about "which of these two standards will win?" - the answer is BOTH.

The two standards have different scopes and areas where they are useful, but both are standards, which should mean they are vendor independent. Both have informative and normative documents updated by their technical committees.

Tony gave a good overview of the specifications, including goals and documents, explaining it all - like what are profiles and what do they mean? Profiles help prove interoperability and do some baseline testing.

KMIP 2.0 is full of loads of new features - hashed passwords, OTP, delegated login, Re-Encrypt (looking forward to post-quantum crypto) and PKCS#11 operation... In addition to new features, lots of improvements as well.

PKCS#11 3.0 - out for public review any day now... also has loads of new things! New algorithms, support for Login of a user and AEAD, better functionality support for interaction with KMIP (Like Unique Identifiers). This started from V2.40 errata 1.

Key Manager uses KMIP and HSMs leverage PKCS#11... they work together. Key Manager is higher volume key management, key sharing. An HSM wants to keep the keys locked in.

PKCS#11 over KMIP essentially gives a standardized way to do PKCS#11 over a network.

The two standards are quite complementary and have many of the same individuals or companies working on both. In the end, by following the standards we are giving the market freedom of choice.


ICMC2019: Intel SGX's Open Source Approach to 3rd Party Attestation

Dan Zimmerman, Security Technologist, Intel, United States

SGX is a set of CPU instructions that enable the creation of memory regions with security features called 'enclaves'. It has encrypted memory with strong access controls, updatable trusted computing base (TCB). Developers can leverage this to relocate sensitive code and data to the enclave, which has a per process trusted execution environment (TEE).

Common use cases are key protection, confidential computing, and crypto module isolation.

SGX Remote Attestation is a demonstration that software has been properly instantiated on a platform in good standing, fully patched and indeed in the enclave. Attestation evidence conveys identity of the software being attested, associated report data and details of the unmeasured state.

The attestation service is truly verification as a service: privacy preserving and based on Enhanced Privacy ID (EPID). This approach does require that you're online and connected to a service.

The newer approach is Datacenter Attestation Primitives (Intel SGX DCAP). It is datacenter and cloud service provider focused: flexible provisioning, based on ECDSA signatures, a well-known verification algorithm. These primitives allow for construction of on-prem attestation services. This will leverage flexible launch control on the new Intel SGX enabled platforms. And best of all, it's open source! (That's how it got into the open source track :-) )

Platform Certification Key (PCK) Retrieval. Intel issues a PCK Certificate for each of its processors at various TCBs. The retrieval tool will extract platform provisioning ID info for Intel PCS service requests. There is also a provisioning certification service and caching service.

There is a quote generation library that has an API for generating attestation evidence for an Intel SGX based enclave, and of course a quote verification library.

SGX Remote Attestation is important as a successful attestation provides increased confidence to Relying Parties prior to deploying secrets to application enclaves. It also allows for policy based decisions based on quote verification outcomes.
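
A hypothetical sketch of that relying-party decision - the `Quote` fields and function names below are illustrative stand-ins, not the Intel DCAP API.  The flow is: verify the quote's signature chain, check the measured identity, and confirm the quote is fresh before releasing secrets.

```python
from dataclasses import dataclass

# Hypothetical relying-party logic for remote attestation. The real DCAP
# libraries are C; this just shows the policy checks in order.
@dataclass
class Quote:
    mrenclave: bytes      # measurement (identity) of the attested enclave
    report_data: bytes    # data the enclave bound into the quote
    signature_ok: bool    # stand-in for ECDSA signature-chain verification

def release_secret(quote: Quote, expected_mrenclave: bytes, nonce: bytes) -> bool:
    # 1. The signature chain must verify back to the platform's PCK cert.
    if not quote.signature_ok:
        return False
    # 2. The software identity must match the enclave we intend to trust.
    if quote.mrenclave != expected_mrenclave:
        return False
    # 3. The report data should bind our challenge nonce (freshness).
    return quote.report_data.startswith(nonce)
```

Only if all three checks pass does the relying party deploy secrets to the application enclave - the "policy based decisions" mentioned above.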

ICMC19: IoT TLS: Why is it Hard?

David Brown, Senior SW Engineer, Linaro, United States

For reasons we can't really explain, we now have things like our lightbulbs, toasters and fridges on the Internet... now those devices are vulnerable to attack.

5 worst examples: Jeep Hack, Mirai Botnet, Hackable Cardiac Devices, Owlet WiFi Baby Heart Monitor and Trendnet webcam hack. In the Jeep example, they had a lot of great controls in place, but not on who could update the firmware...

James Mickens was quoted as saying "IoT Security is not Interesting". It's not interesting because it's not different. We already know how to secure devices... so we should do it! TLS is great - so let's just use that!

But we have some really tiny devices out there - smaller than a Raspberry Pi. They may have tens of KB of memory and tens of MHz of CPU... how can we do TLS there?

TLS has a way of specifying which cipher suites can be used during the handshake. It's hard to change what an IoT device is using, so how can a service just start rejecting something?
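
Python's `ssl` module can illustrate the constraint (a sketch assuming an OpenSSL-backed build; the single-suite device is my own framing): a client pinned to one cipher suite, the way constrained firmware effectively is, has nothing left to offer once a server drops that suite.

```python
import ssl

# A client context restricted to a single TLS 1.2 suite, mimicking a tiny
# device with one baked-in cipher implementation. (TLS 1.3 suites are
# configured separately in OpenSSL and may still appear in the list.)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")
offered = [c["name"] for c in ctx.get_ciphers()]
print(offered)
# If the server stops accepting every suite in `offered`, the handshake
# simply fails - the device cannot update its list the way a browser can.
```

A browser vendor ships a new cipher list in next week's update; a lightbulb may never see another firmware image.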

One of the problems is that lots of folks do not implement TLS correctly - TLS done incorrectly is worse than not doing it at all.

TLS requires memory, time and randomness - all things that are in short supply on IoT devices!

Some suggestions are to pursue stream abstraction or to put TLS under the socket API, but those don't really work.

Looking at Sockets + TLS now, Zephyr network API changes, JWT, time, MQTT...

ICMC2019: Does Open-Source Cryptographic Software Work Correctly?

Daniel J. Bernstein, Research Professor, University of Illinois at Chicago, United States

Discussion of CVE-2018-0733 - an error in the CRYPTO_memcmp function, where only the least significant bit of each byte is compared. It allows an attacker to forge messages with effort lower than guaranteed by the security claims. Yes, 2^16 is lower than 2^128.... it only impacts PA-RISC.
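
The bug class is easy to reproduce in miniature - this is my own sketch of the idea, not the OpenSSL code: a compare that keeps only the low bit of each byte, so each byte contributes one bit of security instead of eight.

```python
def broken_memcmp(a: bytes, b: bytes) -> bool:
    """Buggy 'constant-time' compare: masks off all but each byte's low bit."""
    diff = 0
    for x, y in zip(a, b):
        diff |= (x ^ y) & 1   # the bug: & 1 discards 7 of the 8 bits
    return diff == 0 and len(a) == len(b)

tag = bytes(range(16))
forged = bytes(v ^ 0xFE for v in tag)  # flip every bit except the low one
print(tag != forged, broken_memcmp(tag, forged))  # → True True
```

With only low bits checked, a 16-byte tag has 2^16 effective values rather than 2^128 - exactly the gap the CVE describes.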

Take a look at CVE-2017-3738... It impacts the Intel AVX2 Montgomery multiplication procedure, but how likely is it to be exploited? According to the CVE - not likely, but where is the proof?

Eric Raymond noted, in 1999, that "given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone" - or, less formally, "given enough eyeballs, all bugs are shallow".

But, who are the beta-testers? Unhappy users? That's the model used by most social media companies nowadays....

And "almost every problem" is not "all bugs"... what about the exceptions? Can't those be devastating? How do we know anyone is really looking? Who is looking for the hard bugs?

This seems to assume that developers like reviewing code - but in reality they like to write new code. The theory encourages people to release their code as fast as possible - but isn't that just releasing more bugs more quickly?

So then, does closed source stop attackers from finding bugs? Some certifications seem to award bonus points for not opening your code - but, why? How long does it really take an attacker to extract, disassemble and decompile the code? Sure, they're missing your comments, but they don't care.

Closed source will scare away some "lazy" academics, but not attackers... just takes longer for you as a vendor to find out about the issue.

There is also a belief that closed source makes you more money, but is that still true? Aren't there a lot of companies making money off of support?

Dan sees the only path forward through open source - it will build confidence in what we do. Cryptography is notoriously hard to review. Math makes for subtle bugs.... so do side-channel countermeasures. Don't even get started on post quantum...

A big reason it's hard to review is our pursuit of speed, since crypto is often applied to large volumes of data. This leads to variations in crypto designs. The Keccak code package has more than 20 implementations for different platforms - hard to review!

Google added hand-written Cortex-A7 ASM to the Linux kernel for Speck... even though people said Speck is not safe. They eventually switched to ChaCha... but that created more hand-written assembly.

You can apply formal verification - the code reviewer has to prove correctness. It's tedious, but not impossible - EverCrypt is starting to do this, but only for the most simple crypto operations (and you still have to worry about what the compiler might do...).

Testing is great - definitely test everything! How can we auto-generate inputs and get lots of random inputs going through? Even then, you may miss the "right" input that trips a bug. There are also symbolic execution tools out there (angr.io, for example).
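
A minimal sketch of the "lots of random inputs" idea (my own example, not from the talk): drive an "optimized" byte compare and a trivial reference through many random inputs, including targeted near-miss mutations, and assert they always agree.

```python
import random

# Differential random testing: an optimized implementation is checked
# against an obviously-correct reference on random and mutated inputs.
def ref_compare(a: bytes, b: bytes) -> bool:
    return a == b

def optimized_compare(a: bytes, b: bytes) -> bool:
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0 and len(a) == len(b)

rng = random.Random(1234)
for _ in range(10_000):
    n = rng.randrange(0, 33)
    a = bytes(rng.randrange(256) for _ in range(n))
    b = bytearray(a)
    if n and rng.random() < 0.5:
        b[rng.randrange(n)] ^= rng.randrange(1, 256)  # guaranteed one-byte change
    assert optimized_compare(a, bytes(b)) == ref_compare(a, bytes(b))
```

Random testing like this is cheap and catches gross bugs, but - as Bernstein notes - it can still miss the one structured input that trips a subtle carry or masking error, which is where symbolic execution and formal methods come in.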