
Thursday, October 15, 2015

GHC15: Defenses Presentations

Unwanted Software: Google's Efforts to Protect Users from Malicious Intent

Elisabeth Morant, Product Manager at Google



The biggest problem they are seeing is ad injection.
 
They issue 5 million Safe Browsing warnings a day – do not ignore them!

They have launched the Chrome Cleanup tool. Since launching it, they've halved the number of hijacked Chrome users.


They analyze EVERY binary on the Internet to find unsafe binaries and malware. One scam misled users to a fake "Chrome Support" page and gave them a number to call for "help". They also found a nasty executable that took advantage of a bug in the Chrome Web Store to persist on users' machines. They've been able to fix a few bugs - but people are still downloading. Technical solutions are not enough, and Google alone is not enough. They need to collaborate with others in the industry.

(Near) Real Time Attack Monitoring on System Landscapes

Kathrin Nos, Development Architect of SAP SE

Kathrin has a cat (Schrödinger, of course!).  She has an electronic cat flap that leverages the RFID chip embedded in the cat, so other cats cannot come in.  Now, it doesn't stop him from bringing in creatures like mice...

This is like a system landscape. You can put up a firewall, which lets some people in.  People can try to brute force this. You can train your engineers to have good passwords (or check that they are good), but what if they have downloaded malware?

It's not just about money, but the thieves want personal information, blue prints, contracts, etc.

We have to monitor attacks, because even well-meaning users can introduce attack vectors.

We're looking for outliers - aberrant behaviour.   This requires statistics - hope you paid attention in college! :-)

Think of it like a metal detector: define a path of filters and restrict data flow. This helps you define a pattern - if you see X failed login requests from the same source, you might want to lock the account.  That threshold probably shouldn't be a low number - people forget their passwords, network latency causes retries, etc.  The count should also reset as time passes between requests.
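
Here's a minimal sketch (mine, not hers) of that kind of filter in Python; the window length and threshold are placeholder numbers I picked, not values from the talk:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 600  # assumed: forget failures older than ten minutes
THRESHOLD = 10        # assumed: deliberately generous, as she suggested

failures = defaultdict(deque)  # source -> timestamps of recent failed logins

def record_failed_login(source):
    """Record a failed login; return True if the account should be locked."""
    now = time.time()
    window = failures[source]
    window.append(now)
    # Time passing between requests effectively resets the pattern:
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) >= THRESHOLD
```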

You can define an additional pattern to detect successful login events.  Here, the threshold is low. One attack is too many!

Hunting APT 28 in Your Fleet with Open Source Tools

Elizabeth Schweinsberg, Google

Elizabeth does incident response work at Google. There are multiple approaches for doing this.  You go on a hunt to find data, triage what is important, and dig in.

SpicyBorscht :-) – APT28, aka Fancy Bear, etc.

Some of these tools work together - Sofacy's components Coreshell and EvilToss work with Chopstick. Look for MD5 hashes of these binaries, registry keys, Windows event logs, anti-virus logs, and browser history.
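
A rough sketch of the hash-matching part of such a hunt; the hash below is a placeholder (it's the MD5 of an empty file), not a real APT28 indicator:

```python
import hashlib
import os

# Placeholder IOC list -- real indicator hashes would come from a threat feed.
KNOWN_BAD_MD5 = {"d41d8cd98f00b204e9800998ecf8427e"}  # MD5 of an empty file

def md5_of(path):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hunt(root):
    """Walk a directory tree and flag files whose MD5 matches a known IOC."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if md5_of(path) in KNOWN_BAD_MD5:
                    print("IOC hit:", path)
            except OSError:
                pass  # unreadable file; skip it
```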

Google has made their own tools, like GRR. They need software that can run quickly and on large amounts of data.

Timesketch uses output from Plaso to give you color-coded events from each machine as the attack unfolded.

They also have a tool that can dig through memory to see where the interesting stuff is happening. RAM data can be collected via GRR, and Rekall will do the memory forensics. The output is easy to read and share.

Intel® Device Protection Technology with Boot Guard

Shrestha Sinha, Technologist of Intel Corporation

There are so many avenues of attacks. Some are known and controlled, some we're aware of and dealing with - and others... we haven't learned about, yet.  Boot Guard's goal is to prevent these attacks from getting into the server.

Keep malware off, keep data where it belongs, maintain identity consistently and have a way to recover.

Funny analogy about the Leaning Tower of Pisa - we celebrate it because it has a defect - a bad foundation. Would we take our pictures next to a crashed system?

The primary question: is the code that we're running early in boot the right code? Example: the Mebromi attack, which reflashed the BIOS. It could bypass secure boot and own the entire platform.  This is where Intel's Boot Guard comes into play.  Boot Guard is an important building block in the chain of trust.

Imagine an Evil Hotel Maid scenario.  You leave your laptop in your hotel room, and she plugs in a USB drive, reads the keys from the TPM, and decrypts your hard drive.  Boot Guard protects against that scenario.

Make sure we validate all firmware from first execution. We have extended the root of trust down to the hardware.

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

GHC15: Attacks Presentations

Web Security: Thinking Like An Attacker

Sarah Chmielewski, Associate Technical Staff of MIT Lincoln Laboratory

Software development focuses on the end user - performance, functionality and usability.  Lots of attacks happening against web based applications, so we need to stay one step ahead and "think like a hacker" :-)

Injection attacks top the list of most common attacks, along with broken authentication and session management schemes.

Look at Heartbleed - it attacked the transport layer. We've all heard about it; unfortunately, it would leak in-memory data.  You can play with this yourself - but set up your own server, please don't attack the still-vulnerable servers out there :-)  (and also don't expose your server to the Internet... :-)

Think about how you could've discovered this on your own. A traditional attack method is the buffer overflow, and there are static analyzers that can catch those.  Heartbleed was a buffer overread, though - there were no static analysis tools that could find it.

Look at your own code for memory accesses - like memcpy.

Another common attack: Cross-Site Request Forgery (XSRF). XSRF exploits the way that a client's browser handles sessions. Check out Google Gruyere - a sandbox to practice with XSRF.  Attackers can do things like withdraw money, delete things, all sorts of "fun" things!  Look for forms that lack a unique token sent only with that form.
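
The defense is a per-session secret echoed back with each form. A minimal framework-agnostic sketch (real frameworks like Django do this for you):

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session secret and embed it in every form you render."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def is_valid_csrf_token(session, submitted):
    """Reject the POST unless the form echoed back this session's token."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted or "")
```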

Software developers mostly think of how to build things up - not tear them down. So play around in safe places with these vulns to expand your way of thinking.  Also check out Damn Vulnerable Web Application (DVWA), and check out OWASP.

There are also websites out there with old capture-the-flag games on them.

Test Driven Security

Rosalie Tolentino, Developer consultant of ThoughtWorks, Inc

You should not trust incoming data. Think about whitelisting, too.

Simple version: if you're validating zip codes in the US, the field should not allow alphabetic characters.
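
In the test-driven spirit of the talk, here's a sketch (mine) of that check written test-first; the regex whitelists five digits, optionally ZIP+4:

```python
import re
import unittest

US_ZIP = re.compile(r"^\d{5}(-\d{4})?$")  # whitelist: ZIP or ZIP+4, digits only

def is_valid_us_zip(value):
    return bool(US_ZIP.match(value))

class ZipCodeTests(unittest.TestCase):
    def test_rejects_alphabetic_characters(self):
        self.assertFalse(is_valid_us_zip("9410a"))

    def test_accepts_plain_and_plus_four(self):
        self.assertTrue(is_valid_us_zip("94105"))
        self.assertTrue(is_valid_us_zip("94105-1234"))

if __name__ == "__main__":
    unittest.main()
```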

A great way to prevent CSRF vulns - do more input validation.

Adopt an output-encoding mentality - separate user data from execution data. Look out for SQL injection attacks.
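
Parameterized queries are the standard way to keep user data separate from execution data; a minimal sketch using Python's built-in sqlite3:

```python
import sqlite3

def find_invoices(conn, customer_name):
    # The "?" placeholder sends customer_name to the database as data,
    # never as SQL text, so "x'; DROP TABLE invoices;--" stays harmless.
    cur = conn.execute(
        "SELECT id, amount FROM invoices WHERE customer = ?",
        (customer_name,),
    )
    return cur.fetchall()
```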

Grant only the least privilege needed. Guests should not be able to see invoices, for example - only approved accountants.

Build these into your code, so they run right away.  This makes developers more aware.

Just because your developers are adding assertions doesn't mean you can get rid of QE, security experts, or business ownership of security.  It also requires test-driven development knowledge and security awareness in all of your developers.

Ransomware: An Exploration into the Damaging Threats

Marianne Mallen, Antivirus Researcher of Microsoft Corporation.


Ransomware typically wants money before it will unlock your machine, but sometimes they want information.

Screenlocker - MS has worked with the FBI to kill this one off. It works just like you think: it locks your computer screen. It will claim that the Department of Justice will come after you unless you give them money.

Another encrypts your files and holds them encrypted... until you pay.

A browser locker will prevent you from going to other pages and claim you have to pay a fine or go to jail.  It's a false claim, though - restarting your browser will take care of it.

Common distribution methods are email attachments, drive-by downloads via browser exploit kits, and software downloaded by other malware.

They will attract you with fake emails from FedEx or other business services - watch out!

Starting in 2010, exploit kits became available for sale. Script kiddies would buy them and exploit many systems. They could make up to $54,000 a day for "unlocking" a user's screen.

Many of these use a command-and-control server with encrypted communication. These attacks are carried out so anonymously now that it is hard to shut them down.

If you get hit with Ransomware - don't hit the panic button, yet.

Be aware - do not just open attachments or click on suspicious links. Hover your mouse over a link to make sure it's going to take you to the place that's displayed in the email.  Keep your anti-virus and patches up to date. Keep backups!

If your browser is locked - restart. Download Windows Defender Offline. Please don't pay the ransom! Reach out to experts for help.

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

GHC15: Identity and Privacy Presentations

Identity & Access Management: Who is Touching What?

Laura Chapba, VP of Bank of America

Laura found herself with a 1.8 GPA and realized she should not be in pre-med. Major change to tech! She graduated with a 3.1 after LOTS of hard work and then started her career.  There are two paths you can choose for a career: focus deeply and become a subject matter expert, or have fun exploring many careers and move around a lot - like she did. :-)

Laura used applying for a credit card to demonstrate identity and access management.  We're pretty good at the provisioning part - verifying identity to issue the card.  There are more problems with deprovisioning - when are you no longer using this card?  Users are not good at identifying this, but credit card companies now automate it. Haven't used it recently enough? They will automatically close your account and give the credit line to someone else.

Now - authentication and authorization is really hard.

How do you authenticate that the person is allowed to use the card?  You could ask for a driver's license (NOTE: Thought this was not allowed by merchant agreement?).  Then verifying that folks have the credit limit is a little trickier.

Case example: Amy moves to NYC and gets a new app, NYC101, which asks to have access to her Facebook account. She authorizes it, and has fun exploring NYC.  Then the NYC101 database was hacked... then hackers knew her mother's maiden name, birthdate and home town. That's enough to get credit cards in her name!  Now... she's a victim of identity theft. :(

So - be careful about sharing this information online!

Why are they looking for women in this space? Looking for people that want to work together on diverse teams. Need people that are willing to collect all the facts before jumping to conclusions and willing to work with sensitive situations.

Identity Toolkit

Hadas Shahar, Technical Program Manager of Google

Identity is a building block of every website you want to build. Most websites allow you to use an ID and password, or use another system to authenticate - like Facebook, Twitter, and G+. Usually you are given several choices, and most of us cannot remember which one we used.

The login screen is the first impression people have of your website - and usually the most complicated and confusing.

OpenID is making some progress - but we are not there, yet.

One ID: take someone's email address first, THEN determine what they used to set up their account on your site in the past and redirect them to that page.  That is, if you previously signed in with a username and password, it will take you there.  Likewise, if you used Google to authenticate, it takes you there.
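
A toy sketch of that routing idea; the lookup table is mine for illustration (a real identity provider would resolve the email to a registered auth method):

```python
# Hypothetical registry mapping each known email to how its owner signed up.
AUTH_METHOD_BY_EMAIL = {
    "alice@example.com": "password",
    "bob@example.com": "google",
}

def next_login_step(email):
    """Route the user to however they originally set up their account."""
    method = AUTH_METHOD_BY_EMAIL.get(email)
    if method == "password":
        return "/login/password?email=" + email
    if method == "google":
        return "/login/oauth/google"
    return "/signup"  # never seen this address before: offer account creation
```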

But why do we have to type our email address every time? The browser now remembers your commonly used email addresses (because who has only one?).

This is also hard from a developer perspective - you have to work with all of the different APIs/systems.

The Google Identity Toolkit is hoping to lower the barrier to entry, making it much easier for developers to get this right.

Ethical Market Models in the Personal Data Ecosystem

Kaliya Hamlin of IIW

Kaliya has worked on a report: Personal Data: The Emergence of a New Asset Class. What type of data are they talking about? Relationship, government record, health, communications, education, context data (where are you, who are you with, what are you doing?)... and identity data.

Currently there are a lot of unethical data practices out there.

Data sources: public records, retail, schools, websites... passed on to data brokers who aggregate data about you and resell it.  Then it comes back and affects your life - and you never consented.

The individual should be at the center of their own data lives.

We need a personal cloud (data bank, data store). You should be able to put your geolocation data somewhere and have it be YOUR data under your control.  You could even understand yourself better with this data.

Vendor Relationship Management: rewiring how we interact with businesses today. What if businesses had to come to us to get our information?

We need a persistent data store that we can share with trusted vendors.

"Infomediary Services" - an agent that will go on the web and find deals for you. It's great, if you trust these services with our info, as opposed to the entire market having this information and making actions.  Like right now, you can go searching for mortgages or about a life event, but perhaps change our mind or complete purchase decision - but the offers don't stop. There is no way to signal "I'm done" - and now way to remain private.

There is a long-standing business model of data aggregation services - like Nielsen and Arbitron. People trust those aggregate services because they are not sharing my name; they've aggregated the data. Businesses get information without all of those businesses knowing who I am and everything about me. They don't need that.

We are making this real with a worldwide consortium: pcc.

There is an Internet Identity Workshop in Mountain View, CA, October 27-29, 2015, and again in April.

Life of PI - Protecting Your Personally Identifiable Information

Alisha Kloc, Security Engineer of Google

What happens with your personal data? We're in an information age, data is everywhere.

In the past, you only needed to protect your credit cards and passwords.  Now, there's so much more - photos, etc.

How is our data kept private? Well... there are no industry-wide standards for user data privacy. This is getting better in the EU, but not so much in the US.

This should be figured out early in the development lifecycle - during design - and designs should be reviewed by privacy experts. Products should only ask for the data required to make them work, and keep that data safe.  At Google, they also have privacy code reviewers, to make sure products are properly integrated with privacy protection tools.

We need to make sure we receive that data securely - encryption, encryption, encryption! You have control over your Google data - control is key! [Note: I did not know this - will have to figure out how to configure it.]

Data must also be encrypted while stored, with restricted access.

You should be able to access your data freely, but others should not.

Google has tightly restricted access to users' data. All requests are audited and reviewed by a team before data access is given.

All granted access is audited - they will know everything the Google employee looked at and did with it.

Google does not give your personal info to third parties unless you tell them to, you have a domain admin, they are using a trusted partner to process information, or the law requires it.

Data is not deleted right away, as users often accidentally delete things. Data becomes deactivated, and after a period of time it will really be deleted.

Be smart - is a 10% discount worth giving away your name, email, zip code, and phone number?

Talk about this - advocate for change. Let companies know this is important to you as a user. Choose products and services from places that actively protect data.  If you're a developer, work with others, share ideas, and try your best to do this right.

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!


Wednesday, October 14, 2015

GHC15: Security from the Boardroom

What to Protect When You Can't Protect Everything?

Kelly Kitsch, Advisory Director of PwC

Unlimited funds don't come unless you have a massive breach - and we'd all rather it not get to that.

Threats are complex and ever changing, we have to be able to adjust to protect our assets. Assets can be strategy related, branding, in progress patents, physical, etc.

Traditionally, people focus on perimeter security, but we need to really think about our high-impact assets.

The focus in the last few years has been on compliance - but compliance alone does not make you secure.

There is a new Economic Impact Analysis methodology. The first phase is threat modeling; the second (related) phase is identifying your critical assets - physical and intellectual.

CIA: Confidentiality, Integrity, and Availability.  Use this to assess your risk.

Once you've identified the most critical assets, and can justify why they are so important, it will ease your ability to get funding.

Security and Privacy by Design: Moving from Concept to Implementation

Madhu Gupta, Head of Member Trust and Security Products of LinkedIn

How do we do this better when we build something from scratch? You know, for the next time we start a project.

At LinkedIn, they think of their guiding principles: Members First!  The three important values are clarity, consistency and control - and most importantly: trust.

Everyone at the company must understand that they are accountable for security and privacy. Look out for new features being launched, and make sure we have the right privacy controls before they launch.

How do you do this?

  • Integrate security and privacy into product requirements
  • Hold office hours so people can ask you questions
  • Review our plans at product reviews
  • Embed security champion engineers
  • Share externally
You can further improve this by dedicating a team to review, consult, etc.

And when people do it right - make a t-shirt! Motivate and share your success.

Let the Games Begin (Cyber Security) 

Linda Betz, Chief Information Security Officer of Travelers

Linda has to worry about strategic things AND worry about delivering. :-)

What's the game? Everyone wants to hack YOU.  So, as CISO, it's important to minimize this and make it not as bad. You need to find and resolve issues quickly.

This is expensive - the average cost of a breach is between $3.7M and $5.5M.

What tools do your opponents use?

They could be state sponsored actors, paid to rain down malware on top of you.

It could be an insider - whether placed as an attacker or just careless.

What are they after? Personally identifiable information? Intellectual property? Or perhaps simply seeing if they can do it.  We also see denial of service - like a boycott, but the attacker is deciding that NOBODY can do business with you.

You have tools, too! Apply patches, security toolkits, tripwires, etc. You need to understand what's happening on your network - use analytics, etc.

Leverage the NIST Cybersecurity Framework to help guide you.

Make sure you and your team have all the training you need for various certifications.

When things go off the rails, you can bring in the FBI, regulators, cyber incident response companies and even lawyers.

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!

GHC15: Authorized But Anonymous: Taking Charge of Your Personal Data


Anna Lysyanskaya of Brown University got her PhD under Ron Rivest at MIT and has received many awards.  Her work allows people to prove themselves without exposing themselves.

Online, not much thinking required :-)

I log in – therefore I am… provided nobody else has your credentials. How does she log in? Let her count the ways… :-)

Basic: user name and password:

  •  Pros: intuitive, human-memorizable (up to a point) 
  •  Cons: not privacy preserving, insecure in so many ways
Many people reuse passwords, as there are too many to remember.  But, if one site is compromised… others can be, too.

With public key certificates:

  • Cons: not intuitive, not human memorizable, not privacy  preserving
  • Pros: secure – your device would need to be hacked or stolen before your identity could be stolen...
What are digital signature schemes? You need two keys:
  • Secret signing key SK(alice) allows Alice to sign and prove she is Alice
  • Public verification key PK(alice) allows anyone to verify.
To sign a message m, Alice uses SK(alice) to compute a signature σ.

When anyone wants to verify Alice’s signature on m, use her public key PK(alice).
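
A small working sketch of sign-and-verify, using Ed25519 from the Python cryptography library (my choice of scheme; the talk didn't prescribe one):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sk_alice = Ed25519PrivateKey.generate()   # SK(alice): kept secret
pk_alice = sk_alice.public_key()          # PK(alice): given to everyone

m = b"I am Alice"
sigma = sk_alice.sign(m)                  # only SK(alice) can produce sigma

try:
    pk_alice.verify(sigma, m)             # anyone with PK(alice) can check
    print("signature is valid")
except InvalidSignature:
    print("forgery or tampering detected")
```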

Now you know what this is – but how does it work if I don’t have your public key?  What if someone sends you a fake public key?  You must rely on information you can trust. That’s why we let others sign public keys. Anna’s key is signed by Brown University – make sure it’s not BOWRN University…

A certificate is when someone whose public key is well-known (e.g. Brown University) certifies that a public key belongs to a particular site/web server/person.

Your public certificate may contain additional information, like date of birth, gender, which buildings you're allowed to access....

So, they are not privacy preserving.

Even if you think, "I don't have anything to hide!" – just because you think this doesn't mean you're not leaving a trace of information that you consider private (health care, a past abortion, etc.) that someone else may want to attack, even if you're not a public figure.

For example, let's say you use a persistent ID to log into a newspaper. Even if it's "secret" and not associated with your name, you are still identifiable. You put in your zip code to look up the weather? They know where you live. Look up your horoscope? They learn your date of birth. Based on the articles you read, they can accurately infer your gender.  With those three pieces of information, someone could find your real name.

We'd like to use anonymous credentials - where you can prove you're authorized, but cannot be tracked back to you.
  • Cons: not super intuitive, not human-doable (need a device to remember the credentials)
  • Pros: secure - your device would need to be hacked before your identity can be stolen - and privacy-preserving
Okay, how do these work?  The underlying building block: zero-knowledge proofs. She gave us a neat graphical explanation with a 3-colored graph.

Anything that is provable at all can be proven in zero-knowledge.

Now that we've been doing this for awhile, efficiency is approaching certificate based non-anonymous authentication.

The old New Yorker joke was: "On the Internet, nobody knows you're a dog." Now, that's not true! Google knows everything about you. Facebook knows everything about you and your friends.

But what happens if something goes wrong?

Trust her, we can solve this with cool crypto :-)

Right now, you can't do this - there's no provider that allows for things like anonymously watching a movie (and actually paying for it).

Why aren't software companies doing this?  They may not think users care about this. It's also a fast developing industry, so they may not have caught up, yet.

Things may change - last week's European Court of Justice ruling may have some impact.

Come learn more at Brown! :-)

They have a CyberSecurity master's program - neat!

Post by Valerie Fenwick, syndicated from Security, Beer, Theater and Biking!


Friday, October 10, 2014

GHC14: Security: Multiple Presentations - Another Perspective

Finding Doppelgangers: Taking Stylometry to the Underground

Sadia Afroz (UC Berkeley) is using stylometry to find out who is interacting on underground (cybercrime) forums.  You want to figure out what these guys are doing there in the first place and who is really doing the work.

Current research around deanonymizing users in social networks is focused on similar usernames - but if you really care about being anonymous, you won't fall for that trap.  The next thing to look at is similar activities or social networks.  For most people, you can see that they will write a Facebook post and a tweet about the same event/activity, so it's easy to find the match.  This doesn't work for underground user forums, though.  So, instead, they are using stylometry to analyze the writing style.

Stylometry is based on the idea that everyone has a unique writing style - unique in ways you are not aware of, so it's hard to modify. To do this, you analyze the frequency of punctuation and connector words, n-grams, etc. You need quite a large writing sample to analyze - the larger the better - but you can still get some accuracy on small samples.
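
A toy version of one such feature, character n-gram frequencies, plus a cosine similarity between two writing samples (real stylometry systems combine many more feature types and a trained classifier):

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Frequency profile of character n-grams in a writing sample."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Higher similarity between profiles is (weak) evidence of a shared author.
doc1 = char_ngrams("selling fresh CC dumps, ICQ me")
doc2 = char_ngrams("fresh dumps for sale, hit me on ICQ")
print(cosine_similarity(doc1, doc2))
```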

They looked at four forums: one in Russian (Antichat), one in English (BlackhatWorld), and two in German (Carders/L33tCrew).  People move from one forum to another, but it's not always easy for researchers to get the full data sample.

Problems? These forums are not in English... and are often in l33tsp3ak (pwn3d).  Also, people aren't speaking with their natural voice; they are making sales pitches (which are more likely to overlap with other accounts that aren't actually the same person).

They parsed l33tsp3ak using regular expressions, with additional parsing to separate "pitches" from "conversation" (if there are no verbs and items repeated in lists, it's most likely a sales pitch, and was eliminated).

Then it seems to be all about probability - what is the likelihood that these are the same person?  Lots of analysis followed: do these accounts talk to each other or about each other? Are there similar usernames, ICQ numbers, signatures, contact information, account information, or topics? Did they ever get banned? (Moderators do not like multiple accounts for one person.)

People can sell their accounts - accounts that have been established with a higher rank could be sold for more. Some people also want to "brand" so they can sell different things with each account (like CC numbers with one, marijuana with another).

You could avoid detection by writing less (lowering rank), or you could use their tool, Anonymouth :-)

From Phish to Phraud

Presented by Kat Seymour, Bank of America, senior security analyst. The talk started out great with a reference to Yoda. Every talk should have a reference to Yoda!

Phishing used to be around silly things like weight loss pills and male enhancement pills.  But it's grown up - there's real money to be made here. $4.9 billion was lost to phishers last year.

Attacks come from all over the place now - mobile, voicemail, email, websites... and they've matured. No longer plain text filled with spelling errors, they now steal corporate branding and send well-written emails. They also take over websites that aren't well watched or maintained.

Ms. Seymour can look at things in the URL to find out more about the phisher (and to help learn suspicious patterns).  She can also find the IP address to do further research. Additionally, she can leverage the Internet Archive (aka the Wayback Machine) to see if the website has changed a lot recently (which shows evidence of takeover).

She pays attention to referrers to their website - if a new referrer shows up quickly in their logs and then disappears?  It's likely a phishing site - so then she has to watch the accounts that logged in through there for suspicious activity (in addition to doing further research on the referring site).

It's not as simple as blocking IPs - she can't control your personal machines... and all of the places you might be coming from.

She needs to work with ISPs to block known phishing websites, but ISPs are spread all over the world.  She can watch logs, traffic analysis, and referrers - but the phishers are constantly coming up with new ways of doing this.  It would be great to work with email providers to get them to watch out for this, but they are too diverse (some email providers are trying to address this, but it is difficult to coordinate).

Advice? Watch your statements, watch your statements, watch your statements!




GHC14: Passwords with Lorrie Faith Cranor

Lorrie Faith Cranor is a professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University.

Who knew that Carnegie Mellon had a passwords research TEAM!? (looked to be about 10 people).

Lorrie Faith Cranor noted that everyone hates passwords, but no matter how much we hate them, text passwords are here to stay.  These types of passwords have a lot of attack vectors: shoulder-surfing, online attacks and offline attacks.

Offline attacks are difficult to protect against, are very effective, and are the cause of many publicized breaches.  Passwords are leaked hashed or encrypted, and computers can take BILLIONS of guesses per second, comparing hashes to find matches. Additionally, attackers exploit the common usage of the same password at multiple sites.

CMU had rolled out a new password policy (number of digits required, upper/lower case, allowed symbols, etc). Everyone hated the rules and blamed her (alas, the IT department did not consult her). She asked them, though, where they got the rules from: NIST.  Sounds good - so she looked into where NIST came up with their recommendations. It seems they came up with their rules based on what they thought would be a good idea, but had not done any tests on actual passwords.

System administrators don't want to get in trouble - they are going to use "best practices". If Dr. Cranor wants them to use something "better" - she has to prove it and get it published in a respected source.

How can you get passwords to study?

One of the easiest ways to get passwords is to ask users to come into your lab and create passwords for you - but not everyone wants to walk into your lab to do this.  You can expand the reach by going online and getting thousands of passwords.  The problem?  You're asking people to NOT give you their real password, so this is not real data.

Another approach? Steal passwords. Of course, CMU cannot steal passwords - it's not ethical.  But, hackers like to post hacked password lists, so they can do research on some real passwords.

You can ask users to tell you about their passwords (where they put the special symbol, where do they put the number and capital letter, etc).

Or you can ask sysadmins for passwords, but they usually don't want to give these out. [VAF note: the sysadmin should NOT actually have access to the raw password?]

The passwords you get from leaked systems are often from throw-away sites, so not high quality.

Her lab was able to convince CMU to give them 25,000 real, high-value passwords. They could compare these passwords to leaked and previous study data to see how relevant they were. These CMU passwords conform to the CMU password restrictions.  They also got the error logs: how often people logged in using the password, the error rate for wrong passwords, and how often passwords changed - along with information about gender, age, ethnicity, etc.

To get this information took a LONG time.  Had to have two computers - one off of the Internet, locked in a room and not accessible by the researchers.  Researchers would write their tests and analysis scripts on a separate machine - then hand it over to the IT staff to run.  Black box testing.

How did they get these passwords that should've been hashed?  Many enterprises don't actually use hashes, they encrypt them with a system they can reverse so they can more easily deploy new systems. [VAF: ARGH!?!?!? what?!] So, at CMU they could decrypt the passwords (in the locked environment that the researchers did not have access to).

CMU Real Password Study

Dr. Cranor's team looked at things like how guessable each password was. Simple ones, like 1234, would be guessed in 4 tries. More complicated ones may be 'impossible'.

Since they had cleartext passwords, they could run a guessability meter on them, as opposed to actually guessing them.  They could see that CS students created the strongest passwords; business students did not create passwords that were as good.
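
You can play with the same idea using the open-source zxcvbn meter (not the CMU team's tool, just a freely available stand-in):

```python
# pip install zxcvbn
from zxcvbn import zxcvbn

for pw in ["1234", "baseballbaseball", "Tr0ub4dor&3"]:
    result = zxcvbn(pw)
    print(pw, "-> estimated guesses:", result["guesses"],
          "score (0-4):", result["score"])
```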

They could not find an effect for faculty vs. student, and ethnicity made no difference in password strength, but men did make passwords that were 1.1x stronger than women's.

You can make your password stronger by doing simple things - like adding a digit. A digit at the beginning of your password is better than no digit, but not as good as a digit in the middle. If you have multiple numbers in your password, spreading them out makes it harder to guess.

Password creation was annoying - if you're annoyed while doing it, though, you'll create a weaker password. :-)

They additionally took a look at leaked hashed/cracked passwords - those were weaker than ones created at sites like CMU that have an "annoying" password policy.

But, they could then compare the spread and diversity of their passwords collected in studies against real CMU passwords and found they were similar enough that her team could do further research with study passwords.

Large-Scale Online Experiment

Used Mechanical Turk - a site where you can pay users to participate in your study (10 cents, a dollar, etc). Found this is a great way to do online studies, as Amazon has to manage credit cards, etc.

Asked participants to create passwords under randomly assigned constraints. They could see entropy estimates and guessability estimates. They could also see that people would drop out of the study the more difficult and onerous the password rules were.

NIST research has produced various password entropy estimates.  NIST notes that adding dictionary checks raises entropy, and that a policy with more rules ("comprehensive 8") would give 24 bits of entropy.  For "basic 16" (no dictionary check), they estimate the highest entropy.

Users seem to use only a very few symbols (@ and ! are the most popular), even though many are available to them.

Found that in general, basic 16 could be pretty good - except for dumb users. Found these passwords quite easily: baseballbaseball, 123456789012345 and xxxxxxxxxxxxxxxx.  Oops!

Some minor restrictions will make basic 16 (which is less annoying to set and easier to remember) stronger than a comprehensive 8 password.

Longer passwords, though, take longer to type... so that is annoying in a different way.

Recommended Policy?

Not sure - our password-cracking algorithms are fine-tuned to 8-character passwords, so the fact that they have a hard time cracking 16-character passwords may not really be because those are harder, but rather because they have the wrong tools.

So... more research on n-grams (Google, book quotes, IMDB, song lyrics, etc) - now 16-character passwords become much easier to crack (Mybonnieliesovertheocean, ImsexyandIknowit#01).  Her students used this to win the DefCon password cracking contest this year with their new tools.

Found that password meters can be frustrating - the same password gets different ratings on different meters - but they do get people to create better passwords.

XKCD?

Did XKCD solve this all already?

So, Dr. Cranor's team studied this! They found that the passphrases were not easier to remember, and people didn't like the random-word passwords (but didn't like them any less than other password rules).  They tried a method of adding "auto correct" to the random-word passwords, which helped people log in faster.

Research uncovered one of the most common words that appears in passwords: MONKEY! Why? They updated their password survey to ask any user that included "monkey" in their password WHY!? A: a lot of people have pets named Monkey or a friend nicknamed Monkey or... well, they just like monkeys.

As much as they've tried, they have not found a way to make users be random. More research... :-)

An interesting thing about Dr. Cranor? She made her dress, and it's covered with a graph of discovered passwords (iloveyou in giant letters along the side).

Her team is starting to do more research on mobile vs. desktop: users seem to avoid anything that involves the shift key on mobile.

Interested in going to grad school and studying this? Join her team: http://cups.cs.cmu.edu/passwords.html

Question from the audience: does changing passwords make them better?  No, her research shows that when you change your password more frequently, you end up with a BAD password.  People make simple incremental changes to their passwords that make them easier to guess, particularly if the attacker has an "old" password.  The only time sysadmins should make users change their password is in response to a breach.

Password reuse: sure, for junk websites (newspapers, etc), but do NOT use that for work, bank, personal email. It's better to write them down (requires someone breaking into your house, as opposed to attacking a news site and then having access to your bank account).

This blog is syndicated from Security, Beer, Theater and Biking!

Thursday, October 9, 2014

GHC14: Security: Multiple Talks

With: Morgan Eisler, Shelly Bird, Runa A. Sandvik

Visualizing Privacy: Using (Usable) Short Form Privacy Policies

Morgan Eisler, @mogasaur, works at Lookout, a mobile security company.  This year over 2 billion people worldwide use the Internet - with more added every minute. Many (most?) of these use mobile devices.  Many companies have privacy policies, but only 12% of Internet users read privacy policies all the time - and only 20% of those that read them (even occasionally) understand them. They are simply too long!

Facebook's privacy policy is longer than the constitution of the United States!

If nobody reads them, can we really say that the customers are making a choice? Certainly not an informed one. Users trust their providers - but, expectations rarely match, and can cause negative surprises that lead to loss of trust and loss of revenue.

Consider "Yo".  An app that you can share all of your contacts with, and it will send them push notifications - a literal "Yo".  The app became very popular, and was hacked over night - suddenly peoples phone numbers were no longer private.  This wasn't even an app created by a company - just a few friends having fun.

At Lookout, they made a really short form policy that you could view on just one page on a mobile device - but was it helpful?

It is important, but if people are not reading it, it's not really helpful. The NTIA does give guidelines here to help anyone create a privacy policy.

Lookout created a new short-form policy that was quite simple - greyed-out icons for things like "Government" to show that they were not sharing their data with the government.  For entities they did share with, like "Carriers", you could click on the icon and get more information.

They did usability studies and found that customers liked it - but did they understand it?  People, for example, weren't sure what the "user files" icon meant - it looked like pictures. Did that mean it only applied to pictures?  They used usability studies to clear up some of the icons.



The Flattening of the Security Landscape and Implications for Privacy

Customers are like sheep (which is not at all like they are portrayed in movies). Sheep are stubborn, and if you try to push them too hard, they scatter (enter picture of sheep dog working hard :).  Even though Shelly Bird isn't "in" security, when a security breach happens, customers come to her.  She has to pay attention to everything before deployment, like making sure the BIOS is up to date.

Shelly thinks of security as a bowl - a container to store and protect your data/applications/etc. Also, like a castle - defense in depth.

Ten years ago, during a deployment, a customer said she had to remove IPsec from all of the machines. Huh?  The router/switch engineers said: That's our job!  What about the "last mile"?  Same customer didn't want IPv6 - convinced their firewall would be confused and not able to process it.

Once Shelly got through all of this - then the Intrusion Detection folks were unhappy! They could no longer read the packets.

Essentially, fear of change.

Shelly could see that the more she could push the work down the stack, the faster things worked.  For example, a high-level app encrypting a disk took four hours; letting the OS do it took two!

There are other, bigger problems here - credentials! The government likes authorized users to have something physical to prove their affiliations.  Shelly ended up with a dozen of these cards. Ugh.  Now they are moving them onto the mobile device, using TPMs as a trust anchor.  This is claims-based authentication, allowing business to move faster.

This is still very complicated, though, as the US Government doesn't even have trust across branches.

People want to have multiple identities, people travel/move around and have different reasons for doing different transactions - lots of work to get this right.

The Data Brokers: Collecting, Analyzing and Selling Your Personal Information

Runa Sandvik works for the Freedom of the Press Foundation - they protect the press and help inform the press of their rights, like those that have been arrested in Ferguson for not moving fast enough.

While she often talks about the NSA, today she's talking more about consumer privacy.

It's surprising how much companies know about you just by watching your patterns.  You are volunteering this information in exchange for a discount. Like the father that found out from Target that his daughter was pregnant: she wasn't even buying diapers or anything that obvious, but changed the products she was using in a way that indicated pregnancy to Target.

But this stuff happens online, too, and we don't even know about it.

And this information isn't just kept by the one company you are shopping at - it's getting collected by data brokers.  For example, OfficeMax addressed a letter to a man with the title "Daughter Died in Car Crash". Where did they get that data? Why did they have that?

Data brokers sell lists of rape victims, alcoholics and erectile dysfunction sufferers.  Where are they getting this? Why are they collecting it?

When asked directly, data brokers talk about caring about privacy, but don't want to share things like: how to see what information they have about you? How to remove/correct information? How to decline to share?

How many people have read the privacy policy for GHC? No hands went up... Runa did read it for us, and wasn't happy with what she found.  Things like your resume could be shared with non-recruiters.  The privacy policy also notes that they will not use encryption, unless required by law, to protect your information.  She also used a tool, Disconnect, to see what sites were gathering information from users of the GHC website - there was a data broker there (New Relic, which does help you analyze your site traffic, but what is *their* privacy policy? Will they share GHC stats with other orgs and corps?).

You can use Tor to protect yourself from these data brokers. The only way a site will know it's you is if you log in. There's no way for them, otherwise, to know who you are, so they won't have anything to track against.  Runa only uses Chrome for cat photos. :-)

Wow - this really goes beyond the annoying targeted banner ads!


Wednesday, October 3, 2012

GHC12: New Investigators - Mobile Devices

Moderators: Gilda Garreton (Oracle), Rachel Pottinger (University of British Columbia)

 On User Privacy in Personalized Mobile Services

Michaela Goetz, Twitter, presented her GHC12 award-winning paper.

Ms. Goetz's paper covered her research on how to best target advertising without compromising user privacy - tricky if you want to find out which advertising is working and to target advertising appropriately.

Her research included a method for doing this without requiring a trusted third party server, which involves doing counts by including noise terms - enough to protect the privacy without statistically impacting the overall counts.
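
Her paper's exact mechanism isn't reproduced here, but the classic way to add such noise to counts is the Laplace mechanism; a minimal sketch:

```python
import random

def noisy_count(true_count, epsilon=0.1):
    """Add Laplace(0, 1/epsilon) noise to a count.

    A count query has sensitivity 1, so this gives epsilon-differential
    privacy: no single user's presence changes the output much, yet the
    noise washes out of large aggregate counts.
    """
    # The difference of two exponentials is Laplace-distributed.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

print(noisy_count(10000))  # close to 10000; individual contributions hidden
```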

It's nice to allow users to set their own personal context for what is sensitive or not: for example, going to the hospital to see a relative might be very private, while walking the dog may not be. It's nice for the advertisers to learn what is sensitive or not, but even then they could learn more about the user than desired.

Understanding How Children Use Touchscreens

Presented by Quincy Brown, Assistant Professor, Bowie State University.

This is an important topic, as millions of these devices are being sold. Kids love these touch screen products, but they are not designed with little fingers and skill sets in mind.  Children have trouble with things like dragging - the concept of maintaining contact to drag was surprisingly challenging.

Dr. Brown's research covered adults as well, on several different devices. In one of the experiments, they measured success based on target size (the area you could touch in order to get the desired action) and gesture interaction (how they could do things like drawing letters or symbols). Children missed the targets twice as often as adults, and the researchers found (unsurprisingly) that larger targets were easier for the children to hit.

The researchers discovered a new phenomenon: holdovers! When the application was "too slow" to respond, so the user wasn't sure if they had hit the target or not, they would repeat their action. 96% of the "holdovers" came from children.

Kids' gestures were also different - they lifted their fingers more often. For example, to draw a square, children frequently drew 4 independent lines, whereas adults never lifted their finger, just turning it to make the shape. This causes problems for the touch device - it doesn't recognize four lines that overlap at the corners as a "square".

This post syndicated from Thoughts on security, beer, theater and biking!

Monday, November 7, 2011

GHC: Excited about presenting!

I'm getting really excited about the Grace Hopper Celebration of Women in Computing - I fly to Portland tomorrow. I've got my schedule put together [1], and the slides for our presentation posted on the GHC Wiki.

I'm thrilled to be presenting with Radia Perlman (Intel), Terri Oda (University of New Mexico), and Lindsey Wegrzyn (Adobe) - such an esteemed group of women. We're presenting on modern day security attacks and how to protect your privacy online. This isn't going to be a highly theoretical talk, but helping technically savvy people understand the sometimes tricky environment we all work in every day.

We're presenting on Thursday, November 10th 11:30AM-12:30PM Convention Center – B113-115. Come and check us out!

What talks are you most interested in seeing?

[1] Unlike most conferences where you have a choice between an invited speaker track and refereed papers - the Grace Hopper Celebration of Women In Computing has EIGHT simultaneous tracks. If you haven't spent time at least narrowing down which track you want to attend for each session, you won't really have time to figure it out on the fly and will likely end up in a track that isn't as interesting to you as some of the others. Btw, you can switch to different tracks throughout the day.

Friday, August 12, 2011

USENIX: Pico: No More Passwords!

Frank Stajano, from the University of Cambridge, talked about the growing password problem. Many years ago, when we all had only one or two passwords to remember, memorizing one or two simple 8-character passwords was very simple to do.

Nowadays, we likely have 20-30 (or more?) accounts, all with different password policies, and we just can't memorize them all - and the things we're coming up with that we believe have high entropy are actually very easily cracked, as illustrated by this recent xkcd.

The little shortcuts we take, like reusing our "good" passwords, mean that once a password is compromised on one site (through no fault of the user), the attacker has access to many more sites. This was demonstrated recently with the Sony password leaks.

Because we forget passwords, all websites have a method for recovering your password - which can be attacked.

Stajano says that passwords are both unusable and insecure, so why on earth are we still using them?

Perhaps we can start over? Let's get rid of passwords! That's where Pico comes in. The goals of Pico are:
  • no more passwords, passphrases or PINs
  • scalable to thousands of vendors
  • no less secure than passwords (trivial)
  • usability benefits
  • security benefits
He wants to make sure we stay away from trying to make the user remember things, so that eliminates things like remembering pictures, shapes, etc.

Other requirements for Pico are it must be scalable, secure, loss-resistant, theft-resistant, works-for-all, works from anywhere, no search, no typing, continuous.

Pico would have a camera, display, pairing button and main button, as well as radio to communicate. The device could look like a smart phone, a keyfob, watch, etc, but it is a dedicated device. It shouldn't be on multipurpose device, like an actual smart phone, as it would then be opened up to too many forms of attack.

The camera would use a visual code in order to know what it is trying to authenticate. The radio device would be used to communicate to the computer over an encrypted channel. The main button is used to authenticate, and the pairing button would be used for initialization of an authentication pairing. Obviously, this type of system would not just be an extension of existing systems, but would require hardware extensions.

Pico would initialize by scanning the application's visual code, getting the full key via radio, checking it against the visual code, and storing it. Pico would then respond with an ephemeral public key and challenge the application to prove ownership of the application's secret key. Once all of those challenges are passed, Pico comes up with its own keypair for that application and shares a long-term public key with the application. The application will store that and would then recognize your Pico the next time you try to connect.
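
A highly simplified sketch of just the return-visit step (the initial pairing, visual-code check, and continuous channel are all omitted), assuming Ed25519 for the per-application keypair:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the Pico makes a keypair just for this application, and the
# application stores the long-term public half.
pico_key = Ed25519PrivateKey.generate()
stored_public_key = pico_key.public_key()

# A later visit: the application challenges the Pico with a fresh nonce...
nonce = os.urandom(32)
# ...and the Pico proves itself by signing with the per-application secret key.
response = pico_key.sign(nonce)
stored_public_key.verify(response, nonce)  # raises InvalidSignature otherwise
print("recognized the same Pico as before")
```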

While you're connected to the application, your Pico would be continually talking to the application, via the radio interface.

Of course, simply having the Pico cannot be enough - otherwise someone could take your Pico and impersonate you. This is where the concept of "picosiblings" comes into play. Picosiblings would be things like a watch, belt, ring, cellphone, etc (things you often have with you), and the device would only work with those things nearby. [VAF: Personally, I'd hate to think I wouldn't be able to get money out of the ATM simply because I'd forgotten to wear my Java ring that day].

If you lose your Pico, you'd need to use some of your picosiblings to regenerate it - so don't lose all of your picosiblings as well! It seems that you want to have enough picosiblings, but not too many. I'm not sure how you determine that correct level :)

Pico access can't be tortured out of you, as it can't be unlocked by anything that you know (there's no PIN or password).

"Optimization is the process of taking something that works and replacing it with something that almost works, but costs less." - Roger Needham

With that in mind, Stajano notes that if he actually wants people to adopt this, he would likely need to think of a smart phone client.

There were a lot of interesting ideas in this talk, but the thought of carrying around yet another device is not appealing, and the burden of replacement and function (with all the picosiblings) makes this seem untenable to me - but, if it gets people thinking, then it's definitely a step in the right direction!

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

Thursday, August 11, 2011

USENIX: Privacy in the Age of Augmented Reality

Presented by Alessandro Acquisti, Associate Professor of Information Technology and Public Policy at Heinz College, Carnegie Mellon University.

Acquisti asks what are the trade-offs associated with protecting and sharing personal information? How, rationally, do we calculate the risk and benefits?

You can look at it from an economics point of view. Acquisti starts with an example from a paper called Guns, Privacy and Crime, analyzing the state of Tennessee's release of the names and zip codes of all people that had handgun carry permits. The NRA was outraged, as well as privacy experts, saying this information would put these people at more risk of crime - newspapers believed it would be the opposite. Acquisti and his colleagues studied this and found a direct relation: crime went *up* in areas with low gun ownership. Obviously, the criminals knew the risk to themselves was lower in those neighborhoods. I'm sure that's not what the state of Tennessee was going for.

The conundrum here, of course, is that different people value their privacy at different levels. He asks us to consider: "Willingness to accept (WTA) money to give away information" vs. "Willingness to pay (WTP) money to protect information." In theory, they should be the same, but in practice, they believed people have a higher WTA.

Acquisti and his colleagues did an experiment at a local shopping mall where they gave survey participants gift cards as a reward. One group received a $10 gift card that would not be traced; the other group was given a $12 card whose transactions would be tracked and linked to their names. Both groups were given the option to swap.

So, while they're both actually being given the same choice, it was psychologically framed differently when presented. People who were originally given the $12 card very rarely wanted to give up the $12 card to get their privacy back, while those that started with the $10 card wanted to keep it. If you have less privacy, you value privacy less. Scott McNealy's famous quote, "You have zero privacy anyway. Get over it," came up.

Another area they were curious about: is the Internet really the end of forgetting? That is, memories fade, but Facebook doesn't. I've said this over & over again to teenagers: "The Internet is forever." What the researchers wanted to see was whether people would discount information if it was old. Their hypothesis was that bad information would be discounted more slowly than good information. For example, if you last received an award 10 years ago, people may say, "Yeah, but what have they done lately," compared to being caught drunk driving, for which you may not ever be forgiven.

The researchers did three experiments: the dictator game (with real money), the company experiment (judging a real company, but no real money involved), and the wallet experiment (where subjects read about someone doing something either good or bad with a wallet and then judge him).

In the wallet experiment, even though all of this information is fresh in the minds of the subjects, they found that saying Mr. A did something positive with a found wallet 5 years ago (returning cash found) does not impact people's feelings about Mr. A, whereas if he had done it recently, they would have a more positive view of him. But if he did something negative (like keeping the cash), it didn't matter if it happened last year or 5 years ago - people did not like this Mr. A.

The lesson learned here is to be careful about letting negative information about yourself get on the Internet, as people will not forgive your past indiscretions. The speaker gave specific examples of the Facebook meme where young women post pictures of themselves when they are out-of-control drunk and passed out or worse. Even as they grow up and mature, they will not be forgiven for those past indiscretions.

And, with computer facial tagging getting better and better, even untagging yourself won't prevent you from being recognized.

The researchers studied public Facebook profile pictures along with their IDs and compared them to publicly known pictures of those people to see if people are using their real picture - they were able to discover that about 85% of them were accurate images. This could be further leveraged to see if people are using their own real picture on dating sites :)

What this means, is that even if you change your name, you still won't be able to escape your face (well, not without significant cost and potentially negative consequences).

The better and faster that facial recognition software gets, the less privacy we will have in public. Someone you just met could look you up by your face and learn all sorts of information about you. Scary!

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

USENIX: I'm From the Government and I'm Here to Help: Perspectives From a Privacy Tech Wonk

Tara Whalen, IT Research Analyst from the Office of the Privacy Commissioner of Canada, was a last-minute fill-in.

She opened with Ronald Reagan's line, "The nine most terrifying words in the English language are 'I'm from the government and I'm here to help.'" While Whalen is from the government, she hopes we aren't terrified of her. :-)

As the US government doesn't have an Office of Privacy, Whalen gave us an overview of her Canadian agency. The office was established in 1983 with the passing of the Canadian Privacy Act. Their mandate is to oversee compliance with the 1983 Privacy Act (covering the federal public sector) and the 2000 PIPEDA Act (covering the private sector), so their work touches both the corporate world and individual citizens. They also help review new policies and advise Parliament.

In addition to those more standard government functions, they also have a technology analysis branch, where they do investigations, audits, privacy impact assessments, and research. The division supports a lot of research, including a game that teaches Canadian children about privacy.

Whalen went into detail on a couple of case studies. The first was their investigation of Facebook: a group of law students had reviewed Facebook's policies against Canada's PIPEDA and Privacy Acts, and their result was a 24-point complaint to Whalen's office, which triggered an in-depth investigation.

The investigation was very detailed and involved tools like packet sniffers to see what is actually happening with data on the wire. After a year, the Canadian government delivered an official complaint to Facebook requesting eight items be corrected, six of which were relatively easy changes to the language on the site - for example, disambiguating account deactivation from account deletion.
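She didn't name the tools, but for flavor, "seeing what is actually happening on the wire" looks roughly like this with something like Scapy (a minimal sketch of my own):

```python
from scapy.all import sniff

def show(pkt):
    # Print a one-line summary per packet: source, destination, protocol.
    print(pkt.summary())

# Capture 20 packets of web traffic on the default interface (requires root).
sniff(filter="tcp port 80 or tcp port 443", prn=show, count=20)
```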

Some of the roadblocks her team hit: Facebook redid all of its privacy settings and added many new features in December 2009, and the third-party apps that hook into the system complicate everything. New complaints have come in, so the investigation is still ongoing and Whalen could not comment further.

The next case study she presented was the Google WiFi complaint, initiated by a privacy investigator in Germany. Basically, while Google was driving around collecting pictures for its Street View service, it was also collecting information on WiFi networks. Google's initial response was that no data payloads were being collected, which made the privacy experts very happy... until they found out that wasn't true. Google had actually accidentally collected over 600 GB of payload data from unprotected WiFi networks around the world.

Google of course apologized and quickly discontinued the practice.

Google did hand over the data collected in Canada (18 GB) to the government, which was then faced with a bit of a conundrum. Google had neither looked at nor used the data, so the privacy group didn't want to deep dive into potentially very personal information and expose things that were, at that point, still private. They did a cursory examination, looking at just enough of the personal information to verify that it was indeed collected, and presented aggregate figures in their report. They found whole emails, even though Google had stressed it had only picked up fragments - obviously, what was collected depended on what a user was doing at the exact moment the Street View car drove past their house.
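A cursory, aggregate-only examination like the one described might look something like this: walk the packet capture and count payloads containing email-like text, without printing any of it. This is a sketch under my own assumptions; the capture file name and pattern are hypothetical:

```python
import re
from scapy.all import rdpcap, Raw

EMAIL_RE = re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+")

packets = rdpcap("streetview_capture.pcap")  # hypothetical capture file
hits = sum(
    1 for pkt in packets
    if pkt.haslayer(Raw) and EMAIL_RE.search(pkt[Raw].load)
)

# Report only aggregate counts, as the investigators did, never the contents.
print(f"{hits} of {len(packets)} packets contained email-like strings")
```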

Google took the complaints very seriously, changed its engineering training, and appointed an internal privacy officer.

Another area the privacy office looks at is location privacy. The case shown here was a German citizen who sued Deutsche Telekom to get his own location data and then shared it with the world. Quite a shock how much information his cell phone carrier had on him!

Then there was the recent case where Apple was collecting location information from iPhones and 3G iPads even when location services were disabled on the device. This information wasn't just stored on the device; it was also copied to any computer you synced with and transmitted to Apple. It was well covered in the media, particularly because the maps made from it were so visually interesting.
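The cache lived in an ordinary SQLite file on the device, widely reported at the time as consolidated.db, so inspecting your own copy took just a few lines. The table and column names below follow those press reports and should be treated as assumptions:

```python
import sqlite3

# consolidated.db was also copied into iTunes backups; the path is a placeholder.
conn = sqlite3.connect("consolidated.db")
rows = conn.execute(
    "SELECT Timestamp, Latitude, Longitude FROM CellLocation "
    "ORDER BY Timestamp DESC LIMIT 10"
)
for ts, lat, lon in rows:
    print(ts, lat, lon)   # enough to plot those visually interesting maps
conn.close()
```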

It wasn't just Apple. Android and Microsoft did this as well, though to varying degrees.

In Canada, there is a lot of legislation being proposed to help protect privacy and to better define when data can be held and accessed by law enforcement.

It is good to know that, at least in Canada, there is someone in the government that cares deeply about protecting citizen privacy.

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

Friday, October 2, 2009

GHC09: Susan Landau: Bits and Bytes: Explaining Communications Security (and Insecurity) in Washington and Brussels

Susan Landau started out by telling us how she went from theoretical computer science faculty at a university to working on public policy at Sun Microsystems - a path she said she wasn't consciously working towards, though she feels she must have been, just a little bit, or she wouldn't have ended up where she is.

The US first started wiretapping during the Civil War! Wow! And apparently we didn't slow down: the US used wiretaps not only to watch criminals but also on members of Congress and Supreme Court justices! In particular, a member of Congress could be discussing the FBI budget with the FBI listening in - a clear conflict of interest!

Congress didn't like this and passed a law to regulate it, requiring wiretaps to target only a specific person at a specific number.

In 1994, a US law was passed requiring all digitally switched telephone networks to be built wiretap-enabled, to specifications set by the FBI - much to the chagrin of the telephony providers.

This is problematic. In 2004-2005, it was discovered that some non-US diplomats had been wiretapped - but not by a government entity (at least not officially). It came to light when there were problems with text messaging on one of the phones. The tap was traced to a switch in Greece, which had been bought from a US company with the wiretapping software disabled - so no auditing software was enabled either. Someone very knowledgeable about the switch used a rootkit to get in, turned on the wiretapping software, and targeted the diplomats. With no auditing enabled, the Greek phone company had no idea this was happening until the text-message problems surfaced. Once the illegal wiretap was discovered, the phones doing the listening suddenly went dark, and the perpetrators were never found. Very scary stuff!

This is a clear example of how software made to "protect" us can actually be used to spy on innocent people - terrifying indeed!

All of this gets much more complicated with technology like VoIP (Voice over Internet Protocol), where people do not have a fixed phone number: calls are tied to an IP address, which can change every time you reconnect your laptop or mobile device to the network. That makes it very hard to pinpoint the caller, and one of the risks is that the wrong person will be eavesdropped upon.

Landau knows it is very important for society to have secure communications - to enable conversations with first responders, for example, and we need to have the technology to do this.

Landau continued on about how much more devastating natural disasters are than terrorist attacks, yet for some reason they get nowhere near as much news and political coverage. I wonder if we all feel we're more protected from a random natural disaster? Or are we fascinated by how evil someone must be to purposefully hurt another human? hrm.

President Bush apparently authorized warrantless wiretapping in 2001, and it stayed relatively unknown for years. Landau wrote an op-ed for the Washington Post on the topic, and the next thing she knew, she was the expert on privacy. This is good, in that she now has Washington's ear, but she realized she needed more people to help support the work, and she was happy to find many intelligent, bright, like-minded folks.

Now she's been working on reviewing public policy - basically doing law reviews. Landau jokes that she feels she's in training to be a lawyer.

If you want to get into public policy, you need to learn their stuff - "laws, policies, motives" - to speak well, to write well, and to have great courage. She believes these are all traits a good engineer should have as well, so perhaps it's a natural career path after all. :-)

Thursday, October 1, 2009

GHC09: Technical Track: E-voting & privacy with health records

This session started out with a fun talk on electronic voting by Dr. Kathy S Faggiani, though it's unfortunate that she seemed to be preaching to the choir. That's not her fault - it seems only people already interested in voting security and wary of the existing digital voting machines came to the room.

She did a fun experiment with her son, inspired by California's Secretary of State, Debra Bowen, who had said she had to de-certify California's electronic voting machines because of all the mistakes they made that a first-year computer science student wouldn't make. As her son was in his second year, he went and wrote a voting system... and it turns out his wasn't as secure as it should've been either :-)

Electronic voting is really tricky, though, as you all know. We, as individuals, want to know that our vote counted - but if we're given a receipt that shows how we voted (or a number we can use to look up on the Internet whom our vote was cast for), we become susceptible to vote coercion. This is also why I do not like absentee voting, and why I am saddened by the state of California's push to force us into it by taking away polling places and "reminding" us about three times a year to sign up for permanent absentee status.

I've read too many stories about voter fraud and simply cannot trust our society to do the right thing in their own homes. I've already heard stories about ballots being stolen from mailboxes. *sigh*
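Coming back to receipts: cryptographers have sketched ways to thread this needle, giving the voter a receipt that proves their ballot was recorded without revealing its contents. A hash commitment is the simplest ingredient. This is a toy sketch of the idea, not any deployed system:

```python
import hashlib
import secrets

def make_receipt(vote: str) -> tuple[str, str]:
    """Commit to a vote; the receipt reveals nothing without the secret nonce."""
    nonce = secrets.token_hex(16)
    receipt = hashlib.sha256(f"{nonce}:{vote}".encode()).hexdigest()
    return receipt, nonce

receipt, nonce = make_receipt("candidate_a")
# The receipt can be posted on a public bulletin board: the voter can verify
# their ballot is there, but a coercer can't tell how they voted without the
# nonce. Of course, a coercer could demand the nonce too, which is exactly
# why real end-to-end verifiable voting schemes are far subtler than this.
print(f"receipt={receipt[:16]}...")
```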

Faggiani mentioned that Hawaii did "successfully" run an all-electronic election, managed by Everyone Counts. While it was deemed a success, turnout in this already low-turnout state dropped by 83%. Seems like a disaster to me. Clearly the voting - done entirely by cell phone or Internet - was not as accessible to the voting public as they thought it would be.

The next talk was A Cryptographic Solution for Patient Privacy in Electronic Health Records by Melissa Chase - another area where we are very concerned with the integrity and privacy of the data, and yet she pointed out many successful attacks on this information over the last few years. One very egregious example was a doctor who was blogging about his patients' records without their consent. Who needs hackers when someone is giving away your private data for free? *yikes*

Chase covered the problems with various encryption key schemes, including storing the private key on the primary server and key escrow systems, and went on to propose a hierarchical encryption scheme that seems promising.

She is a strong advocate of making sure the patient is in control of the data and decides where it can go and which doctor can see the data.
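To illustrate the flavor of a hierarchical scheme, here is a generic key-derivation sketch of my own (it is not Chase's construction): the patient holds one master key and derives subkeys per branch of their record, handing a doctor only the subkey for the records that doctor should see.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_subkey(parent_key: bytes, label: str) -> bytes:
    """Derive a child key for one branch of the record hierarchy."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=label.encode(),
    ).derive(parent_key)

master = os.urandom(32)                              # patient's master key
cardiology = derive_subkey(master, "cardiology")     # share with the cardiologist
labs_2009 = derive_subkey(cardiology, "labs/2009")   # or delegate more narrowly
# Knowing `cardiology` yields all cardiology subkeys, but reveals nothing
# about `master` or sibling branches like "psychiatry".
```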

This is a fascinating and important area of research that could revolutionize health care in industrialized nations, but there are still many issues to solve - like how to handle an emergency when the patient may not be able to "unlock" their data.