Wednesday, August 5, 2015

BHUSA15: Panel: Getting it Right: Straight Talk on Threat & Information Sharing

Panelists: Trey Ford(@treyford) is the Global Security Strategist at Rapid7, Kevin Bankston (@kevinbankston) is the Director of the Open Technology Institute and Co-Director of the Cybersecurity Initiative at New America, and @brianaengl and @hammem (speaker lineup appears to have changed, so twitter handles are what I've got:-) (also, the podium is super giant and blocking my view of the speakers, so I can't tell you who is saying what).

Sharing sounds like fun, but it's not as simple as we remember from our childhood.  There are legal implications, contracts, source trust issues, etc.

Intelligence is like a UDP packet you cast out and hope for the best.  How do you determine if the information is still relevant?

Facebook is working on this - how to do exchange of data?  What can we learn from it?

When people start sharing data, they realize that they need to share with someone who cares. I.e., if your concern is phishing, don't build a relationship with someone who is focused on bitcoin.

What is stopping companies from sharing information with other companies and the government? It will be relevant to you if new legislation passes.

Some of the barriers are around the Wiretap Act (Title I of the ECPA), which places limits on intercepting real-time communications and limits disclosure. Other limits: federal privacy laws protecting HIPAA data and educational records, self-imposed restrictions in Terms of Service, and antitrust laws (the DoJ could accuse them of colluding in an anti-competitive way).

Well, and there are nervous lawyers :-)

Most threat information doesn't include content or PII. Non-content can be liberally shared, with exceptions for security and consent via ToS.  DoJ has stated they won't go after companies sharing for these reasons.  Companies already do a lot of sharing, so do they really need new legal permissions?

But there's the new CISA: the Cybersecurity Information Sharing Act, S. 754. It authorizes sharing of broadly defined "cyber threat indicators" and "information about defensive measures" with "any other entity or [any agency of] the Federal Government".

DHS must distribute all information to other agencies, including the NSA, "not subject to any delay or modification". The government can use the information to investigate or prosecute a range of crimes unrelated to cybersecurity.

The House is also working on bills!

Congress has been looking at bills like CISA since 2009, and they are starting to feel like they have to do something so they can show they are serious about cybersecurity.

Please call your senator to oppose the bill and support privacy-enhancing amendments. If it does pass, it still has to go to conference with the House.

Check out StopCyberSpying.com or call 1-985-222-CISA for more information.






BHUSA15: Stranger Danger! What is the Risk from 3rd Party Libraries?

Speakers: Kymberlee Price of Bugcrowd, and Jake Kouns, the CISO of Risk Based Security.

It's well known that vulnerability statistics suck (see Steve Christey's (MITRE) Black Hat 2013 talk).

But, the truth is - we are getting attacked, lots of new (and old) vulnerabilities.  This is getting worse every year, not better.

Secunia says there are 15,000 vulnerabilities, but they counted Heartbleed as 210 different vulnerabilities (our speakers say it was just one, while some audience members noted it was three).

There were 100+ vendors impacted by Heartbleed, affecting over 600,000 servers.

Very large companies are using OpenSSL: Oracle, HP, Blackberry, Juniper, Intel, Cisco, Apple, etc... so, it's not just little startups using open source anymore.

There have been 52 new vulnerabilities fixed in OpenSSL since Heartbleed, with an average CVSS score of 6.78. Nine of them had a public exploit available.

We're beating up on OpenSSL - but what about the GNU C library (glibc), which had a heap vulnerability (GHOST)? It's everywhere.

Efficiency at what cost? By leveraging third-party source, companies can deliver faster, cheaper, etc. But what are companies picking up in exchange? Some products have more than 100 third-party libraries in them. Are they being treated with as much scrutiny as they should be?

The speakers aren't saying: "Don't use 3rd party libraries", but rather to think about things during design and development.

All of the data they are sharing this week is from public sources, even though that data is limited.

Look at FFmpeg - they have 191 CVEs, but over 1000 vulns fixed.

These vulnerabilities spread - think about the FreeType Project font rendering toolkit. It's used by Android, FreeBSD, Chrome, OpenOffice, iOS, video games (including the American Girl Doll game). Everywhere! There was a vulnerability (missing parameter check) that allowed you to jailbreak your iPhone... or someone else to take over your iPhone. This is insidious, as you have to wait for the vendor to fix it.

libpng, Apache Tomcat... everyone is using this and including these things in toolkits.

We shipped a vulnerability to Mars! (Java is on the Mars Rover).

Interesting to note: some vendors don't even release CVEs for anything under a CVSS of 5.0. Number of CVEs since 2007: OpenSSL (90), Flash (522), Java (539), FreeType (50), libpng (28), Apache Tomcat (100).

Now, this is not telling you what is more or less secure. For example, Adobe has an excellent bug bounty program and internal pen testers. Just because a product has only reported a few doesn't mean more aren't lurking.

We should consider time to relief. How long does the vendor know about the issue before they provide a fix? You can use this to figure out how serious that vendor is about security.

They had to define a framework to understand time of exposure, identify the vendors and products you want to work with, and establish a scorecard.

Vendor Response Time is how long from when the vendor was informed to when they responded to the researcher. This can't be an automated reply - it must be actual acknowledgement.

Time to patch - when do the customers get relief.

But another time to consider: how long were customers vulnerable? That is, how long from when the patch was available to when it was applied (many folks only do updates quarterly, for example). Total time of exposure covers the period from when the vulnerability was discovered until it was fixed at the customer site.
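All of these windows are just date arithmetic; a minimal sketch with made-up dates for one hypothetical vulnerability:

```python
from datetime import date

# Hypothetical timeline for one vulnerability (all dates made up)
discovered      = date(2014, 1, 10)   # researcher finds the bug
vendor_informed = date(2014, 1, 15)
vendor_acked    = date(2014, 2, 1)    # human acknowledgement, not an auto-reply
patch_released  = date(2014, 6, 1)
patch_applied   = date(2014, 7, 15)   # customer finally deploys

vendor_response_time = (vendor_acked - vendor_informed).days    # 17
time_to_patch        = (patch_released - vendor_informed).days  # 137
customer_lag         = (patch_applied - patch_released).days    # 44
total_exposure       = (patch_applied - discovered).days        # 186

print(vendor_response_time, time_to_patch, customer_lag, total_exposure)
```

Fold those numbers into a per-vendor scorecard and the pattern of how serious a vendor is becomes visible quickly.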

We got to walk through a few case examples.

In one case, a researcher reached out to a company on Twitter asking how to securely disclose a vulnerability - and for 2.5 months they kept pointing the researcher at their insecure support page.

It is critical for vendors to respond promptly and investigate the issue.

And this data is hard to figure out, as the terminology for "zero day" (oh day, 0-day) seems to be malleable. The speakers believe that it's only a 0-day while the vendor does not know about it. Once the vendor knows, or the vuln is publicly disclosed, it's no longer a zero day.

In one case, the vendor created a patch - but did not release it, instead they wanted to roll it up to their next version release. In the end, their customers were exposed for 451 days.

While most companies update their systems every 30 days, their exposure could be much longer if a vendor doesn't actually provide the fix to their customers.

Advice: once you incorporate a 3rd party software suite into your tools, you need to become active in that community, watch it, help out, provide funding, or you are putting your own customers at risk.

You also need a clear prioritization scheme to know what to fix and when (most likely your incoming rate is higher than your fix rate). Proactively manage your risk: understand what third-party code your organization relies on, implement a plan to address the exposure, and work with the vendors.




BHUSA15: Gameover Zeus: Badguys and Backends

Speakers: Elliott Peterson is a Special Agent with the FBI in the Pittsburgh Field Office. Michael Sandee is a key member in the Fox-IT financial malware intelligence unit. Tillmann Werner is the Director of Technical Analysis at CrowdStrike Intelligence.

Gameover Zeus went after backend banking systems, very successfully - a botnet run by an organized crime gang. It was designed to make it impossible for the good guys to subvert.

We estimate that the losses ranged from $10,000 to $6,900,000 per attack. The criminals had knowledge of international banking laws, leveraged international wires, and used DDoS attacks against the banks to distract and prevent the victims from identifying the fraud.

The Dirt Jumper command-and-control kit was being used for the DDoS attacks.

They saw the $6.9 million loss and informed the bank - but the bank could not find the loss. It took a long time to find due to the DDoS. The FBI was finally able to track down who was receiving the funds in Switzerland and put a stop to it. Now the feds can prevent the transactions and even get the money back in the end.

The first Zeus came out in 2005 as a crimeware kit. The primary developer "abandoned" the project, and turned it into a private project in 2011.

The JabberZeus crew was using the kit malware, then moved to Zeus 2.1.0.x, which included support for a domain generation algorithm, regular expressions, and a file infector. Then, in September, it was upgraded to Mapp 13, which included peer-to-peer plus traditional comms via gameover2.php. The focus was on corporate banking, and it would often drop in additional malware (like CryptoLocker).
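For context, a domain generation algorithm generally works like this (a generic sketch only; the seed and hashing here are illustrative, NOT GameOver Zeus's actual algorithm):

```python
import hashlib
from datetime import date

def generate_domains(seed, day, count=5):
    """Generic date-seeded DGA sketch; not any real malware's algorithm."""
    domains = []
    for i in range(count):
        data = "{}-{}-{}".format(seed, day.isoformat(), i).encode()
        digest = hashlib.md5(data).hexdigest()
        domains.append(digest[:12] + ".net")  # hash prefix as a hostname
    return domains

# Bot and operator compute the same list for a given day, so the
# operator only needs to register one of the candidate domains.
print(generate_domains("example-seed", date(2014, 5, 30)))
```

This is why DGAs resist takedowns: defenders have to block or seize every candidate domain, while the operator needs just one.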

The attack group members seemed to have 5 years of experience, some as many as 10. Mainly from Russia and Ukraine, with two leaders. It included support staff and around 20 affiliates.

They had "bulletproof" hosting - exclusive servers together, virtual IP addresses, new address in 2 business days - very expensive!  Additionally, proxies all over the place - like in front of the peer-to-peer network.

The network was protected using a private RSA key.

The FBI, and their private assistants, had to watch for traffic patterns and cookie theft/removal. For example, they could remove your existing cookie to force you to login again so that they could get your password.  Once they got what they wanted, they would block (locally) access to the bank's website.

This wasn't just financial, but also political. There was espionage, targeting intelligence agencies, looking for things around the Crimea and Syrian conflicts.  Specifically looking for top secret documents, or agent names.


Why take control? If not, if the feds presence was detected, the command engine could shut down and destroy the rest of the botnet.

The botnet uses a basic p2p layer. Every infected machine stores a list of neighbor nodes, updated often, and peers talk directly to each other - getting weekly binary updates!

They had proxy nodes, which were announced by special messages to route C2 communication (stolen data, commands). Many nodes in the cluster are not publicly accessible, so there are proxy nodes that encapsulate traffic in HTTP so they can continue to communicate with infected machines behind a firewall.

The malware was also configured to NOT accept unsolicited responses - a response must match a request - so the feds (and friends) could not use a simple poisoning attack.

Goal: isolate bots, prevent normal operation, by turning the p2p network into a centralized network with the good guys at the controls (a sinkhole).

The good guys had to attack the proxy layer with a poisoning attack. Peers maintain a sorted list of up to 20 proxies, with regular checks to see if they're still active. They had to poison that list, and then make sure none of the other proxies replied any more. They needed to work with ISPs to get access to some active proxies.

They needed to take over the command and control node first - that's where the commands came from. Once they were in, they killed the old centralized servers (one in Canada, the other in Ukraine). They took advantage of this to completely change the network digraph and essentially took down the botnet.

Needed to watch emails exchanged with "Business Club". Helpfully, "Business Club" kept ledgers!

The FBI needed to look at the seams to find who these people were. For example, Bogachev used the same VPN servers to log into his personal/identifiable accounts as he used to control the botnet.

They are still looking for him. The FBI is offering $3 million for information leading to the capture of Bogachev (they showed us pictures of the guy - he likes his fancy boats).

Let me know if you get a piece of that bounty!




BHUSA15: Executive Women's Forum

Alta Associates hosted Black Hat's Executive Women's Forum! The discussion was led by none other than Joyce Brocaglia, CEO of Alta Associates and Founder of EWF.  This was a great opportunity to network with other women working in security and hear more about the programs of EWF (and lunch was good, too!)

EWF focuses on women making decisions in security and privacy, hosting an annual conference where women can spend time with other women working in security. Women who have attended past conferences note how awesome it is to be surrounded by so many intelligent and security focused ladies. It's very inspiring to see the success stories and see how they got there and learn about their road blocks.

In addition to the major EWF conferences (this year's is October 20-22, 2015 in Scottsdale, AZ), they do local events as well.

This year's conference theme is Big Data, Big Risks, Big Opportunities, with talks on negotiating, opportunities and innovation in healthcare big data, data sovereignty, global cybersecurity policy and government control, and the voice privacy conundrum. It also includes a themed dance party!

EWF provides mentors to help junior and middle managers get to the next step, as an inspirational conference is good to get things started, but not to maintain progress. They've got a program called The Leadership Journey. It's a year-long program! It covers things like establishing your leadership vision, optimizing emotional and social intelligence, managing stress and cultivating resilience, and work/life integration (because there is no balance).

The soft skills are actually the hard skills - lots of people are good at coding, but not any good at the truly hard stuff - the "soft skills".

This was followed by a fun Q&A with Theodora Titonis, Vice President of Mobile at Veracode.


Recommended reading: The Confidence Code.


 

BHUSA15: Understanding and Managing Entropy Usage

Bruce Potter is a director at KEYW Corporation and was previously the Chief Technologist and cofounder of Ponte Technologies. Sasha Wood is a Senior Software Engineer at KEYW Corporation, with ten years' experience in developing and assessing software systems, and researching classical and quantum computational complexity.

Their research was funded by Whitewood Encryption Systems, with help from great interns.

Their goal was to get a better understanding of how entropy is generated and consumed in the enterprise. There were rants from Linus, but nobody seemed to be looking at the actual source code. They wanted to determine rates of entropy production on various systems, determine rates of entropy consumption for common operations, and determine the correlation between entropy demand and the supply of random data. The theme: "No one really understands what's happening with entropy and random number generation."

What uses more entropy: generating a 512-bit RSA key or a 1024-bit one? They both use the same amount! Surprisingly, running /bin/ls uses more entropy from the kernel than setting up a TLS connection!

How do we distinguish between entropy vs random numbers? It's a bit of a state of mind, there are several ways to think about it.  Entropy is the uncertainty of an outcome. Randomness is about the quality of that uncertainty from a historical perspective.

Full entropy is 100% random. There are tests that measure entropy, but randomness either is or is not. Entropy has a quantity and randomness has a quality. Think about the simple coin flip. A regular person flipping a coin will have random output, but someone like the magicians Penn & Teller - they can control their flip and the outcome is NOT random.

As long as we have great cryptographic primitives, the RNG will be excellent. In theory.

This is actually really hard to judge without analyzing the source code and doing actual testing. Documentation does not match what's actually in the source (common problem). This testing was done on Linux (note: I missed the version number).

On Linux, there are two PRNGs - one that feeds /dev/random and one that feeds /dev/urandom, but both leverage the same entropy source.

Entropy sources: time/date (very low entropy), Disk IO, Interrupts, and other SW things

There are hardware RNGs - like Ivy Bridge's, which uses thermal noise. There's the Entropy Key (shot noise, from a USB generator). Some places even use lava lamps! (seriously)

Linux maintains an entropy pool; data goes in and is then fed out to the PRNGs. The pool has a maximum size, but if you don't have hardware behind it, it will never fill up.

Linux has an interface that will tell you how much entropy is in the pool. But beware - don't check it with a script! You'll invoke ASLR, etc., which will consume entropy from the pool.
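On Linux, that estimate is exposed via procfs at /proc/sys/kernel/random/entropy_avail; a minimal check (with the caveat above that launching any script itself consumes some pool entropy, so treat the number as approximate):

```python
import os

# Read the kernel's current entropy estimate (Linux only).
# Caveat: even running this script consumes some pool entropy
# (ASLR, process spawn, etc.), so don't poll it in a tight loop.
def entropy_estimate(path="/proc/sys/kernel/random/entropy_avail"):
    with open(path) as f:
        return int(f.read().strip())

if os.path.exists("/proc/sys/kernel/random/entropy_avail"):
    print(entropy_estimate(), "bits (estimated) in the kernel's input pool")
```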

The /dev/random and /dev/urandom pools are generally close to zero. Entropy is fed in from the main pool when necessary.

Unloaded VMs are only generating 2 bits of entropy per second. Bare metal is a bit faster. The more loaded the machine is, the more entropy you'll get.

For example, if you ping the machine every .001s, it will generate entropy at 13.92bits/s, as compared to 2.4 bits/s on an unloaded system.

RDRAND is normally unavailable in a VM; however, even on bare metal, the kernel entropy estimation was not helped by RDRAND. It turns out that, due to recent concerns regarding RDRAND, even though RDRAND can be used to reseed the entropy pool, the entropy estimation is NOT increased by the kernel... on purpose.

VMs do get starved of entropy, but even bare metal systems aren't great.

Android devices did better than Linux boxes observed.

Oddly, the accelerometer on Androids is *not* used to feed the entropy pool, although it would be a good source of entropy.

/dev/random provides output that is roughly 1:1 bits of entropy to bits of random number; access depletes the kernel entropy estimation and will block if the pool is depleted.

/dev/urandom works differently: if you ask for 64 bits, it tries to get 128, and reduces the estimation by 128 bits. It will not reduce the entropy estimation if the pool is below 192 bits. Each read produces a hash which is immediately fed back into the pool.

get_random_bytes() is just a kernel-side wrapper to access the same pool as /dev/urandom.

Here are some things that are not random: C's rand() (a linear congruential generator) - if you know two consecutive outputs, you know ALL the outputs.
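A quick demonstration of why an LCG is predictable (glibc-style constants, with the full state emitted for simplicity; real rand() truncates its output, which is why an attacker may need a couple of consecutive outputs rather than one):

```python
# Minimal LCG with glibc-style constants. Here the full state is the
# output, so ONE observed value lets you reproduce the whole stream.
M, A, C = 2**31, 1103515245, 12345

def lcg(state):
    return (A * state + C) % M

observed = lcg(lcg(42))       # attacker sees a single output...
predicted = lcg(observed)     # ...and can now compute every later one
assert predicted == lcg(lcg(lcg(42)))
```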

Python's random.py implements a Mersenne Twister. Better than rand(), but still not suitable for crypto operations - you need about 650 outputs to break the algorithm. So, better, but not great.
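In practice this means Python's random module is for simulations only; for anything security-sensitive, the standard library's secrets module (backed by the OS CSPRNG) is the right tool:

```python
import random
import secrets

# Mersenne Twister: fully deterministic given the seed/state. Fine
# for simulations, unusable for keys, tokens, or nonces.
rng_a = random.Random(1234)
rng_b = random.Random(1234)
assert [rng_a.getrandbits(32) for _ in range(3)] == \
       [rng_b.getrandbits(32) for _ in range(3)]  # same seed, same stream

# secrets draws from the OS CSPRNG (/dev/urandom on Linux).
token = secrets.token_hex(16)
print(token)  # 32 hex characters, suitable for session keys
```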

When Linux spawns processes, ASLR, KCMP, and other aspects of fork/copy_process() consume up to 256 bits of entropy each time you start a process.

This is not consistent, though, so more research is needed.
 
OpenSSL maintains its OWN PRNG that is seeded by data from the kernel. This PRNG is pulled from for all cryptographic operations, including: generating long-term keys, generating ephemeral and session keys, and generating nonces.

OpenSSL only seeds its internal PRNG once per runtime. That's no problem for one-shot operations like generating RSA 1024-bit keys. It's a different situation for long-running daemons that link to OpenSSL... like webservers. An Apache PFS connection requires 300-800 bits of random numbers. If your application does not restart, you will be pulling this data from a source that is never reseeded.

OpenSSL pulls its seed from /dev/urandom by default (and stirs in other data that is basically knowable). OpenSSL does NOT check the quality of the entropy when it polls /dev/urandom.

mod_ssl's attempt to generate entropy is not very sophisticated. On every request, it stirs in: date/time (4 bytes), the PID, and 256 bytes off the stack. Date/time is low resolution and guessable, the PID is a limited search space, and it always looks at the same place on the stack.
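A back-of-the-envelope estimate of why that seed material is brute-forceable (the one-day window and pid_max value here are my assumptions, and the stack bytes are excluded since mod_ssl always reads the same location):

```python
# Rough search-space size for a (timestamp, PID) "seed" like the
# mod_ssl stirring described above.
seconds_window = 60 * 60 * 24   # attacker knows the day the daemon started
max_pid = 2 ** 15               # classic Linux default pid_max (32768)
search_space = seconds_window * max_pid

print("{:,} candidate (time, pid) pairs".format(search_space))  # ~2.8 billion
```

A few billion candidates is well within brute-force range for an offline attacker, which is the point the speakers were making.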

mod_SSL is trying really hard, but not really accomplishing much.

How much entropy goes into each random byte?  It depends...

The researchers tested various common actions in OpenSSL. Different operations required different amounts of entropy. When creating keys, you need to find big numbers - there's a lot of testing that goes on to find a prime.

Attacks on PRNGs come under three umbrellas: control/knowledge of "enough" entropy sources (like RDRAND), knowledge of the internal state of the PRNG, and analysis of the PRNG traffic.

By default, the Linux kernel pulls from a variety of sources to create entropy pool, so difficult to control them all. Knowledge of the state of the PRNG is very complex, but not impossible to understand.

The caveat is based on PRNGs being seeded correctly - analysis is showing this is not the case.  So, you can follow NIST's guidance on correctness, and still get this wrong.

The researchers created a WES Entropy Client as a solution to the wild west of entropy generation and distribution. Initial release is for OpenSSL.  Client allows users to select sources of entropy, how to use each source, which PRNG to use, etc.

Currently available at http://whitewoodencryption.com/

Client is under active development, looking for feedback.


BHUSA15: Bring Back the Honey Pots

Haroon Meer is the founder of Thinkst, and Marco Slaviero is the lead researcher at Thinkst.

Honey pots are not a new concept - there are many previous talks on this. This is basic deception in warfare, another old concept. Check out: Deception for the Cyber Defender: To Err is Human; to Deceive, Divine.

Honey pots really got started in 1989 and 1991. Bill Cheswick wrote about effectively tracking down an attacker who had broken into his network - really one of the first deep-dive documents on this. Next was The Cuckoo's Egg by Clifford Stoll (wait, also 1989?), which hit on the themes of vulnerability disclosure ethics and what the NSA is up to. Mr. Stoll also talked about the concept of honey pots.

In 2000, Lance Spitzner launched the Honeynet Project, where we all gained valuable information from the "Know Your Enemy" series.

Think about big recent attacks like at Target, where the hackers lurked for months, before actively attacking. How could that be, if we've had the concepts of honey pots for years to help folks discover when they were being attacked?

Looking at the traffic on the Honey Pot mailing list - very active in 2003, nearly dying off starting in 2007. Honey pots are just not sexy - how do you demo this? "Um, it only makes noise when there's a problem, so it mostly does... nothing." It's easier to sell other technology.

Honey pots have been traditionally pitched badly. They are overrepresented in academic work and don't seem like an industry solution.

Studying the attack after it happened doesn't seem interesting or relevant. Honey pots were looking for what was happening, but not focused on finding new attacks.

We need these, though. We can't wait to find out that our network has been exploited when the press contacts us. Verizon noted that 95% (?) of companies only find out about attacks when a 3rd party tells them. That's simply not acceptable.

As a defender, you MUST defend ALL the time. Attackers can come and go.

There are a lot of arguments against honey pots:

Isn't this just an arms race? No, an arms race is like what we saw between the US and USSR, not what we're seeing today between the US and North Korea. You have to be at the table, making the attacker work for it.

Will honey pots just introduce new risk to our organization? No, you can run python on a hardened server, support only minimal protocols. If you get even just one alert, you're better off than you were yesterday.
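The core "honeypot as sensor" idea really is tiny; here's a toy single-port sketch (OpenCanary does this properly across many protocols, so this is just to illustrate the signal-quality argument, not something to deploy):

```python
import socket

def listen_once(host="0.0.0.0", port=2222):
    """Toy single-port honeypot sensor: nothing legitimate should ever
    connect here, so ANY connection is a high-quality alert."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, addr = srv.accept()   # blocks until someone pokes the port
    conn.close()
    srv.close()
    return "ALERT: connection attempt from {}:{}".format(addr[0], addr[1])

# Usage sketch: run listen_once() in a loop and forward the returned
# string to your alerting pipeline (email, syslog, ...).
```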

And, really, come on - we know you have an NT4 server floating around still on your network.  You've already got the risk there, but this is something that you can manage.

"These are painful to deploy! I already have to manage so many things!"  The speakers have solved this with Open Canary (https://canary.tools/) which can deploy in 3 minutes.

The speakers introduced their Open Canary project:
  • Mixed (low + high + ?) interaction honeypot
  • written in python
  • produces high quality signals
  • it's a sensor
  • trivial to deploy and update
OpenCanary can be configured to send you lots of alerts or just a few - you can control the noise level.

It watches various protocols - login attempts, NTP, SIP, and Samba.

As the name implies, the code is open source. You can configure and deploy multiple feeds across the network.

You do have to worry about discoverability. You want to make sure they are referenced (like in naming service) and also deploy multiple honey pots so they are more likely to be found.


Of course, there is a problem that hackers might be able to fingerprint the honey pot. The speakers think this is misguided effort. There are ways to detect when honey pot software is running on a system - look for how the system is different than it should be. Is it running a strange service or kernel module? But we need to draw the distinction: we should not confuse what methods are successful in a lab versus what works in the real world.

Canary tokens are not new concepts - Spafford & Kim (1994) and Spitzner (2003). Map makers do this too, putting fake cities or points of interest on maps so you can tell when someone has copied them.

Canary tokens are simple unique tags that can be embedded in a wide number of places, like in a DNS channel.

You can learn more at canarytokens.com

How can tokens help us spot attackers on the network? You can watch a particular README file, and when it gets read - that will trigger the canary token that will send the alert.
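One low-tech way to implement that README trigger is to watch the bait file's access time (a sketch only; it assumes the filesystem records atime, which relatime mounts may delay, and send_alert is a hypothetical function):

```python
import os

def check_canary(path, last_atime):
    """Return (triggered, new_atime): a changed access time on the bait
    file means someone read it. Assumes atime updates are enabled
    (relatime mounts may delay them), so treat this as a sketch."""
    atime = os.stat(path).st_atime
    return (atime != last_atime), atime

# Usage sketch (send_alert is hypothetical):
# triggered, last = check_canary("/srv/share/README_passwords.txt", last)
# if triggered:
#     send_alert("canary file was read!")
```

The hosted canarytokens.com approach sidesteps the atime caveat by triggering on a network fetch (DNS lookup or URL) embedded in the bait instead.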

You can even deploy a canary token into databases! You can tell if someone is querying on a table or a view. Same with PDF files.

Interesting use of bing ads, etc.  Cool talk!

(sorry if these notes are spotty, the speakers flashed through their slides REALLY fast, it was hard to catch everything).

BHUSA15: The Lifecycle of a Revolution

Jennifer Granick, Director of Civil Liberties at the Stanford Center for Internet and Society.

Jennifer and Jeff Moss (aka Dark Tangent) met at DefCon III in 1995 - they immediately connected, and she has been the go-to lawyer for hackers ever since.


We’re seeing an internet that is no longer dominated by the US. This is important, as these other governments that don’t have a bill of rights will get in on making rules to regulate our Internet. Where will we be in 20 years? Will you know who is making the decisions? Computers will be deciding if you get a loan, where your car drives, etc. There will be mistakes, but as long as they are on the edge cases, that’s okay.


Technology was supposed to help us overturn oppressive regimes, but instead we’re seeing the opposite happen. The repressors are centralizing security, creating chokepoints where regulation can happen.  The backdoors and restrictions will be done by the elites and governments with local interest – not global interest.
Who is responsible for deciding who gets security, who gets access to what things on the Internet?

She was inspired by Steven Levy's book, Hackers, which espoused freedom of information and decentralization of information. This empowered people to make decisions on what was right and wrong. The global network would allow us to communicate with anyone, anywhere, any time.

Jennifer attended New College - where students were responsible for their own education. They wanted information to be free, and they wanted to use their freedom of thought to change the world.

She started her career as a lawyer with a deep love of technology, and was upset seeing hackers getting prosecuted for things she considered “pretty neat tricks”. She met a prisoner who was at risk of losing his “time credit” after it was discovered he was hacking the pay phone to get himself and his friends free phone calls. She wanted this to stop. That was in 1995, and she started paying more attention to what was happening.

Meet “Cyberporn” – A Time Magazine expose about what you could find on the Internet. Congress wanted this to stop (nothing gets government more excited than porn) – and they wanted to create an online decency act.  Of course, doing so required assuming that there were no first amendment rights available on the Internet.

John Perry Barlow, founder of the EFF and lyricist for the Grateful Dead wrote:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You’re not welcome among us. You have no sovereignty where we gather.

The Supreme Court, fortunately, struck down most of the provisions of the CDA, except the provision specifying that the provider does not have to be the police.

The Internet was supposed to make us more free – but that’s not what’s happening anymore out there.

Race, gender and class discrimination seems resilient to change on the Internet. While Jennifer has always felt welcome, there is too much evidence to ignore. Look at our big tech companies, which have 17, 15, or 10% female engineers.

How  is that equality?

There are talented people on all parts of the autism spectrum, with different college (or no college) backgrounds, and at any age – from the very young to the elderly.  Given that, could we lead in equality?

What about Freedom to Tinker?

For example, Mike Lynn was coming to present on new vulnerabilities in Cisco routers at Black Hat. His employer, ISS (Internet Security Systems), and Cisco decided he should not give the talk, and pressured the Black Hat conference to remove the pages referencing Mike’s talk from the program and redo the CD-ROMs with the conference proceedings. Jennifer was his lawyer. Mike gave the talk anyway, but the first thing he did in his talk was resign from ISS.
 
What looks more like censorship than ripping out pages out of a book?

Jennifer also represented Aaron Swartz, who ended up killing himself while being prosecuted.

How do we stop this?

Congress has to stop the “tough on cybercrime” hand waving and actually do something about cyber security. They have created big prison sentences for violators, but when another country like China is behind the attack – nothing is done. China does not go to jail. It’s the little guys that are really hurt by the DMCA and CFAA. We need to get rid of them.

Already now, algorithms are making decisions about our lives, our money, our jobs – and we do not understand these algorithms. How do we take advantage of AI and machine learning without ending up completely out of control?

Who is responsible when software fails?  For the most part, nobody. People are sick and tired of this.

Think about this; what happens when your self driving car crashes?  When your internet connected toaster catches on fire? When hackers can control your car remotely using your OnStar device?

We will end up with software liability. Once we are suing Tesla and GM for their software issues, it will be a small step to start suing software companies.

Jennifer recommends reading the Master Switch, by Tim Wu, which studies the cycle of major technologies. History shows a typical progression of information technologies from somebody’s hobby to somebody’s industry; from jury-rigged contraption to slick production marvel; from a freely accessible channel to one strictly controlled by a single corporation or cartel – from open to closed system.

If we don’t do things differently, the Internet will end up like TV, strongly regulated.

Sadly, there are people on the Internet that suck – 4chan, Nazis, jihadists.  Freedom of speech allows those – if you try to regulate them, you will end up impacting everyone. We must tread carefully.

Jennifer asks: who has ever had a blog? Lots of hands go up. Who still blogs? A few hands go up. She noted, “I used to blog, I don’t anymore, I use the centralized service – Facebook”. Nobody, well, except people in this room, still run their own mail server – they all use gmail.com.  We are giving up the control, we are doing this to ourselves.

When we talk about the “cloud” - is it all happy and free? No, it is actually controlled by a small handful of companies, subject to government regulations (US or otherwise). This creates a centralized point for control and eavesdropping.

The law is not protecting us here – in fact, quite the opposite. For example, we have laws that allow surveillance on foreigners, but loopholes in those laws are being used to spy on US Citizens. Laws are passing to give corporations protection from lawsuits if they turn over information to the US Government.

There is not a lot of case law here, oddly, considering the Internet has been around for awhile.

When there is no warrant requirement, searches can be massive and arbitrary.

The myth is that security and privacy are opposites. Not true! Think about how the putting a lock on a cockpit door provides security, but doesn’t mean privacy is exposed. A gay man in another country needs to keep that information private in order to be secure in his own health and happiness.

The current situation is leading to security haves and have-nots. It’s increasingly about power – and once that happens, the people who lose will be the minorities (religious, ethnic, etc) – those who need security most! In the US, we have the Bill of Rights, so we don’t care enough about this. But other countries do not have those protections. We need to be the leader to protect the world, but we’re not doing that.

We’re already scanning for terrorist threats, and it’s broadening now into monitoring people that seem to be becoming radicalized. What does that mean? There is no agreement, even from the FBI and psychologists, on what it means to be “becoming radicalized”.  So, now more people are getting observed.

People don’t even realize what the Internet is. In a national survey, more people said that they use Facebook than reported using the Internet. Of course, Facebook is on the Internet – but it is NOT the Internet. So who is correct there? Facebook decides what to show you based on some algorithm; the freedom is not there... The further this goes, the less we will know about the world.

We need to start thinking about decentralizing technology again. We need end to end encryption. We need to be afraid of the right things. People are terrible at assessing risk. People are more afraid of sharks than of cows, but EIGHT times more people die at the hoofs of cows every year than are killed by sharks. (note: WHAT?!?! Now I’m more afraid of cows, I knew they were after me!)

We can use law to provide safeguards where technology doesn’t, but we don’t. Congress is simply not protecting our privacy. We need to push them.

We need to get ready to smash it apart and make something new and better.