
Thursday, August 6, 2020

BH20: The Dark Side of the Cloud - How a Lack of EMR Security Controls Helped Amplify the Opioid Crisis

Mitchell Parker, CISO, Indiana University Health

The opioid crisis has caused mass addiction and broken up families and support systems. Why is this of interest to Black Hat? A major root cause of the crisis was the underhanded manipulation of an Electronic Medical Record (EMR) system.

Practice Fusion, now a division of Allscripts, had advertisements in its EMR, which seemed like a violation of the Stark Act. Many smaller practices used it because they couldn't afford better systems; the company had over 100K customers at its peak.

Many hospitals and small practices are losing money or barely staying afloat - so they used this system, as it was 'free'.

EMRs are digital versions of paper records. They can run on mobile, desktop, browser, or a dedicated application - often with remote access, as physicians are overworked too and would rather complete their charting from home.

EMRs need to be certified to be eligible for federal reimbursement, and are meant to be kept up to date. Lots of HIPAA violations are caught in the big EMR companies, so it's hard to say what's happening in the smaller providers. 

These systems tend to lack two-factor authentication for system access, which means an attacker can even gain system administrator access with nothing but a password. The physicians are overworked and focused on spending time with patients, not on IT and compliance.

Most of the revenue for Practice Fusion came from advertisements, even though this was a violation of the Anti-Kickback Statute. They additionally marketed themselves to drug manufacturers as willing to customize clinical decision support alerts - Pharma Co. X paid $1M to add custom alerts recommending extended-release opioids. They were able to prove that doctors who saw this alert prescribed at a higher rate than those who did not.

Death and opioid abuse are not new - they were impacting parts of America as far back as the late 1990s.

People died and became drug addicts because of a marketing department.

To help stem this type of abuse, there are proposed changes to the Department of Health and Human Services regulations. Additionally, Mitchell would like to see diversion monitoring software and privacy monitoring.

He also recommends that doctors use the larger EMR providers - those have already been set up to limit opioid prescriptions.

Going forward, EMRs should have two-factor authentication, limited access, and configuration change reporting.

We tell our doctors everything about our lives, so this information must be protected. When that trust is broken, it is tragic.


BH20: A Framework for Evaluating and Patching the Human Factor in Cybersecurity

Ron Bitton, Principal Research Manager, Cyber Security Research Centre at Ben Gurion University

Social engineering attacks go beyond just phishing and are no longer limited to PCs, but most solutions don't distinguish between different types of attacks or platforms.

The existing methods are based on self-reported measures, attack simulations, and training (with some mitigation).

But the self-reported method is biased and resource intensive, so it cannot be done continuously. Attack simulations are typically limited to classic phishing and cannot be used to evaluate users' vulnerability to other attack vectors. Training workshops are great, but unlikely to reflect users' normal behaviour - they know they are in training. Additionally, employees are not big fans of forced training and may not be engaged.

Most technological mitigations are limited to specific environments (like the office or a specific browser).

The researchers have created a new toolkit: SafeMind. They looked into specific areas of awareness models and worked with other security researchers to rate the importance of the criteria, which helped them narrow down the most critical areas to measure.

They created an endpoint solution, an attack simulator, and a network solution. The endpoint solution looks at many things on the endpoint - sensors on social media activity, security settings, certificate management - to create a profile of the user. Using this profile, they could target attack simulations for that user, as sketched below.
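As a rough illustration of the profiling idea (my sketch, with hypothetical sensor areas and scores - not the actual SafeMind implementation):

```python
# Rough sketch (my illustration, not the actual SafeMind code): aggregate
# per-sensor risk signals into a user profile, then pick the simulated
# attack that targets the user's weakest area. All names are hypothetical.
def pick_simulation(profile):
    # profile: dict of area -> risk score in [0, 1], 1 = most vulnerable
    weakest_area = max(profile, key=profile.get)
    simulations = {
        "social_media": "spear-phishing message via a social network",
        "security_settings": "fake OS update prompt",
        "certificates": "site presenting an invalid TLS certificate",
    }
    return simulations[weakest_area]

profile = {"social_media": 0.8, "security_settings": 0.3, "certificates": 0.5}
print(pick_simulation(profile))  # spear-phishing message via a social network
```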

Over 7 weeks they experimented on 162 subjects. They could see that users with lower security knowledge were less successful at mitigating some attacks. Users' self-reported behaviour may differ significantly from their actual behaviour, whereas this research could predict actual behaviour more accurately.

BH20: Keynote: Hacking Public Opinion

Renée DiResta, Research Manager, Stanford Internet Observatory

Vocab background: Misinformation - the sharer thinks the information is true and is sharing to try to help people. Disinformation - the sharer knows the information is false. Propaganda is information created to make you feel and act a certain way (not always false). Finally, there's the agent of influence - someone acting on behalf of someone else (a nation state, etc.).

Dissemination is an important part of sharing information. In the past, someone would have to physically hand out flyers. This got easier with TV and radio, but was still restricted. Then we got zero-cost publishing with blogging - but attracting an audience was still tricky. Now we have social media - the feeds are designed for engagement and dissemination.

Now we have a glut of material, no editors, no gatekeepers - just an algorithm that rates, ranks and disseminates.  These algorithms are gameable, and the systems are open to everyone.

We are now going beyond influencing public opinion to hacking public opinion. It's easy and cheap to create fake media companies and personas - that's how the platforms were designed.

We see distraction, persuasion, entrenchment (highlighting and exacerbating existing divisions), and then division.

Now our broadcast media feeds into social media - and it also flows in the reverse! Both of these can be easily influenced by bad actors.

Renée then walked through a few examples from China - obvious government propaganda, less obvious material, and then "news" coming from a fabricated news company on Twitter to make China look good. In addition, many Chinese news agencies have Facebook pages - even though Facebook is banned in China. Why? To influence China's image in countries that do have access to Facebook; this was used recently to discredit Hong Kong protestors.

She also did a great breakdown of the creation of Twitter bots and figuring out their purposes - and how effective they were (engagement, number of retweets, etc.).

Memes are properties created for social media: easily digestible and identity focused. They are often created by state actors to sow more division - on both sides of the political spectrum.

Great deep dive into the Russian interference in the 2016 election, with lots of great graphics.

Well-researched state agents will exploit divisions in our society using vulnerabilities in our information ecosystem. They will likely target voting machines again and try to infiltrate groups. But most of all, they will aim to reduce trust in US elections.

The more these images and stories are spread, the more they start to influence and impact people, though direct measurement of the impact on each individual is difficult and will be part of further research. They can see disinformation jumping from one group to another, which seems to demonstrate that people believe it and feel strongly enough to reshare.

An excellent talk - I highly recommend you catch it on YouTube when posted!





Wednesday, August 5, 2020

BH20: Hacking the Voter: Lessons from a Decade of Russian Military Operations

Nate Beach-Westmoreland, Head of Strategic Cyber Threat Intelligence, Booz Allen Hamilton

Nate has been involved in elections since his youth. For background, read Russia's military doctrine, which explains the tactics, targets, and timing of GRU operations. Long story short: they've been doing what they said they would do!

This is not a new thing - Russia has been doing this since at least the 1970s. Many of the strategies haven't changed, either. What has changed is the technology and who is doing it. In the 1980s, it was the KGB and the Propaganda Department.

In the late 1990s, Russia switched to the tactic of Information Confrontation - the continuous competition over beliefs, opinions, perceptions, and feelings to further the state's agenda. This has been adopted by the Russian military and is even documented on their website!

The Information Confrontation has two sides: informational-psychological and informational-technical capabilities.  These are used for more than just swaying an election.  Moscow's preferred candidates have rarely won, but they did succeed at undermining the winner - making them weaker, less able to oppose Russia. 

Information conflict is both offensive and defensive - it can be used to demonstrate that "fair, free and democratic" societies are neither desirable nor obtainable - so Russians should stick with the status quo.

Look at what happened in Ukraine in 2014. Attacks against the Ukrainian election started a few days in advance, trying to destroy the vote-counting system. The attackers took over officials' websites, posted fake announcements that the system had been breached, and then attacked the vote-reporting site to show a fringe candidate winning - all to delegitimize the actual election results.


Similarly, in Bulgaria, the GRU launched a DDoS against voter registrar sites, so voters could not find their polling place.

In France (2017), the GRU started phishing Macron's campaign and blasting out all sorts of falsehoods about Macron's character. Even though these were easy to debunk, they built a narrative that Macron might be a seedy character. France bans campaigning and commentary within 48 hours of an election, and the GRU released more falsehoods and private campaign documents right before that window.

Similar things happened in Montenegro in 2016.

Then in the US in 2016, similar tactics again: leaking internal campaign documents, timed for release to maximally inflame divisiveness. They also started spreading fear about election infrastructure and threats of large-scale fraud/vote rigging.

When Russia is caught, they go on a "whataboutism" campaign - "So what, our athletes were doping; your athletes have done the same thing - what about those athletes?" How can you be angry about us trying to interfere in your election, when the US does it to other countries?

As we've already seen Russia attack power grids, what would happen if they did it on an election day?  Either in the US or other nations?


BH20: We Went to Iowa and All We Got were These Felony Arrest Records

Justin Wynn, Senior Security Consultant, Coalfire Systems
Gary Demercurio, Senior Manager, Coalfire Systems

The client asked them to come on site and test physical penetration and the planting of a drone device. They were requested by the client to do the work at night/after hours. What the client later said to the press was very different. Originally it was night only, but the client changed the contract later to add social engineering during the day. It wasn't just the pentesters on the phone with the client, but also their project manager, their manager, and another pen tester.

They also received a letter of authorization that asked them to begin on Sunday (when the courthouse is closed) - contradicting the client's later claim that they only wanted testing during business hours. The pentesters were given restrictions for each of the 5 buildings, like which floors were off limits and which data centers were in scope or out of scope. This was worked out building by building. The contract was more generic; the scoping call had more detail (lesson learned: record your scoping call!).

Charges were filed against each of them independently.

They spent Sunday scoping locations; during business hours they got tours (some public/free access, some escorted).

They started Monday night at the Judicial Branch building - a State Trooper came by (as expected), said this was common practice, and asked for a business card. They did get inside, got into the IT department, and left a card on a desk there. The contact from the client sent a "can't wait to see how this was done" message, reviewed the overnight footage, and didn't say anything. Everything seemed fine to the researchers.

They started again on Tuesday night and breached 3 more buildings with no alarms. They knew the last building had an alarm, and were hoping to set it off. They arrived at 11:30 PM and did a brief walk around - they could see the sheriff's department across the street. They found an open door when they arrived - wow. They closed it, and then re-breached the door. They tried the default codes for the alarm, which didn't work - so they decided to hang out and wait for the police to arrive.

They wanted to make sure they did not scare the police, or get surprised, so they called out regularly as they were moving down to the ground floor. 

Then we got to watch the body cam footage from the first officer on scene. You can hear the police talking; they seemed fine with the researchers, who were told they were good to go.

Then the sheriff arrived... and the police officers turned off their body cams. Suddenly the sheriff said the client didn't have the authority to authorize the pen test (state vs. county property) and decided to arrest them for burglary.

Up until the sheriff arrived, everyone was very professional; then suddenly everyone's attitude changed. Suddenly, because they were penetrating with commonly available tools, they couldn't possibly be professionals (!?!?!?).

They were now being questioned about whether or not one of the testers was an actual Marine, and it took a lot of pushing to get the officers to say they were under arrest. They finally got ahold of the client to let them know they were in jail, and asked for help. "Andrew" was supposed to talk to the sheriff, but the sheriff wouldn't budge because it's a county building - "nothing" could be done.

The judge at arraignment was not pleased that they had been arrested breaking into her courthouse... They thought their client would come and protect them, but instead the judge deemed them a flight risk and set their bail at $50,000 (the same as is given for murder).

This led to jurisdictional infighting. The client removed documents from the Coalfire portal.

They want someone to be held responsible for this. The Polk County DA was not going to charge the speakers, as he was aware that the three contacts from the client were at fault, but the Polk County Sheriff was defensive of the Dallas County Sheriff and threatened the Coalfire CEO.

While things were moving forward in their favor, the Chief Justice died - and everything died with him.

Now they both have permanent felony records. They cannot get firearms.

Iowa has laws that are more concerned with liability and less with the security of their infrastructure. Because of this, all offensive security work has stopped in Iowa.

They would like to get laws passed to prevent this from happening again - if you can help, reach out!

[Q&A]

Do you still have a felony record? Yes.

Was the sheriff of Dallas County ever reprimanded? No.

BH20: Election Security: Securing America's Future

Chris Krebs, Director, Cybersecurity & Infrastructure Security Agency (CISA)

About this time in 2016, it became very clear that Russia was intent on disrupting our election in several ways, including information operations, election tampering, etc. An ad-hoc response was pulled together, as it hadn't been clear in advance that this was going to happen. The Russians did research and targeted attacks on all 50 states, but did not seem to be able to impact a vote via cyber means.

Why was it an ad-hoc response? There was no dedicated approach to election security. The security research community was aware, but there was nothing dedicated at the federal level. They pulled it together at the last minute and provided a successful defense from a cybersecurity standpoint. A playbook was then produced that others can now study.

What are the implications of what happened in 2016? It was a Sputnik-type moment - for the first time, the Russians had a way to reach out and touch us; geographic isolation was no longer in our favor. Now they could use cyber techniques to destabilize an election. It gave the US a heads up that we had a lot at stake in 2018 and 2020.

We have 3 distinct advantages now: a vibrant election security community, a better understanding of the risks, and better visibility into what is happening with elections. The federal government is here to support state and local governments in running their elections. Since 2016, CISA has pulled together an information-sharing infrastructure - sharing threats, strategy, and defense tactics - and has been providing services and tech capabilities to partners in local government. They have been working together to analyze trends and issues, helping others buy down risk with the tools and techniques that have been developed.

We have a much better understanding now than we did in 2016 of how different states and counties run elections - we are listening to them about their risks and issues. One of the best risk management techniques: paper. We are asking states to switch to systems that produce a paper record; for 2020, we may hit 92% or higher of votes with a paper trail. The paper trail is needed for auditability.

We now have much better understanding and visibility of what is happening in the election space, and have worked hard to develop trust with state and local election authorities. We've been able to provide tools, like intrusion detection, deployed across all 50 states (though not necessarily all counties).

Even with all these preparations, still more work to do - there could be more disruptions, we have Covid-19, and we need voters to be informed.

Today, in 2020, the focused mission of the NSA, the intelligence community, etc., is watching for Russia, China, and other state actors targeting our infrastructure. There is lots of scanning, but nothing at the level we saw in 2016. But we are still seeing too many ransomware attacks on hospitals and financial institutions - we do not want to see this happen to election systems, so CISA is helping with tools and techniques to protect them.

They are looking at failover mechanisms - analog backups of voter registration databases, etc. We need to make sure that voters can vote, no matter what. We also have provisional ballots as a backup.

We have Albert Sensors (IDS), but we also need endpoint detection - capabilities on individual hosts. We have to continue to improve security at all levels.

In terms of Covid - that's why he's here talking to us today. Covid will change how we do elections; CISA realized in February that it was going to change the voting process. We will, at the very least, need PPE for poll workers, sanitation procedures, etc. But it's not just about in-person voting: many states are adopting absentee and mail-in balloting. This takes time and money. States like New Jersey could not identify a budget for upgrading their machines to produce a paper audit trail, but now they are moving to more mail-in voting - so they may get the paper trail this year.

It's quite possible that we won't know on November 3 who won the election. Please be patient. 

We need informed voters - something will change in the way you vote. It may be a new polling location: schools and aged-care homes may not be available. Have a plan for how you will vote. Take advantage of early voting, absentee, or mail-in. Be a part of the solution.

[Q&A - Live Commentary section]

Under the Constitution, states determine the time, place, and manner of an election. Congress has a role here as well, but local and state governments have to carry the bulk of the burden. CISA and the intelligence community are here to help and support.

A couple of developments since this was recorded: CISA has set up vulnerability disclosure guidance, the University of Chicago is providing free support to state and local election boards, and an endpoint detection system pilot is launching in 29 states.

We are trying to help with debunking/prebunking of disinformation, in a balanced way. 

Last fall they pushed out a state and local disinformation kit that jurisdictions can tailor to their local needs, and also leveraged it for Covid-related disinformation. They launched the War on Pineapple campaign - benign and easy to understand.

They are working to help the states adjust, studying the equipment and risk controls, and adjusting their approach to do more remote pen testing.

Unfortunately for us, he can't discuss confidential information ;-)

Be prepared, participate - we need 250K poll workers - and be patient!

BH20: Keynote: Stress-Testing Democracy: Election Integrity During a Global Pandemic

Great intro from Dark Tangent (as per usual) - there are attendees from 117 different countries! Lots of great scholarships this year as well.

It's strange attending from home - no laser show!

Keynote Speaker: Matt Blaze, Georgetown University

Early elections in the US used little technology - they were literally just people in a room raising hands, but that doesn't scale and it is also not secret. The earliest technology was simple paper ballots that were hand counted. As long as the ballot box wasn't tampered with, you could have high confidence your ballot was counted. It was also easy to observe/audit.

We moved on to machine-counted ballots or direct-recording voting machines, and finally computers. The technology doesn't matter as much as whether the voters trust the technology and the outcome.

It can be hard to get right, due to some conflicting requirements: secrecy and transparency. How do you audit and make sure everyone's vote was counted the way they intended, without disclosing how they voted?

It is impossible to re-do an election. Elections need to be certified by a certain date and you cannot really run them again - there's not enough time before the transition of power should occur.

The federal government doesn't have as much oversight over each state for a federal election as you might think - they are mostly run by counties, with guidance and standards set federally.  There is no place to change everything nationwide. 

The ballots can (and usually do) vary even within a county - think about school board, city council, local ordinances, etc. In 2016, there were 178,217 distinct ballot styles in the US. Sixty percent of eligible voters participated in the election; 17% of ballots were cast in person during early voting and 24% by mail, but the majority were still cast in person.

In the US, we spend more money campaigning than on running the election itself.

Traditional threats to voting: vote selling, ballot stuffing, and miscounting. Foreign state adversaries are also a threat, but they may not care about who wins - just that the process is disrupted, casting doubt on the legitimacy of the election.

Taking a walk down memory lane: hanging chads! Florida was using a punch-card system. (Aside: we used the same system in Santa Clara County when I moved here, except we didn't have the "assistance" of the physical ballot - I had to bring in my sample ballot so I'd know which holes to punch.) In that case, since the Supreme Court stopped the count, we ended up with a certified election that nobody (but the winner) was satisfied with - people did not feel their votes were counted.

This debacle did lead to HAVA (Help America Vote Act), which mandated that everyone change their voting equipment and provided funding to purchase it. Unfortunately, improved tech wasn't widely available. Most common were DRE (Direct-Recording Electronic) voting machines - fully computerized, which is different from the older model of using offline computers to tally the votes. These new machines are networked, and much more reliant on software.

As you are aware, software is hard to secure. There are no general techniques to determine whether it is correct and secure. Software is designed to be easily changed - maybe too easily, if you're unauthorized and still able to make a change. This is a problem for these voting machines.

E-voting, in practice, has a huge attack surface: firmware, software, networking protocols, USB drives floating around, non-technical poll workers, accidental deletion of records, viruses....

Every current system that is out there now is terrible in at least one way, if not several. There is an exemption in the DMCA for security research on voting machines. This makes the DefCon Voting Village a lot of fun (and it will be available this year as well).

Some people suggest hand-counting everything - but there are just too many items per ballot. The amount of work to do a complete hand count is infeasible.

The other extreme: the blockchain! But it makes us much more dependent on the software and the client (and what the client puts in the blockchain). It does address tamper detection, but not prevention or recovery. Also, civil elections aren't a decentralized consensus process.

There have been two important breakthroughs. First, from Ron Rivest, Software Independence: a voting system is software independent if an undetected change or error in its software cannot cause an undetectable change or error in an election outcome... but that doesn't say how to accomplish it. Then Stark came up with Risk-Limiting Audits: a statistical method to sample a subset of voting machines for a post-election hand audit to ensure they reported correct results; if that fails, hand count the rest.
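A toy sketch of the audit idea (my illustration; Stark's actual method derives the sample size statistically from the reported margin and a chosen risk limit):

```python
# Toy sketch (my illustration, not Stark's actual statistical method):
# hand-count a random sample of machines, escalate on any discrepancy.
import random

def audit(reported, hand_count, sample_size):
    """reported: dict machine_id -> reported totals.
    hand_count: function machine_id -> hand-counted totals."""
    for machine_id in random.sample(list(reported), sample_size):
        if hand_count(machine_id) != reported[machine_id]:
            return "discrepancy: escalate to a full hand count"
    return "sample consistent with the reported outcome"

reported = {"m1": 500, "m2": 512, "m3": 498}
print(audit(reported, hand_count=lambda m: reported[m], sample_size=2))
```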

You can learn more in the paper "Securing the Vote" from the National Academies.

It seemed like 2020 was going to go well... until... March. Who would've expected a global pandemic?

When we think about voter disruption: you might not be able to get to the polling place due to travel or disability, in which case you can get an absentee ballot (including "no excuse" ballots) - but, with the exception of states like Oregon, these are a small percentage of votes.

If there are local or regional emergencies, like an earthquake or hurricane, that may prevent polling places from opening.  There was an election in NYC on September 11, 2001 - it was definitely disrupted and then highly contested. 

Postponing an election is a very disruptive thing - we would have to figure out what that means for the US. Who becomes president while we wait for the election? Are there other options?

In an emergency, people may not be able to vote in their normal way: there may not be enough poll workers, they may be in the hospital, recently moved, etc. We are seeing increased pressure on the counties for this, in a time of decreased funding.

Matt then did a great walkthrough of vote-by-mail: how signatures are verified and how ballots are processed. How do we scale this up? Exception handling can be very labor intensive, and there is high pressure on the chain of custody. It's hard to know how many people will ask for absentee ballots - counties may not have enough, and they can't just copy ballots - so there is a necessary lead time.

How can you help? Volunteer as a poll worker, an election judge - wherever your county needs assistance with this election.

Tuesday, October 9, 2018

BH18: Why so Spurious? How a Highly Error-Prone x86/x64 CPU "Feature" can be Abused to Achieve Local Privilege Escalation on Many Operating Systems

Nemanja Mulasmajic and Nicolas Peterson are Anti-Cheat Engineers at Riot Games.

This is about a hardware feature available in Intel and AMD chips. The "feature" can be abused to achieve local privilege escalation.

CVE-2018-8897 is a local privilege escalation - read and write kernel memory from usermode, and execute usermode code with kernel privileges. It affected Windows, Linux, macOS, FreeBSD, and some Xen configurations.

To fully understand this, you'll need some good knowledge of assembly and privilege models. In the standard model, rings 1 and 2 are essentially never used - just Ring 3 (least privileged) and Ring 0 (most privileged) (a simplified view).

Hardware breakpoints cannot typically be set from userland, though there are often ways to do it via syscalls. When an interrupt fires, it transfers execution to an interrupt handler. Lookup is based on the interrupt descriptor table (IDT), which is registered by the OS.

Segmentation is a vestigial part of the x86 architecture now that everything leverages paging, but you can still set arbitrary base addresses. The low 2 bits of a selector describe whether you're in kernel or user mode. Depending on the mode of execution, the GS base means different things (it holds data structures relevant to that mode). If we're coming from user mode, we need to call SWAPGS to update it to the kernel-mode equivalent.

MOV SS and POP SS force the processor to disable external interrupts, NMIs and pending debug exceptions until the boundary of the instruction following the SS load was reached. The intended purpose was to prevent an interrupt from firing immediately after loading SS but before loading a stack pointer.

It was discovered while building a VM detection mechanism, as VMs were being used to attack Anti-Cheat. They thought: what if a VMEXIT occurs during a "blocking" period? Let's follow the CPUID... They started thinking about what would happen if they fired interrupts at unexpected times.
So, what happens? Why did his machine crash? Before KiBreakpointTrap executes its first instruction, the pending #DB (which was suppressed by MOV SS) fires, and execution redirects to the #DB handler, which then sends execution back to where it *thought* it should go - the kernel (though it had actually come from user mode).

Code can be found at github.com/nmulasmajic; if you aren't patched, the system will crash. They showed a demo of 2 lines of assembly code putting a VM into a deadlock.

They can avoid SWAPGS, since Windows thinks they are coming from kernel mode. WRGSBASE writes to the GS base address, so use that!

They fired a #DB exception at an unexpected location, and the kernel became confused. The handler thinks they are privileged, and now they control GSBASE. They just need to find instructions to capitalize on this...

They erroneously assumed there was no encoding for MOV SS, [RAX] - only an immediate form, which doesn't dereference memory. But POP SS does dereference stack memory. BUT... POP SS is only valid in a 32-bit compatibility code segment, and on Intel chips SYSCALL cannot be used in compatibility mode. So they focused on using INT # only.

With the goal of writing memory, they found that if they caused a page fault (KiPageFault) from kernel mode, they could call KeBugCheckEx. This function dereferences GSBASE memory, which is under their control...

It clobbers surrounding memory, so they had to make one CPU "stuck" to deal with writing to the target location. They chose CPU1, since CPU0 had to service other incoming interrupts from the APIC. CPU1 endlessly page faults and goes to the double-fault handler when it runs out of stack space.

The goal was to load an unsigned driver. CPU0 does the driver loading. They attempted to send TLB shootdowns, forcing CPU0 to wait on the other CPUs by checking the PacketBarrier variable in its _KPCR. But CPU1 is in a dead spin and will never respond. "Luckily", there was a pointer leak in the _KPCR for any CPU, accessible from usermode. (The exploit does require a minimum of 2 CPUs.)

It is complicated, and it took the researchers more than a month to make it work. So they looked into the syscall handler, KiSystemCall64, which is registered in the IA32_LSTAR MSR. SYSCALL, unlike INT #, will not immediately swap to the kernel GS - which actually made things easier. (SYSCALL functions similarly to INT 3.)

Another cool demo :-)

A lot of this was patched in May. MS was very quick to respond, and most OSes should be patched by now. You can’t abuse SYSCALL anymore.

Lessons learned – want to make money on bug bounty? You need a cool name and a good graphic for your vuln (pay a designer!), and don’t forget a good soundtrack!

BH18: How I Learned to Stop Worrying and Love the SBOM

Allan Friedman  | Director of Cybersecurity, NTIA / US Department of Commerce

Vendors need to understand what they are shipping to the customer and the risks in what is going out the door. You cannot defend what you don't know. Think about the ingredients list on a box - if you know you have an allergy, you can simply check the ingredients and make a decision. Why should the software/hardware we ship be any different?

There had been a bill before Congress requesting that there always be an SBOM (Software Bill of Materials) for anything the US Government buys - so they know what they are getting and how to take care of it. The bill was DOA, but things are changing...

The healthcare sector has started getting behind this, and now people at the FDA and in Washington are concerned about the supply chain. There should not be a health care way of doing this, an automotive way of doing this, a DoD way of doing this... there should be one way. That's where the US Department of Commerce comes in. We don't want this coming from a single sector.

Committees are the best way to do this - they are consensus based. That means it is stakeholder driven, and no single person can derail it. Think about it like "I push, but I don't steer".

We need software component transparency. We need to compile the data, share it, and use it. The committee kicked off on July 19 in DC. Some folks believe this is a solved problem, but how do we make sure the existing data is machine readable? We can't just say "use grep". Ideally it could hook into tools we are already using.
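To make the "machine readable" point concrete, here is a hypothetical sketch of a component record and the kind of automated check it enables (illustrative only; not a format the committee has adopted):

```python
# Hypothetical SBOM component record (illustrative; not an adopted format).
sbom_entry = {
    "component": "openssl",
    "version": "1.0.2k",
    "supplier": "OpenSSL Project",
    "dependencies": ["zlib 1.2.8"],
}

# With machine-readable records, tooling can flag affected products the
# moment a vulnerability is published against a listed component/version.
def affected(entry, component, vulnerable_versions):
    return (entry["component"] == component
            and entry["version"] in vulnerable_versions)

print(affected(sbom_entry, "openssl", {"1.0.2k"}))  # True
```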

First working group is tackling defining the problem. Another is working on case studies and state of practice. Others on standards and formats, healthcare proof of concept, and others.

We need more people to understand and poke at the idea of software transparency – it has real potential to improve resiliency across different sectors.

BH18: Keynote! Parisa Tabriz

Jeff Moss, founder of Black Hat, started the first session at the top of the conference, noting several countries have only one attendee here - Angola, Guadeloupe, Greece, and several others. About half of the world's countries are represented this year! Black Hat continues to offer scholarships to encourage a younger audience to attend who may not be able to afford to. Over 200 scholarships were awarded this year!

To Jeff, it feels like the adversaries have strategies, and we have tactics – that’s creating a gap. Think about address spoofing – it’s allowed and turned on on popular mobile devices by default, though most consumers don’t know what it is and why they should turn it off.

With Adobe Flash going away, the belief is that this will increase SPAM and change that landscape. We need to think about that.

Parisa Tabriz, Director of Engineering, Google.
Parisa has worked as a pen tester, an engineer, and more recently as a manager. She has often felt she was playing a game of "whack-a-mole", where the same vuln (or a trivial variation of another vuln) pops up over and over. How do we get away from this? We have to be more strategic in our defense.
Blockchain is not going to solve our security problems. (no matter what the vendors in the expo tell you…)

It is up to us to fix these issues. We can make great strides here - but we have to realize our current approach is insufficient.

We have to tackle the root cause, pick milestones to celebrate, and build out a coalition. We need to invest in bold programs - building that coalition with people outside of the security landscape.

We cannot be satisfied with just fixing vulnerabilities. We need to explore the cause and effect – what causes these issues.

Imagine a remote code execution (RCE) is found in your code – yes, fix it, but figure out why it was introduced (the 5 Whys)

Google started Project Zero with the mission "Make 0-Day Hard". Project Zero was formed in 2014 and treats Google products like 3rd-party ones. It has found thousands of vulnerabilities - but the team wants to achieve the most defensive impact from any vulnerabilities they find.

The team found that vendor response varied wildly across the industry - and it never really aligned with consumer needs. There is a power imbalance between security researchers and the big companies making the software. Project Zero set a 90-day disclosure timeline, which removed the negotiation between a researcher and the big company. A deadline-driven approach causes pain for the larger organizations that need to make big changes - but it is leading to positive change at these companies. They are rallying and making the necessary fixes internally.

One vendor improved their patch response time by as much as 40%! 98% of the issues are fixed within the 90-day disclosure period – a huge change!  Unsure what all of those changes are, but guessing it’s improved processes, creating security response teams, etc.

If you care about end user security, you need to be more open. More transparency in Project Zero has allowed for more collaboration.

We all need to increase collaboration – but this is hard with corporate legal, process and policies. It’s important that we work to change this culture.

The defenders are our unsung heroes – they don’t win awards, often are not even recognized at their office. If they do their job well, nobody notices.

We lose steam in distraction driven work environments. We have to project manage, and keep driving towards this goal.

We need to change the status quo – if you’re not upsetting anyone, then you’re not going to change the status quo.

One project Google is doing to change the world is moving people from HTTP to HTTPS on the web platform - not just Google services, but the entire world wide web. They wanted to see a web that was secure by default, not opt-in secure. The old Chrome UI didn't make it obvious to users which website was the better one - something to work on.

Browser standards come from many standards bodies, like IETF, W3C, ISO, etc – and then people build browsers on top of those using their own designs. Going to HTTPS is not as simple as flipping a switch – need to worry about getting certificates, performance, managing the security, etc.

Did not want to create warning fatigue, or to have it be inconsistently reported (that is, a site reported as insecure on Chrome, but secure on another browser).

They needed to roll out these changes gradually, with specific milestones to celebrate. It started with a TLS haiku poetry competition, which led to brainstorming. They shared ideas publicly, got feedback from all over, and built support internally at Google to drive this. They published a paper on how best to warn users, and published papers on who was and was not using HTTPS.

They started a grassroots effort to help people migrate to HTTPS, celebrating big conversions publicly and recognizing good actors. Vendors were given a deadline to transition to, with clear milestones to work against, and could move forward. They also had to work with certificate vendors to make it easier and cheaper to get certificates.

Team ate homemade HTTPS cake and pie! It is important to celebrate accomplishments, acknowledge the difficult work done. People need purpose – it will drive and unify them.

Chrome set out with an architecture that would prevent a malicious site from attacking your physical machine. But now, with lots of data out there in the cloud, cross-site data attacks have grown. Google's Chrome team started the Site Isolation project in 2012 to prevent data from moving that way.

We need to continue to invest in ambitious proactive defensive projects.

Projects can fail for a variety of reasons - management can kill the project, for example. The Site Isolation project was originally estimated to take a year, but it actually took six... a schedule delay at that level puts a bulls-eye on you. Another issue can be lack of peer support - be a good team player and don't be a jerk!

Thursday, August 9, 2018

BH18: Lowering the Bar: Deep Learning for Side Channel Analysis

Jasper van Woudenberg, Riscure

The old way of doing side channel analysis was to do leakage modeling to pull out keys from the signals. They started researching what happens if they use a neural network for the analysis.

They still need to attach the scopes and wires to the device - can't get robots to do that yet. They do several runs and look for variations in signal/power usage, finding leakages from the patterns (and divergences in the patterns).

Then we got a demo of some signal analysis - he made a mistake, and noted that that's the problem with humans: we make mistakes.

Understanding the power consumption can give you the result of X (the XOR of the input and the key); then, if we know the input, we can get the key! Still a lot of work to do.

In template analysis, you build models around various devices from power traces - then look for other devices using the same chipset, and then can start gathering input for analysis.

The researchers then looked at improving their processes with Convolutional Neural Networks (CNNs). There is the input layer (size equal to the number of samples), the convolutional layers (feature extractor + encoding), then dense layers (classifiers), and finally the output layer. Convolutional layers are able to detect features independently of their positions.
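A minimal sketch of that architecture (my illustration, assuming Keras/TensorFlow; layer sizes are made up, not the speaker's actual model):

```python
# Minimal sketch (assumes TensorFlow/Keras; sizes are illustrative).
# Input size = number of trace samples; convolutional feature extractor;
# dense classifier; output over the 256 possible key-byte values.
from tensorflow.keras import layers, models

n_samples = 5000   # points per power trace (illustrative)
n_classes = 256    # one class per candidate key-byte value

model = models.Sequential([
    layers.Input(shape=(n_samples, 1)),
    layers.Conv1D(8, kernel_size=16, activation="relu"),   # feature extraction
    layers.AveragePooling1D(pool_size=4),
    layers.Conv1D(16, kernel_size=8, activation="relu"),
    layers.AveragePooling1D(pool_size=4),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),                  # classifier
    layers.Dense(n_classes, activation="softmax"),         # output layer
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```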

There are a lot of visuals and live tracing, hard to capture here, but fascinating to watch :-)

Caveat: don't give too much input or make the network too big, or the model cannot actually learn and will not be able to classify new things (memorizing vs. learning). You need to verify this with validation recall.

Deep learning can really help with side channel analysis, and it scales well. It does require network fiddling, but it's not that hard. This automation will help make a dent in better securing embedded devices.


BH18: Legal Liability for IoT Cybersecurity Vulnerabilities

IJay Palansky, Partner, Armstrong Teasdale

IJay is not a cyber security expert, but he is a trial lawyer who handles complex commercial litigation, consumer protection,  and class actions - usually representing the defendant.

There is a difference between data breaches and IoT vulns; they aren't handled the same. There is precedent on data breaches, but not really much on IoT devices. People have been radically underestimating the cost and volume of the IoT lawsuits that are about to come. The conditions are going to be right for a wave of lawsuits.

Think about policy. The rules are changing. It is hard to predict how this will play out, so it's hard to say how IoT companies should protect themselves. IJay likes this quote from Jeff Moss: "What would make 'defense greater than offense'..?"

People are trying to get the latest and greatest gadget out, to get first-to-market advantage. Security slows this down. But if you're not thinking about securing devices up front, you are putting yourself at risk. If you get drawn into litigation or the media draws attention to an issue, you need to be able to answer to the media (or a judge) about what you did to meet basic security requirements for that type of device. Think of avoiding the liability. Judges will look for who is the most responsible.

It's estimated that there will be 20 Billion connected devices by 2020.

There are ridiculous items coming online all the time - like the water bottle that glows when you need to drink, the connected Moen shower that sets temperature, and, worst of all, the i.Con Smart Condom... oh boy.

These devices have the potential to cause harm, from privacy issues to physical harm. There can be ransomware, DDoS attacks, etc. These are reality - people are remotely hacking vehicles already.

Plaintiffs' lawyers are watching and waiting; they want to make sure they can get something out of it financially. They need to be able to prove harm and attribution (who to blame). Most importantly, the plaintiffs' lawyers don't understand this technology (and neither do the judges), or how the laws here work.

There is general agreement that the security of IoT devices is not where it should be. There will be lawsuits, and once there are some, there will be more (other attorneys will be watching).

This is not the first time that product liability or other law has had to address new technology, but the interconnectedness involved in IoT is unique. Plaintiffs need to show whose fault it was - they could name multiple defendants, who will then be so busy showing what the other defendants did wrong that they do the plaintiffs' lawyer's job for them. :-)

There has been some enforcement by regulators, like the response to the TRENDnet webcam hack in January 2012, which resulted in a settlement in 2013.

Some lawyers will be looking for opportunities to take up these cases, to help build a name and reputation.

The Jeep hack was announced in 2015, and then Chrysler recalled the vehicles. That's not where the story ends... a class action lawsuit is still moving forward (filed in 2016, but only approved yesterday to proceed). This is where things get interesting - nobody was hurt, but there was real potential for getting hurt. People thought they were buying a safe car, and they were not. What is the value of that?

There is reputation loss, there are safety issues, and the cost of litigation makes this all a problem. It's a burden on and distraction for key employees who have to be deposed, find documents, etc.

The engineers and experts get stressed about saying something that will hurt their company, or thinking that they did something wrong that hurt someone. That is a cost.

IJay then walks us through law school in 10 minutes :-)

You need to understand the legal risks and associated costs when you are making decisions about the right level of security.

Damages vary by legal claim and the particular harm. Claims can be around things like negligence, fraud or fraudulent omission, breach of warranty, and strict product liability. These are all state law claims, not federal, which means there will be variance.

Negligence means you have failed to take "reasonable care" - often established based on expert opinions. Think of the Pinto - it had design defects.

Design defects could be around hardware or software - things like how passwords are handled.

Breach of warranty is an issue as well - there are implied warranties, like the warranty of merchantability (the assumption that a product is safe and usable). And if you know you have an issue and don't tell anyone - that's fraudulent omission.

Keep in mind that state statutes are designed to be consumer friendly, with really broad definitions.

You need to minimally follow industry standards, but that may not be sufficient.

Think about security at all stages of your design; be informed and ask the right questions; be paranoid and allocate risk. Test, document the testing you did, and save that documentation as you do the work - it will help protect you. Be careful about the words you use around your products: watch what you say in your advertising and don't overstate what you do.

You should also get litigation insurance and make sure it covers IoT.

If it goes wrong - get a good lawyer who knows this area. Investigate the cause, including discussions with engineers.

A wave of IoT hack and vuln litigation is coming - you need to be thinking about this now. Understand and use sound cybersecurity design and engineering principles.

BH18: WebAssembly: A New World of Native Exploits on the Browser

Justin Engler, Technical Director, NCC Group
Tyler Lukasiewicz, Security Consultant, NCC Group

WASM (WebAssembly) allows you to take code written elsewhere and run it in a browser.

Crypto miners and archive.org alike are starting to use WebAssembly.

Browsix is a project to implement POSIX interfaces in the browser, and JsLinux runs an entire OS in the browser. eWASM is a solution for Ethereum contracts (an alternative to Solidity). (And there are a bunch of other cool things.)

Remember when... Java applets used to claim the same things (sandboxing, virtualization, code in the browser)...

WebAssembly is a relatively small set of low-level instructions that are executed by browsers. It's a stack machine: you can push and pop things off the stack (to me the code looks a lot like Lisp). We did a couple of walkthroughs of sample code - they created a table of function pointers (egads! it's like network kernel programming).

WASM in the browser can't do anything on its own (can't read memory, write to the screen, etc.). If you want it to do anything, you need to import/export memory and functionality. Memory can be shared across instances of WASM.

Emscripten will help you create .wasm binaries from C/C++ code, includes built-in C libraries, etc. It can also connect you to Java and JavaScript.

Old exploits in C work in WASM, like format strings and integer overflows. WASM has its own integer types, different from C and different from JavaScript, so you need to be careful sending integers across boundaries (overflow). Buffer overflows are an issue as well. If you try to go past your linear memory, you get a JS error - it doesn't fail gracefully; it's pretty ugly.

You can now go from a BOF (buffer overflow) to XSS. Emscripten's API allows devs to reference the DOM from C/C++. Character arrays being written to the DOM create the possibility of DOM-based XSS: an overflow can use a user-tainted value to overwrite a safe value. This type of attack likely won't be caught by any standard XSS scanner. And as JS has control of the WASM memory and tables, XSS should give us control of any running WASM.

And this even creates new exploits! We can now have a function pointer overflow. Emscripten has functions that run arbitrary code (emscripten_run_script), and you can take advantage of that as long as it's loaded. They discovered that function tables are constant - across compilations and even on different machines.

You don't necessarily need to go after XSS here - you could instead call other functions written by the developers, as long as they have the same signature as the real one.

They also showed a server-side RCE (Remote Code Execution): code in the browser starting a process on the server.

Many mitigations from C/C++ won't work in WASM. It could use things like ASLR and some library hardening. Effective mitigations include control-flow integrity and function definitions and indexing (which prevents ROP-style gadgets).

WASM did cover these in its security warning - in a buried paragraph. It should be more obvious.

If you can, avoid emscripten_run_script and friends, run the optimizer (it removes automatically included functions that might have been useful for control-flow attacks), use control-flow integrity (though it may be slower) - and you still have to fix your C bugs!

There is a whitepaper out - "Security Chasms of WASM".

BH18: AI & ML in Cyber Security - Why Algorithms are Dangerous

Raffael Marty, VP Corporate Strategy ForcePoint

We don't truly have AI, yet. Algorithms are getting smarter, but experts are more important. Understand your data and algorithms before you do anything with them. It's important to invest in experts that know security.

Raffael has been doing this (working in security) for a very long time, and then moved into big data. At Forcepoint, he's focused on studying user behavior so they can recognize when something bad is happening ("The Human Point System").

Machine learning is an algorithmic way to describe data. In the supervised case, we give the system a lot of training data; in the unsupervised case, we give the system an optimization problem to solve. "Deep learning" is a newer machine learning approach that eliminates the feature engineering step. Data mining is a set of methods to explore data automatically. And AI: "a program that doesn't simply classify or compute model parameters, but comes up with novel knowledge that a security analyst finds insightful" (we're not there yet).

Computers are now better than people at playing chess and Go, they are even getting better at designing effective drugs and for making things like Siri smarter.

Machine learning is used in security for things like detecting malware, spam detection, and finding pockets of bad IP addresses on the Internet in supervised cases - and more in unsupervised ones.

There are several examples of AI failures in the field, like the Pentagon training AI to recognize tanks (they used sunny pictures for "no tank" and cloudy pictures with tanks, so the system learned that there are no tanks in sunny weather... oops!).

Algorithms make assumptions about the data: they assume the data is clean (it often is not), they make assumptions about the distribution of the data, and they don't deal with outliers. The algorithms are too easy to use today - the process is more important than the algorithm. Algorithms also do not take domain knowledge into account - defining meaningful and representative distance functions, for example. Ports look like integers, and algorithms make bad assumptions about the "distance" between them, as the sketch below illustrates.
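A minimal sketch of the port-distance trap (my illustration, not the speaker's code):

```python
# My illustration of the port "distance" trap: a naive numeric distance
# says 80 and 443 are far apart, though both are web-service ports.
def naive_port_distance(a: int, b: int) -> float:
    return abs(a - b)              # 80 vs 443 -> 363: misleadingly "far"

WEB_PORTS = {80, 443, 8080, 8443}  # hypothetical grouping by service role

def domain_aware_port_distance(a: int, b: int) -> float:
    if a == b:
        return 0.0
    if a in WEB_PORTS and b in WEB_PORTS:
        return 0.1                 # same service family: semantically close
    return 1.0                     # otherwise: unrelated categories

print(naive_port_distance(80, 443))          # 363
print(domain_aware_port_distance(80, 443))   # 0.1
```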

There is bias in the algorithms that we are not aware of (for example, translating "he is a nurse. she is a doctor" from English to Hungarian and back again... suddenly the genders are swapped! Now she is a nurse...).

Too often assumptions are made based on a single customer's data, or learning from an infected data set, or simply missing data.  Another example is an IDS that got confused by IKE traffic and classified it as a "UDP Bomb".

There are dangers with deep learning use. Do not use it if there is not enough (or no) quality labelled data, and look out for data quirks like time zones. You need well-trained domain experts and data scientists to oversee the implementation and understand what was actually learned.
Note: there are not a lot of individuals who understand both security and data science, so make sure you build a good, strong, cohesive team.

You need to look out for adversarial input - you can add a small amount of noise to an image, for example, that a human cannot see but that tricks a computer into thinking a picture of a panda is really a gibbon.

Deep learning - is it the solution to everything? Most security problems cannot be solved with deep learning (or supervised methods in general). We looked at a network graph: we might have lots of data, but not enough information, context, or labels - the dataset is actually no good.

Can unsupervised methods save us? Can we exploit the inherent structure within the data to find anomalies and attacks? First we have to clean the data, engineer distance functions, analyze the data, etc...

In one graphic, a destination port was misclassified as a source port (80!), and one record had port 70000! While it's obvious to those of us with network knowledge that the data is messed up, it wasn't to the data scientists who looked at it. (With this network data, the data scientists found "attacks" at port 0.)

Data science might classify port 443 as an "outlier" because it's "far" from port 80 - but to those of us who know, they are not "far" from each other technically.

Different algorithms struggle with clustered data and the shape of the data. Even if you choose the "right" algorithm, you must understand its parameters.

If you get all of those things right, then you still need to interpret the data. Are the clusters good or bad? What is anomalous?

There is another approach - probabilistic inference. Look at Bayesian belief networks. The first step is to build the graph, thinking about the objective and the observable behaviors. If the data is too complicated, you may need to introduce "grouping nodes" and the dependencies between the groups. After all the right steps, you still need to get expert opinions.
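As a minimal illustration of the probabilistic-inference idea (my example, not the speaker's model), here is a single observable behavior updating the belief that a user is compromised via Bayes' rule:

```python
# Minimal Bayes-rule sketch (my illustration, not the speaker's model):
# update belief in compromise after observing one alert. Numbers are made up.
p_comp = 0.01             # prior: 1% of users compromised
p_alert_comp = 0.90       # alert fires for 90% of compromised users
p_alert_clean = 0.05      # false-positive rate on clean users

p_alert = p_alert_comp * p_comp + p_alert_clean * (1 - p_comp)
posterior = p_alert_comp * p_comp / p_alert
print(f"P(compromised | alert) = {posterior:.2f}")   # ~0.15
```

Even a 90%-accurate sensor leaves the posterior low because compromise is rare - one reason expert-built structure and priors matter.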

Make sure you start by defining your use cases, not by choosing an algorithm. ML is barely ever the solution to your problem. Use ensembles of algorithms and teach the algos to ask for input! You want them to have expert input, not make assumptions!

Remember - "History is not a predictor, but knowledge is"


BH18: Kernel Mode Threats and Practical Defenses

Joe Desimone, Gabriel Landau (Endgame)

They are looking at kernel attacks, as the kernel is a way to take over the entire machine and evade all security technology. Historically, Microsoft was vulnerable to kernel malware - not prepared for those types of attacks - but they have made improvements over the years with things like PatchGuard and Driver Signature Enforcement. PatchGuard isn't perfect - attacks get through - but MS is constantly updating it, so the attacks don't work for long.

Both of these technologies are focused on 64-bit kernels, which is the growing norm today.

Attackers are now using bootkits, so Microsoft and Intel have come up with countermeasures (Secure Boot, Trusted Boot, Intel Boot Guard, and Intel BIOS Guard).

All of those protections have changed the landscape - we don't see millions of kernel-based botnets out there anymore. But now people are signing their malware to look more legitimate and trick people into installing it.

Duqu 2.0 was a nation-state attack; the main payload used a 0-day in win32k.sys for kernel execution (CVE-2015-2360), and it spoofed process information to route malicious traffic on the internal network.

The introduction of virtualization-based security has also made systems more secure against things like Uroburos, Duqu 2.0, and DoublePulsar.

The MS kernel has evolved greatly over the last 10 years, with much-improved mitigations. But the problem is the adoption rate: there are still a lot of systems running Windows 7, which does not benefit from these new protections.

The speakers are on their org's red team, so they are always looking for new ways to attack the system. They wanted to avoid detection and signature checks - their blue team is on the lookout for user mode privilege escalation, so they wanted to be in the kernel. They looked at sample code from Winsock Kernel and found it was very effective (no beacons).

They did find a good attack, which meant they needed to improve their own defenses.

Modification of kernel memory can significantly compromise the integrity of the system, so this is a major area of concern.

Chip manufacturers need to ship hardware with ROP detection enabled, otherwise this will always be a vector of attack. Their attack worked by creating a surrogate thread, putting it to sleep, finding the location of its stack, and taking advantage of it (more details in the deck - the slides moved pretty fast). The interesting thing here is how much they could do by reusing existing code.

To protect yourself, you should very carefully monitor driver load events. Look for low-prevalence drivers and known-exploited drivers. You need hypervisor protection policies, using whitelists (which are hard to maintain), and limiting kernel drivers to WHQL-signed ones. They have made a new tool to help reduce the attack surface, available on their website today.

They wrote some code to generically detect function pointer hooks: locate the function pointers by walking relocation tables and leverage Endgame Marta. They consider it a hit if the pointer originally pointed to a +X (executable) section in the on-disk copy of the driver, no longer points into a loaded driver in memory, and points to executable memory.
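The core check is simple to sketch; here is a toy Python version of the "does this pointer still land in a loaded driver?" test (addresses invented; the real tool walks Windows kernel structures):

```python
# Flag any function pointer that no longer lands inside a
# loaded driver's image range - a sign it may have been hooked.
loaded_drivers = {
    "ntoskrnl.exe": (0xFFFFF80000000000, 0x00800000),  # (base, image size)
    "ndis.sys":     (0xFFFFF80001000000, 0x00100000),
}

def owning_driver(ptr):
    for name, (base, size) in loaded_drivers.items():
        if base <= ptr < base + size:
            return name
    return None  # outside every known driver image -> suspicious

for ptr in (0xFFFFF80000123450, 0xFFFFDEADBEEF0000):
    owner = owning_driver(ptr)
    print(hex(ptr), "->", owner or "NOT IN ANY LOADED DRIVER (possible hook)")
```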
 
ROP generates a lot of branch mispredictions, so this area needs protection as well (this can be detected by scanning drivers to identify call/return sites, configuring the LBR to record CPL0 near returns, etc.)

The talk had lots of cool demos - can't really capture it here.

Windows platform security has gotten much better, but there are still kernel threats. At a minimum, you need to be using Windows 10 with Secure Boot and HVCI to protect yourself, and require EV/WHQL-signed drivers within your organization.

Wednesday, August 8, 2018

BH18: Don't @ Me: Hunting Twitter Bots at Scale

Jordan Wright, Olabode Anise, Duo Labs

Social media is a great way to have genuine conversations online, but the sphere is getting filled with bots, spam and attackers.

Not all bots on Twitter are malicious - they could be giving us automated data on earthquakes, git updates, etc. So their research focused on finding bots and then figuring out if they were malicious.

The goal here is to build a classifier, one that could learn and adapt.

They wanted their research to be reproducible, so they used the official Twitter APIs - though by doing so, they were rate limited, and needed to be as efficient as possible. Within those limits, they managed 8.6 million account lookups per day.

Twitter's account IDs started as sequential 32-bit unsigned integers, and the researchers started with a random 5% sample. The dataset has gaps - closed accounts, etc. They noticed account IDs went up to very large numbers; the sequential IDs ran up until about 2016, when Twitter changed to "Snowflake IDs" - generated by workers, same format as other Twitter IDs (tweets, etc.).

The Snowflake ID is 63 bits, starting with a timestamp (41 bits), then a worker number (10 bits), then a sequence number (12 bits). It is very hard to guess these numbers, so they used the streaming API with a random sample of public statuses (which contain the full user object).
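As a sketch, unpacking those fields is just bit shifting (the epoch constant is Twitter's published Snowflake epoch; the example ID is arbitrary):

```python
import datetime

TWITTER_EPOCH_MS = 1288834974657  # Snowflake epoch: Nov 4, 2010

def unpack_snowflake(snowflake_id):
    """Split an ID into the fields described above:
    41-bit timestamp, 10-bit worker, 12-bit sequence."""
    sequence = snowflake_id & 0xFFF            # low 12 bits
    worker = (snowflake_id >> 12) & 0x3FF      # next 10 bits
    ts_ms = (snowflake_id >> 22) + TWITTER_EPOCH_MS
    return ts_ms, worker, sequence

# Example with a made-up ID - it decodes to a 2018 timestamp:
ts_ms, worker, seq = unpack_snowflake(1025025348031397888)
print(datetime.datetime.fromtimestamp(ts_ms / 1000, tz=datetime.timezone.utc),
      worker, seq)
```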

Now - they have a giant dataset :-)

They looked at the last 200 tweets for accounts with more than 10 tweets that declared English, and then fetched the original tweets. This data was hard to get - they could only do 1,400 requests/day.

They took the approach of starting from known bots and discovering the botnets they were attached to.

The data they have includes account attributes (how many tweets, are they followed, are they in lists, etc.), tweet content (lots of links?), and frequency of tweets.

They examined the entropy of the user name - was it fairly random? Probably a bot. Same for lots of numbers at the beginning or end. They also watched the ratio of followers to following and the number of tweets.
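A sketch of the entropy feature (handles invented; in practice this is just one signal combined with many other features):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character; higher means more random-looking."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

for handle in ["aaaaaaaaaaaa", "marybrown", "x7qk2v9zpt4m"]:
    print(f"{handle:14} {shannon_entropy(handle):.2f} bits/char")
```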

They applied heuristics to the content - like the number of hashtags in tweets, the number of URLs (could be a bot or a news agency!), and the number of users @-replied. On behavior, they looked at how long it takes to reply or retweet, and the unique set of users retweeted. Genuine users go quiet for periods (like when sleeping).

Then we got a Data Science 101 primer :-)

This is where it gets complicated and statistics come into play, with the reminder that your model is only as good as your data. For example, when they trained with the cryptocurrency bots, they found 80% of the other spam bots; when reversed, they only caught about 50% of the cryptocurrency bots.


Cryptocurrency give-away accounts are very problematic - they look legitimate, they will take your "deposit", and then you lose your money. They were hard to find, until the researchers realized that there are legitimate accounts out there with many bots following them: find those legitimate accounts, and you can find the bots. They also used like behavior to map relationships, finding mesh and hub/spoke networks connected by likes.

They also discovered verified accounts that had been taken over and then modified to look like a more active account (like Elon Musk), which adds legitimacy to the cryptocurrency spam.

Very interesting research!



BH18: There Will Be Glitches: Extracting and Analyzing Automotive Firmware Efficiently

Alyssa Milburn & Niek Timmers, Riscure.

The standard approach for breaking into embedded systems: understand the target, identify a vulnerability, exploit the vulnerability. Note - this includes the ECUs found in cars.

To understand the embedded system, you need to understand the firmware. To do so - you need to get hold of a car! A good source for cheap cars with common components: recalled Volkswagens :-)

Today's talk is targeting the instrument cluster - why? Because it has visual indicators, so you can see what is happening - it has blinking lights! :-)

Inside the instrument panel you will find the microcontroller, the EEPROM, the display, and the UART for debugging (but it's been secured). So we have just inputs and outputs we don't understand. After much analysis, they discovered most instrument panels talk UDS (ISO 14229) over the CAN bus. This covers diagnostics, data transmission (read/write), security access checks, and loads more!

The team identified the read/write memory functions, but also discovered they were well protected.

They discovered that there are voltage boundaries, and going out of bounds can stop the MCU. But... what if we go out of bounds for a very short amount of time? Will the chip keep running?

Had to get fault injection tooling - ChipWhisperer or Inspector FI - all available to the masses.

Fault injectors are great for breaking things. Once a glitch is introduced, nothing can be trusted. You can even change the executed instructions - opens a lot more doors! If you can modify instructions, you can also skip instructions!

They investigated adding a glitch to the security access check. Part of the check is a challenge, and if the expected response is received, access is granted. The team tried glitching here, but were not successful, due to a 10-minute lockout after 3 failed attempts. As they were looking for something easy... they moved on!

So, they moved on to glitching ReadMemoryByAddress - no timeout here! They were successful on several different ECUs, designed around different MCUs. Depending on the target, they could read N bytes from an arbitrary address, and were able to dump the complete firmware in a few days.

There are parameters you can tweak for this glitch - delay, duration and voltage. Lots of pretty graphs followed.
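A sketch of what such a sweep looks like (ranges invented; try_glitch() is a simulation stand-in for arming real hardware such as a ChipWhisperer and sending a ReadMemoryByAddress request):

```python
import itertools
import random

delays    = range(0, 1000, 50)     # ns after the trigger
durations = range(10, 200, 10)     # ns of glitch width
voltages  = [0.7, 0.8, 0.9]        # dip depth, as a fraction of nominal

def try_glitch(delay, duration, voltage):
    # Stand-in for the real hardware interaction; returns the target's fate.
    return random.choice(["no effect", "no effect", "reset", "success"])

hits = [(d, w, v)
        for d, w, v in itertools.product(delays, durations, voltages)
        if try_glitch(d, w, v) == "success"]
print(f"{len(hits)} successful glitches - cluster these to narrow the sweep")
```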

It's hard to just do static analysis, as there is often generated code.

So, they wrote an emulator - allowed them to hook into a real CAN network, add debug stop points, and track execution more closely.

By using taint tracking with the emulator, they were able to find the CalculateKey function.

There are new tools coming for electromagnetic fault injection - expensive right now, but getting cheaper.

ECU hardware still needs to be hardened - things like memory integrity and processing integrity. Unfortunately, these are currently designed only for safety (not security).

There should be redundancy, and the designers should be more paranoid. ECUs should not expose keys - they need to leverage HSMs (hardened cryptographic engines). He highly recommends asymmetric crypto - so the ECU only holds a public key.

Do better :-)



BH18: Blockchain Autopsies - Analyzing Ethereum Smart Contract Deaths

Jay Little, Principal Security Engineer, Trail of Bits
Trail of Bits is a cyber security research company - high-end security research and assessments.

Earlier this year he was working on a project with a friend to look into aspects of smart contracts.

Ethereum, EVM and Solidity

He asked for a show of hands of who here has bought Ethereum - lots of hands went up.

Ethereum is a blockchain-based distributed ledger, called a "world computer", with "smart" contracts. It is the 2nd largest cryptocurrency.

The Ethereum Virtual Machine (EVM) is a big-endian stack machine with 185 opcodes and a native data width of 256 bits, with many similar instructions. Each instruction has a 'gas cost' to prevent infinite loops.
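Gas metering is the key safety valve here; a toy sketch (with made-up opcode costs, far simpler than the real EVM schedule) of how it stops runaway code:

```python
GAS_COST = {"PUSH": 3, "ADD": 3, "JUMP": 8}

def run(program, gas):
    stack, pc = [], 0
    while pc < len(program):
        op, *args = program[pc]
        gas -= GAS_COST[op]
        if gas < 0:
            raise RuntimeError(f"out of gas at pc={pc}")
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            stack.append((stack.pop() + stack.pop()) % 2**256)  # 256-bit wrap
        elif op == "JUMP":
            pc = stack.pop()
            continue
        pc += 1
    return stack

# An infinite loop - PUSH 0 then JUMP back to the start - cannot hang the VM:
try:
    run([("PUSH", 0), ("JUMP",)], gas=100)
except RuntimeError as e:
    print(e)  # out of gas at pc=...
```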

Most contracts start at 0, and there are 5 address spaces. Most people don't write their contracts in EVM bytecode, but use Solidity instead - a JavaScript-inspired high-level language for smart contracts. It has evolved (as opposed to being designed).

Much of the presentation was done with emojis - easier to see than a string of numbers :-)

Because counters start at zero, he has seen undefined behaviors when they get decremented too low (underflow). There are also issues with uninitialized variables - used in one case to backdoor a lottery system.
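A minimal sketch of why "too low" is catastrophic, simulating the 256-bit wraparound of unchecked Solidity arithmetic (the default before Solidity 0.8):

```python
# Pre-0.8 Solidity uint256 math wraps modulo 2**256, so decrementing
# a zero counter yields an enormous number instead of an error.
UINT256 = 2**256

counter = 0
counter = (counter - 1) % UINT256  # what an unchecked `counter--` does on-chain
print(counter)  # 2**256 - 1, a 78-digit number
```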

There is a new tool, Rattle, that recovers EVM control flow. Other tools, Geth and Parity, run on public nodes. This was followed by a walkthrough of the tools and their CLI options, looking at some contracts. He shared the code for finding contracts as well. Geth and Parity have a lot of issues, so he's been looking at etherscan.io - a quick lookup database.

He took a hybrid approach: using Geth and Parity to find the contracts over a few hours, then looking them up on etherscan.io. Looking at 6M blocks, about half the contracts are duplicates. Some are empty but have a balance - which shouldn't happen.

Sometimes contract creation fails because there was not enough 'gas'. He found a contract with no code (unusable) but with about $7,000 in it - stuck there forever. All told, there is about $2.6M stuck in empty contracts that can never be retrieved.
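As a sketch, here is how one might spot such stuck funds with web3.py (assuming a reachable node; the endpoint URL is a placeholder, method names follow recent web3.py releases, and the address in the usage note is hypothetical):

```python
from web3 import Web3  # pip install web3

# Point this at any Ethereum node or RPC endpoint you can reach.
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

def has_stuck_funds(address):
    """True if the account holds ether but has no contract code - e.g. a
    failed creation whose balance can never be released."""
    addr = Web3.to_checksum_address(address)
    return w3.eth.get_balance(addr) > 0 and w3.eth.get_code(addr) == b""

# Usage (hypothetical address):
# print(has_stuck_funds("0x000000000000000000000000000000000000dEaD"))
```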

Some duplicates have infinite loops - possibly intended as a network DoS. Others were seen with noise or spam, or NUL-value issues.

From tracing, they were able to look into contracts where the self-destruct caller was not the original creator - these tend to send the money to address 0, losing it forever.

If you are developing contracts, make sure you understand and fix all compiler warnings. Add Echidna tests and write extensive positive and negative tests. Most importantly, perform a rigorous assessment!






Wednesday, August 9, 2017

BHUSA17: Datacenter Orchestration Security and Insecurity: Assessing Kubernetes, Mesos, and Docker at Scale

Speaker: Dino Dai Zovi

This was a challenging session to take notes on, given the speed of the slides and the mountain of information, but suffice it to say - Docker and Kubernetes need security help and consistency!

Kubernetes (K8s) is a young project, but very active. Many companies have full-time engineers working on the project.

The security mechanisms in K8s are all very new - only in alpha or beta, or less than a month old - security seems like an add-on. For example, RBAC is enabled by default in K8s 1.6, but many people turn it off to work with older versions.

But because most security features are new, there are many private distros forked earlier that may be missing the security features entirely! And some will "dumb down" to successfully connect to older versions - so you may have the security feature, but it's not configured. That leaves plenty of potential attacks distributed across deployments.

BHUSA17: Tracking Ransomware End to End


Only 37% of people back up their data, which leaves the rest open to ransomware.

Victims are shown a URL where they can pay to get their data back. It's posted on Tor, so the source is hard to take down. The criminals only accept Bitcoin, so the researchers can use the blockchain to see who paid and who didn't.

Bitcoin is anonymous and irrevocable - payments cannot be reversed! If you find the addresses in the ledger, you can go back and see who else was ransomed. They gathered seeds from victim reports and from synthetic victims - meaning you have to pay a small amount yourself to find out more about the network.

The researchers' initial data covered 34 families with 154,000 ransomed files; by using clustering to expand the dataset and find other victims, they are now working with 300,000 files. All told, the ransomware they tracked has made approximately $25,253,505 (a lowball estimate) - so there's money to be made, no doubt!
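As a sketch of what "clustering for dataset expansion" can look like, here is a toy version of the common multi-input heuristic (all transactions and addresses invented):

```python
# Addresses spent together as inputs to one transaction are assumed to
# share an owner, so a few seed addresses can grow into a full cluster.
transactions = [
    {"inputs": {"addrA", "addrB"}, "outputs": {"victim_payment"}},
    {"inputs": {"addrB", "addrC"}, "outputs": {"exchange_cashout"}},
    {"inputs": {"addrD"},          "outputs": {"unrelated"}},
]

def expand(seeds, txs):
    cluster = set(seeds)
    changed = True
    while changed:  # keep merging until no transaction adds new addresses
        changed = False
        for tx in txs:
            if cluster & tx["inputs"] and not tx["inputs"] <= cluster:
                cluster |= tx["inputs"]
                changed = True
    return cluster

print(expand({"addrA"}, transactions))  # {'addrA', 'addrB', 'addrC'}
```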

In 2017, ransomware increased binary diversity in order to evade AVs.

Many victims don't have any Bitcoin, so they buy it from the "LocalBitcoins" site (think Craigslist for Bitcoin).

The researchers found that 90% of payments went through as a single transaction, 9% did not account for the transaction fees, and a small percentage made multiple transactions for unknown reasons.

Locky - a ransomware family - increased its spread; they started seeing it in infrastructure like hospitals. It was making about $1 million/month!

Dridex, Locky, and Cerber are all distributed via botnets. Cerber recruits low-tech criminals to help, making the operators a consistent income of $200K/month.

Cerber even includes real-time chat with customer "service" to help you recover certain files.

WannaCry seems more like wipeware than ransomware. Even if victims paid, the way it was built made it hard to verify that you did indeed pay, and harder still to get your files back.

The researchers have also seen a rise in NotPetya lately - another wipeware.

This is not going away - it is a multi-million dollar industry. Cerber has even introduced the concept of an affiliate model, so more people can "play". Yikes!