Thursday, August 6, 2015

BHUSA15: Black Hat Panel – Beyond the Gender Gap: Empowering Women in Security

Kelly Jackson Higgins, Executive Editor at Dark Reading (panel moderator)

This is a growing industry, but women are leaving. We need more people, so how do we empower the women we have?

Panelists:
Justine Bone, Independent Consultant
Joyce Brocaglia, Founder, Alta Associates (executive search firm for security, etc.)
Jennifer Imhoff-Dousharm, co-founder, dc408 and Vegas 2.0 hacker groups
Katie Moussouris, Chief Policy Officer, HackerOne

All of the women here come from different backgrounds - hacking (black hat and white), executives, startups, big companies.

Justine learned a big hard lesson when she dropped out of industry to work on her own startup - at the same time as having kids.  While she was working her butt off, she wasn't showcasing her work or engaging with her peers - everybody thought she'd taken off time to have kids, totally unaware of the hard work she'd been doing. Lesson: always engage, promote, etc.

Joyce mentioned that she sees a lot of employee resource programs that are more checkboxes than actually beneficial programs for women. She noted that a company might pay Alta $100,000-$150,000 to find a new executive, but when she asks if they'll pay $100,000 for a leadership program with a proven track record, the same company will say "we don't have that kind of money." (note: sigh)

Katie started up a bug bounty program at MS - it was hard. Big companies had vowed to never pay "ransom" for security bugs - she had to present this in a different way, to get it to line up with their goals, building organizational empathy (when is the best time for devs to get vulns?). Hence, the IE 11 Beta Bug Bounty - which ran for 30 days. Alas, folks would hold on to their vulns until after the beta was closed, forcing MS to release vulnerability reports.

We have a shortage of engineers, so why aren't women coming in? Jennifer said she doesn't see it as a pipeline problem - she noted that women that grew up in the 80s were exposed to computers (yay, Oregon Trail) and didn't hit the "cootie" problem until they entered corporate America. It's scary to be the only person like you in the room - you don't realize it until you are that only woman. It doesn't matter how strong you are or how much you lean in, you have to carry that weight of diversity.

Justine noted the "DefCon problem" - it's annoying that everyone asks you "who are you here with? who's your boyfriend" - it gets exhausting. (Note: YES - happened to me every year, after my bf & I broke up and I continued to go alone).  Explaining over and over that you deserve to be there, what you do, that you really are technical.

Katie noted there's a challenge as well that you are expected to be a representative of ALL women, regardless of how different we all are. She hates the question: "what's it like being a woman in security?" - stop asking her about the least important aspect of her job and her personality; she is so much more than just "a woman in security."

Joyce notes that she sees job advertisements all the time that will literally use the male gendered pronoun: "he will be responsible for X, Y, Z". Knowing that men will apply for a job where they only meet 6/10 of the qualifications, while women require 9/10 before they will apply... adding "he" to the description is one qualification, right off the bat, that a woman will not meet. Confidence matters as much as competence - men tend to have more confidence, which may help explain why women are not making it to the higher levels.

Companies need to invest in younger women to make these changes - they are an investment.  Women and men need sponsors, but companies should make sure that it's not only men getting them. If women are raising their hands for stretch assignments, but getting skipped over... is it their fault?

Justine noted that we also need to be willing to accept help - if someone tries to bring you into the "old boys club" - GO! Joyce cautioned, though, don't wait for it.

Justine says she's always criticized for her travel for work, by friends, family, etc. How could she leave her kids? She notes she's on these flights with a ton of men doing the same thing - and nobody criticizes them.

Can you have work and family?  Yes, but you need help - nannies, families, etc. "Women have the capacity to multitask and get shit done," Joyce.

Personal space at these events is important. Katie had a run in with "Handsy McMansy" last night - fortunately, she's adept at profanity to throw at him. The men around though seemed shell shocked and didn't know what to do. "I don't need somebody to fight for me, I need them to fight with me."

Joyce had a run in last night that was similar with a male executive, sloppy drunk, asking dumb questions and hanging on people. If a woman did that - she would be shamed by the men around.

Joyce noted that women still don't get taken seriously at booths at events like RSA.  People don't want to talk to the women, even if they may be the one making purchasing decisions.

Justine looked at the Black Hat review board this morning - there is only ONE woman on the review board. Not saying the men on the board are not skilled and talented, but they need diversity.

Joyce noted that women need to submit talks, start with smaller conferences and get practice, confidence, etc. 

Men should talk to women at conferences - acknowledge them, don't question why they are here - but actually engage. Like, "what do you do at your company?" vs "who's your boyfriend?"

Joyce noted that older generations of men are lacking the emotional intelligence to understand why what they are doing or saying is not okay. She has high hopes for the younger generations, who grew up with working moms, etc.

Katie noted that women need to stop denigrating themselves - the world will do that for them. Speak about your work in positive tones, not "well, I don't do kernel work, I don't do...". Believe in yourself and don't be afraid to tell the world about what you do.

BHUSA15: Information Access and Information Sharing: Where We Are and Where We Are Going

Alejandro Mayorkas, Deputy Secretary of Homeland Security.

Homeland security means security of our institutions, security of our way of life and most importantly security of our values.  Security of the Internet is very much a part of what we do. It is clear that the challenges of network security are immense. We as a government are making advances in this area, but we are not where we need to be.

Every morning, the secretary and he get a briefing about threats - events that are occurring or are about to occur all over the world. Increasingly, Internet security events are more common in that meeting.

The more he travels around the country, the more obvious it becomes how important this is for everyone. Internationally, the same thing. Foreign companies and governments all care about this.

The current state of affairs, with individualized responses, is not working well to ensure that the Internet is protected. DHS considers itself uniquely situated to address these concerns. DHS is a civilian agency, standing at the intersection of the private sector, the enforcement community, the intelligence community, and the desire to protect .gov. They have created a critical response set of protocols and an organization (the National Cybersecurity and Communications Integration Center).

DHS currently shares information in bulletins or entity to entity. It is not currently in an automated fashion. The President, in his last executive order, placed DHS in charge of leading information sharing with the private sector.

DHS wants an automated and near real time way to share and disseminate information, to raise the bar and capacity for the private sector to protect themselves.  When a threat is shared with DHS, they can receive that in automated form and disseminate in near real time to prevent replication of that threat.
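For context, this kind of automated exchange is typically built on structured, machine-readable indicator formats (DHS's effort standardized on STIX/TAXII). A minimal sketch of the sort of record such a system passes around - field names here are simplified illustrations, not the real schema:

```python
import json
from datetime import datetime, timezone

# Illustrative only: a simplified machine-readable threat indicator,
# the kind of record an automated near-real-time sharing system would
# receive from one company and redistribute to others.
indicator = {
    "type": "indicator",
    "pattern": "[ipv4-addr:value = '203.0.113.7']",  # documentation-range IP
    "labels": ["malicious-activity"],
    "created": datetime(2015, 8, 6, tzinfo=timezone.utc).isoformat(),
}

# Serialization is what makes dissemination automatic: no human
# rewrites a bulletin, the record itself travels.
print(json.dumps(indicator, indent=2))
```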

One thing in their way: the issue of trust. That emanates from a variety  of sources - can DHS keep this secure? can you trust those providing information?

DHS needs to work on building trust - it will take time, but will be worth the effort.

As they are working on the automatic reception of cyber threats, please give them a chance and share some information so that they can prove their capabilities and prove their results.

Question about how important is it for private industry to participate?  Answer: very, many of them are very critical systems. It's critical they participate.

We have to understand our responsibilities for the public good. Alejandro hopes that sharing of cyber threats will have a public dimension. It's vital for threats to be shared far more publicly than they are now. This is important to DHS's mission to secure this country.

DHS is very active in research and development in achieving network security - we are investing in public as well as private sector.

Various questions show that folks are nervous about sharing with the government, Alejandro noted that they will be working on correcting that.

Another questioner asked about the OPM breach, where the personal information of employees at NASA and other agencies was lost. He noted that not all agencies are as advanced as others, and they've been doing a 30-day security sprint with the goal of improving this.

Question: will information about 0-days that the government has bought be shared? Answer: we are going to declassify and release everything that we can.

Question: the gov't is known for antiquated systems, so how do we know you'll do this right? Alejandro noted that they have to start with new gear, and stay on top of the systems. (no Windows NT here)

Additionally, DHS is looking at recruiting the best and the brightest, and even looking at opening an office in Silicon Valley.

BHUSA15: Where? You can't Exploit What You Can't Find

Christopher Liebchen & Ahmad-Reza Sadeghi & Andrei Homescu & Stephen Crane

We're concerned with many problems that are actually three decades old. Nowadays, everyone has access to cell phones, and there are many developers with different intentions and different backgrounds (particularly with respect to security).

So - how do we build secure systems from insecure code?  Seems counter-intuitive, but we have to do it. We cannot just keep adding more software onto these old systems.

We've had decades of run-time attacks - like the Morris Worm in 1988, which just keeps going.

There are a number of defenses, but often the "defenses" were broken by the original authors just a few years ago. So, there is a quest for practical and secure solutions...

Big software companies, like Google and Microsoft, are very interested in defending against run-time attacks - like EMET or Control Flow Guard from MS, or IFCC and VTV from Google. But how secure are these solutions?

"The Beast in Your Memory": includes a bypass of EMET - return-oriented programming attacks against modern control-flow integrity protection techniques...

The main problems are memory corruption and memory leakage.

You can do a code-injection attack or a code-reuse attack.

Randomization can help a lot, but not perfect.

Remember the basic idea of return-oriented programming: use small instruction sequences instead of whole functions. Instruction sequences of length 2 to 5, all ending with a return instruction. The sequences are chained together, each return modifying what code is executed next.
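A toy sketch of that chaining idea, with Python standing in for machine code (the addresses and "gadgets" are invented for illustration):

```python
# Toy model of return-oriented programming: each "gadget" does a tiny
# bit of work, then "returns" by letting the loop pop the next gadget
# address off the attacker-controlled stack.
def gadget_load(state):      # e.g. pop rax; ret
    state["rax"] = state["stack"].pop()

def gadget_add(state):       # e.g. add rax, rbx; ret
    state["rax"] += state["rbx"]

def gadget_store(state):     # e.g. mov [mem], rax; ret
    state["mem"] = state["rax"]

GADGETS = {0x1000: gadget_load, 0x2000: gadget_add, 0x3000: gadget_store}

def run_chain(stack, rbx=0):
    # The "CPU" just keeps returning into whatever address is next
    # on the stack - no code was injected, only existing pieces reused.
    state = {"stack": stack, "rax": 0, "rbx": rbx, "mem": None}
    while state["stack"]:
        addr = state["stack"].pop()
        GADGETS[addr](state)
    return state["mem"]

# Chain: load 5 into rax, add rbx (=7), store the result.
# The stack is popped from the end, so gadgets are pushed in reverse.
print(run_chain([0x3000, 0x2000, 5, 0x1000], rbx=7))  # prints 12
```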

Assumptions for the adversary model: memory pages are writable XOR executable, address space layout randomization (ASLR) is in place, and the attacker can disclose arbitrary readable memory.

Main defenses: code randomization and control flow integrity.

Randomization has low performance overhead and scales well to complex software. However, it suffers with low system entropy, and information disclosure is still hard to prevent.

For CFI: formal security (explicit control flow checks) - if something unexpected happens, you can stop execution (in theory). It's a trade-off between performance and security (inefficient), and it's challenging to integrate into complex software.

What about fine-grained ASLR? You are just trying to make it more complicated. But this has been attacked by JIT-ROP (BH 2013), which undermines any fine-grained ASLR and shows memory disclosures are far more damaging than believed. It can be instantiated with real-world exploits.

Then we got a pretty graphical demo of how JIT-ROP works.

Their current research is around code-pointer hiding.

Their objectives were to prevent code reuse & memory disclosure attacks. It should be comprehensive (ahead of time + JIT), practical (real browsers) and fast (less than 6% overhead).

We prevent direct memory disclosures by using execute-only code pages, which prevent direct disclosure of code. Previous implementations do not entirely meet our goals. So, we fully enforce execute-only permissions with current x86 hardware.

We have virtual addresses that get translated to physical addresses. During the translation, the MMU can enforce some permissions. As soon as a code page is executable, it is also readable. But you might want something to be executable, yet not readable. You can do this with extended page tables (EPT), which can mark something as only executable (not readable).
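A tiny model of that distinction (purely illustrative; this is not how the MMU is actually programmed):

```python
# Toy model of the point above: with legacy x86 page tables an
# executable page is always readable, but with EPT the hypervisor
# can grant execute-only (--X), so a read attempt can be trapped.
LEGACY = {"code": {"r", "x"}}   # exec implies read
EPT    = {"code": {"x"}}        # execute-only is expressible

def access(page_tables, page, op):
    """op is 'r', 'w', or 'x'; returns True if the access is allowed."""
    return op in page_tables.get(page, set())

print(access(LEGACY, "code", "r"))  # True  -> direct code disclosure possible
print(access(EPT, "code", "r"))     # False -> reading the code page faults
print(access(EPT, "code", "x"))     # True  -> the code still executes
```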

The attacker can leak pointers to functions that point to the beginning or in the middle - once he's got that pointer, he can figure a lot more things out.

So, we can add another layer of indirection: code pointer hiding!
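A toy model of that indirection (my own sketch, not Readactor's actual implementation): readable memory holds only an opaque trampoline index, while the table mapping it to the real randomized address lives in execute-only memory the attacker cannot disclose:

```python
import random

# Toy model of code-pointer hiding. The names and layout are invented
# for illustration.
random.seed(1)
# The function's real location is randomized at load time.
real_addrs = {"process_input": 0x400000 + random.randrange(0x10000)}

# This table lives in an execute-only region: the program can jump
# through it, but a memory-disclosure bug cannot read it.
trampolines = [real_addrs["process_input"]]

# A readable heap object (e.g. a vtable) stores only the index.
vtable_entry = {"trampoline_id": 0}

def leak(obj):
    # All an attacker with arbitrary read can learn from the object:
    return obj["trampoline_id"]

print(leak(vtable_entry))  # prints 0 - not the randomized address
```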

They modified the Readactor compiler so it would have code/data separation, fine-grained code randomization, and code-pointer hiding.

Our goal is to protect applications which are frequent targets of attack. They have JIT (just-in-time) code, which is harder to protect, as it frequently changes. Solution: alternate EPT usage between a mutable mapping (RW-) and an execute-only mapping (--X).

Now, does this actually work? Checked performance with SPEC CPU2006 and Chromium benchmarks. Checked practicality by compiling and protecting everything in Chromium.

The full Readactor caused roughly a 6.4% slowdown. But if you only use the hypervisor-enforced execute-only mode, that's only around a 2% performance impact, which seems acceptable.

How does it do wrt security? Resilience against (in)direct memory disclosure.

Code reuse attacks are a severe threat - everyone acknowledges this. Readactor is the first hardware-enforced execute-only fine-grained memory randomization for current x86 hardware.


BHUSA15: The Memory Sinkhole – Unleashing an x86 Design Flaw Allowing Universal Privilege Escalation

Chris Domas is an embedded systems engineer and cyber security researcher, focused on innovative approaches to low level hardware and software RE and exploitation.

There has been a bug in the x86 architecture for more than 20 years... just waiting for Chris Domas to escalate privileges.

Chris did the demo on a small, cheap netbook. In case it didn't work, he had a stack of netbooks. We saw a general user run a simple program and end up with root.

Some things are so important that even the hypervisor should not be allowed to execute it.

We originally wanted to do power management without the OS having to worry about it - system management mode (SMM). SMM became a dumping ground for all kinds of things, and eventually it took on "security" enhancements. Now SMM is important: root of trust, TPM emulation and communication, cryptographic authentication...

Whenever there was something important or sensitive or secret, it got stuck in SMM.

Userland is at Ring 3, the kernel is in Ring 0. Ring -1 is the hypervisor... Ring -2 is SMM. On modern systems, Ring 0 is not in control. We have to get deeper than (and hide from) Ring 0.

If you're in Ring 0 and try to read from SMM, you'll just get a bunch of garbage. The memory controller separates SMRAM from the rest of the system. If you're in SMM, though, you can read from SMRAM.

There are many protections on SMM - locks, blocks, etc. - but most exist in the memory controller hub. Lots of research in this area on how to get to Ring -2.

The local APIC used to be a physically separate chip that did this management. But, it's more efficient and cheaper to put the local APIC on the CPU.  Now it's even faster!

Intel reserved 0xFEE00000-0xFEE01000 for the APIC - so to access it, you have to take some roundabout ways to get there. When they created this model, it broke legacy systems that expected that segment of memory to map to something else. The Intel SDM, c. 1997, describes what happened here in the P6.

Now we're allowed to move where the APIC window is located, allowing us to access APIC reserved space.  This "fix" opens systems up to this vulnerability.

If we're in Ring 0 and try to read SMRAM, we will be denied. But you can do it from SMM. What if we're in Ring 0 and relocate the APIC window over SMRAM? Now we can modify SMM's view of memory - the security enforcer has been removed.
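A toy model of why the relocation matters (addresses and region sizes are illustrative): the APIC MMIO window takes precedence in physical address decoding, so moving it over SMRAM changes what SMM sees at those addresses:

```python
# Toy model of the sinkhole: the APIC MMIO window wins the physical
# address decode, so relocating it over SMRAM replaces what SMM sees
# there. Addresses and sizes are illustrative only.
SMRAM_BASE, APIC_SIZE = 0xA0000, 0x1000

def read(addr, apic_base, in_smm):
    if apic_base <= addr < apic_base + APIC_SIZE:
        return "APIC register"                       # MMIO wins the decode
    if SMRAM_BASE <= addr < SMRAM_BASE + 0x20000:
        return "SMRAM" if in_smm else "garbage"      # protected from Ring 0
    return "normal RAM"

print(read(SMRAM_BASE, apic_base=0xFEE00000, in_smm=True))   # SMRAM
print(read(SMRAM_BASE, apic_base=0xFEE00000, in_smm=False))  # garbage
# Ring 0 relocates the APIC window over SMRAM: SMM now sees
# attacker-influenced APIC contents instead of its protected memory.
print(read(SMRAM_BASE, apic_base=SMRAM_BASE, in_smm=True))   # APIC register
```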

How to attack Ring -2 from Ring 0? SMRAM acts as a safe haven for SMM code. As long as SMM code stays in SMRAM, Ring 0 cannot touch it. But if we can get SMM code to step out of its hiding spot, we can hijack it.

Move the APIC over SMRAM, corrupt execution, trigger a fault in SMM. This gets it to look up an exception handler - under our control. Though, that attack doesn't work: there's an undocumented security feature which causes a triple fault (resetting the system).

He overlaid the APIC MMIO range at the SMI entry point, SMBASE+0x8000 - getting the APIC window and the SMI entry point to overlap.

Now, we just need to store shell code in the APIC registers. The challenge is they have to be 4K aligned. Place the window exactly at the SMI entry point, and execution begins at exactly the start of the APIC registers. 4096 bytes available.

Many registers are largely hardwired to 0, giving few registers that can actually be changed. We need to do something useful before the system resets.

You need to keep things from executing right away before our last byte is activated.

We only really have 4 bits to do something to actively attack the system. Looking at the opcode map, there are not a lot of interesting things.

But, the attack didn't work as expected. We still can't execute from the APIC, so must control the SMM with data alone.

How do we attack code when our only control is to disable some memory?

SMM code comes from system firmware. Intel makes template code, which goes to the independent BIOS vendors, and then the OEMs (HP, Toshiba, Lenovo) make more changes.

The only way to make a general attack is to look at the EFI template BIOS code from Intel, as that will be on EVERY system.

From Ring 0, we try to sinkhole the DSC to switch it into system management mode. We've lost control, but maybe it'll let us do something before memory resets.

(lots of stuff about self rewriting code, and far jumps and long jumps and lots of hex codes)

Then we successfully got the SMM to read code that we could control, by controlling the memory mapping.

With only 8 lines of code, to exploit: hardware remapping, descriptor cache configurations, etc.

In the end, used well behaving code in order to abuse a different area.

This has opened up a new class of exploits. Now that we have Ring -2 control, what can we do? We can disable the cryptographic checks, turn off temperature control, brick the system, or install a rootkit.

Once we have the control, we can preempt the hypervisor, periodic interception, filter ring 0 i/o, modify memory, escalate processes, etc.

Adapted code from Dmytro Oleksiuk.

We can simultaneously take over all security controls. Mitigations don't look good: this is unpatchable and would need new processors... which Intel did. Their developers seem to have found this independently. This is now fixed with Sandy Bridge and the 2013 Atom - there's now a check against the SMRRs when the APIC is relocated.

intel.com/security has a write up on this. They have been easy to work with, and have been working on mitigations where ever it was architecturally possible.

Wednesday, August 5, 2015

BHUSA15: Panel: Getting it Right: Straight Talk on Threat & Information Sharing

Panelists: Trey Ford(@treyford) is the Global Security Strategist at Rapid7, Kevin Bankston (@kevinbankston) is the Director of the Open Technology Institute and Co-Director of the Cybersecurity Initiative at New America, and @brianaengl and @hammem (speaker lineup appears to have changed, so twitter handles are what I've got:-) (also, the podium is super giant and blocking my view of the speakers, so I can't tell you who is saying what).

Sharing sounds like fun, but it's not as simple as we remember from our childhood.  There are legal implications, contracts, source trust issues, etc.

Intelligence is like a UDP packet you cast out and hope for the best.  How do you determine if the information is still relevant?

Facebook is working on this - how to do exchange of data?  What can we learn from it?

When people start sharing data, they realize that they need to share with someone who cares. I.e., if your concern is about phishing, don't build a relationship with someone who is focused on bitcoin.

What is stopping companies from sharing information with other companies and the government? It will be relevant to you if new legislation passes.

Some of the barriers are around the wiretap act (Title II) portion of the ECPA, which places limits on real-time communications and limits disclosure. Other limits: federal privacy laws protecting HIPAA data and educational records, self-imposed restrictions in Terms of Service, and anti-trust laws (the DoJ could accuse them of colluding in an anti-competitive way).

Well, and there are nervous lawyers :-)

Most threat information doesn't include content or PII. Non-content can be liberally shared, with exceptions for security and consent via ToS.  DoJ has stated they won't go after companies sharing for these reasons.  Companies already do a lot of sharing, so do they really need new legal permissions?

But there's the new CISA: the Cybersecurity Information Sharing Act, S. 754. It authorizes sharing of broadly defined "cyber threat indicators" and "info about defensive measures" with "any other entity or [any agency of] the Federal Government".

DHS must distribute all information to other agencies, including the NSA, "not subject to any delay or modification". The gov't can use the information to investigate or prosecute a range of crimes unrelated to cybersecurity.

The house is also working on bills!

Congress has been looking at CISA since 2009, and they are starting to feel like they have to do something so they can show they are serious about cybersecurity.

Please call your senator to oppose the bill and support privacy-enhancing amendments. If it does pass, it still has to go to conference with the House.

Check out StopCyberSpying.com or call 1-985-222-CISA for more information.


BHUSA15: Stranger Danger! What is the Risk from 3rd Party Libraries?

Kymberlee Price, Bugcrowd, and Jake Kouns is the CISO for Risk Based Security.

It's well known that vulnerability statistics suck (see Steve Christey's (MITRE)  Black Hat 13 talk).

But, the truth is - we are getting attacked, lots of new (and old) vulnerabilities.  This is getting worse every year, not better.

Secunia says there are 15,000 vulnerabilities, but they counted Heartbleed as 210 different vulnerabilities (our speakers say it was just one, while some audience members noted it was three).

There were 100+ vendors impacted by Heartbleed, impacting over 600,000+ servers.

Very large companies are using OpenSSL: Oracle, HP, Blackberry, Juniper, Intel, Cisco, Apple, etc... so, it's not just little startups using open source anymore.

There have been 52 new vulnerabilities fixed since Heartbleed - average score of CVSS of 6.78.  Nine of them had a public exploit available.

We're beating up on OpenSSL - but what about the GNU C library (GHOST), which had a heap vulnerability in it? It's everywhere.

Efficiency at what cost? By leveraging third party source, companies can deliver faster, cheaper, etc. But what are companies picking up in exchange? Some products have more than 100 third party libraries in them. Are they being treated with as much scrutiny as they should be?

The speakers aren't saying: "Don't use 3rd party libraries", but rather to think about things during design and development.

All of the data they are sharing this week are from public sources, even though that data is limited.

Look at FFmpeg - it has 191 CVEs, but over 1000 vulns fixed.

These vulnerabilities spread - think about the FreeType Project font generation toolkit. It's used by Android, FreeBSD, Chrome, OpenOffice, iOS, video games (including the American Girl Doll game). Everywhere! There was a vulnerability (missing parameter check) that allowed you to jailbreak your iPhone... or someone else to take over your iPhone. This is insidious, as you have to wait for the vendor to fix it.

libpng, Apache Tomcat... everyone is using this and including these things in toolkits.

We shipped a vulnerability to Mars! (Java is on the Mars Rover).

Interesting to note: some vendors don't even release CVEs for anything under a CVSS of 5.0. Since 2007 the number of CVEs: OpenSSL (90), Flash (522), Java (539), FreeType (50), libpng (28), Apache Tomcat (100).

Now, this is not telling you what is more or less secure. For example, Adobe has an excellent bug bounty program and internal pen testers. Just because a product has only reported a few, doesn't mean more aren't lurking.

We should consider time to relief. How long does the vendor know about the issue before they provide a fix? You can use this to figure out how serious that vendor is about security.

Had to define a framework to understand time of exposure, identify vendors and products you want to work with and establish a scorecard.

Calculating Vendor Response Time is how long from when the vendor was informed before they responded to the researcher. This can't be an automated reply, but actual acknowledgement.

Time to patch - when do the customers get relief.

But another time to consider: how long were customers vulnerable? That is, how long from when the patch is available to when the patch was applied (many folks only do updates quarterly, for example). Total time of exposure covers the period from when the vulnerability was discovered until when it was fixed at the customer site.
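Those intervals are just date arithmetic; a sketch of the scorecard math (the field names are my own, not the speakers' framework):

```python
from datetime import date

# Sketch of the exposure-time metrics described above: each metric is
# a simple difference between two dates in the vulnerability timeline.
def exposure_metrics(discovered, vendor_informed, vendor_acknowledged,
                     patch_released, patch_applied):
    return {
        "vendor_response_days": (vendor_acknowledged - vendor_informed).days,
        "time_to_patch_days":   (patch_released - vendor_informed).days,
        "customer_lag_days":    (patch_applied - patch_released).days,
        "total_exposure_days":  (patch_applied - discovered).days,
    }

# Hypothetical timeline: found Jan 1, reported Jan 10, acknowledged
# Jan 17, patched Mar 1, applied by the customer Apr 1.
m = exposure_metrics(date(2015, 1, 1), date(2015, 1, 10), date(2015, 1, 17),
                     date(2015, 3, 1), date(2015, 4, 1))
print(m["total_exposure_days"])  # prints 90
```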

We got to walk through a few case examples.

In one case, a researcher reached out to a company on twitter asking how to securely disclose a vulnerability - and for 2.5 months they kept pointing the researcher at their insecure support page.

It is critical for vendors to respond promptly and investigate the issue.

And this data is hard to figure out, as the terminology for "zero day" (oh day, 0 day) seems to be malleable. The speakers believe that it's only a 0-day when the vendor does not know about it. Once the vendor knows, or the vuln is publicly disclosed, it's no longer a zero day.

In one case, the vendor created a patch - but did not release it, instead they wanted to roll it up to their next version release. In the end, their customers were exposed for 451 days.

While most companies update their systems every 30 days, their exposure could be much longer due to a vendor not actually providing the  fix to their customers.

Advice: once you incorporate a 3rd party software suite into your tools, you need to become active in that community, watch it, help out, provide funding, or you are putting your own customers at risk.

You also need a clear prioritization scheme, to know what to fix and when (as most likely your incoming rate is higher than your fix rate). Proactively manage your risk. Understand what third party code your organization relies on, implement a plan to address the exposure, and work with the vendors.




BHUSA15: Gameover Zeus: Badguys and Backends

Speakers: Elliott Peterson is a Special Agent with the FBI in the Pittsburgh Field Office. Michael Sandee is a key member in the Fox-IT financial malware intelligence unit. Tillmann Werner is the Director of Technical Analysis at CrowdStrike Intelligence.

Gameover Zeus went after backend banking systems, very successfully - a botnet run by an organized crime gang. It was designed to make it impossible to be subverted by the good guys.

We estimate that the losses ranged from $10,000 to $6,900,000 / attack. The criminals had knowledge of International banking laws, leveraged international wires, and used DDoS attacks against the banks to distract and prevent the victims from identifying the fraud.

Dirt Jumper command/control was being used.

They saw the $6.9 million loss and informed the bank - but the bank could not find the loss. It took a long time to find, due to the DDoS. The FBI was finally able to track down who was receiving the funds in Switzerland and put a stop to this. Now the feds can prevent the transactions and even get the money back in the end.

The first Zeus came out in 2005 as a crimeware kit. The primary developer "abandoned" the project, and turned it into a private project in 2011.

JabberZeus crew was using the kit malware then moved into Zeus 2.1.0.x, which included support for domain generation algorithm, regular expression support and a file infector.  Then, in September it was upgraded to Mapp 13, which includes peer-to-peer + traditional comms via gameover2.php.  The focus was on corporate banking, and would often drop in additional malware (like CryptoLocker).

The attack group members seemed to have 5 years of experience, some as many as 10. Mainly from Russia and Ukraine, with two leaders. Included support staff and 20 affiliates.

They had "bulletproof" hosting - exclusive servers together, virtual IP addresses, new address in 2 business days - very expensive!  Additionally, proxies all over the place - like in front of the peer-to-peer network.

The network was protected using a private RSA key.

The FBI, and their private assistants, had to watch for traffic patterns and cookie theft/removal. For example, they could remove your existing cookie to force you to login again so that they could get your password.  Once they got what they wanted, they would block (locally) access to the bank's website.

This wasn't just financial, but also political. There was espionage, targeting intelligence agencies, looking for things around the Crimea and Syrian conflicts.  Specifically looking for top secret documents, or agent names.


Why take control? If not - if the feds' presence was detected - the command engine could shut down and destroy the rest of the botnet.

The botnet uses a basic p2p layer. Every infected machine stores a list of neighbor nodes, updated often, and peers talk directly to each other - getting weekly binary updates!

They had proxy nodes, which were announced by special messages to route C2 communication (stolen data, commands). Many nodes in the cluster are not publicly accessible, so there are proxy nodes that encapsulate traffic in HTTP so they can continue to communicate with infected machines behind a firewall.

The botnet was also configured to NOT accept unsolicited responses - they must match a request - so the feds (and friends) could not use a simple poisoning attack.

Goal: isolate bots, prevent normal operation, by turning the p2p network into a centralized network with the good guys at the controls (a sinkhole).

The good guys had to attack the proxy layer with a poisoning attack. Peers maintain a sorted list of up to 20 proxies, with regular checks whether they're still active. Had to poison that list, and then make sure none of the other proxies reply any more. Needed to work with ISPs to get access to some active proxies.
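A toy simulation of that poisoning step (names and list mechanics invented for illustration; the real protocol is far messier):

```python
# Toy model of the takeover: each bot keeps a list of up to 20
# proxies; the poisoning attack floods that list with sinkhole
# addresses and then ensures the real proxies stop responding.
MAX_PROXIES = 20

def poison(bot_proxies, sinkholes, dead_proxies):
    # Announce sinkhole entries so they crowd out the real proxies...
    merged = (sinkholes + bot_proxies)[:MAX_PROXIES]
    # ...then drop any remaining real proxies that no longer respond.
    return [p for p in merged if p not in dead_proxies]

bot = ["proxy-%d" % i for i in range(20)]
sink = ["sinkhole-%d" % i for i in range(20)]
result = poison(bot, sink, dead_proxies=set(bot))
print(result == sink)  # prints True: the bot now only talks to the sinkhole
```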

Needed to take over the command and control node first - that's where the commands came from. Once they were in, they killed the old centralized servers (one was in Canada and the other in Ukraine). Took advantage to completely change the digraph and essentially took down the botnet.

Needed to watch emails exchanged with "Business Club". Helpfully, "Business Club" kept ledgers!

The FBI needed to look at the seams to find who these people were. For example, Bogachev used the same VPN servers to log into his personal/identifiable accounts as he used to control the botnet.

They are still looking for him. The FBI is offering $3 million for information leading to the capture of  Bogachev (showed us pictures of the guy - he likes his fancy boats).

Let me know if you get a piece of that bounty!