Thursday, August 6, 2015

BHUSA15: When IoT Attacks: Hacking a Linux-Powered Rifle

Runa A. Sandvik is a privacy and security researcher, working at the intersection of technology, law and policy.

Michael Auger is an IT security specialist with extensive experience integrating and leveraging IT security tools.

Runa and Mike spent the last year researching the TrackingPoint 338TP. When CNN asked Runa why she would attack a rifle, she replied, because "cars are boring".

The base rifle is a Remington 700 .308 bolt-action rifle. The hardware platform is called "Cascade" and runs a modified Angstrom Linux.

It uses the Tag Track Xact (TTX) system.

The wifi is off by default, and you cannot fire the rifle remotely.  The gun still works even if the scope/targeting system is broken - it is a gun, after all.

The first thing they did was run a port scan on the rifle. It runs a web server and an RTSP server.
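A TCP connect scan like the one they describe is only a few lines of Python. This is a generic sketch: the address is a placeholder (not the rifle's real IP), and 80/443/554 are just the conventional ports for web and RTSP services.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Placeholder address; the conventional web (80/443) and RTSP (554) ports.
    print(scan_ports("192.0.2.1", [80, 443, 554]))
```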

The more interesting side is the TrackingPoint app - you can adjust settings for wind, manage media, and do software updates.

The mobile app was using encryption, etc.

When they got stuck... they just tried ALL THE THINGS! :-)

After round one, they found that the SSID contains the serial number and can't be changed. The WPA2 key is guessable, and it also cannot be changed. Any RTSP client can stream the scope view.

The API is unauthenticated, but it does validate input.

There is a 4-digit PIN that locks advanced mode - you can brute-force it. "/set_factory_defaults" resets the lock. Updates to the rifle are GPG encrypted and signed.
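An unauthenticated 4-digit PIN falls to brute force almost immediately. A minimal sketch - the `try_pin` callback stands in for whatever hypothetical HTTP call actually submits a PIN to the scope; nothing here is TrackingPoint's real API:

```python
def brute_force_pin(try_pin):
    """Try every 4-digit PIN until `try_pin(pin)` reports success.

    `try_pin` is a callable that submits one PIN to the (hypothetical)
    unlock endpoint and returns True on success - e.g. an HTTP POST whose
    response indicates advanced mode was unlocked.
    """
    for n in range(10000):
        pin = f"{n:04d}"          # "0000" .. "9999"
        if try_pin(pin):
            return pin
    return None

# Even at one request per second, the whole space is exhausted in under
# three hours; with no rate limiting it takes minutes.
```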

Round Two...

Fortunately, TrackingPoint's website has an excellent diagram of what the rifle looks like, useful before tearing it apart. They actually used their CAD drawings in their marketing material. Though the website shows a lot of 2D views, in reality the circuit board is round :-)

To get the circuit board out, you have to desolder at least 60 pins.

So excited to see it booting Linux!

But, alas, it did not auto-login as root.

Console access is at least password protected, and the kernel and filesystem are on separate chips.

The filesystem chip was hidden under a big capacitor - they missed it the first few times.

Some of the folks they were working with recognized the silk screening on the board and recommended an eMMC to USB converter. Then they got to see what was on the filesystem.

The webserver had a lot of interesting APIs, like ssh_accept - that could be fun!

The system backend requires an unpublished API call to open its port. The API validates input; the backend does not. You can make temporary changes to the system: change wind, temperature, and ballistics values, control the solenoid, etc. They could even lock the trigger, crash the gun, make the scope think it is attached to a different firearm, or make one command segfault (which triggers a reboot).

The changes are temporary; if the user reboots, the changes will be lost.

Now time for demos!

We watched a change in the ballistics values skew the calculation so that the shot hit the target next to the one being aimed at.

TrackingPoint operates with two GPG keys for updates, one of which is on the scope. Update script accepts packages signed by either of the two keys. This will allow you to make persistent changes to the system AND get root access.

They were successfully able to log in as root with no password!

Round three findings: the admin API is also unauthenticated, the system backend is unauthenticated and does not validate input, and the GPG key on the scope can encrypt and sign updates.

They did need prior physical access to the rifle for all of these attacks.

But there are ways to do remote code execution - if you can get on the WiFi.

It's not all that bad... USB ports are disabled during boot, media is deleted from the scope once downloaded, WPA2 is in use (even if the key cannot be changed), the API does validate user input, console access is password protected, and software updates are GPG encrypted and signed.

Will this get better? They had been calling TrackingPoint since April with zero replies, until Wired called... since then, they have received two calls. TrackingPoint is working on a patch. They have been easy to work with, once the connection was made.

"You can continue to use WiFi (to download photos or connect to ShotView) if you are confident no hackers are within 100 feet" - note on TrackingPoint's website. :-)

TrackingPoint had done security work (better than most people doing embedded work).

BHUSA15: Black Hat Panel – Beyond the Gender Gap: Empowering Women in Security

Kelly Jackson Higgins, Executive Editor at Dark Reading (panel moderator)

This is a growing industry, but women are leaving. We need more people, so how do we empower the women we have?

Panelists:
Justine Bone, Independent Consultant
Joyce Brocaglia, Founder, Alta Associates (executive search firm for security, etc.)
Jennifer Imhoff-Dousharm, co-founder, dc408 and Vegas 2.0 hacker groups
Katie Moussouris, Chief Policy Officer, HackerOne

All of the women here come from different backgrounds - hacking (black hat and white), executives, startups, big companies.

Justine learned a big hard lesson when she dropped out of industry to work on her own startup - at the same time as having kids.  While she was working her butt off, she wasn't showcasing her work or engaging with her peers - everybody thought she'd taken off time to have kids, totally unaware of the hard work she'd been doing. Lesson: always engage, promote, etc.

Joyce mentioned that she sees a lot of Employee Resource Programs that are more checkboxes than actually beneficial programs for women. She noted that a company might pay Alta $100,000-$150,000 to find a new executive, but when she asks if they'll pay $100,000 for a leadership program with a proven track record, the same company will say "we don't have that kind of money." (note: sigh)

Katie started up a bug bounty program at MS - it was hard.  Big companies had vowed to never pay ransom for security bugs - she had to present this in a different way, to get it to line up with their goals, getting organizational empathy (when is the best time for devs to get vulns). Hence, IE 11 Beta Bug Bounty - which ran for 30 days. Alas, folks would hold on to their vulns until after beta was closed, forcing MS to release vulnerability reports.

We have a shortage of engineers, so why aren't women coming in? Jennifer said she doesn't see it as a pipeline problem - she noted that women who grew up in the 80s were exposed to computers (yay, Oregon Trail) and didn't hit the "cootie" problem until they entered corporate America. It's scary to be the only person like you in the room - you don't realize it until you are that only woman. It doesn't matter how strong you are or how much you lean in, you have to carry that weight of diversity.

Justine noted the "DefCon problem" - it's annoying that everyone asks you "who are you here with? who's your boyfriend" - it gets exhausting. (Note: YES - happened to me every year, after my bf & I broke up and I continued to go alone).  Explaining over and over that you deserve to be there, what you do, that you really are technical.

Katie noted there's a challenge as well: you are expected to be a representative of ALL women, regardless of how different we all are. She hates the question "what's it like being a woman in security?" - stop asking her about the least important aspect of her job and her personality, she is so much more than just "a woman in security."

Joyce notes that she sees job advertisements all the time that will literally use the male gendered pronoun, "he will be responsible for X, Y, Z". Knowing that men will apply for a job where they only meet 6/10 of the qualifications, while women require 9/10 before they will apply... adding "he" to the description is one thing off the bat that the woman will not be. Confidence matters as much as competence - men tend to have more confidence, which may help explain why women are not making it to the higher levels.

Companies need to invest in younger women to make these changes - they are an investment.  Women and men need sponsors, but companies should make sure that it's not only men getting them. If women are raising their hands for stretch assignments, but getting skipped over... is it their fault?

Justine noted that we also need to be willing to accept help - if someone tries to bring you into the "old boys club" - GO! Joyce cautioned, though, don't wait for it.

Justine says she's always criticized for her travel for work, by friends, family, etc. How could she leave her kids? She notes she's on these flights with a ton of men doing the same thing - and nobody criticizes them.

Can you have work and family?  Yes, but you need help - nannies, families, etc. "Women have the capacity to multitask and get shit done," Joyce.

Personal space at these events is important. Katie had a run in with "Handsy McMansy" last night - fortunately, she's adept at profanity to throw at him. The men around though seemed shell shocked and didn't know what to do. "I don't need somebody to fight for me, I need them to fight with me."

Joyce had a run in last night that was similar with a male executive, sloppy drunk, asking dumb questions and hanging on people. If a woman did that - she would be shamed by the men around.

Joyce noted that women still don't get taken seriously at booths at events like RSA.  People don't want to talk to the women, even if they may be the one making purchasing decisions.

Justine looked at the Black Hat review board this morning - there is only ONE woman on the review board. Not saying the men on the board are not skilled and talented, but they need diversity.

Joyce noted that women need to submit talks, start with smaller conferences and get practice, confidence, etc. 

Men should talk to women at conferences - acknowledge them, don't question why they are here - but actually engage. Like, "what do you do at your company?" vs "who's your boyfriend?"

Joyce noted that older generations of men are lacking the emotional intelligence to understand why what they are doing or saying is not okay. She has high hopes for the younger generations, who grew up with working moms, etc.

Katie noted that women need to stop denigrating themselves - the world will do that for you. Speak about your work in positive tones, not "well, I don't do kernel work, I don't do...". Believe in yourself and don't be afraid to tell the world about what you do.

BHUSA15: Information Access and Information Sharing: Where We Are and Where We Are Going

Alejandro Mayorkas, Deputy Secretary of Homeland Security.

Homeland security means security of our institutions, security of our way of life and most importantly security of our values.  Security of the Internet is very much a part of what we do. It is clear that the challenges of network security are immense. We as a government are making advances in this area, but we are not where we need to be.

Every morning, the secretary and he get a briefing about threats, events that are occurring or are about to occur all over the world. Increasingly, Internet security events are more common in that meeting.

The more he travels around the country, the more obvious it becomes how important this is for everyone. Internationally, the same thing: foreign companies and governments all care about this.

The current state of affairs, with individualized responses, is not working well to ensure that the Internet is protected. DHS considers itself uniquely situated to address these concerns. DHS is a civilian agency, standing at the intersection of the private sector, the enforcement community, the intelligence community, and the desire to protect .gov. They have created a critical response set of protocols and an organization (the National Cybersecurity and Communications Integration Center).

DHS currently shares information in bulletins or entity to entity. It is not currently in an automated fashion. The President, in his last executive order, placed DHS in charge of leading information sharing with the private sector.

DHS wants an automated and near real time way to share and disseminate information, to raise the bar and capacity for the private sector to protect themselves.  When a threat is shared with DHS, they can receive that in automated form and disseminate in near real time to prevent replication of that threat.

One thing in their way: the issue of trust. That emanates from a variety of sources - can DHS keep this secure? Can you trust those providing information?

DHS needs to work on building trust - it will take time, but will be worth the effort.

As they are working on the automatic reception of cyber threats, please give them a chance and share some information so that they can prove their capabilities and prove their results.

Question about how important is it for private industry to participate?  Answer: very, many of them are very critical systems. It's critical they participate.

We have to understand our responsibilities for the public good. Alejandro hopes that sharing the cyber threat will have a public dimension. It's vital for them to be shared far more publicly than they are now. This is important for DHS's mission to secure this country.

DHS is very active in research and development in achieving network security - we are investing in public as well as private sector.

Various questions show that folks are nervous about sharing with the government, Alejandro noted that they will be working on correcting that.

Another questioner asked about the OPM breach, where lots of personal information was lost. He noted that not all agencies are as advanced as others, and they've been doing a 30-day security sprint with the goal of improving this.

Question: will information about 0-days that the government has bought be shared? Answer: we are going to declassify and release everything that we can.

Question: the government is known for antiquated systems, so how do we know you'll do this right? Alejandro noted that they have to start with new gear and stay on top of the systems. (no Windows NT here)

Additionally, DHS is looking at recruiting the best and the brightest, and even looking at opening an office in Silicon Valley.

BHUSA15: Where? You can't Exploit What You Can't Find

Christopher Liebchen, Ahmad-Reza Sadeghi, Andrei Homescu, and Stephen Crane

We're concerned with many problems that are actually three decades old. Nowadays, everyone has access to cell phones, and there are many developers with different intentions and different backgrounds (particularly with respect to security).

So - how do we build secure systems from insecure code?  Seems counter-intuitive, but we have to do it. We cannot just keep adding more software onto these old systems.

We've had decades of run-time attacks - like the Morris Worm in 1988, which just keeps going.

There are a number of defenses, but often the "defenses" are broken just a few years later, sometimes by the original authors. So, there is a quest for practical and secure solutions...

Big software companies, like Google and Microsoft, are very interested in defending against run-time attacks, with solutions like EMET and Control Flow Guard from Microsoft, or IFCC and VTV from Google. But how secure are these solutions?

"The Beast in Your Memory" includes a bypass of EMET: return-oriented programming attacks against modern control-flow integrity protection techniques...

The main problems are memory corruption and memory leakage.

You can do a code-injection attack or a code-reuse attack.

Randomization can help a lot, but it's not perfect.

Remember the basic idea of return-oriented programming: use small instruction sequences instead of whole functions. The instruction sequences are two to five instructions long and all end with a return instruction; chaining them together controls what code is executed after each return.
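The chaining idea can be illustrated with a toy model: treat the stack as a list of gadget "addresses", where each gadget does a tiny bit of work and its final `ret` pops the next address. This is a conceptual sketch, not real exploitation code - the addresses and register dictionary are made up for illustration.

```python
def run_rop_chain(gadgets, stack, regs):
    """Toy model: `stack` is a list of gadget 'addresses'; each gadget's
    trailing ret hands control to whatever address is popped next."""
    while stack:
        addr = stack.pop(0)   # the gadget's ret consumes the next stack slot
        gadgets[addr](regs)   # run the 2-5 instruction sequence at that address
    return regs

def set_eax_5(regs): regs["eax"] = 5              # e.g. "pop eax; ret"
def set_ebx_7(regs): regs["ebx"] = 7              # e.g. "pop ebx; ret"
def add_regs(regs):  regs["eax"] += regs["ebx"]   # e.g. "add eax, ebx; ret"

# Reused snippets of existing, legitimate code, chained into new behavior:
gadgets = {0x1000: set_eax_5, 0x2000: set_ebx_7, 0x3000: add_regs}
regs = run_rop_chain(gadgets, [0x1000, 0x2000, 0x3000], {"eax": 0, "ebx": 0})
```

The attacker never injects code - only this list of addresses - which is why DEP alone doesn't stop it.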

Assumptions for the adversary model: memory pages are writable or executable, but not both (W^X); address space layout randomization (ASLR) is in place; and the attacker can disclose arbitrary readable memory.

Main defenses: code randomization and control flow integrity.


Randomization has low performance overhead and scales well to complex software. Though, it suffers from low system entropy, and information disclosure is still hard to prevent.

For CFI: formal security (explicit control-flow checks) - if something unexpected happens, you can stop execution (in theory). It's a trade-off between performance and security (inefficient), and it is challenging to integrate into complex software.

What about fine-grained ASLR? You are just trying to make things more complicated for the attacker. But this has been attacked by JIT-ROP (Black Hat 2013), which undermines any fine-grained ASLR and shows memory disclosures are far more damaging than believed. It can be instantiated with real-world exploits.

Then we got a pretty graphical demo of how JIT-ROP works.

Their current research is around code-pointer hiding.

Their objectives were to prevent code reuse & memory disclosure attacks. It should be comprehensive (ahead of time + JIT), practical (real browsers) and fast (less than 6% overhead).

We prevent direct memory disclosures by using execute-only code pages, which prevent direct disclosure of code. Previous implementations do not entirely meet our goals. So, we fully enforce execute-only permissions with current x86 hardware.

We have virtual addresses that get translated to physical addresses, and during the translation the MMU can enforce permissions. As soon as a code page is executable, it is also readable. But you might want something to be executable, yet not readable. You can do this with extended page tables (EPT), which can mark a page as executable only (not readable).
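The permission asymmetry can be shown with a toy model of the two page-table formats (simplified; real entries have many more bits, and hardware restricts some R/W/X combinations):

```python
def legacy_can(present, nx, access):
    """Legacy x86 paging: a present page is always readable, and the NX
    bit can only remove execute. 'Executable but not readable' cannot
    be expressed."""
    if not present:
        return False
    return {"read": True, "exec": not nx}[access]

def ept_can(r, w, x, access):
    """EPT: read, write, and execute are independent bits, so an
    execute-only (--X) code page is expressible."""
    return {"read": r, "write": w, "exec": x}[access]

# An execute-only code page under EPT: code runs, but cannot be leaked.
xonly = dict(r=False, w=False, x=True)
```

This is the property Readactor relies on: direct reads of code pages fault, so a memory-disclosure bug can no longer harvest gadgets.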

The attacker can leak pointers to functions, pointing to the beginning or the middle of a function - once they have that pointer, they can figure a lot more out.

So, we can add another layer of indirection: code pointer hiding!

They modified the Readactor compiler to get code/data separation, fine-grained code randomization, and code-pointer hiding.

Their goal is to protect applications that are frequent targets of attack. Those have JIT (just-in-time) code, which is harder to protect, as it frequently changes. Solution: alternate EPT usage between a mutable mapping (RW-) and an execute-only mapping (--X).

Now, does this actually work? They checked performance with SPEC CPU2006 and Chromium benchmarks, and checked practicality by compiling and protecting everything in Chromium.

The full Readactor caused roughly a 6.4% slowdown. But if you only use the hypervisor to enforce execute-only pages, that's only around a 2% performance impact, which seems acceptable.

How does it do with respect to security? Resilience against (in)direct memory disclosure.

Code reuse attacks are a severe threat; everyone acknowledges this. Readactor is the first hardware-enforced, execute-only, fine-grained memory randomization for current x86 hardware.

BHUSA15: The Memory Sinkhole – Unleashing an x86 Design Flaw Allowing Universal Privilege Escalation

Chris Domas is an embedded systems engineer and cyber security researcher, focused on innovative approaches to low level hardware and software RE and exploitation.

There has been a bug in the x86 architecture for more than 20 years... just waiting for Chris Domas to escalate privileges.

Chris did the demo on a small, cheap netbook. In case it didn't work, he had a stack of netbooks. We saw a general user run a simple program and get root.

Some things are so important that even the hypervisor should not be allowed to execute them.

We originally wanted to do power management without the OS having to worry about it - system management mode (SMM). SMM became a dumping ground for all kinds of things, and eventually it took on "security" enhancements. Now SMM is important: root of trust, TPM emulation and communication, cryptographic authentication...

Whenever there was something important or sensitive or secret, it got stuck in SMM.

Userland is at Ring 3, the kernel in Ring 0. Ring -1 is the hypervisor... Ring -2 is SMM. On modern systems, Ring 0 is not in control. We have to get deeper than (and hide from) Ring 0.

If you're in Ring 0 and try to read from SMRAM, you'll just get a bunch of garbage. The memory controller separates SMRAM from the rest of the system. If you're in SMM, though, you can read from SMRAM.

There are many protections on SMM - locks, blocks, etc. - but most exist in the memory controller hub. There is lots of research in this area on how to get to Ring -2.

The local APIC used to be a physically separate chip that did this management. But it's more efficient and cheaper to put the local APIC on the CPU. Now it's even faster!

Intel reserved 0xFEE00000-0xFEE01000, so to access that memory you have to go some roundabout way. When they created this model, it broke legacy systems that expected that segment of memory to map to something else. The Intel SDM, circa 1997, describes what happened here in the P6.

Now we're allowed to move where the APIC window is located, allowing us to access the APIC reserved space. This "fix" opens systems up to this vulnerability.

If we're in Ring 0 and try to read SMRAM, we will be denied - but you can do it from SMM. What if we're in Ring 0 and relocate the APIC window? Now, from Ring 0, we can read SMRAM. And now that we can do that, we can modify SMM's view of memory - the security enforcer has been removed.
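The relocation itself is just an MSR write: the local APIC base lives in the IA32_APIC_BASE MSR (0x1B), with the 4K-aligned base address in bits 12 and up. A sketch of the value computation only - actually writing it back requires a ring-0 `wrmsr`, and the SMBASE value 0xA0000 used below is just the legacy default, not any particular machine's:

```python
IA32_APIC_BASE_MSR = 0x1B          # MSR holding the local APIC base
APIC_BASE_MASK = 0xFFFFFF000       # 4K-aligned base-address field (bits 12-35)

def relocate_apic_msr(old_msr_value, new_base):
    """Compute the IA32_APIC_BASE value that moves the 4K APIC MMIO
    window to `new_base` (e.g. over SMRAM), preserving the flag bits
    (APIC enable, BSP) in the low part of the register."""
    assert new_base % 0x1000 == 0, "APIC window must be 4K-aligned"
    return (old_msr_value & ~APIC_BASE_MASK) | (new_base & APIC_BASE_MASK)

# Default base 0xFEE00000 with enable/BSP flags set; sinkhole it over the
# SMI entry point at SMBASE + 0x8000 (0xA0000 + 0x8000 here):
sinkholed = relocate_apic_msr(0xFEE00900, 0xA8000)
```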

How to attack Ring -2 from Ring 0? SMRAM acts as a safe haven for SMM code. As long as SMM code stays in SMRAM, Ring 0 cannot touch it. But if we can get SMM code to step out of its hiding spot, we can hijack it.

Move the APIC over SMRAM, corrupt execution, trigger a fault in SMM. This gets it to look up an exception handler - which is under our control. Though, that attack doesn't work: there's an undocumented security feature which triple-faults (resets) the system.

He overlaid the APIC MMIO range at the SMI entry point, SMBASE+0x8000, getting the APIC registers and the SMI entry point to overlap.

Now, we just need to store shellcode in the APIC registers. The challenge is that they have to be 4K-aligned. Place them exactly at the SMI entry point, so execution begins at exactly the start of the APIC registers, with 4096 bytes available.

Many registers are largely hardwired to 0, leaving few registers that can actually be changed. We need to do something useful before the system resets.

You need to keep things from executing right away before our last byte is activated.

We only really have 4 bits to do something to actively attack the system. Looking at the opcode map, there are not a lot of interesting things.

But the attack didn't work as expected. We still can't execute from the APIC, so we must control SMM with data alone.

How do we attack code when our only control is to disable some memory?

SMM code comes from the system firmware. Intel makes template code, which goes to independent BIOS vendors, and then the OEMs (HP, Toshiba, Lenovo) make more changes.

The only way to make a general attack is to look at the EFI template BIOS code from Intel, as that will be on EVERY system.

From Ring 0, we try to sinkhole the DSC to switch it into system management mode. We've lost control, but maybe it'll let us do something before memory resets.

(lots of stuff about self rewriting code, and far jumps and long jumps and lots of hex codes)

Then we successfully got the SMM to read code that we could control, by controlling the memory mapping.

With only 8 lines of code, to exploit: hardware remapping, descriptor cache configurations, etc.

In the end, used well behaving code in order to abuse a different area.

This has opened up a new class of exploits. Now that we have Ring -2 control, what can we do? We can disable the cryptographic checks, turn off temperature control, brick the system, or install a rootkit.

Once we have control, we can preempt the hypervisor, do periodic interception, filter Ring 0 I/O, modify memory, escalate processes, etc.

Adapted code from Dmytro Oleksiuk.

We can simultaneously take over all security controls. Mitigations don't look good: this is unpatchable and would need new processors... which Intel did. Their developers seem to have found this independently, and it is fixed as of Sandy Bridge and the 2013 Atom. They now check the SMRRs when the APIC is relocated.

intel.com/security has a write-up on this. They have been easy to work with, and have been working on mitigations wherever it was architecturally possible.

Wednesday, August 5, 2015

BHUSA15: Panel: Getting it Right: Straight Talk on Threat & Information Sharing

Panelists: Trey Ford (@treyford), Global Security Strategist at Rapid7; Kevin Bankston (@kevinbankston), Director of the Open Technology Institute and Co-Director of the Cybersecurity Initiative at New America; and @brianaengl and @hammem (the speaker lineup appears to have changed, so Twitter handles are what I've got :-) (also, the podium is super giant and blocking my view of the speakers, so I can't tell you who is saying what).

Sharing sounds like fun, but it's not as simple as we remember from our childhood.  There are legal implications, contracts, source trust issues, etc.

Intelligence is like a UDP packet you cast out and hope for the best.  How do you determine if the information is still relevant?

Facebook is working on this - how to do exchange of data?  What can we learn from it?

When people start sharing data, they realize that they need to share with someone who cares. I.e., if your concern is phishing, don't build a relationship with someone who is focused on bitcoin.

What is stopping companies from sharing information with other companies and the government? It will be relevant to you if new legislation passes.

Some of the barriers are around the Wiretap Act portion of the ECPA, which places limits on real-time communications and limits disclosure. Other limits: federal privacy laws protecting HIPAA data and educational records, self-imposed restrictions in terms of service, and antitrust laws (the DoJ could accuse them of colluding in an anti-competitive way).

Well, and there are nervous lawyers :-)

Most threat information doesn't include content or PII. Non-content can be liberally shared, with exceptions for security and consent via ToS. The DoJ has stated they won't go after companies sharing for these reasons. Companies already do a lot of sharing, so do they really need new legal permissions?

But there's the new CISA: the Cybersecurity Information Sharing Act, S. 754. It authorizes sharing of broadly defined "cyber threat indicators" and information about "defensive measures" with "any other entity or [any agency of] the Federal Government".

DHS must distribute all information to other agencies, including the NSA, "not subject to any delay or modification". The government can use the information to investigate or prosecute a range of crimes unrelated to cybersecurity.

The house is also working on bills!

Congress has been looking at CISA since 2009, and they are starting to feel like they have to do something so they can show they are serious about cybersecurity.

Please call your senators to oppose the bill and support privacy-enhancing amendments. If it does pass, it still has to go to conference with the House.

Check out StopCyberSpying.com or call 1-985-222-CISA for more information.

BHUSA15: Stranger Danger! What is the Risk from 3rd Party Libraries?

Kymberlee Price, Bugcrowd, and Jake Kouns is the CISO for Risk Based Security.

It's well known that vulnerability statistics suck (see Steve Christey's (MITRE) Black Hat 2013 talk).

But, the truth is - we are getting attacked, lots of new (and old) vulnerabilities.  This is getting worse every year, not better.

Secunia says there are 15,000 vulnerabilities, but they counted Heartbleed as 210 different vulnerabilities (our speakers say it was just one, while some audience members noted it was three).

There were 100+ vendors impacted by Heartbleed, impacting over 600,000+ servers.

Very large companies are using OpenSSL: Oracle, HP, Blackberry, Juniper, Intel, Cisco, Apple, etc... so it's not just little startups using open source anymore.

There have been 52 new vulnerabilities fixed since Heartbleed, with an average CVSS score of 6.78. Nine of them had a public exploit available.

We're beating up on OpenSSL - but what about the GNU C library (GHOST), which had a heap vulnerability in it? It's everywhere.

Efficiency at what cost? By leveraging third-party source, companies can deliver faster, cheaper, etc. But what are companies picking up in exchange? Some products have more than 100 third-party libraries in them. Are they being treated with as much scrutiny as they should be?

The speakers aren't saying: "Don't use 3rd party libraries", but rather to think about things during design and development.

All of the data they are sharing this week are from public sources, even though that data is limited.

Look at FFmpeg - it has 191 CVEs, but over 1,000 vulnerabilities fixed.

These vulnerabilities spread - think about the FreeType Project font-rendering toolkit. It's used by Android, FreeBSD, Chrome, OpenOffice, iOS, and video games (including the American Girl Doll game). Everywhere! There was a vulnerability (a missing parameter check) that allowed you to jailbreak your iPhone... or someone else to take over your iPhone. This is insidious, as you have to wait for the vendor to fix it.

libpng, Apache Tomcat... everyone is using these and including them in toolkits.

We shipped a vulnerability to Mars! (Java is on the Mars Rover).

Interesting to note: some vendors don't even release CVEs for anything under a CVSS of 5.0. The number of CVEs since 2007: OpenSSL (90), Flash (522), Java (539), FreeType (50), libpng (28), Apache Tomcat (100).

Now, this is not telling you what is more or less secure. For example, Adobe has an excellent bug bounty program and internal pen testers. Just because a product has only reported a few, doesn't mean more aren't lurking.

We should consider time to relief. How long does the vendor know about the issue before they provide a fix? You can use this to figure out how serious that vendor is about security.

They had to define a framework to understand time of exposure, identify vendors and products you want to work with, and establish a scorecard.

Vendor response time is how long from when the vendor was informed until they responded to the researcher. This can't be an automated reply - it must be an actual acknowledgement.

Time to patch: when do the customers get relief?

But another time to consider: how long were customers vulnerable? That is, how long from when the patch is available to when the patch was applied (many folks only do updates quarterly, for example). Total time of exposure covers the period from when the vulnerability was discovered until it was fixed at the customer site.
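These timeline metrics are simple date arithmetic. A sketch with made-up dates (using the report date as a proxy for discovery, since that is what outsiders can usually observe):

```python
from datetime import date

def exposure_metrics(reported, acknowledged, patch_released, patch_applied):
    """The talk's vulnerability-timeline metrics, in days."""
    return {
        "vendor_response_time": (acknowledged - reported).days,
        "time_to_patch": (patch_released - reported).days,
        "customer_exposure": (patch_applied - patch_released).days,
        "total_exposure": (patch_applied - reported).days,
    }

# Illustrative dates only, not from any real disclosure:
m = exposure_metrics(
    reported=date(2015, 1, 5),
    acknowledged=date(2015, 1, 19),
    patch_released=date(2015, 4, 1),
    patch_applied=date(2015, 5, 1),
)
```

Tracking these per vendor is what makes the scorecard they describe possible.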

We got to walk through a few case examples.

In one case, a researcher reached out to a company on Twitter asking how to securely disclose a vulnerability - and for 2.5 months they kept pointing the researcher at their insecure support page.

It is critical for vendors to respond promptly and investigate the issue.

And this data is hard to figure out, as the terminology for "zero day" (oh day, 0-day) seems to be malleable. The speakers believe that it's only a 0-day when the vendor does not know about it. Once the vendor knows, or the vuln is publicly disclosed, then it's no longer a zero day.

In one case, the vendor created a patch but did not release it; instead they wanted to roll it up into their next version release. In the end, their customers were exposed for 451 days.

While most companies update their systems every 30 days, their exposure could be much longer due to a vendor not actually providing the fix to their customers.

Advice: once you incorporate a third-party software suite into your tools, you need to become active in that community: watch it, help out, provide funding; otherwise you are putting your own customers at risk.

You also need a clear prioritization scheme, to know what to fix and when (as most likely your incoming rate is higher than your fix rate). Proactively manage your risk: understand what third-party code your organization relies on, implement a plan to address the exposure, and work with the vendors.