Three weeks ago I got a call from the operations director of a packaging manufacturer in the mid-Hudson Valley. Two hundred and twenty employees, three production lines, a decent IT team of four. He said they were “pretty locked down”—endpoint protection, MFA on email, regular backups. He wanted to talk about cyber insurance renewal, which had jumped 40% and now required an incident response plan on file. He had one. It was twelve pages. It had been written in 2023. I asked him one question: “If ransomware hit right now—this minute—who makes the call to shut down the production network?” Silence.
That silence is the gap. Not the policy. Not the tools. The gap between having a plan and being able to execute it under the conditions that actually exist today. The ransomware landscape of 2025 does not look like the one that plan was written for. The economics changed. The speed changed. The targeting changed. And most critically, the attacker’s playbook shifted from “encrypt everything and wait” to something considerably more destructive.
The Speed Problem Is Worse Than You Think
CrowdStrike’s 2025 data showed breakout time—the interval between an attacker gaining initial access and moving laterally to other systems—dropping to as little as four minutes for the fastest groups. Mandiant’s M-Trends report documented the handoff from initial access brokers to ransomware operators compressing from hours to seconds. These are not theoretical benchmarks. They are measured timelines from real intrusions in organizations no different from the one that called me.
Four minutes. Think about what happens in four minutes at your organization. Your SOC analyst is still triaging the alert. Your IT manager is still reading the email notification. Your incident response plan is still in a SharePoint folder that three people know exists. The attacker is already on your domain controller.
The incident response plans that worked in 2022—when you had hours to detect, assess, and decide—are structurally incapable of addressing a four-minute breakout time. This is not a tuning problem. It is an architecture problem.
The packaging manufacturer had endpoint detection. It would have fired an alert. But their alert workflow routed to an email distribution list monitored by two people during business hours. At 2 AM on a Saturday—when 76% of ransomware deployments occur, according to Sophos’s 2025 Active Adversary Report—that alert would sit unread for hours. By the time someone saw it, the exfiltration would be complete and the encryption finished.
Recovery Denial: The New Playbook
The shift that caught most organizations off guard in 2025 wasn’t speed. It was what the industry has started calling “recovery denial”—the deliberate, systematic targeting of an organization’s ability to recover without paying.
Early ransomware encrypted files and left the infrastructure intact. If you had good backups, you could rebuild. The calculus was straightforward: backup quality determined whether you paid. That calculus no longer holds. Modern ransomware operations now include dedicated playbook steps for identifying and destroying backup infrastructure before deploying the encryption payload. They target Veeam servers. They delete shadow copies. They compromise cloud backup credentials. They specifically look for and disable immutable storage configurations that weren’t properly architected.
What Changed
Attackers now spend days inside your network before deploying ransomware. They use that time to map your backup infrastructure, identify recovery mechanisms, and pre-position to destroy them simultaneously with encryption.
Why It Matters
The assumption that “we have backups, so we’re covered” is the single most dangerous belief in mid-market cybersecurity. Your backup strategy must assume the attacker knows about it and has a plan to defeat it.
The packaging manufacturer had backups. Daily snapshots to a NAS device on the same network. Weekly offsite replication. Both were reachable from the domain admin credentials that a ransomware operator would have within four minutes of lateral movement. Neither had been tested for a full restore in over a year. The offsite replication used the same service account credentials as the primary—meaning if one was compromised, both were.
I didn’t have to tell him this was a problem. He could see it once we mapped it. What he couldn’t see—until we walked through it—was that his 2023 incident response plan assumed the backups would be available. Every recovery timeline in that plan started from “restore from most recent backup.” If the backups are gone, the plan is fiction.
What an Honest Response Plan Looks Like Now
An incident response plan that reflects current conditions has to answer different questions than the ones most plans were built around. The old questions: Who do we call? What do we do first? How do we communicate? Those still matter. But the questions that determine whether you survive have shifted.
The Five Questions Your IRP Must Answer
- Who has standing authority to isolate the production network, right now, without a phone call?
- Do your backups survive an attacker who already holds domain admin credentials?
- How long does a full restore actually take—measured, not assumed?
- How do you communicate when email and the primary network are down?
- Does your insurance actually respond—sublimits, waiting periods, and a breach response line you can reach after hours?
What the Packaging Manufacturer Did
We rebuilt their response capability over sixty days. Not the whole security program—that’s a longer engagement. Just the response posture. The changes were not exotic. They were structural.
First, we separated the backup infrastructure. New service accounts, isolated network segment, immutable storage with a retention policy that couldn’t be modified from the production domain. Cost: approximately $8,000 in additional storage and a weekend of engineering time. This single change moved them from “backups probably don’t survive a real attack” to “backups survive anything short of physical destruction of the offsite facility.”
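The structural properties of that change can be expressed as an auditable check. A minimal sketch, not the manufacturer’s actual tooling—the account names, VLAN labels, and the 30-day floor are hypothetical, but the three checks mirror the changes described above:

```python
# Hypothetical post-rebuild backup posture; field names are illustrative.
REBUILT_BACKUP_POSTURE = {
    "service_account": "svc-backup-offsite",       # dedicated to backups only
    "production_accounts": {"svc-veeam-primary"},  # accounts usable from the production domain
    "network_segment": "backup-vlan-40",
    "production_segments": {"prod-vlan-10", "prod-vlan-20"},
    "immutable_retention_days": 30,
}

def audit_isolation(cfg: dict) -> list[str]:
    """Return findings; an empty list means the isolation posture holds."""
    findings = []
    if cfg["service_account"] in cfg["production_accounts"]:
        findings.append("backup service account is usable from the production domain")
    if cfg["network_segment"] in cfg["production_segments"]:
        findings.append("backup storage sits on a production network segment")
    if cfg["immutable_retention_days"] < 30:
        findings.append("immutable retention is below the approved 30-day floor")
    return findings
```

Run the same checks against the pre-engagement state—shared service account, NAS on the production segment, no immutability—and all three findings fire, which is exactly the “backups probably don’t survive a real attack” posture.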
Second, we pre-authorized network isolation decisions. The senior IT administrator now has documented, board-approved authority to disconnect production systems from the network if EDR alerts meet specific criteria—without calling anyone first. We defined the criteria precisely enough that the decision is mechanical, not judgmental. If X happens, do Y. No phone tree. No committee. This shaved the response time from “however long it takes to reach an executive” to “however long it takes to type a command.”
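Mechanical means expressible in code. A sketch of what such a trigger rule can look like—the categories, thresholds, and action names here are illustrative, not the criteria we actually wrote for them:

```python
from dataclasses import dataclass

@dataclass
class EdrAlert:
    severity: str    # e.g. "low", "high", "critical"
    category: str    # e.g. "ransomware", "mass_file_modification"
    host_count: int  # hosts showing the same behavior

# Hypothetical categories that pre-authorize isolation without escalation.
ISOLATE_CATEGORIES = {"ransomware", "mass_file_modification", "shadow_copy_deletion"}

def decide(alert: EdrAlert) -> str:
    """If X happens, do Y: no phone tree, no committee, no judgment call."""
    if alert.category in ISOLATE_CATEGORIES and alert.severity == "critical":
        return "isolate_production_network"
    if alert.host_count >= 3 and alert.severity in {"high", "critical"}:
        return "isolate_production_network"
    return "escalate_to_on_call"
```

The point of writing it this way is that the administrator at 2 AM executes a lookup, not a deliberation—and the board has already approved every branch.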
Third, we ran a tabletop exercise. Two hours on a Tuesday afternoon. We walked through a ransomware scenario that started at 11 PM on a Friday. The exercise revealed three things the written plan didn’t address: the IT team didn’t have after-hours contact information for the insurance carrier’s breach response line, nobody knew the credentials for the offsite backup console, and the communications plan assumed email would be available—which it wouldn’t be.
Fourth, we timed a full restore. It took 31 hours. Their plan had assumed 8. That single data point changed how the operations director talked to his board about cyber risk. Thirty-one hours of production downtime at their margins was a six-figure loss. Suddenly the $8,000 backup investment and the $15,000 annual cost of a fractional security advisor looked different in the math.
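The arithmetic that changed the board conversation is simple enough to show. The throughput and penalty figures below are illustrative placeholders—the manufacturer’s actual numbers are confidential—and only the 31-versus-8-hour gap comes from the engagement:

```python
measured_restore_hours = 31      # from the timed full restore
assumed_restore_hours = 8        # what the 2023 plan assumed

hourly_production_value = 4_500  # hypothetical revenue per production hour
contractual_penalty = 25_000     # hypothetical late-delivery penalties

def downtime_loss(hours: float) -> float:
    """Total loss for an outage of the given length."""
    return hours * hourly_production_value + contractual_penalty

gap = downtime_loss(measured_restore_hours) - downtime_loss(assumed_restore_hours)
print(f"Loss at measured restore time: ${downtime_loss(measured_restore_hours):,.0f}")
print(f"Unbudgeted exposure vs. the plan's assumption: ${gap:,.0f}")
```

Even with placeholder inputs, the structure of the result holds: the measured restore time lands in six figures, and most of that exposure was invisible while the plan’s 8-hour assumption went untested.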
The 60-Day Response Readiness Checklist
- Pre-authorize network isolation decisions with specific trigger criteria—no executive approval required at 2 AM
- Isolate backup infrastructure: separate credentials, separate network segment, immutable retention
- Test a full restore end-to-end and record the actual time. Write that number down. Show it to the board.
- Run a tabletop exercise with a realistic scenario. Friday night. Key people unavailable. Email down. What happens?
- Verify your insurance coverage: sublimits, waiting periods, excluded scenarios, breach response hotline after hours
- Establish an out-of-band communication channel that doesn’t depend on your primary infrastructure
- Document the decision tree: if X alert fires, who does what, in what order, with what authority
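The last item on that list—the decision tree—works best when it reads like a runbook rather than prose. A sketch of the shape, with illustrative actors and steps rather than any organization’s actual tree:

```python
# Hypothetical runbook entries: who does what, in what order, with what authority.
RANSOMWARE_DECISION_TREE = [
    {"step": 1, "actor": "senior IT administrator",
     "action": "isolate the production network at the core switch",
     "authority": "pre-authorized, no approval required"},
    {"step": 2, "actor": "on-call IT staff",
     "action": "verify the offsite backup console via the out-of-band channel",
     "authority": "pre-authorized"},
    {"step": 3, "actor": "operations director",
     "action": "call the insurance carrier's 24/7 breach response line",
     "authority": "notification duty, not an approval gate"},
]

def runbook(tree):
    """Yield numbered, single-line instructions in execution order."""
    for s in sorted(tree, key=lambda x: x["step"]):
        yield f"{s['step']}. {s['actor']}: {s['action']} ({s['authority']})"
```

Printed out and taped inside the server room door, this is the document people actually follow at 2 AM—the twelve-page PDF is the appendix.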
The Math Your Board Needs to See
The packaging manufacturer’s board had been hearing about cybersecurity as an IT project for three years. What changed their engagement was a single number: 31 hours. Thirty-one hours of downtime, multiplied by the value of their hourly production throughput and stacked with contractual penalties, produced a loss figure that dwarfed every security investment they’d ever considered. The conversation stopped being about “should we spend money on security” and started being about “what’s the most effective way to reduce a specific, quantified business risk.”
That shift—from security as expense to security as risk management—is available to every mid-market organization that does the work to quantify its exposure honestly. It requires testing the response plan, measuring the recovery time, and presenting the financial implications to leadership in terms they already use for every other business risk. Most organizations skip this step because it’s uncomfortable. The numbers are always worse than the assumption. That discomfort is precisely the point. The gap between the assumption and the reality is where the risk lives.
You don’t need a bigger security budget. You need a tested recovery time, an honest conversation with your board, and the structural changes that make your response plan survivable under real conditions.
References
CrowdStrike. (2025). Global Threat Report 2025. CrowdStrike Holdings, Inc.
Mandiant. (2025). M-Trends 2025. Google Cloud Security.
Sophos. (2025). Active Adversary Report 2025. Sophos Group plc.