Why Small Businesses Should Stop Treating Windows Updates as an Afterthought
The CVE was 11 months old by the time the ransomware hit. Microsoft had patched it in March; the compromise came in February of the following year, so there had been almost a year of opportunity to apply the fix. The attacker exploited that known vulnerability on an unpatched Windows Server, nominally just the file server for an accounting practice of about thirty employees.
The patch was free. It had been free for 11 months. Applying it would have taken about forty minutes including a reboot.
We didn't manage that environment. We picked up the recovery work afterward, which took several weeks and cost the firm well into six figures between the ransom they didn't pay, the consulting they did pay, the lost productivity, the client notifications, and the cyber-insurance increase the following year. The whole thing was preventable in a way that gets harder to admit the more you look at it.
This isn't a rare story. We see versions of it every quarter. The specific CVE changes. The specific environment changes. The pattern doesn't.
Why SMBs Defer Patches
The deferral isn't laziness. It's a stack of plausible reasons:
Fear of breakage. Somebody got burned by an update in 2018 or 2019. A feature update broke a key application. Office got auto-updated and broke macros. A driver update took the laptops down for two days. The lesson learned was "updates break things," and the conclusion drawn was "don't update."
No test environment. They don't have a way to validate an update against their real workloads before pushing it everywhere. Without a test ring, applying patches feels like rolling dice. The instinct is to delay.
"It's been fine." Nothing has broken because nothing has been patched. The reasoning runs backward: if updating risks breakage and not updating hasn't caused problems yet, then not updating is the safer path.
Limited maintenance windows. Servers can't be rebooted during business hours. Workstations can't be unavailable during the morning rush. Pharmacy, manufacturing, healthcare, retail — all have windows narrower than a generic "weekend maintenance" schedule. Patching gets pushed because there's never quite the right time.
The patch-management product is opaque. They have WSUS, or Intune, or the RMM tool's patching module. Reports show 87% compliance. Nobody knows which 13% is failing or why. The dashboard is comforting but not informative.
Every one of these has merit. The cumulative result is that the patching cycle drifts, the gap between "patch available" and "patch applied" grows, and the exposure window keeps expanding.
What Modern Patching Is For
The threat landscape has changed in ways that make patch deferral much more expensive than it used to be.
Vulnerabilities are weaponized faster. The gap between a CVE being published and exploits appearing in the wild is now measured in days for high-severity issues, sometimes hours. Ransomware operators run automated scans for known CVEs constantly. A patch that's three months stale on an internet-exposed service is a beacon.
Cyber insurance has stopped paying for poor hygiene. Underwriters increasingly require demonstrated patching discipline as a condition of coverage. Some won't bind a policy without proof of MDM-enforced patching. A claim that comes in for a CVE the policyholder had eleven months to fix often gets contested.
Compliance frameworks have caught up. CMMC, NIST 800-171, HIPAA, PCI DSS, and most state-level data protection statutes now have explicit patching requirements with defined timeframes. Critical patches within 14 or 30 days, depending on the framework. Documented evidence of compliance.
And the cost of a breach has gone up. Not just the ransom (which most organizations don't pay anymore), but the consulting, the legal, the notification obligations, the business interruption, and the long tail of remediation work. The math used to be "patching is annoying, breaches are unlikely." It isn't anymore.
What Proper Patching Discipline Looks Like
The good news is that proper patching isn't complicated, and most of the heavy lifting is one-time work to set up the discipline. Once it's running, the recurring effort is modest.
A ring structure
You don't push patches to everyone simultaneously. You have a test ring (5–10% of devices, including some IT staff and some volunteers) that gets patches first. After a few business days with no reported issues, the rest of the fleet gets the same patches. For some patch categories — critical security updates with active exploitation — the testing window may be compressed, but the structure is still in place.
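To make the structure concrete, here's a minimal sketch of deterministic ring assignment in Python. It isn't tied to Intune, WSUS, or any particular RMM, and the device names, ring shares, and deferral days are all illustrative:

```python
import hashlib

# Ring layout: share of the fleet and days of deferral after Patch
# Tuesday. The percentages and day counts here are illustrative,
# not prescriptive; tune them to your fleet.
RINGS = [
    ("test",  0.10, 1),   # ~10% of devices, patched the next day
    ("broad", 0.90, 5),   # everyone else, a few business days later
]

# Known volunteers and IT staff get pinned into the test ring by hand;
# these device names are hypothetical.
PINNED_TEST = {"IT-LAPTOP-01", "WS-007"}

def assign_ring(device_name: str) -> str:
    """Deterministically place a device in a ring.

    Hashing the device name keeps the assignment stable across runs,
    so the test ring is the same set of machines every month instead
    of a random sample that changes each cycle.
    """
    if device_name in PINNED_TEST:
        return "test"
    digest = hashlib.sha256(device_name.lower().encode()).digest()
    bucket = digest[0] / 255.0          # map first hash byte onto [0, 1]
    cumulative = 0.0
    for ring_name, share, _deferral in RINGS:
        cumulative += share
        if bucket <= cumulative:
            return ring_name
    return RINGS[-1][0]                 # guard against float rounding

if __name__ == "__main__":
    fleet = [f"WS-{n:03d}" for n in range(1, 31)]   # a 30-device fleet
    for device in fleet:
        print(f"{device}: {assign_ring(device)}")
```

The design choice worth copying is the determinism: because assignment is a function of the name, the test ring gives you the same early-warning machines every cycle, which is what makes its feedback meaningful.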
A schedule
Patches roll on a predictable cadence. Microsoft's Patch Tuesday is the second Tuesday of every month. Most SMBs benefit from a fixed weekly or biweekly patching window after that. Users learn that Wednesday or Thursday after Patch Tuesday is when things get updated. The cadence becomes part of the operational background instead of a periodic emergency.
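Patch Tuesday is predictable enough to compute, which means the whole cadence can live in code or a calendar rule rather than in someone's memory. A small sketch, reusing the two-ring layout above; the deferral days are assumptions, not a recommendation:

```python
from datetime import date, timedelta

def patch_tuesday(year: int, month: int) -> date:
    """Second Tuesday of the month (Microsoft's Patch Tuesday)."""
    first = date(year, month, 1)
    # weekday(): Monday is 0, so Tuesday is 1.
    first_tuesday = first + timedelta(days=(1 - first.weekday()) % 7)
    return first_tuesday + timedelta(days=7)

def ring_schedule(year: int, month: int,
                  deferrals: dict[str, int]) -> dict[str, date]:
    """Date each ring receives that month's patches."""
    base = patch_tuesday(year, month)
    return {ring: base + timedelta(days=days)
            for ring, days in deferrals.items()}

if __name__ == "__main__":
    # March 2024: Patch Tuesday fell on the 12th.
    print(patch_tuesday(2024, 3))                          # 2024-03-12
    print(ring_schedule(2024, 3, {"test": 1, "broad": 5}))
```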
Monitoring that actually means something
The patching dashboard tells you, by device, which patches are missing, why deployment failed if it failed, and how to remediate. Devices that haven't checked in for a defined period are flagged. Devices that have been failing the same patch for multiple cycles are flagged. The 13% non-compliant from the opaque dashboard becomes a list of named devices with specific reasons.
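Here's roughly what that flagging logic amounts to, sketched over a hypothetical per-device report. The thresholds, device names, and KB number are illustrative; the point is that the output is a worklist, not a percentage.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative thresholds; tune them to your environment.
STALE_CHECKIN_DAYS = 7
MAX_FAILED_CYCLES = 2

@dataclass
class DeviceStatus:
    name: str
    last_checkin: date
    failed_cycles: int          # consecutive cycles the same patch has failed
    failing_patch: str | None   # e.g. a KB number; None if nothing is failing

def flag_devices(fleet: list[DeviceStatus], today: date) -> list[str]:
    """Turn a compliance percentage into named devices with reasons."""
    flags = []
    for dev in fleet:
        if today - dev.last_checkin > timedelta(days=STALE_CHECKIN_DAYS):
            flags.append(f"{dev.name}: no check-in since {dev.last_checkin}")
        if dev.failed_cycles >= MAX_FAILED_CYCLES and dev.failing_patch:
            flags.append(f"{dev.name}: {dev.failing_patch} has failed "
                         f"{dev.failed_cycles} cycles running")
    return flags

if __name__ == "__main__":
    today = date(2024, 4, 15)
    fleet = [
        DeviceStatus("WS-014", date(2024, 4, 2), 0, None),          # stale
        DeviceStatus("WS-022", date(2024, 4, 14), 3, "KB5034441"),  # stuck patch
        DeviceStatus("WS-003", date(2024, 4, 14), 0, None),         # healthy
    ]
    for line in flag_devices(fleet, today):
        print(line)
```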
Rollback when needed
The rare patch that causes problems can be reverted across the fleet quickly. Most modern patching tools support this for OS and application patches. For driver and firmware patches, the recovery path is documented separately. We covered the rollback discipline in our piece on safer software rollouts.
Server patching is its own discipline
Servers don't get patched on the same schedule as workstations. They run on a maintenance-window cadence that matches the business — late evening, weekend overnight, or whatever fits. Server patches get more pre-deployment validation, because the blast radius of a failed server reboot is larger. Critical security patches get applied within the window the threat justifies, not on the next scheduled maintenance.
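One way to keep that policy honest is to write it down as a deadline function rather than a habit. A sketch, with day counts that are our illustrative knobs rather than any framework's requirement:

```python
from datetime import date, timedelta

def patch_deadline(released: date, severity: str,
                   actively_exploited: bool, next_window: date) -> date:
    """Deadline a server patch must be installed by.

    Routine patches ride the normal maintenance-window cadence;
    critical ones get a deadline the threat justifies. The day counts
    are illustrative policy knobs, not a standard.
    """
    if actively_exploited:
        # Out-of-band: schedule an emergency window, don't wait.
        return released + timedelta(days=2)
    if severity == "critical":
        # The sooner of the next window or a hard 14-day cap.
        return min(next_window, released + timedelta(days=14))
    return next_window

if __name__ == "__main__":
    window = date(2024, 4, 27)   # e.g. the next Saturday-overnight window
    release = date(2024, 4, 9)   # that month's Patch Tuesday
    print(patch_deadline(release, "critical", True, window))    # 2024-04-11
    print(patch_deadline(release, "critical", False, window))   # 2024-04-23
    print(patch_deadline(release, "important", False, window))  # 2024-04-27
```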
Where the Right Posture Sits
Not every environment needs aerospace-grade patching discipline. The right level is the one where:
- Your exposure window for critical security patches is measured in days to a few weeks, not months (the sketch after this list shows how to measure it)
- You can answer "are we patched against CVE-X?" with confidence and evidence
- A failed patch doesn't take you by surprise, and you have a path back
- Your cyber-insurance carrier sees your patching report and doesn't have follow-up questions
- Your compliance auditor sees your patching report and doesn't have follow-up questions
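The first two bullets reduce to date arithmetic once you have per-device install dates. A sketch, with a hand-written inventory standing in for the report your patching tool would actually produce:

```python
from datetime import date

# Hypothetical inventory: device -> date the patch for the CVE in
# question was installed (None means it is still missing). In practice
# this comes from your patch-management tool's per-device report.
INSTALLED = {
    "WS-001": date(2024, 3, 14),
    "WS-002": date(2024, 3, 18),
    "SRV-FS01": None,            # the unpatched file server
}

def exposure_report(patch_released: date, today: date) -> None:
    """Answer 'are we patched against CVE-X?' with per-device evidence."""
    for device, installed in sorted(INSTALLED.items()):
        if installed is None:
            days = (today - patch_released).days
            print(f"{device}: NOT PATCHED, exposed {days} days and counting")
        else:
            days = (installed - patch_released).days
            print(f"{device}: patched after {days} days of exposure")

if __name__ == "__main__":
    # Release and audit dates chosen to mirror the opening story:
    # a March patch still missing the following February.
    exposure_report(patch_released=date(2024, 3, 12), today=date(2025, 2, 10))
```

Run against the opening story's timeline, the unpatched file server shows 335 days of exposure and counting, which is the number the incident report ends up being about.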
For most SMBs, that level is fully achievable with the patching tooling already included in Microsoft 365 Business Premium and a modest amount of ongoing operational discipline. The discipline is what's missing in most of the environments we audit, not the tooling.
What Happens If You Don't
Eventually, statistically, something on the unpatched edge of your fleet gets exploited. It might be a worm, it might be ransomware, it might be a credential-stealing payload that opens a longer-tail compromise. The specific shape varies. That it happens at all, given enough time and a wide enough patch gap, isn't bad luck; it's structural.
When it happens, the recovery cost is large enough that the cumulative effort of doing patching properly for years would have been a rounding error against the loss. We can show you the numbers from cases we've worked. They don't make patching feel optional anymore.
If you want a read on where your current patching posture sits, our cybersecurity team can pull a gap report against your environment in a single working session. We'll tell you what your real exposure window is, what's missing, and what the path to a defensible posture looks like. If your backup and disaster recovery posture isn't where it needs to be either — which it usually isn't, in environments that defer patches — we'll fold that into the same conversation.