Shadow AI: What to Do About the AI Use You Can't See
The finance director was sure nobody on her team was using AI tools for client work. She'd put it in writing in the employee handbook. Nobody had asked for a Copilot license. As far as she knew, the company didn't have an AI problem.
We pulled the browser extension inventory across 38 endpoints during a routine review. Twenty-two had at least one AI extension installed. Four had three or more. One workstation belonged to the controller. Another belonged to the bookkeeper who handled client billing.
This isn't a story about that company in particular. It's a story about what "we don't use AI" actually means in a small environment without active management. It means: somebody decided they didn't want to think about it, and the employees thought about it anyway.
What Shadow AI Actually Covers
When we say shadow AI, we mean any AI tool in use inside your business that IT didn't approve, license, or know about. The shapes it takes:
- Free-tier consumer AI accounts. Personal ChatGPT, Claude, Gemini, and Perplexity accounts, used at work, often signed in with personal email addresses.
- Browser extensions. AI summarizers, writing assistants, email rewriters, meeting recorders. Most install in a single click and don't require admin privileges.
- Embedded AI features in tools you already approve. Notion AI, Slack AI, Grammarly, Otter, the AI button that quietly appeared in your CRM last month — often turned on by default with no admin notice.
- Personal phones running AI-enabled apps. Sales reps dictating client notes into transcription apps. Marketing running campaign briefs through personal ChatGPT Plus accounts from home.
- AI in the dev toolchain. Engineers using Cursor, Windsurf, Copilot, or Claude Code — sometimes through corporate licenses, often through personal ones.
Most SMBs have several of these running at once. The well-managed ones know which. The company in the opening story knew about zero.
Why Shadow AI Is Already Everywhere
The spread isn't a security failure. It's the product of incremental decisions, each rational at the time:
- A marketing person uses ChatGPT once to draft a blog post. It works. They use it again. Nobody asked, so nobody told.
- A salesperson installs a meeting-recording extension to keep up with note-taking. It's free. They never checked where the audio gets stored.
- A finance analyst pastes a P&L into ChatGPT to draft a board summary. The deadline is in three hours. The alternative is doing it by hand.
- A developer uses an AI coding tool because the company doesn't have an approved one, and writing without it now feels like typing with mittens on.
After six months of decisions like these, the AI footprint is a mosaic. No source of truth. No inventory. No policy anyone enforces. The data that's gone out is gone — you can't pull it back.
This isn't a moral failure. It's what an unmanaged environment looks like when AI tools are this cheap, this accessible, and this useful. The work to consolidate it into a managed footprint is real, and it doesn't happen by accident.
The Risks That Are Actually Real
Not every AI risk deserves the same weight. Some are theoretical. Some have already happened to companies like yours.
- Data leakage into model training. Free-tier accounts often retain prompts to train future models. Pasting a client contract, salary spreadsheet, or financial statement into one is a one-way trip. "They say they don't train on it" isn't enforceable.
- Confidentiality breaches in regulated environments. If you're under HIPAA, CMMC, GLBA, or any client NDA, data in an unmanaged AI tool is a finding waiting to happen. The auditor's question isn't "did anything bad happen" — it's "can you tell us what data your employees have put into AI tools?"
- Output treated as authoritative. AI tools hallucinate. The danger isn't the hallucination itself; it's the moment somebody pastes a hallucinated client recommendation, tax calculation, or contract clause into a real document and ships it without reading carefully.
- License exposure. Some tools are licensed per seat. If half your company uses personal accounts for work that should be on corporate licenses, you're under-licensed and under-protected at the same time.
- Cyber-insurance and audit findings. 2026 renewals are starting to include AI governance questions. "We don't have a policy" is a worse answer than "we have a policy and we enforce it imperfectly."
The risks that aren't real are mostly the science-fiction ones. The model isn't going to wake up. The risks are mundane, and they're already happening.
What Good Looks Like
One sanctioned AI tier
Pick the AI tools you want your team to use, and license them properly. For most SMBs, that's Microsoft 365 Copilot or Google Gemini for Workspace, plus optionally a focused tool for engineering. The point isn't to give people every AI tool — it's to give them an obvious answer when they want one.
The hard part isn't the licensing. It's making the sanctioned path clearly better than the personal account. If the sanctioned tier is easier to use, faster, and approved, employees will use it. If it's worse than the free version, they won't.
One acceptable-use policy people will actually read
The policy doesn't need to be long. It needs to answer four questions: what tools are approved, what data is allowed to go into them, what's never allowed in any AI tool, and who to ask when in doubt. We've seen good versions fit on a single page.
The cyber-insurance questionnaire your broker sends you next year will ask if you have one. The version that fits on a page and gets used is worth more than the ten-page template nobody reads.
One inventory you can trust
When somebody asks "what AI tools are in use here," the answer is a report from your MDM and browser extension management, not a guess. The report includes which AI extensions are installed where, which AI accounts are signed in on managed devices, and which approved tools are licensed to whom. We covered the underlying discipline in what good endpoint management actually looks like.
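To make that report concrete: the sketch below shows one way a per-endpoint AI extension scan might look, assuming default Chrome and Edge profile layouts on Windows and macOS. The directory paths and the keyword list are illustrative assumptions, not a vetted detection ruleset, and in practice you'd push something like this out through your MDM or RMM and aggregate the results centrally rather than run it by hand.

```python
"""Minimal sketch of a per-endpoint AI browser-extension scan.

Assumptions: default Chrome/Edge profiles, and a hypothetical keyword
list for flagging AI-related extensions. Adapt both for real use.
"""
import json
import os
from pathlib import Path

# Hypothetical keyword list -- tune for your environment.
AI_KEYWORDS = ("ai", "gpt", "chatgpt", "copilot", "gemini", "claude",
               "summariz", "transcri", "writing assistant")

# Common Chromium extension roots; extend for other browsers and profiles.
CANDIDATE_ROOTS = [
    Path(os.environ.get("LOCALAPPDATA", "")) / "Google/Chrome/User Data/Default/Extensions",
    Path(os.environ.get("LOCALAPPDATA", "")) / "Microsoft/Edge/User Data/Default/Extensions",
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",
]


def extension_name(version_dir: Path, manifest: dict) -> str:
    """Resolve localized names like '__MSG_appName__' from _locales when possible."""
    name = manifest.get("name", "")
    if name.startswith("__MSG_"):
        key = name.strip("_").removeprefix("MSG_")
        for messages in version_dir.glob("_locales/en*/messages.json"):
            try:
                msgs = json.loads(messages.read_text(encoding="utf-8"))
                return msgs.get(key, {}).get("message", name)
            except (OSError, json.JSONDecodeError):
                continue
    return name


def scan() -> list[tuple[str, str]]:
    """Return (extension_id, name) pairs whose names hit an AI keyword."""
    hits = []
    for root in CANDIDATE_ROOTS:
        if not root.is_dir():
            continue
        for ext_dir in root.iterdir():
            # Each extension ID folder contains one or more version folders.
            for manifest_path in ext_dir.glob("*/manifest.json"):
                try:
                    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
                except (OSError, json.JSONDecodeError):
                    continue
                name = extension_name(manifest_path.parent, manifest)
                if any(k in name.lower() for k in AI_KEYWORDS):
                    hits.append((ext_dir.name, name))
    return hits


if __name__ == "__main__":
    for ext_id, name in scan():
        print(f"{ext_id}  {name}")
```

A name-based scan like this catches the obvious offenders but not everything; which AI accounts are signed in on a device is harder to see from the filesystem, which is why the MDM and browser-management reporting matters more than any one script.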
One conversation about what's already gone
This is the part nobody likes. Before you start fresh, you need an honest internal read on what data has already left through unmanaged AI tools. You don't need every instance. You need to know roughly what kinds of data — client lists, financials, contracts, source code — to assume are out there. That changes how you think about disclosure obligations and renewal questionnaires.
One escalation path when somebody wants a new tool
When somebody finds a new AI tool that would help, they need to know who to ask. The ask doesn't have to take weeks — a 24-hour review plus a fast yes-or-no is enough for most cases. Without an escalation path, employees default to the path that doesn't require asking, which is shadow AI again.
What This Connects To
Shadow AI isn't a standalone problem. It connects directly to cybersecurity, because most AI risk is data-handling risk in a new wrapper, and the controls that protect against insider data leakage protect against this. It connects to endpoint management, because you can't surface AI extensions on devices you don't manage. It connects to compliance, because the regulators behind HIPAA, CMMC, GLBA, and similar frameworks are starting to issue AI-specific guidance, and you'll need to demonstrate you follow it.
Without an endpoint inventory, you can't even count AI usage. Without a sanctioned tier, you can't channel employees away from risky tools. Without a policy, you can't tell your cyber-insurance broker anything useful.
What It Takes to Get in Front of It
The work isn't the licenses or the policy. Both are cheap and fast. The cost is the project — inventorying what's deployed, picking a sanctioned tier, writing the policy, and standing up the escalation path so future tools route through it instead of around it.
For a 50-person environment, this is typically a four-to-six-week project, with most user-facing change in a single rollout week. After that, the ongoing discipline is significantly less work than the alternative, because shadow AI generates exactly the kind of compounding risk that's expensive to clean up after the fact.
Our managed IT support and cybersecurity engagements increasingly include AI governance as a baseline. If you want a read on what AI is actually running in your environment, that's a one-meeting conversation. We can usually tell you within a working week whether you have a manageable problem or a project on your hands.