Reading Through the 'AI' on Your Cybersecurity Stack
The new SOC dashboard had "AI-Detected Anomalies" across the top. The line was orange and red, and the count was always nonzero. We were evaluating the tool for a client, so we asked the vendor what model was running underneath. Was it an anomaly detection algorithm, an LLM, or a supervised classifier?
Three weeks later, after escalating the question twice, the answer came back: "It's a tuned rule set with statistical thresholds."
That's not nothing. A well-tuned rule set with statistical thresholds is a legitimate detection tool. But it isn't what the dashboard said it was, and it didn't justify the 40% premium over the same vendor's non-AI tier.
This is the conversation we've been having with vendors and clients more or less every week for the last twelve months. Every EDR, MDR, SIEM, email security gateway, and helpdesk product now has "AI-powered" in the deck. Some of them earned it. Most of them slapped it on a feature that existed before anyone called it AI. Telling them apart is becoming a core skill in evaluating cybersecurity spend.
What Vendors Actually Mean by "AI"
When a security vendor uses the word AI, they could mean one of at least four very different things:
- Statistical anomaly detection. Math that flags outliers in data streams — login times, traffic volumes, process counts. This existed in security tools for fifteen years before anyone called it AI. It's useful. It's not new.
- Supervised machine learning classifiers. Trained models that predict whether a file, URL, or email is malicious based on labeled training data. This is real ML, and it's been in EDR and email security products for the better part of a decade. It's useful. It's also not new. (The sketch after this list shows, in miniature, how it differs from a statistical threshold.)
- Large language models. GPT-class models doing summarization, triage, or natural-language querying of security data. This is the actually-new part of the wave. When you can ask a SIEM "show me everything related to this user in the last 24 hours" in English and get a coherent answer, that's an LLM at work.
- None of the above. Marketing. The same feature renamed. A dashboard tile with the letters AI on it.
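To make the first two buckets concrete, here's a minimal sketch of the difference. The data, feature names, and thresholds are toy illustrations, and scikit-learn is standing in for whatever the vendor actually runs. The first function is the kind of "tuned rule set with statistical thresholds" from our opening anecdote; the classifier below it is the sort of thing a vendor should be able to point at before calling a detection ML.

```python
import statistics

from sklearn.linear_model import LogisticRegression

# Bucket 1: statistical anomaly detection. A z-score threshold over a
# history of daily login counts -- textbook math, no model, no training.
def is_anomalous(history: list[int], today: int, z_cutoff: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero variance
    return abs(today - mean) / stdev > z_cutoff

# Bucket 2: a supervised classifier. Its behavior comes from labeled
# training data, not a hand-set threshold. Features are hypothetical:
# [process spawn rate, outbound connections, file writes per minute].
X_train = [[2, 1, 3], [40, 25, 90], [3, 0, 5], [55, 30, 120]]
y_train = [0, 1, 0, 1]  # 0 = benign, 1 = malicious, labeled by analysts

clf = LogisticRegression().fit(X_train, y_train)

print(is_anomalous([98, 102, 97, 105, 101], today=240))  # True: the rule fired
print(clf.predict([[45, 20, 100]]))                      # [1]: a learned decision
```

The point of the contrast: the first function is auditable line by line, which is exactly why it shouldn't carry an AI premium. The second one's quality depends entirely on the labeled data behind it, which is what the vendor should be describing when you ask.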
A vendor saying "AI-powered" without telling you which of these they mean is the tell. A vendor who can describe their AI stack in two sentences — what's a model, what's a rule, what's a heuristic — is usually one we take seriously.
Where AI Is Actually Earning Its Keep in Security
The places we've seen real value in 2026:
- EDR-level behavioral classification. Modern endpoint agents use trained ML models to score process behavior as malicious or benign in near real time. The false positive rates are dramatically lower than with signature-based detection, and the time-to-detect on novel threats is meaningfully shorter. This isn't marketing — it's the foundation of why EDR replaced AV.
- Email security triage. ML-based phishing and business-email-compromise detection catches things pattern-matching missed for years. The detection rate improvements on payload-less BEC alone are worth the price difference between basic gateway filtering and modern AI-augmented email security.
- SOC analyst assistance through LLMs. Tools that summarize an alert, correlate it with related events, and propose next steps to a human analyst. This is where LLMs are adding real time savings for SOC teams. The analyst still makes the call. The model compresses the context-gathering from twenty minutes to two. (The sketch after this list shows the shape of that workflow.)
- Natural-language SIEM querying. Asking your SIEM "did anyone log in from an unusual country in the last week" in English instead of writing the query in the vendor's DSL. For SMBs without a full-time SOC analyst, this lowers the floor on who can ask useful questions of the data.
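Here's a rough sketch of the LLM triage pattern from the third item, assuming an OpenAI-style chat API. Every vendor wires this differently; the model name, prompt, and field names are placeholders, not anyone's actual product.

```python
from openai import OpenAI  # OpenAI-style API shown for illustration only

client = OpenAI()

def triage_summary(alert: dict, related_events: list[dict]) -> str:
    """Compress context-gathering for the analyst. The model summarizes
    and proposes next steps; it takes no action -- the analyst decides."""
    prompt = (
        "You are assisting a SOC analyst. Summarize this alert, explain how "
        "the related events connect to it, and propose (do not execute) "
        "next steps.\n"
        f"Alert: {alert}\n"
        f"Related events: {related_events}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

The property that matters is the return type: the function hands text to a human. The moment it calls a remediation API instead, it crosses the line we draw in question four below.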
The pattern in each of these is the same: the AI is doing something a human could already do, just faster and more consistently. The human is still in the loop where the consequences are high. The model is augmenting, not replacing.
Where It's Marketing
The places where "AI" is usually a wrapper around something that was already there:
- "AI-detected anomalies" dashboards that are actually rule sets. Like the example we opened with. The threshold is statistical, the math is from a textbook, and there's no model in sight.
- "AI-prioritized alerts" that are weighted scoring. The vendor assigns severity scores based on a formula you can read. That's not AI. That's a scoring rubric with a marketing relabel.
- "AI-driven threat intelligence" that's the same threat feeds. The feed is curated by humans, distributed via standard formats, and applied to your environment with the same indicator-matching that's existed since the early 2000s.
- "AI-powered" features in tools where the actual AI work happens upstream at the vendor and you never touch it. This isn't dishonest. It's just not buying you anything you couldn't get from the vendor's previous tier.
None of these are necessarily bad products. The question is whether you're paying an AI premium for non-AI value. For an SMB on a tight security budget, that premium is the difference between affording MDR coverage and not.
The Questions to Ask a Vendor
Five questions that cut through most of the marketing in five minutes:
- "Which of these features uses a trained model, and what's the model doing?" A vendor who can answer this crisply is usually being straight with you. A vendor who deflects is telling you something too.
- "When the AI gets it wrong, what does that look like, and how do I tune it?" Real AI products have a feedback loop. Marketing AI doesn't.
- "What's the false positive rate on the AI-driven detections, and how was it measured?" Even an approximate answer is informative. "We don't share that" is informative in a different way.
- "Does the AI take actions, or only flag things for a human?" This is the line between recommendation engines and agentic tools, and it matters for the controls you need around it. We wrote separately about what happens when you let AI click things.
- "What's the price difference between your AI tier and your non-AI tier, and what specifically am I getting for it?" If the answer is hand-wavy, the value is too.
Vendors who've thought through their AI story have crisp answers. Vendors who haven't, don't. The signal is strong.
The MSP Read
We use AI-augmented tools across our stack — EDR with behavioral classification, AI-augmented email security, LLM-assisted triage in our SOC operations. We pay AI premiums where they earn it. We've also walked away from products where the AI premium was 30% to 50% over the non-AI tier and the differentiator wasn't holding up under questioning.
The point isn't to be skeptical of AI in security tools. The point is to be specific about what you're paying for. The MSP that takes vendor marketing at face value is going to spend a lot of your money on dashboard tiles. The MSP that asks the five questions above and reports back what they heard is the one worth having in the room when you're choosing a stack.
What This Connects To
Evaluating AI in security tools is part of the broader vendor-review work that distinguishes managed IT support with operational discipline from a vendor reseller. It also connects to cybersecurity program design — the right AI augmentation reduces analyst load and improves detection, the wrong one adds cost without changing the security posture.
If you're staring at a renewal that's pitched as an "AI upgrade" and you can't tell what you'd actually be getting, that's exactly the kind of question our IT consulting team is built to help with. We'll read through the deck, ask the vendor the questions they'd rather not answer, and tell you whether the upgrade is buying you real capability or a different-colored dashboard.