Why Your Detection Engineering Team Needs a Strategy (Not Just More Rules)
- Paul Veenstra

Walk into most SOCs and ask to see their detection strategy, and you'll often get one of two responses: a blank stare, or someone handing you a spreadsheet of detection rules sorted by severity.
Here's the thing—a list of rules isn't a strategy. And without a real strategy, even the best detection engineering team is just guessing about what to build next.
The Problem with Ad-Hoc Detection Engineering
Most organizations approach detection engineering reactively. A new threat hits the news, and someone says "we should detect that." An incident happens, and the post-mortem action item is "write a detection rule." A penetration test finds a gap, and detection engineering gets tasked with filling it.
Each of these detections might be perfectly valid on its own. But together? They're a patchwork. You end up with:
Gaps in coverage you don't even know exist
Overlapping detections that create alert fatigue
No clear way to measure whether your detection program is actually improving
Engineers constantly context-switching between unrelated detection projects
No framework for prioritizing what to build next when you have limited time and people
Sound familiar?
What a Detection Strategy Actually Is
A detection strategy is a framework that answers these fundamental questions:
What are we trying to detect? Not just "bad stuff"—specifically, what adversary behaviors, techniques, and patterns matter most to your organization based on your threat model, critical assets, and risk profile?
How will we detect it? What data sources do we need? What detection methods (signatures, behavioral analytics, anomaly detection, threat intelligence) apply to which scenarios? Where do we leverage native platform capabilities versus building custom detection logic?
How do we know we're successful? What metrics indicate that our detection coverage is improving? How do we measure both breadth (are we covering the techniques that matter?) and depth (are our detections actually effective)?
What do we build next? When your detection engineering team has capacity, what's the next priority? How do you make that decision consistently based on risk, not just whoever shouts loudest?
Without clear answers to these questions, you're flying blind.
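To make those four questions concrete, here's a minimal sketch of what a single entry in a detection strategy might capture. It's written in Python purely for illustration; the field names and the example values are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DetectionObjective:
    """One entry in a detection strategy: what, how, and how success is judged."""
    technique_id: str        # e.g. an ATT&CK technique ID
    why_it_matters: str      # ties back to the threat model and critical assets
    data_sources: list[str]  # telemetry needed for visibility
    detection_method: str    # signature, behavioral, anomaly, threat intel, ...
    success_metric: str      # how you will know the detection actually works
    priority: int            # drives "what do we build next"

# Hypothetical example entry; the values are illustrative only.
example = DetectionObjective(
    technique_id="T1078",  # Valid Accounts
    why_it_matters="Credential abuse against our identity provider",
    data_sources=["identity provider logs", "VPN logs"],
    detection_method="behavioral (impossible travel, atypical sign-in)",
    success_metric="validated in a purple-team exercise, <5% false positives",
    priority=1,
)
```

A spreadsheet row or a YAML file works just as well. The point is that every detection objective answers all four questions before anyone writes a rule.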
The MITRE ATT&CK Foundation
Most mature detection strategies start with MITRE ATT&CK as their framework. It provides a common language for describing adversary behaviors and a structured way to map your detection coverage.
But here's where many teams go wrong: they treat ATT&CK coverage like a checkbox exercise. "We need detections for all the techniques!"
That's not realistic, and it's not strategic. Not all techniques are equally relevant to your environment. A technique that requires physical access to a system might not matter if all your infrastructure is in the cloud. A technique targeting a specific application might not matter if you don't use that application.
A good detection strategy uses ATT&CK as a framework but prioritizes based on:
Threat intelligence: What techniques are threat actors relevant to your industry actually using?
Asset criticality: Which techniques could lead to compromise of your most valuable systems and data?
Exploitability: What techniques are easiest for attackers to execute in your specific environment?
Detection feasibility: What can you realistically detect given your current data sources and visibility?
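One way to make that prioritization repeatable is a simple weighted scoring model. The sketch below is illustrative only: the weights, the 1-to-5 scales, and the example scores are assumptions you would calibrate against your own threat model and telemetry.

```python
# Illustrative prioritization sketch: score each candidate technique on the
# four factors above and sort. Weights and 1-5 scores are assumptions.
WEIGHTS = {
    "threat_intel": 0.35,
    "asset_criticality": 0.30,
    "exploitability": 0.20,
    "detection_feasibility": 0.15,
}

candidates = [
    {"id": "T1566", "name": "Phishing",
     "threat_intel": 5, "asset_criticality": 4,
     "exploitability": 5, "detection_feasibility": 4},
    {"id": "T1105", "name": "Ingress Tool Transfer",
     "threat_intel": 3, "asset_criticality": 3,
     "exploitability": 4, "detection_feasibility": 3},
]

def priority_score(technique: dict) -> float:
    return sum(weight * technique[factor] for factor, weight in WEIGHTS.items())

for technique in sorted(candidates, key=priority_score, reverse=True):
    print(f"{technique['id']} {technique['name']}: {priority_score(technique):.2f}")
```

The exact numbers matter less than the consistency: the same inputs always produce the same ranking, which is what lets you defend the roadmap.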
Why This Matters for Modern SOCs
Modern SOCs aren't just running a SIEM anymore. They're orchestrating a complex ecosystem of security tools—EDR, NDR, cloud security, identity analytics, threat intelligence platforms. Each generates telemetry. Each has native detection capabilities. Each requires integration and tuning.
Without a detection strategy, you end up with:
Alert chaos: Different tools detecting similar behaviors in different ways, creating duplicate alerts or gaps where everyone assumed someone else had it covered
Wasted effort: Detection engineers rebuilding capabilities that already exist natively in your platform
Inconsistent quality: Some techniques have ten overlapping detections while others have none
Tool sprawl: Buying new security tools to "cover" techniques without understanding what your existing tools can already detect
A solid detection strategy helps you understand what each tool in your stack should be detecting, how they work together, and where you need to invest engineering effort versus leveraging built-in capabilities.
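As a rough sketch of that exercise, the snippet below maps which tool you expect to cover which priority technique, then flags the gaps and the overlaps. Tool names and technique IDs are placeholders.

```python
# Sketch: map which tool you *expect* to cover each priority technique, then
# flag gaps (nothing covers it) and overlaps (several tools alert on it).
expected_coverage = {
    "EDR": {"T1059", "T1055", "T1547"},
    "SIEM custom rules": {"T1059", "T1105"},
    "Identity analytics": {"T1078", "T1110"},
    "NDR": {"T1071", "T1048"},
}
priority_techniques = {"T1059", "T1078", "T1021", "T1071"}

covered_by = {
    tech: [tool for tool, techs in expected_coverage.items() if tech in techs]
    for tech in priority_techniques
}

gaps = sorted(t for t, tools in covered_by.items() if not tools)
overlaps = sorted(t for t, tools in covered_by.items() if len(tools) > 1)

print("Gaps (invest engineering effort here):", gaps)
print("Overlaps (dedupe or tune):", overlaps)
```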
The Elastic Security Angle
This is where platforms like Elastic Security become really interesting. Elastic provides both the data layer (ingesting logs from across your environment) and native detection capabilities (prebuilt rules, behavioral analytics, machine learning).
But to use it effectively, you need strategy around:
Data source prioritization: What logs should you be ingesting into Elastic? Ingesting everything is expensive and noisy. Your detection strategy should drive data source decisions based on which techniques you need visibility into.
Rule customization: Elastic ships with hundreds of prebuilt detection rules mapped to MITRE ATT&CK. Which ones matter for your environment? Which need tuning to reduce false positives? Which techniques require custom rules because your environment is unique?
Detection method selection: When should you use Elastic's prebuilt rules versus building custom rules versus leveraging ML jobs versus using threshold-based detections? Your strategy should guide these architectural decisions.
Coverage measurement: Elastic Security can show you ATT&CK technique coverage. But coverage isn't the same as effectiveness. Your strategy defines how you measure whether detections are actually working—catching real threats, not drowning analysts in noise.
Without strategy, teams often just enable all the prebuilt rules and hope for the best. That leads to alert fatigue, tuning paralysis, and eventually, analysts ignoring alerts because the signal-to-noise ratio is terrible.
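On the coverage-measurement point above, a small script can give you a first-pass view of which ATT&CK techniques your enabled rules claim to address. The sketch below uses Kibana's detection engine rules API; the endpoint and field names match recent Kibana versions but should be verified against your deployment, and authentication, pagination, and the URL are deliberately simplified placeholders.

```python
# Rough sketch: pull rules from Kibana's detection engine API and count enabled
# rules per ATT&CK technique as a first-pass coverage view. Verify endpoint and
# fields against your Kibana version; auth and pagination are simplified.
from collections import Counter
import requests

KIBANA_URL = "https://kibana.example.com"   # placeholder
AUTH = ("api_user", "api_password")         # placeholder credentials

response = requests.get(
    f"{KIBANA_URL}/api/detection_engine/rules/_find",
    params={"per_page": 500},
    headers={"kbn-xsrf": "true"},
    auth=AUTH,
)
response.raise_for_status()

coverage = Counter()
for rule in response.json().get("data", []):
    if not rule.get("enabled"):
        continue
    for threat in rule.get("threat", []):
        for technique in threat.get("technique", []):
            coverage[technique["id"]] += 1

for technique_id, rule_count in coverage.most_common():
    print(technique_id, rule_count)
```

Comparing that output against your priority technique list tells you far more than the raw count of enabled rules.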
Building Your Detection Strategy: Where to Start
You don't need a perfect strategy on day one. You need a good enough framework that you can evolve. Here's a practical starting point:
1. Define Your Threat Model: Who is likely to target you? What are their typical techniques? Which crown jewels would they go after? This doesn't need to be a 50-page document. A workshop with key stakeholders can get you 80% of the way there.
2. Map Current Coverage: What are you actually detecting today? Map your existing rules and tool capabilities to MITRE ATT&CK. Be honest about gaps. This is your baseline.
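A lightweight way to make that baseline visible is to export it as an ATT&CK Navigator layer. The sketch below assumes you already have a list of techniques you believe are covered and where that coverage comes from; the JSON fields follow the Navigator layer format, but pin the versions block to the Navigator release you actually use.

```python
# Sketch: turn the techniques you believe you cover today, and where that
# coverage comes from, into an ATT&CK Navigator layer so the baseline is
# something people can see. Values are illustrative placeholders.
import json

covered = {
    "T1059": "EDR prebuilt rules",
    "T1078": "identity analytics",
    "T1566": "email gateway + custom rule",
}

layer = {
    "name": "Detection coverage baseline",
    "domain": "enterprise-attack",
    "versions": {"layer": "4.5"},  # assumption: match this to your Navigator version
    "techniques": [
        {"techniqueID": technique_id, "score": 1, "comment": source}
        for technique_id, source in covered.items()
    ],
}

with open("coverage_baseline.json", "w") as f:
    json.dump(layer, f, indent=2)
```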
3. Prioritize Gaps: Of the techniques you're not detecting, which ones matter most based on your threat model and asset criticality? That's your roadmap.
4. Define Standards: How will your team write detection rules? What's your testing process? How do you handle false positives? What metadata needs to be in every rule? Consistency matters as you scale.
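One practical way to enforce those standards is a metadata check that runs before any rule ships. The required fields below are an example policy, not a platform requirement; adapt them to whatever your team decides matters.

```python
# Sketch of a standards gate: every rule must carry an agreed set of metadata
# before it ships. Required fields here are an example policy only.
REQUIRED_FIELDS = {
    "name", "description", "severity", "attack_techniques",
    "data_sources", "author", "false_positive_notes", "test_reference",
}

def missing_metadata(rule: dict) -> list[str]:
    """Return the required fields that are absent or empty in one rule."""
    return sorted(field for field in REQUIRED_FIELDS if not rule.get(field))

draft_rule = {
    "name": "Suspicious scheduled task creation",
    "description": "Scheduled task created from a user-writable path",
    "severity": "medium",
    "attack_techniques": ["T1053.005"],
    "data_sources": ["process creation events"],
    "author": "detection-engineering",
    # false_positive_notes and test_reference are intentionally missing
}

print("Missing metadata:", missing_metadata(draft_rule))
```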
5. Establish Metrics: How will you measure progress? Common metrics include ATT&CK technique coverage, alert volume trends, detection true positive rate, and mean time to detect specific technique classes.
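Most of these metrics fall out of data you already have: a rule inventory mapped to techniques and triaged alert dispositions. The sketch below computes two of them with purely illustrative numbers.

```python
# Sketch: two of the metrics above, computed from data most teams already have
# (a rule inventory mapped to techniques, and triaged alert dispositions).
# The numbers are illustrative placeholders.
priority_techniques = 40
covered_priority_techniques = 26

alerts_triaged = 500
true_positives = 65  # confirmed malicious plus planned purple-team detections

coverage_pct = 100 * covered_priority_techniques / priority_techniques
true_positive_rate = 100 * true_positives / alerts_triaged

print(f"Priority-technique coverage: {coverage_pct:.0f}%")
print(f"Detection true positive rate: {true_positive_rate:.1f}%")
```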
The Engineering Payoff
Here's what changes when your detection engineering team has a real strategy:
They know what to build next without waiting for direction. The strategy provides a prioritized backlog.
They can push back on random requests with a framework. "That's not aligned with our threat model" is a lot more compelling than "we're too busy."
They can work in themed sprints rather than constantly context-switching. Two weeks focused on persistence techniques. Two weeks on lateral movement. Real depth, not scattered effort.
They can make architectural decisions about tooling and data sources with confidence, knowing what capabilities they need to support.
They can show measurable improvement to leadership. Coverage going from 40% to 65% of priority techniques is a story executives understand.
It's Not Just About the Rules
The best detection engineering teams aren't the ones with the most rules. They're the ones with the right rules, deployed effectively, covering the techniques that actually matter to their organization.
That requires strategy.
Without it, you're just collecting alerts. With it, you're building a detection program that evolves with the threat landscape and actually makes your organization more secure.
Does your detection engineering team have a strategy, or just a backlog? What would it take to build one?
At Perceptive Security, we help organizations develop detection strategies that align with their threat models and build detection engineering programs that scale. Whether you're working with Elastic Security or other platforms, we can help you move from ad-hoc detection to strategic coverage. Let's talk about your detection program.

