
Your Detection Coverage Matrix Is Lying to Your CISO

13 April 2026 · Detection Engineering · 5 min read
Detection maturity levels at a glance: Level 0 — no detection (technique is invisible). Level 1 — rule exists (not validated or tested). Level 2 — validated detection (tested, data confirmed). Level 3 — operationally effective (fires, triaged, actionable). What most teams report: 73% (Levels 1-3) — misleading. What to report: Level 2+ only, 31% — an honest posture assessment.

You mapped your Sentinel analytics rules to MITRE ATT&CK techniques. The heat map shows 73% coverage. The CISO puts it in the board deck. Everyone feels good about the security program.

Except 73% coverage means nothing if you have not answered three questions: Does the rule actually fire when the technique is used? Does it fire with enough context for an analyst to investigate? Does it fire within a timeframe that allows containment before the attacker achieves their objective?

Three ways coverage matrices lie

1. The rule exists but does not fire

You deployed a Sentinel analytics rule for T1053.005 (Scheduled Task). The rule queries SecurityEvent for Event ID 4698. Coverage matrix: green.

But Event ID 4698 requires Advanced Audit Policy Configuration with "Audit Other Object Access Events" enabled. If that audit policy is not deployed to your endpoints — and on most endpoints it is not configured by default — the event never generates. The rule runs every 5 minutes, queries an empty table, and finds nothing. The technique is "covered" in the matrix. It is completely undetected in the environment.

2. The rule fires but the alert is noise

You have a rule for T1078 (Valid Accounts). It fires when a sign-in occurs from an IP address not seen in the last 30 days. It fires 47 times per day. Analysts stopped investigating it two weeks after deployment because 46 of the 47 are legitimate users on new networks.
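Putting a number on that noise makes the problem undeniable. A quick sketch of alert precision, using the figures above (1 true positive out of 47 daily alerts):

```python
def precision(true_positives: int, total_alerts: int) -> float:
    """Fraction of fired alerts that were worth an analyst's time."""
    return true_positives / total_alerts if total_alerts else 0.0

# 47 alerts/day, 46 of them legitimate users on new networks:
daily = precision(true_positives=1, total_alerts=47)
print(f"{daily:.1%}")  # roughly 2.1%
```

At 2% precision, analysts are rational to stop reading the queue; tracking this per rule tells you which detections are checkboxes.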

A rule that generates alerts nobody reads is not a detection. It is a checkbox.

3. The rule detects but too late

You have a rule for T1486 (Data Encrypted for Impact). It fires when a process writes to more than 100 files in 5 minutes with high entropy content. The rule fires when 100 files are already encrypted. The detection works — but containment starts after the damage is done.
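The "high entropy content" signal such rules key on is just Shannon entropy over file bytes — encrypted output looks nearly random, so it scores close to the 8 bits/byte maximum. A self-contained sketch (the threshold and sample sizes are illustrative):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; uniform plaintext is low, encrypted content nears 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(shannon_entropy(b"A" * 1024))        # 0.0 — a single repeated byte
print(shannon_entropy(os.urandom(4096)))   # close to 8.0 — looks encrypted
```

The signal itself is sound. The latency problem is structural: entropy can only be measured on bytes that have already been written, so a 100-files-in-5-minutes threshold guarantees the first 100 files are lost before the alert exists.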

What the matrix should actually show

Replace binary "covered / not covered" with a 4-level scoring per technique.

Level 0 — No detection. No rule exists. The technique is invisible.

Level 1 — Rule exists. A rule is deployed but has not been validated against the data pipeline, tested with simulated attacks, or tuned for the environment.

Level 2 — Validated detection. The rule has been tested: the data source is confirmed to be ingesting, a simulated attack triggered the rule, and the alert contains sufficient context for investigation.

Level 3 — Operationally effective. The rule fires within a useful timeframe, the false positive rate is manageable, and the alert is linked to an investigation playbook.

When you report coverage to leadership, report Level 2+ coverage — not Level 1. The difference between "we have a rule for 73% of ATT&CK techniques" and "we can operationally detect 31% of ATT&CK techniques" is the difference between a comfortable board deck and an honest security posture assessment.
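The scoring model above is simple enough to implement directly. A hedged sketch — the technique IDs in the sample matrix are illustrative, and the coverage function is just "share of techniques at or above a maturity floor":

```python
from enum import IntEnum

class Maturity(IntEnum):
    NO_DETECTION = 0   # technique is invisible
    RULE_EXISTS = 1    # deployed, never validated
    VALIDATED = 2      # pipeline confirmed, simulated attack fired the rule
    OPERATIONAL = 3    # timely, manageable FP rate, playbook-linked

def coverage(matrix: dict[str, Maturity], floor: Maturity) -> float:
    """Share of techniques scored at or above the given maturity floor."""
    if not matrix:
        return 0.0
    return sum(m >= floor for m in matrix.values()) / len(matrix)

# Hypothetical matrix for illustration only:
matrix = {
    "T1053.005": Maturity.RULE_EXISTS,   # rule exists, audit policy missing
    "T1078":     Maturity.RULE_EXISTS,   # fires, but it's noise
    "T1486":     Maturity.VALIDATED,
    "T1059.001": Maturity.OPERATIONAL,
}
print(f"board-deck number: {coverage(matrix, Maturity.RULE_EXISTS):.0%}")  # 100%
print(f"honest number:     {coverage(matrix, Maturity.VALIDATED):.0%}")    # 50%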

The quarterly exercise

Every quarter, pick 5 techniques from your coverage matrix that show as "covered." For each one: verify the data pipeline (is the telemetry arriving?), run a simulated attack (does the rule fire?), and check the investigation record (when it fired, did anyone investigate it?).
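The downgrade logic from that exercise can be sketched as a small function mapping the three check results back onto the maturity levels defined earlier. The function name and argument names are illustrative:

```python
def revalidate(current_level: int, telemetry_ok: bool,
               simulation_fired: bool, investigated: bool) -> int:
    """Downgrade a technique that fails the quarterly checks.

    No telemetry or no firing on simulation -> Level 1 at best
    (a rule exists, nothing more). Fired but never investigated
    -> Level 2 at best (validated, not operationally effective).
    """
    if not telemetry_ok or not simulation_fired:
        return min(current_level, 1)
    if not investigated:
        return min(current_level, 2)
    return current_level

# A "covered" technique whose audit policy was never deployed:
print(revalidate(current_level=3, telemetry_ok=False,
                 simulation_fired=False, investigated=False))  # 1
```

Checks can only lower a score, never raise it — promotion back up should require the full validation evidence, not a single passing quarter.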

The techniques that fail this test get downgraded. The coverage percentage drops. The CISO asks questions. And the security program gets honest about what it can actually detect — which is the first step toward improving it.
