Open your Sentinel workspace. Go to Analytics. Count the active rules. If most of them are Microsoft-authored templates you enabled during onboarding, you have a coverage problem — and it’s not the one you think.
The issue isn’t that Microsoft’s built-in detections are bad. They’re not. The templates for brute force, impossible travel, password spray, known malware hashes, and MFA fatigue work well. The issue is where they stop. Microsoft’s detection content clusters around initial access and well-known credential attacks. Post-compromise activity — what the attacker does after they’re inside — has significant gaps that Microsoft doesn’t fill for you.
This matters because the attack chain doesn’t end at credential theft. An AiTM phishing campaign gets detected by Defender for Office 365. The stolen session token gets flagged by Entra ID Protection. Those are the green checkmarks. But what happens in the next 30 minutes — the mailbox rules, the consent grants, the data staging — happens in silence unless you’ve built the detections yourself.
Here are five detections that don’t exist in the default template library and that you need to build.
1. Mailbox Rule Abuse — Post-Compromise Persistence
After compromising a mailbox through AiTM or token theft, the first thing an attacker does is create inbox rules that forward or redirect email. They do this for two reasons: to maintain access to incoming email even after the password is reset, and to hide their activity by moving security notifications to deleted items.
Microsoft has no default Sentinel analytics rule for suspicious mailbox rule creation. Defender for Office 365 generates alerts for some rule patterns, but the coverage is inconsistent — rules that move messages to RSS Feeds or Conversation History (common attacker choices because users never check those folders) frequently don’t trigger alerts.
The detection you need watches the OfficeActivity table for mailbox rule operations. The KQL concept: query OfficeActivity where Operation is New-InboxRule or Set-InboxRule, and the rule action forwards externally, deletes messages matching security-related keywords, or moves messages to rarely-used folders. Correlate the rule creation timestamp with sign-in risk events in the preceding 24 hours. If a user creates a forwarding rule within hours of a risky sign-in, that’s not coincidence.
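A minimal KQL sketch of that correlation, assuming the standard OfficeActivity and SigninLogs schemas — the keyword list, folder names, and 24-hour window are illustrative assumptions to tune per tenant, not a production rule:

```kql
// Sketch: inbox-rule creation within 24h of a risky sign-in.
// Keywords and folder names below are assumptions — tune for your environment.
let riskySignins = SigninLogs
    | where TimeGenerated > ago(2d)
    | where RiskLevelDuringSignIn in ("medium", "high")
    | project Upn = tolower(UserPrincipalName), RiskTime = TimeGenerated;
OfficeActivity
| where TimeGenerated > ago(1d)
| where Operation in ("New-InboxRule", "Set-InboxRule")
| extend Params = tolower(tostring(Parameters))
// external forwarding, deletion, or moves to rarely-checked folders
| where Params has_any ("forwardto", "redirectto", "deletemessage")
    or Params has_any ("rss feeds", "conversation history")
| extend Upn = tolower(UserId)
| join kind=inner riskySignins on Upn
| where TimeGenerated between (RiskTime .. (RiskTime + 24h))
| project TimeGenerated, Upn, Operation, Parameters, RiskTime
```

The `Parameters` field is a serialized blob of the rule's settings; string matching on it is crude but workable as a first pass, and parsing it properly is part of the tuning work described later.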
In a real scenario, this detection would have caught the attacker who created a rule forwarding all messages containing “invoice,” “payment,” or “wire transfer” to an external address — 14 minutes after authenticating with a stolen session token. The built-in detections caught the token theft. Nothing caught the rule that enabled the BEC fraud that followed.
2. Illicit Consent Grant — OAuth Application Abuse
Consent phishing convinces a user to grant permissions to a malicious OAuth application. The user clicks “Accept” on what looks like a legitimate Microsoft consent prompt, and the application receives access to their email, files, or profile — access that persists even after password changes because OAuth tokens are independent of the user’s credentials.
Microsoft has templates for detecting when users consent to applications, but the default rules focus on applications requesting high-privilege scopes (Mail.ReadWrite, Files.ReadWrite.All). Attackers have adapted. Modern consent phishing requests narrower scopes — Mail.Read is enough to read every email in the mailbox, and it doesn’t trigger the same alarms as Mail.ReadWrite. Applications from unverified publishers requesting even moderate permissions should be investigated, but the default detection threshold is set too high for most environments.
The detection you need watches the AuditLogs table for consent grant operations. The KQL concept: query for Activity containing Consent to application where the application’s publisher is not verified, the target resource includes mail or file access, and the consenting user has elevated privileges or handles sensitive data. The key fields are buried in the AdditionalDetails and TargetResources JSON — parsing them correctly is the difference between a detection that works and one that drowns in false positives.
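A hedged sketch of the consent query, assuming the standard AuditLogs schema. Note that publisher-verification status is not reliably present in the audit event itself — in practice you enrich from the application object via Microsoft Graph in a follow-on step; this sketch checks only the granted scopes:

```kql
// Sketch: consent grants involving mail or file scopes.
// Publisher verification requires Graph enrichment (not shown).
AuditLogs
| where OperationName == "Consent to application"
| extend AppName   = tostring(TargetResources[0].displayName),
         Initiator = tostring(InitiatedBy.user.userPrincipalName)
| mv-expand Prop = TargetResources[0].modifiedProperties
| where tostring(Prop.displayName) == "ConsentAction.Permissions"
| extend Permissions = tostring(Prop.newValue)
| where Permissions has_any ("Mail.Read", "Mail.ReadWrite",
                             "Files.Read.All", "Files.ReadWrite.All")
| project TimeGenerated, Initiator, AppName, Permissions
```

Including plain `Mail.Read` in the scope list is the point of this detection — it is exactly the narrow permission the default high-privilege-scope templates ignore.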
This detection would have caught the application called “SharePoint Document Viewer” — unverified publisher, registered 48 hours before the attack — that was granted Mail.Read and Files.Read.All by a finance team member who thought they were opening a shared document.
3. SharePoint and OneDrive Data Staging
Before exfiltration, attackers stage data. In M365, this means bulk downloading files from SharePoint or OneDrive, or sharing files externally using anonymous links. The download itself is logged in the OfficeActivity table, but Microsoft doesn’t ship a default detection that identifies the pattern — because what constitutes “bulk” depends entirely on your environment’s baseline.
This detection can’t be a static rule. An analyst who downloads 200 files from a project site during a quarterly review is normal. An attacker who downloads 200 files from a finance site at 2 AM after authenticating through an AiTM proxy is not. The detection needs context: time of day, the user’s historical download volume, the sensitivity of the site, and whether the access was preceded by a risk event.
The detection you need queries OfficeActivity where Operation is FileDownloaded or FileSyncDownloadedFull, grouped by user and time window. The KQL concept: calculate the download count per user per hour, compare against their 30-day rolling average, and alert when the count exceeds 3 standard deviations. Enrich with the user’s sign-in risk level from SigninLogs in the same time window. High download volume plus elevated sign-in risk equals an alert worth investigating.
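A sketch of the baselining approach — the 1-hour bin and 3-sigma threshold are assumptions to calibrate, and the sign-in risk enrichment described above would join on top of this result:

```kql
// Sketch: hourly download count vs. each user's 30-day baseline.
// Bin size and sigma threshold are tuning assumptions.
let binSize = 1h;
let baseline = OfficeActivity
    | where TimeGenerated between (ago(31d) .. ago(1d))
    | where Operation in ("FileDownloaded", "FileSyncDownloadedFull")
    | summarize HourlyCount = count() by UserId, bin(TimeGenerated, binSize)
    | summarize AvgHourly = avg(HourlyCount),
                StdevHourly = stdev(HourlyCount) by UserId;
OfficeActivity
| where TimeGenerated > ago(1d)
| where Operation in ("FileDownloaded", "FileSyncDownloadedFull")
| summarize CurrentCount = count() by UserId, Hour = bin(TimeGenerated, binSize)
| join kind=inner baseline on UserId
| where CurrentCount > AvgHourly + 3 * StdevHourly
| project Hour, UserId, CurrentCount, AvgHourly, StdevHourly
```

Users with sparse history will have unstable standard deviations; a minimum-activity floor on the baseline is one of the first tuning steps you'll make.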
The alternative pattern — external sharing — watches for AnonymousLinkCreated or SharingSet operations where the sharing target is external and the user’s session shows risk indicators. An anonymous link to a finance SharePoint site, created by a user whose session originated from a known AiTM proxy IP, is about as clear an exfiltration indicator as you’ll get.
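The sharing variant can be sketched in the same shape — here the risk correlation is a simple membership check against users with recent risky sign-ins, which is an assumption; a production rule would match on session or IP rather than just user:

```kql
// Sketch: anonymous links / external shares by users with recent risky sign-ins.
let riskyUsers = SigninLogs
    | where TimeGenerated > ago(1d)
    | where RiskLevelDuringSignIn in ("medium", "high")
    | distinct Upn = tolower(UserPrincipalName);
OfficeActivity
| where TimeGenerated > ago(1d)
| where Operation in ("AnonymousLinkCreated", "SharingSet", "SharingInvitationCreated")
| extend Upn = tolower(UserId)
| where Upn in (riskyUsers)
| project TimeGenerated, Upn, Operation, OfficeObjectId, Site_Url
```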
4. Entra ID Privilege Escalation via Role Assignment
An attacker with a compromised Global Admin — or an attacker who compromises a Privileged Role Administrator — can assign themselves any Entra ID role. They can add their compromised account to Global Admin, or more subtly, they can activate a PIM-eligible role assignment outside of normal business hours without the approval workflow that legitimate activations require.
Microsoft has basic templates for role assignment changes, but they’re noisy in environments where role changes happen regularly (onboarding, project transitions, PIM activations). The default templates don’t distinguish between a legitimate PIM activation by an IT admin during business hours and a suspicious role activation at 3 AM by an account that just had its MFA method changed.
The detection you need watches the AuditLogs table for directory role assignment operations. The KQL concept: query for role assignment events where the target role is high-privilege (Global Admin, Exchange Admin, SharePoint Admin, Security Admin), and correlate with the assigning account’s recent activity. Flag when: the assignment happens outside business hours, the assigning account had a recent MFA registration change, the assigning account authenticated from an unusual location, or the same account both assigned and received the role (self-elevation).
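A sketch covering the off-hours and self-elevation conditions — the role list and business-hours window are assumptions, and the MFA-registration-change correlation would be an additional join against AuditLogs that is omitted here for brevity:

```kql
// Sketch: high-privilege role assignment with suspicious context.
// Role list and 07:00–19:00 business-hours window are assumptions to tune.
let privilegedRoles = dynamic(["Global Administrator", "Exchange Administrator",
    "SharePoint Administrator", "Security Administrator",
    "Privileged Role Administrator"]);
AuditLogs
| where OperationName in ("Add member to role", "Add eligible member to role")
| mv-expand Prop = TargetResources[0].modifiedProperties
| where tostring(Prop.displayName) == "Role.DisplayName"
| extend TargetRole = trim('"', tostring(Prop.newValue))
| where TargetRole in (privilegedRoles)
| extend Initiator = tostring(InitiatedBy.user.userPrincipalName),
         Target    = tostring(TargetResources[0].userPrincipalName)
| extend OutsideHours  = hourofday(TimeGenerated) < 7 or hourofday(TimeGenerated) > 19,
         SelfElevation = Initiator =~ Target
| where OutsideHours or SelfElevation
| project TimeGenerated, Initiator, Target, TargetRole, OutsideHours, SelfElevation
```

Note that `hourofday()` operates on the ingested UTC timestamp — convert to your local business timezone before relying on the off-hours flag.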
This detection would have caught the attacker who, after compromising a Privileged Role Administrator account via AiTM, activated a Global Admin PIM assignment at 01:47 on a Saturday — 6 hours after the phishing email was delivered. The built-in detection for “new Global Admin” fired, but it fires for every legitimate assignment too, and the SOC had tuned it to informational severity months ago.
5. Cross-Tenant Lateral Movement
In environments with B2B collaboration, guest accounts, or multi-tenant architectures, an attacker who compromises a user in Tenant A may be able to access resources in Tenant B through existing cross-tenant trust relationships. This is a growing attack surface as organisations adopt more B2B collaboration through Entra ID External Identities.
Microsoft has no default detection for cross-tenant lateral movement because the telemetry spans two tenants. Tenant A sees an outbound access. Tenant B sees an inbound guest access. Neither tenant sees the full picture unless you build a detection that correlates both.
The detection you need watches SigninLogs for cross-tenant authentication events where ResourceTenantId differs from HomeTenantId. The KQL concept: query for sign-in events where the user authenticates to a resource in a different tenant, and the sign-in risk level is elevated, or the user hasn’t accessed that tenant in the past 90 days, or the access happens outside business hours. In the target tenant, correlate with AuditLogs for any privileged actions taken by the guest account.
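From the source-tenant side, the outbound half can be sketched as follows — the off-hours window is an assumption, and the `leftouter` join against 90 days of history implements the "hasn't accessed that tenant recently" condition:

```kql
// Sketch: cross-tenant sign-ins that are risky, off-hours,
// or to a tenant the user hasn't touched in the past 90 days.
let knownAccess = SigninLogs
    | where TimeGenerated between (ago(90d) .. ago(1d))
    | where ResourceTenantId != HomeTenantId
    | distinct UserPrincipalName, ResourceTenantId
    | extend SeenBefore = true;
SigninLogs
| where TimeGenerated > ago(1d)
| where isnotempty(ResourceTenantId) and ResourceTenantId != HomeTenantId
| join kind=leftouter knownAccess on UserPrincipalName, ResourceTenantId
| extend NewTenantPair = isnull(SeenBefore)
| where RiskLevelDuringSignIn in ("medium", "high")
    or NewTenantPair
    or hourofday(TimeGenerated) < 7 or hourofday(TimeGenerated) > 19
| project TimeGenerated, UserPrincipalName, HomeTenantId, ResourceTenantId,
          RiskLevelDuringSignIn, NewTenantPair, IPAddress
```

The target-tenant half — correlating the guest account's privileged actions in AuditLogs — requires a cross-workspace query if you control both tenants, as discussed below.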
This is the hardest detection on this list to build because it requires visibility in both tenants. If you manage both tenants (common in holding company or subsidiary structures), you can build a cross-workspace query in Sentinel. If you don’t manage the other tenant, you’re limited to detecting the outbound access pattern from your side — which is still better than no detection at all.
The Operational Reality
These five detections aren’t theoretical. They map to real attack patterns seen in production M365 environments. The common thread is that Microsoft’s built-in detection content stops at the perimeter — credential theft, initial access, known malware. What happens after the attacker is inside your tenant, using legitimate credentials, operating through legitimate Microsoft services, is your problem to detect.
Detection engineering in M365 is not a project with a completion date. Your environment changes quarterly — new applications are onboarded, new B2B relationships are established, new PIM roles are created, new SharePoint sites store new categories of sensitive data. Every change shifts your baseline and potentially creates new detection gaps. If you built your custom detections 12 months ago and haven’t touched them since, your coverage is degrading.
The KQL concepts in this post are intentionally high-level. The implementation details — the specific table joins, the baseline calculation methods, the false positive tuning, the automated response actions — are where the real engineering happens. That’s what separates a detection that generates 200 alerts a day from one that fires twice a month on genuine threats.
If you’re running an M365 E5 environment with mostly default Sentinel templates, start with detection #1 (mailbox rules). It’s the highest-value detection on this list — every BEC attack uses inbox rules, and the default coverage is the weakest. Build it, validate it against your environment’s baseline, and tune it until the false positive rate is manageable. Then move to #2 and #3. By the time you’ve built all five, you’ll have closed the gaps that matter most.
Next week: the specific KQL patterns behind mailbox rule detection — including the OfficeActivity parsing that Microsoft’s documentation doesn’t explain well.
These five detections are covered in depth in Detection Engineering — 13 modules covering rule architecture, ATT&CK-mapped detection development, and the full detection lifecycle from hypothesis to production rule. The first two modules are free, no account required.