
Conditional Access — Engineering Experiments

Conditional Access is the enforcement engine of Microsoft Entra ID and is often one of the most misunderstood security controls.

This page documents Conditional Access experiments conducted in isolated lab tenants to observe how policies are evaluated, enforced, bypassed, or skipped.

Everything listed here reflects observed behavior, not assumptions or documentation claims.





What This Page Is

This page provides a live index of Conditional Access experiments conducted on F11.ca.

This page includes:

  • A structured index of Conditional Access experiments
  • Documented outcomes for specific configurations
  • Identified patterns across multiple tests
  • Links to detailed experiment records

This page is not a how-to guide or a best-practice reference.

Conditional Access Experiment Index

Each experiment ID links to a detailed record including configuration, logs, and observed behavior.

| ID | Category | Description | Result | Risk |
| --- | --- | --- | --- | --- |
| CA-EXP-001 | Baseline | First enforced CA policy in a clean tenant | User access disruption | 🟠 Medium |
| CA-EXP-002 | Admin Scope | Global admin included unintentionally | Admin lockout | 🔴 High |
| CA-EXP-003 | Report-Only | Enforcement gap not visible in report-only | False sense of enforcement | 🔴 High |
| CA-EXP-004 | Exclusions | Emergency access exclusion validated | Recovery successful | 🟢 Low |
| CA-EXP-005 | Trusted Locations | MFA bypass via trusted IP | Unauthenticated session | 🔴 High |
| CA-EXP-006 | Session Controls | Sign-in frequency not enforced | Session persists | 🟠 Medium |

Experiment Categories

Experiments are organized to identify recurring failure patterns.

  • Baseline: First-time or default enforcement behavior
  • Admin Scope: Risks related to privileged and administrative targeting
  • Policy Evaluation: Policies that are not evaluated or are silently skipped
  • Report-Only: Visibility gaps prior to enforcement
  • Exclusions: Break-glass scenarios and exclusion behavior
  • Session Controls: Token reuse and session persistence
  • Trusted Locations: Scenarios involving abuse of implicit trust


Experiment Methodology

All Conditional Access experiments use a consistent methodology:

  1. Define the policy intent.
  2. Configure users, applications, and conditions.
  3. Observe sign-in behavior and review logs.
  4. Compare expected and actual enforcement results.
  5. Document the security impact and key takeaways.

This approach ensures that experiments are repeatable, traceable, and defensible.
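The comparison step (expected vs. actual enforcement) can be sketched as a small script. This is a hypothetical example, not part of the experiment records: the record shape mirrors the `appliedConditionalAccessPolicies` collection that Microsoft Graph returns on a sign-in log entry, but the policy names and data here are invented.

```python
import json

# Trimmed, hypothetical sign-in record. The field names follow the
# Microsoft Graph signIn schema (appliedConditionalAccessPolicies,
# displayName, result); the values are mock lab data.
sign_in = json.loads("""
{
  "userPrincipalName": "test.user@lab.example",
  "appliedConditionalAccessPolicies": [
    {"displayName": "CA-EXP-001 Require MFA", "result": "success"},
    {"displayName": "CA-EXP-003 Block legacy auth", "result": "reportOnlySuccess"},
    {"displayName": "CA-EXP-006 Sign-in frequency", "result": "notApplied"}
  ]
}
""")

# Policies we expected to enforce for this sign-in (experiment intent).
expected_enforced = {"CA-EXP-001 Require MFA", "CA-EXP-006 Sign-in frequency"}

def enforcement_gaps(entry, expected):
    """Return expected policies that did not actually enforce."""
    actually_enforced = {
        p["displayName"]
        for p in entry["appliedConditionalAccessPolicies"]
        if p["result"] == "success"
    }
    return sorted(expected - actually_enforced)

print(enforcement_gaps(sign_in, expected_enforced))
```

Note that `reportOnlySuccess` and `notApplied` both count as gaps here: the policy existed, but nothing was enforced at sign-in.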

Patterns Observed Across Experiments

Across multiple Conditional Access experiments, the same patterns recur:

  • Policies can exist without being evaluated
  • Report-only mode hides real enforcement gaps
  • Trusted locations create silent bypass paths
  • Session controls are frequently misunderstood
  • MFA success is mistaken for security success
  • Admin scope mistakes cause tenant-wide impact

You can find more details about these patterns in the linked experiment records.
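The first pattern, policies that exist without being evaluated, can be surfaced from logs rather than guessed at. The sketch below is illustrative only (mock records, invented policy names); it scans a batch of sign-in entries shaped like the Graph `signIn` schema and flags policies whose result never leaves the not-applied states.

```python
from collections import defaultdict

# Hypothetical batch of sign-in records; field names assumed from the
# Microsoft Graph signIn schema, data invented for illustration.
sign_ins = [
    {"appliedConditionalAccessPolicies": [
        {"displayName": "Require MFA", "result": "success"},
        {"displayName": "Block legacy auth", "result": "notApplied"},
    ]},
    {"appliedConditionalAccessPolicies": [
        {"displayName": "Require MFA", "result": "success"},
        {"displayName": "Block legacy auth", "result": "notApplied"},
    ]},
]

def never_evaluated(entries):
    """Policies present in the logs that never actually applied."""
    results = defaultdict(set)
    for entry in entries:
        for policy in entry["appliedConditionalAccessPolicies"]:
            results[policy["displayName"]].add(policy["result"])
    # A policy whose only observed results are notApplied/notEnabled
    # exists in the tenant but never enforced anything.
    return sorted(
        name for name, seen in results.items()
        if seen <= {"notApplied", "notEnabled"}
    )

print(never_evaluated(sign_ins))
```

A policy flagged this way is not necessarily misconfigured, but it deserves the same scrutiny the experiment records give to silently skipped policies.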



Scope & Notes

All experiments took place in separate lab tenants.

The results differ depending on tenant age, licensing, and whether Continuous Access Evaluation (CAE) is available.

This page records what was observed, not what is recommended.

Relevant Microsoft documentation is cited where appropriate.


Conditional Access almost never fails in obvious ways.

Instead, it often fails in ways that look like everything is working.

F11 - Full-Scale Engineering Mode
