Conditional Access — Engineering Experiments
Conditional Access is the enforcement engine of Microsoft Entra ID and one of its most frequently misunderstood security controls.
This page documents Conditional Access experiments conducted in isolated lab tenants to observe how policies are evaluated, enforced, bypassed, or skipped.
Everything listed here reflects observed behavior, not assumptions or documentation claims.
What This Page Is
This page provides a live index of Conditional Access experiments conducted on F11.ca.
It includes:
- A structured index of Conditional Access experiments
- Documented outcomes for specific configurations
- Identified patterns across multiple tests
- Links to detailed experiment records
This page is not a how-to guide or a best-practice reference.
Conditional Access Experiment Index
Each experiment ID links to a detailed record including configuration, logs, and observed behavior.
| ID | Category | Description | Result | Risk |
|---|---|---|---|---|
| CA-EXP-001 | Baseline | First enforced CA policy in a clean tenant | User access disruption | 🟠 Medium |
| CA-EXP-002 | Admin Scope | Global admin included unintentionally | Admin lockout | 🔴 High |
| CA-EXP-003 | Report-Only | Enforcement gap not visible in report-only | False sense of enforcement | 🔴 High |
| CA-EXP-004 | Exclusions | Emergency access exclusion validated | Recovery successful | 🟢 Low |
| CA-EXP-005 | Trusted Locations | MFA bypass via trusted IP | Unauthenticated session | 🔴 High |
| CA-EXP-006 | Session Controls | Sign-in frequency not enforced | Session persists | 🟠 Medium |
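As a sketch of the kind of check behind CA-EXP-004, the snippet below verifies that an emergency access account is excluded from every enabled policy. The policy objects mirror the Microsoft Graph `conditionalAccessPolicy` shape (`state`, `conditions.users.excludeUsers`); the sample policies and the break-glass object ID are hypothetical lab values, not data from the experiments.

```python
# Sketch: verify a break-glass account is excluded from every enabled CA policy.
# Policy objects mirror the Microsoft Graph conditionalAccessPolicy shape;
# the sample data and BREAK_GLASS_ID are hypothetical lab values.
BREAK_GLASS_ID = "00000000-0000-0000-0000-00000000beef"  # hypothetical object ID

policies = [
    {"displayName": "Require MFA for all users", "state": "enabled",
     "conditions": {"users": {"includeUsers": ["All"],
                              "excludeUsers": [BREAK_GLASS_ID]}}},
    {"displayName": "Block legacy auth", "state": "enabled",
     "conditions": {"users": {"includeUsers": ["All"], "excludeUsers": []}}},
]

def missing_exclusion(policies, account_id):
    """Return enabled policies that do not exclude the emergency account."""
    return [p["displayName"] for p in policies
            if p["state"] == "enabled"
            and account_id not in p["conditions"]["users"]["excludeUsers"]]

print(missing_exclusion(policies, BREAK_GLASS_ID))
# -> ['Block legacy auth']
```

Running a check like this before enabling a new policy is what turned CA-EXP-004 into the one low-risk result in the table.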
Experiment Categories
Experiments are organized to identify recurring failure patterns.
- Baseline: First-time or default enforcement behavior
- Admin Scope: Risks related to privileged and administrative targeting
- Policy Evaluation: Policies that are not evaluated or are silently skipped
- Report-Only: Visibility gaps prior to enforcement
- Exclusions: Break-glass scenarios and exclusion behavior
- Session Controls: Token reuse and session persistence
- Trusted Locations: Scenarios involving abuse of implicit trust


Experiment Methodology
All Conditional Access experiments use a consistent methodology:
- Define the policy intent.
- Configure users, applications, and conditions.
- Observe sign-in behavior and review logs.
- Compare expected and actual enforcement results.
- Document the security impact and key takeaways.
This approach ensures that experiments are repeatable, traceable, and defensible.
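Step 4 of the methodology can be sketched as a simple diff between an expected-enforcement map and the per-policy results observed in a sign-in record. The record shape follows the `appliedConditionalAccessPolicies` field in Microsoft Graph sign-in logs; the sample record and policy names are hypothetical.

```python
# Sketch: compare expected vs. observed CA enforcement per policy (step 4 above).
# Observed results follow the 'appliedConditionalAccessPolicies' shape from
# Microsoft Graph sign-in logs; the sample sign-in record is hypothetical.
expected = {"Require MFA for all users": "success",
            "Sign-in frequency 1h": "success"}

sign_in = {  # one exported sign-in record (hypothetical)
    "appliedConditionalAccessPolicies": [
        {"displayName": "Require MFA for all users", "result": "success"},
        {"displayName": "Sign-in frequency 1h", "result": "notApplied"},
    ]
}

def enforcement_gaps(sign_in, expected):
    """Return policies whose observed result differs from the expected one."""
    observed = {p["displayName"]: p["result"]
                for p in sign_in["appliedConditionalAccessPolicies"]}
    return {name: (want, observed.get(name, "missing"))
            for name, want in expected.items()
            if observed.get(name) != want}

print(enforcement_gaps(sign_in, expected))
# -> {'Sign-in frequency 1h': ('success', 'notApplied')}
```

Writing the expectation down first, then diffing it against the logs, is what makes a gap like CA-EXP-006's unenforced sign-in frequency visible instead of assumed away.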
Patterns Observed Across Experiments
Across multiple Conditional Access experiments, the same patterns recur:
- Policies can exist without being evaluated
- Report-only mode hides real enforcement gaps
- Trusted locations create silent bypass paths
- Session controls are frequently misunderstood
- MFA success is mistaken for security success
- Admin scope mistakes cause tenant-wide impact
You can find more details about these patterns in the linked experiment records.
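The first two patterns above can be surfaced mechanically: scan exported sign-in records for policies whose observed results never include real enforcement. The result strings (`notApplied`, `reportOnly*`) follow Microsoft Graph sign-in log conventions; the records themselves are hypothetical lab exports.

```python
# Sketch: flag policies that appear in sign-in logs but are never actually
# enforced (every result is 'notApplied' or a reportOnly* value) -- the
# "exists without being evaluated" and report-only patterns above.
from collections import defaultdict

NON_ENFORCING = {"notApplied", "reportOnlySuccess",
                 "reportOnlyFailure", "reportOnlyNotApplied"}

sign_ins = [  # hypothetical exported sign-in records
    {"appliedConditionalAccessPolicies": [
        {"displayName": "Require MFA", "result": "success"},
        {"displayName": "Block legacy auth", "result": "reportOnlyFailure"}]},
    {"appliedConditionalAccessPolicies": [
        {"displayName": "Require MFA", "result": "success"},
        {"displayName": "Block legacy auth", "result": "reportOnlyNotApplied"}]},
]

def never_enforced(sign_ins):
    """Policies whose observed results never include real enforcement."""
    results = defaultdict(set)
    for record in sign_ins:
        for p in record["appliedConditionalAccessPolicies"]:
            results[p["displayName"]].add(p["result"])
    return sorted(name for name, seen in results.items()
                  if seen <= NON_ENFORCING)

print(never_enforced(sign_ins))
# -> ['Block legacy auth']
```

A policy that only ever shows `reportOnlyFailure` is exactly the CA-EXP-003 situation: it looks deployed, but nothing is being blocked.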

Scope & Notes
All experiments took place in separate lab tenants.
The results differ depending on tenant age, licensing, and whether Continuous Access Evaluation (CAE) is available.
This page records what was observed, not what is recommended.
Relevant Microsoft documentation is cited where appropriate.

Conditional Access almost never fails in obvious ways.
Instead, it fails quietly, in ways that make everything appear to be working.
F11 - Full-Scale Engineering Mode
