Enterprise AI adoption has exploded. McKinsey's State of AI 2025 reports that 88% of organizations now use AI in at least one business function. The Stanford AI Index 2026 reports that scaled agentic deployment, however, remains in the single digits across nearly every enterprise function — even though the same research highlights agents as the higher-value frontier.
62% of organizations cite security and risk as the #1 barrier to scaling agentic AI — outranking technical limitations, regulatory uncertainty, and tooling gaps combined.
— Stanford AI Index 2026
This isn't a capability problem. Organizations deploying agents into consequential workflows discover a structural gap, and find no infrastructure to fill it. Agentic AI workflows do not meet basic audit, control, and compliance requirements without either one-off custom solutions or constraints that remove the autonomy that made agents valuable in the first place.
Only 7% of executives say their organization has an up-to-date register of AI use.
— BDO Techtonic States Report 2025 (n=415 CEOs, CFOs, CIOs, risk and compliance leaders)
"You cannot define access lists for interactions that don't exist until the moment they happen."
Between January and April 2026, four independent research teams disclosed fifteen CVEs across the de facto agent protocols, MCP and A2A. The vendors' response was that the protocols are working as designed. They're right. The protocols are working as designed. The root cause: agent actions are evaluated only against probabilistic model reasoning, with no architectural surface for runtime authorization in business context.
— OX Security, Cyata, Check Point, GitHub Advisory Database (Jan–Apr 2026)
The same core architectural issue applies to A2A, although there it does not lead to remote code execution. This isn't a bug in MCP or A2A; the problem sits outside the scope of those standards, pointing to a gap in the architecture stack.
This pattern hasn't gone unnoticed by industry. In late 2025, the Fintech Open Source Foundation (FINOS) — a neutral body where regulated financial institutions collaborate on open standards — published v2.0 of its AI Governance Framework. It identified six agentic AI risks that map directly onto the same architectural gap.

Three of the six risks are addressed by what a context layer does: time-of-execution evaluation of authority, business intent, and process state. The remaining three are addressed by how the layer is built: neutral stewardship, manifest integrity, and verifiable attribution. They aren't six different problems. They're six places the same missing layer surfaces.
When an agent selects a tool, invokes a service, or delegates to another agent, the decision happens inside the model. No structured record exists of what capabilities were considered, what criteria were applied, or why one was selected over another. This isn't a logging problem — the information doesn't exist to log. For low-risk automation, that's tolerable. For workflows where "why did this happen?" needs an answer — financial operations, healthcare, supply chain, regulated decisions — the absence isn't a gap you can retrofit. It's a structural property of how the system was built.
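The missing record is easy to sketch. The schema below is illustrative rather than proposed: every name in it (ToolSelectionRecord and its fields) is hypothetical, and nothing like it exists in MCP or A2A today. The structural point it makes is that the record has to be produced at decision time, by whatever layer mediates the call, because the model's internal deliberation cannot be recovered from logs afterward.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical schema for a decision-time record; names are illustrative only.
@dataclass
class ToolSelectionRecord:
    transaction_id: str                 # links the decision to the originating event
    agent_id: str                       # the agent that made the selection
    candidates_considered: list[str]    # every capability that was in scope
    criteria_applied: list[str]         # the selection criteria, stated explicitly
    selected: str                       # the capability actually invoked
    rationale: str                      # why this one over the others
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Emitted by the mediating layer at the moment of selection -- not
# reconstructed from logs, because the information never reaches the logs.
record = ToolSelectionRecord(
    transaction_id=str(uuid.uuid4()),
    agent_id="invoice-triage-agent",
    candidates_considered=["erp.lookup_invoice", "email.request_clarification"],
    criteria_applied=["invoice number present", "amount under approval threshold"],
    selected="erp.lookup_invoice",
    rationale="Invoice number was present; lookup is the lower-risk action.",
)
print(json.dumps(asdict(record), indent=2))
```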
Every agentic workflow begins with an initiating event carrying explicit authority: this identity, in this capacity, initiated this action for this purpose. That scope is the product of deliberate access policy and process design. Once the first agent runs, it discovers and calls whatever resources it can find. Does the originating authority travel with it? Does the next service know what initiated this chain? Or does it just see the calling agent's identity, disconnected from the originating intent? The configuration-based controls that govern traditional systems have no surface to attach to in a runtime-discovered chain.
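A minimal sketch makes the question concrete. The AuthorityContext object below is hypothetical; the point is that every hop in the chain would receive the originating authority alongside the immediate caller's identity, instead of seeing only the agent one step upstream.

```python
from dataclasses import dataclass

# Hypothetical: the originating authority, captured at the initiating event
# and carried immutably through every downstream call.
@dataclass(frozen=True)
class AuthorityContext:
    originator: str        # the identity that initiated the workflow
    capacity: str          # the capacity in which they acted
    purpose: str           # what the chain was initiated to accomplish
    transaction_id: str    # one id spanning the whole chain

def invoke(service: str, caller: str, authority: AuthorityContext) -> None:
    # The downstream service sees BOTH who is calling and on whose authority
    # the whole chain runs -- today it sees only `caller`.
    print(f"{service}: called by {caller}, "
          f"on authority of {authority.originator} ({authority.capacity}), "
          f"purpose={authority.purpose!r}, txn={authority.transaction_id}")

auth = AuthorityContext(
    originator="j.doe",
    capacity="accounts-payable clerk",
    purpose="settle invoice INV-4711",
    transaction_id="txn-001",
)
# Agent A delegates to agent B; the authority context travels unchanged,
# rather than being replaced by each agent's own identity at every hop.
invoke("erp-service", caller="agent-A", authority=auth)
invoke("payment-service", caller="agent-B", authority=auth)
```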
Read the deep dive

Scope of authority establishes who initiated a workflow. Business context defines what they were authorized to accomplish: the specific process, function, and data involved. For that context to be enforceable rather than merely descriptive, it has to be machine-readable and grounded in standards independent of any single organization's terminology. Today, business intent is never part of the authorization decision. An auditor reconstructing an execution sees system-level traces, not business-level reasoning. The result: a record that proves the call happened, but can't explain whether it should have.
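As a sketch of what enforceable business context could look like, the snippet below evaluates a capability call against the business process and function it claims to serve, at the moment of execution. The policy table and every name in it are hypothetical; in practice the process vocabulary would come from a shared, organization-independent taxonomy rather than ad-hoc strings.

```python
# Hypothetical policy: (business process, function) -> capabilities
# permitted in that context. Illustrative only.
POLICY = {
    ("accounts-payable", "invoice-settlement"): {"erp.lookup_invoice", "erp.post_payment"},
    ("accounts-payable", "vendor-onboarding"): {"erp.create_vendor"},
}

def authorize(capability: str, process: str, function: str) -> bool:
    """Evaluate the call against business context at time of execution."""
    allowed = POLICY.get((process, function), set())
    decision = capability in allowed
    # The resulting record carries business-level reasoning,
    # not just a system-level trace that the call occurred.
    print(f"{'ALLOW' if decision else 'DENY '} {capability} "
          f"[process={process}, function={function}]")
    return decision

authorize("erp.post_payment", "accounts-payable", "invoice-settlement")  # ALLOW
authorize("erp.post_payment", "accounts-payable", "vendor-onboarding")   # DENY:
# same agent, same tool, wrong business context
```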
A business process defines what must be complete before this step is appropriate, and what this step is expected to enable. Without a transaction instance that links every invocation in the chain to its originating event, those predecessor and successor relationships exist only in documentation — never enforced at runtime. Downstream actions are attributed to the agent performing them, not the chain that produced them. The enterprise loses the ability to answer: did this happen in the right sequence, on behalf of the right originator, within the bounds of the right transaction?
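Again a sketch, with hypothetical names throughout: a transaction instance that ties each step to its originating event and refuses steps whose prerequisites haven't completed, so the predecessor relationships in the process model are enforced at runtime rather than documented and hoped for.

```python
# Illustrative process model: each step's required predecessors.
PREREQUISITES = {
    "post-payment": {"match-invoice", "approve-invoice"},
}

class TransactionInstance:
    """Hypothetical: one instance per originating event, spanning the chain."""

    def __init__(self, transaction_id: str, originator: str):
        self.transaction_id = transaction_id
        self.originator = originator
        self.completed: set[str] = set()

    def execute(self, step: str, agent: str) -> bool:
        missing = PREREQUISITES.get(step, set()) - self.completed
        if missing:
            print(f"DENY {step}: prerequisites not complete: {sorted(missing)}")
            return False
        self.completed.add(step)
        # Attribution goes to the chain and its originator, not just the agent.
        print(f"ALLOW {step} by {agent} in txn {self.transaction_id} "
              f"on behalf of {self.originator}")
        return True

txn = TransactionInstance("txn-001", originator="j.doe")
txn.execute("post-payment", agent="payment-agent")    # denied: out of sequence
txn.execute("match-invoice", agent="matching-agent")
txn.execute("approve-invoice", agent="approval-agent")
txn.execute("post-payment", agent="payment-agent")    # now allowed
```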
It's a reasonable instinct. If we know who the agent is — if it's registered in the enterprise directory — surely we can govern what it does. This is the reasoning behind initiatives like Microsoft's registration of agents in Azure Entra ID. Agent identity is necessary. It is not sufficient.
Identity answers who. It doesn't answer:

- What is this agent authorized to do at this moment, in this workflow?
- Why was this action selected over the alternatives?
- In what business context is it executing?
- On whose authority does the chain run?
These aren't gaps that identity systems can be extended to fill. They're structurally outside what identity is designed to do. Identity governs who. The gaps above are about what, why, in what context, and on whose authority — and those are architectural questions, not credential questions.
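The distinction is easy to make concrete. In the hypothetical sketch below, the identity check passes because the agent is registered and credentialed, yet every question in the list above stays unanswered unless a separate, context-aware check runs alongside it.

```python
# Hypothetical contrast: identity check alone vs. identity plus a
# context-layer check. All names and structures are illustrative.

DIRECTORY = {"invoice-agent"}  # agents registered in the enterprise directory

def identity_check(agent: str) -> bool:
    return agent in DIRECTORY  # answers only: who is this?

def context_check(agent: str, action: str, process: str, on_behalf_of: str) -> bool:
    # Answers: what, in what context, on whose authority.
    # (Policy shown inline for brevity; real policy would live externally.)
    return (action == "erp.lookup_invoice"
            and process == "accounts-payable"
            and on_behalf_of == "j.doe")

# Identity alone waves this through: the agent is registered, so the
# credential is valid -- even for an action outside its mandate.
print(identity_check("invoice-agent"))                  # True
print(context_check("invoice-agent", "erp.post_payment",
                    "accounts-payable", "j.doe"))       # False: right agent, wrong action
```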
Read the full analysis: Why agent identity isn't enough

The four failure modes share a root. They aren't failures of model intelligence, agent capability, or enterprise security practice. They're the predictable consequence of deploying a fundamentally dynamic technology into infrastructure designed for deterministic systems, and of discovering that the layer connecting the two has never been built.