Your agents are making decisions and taking action. Do you know what they're doing — and why?

Agents plan, discover resources, and execute autonomously. That's the goal of agentic AI. But enterprises need something else: a verifiable answer to what happened, why, and under whose authority.

The problem isn't agent intelligence. Today's agents reason well. The problem is that when an agent acts, the business context of why it was acting, the scope of authority it was operating under, and the chain of decisions that led to that action are invisible — not reconstructed after the fact, but genuinely absent from the execution itself. Traditional controls were built for deterministic systems where you knew what ran, when, who could run it, and for what purpose. Agentic AI breaks every one of those assumptions simultaneously.

What's actually missing?

The gap between what agents do and what organizations can see, audit, and explain.

Enterprise AI adoption has exploded. McKinsey's State of AI 2025 reports that 88% of organizations now use AI in at least one business function. The Stanford AI Index 2026, however, reports that scaled agentic deployment remains in the single digits across nearly every enterprise function — even though the same research highlights agents as the higher-value frontier.

62% of organizations cite security and risk as the #1 barrier to scaling agentic AI — outranking technical limitations, regulatory uncertainty, and tooling gaps combined. — Stanford AI Index 2026

This isn't because the technology isn't capable. Organizations deploying agents into consequential workflows discover a structural gap — and find no infrastructure to fill it. Agentic AI flows do not meet basic audit, control, and compliance requirements without one-off custom solutions or constraints that strip out the autonomy that made agents valuable in the first place.

Only 7% of executives say their organization has an up-to-date register of AI use. — BDO Techtonic States Report 2025 (n=415 CEOs, CFOs, CIOs, risk and compliance leaders)
"You cannot define access lists for interactions that don't exist until the moment they happen."
What This Gap Causes

Fifteen CVEs in four months. One cause: a missing architectural layer.

Between January and April 2026, four independent research teams disclosed fifteen CVEs across the de facto agent protocols — MCP and A2A. The vendors' response was that the protocols are working as designed. They're right. The protocols are working as designed. The root cause: agent actions evaluated only against probabilistic model reasoning, with no architectural surface for runtime authorization in business context. — OX Security, Cyata, Check Point, GitHub Advisory Database (Jan–Apr 2026)

The same core architectural issue applies to A2A, though without leading to remote code execution. This isn't a bug in MCP or A2A — it lies outside the scope of those standards, pointing to a gap in the architecture stack.

This pattern hasn't gone unnoticed by industry. In late 2025, the Fintech Open Source Foundation (FINOS) — a neutral body where regulated financial institutions collaborate on open standards — published v2.0 of its AI Governance Framework. It identified six agentic AI risks that map directly onto the same architectural gap.

Three of the six risks are addressed by what a context layer does: time-of-execution evaluation of authority, business intent, and process state. The remaining three are addressed by how the layer is built: neutral stewardship, manifest integrity, and verifiable attribution. They aren't six different problems. They're six places the same missing layer surfaces.

Where the Gap Shows Up

You don't know what the agent called or why.

Discovery is Opaque

You can see the call. You don't know why.

When an agent selects a tool, invokes a service, or delegates to another agent, the decision happens inside the model. No structured record exists of what capabilities were considered, what criteria were applied, or why one was selected over another. This isn't a logging problem — the information doesn't exist to log. For low-risk automation, that's tolerable. For workflows where "why did this happen?" needs an answer — financial operations, healthcare, supply chain, regulated decisions — the absence isn't a gap you can retrofit. It's a structural property of how the system was built.
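What a structured selection record could look like if it existed is easy to sketch. The schema below is hypothetical — the field names and the `erp.*` capability identifiers are illustrative, not drawn from any existing agent runtime — but it shows the information that today is never captured: candidates considered, criteria applied, and the rationale for the final choice.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ToolSelectionRecord:
    """Structured record of one tool-selection decision (hypothetical schema)."""
    workflow_id: str
    candidates_considered: list[str]  # capabilities the agent evaluated
    selection_criteria: list[str]     # criteria applied, in order
    selected: str                     # the capability actually invoked
    rationale: str                    # model-supplied justification
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ToolSelectionRecord(
    workflow_id="wf-8841",
    candidates_considered=["erp.create_invoice", "erp.update_invoice"],
    selection_criteria=["matches requested operation", "least privilege"],
    selected="erp.create_invoice",
    rationale="Request asked for a new invoice; no existing invoice ID given.",
)
print(asdict(record)["selected"])  # erp.create_invoice
```

The point isn't the schema — it's that nothing in today's agent stack produces this record at the moment the decision is made, so there is nothing to log.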

Read the deep dive
Authority Doesn't Travel

The originator's scope of authority is lost.

Every agentic workflow begins with an initiating event carrying explicit authority: this identity, in this capacity, initiated this action for this purpose. That scope is the product of deliberate access policy and process design. Once the first agent runs, it discovers and calls whatever resources it can find. Does the originating authority travel with it? Does the next service know what initiated this chain? Or does it just see the calling agent's identity, disconnected from the originating intent? The configuration-based controls that govern traditional systems have no surface to attach to in a runtime-discovered chain.
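A minimal sketch of the alternative — originating authority that travels with the chain — might look like the following. All names (`OriginAuthority`, the `erp.*` capabilities, the roles) are illustrative assumptions, not an existing API; the sketch shows only the shape of the idea: each hop receives the initiating identity, capacity, purpose, and scope, and can refuse calls that exceed them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OriginAuthority:
    """Immutable record of the initiating event (fields are illustrative)."""
    principal: str       # identity that initiated the workflow
    capacity: str        # capacity in which they acted
    purpose: str         # declared business purpose
    scope: frozenset     # resources the originator was authorized to touch

def invoke(capability: str, origin: OriginAuthority) -> str:
    # Each hop sees the ORIGINATING authority, not just the calling
    # agent's identity, and rejects calls outside the original scope.
    if capability not in origin.scope:
        raise PermissionError(f"{capability} exceeds originating scope")
    return f"{capability} executed for {origin.principal} ({origin.purpose})"

origin = OriginAuthority(
    principal="jdoe",
    capacity="accounts-payable-clerk",
    purpose="pay-vendor-invoice",
    scope=frozenset({"erp.read_invoice", "erp.schedule_payment"}),
)

print(invoke("erp.schedule_payment", origin))  # allowed: within originating scope
# invoke("erp.delete_vendor", origin)          # would raise PermissionError
```

In today's runtime-discovered chains, no equivalent of `origin` is carried from hop to hop — each service sees only the identity of the agent calling it.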

Read the deep dive
No Business Context

Agent calls are traceable, not explainable.

Scope of authority establishes who initiated a workflow. Business context defines what they were authorized to accomplish — the specific process, function, and data involved. For that context to be enforceable rather than merely descriptive, it has to be machine-readable and grounded in standards independent of any single organization's terminology. Today, business intent is never part of the authorization decision. An auditor reconstructing an execution sees system-level traces, not business-level reasoning. The result: a record that proves the call happened, but can't explain whether it should have.
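To make the distinction concrete, here is a toy authorization check in which business context is an input to the decision rather than a label recorded afterward. The process names and capability mapping are invented for illustration — the point is only that the same call can be legitimate in one business process and not in another, which is exactly what a system-level trace cannot express.

```python
# Machine-readable mapping: business process -> capabilities it may use.
# Process and capability names are hypothetical.
PROCESS_CAPABILITIES = {
    "procure-to-pay": {"erp.read_invoice", "erp.schedule_payment"},
    "order-to-cash": {"crm.read_order", "erp.issue_credit"},
}

def authorize(capability: str, business_process: str) -> bool:
    """Allow a call only if it belongs to the declared business process."""
    allowed = PROCESS_CAPABILITIES.get(business_process, set())
    return capability in allowed

# The identical capability, two different answers depending on intent:
print(authorize("erp.schedule_payment", "procure-to-pay"))  # True
print(authorize("erp.schedule_payment", "order-to-cash"))   # False
```

Without the declared `business_process` as an input, the decision collapses to "the agent had the credential," which is the record auditors see today.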

Read the deep dive
Transaction State is Invisible

Steps get logged. The business purpose isn't.

A business process defines what must be complete before this step is appropriate, and what this step is expected to enable. Without a transaction instance that links every invocation in the chain to its originating event, those predecessor and successor relationships exist only in documentation — never enforced at runtime. Downstream actions are attributed to the agent performing them, not the chain that produced them. The enterprise loses the ability to answer: did this happen in the right sequence, on behalf of the right originator, within the bounds of the right transaction?
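The predecessor/successor relationships described above can be sketched as runtime enforcement rather than documentation. The process definition and step names below are an invented example (a simplified invoice flow), assuming a transaction instance object that tracks completed steps and blocks out-of-sequence calls.

```python
# Hypothetical process definition: step -> steps that must complete first.
PROCESS = {
    "receive_invoice": set(),
    "match_purchase_order": {"receive_invoice"},
    "schedule_payment": {"receive_invoice", "match_purchase_order"},
}

class TransactionInstance:
    """Links every invocation in a chain to one originating transaction."""
    def __init__(self, txn_id: str):
        self.txn_id = txn_id
        self.completed: set[str] = set()

    def execute(self, step: str) -> None:
        missing = PROCESS[step] - self.completed
        if missing:
            raise RuntimeError(f"{step} blocked; incomplete predecessors: {missing}")
        self.completed.add(step)

txn = TransactionInstance("txn-2093")
txn.execute("receive_invoice")
txn.execute("match_purchase_order")
txn.execute("schedule_payment")  # succeeds: predecessors are complete
```

Attempting `schedule_payment` on a fresh instance would raise immediately — the sequencing question ("did this happen in the right order, within the right transaction?") becomes answerable at runtime instead of in a post-incident review.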

Read the deep dive
What We Hear Most

Why Agent Identities Aren't Enough

Access can't be pre-defined when you don't know how an agent or tool will be used.

It's a reasonable instinct. If we know who the agent is — if it's registered in the enterprise directory — surely we can govern what it does. This is the reasoning behind initiatives like Microsoft's registration of agents in Microsoft Entra ID. Agent identity is necessary. It is not sufficient.

Identity answers who. It doesn't answer:

  • whether the originating scope of authority has been preserved through the chain that led to this moment
  • whether this specific call is appropriate to the business process the agent is executing
  • which transaction instance this invocation belongs to, and what predecessor steps should have completed first
  • whether the capability about to be invoked is compatible with the compliance posture of the organization that initiated the workflow

These aren't gaps that identity systems can be extended to fill. They're structurally outside what identity is designed to do. Identity governs who. The gaps above are about what, why, in what context, and on whose authority — and those are architectural questions, not credential questions.
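The gap between the two kinds of questions can be shown in a few lines. This is a deliberately minimal, hypothetical contrast — the directory, field names, and call shape are invented — illustrating that an identity check can pass while the contextual questions above simply have no inputs to evaluate.

```python
# Hypothetical enterprise directory of registered agent identities.
REGISTERED_AGENTS = {"invoice-agent"}

def identity_check(agent_id: str) -> bool:
    """Answers 'who': is this a known, registered agent?"""
    return agent_id in REGISTERED_AGENTS

def context_check(call: dict) -> bool:
    """Answers 'what, why, on whose authority' — needs fields that
    identity systems don't carry and this call doesn't have."""
    required = {"originating_scope", "business_process", "transaction_id"}
    return required.issubset(call)

call = {"agent": "invoice-agent", "capability": "erp.schedule_payment"}
print(identity_check(call["agent"]))  # True: 'who' is answered
print(context_check(call))            # False: the other questions are not
```

The identity check succeeds and the governance questions remain unanswerable — not because the check is weak, but because the inputs they need never exist on the call.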

Read the full analysis: Why agent identity isn't enough
The Missing Infrastructure Layer

What's missing isn't better agents or models. It's a governance plane.

The four failure modes share a root. They aren't failures of model intelligence, agent capability, or enterprise security practice. They're the predictable consequence of deploying a fundamentally dynamic technology into infrastructure designed for deterministic systems — and discovering that the layer connecting the two has never been built.