As enterprises rush to deploy autonomous AI agents, a dangerous assumption has taken root: that if we know who an agent is, we can safely govern what it does. This assumption is driving a wave of enterprise agent identity initiatives, such as Microsoft's recent announcement allowing agents to be registered in Microsoft Entra ID.
While establishing an agent's identity is a meaningful and necessary capability, it is entirely insufficient for enterprise governance. Identity answers only one question: is this agent who it claims to be? It does not answer what the agent is trying to accomplish, whether it is invoking a tool at the correct stage of a workflow, or whether it has the authority to take action on a specific piece of data.
To build truly trustworthy and compliant autonomous systems, organizations must move beyond mere agent identity. An enterprise infrastructure must preserve and enforce the originating authority, the business process context, the transaction instance, and the state of the data being acted upon.
In traditional, deterministic software, identity and access control frameworks like Role-Based Access Control (RBAC) are sufficient because the what, why, and context of an action are encoded directly into the system's design. We know precisely what a system will do, when it will run, and for what purpose.
Agentic AI breaks these assumptions. Agents infer actions and construct execution chains dynamically at runtime. While RBAC can answer whether an agent can perform a class of operations, it has no mechanism to evaluate whether the agent should perform that specific operation right now, in this exact context. An agent registered in an enterprise directory proves that it is sanctioned to exist, but it says nothing about whether the capability it is about to invoke complies with the business and regulatory posture of the organization.
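The gap can be made concrete with a minimal sketch. The role table, `Invocation` fields, and stage names below are all illustrative assumptions, not any real directory's API; the point is that a classic RBAC check and a context-aware check can disagree on the same call:

```python
from dataclasses import dataclass

# Hypothetical role table: RBAC only knows classes of operations.
ROLE_PERMISSIONS = {
    "invoice-agent": {"invoice.read", "invoice.update"},
}

@dataclass
class Invocation:
    agent_role: str
    operation: str
    workflow_stage: str       # where in the live process this call occurs
    authorized_stages: tuple  # stages at which the operation is legitimate

def rbac_allows(inv: Invocation) -> bool:
    """Classic RBAC: can this role ever perform this class of operation?"""
    return inv.operation in ROLE_PERMISSIONS.get(inv.agent_role, set())

def context_allows(inv: Invocation) -> bool:
    """Contextual check: should the operation happen at THIS workflow stage?"""
    return rbac_allows(inv) and inv.workflow_stage in inv.authorized_stages

call = Invocation(
    agent_role="invoice-agent",
    operation="invoice.update",
    workflow_stage="post-approval",
    authorized_stages=("drafting", "review"),
)

print(rbac_allows(call))     # True: the identity and role check out
print(context_allows(call))  # False: wrong stage for this operation
```

The directory can only ever answer the first question; the second requires information that no identity record carries.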
Every business workflow begins with an initiating event—such as a human user request—that carries an explicit scope of authority. This establishes exactly who initiated the action and for what purpose.
However, as soon as an initial AI agent is invoked and begins discovering and delegating tasks to other resources, this originating scope of authority is typically lost. When an agent calls a second agent, and that agent calls a third, the downstream services only see the immediate calling agent's identity. They have no connection to the intent or access rights of the human or system that originally started the workflow.
Without an infrastructure layer that preserves this originating scope of authority across the entire execution chain, organizations lose the ability to detect and prevent privilege escalation or inappropriate execution. Every downstream action is attributed only to the agent performing it, leaving a massive governance void regarding who ultimately authorized the work and why.
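One way to picture what "preserving the originating scope of authority" means in practice is a context object that travels unchanged down the delegation chain, so that every downstream agent is checked against the initiator's grant rather than its immediate caller's role. The names and fields below are a sketch under that assumption, not a reference design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthorityContext:
    """Originating authority, carried immutably across every delegation."""
    initiator: str      # human or system that started the workflow
    purpose: str        # declared intent of the initiating event
    scope: frozenset    # operations the initiator actually authorized

def invoke(agent: str, operation: str, authority: AuthorityContext) -> str:
    # The check is against the ORIGINATING scope, not the calling agent's role.
    if operation not in authority.scope:
        raise PermissionError(
            f"{agent}: '{operation}' exceeds the authority granted by "
            f"{authority.initiator} for '{authority.purpose}'"
        )
    return f"{agent} executed {operation}"

# The initiating event fixes the scope once, at the start of the chain.
ctx = AuthorityContext(
    initiator="jane@corp.example",
    purpose="reconcile Q3 vendor invoices",
    scope=frozenset({"invoice.read", "invoice.match"}),
)

invoke("agent-a", "invoice.read", ctx)   # within the originating scope
invoke("agent-b", "invoice.match", ctx)  # scope travels with the chain
try:
    invoke("agent-c", "payment.issue", ctx)  # escalation attempt: blocked
except PermissionError as e:
    print(e)
```

Without the shared context object, agent-c would be judged only on agent-b's identity, and the escalation would be invisible.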
Scope of authority dictates who initiated a workflow, but business context defines what they were authorized to accomplish. "Selecting" an agent or a tool isn't just about finding one that matches a functional keyword; it requires understanding what the invocation means within a specific operational reality.
Consider an invoice. An invoice in an Accounts Payable workflow represents a purchase, whereas an invoice in an Accounts Receivable workflow represents a sale. They share the exact same data structure, but exist in completely different operational contexts. If an agent merely has access rights to "process invoices" but lacks grounding in the specific business process context, it may invoke capabilities at the wrong time or silently misinterpret results.
While business context defines the blueprint of an action, the transaction instance dictates where a specific invocation sits within a live execution.
To prevent out-of-sequence failures, every workflow must carry a unique transaction instance ID that links every agent invocation back to its originating event. This is not simply a correlation key for logging; it is the active mechanism that makes predecessor and successor relationships enforceable. The transaction instance connects the static rules of a business process (e.g., "credit evaluation must precede a financing offer") to the specific execution happening at that exact moment.
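A minimal sketch of that active mechanism, using the article's own credit-evaluation rule: the transaction instance records completed steps and refuses any invocation whose declared predecessors have not run. The rule table and class are assumptions for illustration:

```python
import uuid

# Static process rules: each step's required predecessors.
PROCESS_RULES = {
    "credit_evaluation": set(),
    "financing_offer": {"credit_evaluation"},
}

class TransactionInstance:
    """Links every invocation in one live execution back to its origin."""
    def __init__(self, originating_event: str):
        self.id = str(uuid.uuid4())
        self.originating_event = originating_event
        self.completed_steps = set()

    def invoke(self, step: str) -> str:
        missing = PROCESS_RULES[step] - self.completed_steps
        if missing:  # predecessor relationship violated: block, don't log
            raise RuntimeError(
                f"txn {self.id}: '{step}' invoked before {sorted(missing)}"
            )
        self.completed_steps.add(step)
        return f"txn {self.id}: {step} done"

txn = TransactionInstance("customer loan application")
try:
    txn.invoke("financing_offer")   # out of sequence: refused
except RuntimeError as e:
    print(e)
txn.invoke("credit_evaluation")
txn.invoke("financing_offer")       # predecessor satisfied: proceeds
```

This is the difference between a correlation ID used after the fact and an instance ID consulted before each step executes.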
Even if an agent possesses the correct identity and is operating within an approved process, the state of the business data fundamentally alters the safety and legality of an action.
Consider a simple read/write operation on a healthcare claim record. Editing an "in-process" claim is a routine workflow step. However, editing a "finalized" claim is a severe compliance event. The action is the same, and the identity performing the action is the same. The difference is the business context of the data being acted upon. No identity system or RBAC model can resolve that distinction, because those systems do not evaluate the lifecycle state of the business data.
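The claim example reduces to a guard on the data's lifecycle state, evaluated after identity and role have already passed. The state names and policy set below are assumptions for the sketch:

```python
from dataclasses import dataclass

# Hypothetical state policy: lifecycle states in which edits are routine.
EDITABLE_STATES = {"draft", "in_process"}

@dataclass
class Claim:
    claim_id: str
    state: str  # lifecycle state of the business data itself

def edit_claim(claim: Claim, agent: str) -> str:
    # Identity and role are assumed valid; the gate is on the DATA's state.
    if claim.state not in EDITABLE_STATES:
        raise PermissionError(
            f"{agent} may not edit claim {claim.claim_id}: "
            f"state '{claim.state}' is immutable under compliance policy"
        )
    return f"claim {claim.claim_id} updated"

print(edit_claim(Claim("C-100", "in_process"), "claims-agent"))  # routine step
try:
    edit_claim(Claim("C-101", "finalized"), "claims-agent")      # compliance event
except PermissionError as e:
    print(e)
```

Same function, same agent, opposite outcomes, driven entirely by a field that no identity system or role table ever inspects.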
Identity systems govern who. But the most critical governance failures in agentic AI revolve around what, why, in what context, and on whose authority.
If organizations rely on identity alone, they are deploying autonomous systems in a governance vacuum. To safely deploy agents into consequential workflows, enterprises require a semantic infrastructure layer—like the SADAR framework—that carries the originating scope of authority end-to-end, grounds actions in explicit business process context, enforces the state of the data, and ties every invocation to a specific transaction instance. Trustworthy AI requires verifiable accountability, and accountability demands far more than just a name badge.