Agentic AI represents a fundamental shift in how organizations interact with technology. Rather than relying on predefined workflows and rigid user interfaces, AI agents can autonomously plan, discover resources, and execute complex business tasks.
Instead of navigating multiple systems manually, organizations will be able to issue high-level objectives such as:
“Identify our most profitable customers, investigate open issues, and generate an action plan to improve retention.”
The agent will determine how to accomplish the task, discover the necessary systems and capabilities, and execute the work — across internal systems and external services alike.
This capability promises unprecedented productivity, flexibility, and automation. Realizing it requires infrastructure that does not yet exist.
Traditional enterprise systems are deterministic. Given the same input, they produce the same output. This makes them:
• Testable — behavior can be verified before deployment
• Repeatable — the same input always produces the same result
• Auditable — every action can be attributed and explained
• Predictable in access — the same users use the system in the same way, with the same intent, every time
Today’s controls and governance rely on this consistency. It is how we test, audit, and assign access.
Agents operate differently. They infer actions and outputs based on context and probabilities. Given identical inputs, an agent may:
• Select different tools or invoke different agents
• Use different data, or interpret the same data differently
• Choose not to use a necessary tool or agent at all
• Develop and execute an entirely different work plan
Even when outcomes appear consistent, the reasoning process is opaque and cannot guarantee consistency. The shift from deterministic to probabilistic operation fundamentally breaks the assumptions underlying enterprise governance — and with them, the controls organizations depend on for compliance, auditability, and operational reliability.[1]
Today’s agent ecosystems are fundamentally constrained by four structural gaps that no amount of model improvement will resolve. They are infrastructure gaps, not capability gaps.
Agents operate as “black boxes” with no verifiable attribution, no visibility into which capabilities were invoked, and no transparent execution record. This makes them inherently non-compliant with SOC2, ISO, FedRAMP, HIPAA, and similar frameworks — not because of scale, but because the compliance infrastructure does not exist.
Equally absent is any mechanism for compliance posture assertion and matching. A requesting agent has no way to signal that it operates under HIPAA, GDPR, or FedRAMP requirements, and a service holding sensitive data has no way to restrict access to requestors that assert compatible compliance postures. Compliance has traditionally been achieved through static configuration and cannot be asserted at all today; agentic dynamic discovery makes it a run-time concern.
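To make the idea concrete, here is a minimal sketch of what run-time posture matching could look like. The framework names, request shape, and subset-matching rule are illustrative assumptions, not part of any published standard.

```python
def posture_compatible(asserted: set, required: set) -> bool:
    """Admit a requestor only if it asserts every compliance posture
    the service holding the data requires."""
    return required.issubset(asserted)

# Hypothetical: a service guarding health data requires both postures.
service_requires = {"HIPAA", "FedRAMP"}

# Hypothetical: the requesting agent asserts its postures in its request.
request = {"agent_id": "agent-42", "postures": {"HIPAA", "FedRAMP", "GDPR"}}

print(posture_compatible(set(request["postures"]), service_requires))  # True
```

The point is not the matching rule itself but that the assertion travels with the request and is evaluated at discovery time, not baked into configuration.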
Agents select tools and interpret data probabilistically, without grounding in the business context that defines what those tools and data mean. Research confirms inconsistent tool usage even in severely constrained environments.[2] McKinsey and Gartner report that semantic fragmentation costs organizations $15M per year on average.[3]
If trained, experienced humans struggle to discern semantic differences between data in your organization (and most do), why should we expect agents to figure it out from simple descriptions?
Business intent ambiguity also produces out-of-sequence execution. Agents have no inherent awareness of process prerequisites or required ordering. The same research confirms that agents routinely invoke tools and services out of sequence. Worse, these failures are often silent: the agent reports success while the business process fails, creating integrity risks across the workflow. Without explicit process grounding, an agent cannot know that invoice approval must precede payment authorization, or that a credit evaluation is a required predecessor to a financing offer.
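The kind of explicit process grounding described above can be sketched as a prerequisite graph that an orchestration layer consults before each invocation, rejecting out-of-sequence calls instead of letting them fail silently. The step names and API here are hypothetical.

```python
# Hypothetical prerequisite graph: each step lists the steps that
# must complete before it may run.
PREREQUISITES = {
    "payment_authorization": {"invoice_approval"},
    "financing_offer": {"credit_evaluation"},
}

def may_invoke(step: str, completed: set) -> bool:
    """Allow a step only once all of its declared predecessors are done."""
    return PREREQUISITES.get(step, set()).issubset(completed)

print(may_invoke("payment_authorization", {"invoice_approval"}))  # True
print(may_invoke("payment_authorization", set()))                 # False
```

Encoding the ordering outside the agent means the check holds regardless of how the agent happens to plan on a given run.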
There are no enforceable controls for cost, rate limits, credential exchange, licensing compliance, or first-use authorization. Agents acquire and exercise authority with no mechanism for policy enforcement, no prior-approval workflow, and no capability to revoke access after the fact.
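As an illustration of the missing enforcement layer, the sketch below shows a hypothetical gate sitting between an agent and a capability, combining a cost budget, a per-minute rate limit, first-use approval, and after-the-fact revocation. The class, fields, and policy values are assumptions for illustration only.

```python
import time

class CapabilityGate:
    """Hypothetical enforcement point between an agent and a capability:
    a cost budget, a per-minute rate limit, a first-use approval that a
    prior-approval workflow must grant, and a revocation switch."""

    def __init__(self, budget: float, max_calls_per_min: int):
        self.budget = budget
        self.max_calls_per_min = max_calls_per_min
        self.call_times = []
        self.approved = False   # first use requires prior approval
        self.revoked = False    # access can be withdrawn after the fact

    def authorize(self, cost: float, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        if not self.approved or self.revoked:
            return False
        # Keep only calls inside the 60-second rate window.
        self.call_times = [t for t in self.call_times if now - t < 60]
        if len(self.call_times) >= self.max_calls_per_min or cost > self.budget:
            return False
        self.call_times.append(now)
        self.budget -= cost
        return True

gate = CapabilityGate(budget=10.0, max_calls_per_min=2)
print(gate.authorize(1.0))   # False: first use not yet approved
gate.approved = True
print(gate.authorize(1.0))   # True
```

Today no such gate exists in agent ecosystems; each of these checks must instead be reinvented ad hoc, if it is implemented at all.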
Traditional IAM and RBAC systems were designed for deterministic environments where the identity of the actor, the resource being accessed, and the nature of the operation are all known at design time. Static permission grants are meaningful only when we know what the agent will do, why it is doing it, and in what business context.
In an agentic world, capabilities are discovered dynamically at runtime, shifting what was traditionally design-time configuration into run-time policy enforcement. Because agents and tools are discovered at runtime, we cannot know ahead of time how they will be used in production, so the controls cannot be defined statically. Each usage must be adjudicated in the context of the originator’s authority, the business process intent, and the specifics of the requested action: is this agent or tool allowed to participate at this point in this business process, to operate on these items in their current state, at the request of the originating user or process?
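That adjudication question can be reduced to a sketch. The request fields and the single hard-coded policy rule below are illustrative assumptions; a real registry would evaluate policy against the originator, process step, and item state rather than constants.

```python
def adjudicate(request: dict) -> bool:
    """May this agent act, at this step, on these items, for this originator?
    Hypothetical rule: only a finance manager may trigger payment
    authorization, and only on items already in the 'approved' state."""
    originator_ok = request["originator_role"] == "finance_manager"
    step_ok = request["process_step"] == "payment_authorization"
    items_ok = all(i["state"] == "approved" for i in request["items"])
    return originator_ok and step_ok and items_ok

request = {
    "originator_role": "finance_manager",
    "process_step": "payment_authorization",
    "items": [{"id": "inv-7", "state": "approved"}],
}
print(adjudicate(request))  # True
```

The decisive difference from RBAC is that the answer depends on the live state of the process and the items, not on a permission granted at design time.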
This is not a scale problem. It is an architectural gap.
Without foundational infrastructure, organizations face an unacceptable choice: restrict agent autonomy to the point of limited utility, or accept governance risk that no enterprise compliance program can tolerate.
SADAR — the Semantic Agent Discovery and Authorization Registry — is an open standard that defines the foundational specifications required for enterprise-grade agent ecosystems. It enables agents to safely discover, authenticate to, and interact with capabilities — internal or external — without sacrificing attribution, governance, or control.
[1] ISACA Industry News, 2025. The Growing Challenge of Auditing Agentic AI. https://www.isaca.org/resources/news-and-trends/industry-news/2025/the-growing-challenge-of-auditing-agentic-ai
[2] Roig et al., December 2025. How Do LLMs Fail In Agentic Scenarios? A Qualitative Analysis of Success and Failure Scenarios of Various LLMs in Agentic Simulations. arXiv:2512.07497.
[3] Shereshevsky, A., January 2026. Why Enterprise AI Agents Are Failing. Medium. (Cites McKinsey and Gartner research.)