SADAR
Opinion
April 28, 2026

The Hidden Infrastructure Crisis of Agentic AI: Why We Need a Semantic Registry

The real promise of Agentic AI requires that agents be able to broadly discover other agents, tools, and resources such as files and databases. Today's implementations constrain this discovery by providing predefined lists of discoverable items and relying on the agent to (1) identify the need to use them, (2) understand the order in which they must be used, and (3) successfully match data, in both business meaning and syntax, as inputs and outputs. This works for predefined, low-risk flows but falls short of the vision of agents independently performing tasks. For that, we need to define not agents and tools but capabilities and data in business terms, in a Semantic Registry.

The era of experimenting with artificial intelligence is ending, and the agentic shift is underway. We are moving from models that simply answer questions to autonomous agents that can plan, discover resources, and execute complex business tasks. However, as organizations attempt to deploy these agents into high-stakes enterprise environments, they are hitting a wall.

The problem is not agent intelligence; today's agents reason exceptionally well. The problem is an infrastructure crisis. To safely deploy autonomous agents, enterprises require a Semantic Registry to bridge the massive governance and operational gaps that current AI frameworks ignore.

The Shift from Deterministic to Probabilistic Systems

Traditional enterprise software is deterministic. A developer writes explicit instructions, and given the same inputs, the system produces the same outputs every time. This predictability is the foundation of modern enterprise controls—it is how we test, assign access, monitor, and audit software.

Agentic AI fundamentally breaks these assumptions. Agents operate probabilistically, inferring actions and constructing execution chains dynamically at runtime. Given a task, an agent might decide to invoke different tools, interpret data differently, or construct a totally different work plan. This means that traditional security and governance controls, such as static Role-Based Access Control (RBAC), have no surface to attach to in a dynamic discovery environment.

The Six Gaps in Agentic AI Discovery

Right now, agent discovery relies on LLMs guessing based on free-form text. A developer writes a short prose description of a tool, and the consuming agent tries to match its inferred need against that description. This is like hiring an employee based on a three-line job posting, without a resume or interview, and hoping they understood the role exactly as you described it.

This brittle approach exposes six critical problems that demand a registry solution:

  1. No Shared Ontology: There is no standard vocabulary for capabilities. One developer might describe a tool as "retrieves customer data," while another calls an equivalent tool "fetches member profile".
  2. Data Semantic Fragility: Even if the right tool is selected, data exchange is a minefield. A "date" field could mean submission date, service date, or processing date. Without semantic grounding, data is silently misinterpreted by the agent, propagating errors downstream.
  3. Loss of Originating Authority: Every workflow starts with a human or system trigger carrying a specific scope of authority. But when one agent delegates to another, that originating context is lost. Downstream tools only see the calling agent's identity, resulting in a total loss of accountability for who initiated the action and why.
  4. Missing Business Context: Selecting a tool isn't just about finding a keyword match; an agent must know when and how to use it. Agents lack inherent awareness of process prerequisites, leading to out-of-sequence execution (e.g., approving an invoice before a credit check).
  5. Invisible Non-Functional Requirements (NFRs): Functional fit is only half the battle. Agents must also evaluate rate limits, SLAs, costs, data sovereignty, and compliance frameworks (like HIPAA or SOC 2). Today, this critical information is invisible during agent discovery.
  6. No First-Contact Trust: When an agent discovers a new capability, there is no mechanism to verify the provider's identity, negotiate commercial terms, or establish secure authorization without manual, pre-negotiated relationships.
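To make the second gap concrete, here is a minimal sketch of how data semantic fragility bites in practice. The tool names, field values, and the consumer function are all hypothetical, invented for illustration: two tools each return a field called "date", but one means date of service and the other means date of submission, and a naive consumer cannot tell the difference.

```python
from datetime import date

# Hypothetical outputs from two tools. Both expose a "date" field,
# but the business meaning differs -- and nothing in the payload says so.
claims_tool_output = {"claim_id": "C-1001", "date": "2026-03-14"}   # date of service
billing_tool_output = {"claim_id": "C-1001", "date": "2026-04-02"}  # date of submission

def days_since_service(record: dict) -> int:
    """Naive consumer: assumes 'date' always means date of service."""
    y, m, d = map(int, record["date"].split("-"))
    return (date(2026, 4, 28) - date(y, m, d)).days

# Both calls succeed syntactically -- no error is raised -- but the second
# silently answers a different business question than the one asked.
print(days_since_service(claims_tool_output))   # -> 45 (correct: days since service)
print(days_since_service(billing_tool_output))  # -> 26 (wrong: days since submission)
```

The failure is silent: there is no exception to catch and no log line to alert on, which is exactly why these errors propagate downstream.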

Merely assigning an identity to an agent in an enterprise directory (like Azure Entra ID) does not solve these issues. Identity only answers who the agent is; it does not answer what it is authorized to do, in what business context, or under whose authority.

The Solution: The SADAR Semantic Registry

Just as DNS, certificate authorities, and identity infrastructure made the internet reliable and secure, agentic AI requires a foundational directory to operate autonomously at scale. This is the purpose of the Semantic Agent Discovery and Attribution Registry (SADAR).

SADAR is not a runtime proxy or a new agent framework; it is an open-standard discovery and governance layer. It solves the agentic infrastructure crisis by providing:

  • Deterministic, Standards-Grounded Discovery: Instead of free-form text, capabilities are registered using established industry standards like NAICS (industry), APQC PCF (business process), and HL7 or X12 (data transactions). This means agents no longer guess; they match exact semantic contracts.
  • Bilateral Matching with NFRs: Non-functional requirements are elevated to first-class discovery criteria. A requesting agent can filter out services that are too expensive or non-compliant, while a provider can automatically refuse connections from requestors lacking required certifications (like FedRAMP or GDPR).
  • Business Process Integrity: Manifests explicitly declare a capability's place within a business process, defining strict predecessors and successors. This prevents agents from silently executing tasks out of sequence.
  • Verifiable Identity and Attribution: Every participant has a cryptographically verifiable identity anchored by the registry. Using artifacts like the SADAR Context Token, the system preserves the originating human's scope of authority across arbitrarily deep multi-agent chains, ensuring every action is fully auditable.
  • Registry Isolation: The registry operates strictly at discovery time. Once an agent discovers a compliant capability and negotiates credentials via OpenID Connect (OIDC), the agents interact directly. The registry never handles sensitive operational data, payment instruments, or runtime execution.
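The discovery model above can be sketched in a few lines. Everything here is a hypothetical illustration of the idea, not the actual SADAR manifest schema: capabilities are declared against standard codes (NAICS, APQC PCF), NFRs and process ordering are first-class fields, and matching filters on exact codes plus both sides' requirements.

```python
# Hypothetical manifests: the field names, the APQC code, and the NFR shape
# are assumptions for illustration only.
provider_manifests = [
    {
        "capability": "eligibility-check",
        "naics": "524114",            # direct health and medical insurance carriers
        "apqc_pcf": "9.2.1",          # illustrative process-classification code
        "nfrs": {"max_cost_per_call": 0.02, "certifications": {"HIPAA", "SOC2"}},
        "requires_from_requestor": {"HIPAA"},   # provider-side refusal criterion
        "predecessors": ["member-lookup"],      # must run after this process step
    },
    {
        "capability": "eligibility-check",
        "naics": "524114",
        "apqc_pcf": "9.2.1",
        "nfrs": {"max_cost_per_call": 0.10, "certifications": {"SOC2"}},
        "requires_from_requestor": set(),
        "predecessors": ["member-lookup"],
    },
]

def discover(manifests, naics, apqc_pcf, budget, requestor_certs, completed_steps):
    """Return manifests matching exact codes and satisfying both sides' NFRs."""
    matches = []
    for m in manifests:
        if (m["naics"], m["apqc_pcf"]) != (naics, apqc_pcf):
            continue                                    # semantic contract mismatch
        if m["nfrs"]["max_cost_per_call"] > budget:
            continue                                    # requestor-side NFR filter
        if not m["requires_from_requestor"] <= requestor_certs:
            continue                                    # provider-side refusal
        if not set(m["predecessors"]) <= completed_steps:
            continue                                    # process-integrity check
        matches.append(m)
    return matches

hits = discover(provider_manifests, "524114", "9.2.1",
                budget=0.05, requestor_certs={"HIPAA", "SOC2"},
                completed_steps={"member-lookup"})
print(len(hits))  # -> 1: only the in-budget provider survives bilateral filtering
```

Note that matching is deterministic set membership and code equality, not text similarity: the second provider is excluded by the requestor's budget, and a requestor lacking HIPAA certification would be refused by the first provider before any connection is attempted.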

The Prerequisite for Autonomy

Without a Semantic Registry, organizations are forced into an unacceptable choice: either restrict agent autonomy to tightly curated, low-risk sandboxes, or deploy them in a governance vacuum, accumulating massive compliance and operational liability.

Agentic AI represents the most significant shift in enterprise computing since the cloud, but its potential cannot be safely realized without foundational standards. The Semantic Registry is the infrastructure that transforms autonomous agents from experimental black boxes into trustworthy, auditable, and enterprise-ready systems.