SADAR
Article
April 28, 2026

The Missing Agentic Infrastructure

Agentic AI promises autonomous systems that can discover, reason, and act—but the infrastructure required to make that safe and reliable doesn’t exist yet. Today’s agents can execute tasks, but they cannot correctly discover, interpret, or govern interactions with tools and other agents. Until we solve this infrastructure gap, agentic AI will remain confined to controlled demos rather than real-world enterprise use.

The Promise — and the Prerequisite Nobody Is Building

The holy grail of agentic AI is a system that truly operates autonomously. Given a set of goals and constraints, it can identify tasks, plan how to accomplish them, discover the resources it needs (other agents, tools, APIs), execute, evaluate the results, and iterate until the objective is met. This is more than a predefined workflow with an LLM choosing between hard-coded steps. Here an agent reasons about what it needs, finds it, and uses it.

Admittedly, this is the vision. The reality is that many (most) tasks aren't that open ended, making a more deterministic flow acceptable. However, that doesn't fully solve the problem without creating tightly coupled and, therefore, rigid flows. We must break tool/agent use down into seven distinct components:

- Identifying that a tool or agent is needed to begin with

- Knowing when to invoke the tool or agent

- Understanding what information the tool or agent requires syntactically and semantically

- Understanding the data returned by the tool or agent syntactically and semantically

- Understanding non-functional requirements as part of discovery and usage

- Validating agent identity and provenance

- Attribution of agent behavior when agents are discovering one another

 

Agents today can reason and execute, but they cannot safely discover and interact with capabilities in a way that meets enterprise requirements for correctness, auditability, identity, and governance.

 

This is not a model issue but an infrastructure limitation. The internet solved a similar problem through foundational infrastructure such as DNS, certificate authorities, and identity systems. Agent ecosystems require analogous infrastructure: a registry that allows agents to discover, identify, evaluate, and safely interact autonomously.

Agent and Tools Use

Identifying that a tool or agent is needed to begin with. In a fully autonomous system, the agent must recognize from context that it lacks a capability that must be filled. Unless all of the tools and agents are pre-identified and specifically captured in the system prompt, we are left hoping the agent infers from free-form text that it needs additional information. With no standard vocabulary and no grounding in business-process context, there is no assurance that the agent will identify the gap at all or, if it does, appropriately. The agent may fail to recognize the need, or recognize a need that doesn't exist, because the description was ambiguous or incomplete.

Knowing when to invoke the tool or agent. Selecting a capability and using it correctly in the context of the business process and its state are different problems. An agent may correctly identify that a claims-processing capability exists but invoke it at the wrong lifecycle stage: submitting a claim that should have been held for additional documentation, or adjudicating a claim that hasn't been accepted yet. The "when" is defined by the business process, not by the tool's description. Without business operational context that ties the capability to a specific stage in a specific workflow, the agent has no basis for timing its invocation correctly. It knows the tool exists; it doesn't know whether now is the right moment to use it. This can also lead to inappropriate interpretation of the data being passed or returned.
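
As an illustration, the timing constraint can be made explicit rather than inferred. The sketch below gates a claims capability on business-process state; the stage names and capability identifiers are hypothetical, not taken from any existing framework:

```python
from enum import Enum, auto

class ClaimStage(Enum):
    """Illustrative claim lifecycle stages."""
    DOCUMENTATION_PENDING = auto()
    ACCEPTED = auto()
    ADJUDICATED = auto()

# Hypothetical policy: the lifecycle stages in which each capability
# may legitimately be invoked.
ALLOWED_STAGES = {
    "adjudicate_claim": {ClaimStage.ACCEPTED},
}

def may_invoke(capability: str, stage: ClaimStage) -> bool:
    # The gate is the business-process state, not the tool's existence.
    return stage in ALLOWED_STAGES.get(capability, set())
```

With a guard like this, "the tool exists" and "the tool may be called now" become separate, checkable questions instead of a single inference left to the model.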

Understanding what information the tool or agent requires syntactically and semantically. Once an agent decides to invoke a capability, it must correctly identify the data from its own context (including prompts and any retrieval) that matches both in structure (syntax) and in meaning (semantics). Provider matching is a notoriously challenging task: there is a billing provider and a practitioner provider, and a practitioner may have privileges at multiple hospitals as well as their own practice. How does the LLM know, with absolute certainty, what "provider" means in its own context (e.g. memory) versus in the business operation's context versus in the context of the agent it is calling? Today, we expect the LLM to infer this from the system prompt and context data. Without an end-to-end semantic contract with clear data namespacing, we are, once again, hoping that the LLM performs correctly, with no insight into what choices it made or how it made them. The input contract must be explicit, machine-readable, and grounded in a shared definition of what each field means in the context of the operation being performed.
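
A minimal sketch of what such a contract could look like, assuming namespaced field names and illustrative vocabulary URIs; none of these identifiers come from an existing standard:

```python
# Machine-readable input contract. Field names are namespaced so that
# "provider" is unambiguous, and "meaning" points at a shared vocabulary
# term (the example.org URIs are placeholders, not a real vocabulary).
INPUT_CONTRACT = {
    "claim.billing_provider.npi": {
        "type": str,
        "meaning": "https://example.org/vocab/BillingProviderNPI",
        "required": True,
    },
    "claim.rendering_practitioner.npi": {
        "type": str,
        "meaning": "https://example.org/vocab/RenderingPractitionerNPI",
        "required": True,
    },
}

def validate_input(payload: dict, contract: dict) -> list[str]:
    """Return contract violations instead of hoping the LLM guessed right."""
    errors = []
    for field, spec in contract.items():
        if field not in payload:
            if spec["required"]:
                errors.append(f"missing required field: {field}")
            continue
        if not isinstance(payload[field], spec["type"]):
            errors.append(f"wrong type for {field}")
    return errors
```

The point is not the specific schema language but that the mapping from the agent's context to the capability's inputs is checked deterministically, not inferred.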

Understanding the data returned by the tool or agent syntactically and semantically. The same problem applies in reverse. When an agent returns data, it must faithfully adhere to a syntactic and semantic contract. Further, the receiving LLM must faithfully represent the data in its output. This is non-negotiable for most complex business problems, and it is even more important in regulated industries.
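
As one possible shape, a response-side gate can reject a non-conforming payload before the consuming LLM ever sees it. The contract fields here are hypothetical:

```python
class ContractViolation(Exception):
    """Raised when a tool/agent response breaks its declared contract."""

# Hypothetical response contract for a claim-status lookup.
RESPONSE_CONTRACT = {
    "claim.status": str,
    "claim.adjudicated_amount_cents": int,
}

def enforce_response(payload: dict, contract: dict) -> dict:
    """Reject a response that breaks the contract before the LLM consumes it."""
    for field, expected_type in contract.items():
        if field not in payload or not isinstance(payload[field], expected_type):
            raise ContractViolation(f"response violates contract at {field}")
    return payload
```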

Understanding non-functional requirements as part of discovery.

There is a wide range of non-functional requirements that must be adjudicated to ensure appropriate and reliable agent/tool usage:

- Operational limits: rate limits, batch sizes, payload sizes, SLA/OLA, uptime, etc.

- Costs: unit of measure, amount, tiers, etc.

- Payment: how the consumer pays for usage (e.g. x402 or similar)

- Compliance assertions: the agent/tool is HIPAA, SOC 2, FedRAMP, ISO, etc. compliant/certified

- Licensing/privacy/data sovereignty: what terms and conditions apply

Each of these areas is a critical filter/selection dimension for discovery and usage. This certainly applies to using external agents/tools, but also to internally owned agents/tools.
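
To make this concrete, a registry entry could carry non-functional metadata as first-class, filterable fields. The sketch below is illustrative; the field names are assumptions, not an existing schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegistryEntry:
    """Illustrative registry record carrying non-functional metadata."""
    name: str
    rate_limit_per_min: int        # operational limit
    cost_cents_per_call: int       # cost
    certifications: frozenset      # compliance assertions

def discover(entries, *, required_cert: str, max_cost_cents: int):
    # Non-functional requirements become filter dimensions at discovery
    # time, rather than something checked (or not) after the fact.
    return [
        e for e in entries
        if required_cert in e.certifications
        and e.cost_cents_per_call <= max_cost_cents
    ]
```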

Validating Agent Identity.

Identity is a foundational principle and requirement for virtually any IT component. Without it, there can be no attribution, authentication, or authorization. Most organizations are assigning IDs to their agents in their IAM systems. This is a necessary step, but it doesn't solve the problem of how agents represent themselves to one another during runtime discovery and usage.

It also doesn't address how those identities and credentials (e.g. API keys) are exchanged in a secure and consistent manner from discovery through utilization. These must be negotiated during discovery.
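
A toy sketch of such an exchange, using an HMAC-signed assertion with a shared secret purely to stay self-contained; a real deployment would use asymmetric keys and a certificate chain:

```python
import hashlib
import hmac
import json
import time

# Shared secret for demonstration only; production systems would use
# asymmetric keys issued by a trust authority, not a hard-coded secret.
SHARED_SECRET = b"demo-only-secret"

def issue_identity_token(agent_id: str) -> str:
    """Produce a signed identity assertion the agent can present at discovery."""
    claims = json.dumps({"sub": agent_id, "iat": int(time.time())}, sort_keys=True)
    sig = hmac.new(SHARED_SECRET, claims.encode(), hashlib.sha256).hexdigest()
    return f"{claims}|{sig}"

def verify_identity_token(token: str):
    """Return the asserted agent ID if the signature checks out, else None."""
    claims, _, sig = token.rpartition("|")
    expected = hmac.new(SHARED_SECRET, claims.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(claims)["sub"]
```

The shape matters more than the crypto: identity is asserted, verified, and available to every downstream decision, rather than being implied by a network address.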

Attribution of agent/tool behavior when agents are self-discovering.

Since today's agentic AI frameworks lack a formal method for validating and exchanging identity, agent usage of other agents and tools is not fully attributed. This creates not only audit and logging issues but also operational issues. In today's model, there is no attribution that ties invocations of an agent/tool to the consuming agent, which means there is no agent-level way to enforce limits or costs. In many cases, this is done using network addressing and the API key itself. Trusted agent identities and attribution, by contrast, allow for more robust protection against spoofing, replay attacks, and authentication/authorization failures.
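
One way to picture agent-level attribution: a ledger that ties each invocation to a verified caller identity and enforces per-agent limits. This is a deliberately simplified sketch; the names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class InvocationLedger:
    """Ties every tool/agent call to a verified caller identity."""
    records: list = field(default_factory=list)   # (caller_id, target) pairs
    limits: dict = field(default_factory=dict)    # caller_id -> max calls

    def record(self, caller_id: str, target: str) -> bool:
        """Log an invocation; refuse it if the caller's quota is exhausted."""
        used = sum(1 for r in self.records if r[0] == caller_id)
        if used >= self.limits.get(caller_id, float("inf")):
            return False  # per-agent limit enforcement needs attribution
        self.records.append((caller_id, target))
        return True
```

Without a trusted caller identity, the `caller_id` here would be unverifiable, and nothing in this ledger could be relied on, which is exactly the gap.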

 

Where we are today

Today's frameworks and methods for discovering tools and agents are geared towards speed-to-market and ease of use, but they cannot support business-critical workloads, whether using internal or external agents/tools. This isn't a scale issue. It is a control, trustworthiness, audit, and compliance issue. Even a single internal agent invoking from a list of only one other internal agent is subject to the same probabilistic interface and usage risks. The problem is not how many tools exist but

- How agents select and identify how/if/when to use them

- Framing the call in the context of the business function and process

- Ensuring the semantic meaning of the information is preserved end-to-end

We would expect, indeed demand, that a developer understand and consider these dimensions when selecting APIs and tools to use. To skip that analysis would violate the developer's required due care. Why would we not expect the same from an agent? The answer is that we must, but that means we have to enable agents with the ability to do so.

The gaps outlined above are not model limitations. They areinfrastructure limitations.

Without a shared system for semantic discovery, identity validation, attribution, and non-functional requirement evaluation, agents cannot safely operate beyond tightly curated, low-risk environments.

This infrastructure must address:

| Dimension | Traditional software | Agentic AI | Status |
| --- | --- | --- | --- |
| Need definition | Deterministic (defined by a human) | Inferred by the LLM | |
| Invocation | Deterministic (based on state and flow logic) | Inferred by the LLM | |
| Data mapping | Deterministic (syntactic and semantic consistency enforced) | Inferred by the LLM | |
| Non-functional requirement reconciliation | Deterministic | Not addressed in existing frameworks | Gap |
| Control compliance | Deterministic | Not addressed in existing frameworks | Gap |
| Identity & authorization | Deterministic | Static rights with LLM-driven contextual use; no formal authorization policies | Gap |
| Explainability / auditability / attribution | Fully explainable and auditable | Partially addressed in existing frameworks; largely opaque | Partial |

In short, agents need a registry.

Just as DNS, certificate authorities, and identity infrastructure made it possible for computers to reliably discover and communicate across the internet, agent registries will make it possible for agents to reliably discover and interact across ecosystems. Without this infrastructure, agentic AI will remain constrained to narrow, curated demonstrations. With it, truly autonomous systems become possible.
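
As a rough analogy to DNS resolution, a registry would let an agent resolve a capability rather than a hard-coded tool name. The entry shape below is purely illustrative, loosely inspired by A2A-style agent cards, and the fingerprint value is a placeholder:

```python
# Toy registry resolving a capability the way DNS resolves a name.
REGISTRY = {
    "claims.adjudication": {
        "agent": "adjudicator-v2",
        "endpoint": "https://agents.example.org/adjudicator",
        # Placeholder; a real entry would pin a verifiable identity.
        "identity_fingerprint": "sha256:<placeholder>",
    },
}

def resolve(capability: str):
    """Look up by capability, not by hard-coded tool name."""
    return REGISTRY.get(capability)
```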

What’s Next?

Supporting business-critical processing with agentic AI requires that we address the gaps in today's frameworks. The concepts outlined in this article are a starting point for the conversation. For this to be successful,

1. This framework must be an open-source standard (implementations can be proprietary)

2. It must leverage current business standards, not invent new ones

3. It must extend current standards such as MCP and A2A

In the next article, we will explore the existing business standards and how they might be codified into the registry following the A2A schema. Following that will be an article discussing an MVP reference architecture geared towards functional capabilities versus named tools.