Emerging protocols for agent interoperability, such as the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication models, are an important step forward.
They define how agents:
But they leave a critical question unanswered:
How do agents reliably decide what to use in the first place?
Protocols like MCP and A2A are effective at:
They assume a world where:
Those assumptions work in controlled, constrained environments, but they don't support the promise of open yet governed agent discovery at runtime.
They break down in open agent, tool, and resource ecosystems, regardless of whether those ecosystems are purely internal or include third-party capabilities.
Before an agent can invoke another agent or tool, or access a resource such as a database, it needs to know:
Today's standards are silent on these points.
Current standards such as MCP and A2A, along with agentic frameworks, provide narrative descriptions of what agents and tools do, plus JSON "hints" at inputs and outputs. The problem is that the LLM interpreting these descriptions has no grounding to ensure it understands them. That is further exacerbated by the probabilistic way the LLM interprets whatever information it does have about those capabilities.
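For illustration, the sketch below shows roughly what such a descriptor looks like, using the MCP tool fields (name, description, inputSchema); the invoice tool itself is hypothetical. Everything the consuming LLM learns comes from free text plus type constraints.

```python
# A hypothetical MCP-style tool descriptor: a narrative description plus a
# JSON Schema "hint" for inputs. Nothing here grounds what the fields mean
# in business terms -- the interpreting LLM must guess.
invoice_lookup_tool = {
    "name": "lookup_invoice",
    "description": "Looks up an invoice and returns its details.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "invoice_id": {"type": "string"},
            "invoice_date": {"type": "string", "format": "date"},
        },
        "required": ["invoice_id"],
    },
}
```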
In fact, this has become so problematic for data that offerings such as Pydantic AI are aimed specifically at closing the gap through strict data typing. That addresses the syntactic challenge, but it does nothing to ensure that a Date or Product in the calling agent's context is semantically equivalent to what the called agent or tool expects in the context of the business intent.
Is Invoice_Date the date created, paid, due, updated, or something else?
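A minimal sketch of that limitation, using plain Pydantic with a hypothetical invoice model: validation closes the syntactic gap, but the field's business meaning is still undefined.

```python
from datetime import date
from pydantic import BaseModel, ValidationError


class Invoice(BaseModel):
    invoice_id: str
    invoice_date: date  # Validates the format -- but is this created, due, or paid?
    amount: float


# Malformed input is rejected, so the syntactic gap is closed...
try:
    Invoice(invoice_id="INV-100", invoice_date="not-a-date", amount=12.5)
except ValidationError as err:
    print(err)

# ...but a well-typed value carrying the wrong business meaning passes silently.
Invoice(invoice_id="INV-100", invoice_date="2024-06-30", amount=12.5)
```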
The semantic layer addresses this gap by defining both the agent's capabilities and the data exchanged, grounded in pre-existing business standards.
There are industry-wide and industry-specific process frameworks from the likes of the American Productivity and Quality Center (APQC). These provide concrete grounding, in business-process terms, of what agents need and what they provide. This is further qualified by using pre-existing standard transactions such as X12 EDI, HL7, and SWIFT.
The transaction grounding adds further clarity to the capabilities and definitively defines the business meaning and syntax of all data exchanged.
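As one illustration of how such grounding might be recorded, the sketch below defines a hypothetical registry entry; the registry schema and the APQC process reference are placeholders, while X12 810 is the standard invoice transaction set.

```python
from dataclasses import dataclass, field


@dataclass
class CapabilityRecord:
    """A hypothetical semantic-registry entry for an agent capability."""
    name: str
    apqc_process: str          # Business-process grounding (APQC PCF reference)
    transaction_standard: str  # Data grounding (e.g., an X12 transaction set)
    consumes: list[str] = field(default_factory=list)
    produces: list[str] = field(default_factory=list)


invoice_processing = CapabilityRecord(
    name="process-customer-invoice",
    apqc_process="APQC PCF: Process accounts receivable (placeholder reference)",
    transaction_standard="X12 810 (Invoice)",
    consumes=["X12 810 invoice transaction"],
    produces=["Invoice acknowledgment"],
)
```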
This enables agents to:
Understanding business intent and data semantics is critical, but it is only part of the picture. What good is an agent or tool that does the right thing but can't meet your response-time requirements, doesn't match your compliance needs, or is too expensive?
Just as the semantic registry unambiguously defines intent and data, it also defines non-functional requirements:
These become first-class dimensions in the selection process, ensuring that both the requester and the server agree on non-functional terms before invocation.
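A minimal sketch of how that agreement might be checked, with hypothetical names and a simple filter over declared terms:

```python
from dataclasses import dataclass


@dataclass
class NonFunctionalTerms:
    """Hypothetical non-functional terms declared for a capability."""
    p95_latency_ms: int
    certifications: frozenset[str]   # e.g. frozenset({"SOC2", "HIPAA"})
    cost_per_call_usd: float


@dataclass
class Candidate:
    name: str
    terms: NonFunctionalTerms


def select(candidates: list[Candidate], *, max_latency_ms: int,
           required_certs: frozenset[str], max_cost_usd: float) -> list[Candidate]:
    """Keep only candidates whose declared terms satisfy the requester's constraints."""
    return [
        c for c in candidates
        if c.terms.p95_latency_ms <= max_latency_ms
        and required_certs <= c.terms.certifications
        and c.terms.cost_per_call_usd <= max_cost_usd
    ]
```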
Think of this as a layered architecture:
The semantic layer answers "What should I use and when?" while the transport layer answers "How do I use it?"
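In code, that separation of concerns might look like the sketch below; the registry and transport APIs are hypothetical stand-ins, not existing libraries.

```python
def handle_request(registry, transport, intent, payload, constraints):
    # Semantic layer: decide WHAT to use and WHEN.
    # `registry.resolve` is a hypothetical lookup that matches business intent,
    # data semantics, and non-functional constraints to a registered capability.
    capability = registry.resolve(intent=intent, constraints=constraints)

    # Transport layer: decide HOW to use it.
    # `transport.invoke` stands in for the MCP tool call, A2A message, or API
    # request that actually carries the interaction.
    return transport.invoke(capability.endpoint, payload)
```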
Without this semantic layer:
As agentic AI expands, we must move beyond the artificially constrained implementations of today. The real value of agentic AI lies in supporting open discovery, but doing so within a trust framework.