SADAR
Opinion
April 28, 2026

Why MCP and A2A Need a Semantic Layer

Today's frameworks focus on execution - the transport and data exchange. They require prior knowledge of agents, tools and resources organized in a curated list for the requesting agent. Even then, studies show agents aren't reliable in deciding if, how, or when to use agents and tools.

Execution protocols alone are not enough for scalable agent interoperability

Emerging protocols for agent interoperability—such as the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication models—are an important step forward.

They define how agents:

  • Communicate
  • Invoke Tools
  • Exchange Data

But they leave a critical question unanswered:

How do agents reliably decide what to use in the first place?

The Strength of Current Protocols

Protocols like MCP and A2A are effective at:

  • Standardizing execution
  • Structuring interactions
  • Enabling tool invocation

They assume a world where:

  • Tools are already known and vetted
  • Capabilities are pre-configured for a specific agent's use
  • Trust is implicit

Those assumptions work in controlled, constrained environments, but they fall short of the promise of open, yet controlled, agent discovery at runtime.

They break down in open agent, tool, and resource ecosystems, regardless of whether those ecosystems exist solely internally or include third-party capabilities.

Reliable Discovery

Before an agent can invoke another agent or tool, or access a resource such as a database, it needs to know:

  • That it exists
  • What it does
  • Whether it meets the requirements
  • How to use it
  • Whether it should be trusted

Today's standards are silent on these points.

Fragile Discovery

Current standards (MCP, A2A, and the agentic frameworks) describe what agents and tools do through narrative descriptions, with JSON "hints" at inputs and outputs. The problem is that the LLM interpreting these has no grounding to ensure understanding, a gap exacerbated by the probabilistic way an LLM interprets whatever capability information it does have.

In fact, this has become problematic enough that offerings such as Pydantic AI are specifically aimed at closing the gap through strict data typing. That addresses the syntactic challenge, but it does nothing to ensure that a Date or Product in the calling agent's context is semantically equivalent to what the called agent or tool expects in the context of the business intent.

Is Invoice_Date the date created, paid, due, updated, or something else?
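
Strict typing and semantic grounding catch different failure modes. A minimal stdlib-only sketch of the distinction (the binding URN below is invented for illustration, not a real registry entry):

```python
from dataclasses import dataclass
from datetime import date

# Strict typing (in the spirit of Pydantic AI) guarantees the *syntax* of a
# field: invoice_date must be a date. It cannot say *which* date it is.
@dataclass
class Invoice:
    invoice_date: date  # created? paid? due? The type system cannot tell.

# A semantic binding to a shared vocabulary removes the ambiguity.
# This URN is a hypothetical registry identifier, purely for illustration.
SEMANTIC_BINDINGS = {
    "Invoice.invoice_date": "urn:example:x12:810:invoice-issue-date",
}

# Both payloads type-check identically even though they mean different things.
issued = Invoice(invoice_date=date(2026, 4, 1))   # caller means: date issued
due = Invoice(invoice_date=date(2026, 4, 30))     # caller means: date due
assert isinstance(issued.invoice_date, date)
assert isinstance(due.invoice_date, date)
```

Both objects pass the same type check; only the semantic binding records which business meaning the field carries.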

The Semantic Layer

The semantic layer addresses this gap by defining both an agent's capabilities and the data it exchanges, grounding both in pre-existing business standards.

There are industry-wide and industry-specific process frameworks from the likes of the American Productivity and Quality Center (APQC). These provide concrete grounding in business-process terms, defining what agents need and what they provide. That grounding is further qualified by using pre-existing standard transactions such as X12 EDI, HL7, and SWIFT.

The transaction grounding adds further clarity to the capabilities and definitively defines the business meaning and syntax of all data exchanged.

This enables agents to:

  • Discover capabilities based on intent
  • Compare options across providers
  • Cleanly map data inputs and outputs
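
One way such a grounded descriptor might look, and how intent-based discovery could query it. The shape, the APQC process ID, and the field names are illustrative assumptions, not an actual SADAR schema:

```python
# Hypothetical capability descriptor grounded in a process framework (APQC
# PCF) and standard transaction sets (X12). All values are illustrative.
capability = {
    "name": "invoice-processing-agent",
    "intent": {
        "framework": "APQC PCF",
        "process_id": "9.2.2",  # illustrative ID, e.g. accounts payable
        "description": "Validate and post supplier invoices",
    },
    "data": {
        "input": {"standard": "X12", "transaction_set": "810"},   # Invoice
        "output": {"standard": "X12", "transaction_set": "997"},  # Acknowledgment
    },
}

def matches_intent(descriptor: dict, framework: str, process_id: str) -> bool:
    """Discover by business intent rather than by narrative description."""
    intent = descriptor["intent"]
    return intent["framework"] == framework and intent["process_id"] == process_id

# A requester searches by process, not by parsing prose descriptions.
assert matches_intent(capability, "APQC PCF", "9.2.2")
```

Because both sides reference the same process framework and transaction standards, matching is a lookup rather than a probabilistic interpretation.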

The Role of Non-Functional Requirements

Understanding business intent and data semantics is critical but only part of the picture. What good is an agent or tool that does the right thing but can't support your response time requirements, doesn't match your compliance needs, or is too expensive?

Just as the semantic registry unambiguously defines intent and data, it too defines non-functional requirements:

  • Operational: rate limits, response time, payload sizes, uptime, etc.
  • Governance/Compliance: compliance frameworks (HIPAA, SOC 2, ISO, FedRAMP, etc.)
  • Licensing
  • Cost
  • Payment methodology

These become first-class dimensions in the selection process, ensuring both the requester and the provider agree on non-functional terms before invocation.

How this Augments Existing Protocols

Think of this as a layered architecture:

  • Semantic Layer (SADAR): Discovery, meaning, trust
  • Transport Layer (MCP, A2A): Invocation and communication

The semantic layer answers "What should I use, and when?" while the transport layer answers "How do I use it?"
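
The division of labor between the two layers can be sketched as a two-phase flow. Both functions below are stubs with invented names, standing in for a semantic-layer lookup and an MCP/A2A call respectively; neither reflects a published API:

```python
def semantic_resolve(intent: str) -> dict:
    """Semantic layer: answers 'what should I use?'
    Stub that would query a registry by business intent; here it
    returns a fixed, illustrative descriptor."""
    return {"tool": "post_invoice", "endpoint": "mcp://example/invoices"}

def transport_invoke(descriptor: dict, payload: dict) -> dict:
    """Transport layer: answers 'how do I use it?'
    Stand-in for an MCP/A2A invocation; ignores its payload."""
    return {"invoked": descriptor["tool"], "status": "ok"}

# Discovery and invocation stay decoupled: swap registries or transports
# independently, as long as the descriptor contract holds.
descriptor = semantic_resolve("pay supplier invoice")
result = transport_invoke(descriptor, {"invoice_id": "INV-42"})
assert result["status"] == "ok"
```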

The Gap Leaves Organizations At Risk

Without this semantic layer:

  • Discovery is fragile and unexplainable
  • Operational, compliance, and legal requirements go unevaluated
  • No trust anchor for agents, tools, and resources

As agentic AI expands, we must move beyond today's artificially constrained implementations. The real value of agentic AI lies in supporting open discovery, but doing so within a trust framework.