SADAR
Opinion
April 22, 2026

The Architecture Choice You're Already Making

Many agentic implementations have avoided an agentic AI governance crisis by deploying to low-risk use cases and/or constraining agents to tightly scoped, deterministic workflows that fall short of the vision of fully autonomous agents. These constraints are not a substitute for governance – they are a barrier to value.

Most organizations experimenting with agentic AI today have quietly avoided a governance crisis. Not because they’ve solved it—but because they’ve sidestepped it.

The majority of early deployments don’t truly leverage open discovery. Instead, they rely on tightly constrained designs where agents operate within narrowly defined scopes, often tied to a single business problem. In these environments, capability selection is effectively deterministic.

When an agent has only one viable option, governance appears implicit in the architecture.

We sacrifice flexibility and value by applying this constraint because today's agents and agent frameworks cannot reliably:

  • select the correct capability (agent, tool, or resource)
  • pass that capability the right data from context, prompt, and other capability responses
  • correctly interpret the returned data
  • execute capabilities in the correct order
  • handle failures predictably
  • operate within the originating scope of authority

To support the target state, these must occur across models, agentic frameworks, and even across organizational boundaries.
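The constrained pattern described above can be sketched as a dispatcher with exactly one registered capability per intent. All names here are hypothetical and illustrative, not drawn from any real framework:

```python
# A sketch of the constrained pattern: each intent maps to exactly one
# capability, so "selection" is deterministic by construction.
# All names are illustrative, not part of any real agent framework.

CAPABILITIES = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "open"},
}

def dispatch(intent: str, **kwargs):
    # With a single option per intent there is nothing to discover:
    # governance appears implicit because variability has been removed.
    if intent not in CAPABILITIES:
        raise ValueError(f"No capability registered for intent: {intent}")
    return CAPABILITIES[intent](**kwargs)
```

The design works, but only because the hard problems of selection, ordering, and authority never arise: every path was decided at build time.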

The Illusion of Control

Constraining design is often substituted for governance. If you can't reliably select from a set of capabilities, reduce the options.

It’s a shortcut—one that works only as long as systems remain simple, static, and predictable.

In these constrained environments, organizations gain a sense of control not because governance is robust, but because variability has been artificially removed. The system behaves deterministically because it has been engineered to do so.

But this approach delivers only incremental improvement over traditional programming.

It does not unlock the defining promise of agentic AI:

  • autonomy
  • discoverability
  • dynamic composition of capabilities

More importantly, even in its constrained state, it doesn't fully meet current regulatory, audit, and compliance requirements.

The Moment It Breaks

The governance gap becomes visible the moment agents are allowed to operate as they are intended to:

  • Observing
  • Planning
  • Dynamically discovering capabilities
  • Interacting with other agents
  • Operating on problems outside their design scope that were never anticipated

This is where the true value of Agentic AI lives. It frees users from rigid system flows, enabling open-ended, task- and goal-oriented interaction.

Traditional systems are programmed. The computer follows a set of instructions connected in a series of pre-defined flows. Given the same inputs, they produce the same outputs. They are used by the same users in the same ways every time – guaranteed by the structure of the system and its controls.

Today's control frameworks are built on this fundamental premise. It is how we test systems, define access controls, and audit.

AI is different. It isn't programmed; it is trained. Training produces a model representing probabilities: given x, predict y or the probability of y. Because it is probabilistic, you aren't guaranteed to get the same results every time. In fact, many LLMs intentionally introduce variability so that they sound more human. But the variability is more complex than just the probabilistic nature of the model: it extends to how the model is tuned and how it was instructed in the prompt, and it is highly sensitive to how its memory – the context – is managed and what that context contains.

If you test an AI system 1,000 times, you only know how it answered – not how it will answer the 1,001st time.
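The variability described above can be illustrated with temperature-scaled sampling, the mechanism many LLMs use to introduce it. This is a toy sketch of the general technique, not any specific model's decoder:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature rescales logits before normalizing; higher values
    # flatten the distribution and increase output variability.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits, temperature, rng):
    # Draw one token according to the temperature-scaled distribution.
    return rng.choices(tokens, weights=softmax(logits, temperature))[0]

# Hypothetical next-token candidates for an agent's decision.
tokens = ["approve", "escalate", "reject"]
logits = [2.0, 1.0, 0.5]

rng = random.Random(0)
# Run the "same test" 1,000 times: the answers form a distribution,
# not a single guaranteed result.
outcomes = {sample(tokens, logits, temperature=1.5, rng=rng) for _ in range(1000)}
```

Identical inputs, three distinct outputs across the run – which is precisely why past test results cannot guarantee the next answer.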

The shift to agents exponentially complicates the controls.

Dynamic discovery replaces predetermined execution paths. Capability selection becomes contextual. Execution becomes adaptive. We go from always knowing who, how, when, and why a module is invoked to having no idea if, when, why, or by whom it might be invoked. The controls that worked before—identity systems, RBAC models, static workflows—no longer provide sufficient guarantees.

Why Existing Governance Models Fall Short

Consider a simple example.

A system performs a read/write operation on an order record.

On the surface, this is unremarkable.

But context changes everything:

  • Editing an in-process purchase order is a routine workflow step
  • Editing an order after it has been finalized is a compliance event

The action is identical. The risk is not. The difference lies entirely in business context.
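That distinction can be made concrete with a sketch. The role check answers identically in both cases; only a context-aware layer separates them. The state names and functions here are hypothetical:

```python
# Illustrative sketch: the same write is routine or a compliance event
# depending entirely on business state, not on who performs it.
# Role, action, and state names are hypothetical.

def rbac_allows(role: str, action: str) -> bool:
    # A pure role check: the answer is identical regardless of
    # the record's business state.
    permissions = {"order_clerk": {"order.edit"}}
    return action in permissions.get(role, set())

def classify_edit(record_state: str) -> str:
    # The same write, classified by business context rather than identity.
    if record_state == "in_process":
        return "routine_workflow_step"
    if record_state == "finalized":
        return "compliance_event"
    return "unknown_state"
```

RBAC permits the edit in both cases; only `classify_edit` knows that one of them is a compliance event.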

Add to this complexities such as:

  • Who requested the change and is that preserved through the agent flow?
  • How do you scope change control to certain products, customers, geographies, etc.?
  • How do you enforce business process predecessors such as validating inventory availability or shipping logistics before finalizing an order? What about verifying if the customer is in good standing?

The list of complexities goes on and on. When agents discover capabilities, they must do so, and use them, within the constraints of the business process.

No identity system captures that distinction.
No RBAC model encodes it.
No agent registration in an enterprise directory resolves it.

Assigning agents identities is necessary—but insufficient.

Governance in agentic systems is not about who performed an action.
It’s about:

  • what has been done
  • what is about to be done
  • why it was done (within what business process)
  • in what context of the underlying business entities or state it was, or will be, done
  • under whose authority

Without that context, accountability breaks down.
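One way to make those dimensions concrete is an execution record carried with every action. This is a hypothetical shape for illustration, not a SADAR schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionRecord:
    """Illustrative record of one agent action; all field names are hypothetical."""
    # What has been done / is about to be done
    action: str                 # e.g. "order.finalize"
    completed: bool             # False = about to be done, True = done
    # Why: the business process the action belongs to
    business_process: str       # e.g. "order_to_cash"
    # In what context: the entity and its state at execution time
    entity_id: str
    entity_state: str           # e.g. "in_process", "finalized"
    # Under whose authority
    originating_authority: str  # the principal carried end-to-end
```

An identity log answers only the last field; the other four are what turn a log entry into an accountable decision.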

The Governance Gap That Scales With Ambition

Today, many organizations aren’t feeling this gap acutely.

Not because it doesn’t exist—but because their deployments haven’t grown into it yet.

But they will.

As agentic systems expand:

  • across technical landscapes - models, agentic flow frameworks, APIs, cloud providers/on premise
  • across workflows
  • across systems
  • eventually across organizational boundaries

The limitations of constrained design become impossible to ignore.

What begins as a pragmatic shortcut becomes a structural barrier.

Organizations are already making architectural decisions—often implicitly—about how:

  • agents discover capabilities
  • authority is defined and enforced end-to-end
  • business context is preserved across the flow
  • actions are observed, logged, audited, explained, and monitored

Those decisions will compound over time to either form the foundation of durable, trustworthy, and scalable agentic systems or become technical debt and a compliance liability.

The shift underway is not just technological—it’s conceptual.

We are moving from AI that advises to AI that acts. That's not in the future – it's now.

"We've entered the Agentic Era." – McKinsey, BDO, Stanford University

This transition changes everything.

Actions have consequences.
Consequences require accountability.

Organizations are accountable for their actions. Every control framework requires transparency into the controls around operations and meaningful decisions made in the process of conducting daily business. This isn't optional. It is what regulators, auditors, customers, and other stakeholders deserve and demand.  "The Agent did it but we don't know how, why, or under whose authority." is not an acceptable explanation.

Until this is solved, agentic AI cannot be responsibly deployed for critical flows.

The Missing Layer

What’s missing is not more sophisticated models. It’s infrastructure.

A layer that provides:

  • structured business process context
  • standardized definitions of actions and data meaning/structure
  • clear process sequencing and dependencies
  • end-to-end propagation of authority
  • visibility into execution by design

This is the layer that enables governance to scale with autonomy.

Toward Governed Execution

SADAR addresses this gap by introducing a structured, standards-based approach to agentic execution.

It does this by:

  • Defining business processes in terms of predecessors and successors using industry-standard process definitions
  • Representing business data in the context of industry transactions
  • Persisting the originating authority, business process definition, and transaction instance ID end-to-end
  • Ensuring and enforcing agent provenance
  • Facilitating agent identity and time-of-use token exchange with TTL and replay protection
  • Treating non-functional requirements for operations, compliance, finance, and legal as first-class discovery elements
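The TTL and replay-protection mechanics mentioned above can be sketched in a few lines. This is a simplified, in-memory illustration of the general technique, not the SADAR protocol:

```python
import time
import secrets

class TokenValidator:
    """Toy time-of-use token check: each token carries a TTL and a one-time nonce."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.seen_nonces = set()  # replay protection (in-memory for the sketch)

    def issue(self) -> dict:
        # A real exchange would bind the token to an agent identity and
        # sign it; here we only model expiry and single use.
        return {"nonce": secrets.token_hex(8), "issued_at": time.time()}

    def validate(self, token: dict) -> bool:
        # Reject expired tokens (TTL) and previously seen nonces (replay).
        if time.time() - token["issued_at"] > self.ttl:
            return False
        if token["nonce"] in self.seen_nonces:
            return False
        self.seen_nonces.add(token["nonce"])
        return True
```

A token presented twice, or after its TTL, is refused – which is what makes "the agent reused an old credential" a detectable event rather than a silent one.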

The result is not just better observability; it is accountability by design.

The Choice Ahead

The question is no longer whether organizations will adopt agentic AI.

They will.

The question is how.

Will agentic systems be built on constrained designs that limit risk by limiting capability?

Or will they be built on governed infrastructure that enables both?

Because the same mechanisms that make agentic AI powerful are the ones that make governance non-negotiable.

Final Thought

Constraining design can delay the governance problem but it cannot solve it.

In doing so, it doesn’t just limit risk—it limits value.

Because the most valuable use cases for agentic AI are the ones that require:

  • flexibility
  • adaptability
  • execution in complex, real-world contexts

Without governance, those use cases never make it to production.

With it, they redefine how organizations operate.

Your agents today likely aren't in a compliance crisis – not because you've addressed the underlying requirements, but because you've constrained the options the agent can take.

Constraining design is not a substitute for governance – it is a barrier to value.