
Five protocols, one checkout: the fragmentation problem in agentic commerce


Every AI agent talks to your store differently. That's about to become your biggest operational headache.

Imagine you run a restaurant. One day, five different delivery platforms show up and say they want to list you. Great, more orders. But each platform uses a different system to place those orders. One sends a fax. One calls the phone. One uses an iPad app. One pushes orders through a printer you've never seen. One sends a person who stands at the counter and reads orders off a phone.

Same kitchen. Same menu. Five completely different ways orders arrive.

Now multiply that by a thousand restaurants. And none of them know which delivery platform their next order will come from.

That's what's happening in ecommerce right now, except the delivery platforms are AI agents, and the ordering systems are protocols.

The shift nobody talks about at board meetings

Here's a trend that should make every VP of Commerce nervous: AI agent-initiated ecommerce API calls are growing faster than mobile traffic did in 2010. But unlike mobile, where you could see the traffic in Google Analytics, watch the sessions, and track the funnel, agent traffic is largely invisible to traditional monitoring.

An AI agent doesn't load your homepage. It doesn't trigger a pageview. It doesn't show up in your session recordings. It calls a structured API endpoint, gets a response, makes a decision, and either completes the transaction or moves on.

When it moves on, you don't get an abandoned cart. You don't get a bounce rate. You get nothing.

The transaction just doesn't happen.

And the reason it didn't happen might be as simple as a malformed JSON field in your product search response, or a delegation token your checkout endpoint didn't know how to validate.

How agent transactions silently disappear

  • AI agent discovers your store: 100%
  • Product search returns valid response: 92% (-8 pts)
  • Cart created successfully: 78% (-14 pts)
  • Checkout flow completes end-to-end: 51% (-27 pts)
  • Payment confirmed and order placed: 44% (-7 pts)
  • Transactions your dashboard shows: 44%

The other 56% left no trace. No error page. No abandoned cart. No bounce rate. Nothing.

Five protocols. Five different conversations.

Over the past eighteen months, the agentic commerce landscape has converged on five distinct protocols. Each one was designed by different organizations, for different interaction patterns, with different assumptions about how agents and merchants should communicate.

They're all arriving at the same time. And they all hit the same checkout.

Five protocols, one checkout

📦 UCP (Structured API): Structured REST calls for product search, cart, checkout, and payment. The closest to traditional ecommerce APIs.

🤝 ACP (Negotiation): Multi-agent conversations with delegation tokens, capability negotiation, and dialogue-based commerce.

🛠️ MCP (Tool Invocation): Agents discover and invoke merchant tools via JSON-RPC. Native to how LLMs reason about actions.

🔄 A2A (Coordination): Multiple agents collaborating on a single task across trust boundaries and companies.

🌐 WebMCP (Web Bridge): Browser-native agent access through web manifests and form annotations. Lowest barrier to entry.

All five are arriving at the same time. All five hit the same checkout.

UCP: the structured API

Unified Commerce Protocol is the closest to what most merchants already have. Structured REST calls for product search, cart management, checkout, and payment, with schema validation and idempotency requirements.

If you've built a modern ecommerce API, UCP feels familiar. But "my API works" and "my API conforms to agent protocol spec" are different statements. The gap between them is where thousands of agent transactions silently fail.

UCP cares about things your frontend never checked: Does your search response include every required schema field? If an agent retries a payment with the same idempotency token, do you correctly return the original result instead of charging twice? Do your webhooks fire reliably for every order state change?

ACP: the negotiator

Agent Commerce Protocol is designed for something traditional ecommerce never needed: multi-agent conversations.

A buyer agent, a merchant agent, a mediator, and a trust broker, all participating in a single transaction. Capability advertisement ("what can this merchant do?"), delegation tokens ("this agent can spend up to $50 on my behalf"), dialogue-based negotiation, and context propagation across agent handoffs.

Think of it as the difference between a vending machine and a bazaar. UCP is the vending machine. ACP is the bazaar, and more commerce works like a bazaar than you'd think, once agents are doing the shopping.

Any product that involves configuration, personalization, bundling, or negotiation is an ACP conversation. That includes most of enterprise ecommerce.

MCP: the tool invoker

Model Context Protocol, originally from Anthropic, flips the model. Instead of the agent calling REST endpoints, MCP lets the agent discover and invoke tools the merchant exposes through JSON-RPC.

Your store publishes a set of tools with typed schemas: search_products, add_to_cart, apply_coupon, create_checkout. The AI agent reads those schemas, understands what each tool does, and invokes them during its reasoning.
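A sketch of what one of those tool descriptions could look like, using `search_products` from the list above. The exact wire format is defined by the MCP spec, so treat the shape here as illustrative:

```python
import json

# Illustrative tool description of the kind a store might expose via
# MCP's tools/list. Consult the MCP spec for the authoritative shape.

search_products_tool = {
    "name": "search_products",
    "description": "Search the catalog by keyword, optionally capped by price.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "max_price_cents": {"type": "integer"},
        },
        "required": ["query"],
    },
}

# The agent reads this schema to learn which arguments are legal
# before it ever invokes the tool.
print(json.dumps(search_products_tool, indent=2))
```

If `max_price_cents` is documented as an integer but your implementation returns or accepts a string, the model's reasoning about the tool quietly diverges from reality.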

This is closer to how LLMs naturally work. Tool use is a first-class concept in every major model. MCP removes the translation layer between "what the LLM wants to do" and "which endpoint to call."

The catch: tool lifecycle management, connection handling, schema versioning, and error recovery all need to work perfectly. A dropped connection mid-invocation doesn't retry the way a browser refresh does. The agent just fails.

A2A: the coordinator

Agent-to-Agent handles what none of the others do well: multiple agents collaborating on one task.

A personal assistant agent gets the consumer's request. It delegates research to a specialist. The specialist hands options to a payment agent. The payment agent checks budget constraints with a financial agent. All of this happens across services, across trust boundaries, and often across companies.

A2A provides task creation, multi-turn execution, status notifications, and auth handshakes between agents. It's the coordination layer.
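A toy version of that coordination layer's task lifecycle, using an illustrative subset of states (A2A defines a richer lifecycle, including cancellation):

```python
# Minimal task-state sketch. The states and transitions below are an
# illustrative subset, not the full A2A lifecycle.

ALLOWED = {
    "submitted": {"working"},
    "working": {"input-required", "completed", "failed"},
    "input-required": {"working"},
    "completed": set(),
    "failed": set(),
}

class Task:
    def __init__(self) -> None:
        self.state = "submitted"

    def transition(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            # A desynced transition is exactly the silent failure mode:
            # reject it loudly instead of corrupting the task.
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

task = Task()
task.transition("working")
task.transition("completed")
```

The design choice worth copying: make illegal transitions fail loudly. Two agents disagreeing about whether a task is "working" or "completed" is the corrupted-task-state failure described below.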

If you think this sounds complex, it is. But it's also how agent commerce is actually starting to work in practice. Multi-agent shopping flows and enterprise procurement chains both need A2A.

WebMCP: the bridge

WebMCP is the most pragmatic of the five. It lets agents discover merchant capabilities through browser-native mechanisms: web manifests, form annotations, and polyfilled tool invocations.

The insight: most merchants already have a website. Instead of building a separate agent API, what if agents could interact with the existing web interface through standardized annotations?

Lowest barrier to entry. Most constrained in capability. But it gets merchants into the game without a full backend rebuild.
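As a rough illustration of how thin the bridge is, this sketch audits a page for forms an annotation-driven agent could not discover. The `data-agent-action` attribute name is hypothetical, standing in for whatever annotation scheme WebMCP actually specifies:

```python
from html.parser import HTMLParser

# Sketch of an annotation audit. The "data-agent-action" attribute is a
# hypothetical stand-in for WebMCP's real annotation mechanism.

class FormAuditor(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.unannotated_forms = 0

    def handle_starttag(self, tag, attrs):
        if tag == "form" and "data-agent-action" not in dict(attrs):
            self.unannotated_forms += 1  # invisible to an annotation-driven agent

page = """
<form data-agent-action="checkout"><input name="sku"></form>
<form action="/legacy-checkout"><input name="sku"></form>
"""
auditor = FormAuditor()
auditor.feed(page)
print(auditor.unannotated_forms)  # the second form is the silent gap
```

A human shopper uses both forms identically; only the annotated one exists as far as the agent is concerned.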

How the five protocols compare

Protocol   Interaction           Complexity   Adoption   Enterprise fit
UCP        Request/response      Low          Highest    Good
ACP        Multi-turn dialogue   High         Growing    Excellent
MCP        Tool invocation       Medium       Growing    Good
A2A        Multi-agent tasks     Highest      Early      Critical
WebMCP     Browser-native        Lowest       Emerging   Limited

Most merchants will need to support at least two or three of these simultaneously.

Here's where it gets hard

It's not that five protocols exist. Merchants have dealt with multiple integration partners before.

It's that a single merchant will need to support multiple protocols simultaneously, and each one has completely different failure modes.

A UCP failure is a malformed response. An ACP failure is a broken delegation chain. An MCP failure is a schema mismatch. An A2A failure is a corrupted task state. A WebMCP failure is a missing annotation.

You can pass every UCP check and fail catastrophically on MCP. Your ACP delegation flow can work perfectly while your A2A task coordination is broken.

And here's the part that really matters: you won't know you're failing.

What traditional monitoring catches vs. what agents encounter

What your dashboard shows:

  • 5xx server errors: your APM catches these immediately
  • Slow response times: latency spikes show up in your metrics
  • Broken frontends: Real User Monitoring flags rendering issues
  • Failed payments: payment gateway errors are well-logged
  • High error rates: alert rules fire when thresholds are exceeded

What agents encounter silently:

  • Expired delegation tokens: the agent silently abandons and moves to a competitor
  • Wrong field types in schemas: the LLM misinterprets data and makes wrong decisions
  • Desynced task states: multi-agent coordination breaks mid-transaction
  • Missing form annotations: a WebMCP agent can't find the checkout flow
  • Idempotency violations: double charges or phantom orders with no alert

Every item on the right is a lost transaction. None trigger an alert in any dashboard you currently own.

Traditional monitoring catches 5xx errors, slow responses, and broken frontends. It does not catch:

  • A valid-but-expired delegation token that an agent silently abandons
  • A tool schema that returns the wrong type for a field the LLM depends on
  • A task state that desyncs between two agents mid-transaction
  • A form annotation that doesn't match the actual checkout flow
  • An idempotency violation that charges a card twice, or worse, charges zero times and confirms the order anyway

Each of these is a lost transaction. None of them show up in your dashboard.
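The wrong-field-type failure is worth seeing concretely. This sketch checks a product response against an assumed set of required fields (the field names are illustrative):

```python
# Sketch of a field-type check on a product search response. A human
# shopper never notices that "price_cents" came back as a string; an
# LLM consuming the schema may silently misprice the item.
# The required-field set below is illustrative.

REQUIRED_FIELDS = {"sku": str, "title": str, "price_cents": int, "in_stock": bool}

def schema_violations(item: dict) -> list[str]:
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in item:
            problems.append(f"missing field: {field}")
        elif not isinstance(item[field], expected):
            problems.append(
                f"{field}: expected {expected.__name__}, got {type(item[field]).__name__}"
            )
    return problems

good = {"sku": "A1", "title": "Mug", "price_cents": 1299, "in_stock": True}
bad = {"sku": "A1", "title": "Mug", "price_cents": "12.99", "in_stock": True}
assert schema_violations(good) == []
assert schema_violations(bad) == ["price_cents: expected int, got str"]
```

Both responses return HTTP 200. Only one of them is a sale.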

Why what you have today doesn't cover this

Most ecommerce teams already test their APIs. The problem isn't effort, it's assumptions.

Traditional API testing assumes one client type calls your endpoints. In agentic commerce, dozens of different AI agents, each with different behavioral profiles, will call the same endpoint through different protocols.

It assumes request-response is the pattern. ACP is multi-turn dialogue. MCP is iterative tool invocation. A2A is multi-agent task coordination. None of these map to "send request, check response."

It assumes the caller is predictable. An LLM reasons about what to do next. Two agents with identical goals might take completely different paths through your checkout. Your test suite needs to cover the range of behaviors, not just one scripted path.

It assumes failures are visible. Agent failures are silent by design. The agent doesn't file a bug report. It moves to a competitor.
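One way to cover that range of behaviors: exercise every ordering of independent steps instead of a single scripted path. The cart shape and step names below are invented for illustration:

```python
from itertools import permutations

# Toy sketch: two agents with the same goal may sequence independent
# steps differently. A scripted test covers one order; this covers all
# six orderings of three pre-checkout steps against a stub cart.

def run(order: tuple[str, ...]) -> dict:
    cart = {"items": 0, "coupon": None, "address": None}
    steps = {
        "add_item": lambda: cart.update(items=cart["items"] + 1),
        "apply_coupon": lambda: cart.update(coupon="SAVE10"),
        "set_address": lambda: cart.update(address="221B Baker St"),
    }
    for step in order:
        steps[step]()
    return cart

all_orders = permutations(["add_item", "apply_coupon", "set_address"])
carts = [run(order) for order in all_orders]
# Every path must converge on the same final cart state.
assert all(c == carts[0] for c in carts)
```

Real agent simulation is far richer than permuting three steps, but the principle holds: the invariant is the final state, not the path.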

Where traditional testing falls short

4 assumptions broken:

  • Assumes one client type (agents are dozens of types): 90%
  • Assumes request-response (ACP/A2A are multi-turn): 80%
  • Assumes predictable callers (LLMs reason differently): 75%
  • Assumes failures are visible (agent failures are silent): 95%

Traditional testing was designed for browsers and humans. Agents break every assumption it relies on.

The conformance score is the new competitive moat

Here's why this is a board-level issue, not a QA backlog item.

When consumers searched Google for products, merchants who ranked higher got more traffic. Search optimization became an industry. Billions of dollars spent on appearing in the right results.

When AI agents choose which merchants to transact with, they'll use a different signal: protocol conformance. Does this merchant respond correctly? Quickly? Consistently? Across all five protocols?

An agent that encounters a broken checkout at Merchant A doesn't retry. It doesn't complain. It switches to Merchant B. Silently. Instantly.

The merchant with high conformance across protocols wins the transaction. The merchant with gaps doesn't even know they lost it.

This means conformance isn't a quality metric. It's a revenue driver. The merchants who track their conformance scores the way they track conversion rates will have a structural advantage as agent commerce scales.

The merchants who don't will watch revenue decline with no explanation in any dashboard they currently own.

Conformance is the new competitive moat: the revenue timeline

2000s: Search engine optimization

Merchants who ranked higher in Google got more traffic. SEO became a multi-billion dollar industry.

2010s: Mobile optimization

Mobile-friendly merchants captured the smartphone commerce wave. Others watched traffic shift away.

2020s: Social commerce

Merchants who integrated with Instagram, TikTok, and social platforms captured a new generation of buyers.

2025+: Protocol conformance

AI agents choose merchants based on response quality, speed, and protocol compliance. The conformance score becomes the new ranking signal.

In every era, the merchants who adapted to how customers found them won. The signal is changing again.

What to do about it

The five-protocol landscape isn't consolidating. UCP, ACP, MCP, A2A, and WebMCP each solve different problems. Merchants will need to support multiple protocols, and they'll need visibility into how each one performs.

Three things to start thinking about now:

1. Audit your protocol surface. Which of the five protocols does your platform currently support, even partially? Where are the gaps? Most merchants discover they have partial UCP coverage, no ACP capability, and no idea whether their endpoints are MCP-compatible.

2. Understand your failure modes. For each protocol you support, what does failure look like? Not 5xx errors, the subtle failures. Expired tokens. Schema mismatches. Desynchronized state. Incomplete form annotations. These are the failures agents encounter and humans never see.

3. Test with real agent behavior, not scripts. A scripted API test tells you whether your endpoint returns 200. It doesn't tell you whether an AI agent can actually complete a purchase through it. The gap between "endpoint works" and "agent can shop here" is where revenue lives.
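One lightweight way to start the audit in step 1 is a checklist rolled up into per-protocol scores. The check names below are placeholders; real probes would exercise live endpoints:

```python
# Sketch of a protocol-surface audit as a checklist. The check names
# are illustrative placeholders; real checks would probe live endpoints
# for each protocol.

def audit(checks: dict[str, bool]) -> dict[str, float]:
    """Roll per-check pass/fail results up into per-protocol coverage scores."""
    by_protocol: dict[str, list[bool]] = {}
    for name, passed in checks.items():
        protocol = name.split(".", 1)[0]
        by_protocol.setdefault(protocol, []).append(passed)
    return {p: 100 * sum(results) / len(results) for p, results in by_protocol.items()}

checks = {
    "ucp.search_schema_complete": True,
    "ucp.idempotent_payment_retry": False,
    "acp.delegation_token_validated": False,
    "mcp.tools_list_published": True,
    "mcp.invocation_error_recovery": False,
}
scores = audit(checks)
print(scores)  # prints {'ucp': 50.0, 'acp': 0.0, 'mcp': 50.0}
```

Even a crude scorecard like this makes the gaps visible, which is more than most dashboards do today.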

Where most merchants stand today (before protocol audit)

  • UCP coverage: 45% (WARN)
  • ACP readiness: 5% (FAIL)
  • MCP compatibility: 15% (FAIL)
  • A2A support: 0% (FAIL)
  • WebMCP annotations: 20% (FAIL)
  • Failure visibility: 10% (FAIL)

Most merchants have partial UCP coverage and no visibility into anything else. The gap is where revenue will be won or lost.

The protocols are shipping. The agents are coming. The merchants who can prove their storefronts work, across all five surfaces, will capture the next wave of commerce.

The ones who can't won't even know what they missed.


We're building the testing and simulation layer for agentic commerce at OrcaQubits. If any of this resonated, whether you're a merchant trying to figure out your protocol readiness, a platform team building agent-ready APIs, or just someone who wants to talk about where this is heading, we'd genuinely love the conversation.

Julekha Khatun : jkhatun@orcaqubits-ai.com

Rohit Bajaj : rbajaj@orcaqubits-ai.com