Rethinking corporations, platforms, and power when intelligence becomes infrastructure
The Theory Meets the Ground
Posts 1–14 built a framework:
- Firms exist because coordination is costly (Coase)
- Hierarchies dominated when human bandwidth was the bottleneck
- Digital infrastructure shifted firms toward networks
- Network geometry determines scaling potential
- Protocols replace hierarchy as the coordination mechanism
- Value streams define what protocols coordinate around
This was theory. Now it is engineering.
In early 2026, Google launched the Agent-to-Agent (A2A) protocol with over fifty enterprise partners. Anthropic’s Model Context Protocol (MCP) is already embedded in developer toolchains. The infrastructure for protocol-governed agent coordination is being built — not in research labs, but in production systems.
This post examines what the early evidence tells us. Which parts of the framework hold up? Where does reality diverge from theory? And what does the emerging agentic ecosystem actually look like when viewed through the lens of coordination economics?
The Protocol Layer Is Real
The series argued (Post 12) that protocols would replace hierarchy as the coordination mechanism for networked firms. That argument was structural — derived from Coase, from Conway, from the scaling laws.
The empirical picture now confirms the structural prediction. Two protocol layers have emerged simultaneously, each addressing a different coordination problem:
MCP standardises how agents connect to tools and data sources. It has been adopted by Claude, ChatGPT, VS Code, Cursor, and a growing list of developer tools. MCP is the agent-to-infrastructure layer — the equivalent of device drivers for the agentic stack.
A2A standardises how agents discover and communicate with each other. It is backed by Salesforce, SAP, PayPal, ServiceNow, Intuit, and every major consulting firm. A2A is the agent-to-agent layer — the equivalent of TCP/IP for autonomous coordination.
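The division of labour between the two layers shows up in the shape of their core descriptors. The sketch below is illustrative Python, not the normative wire format; the field names follow the published specifications as I understand them, but treat the exact shapes as assumptions.

```python
# Illustrative shapes only -- consult the MCP and A2A specifications
# for the normative wire formats.

# MCP: an agent-to-infrastructure descriptor. A tool advertises a name,
# a human-readable description, and a JSON Schema for its inputs.
mcp_tool = {
    "name": "query_orders",
    "description": "Look up open orders for a customer account.",
    "inputSchema": {
        "type": "object",
        "properties": {"account_id": {"type": "string"}},
        "required": ["account_id"],
    },
}

# A2A: an agent-to-agent descriptor (an "agent card"). An agent
# advertises who it is, where it can be reached, and what skills it
# offers, so that other agents can discover and invoke it.
agent_card = {
    "name": "procurement-agent",
    "url": "https://agents.example.com/procurement",  # hypothetical endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "raise-po", "description": "Raise a purchase order."},
    ],
}
```

The asymmetry is the point: MCP describes what a tool accepts; A2A describes what an agent can do and where to find it.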
The layering the series predicted — firm protocols sitting atop agent protocols sitting atop infrastructure — is exactly how the ecosystem is organising itself.
But a crucial question from Post 12 remains: are these open protocols or proprietary ones? A2A is published as an open standard. MCP is open-source. Yet the firms building on top of them — Salesforce, SAP, ServiceNow — control proprietary protocol layers at the business logic level. The open infrastructure enables interoperability; the proprietary layer above it captures value.
The protocol stack is real. But who governs which layer determines who captures value — and that question is far from settled.
This mirrors the history of the internet itself. HTTP was open. The platforms built on HTTP were not.
Case Study 1: Salesforce — The Protocol Governor
Salesforce’s Agentforce platform illustrates the protocol firm concept from Post 12 with unusual clarity. Their stated goal is to “turn disconnected capabilities into orchestrated solutions.”
That phrase deserves unpacking. “Disconnected capabilities” is the pre-protocol state — siloed features, fragmented workflows, manual hand-offs between systems. “Orchestrated solutions” is the protocol state — interactions governed by standardised rules, composed into coherent outcomes.
Salesforce controls the CRM protocol. It defines who participates in customer interactions, what actions are possible, how data flows between touchpoints, and how outcomes are validated. Every company using Salesforce implicitly accepts these interaction rules. This is governance through protocol design, not through organisational hierarchy.
A2A extends this power. With agent interoperability, a Salesforce agent coordinating a sales pipeline can now invoke an SAP agent for pricing, a ServiceNow agent for provisioning, and a DocuSign agent for contract execution — all within the protocol boundary that Salesforce defines.
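The shape of that call chain can be sketched as plain function dispatch. Everything below is hypothetical: the agent names, the `invoke` helper, and the message shapes stand in for A2A task exchanges, which in practice involve discovery, authentication, and asynchronous task lifecycles.

```python
# Hypothetical sketch of a protocol-governed pipeline invoking agents
# from other vendors. Each "agent" is modelled as a callable registry
# entry; in a real A2A deployment these would be remote task exchanges.

AGENTS = {
    "sap.pricing":          lambda req: {"price": 12_500, "currency": "EUR"},
    "servicenow.provision": lambda req: {"ticket": "PRV-104"},
    "docusign.contract":    lambda req: {"envelope": "sent"},
}

def invoke(agent_id: str, request: dict) -> dict:
    """Dispatch a request to a named agent (stands in for an A2A call)."""
    return AGENTS[agent_id](request)

def close_deal(opportunity: dict) -> dict:
    """The protocol governor's pipeline: it owns the sequence and the
    validation rules, not the agents it invokes."""
    quote = invoke("sap.pricing", {"items": opportunity["items"]})
    provision = invoke("servicenow.provision",
                       {"account": opportunity["account"]})
    contract = invoke("docusign.contract",
                      {"account": opportunity["account"],
                       "price": quote["price"]})
    return {"quote": quote, "provision": provision, "contract": contract}

result = close_deal({"account": "ACME", "items": ["licences"]})
```

Note what the orchestrator owns in this sketch: the ordering, the hand-off of data between steps, and nothing else. That is the protocol boundary in miniature.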
The firm boundary and the protocol boundary are converging. Salesforce does not employ the agents it coordinates. It does not own the infrastructure they run on. But it controls the interaction rules — and in a protocol-governed network, that is the locus of power.
Post 6 described the platform as a “proto-firm” — an entity that coordinates economic activity without traditional employment relationships. Salesforce Agentforce is the next iteration: a protocol governor that coordinates not just human participants but autonomous agents across organisational boundaries.
The protocol governor does not need to employ the participants or own the infrastructure. It needs to define the rules by which they interact — and those rules must be good enough that participants prefer following them to negotiating bilaterally.
Case Study 2: SAP — Cross-Boundary Value Streams
If Salesforce illustrates the protocol firm, SAP illustrates value streams crossing firm boundaries — the phenomenon Post 14 predicted but could only describe theoretically.
SAP’s Joule platform promises “end-to-end processes” with agents working “seamlessly across platforms.” This is not marketing language. It is a precise description of what happens when value stream mapping meets agent interoperability.
ERP systems are inherently value stream systems. The core flows — procure-to-pay, order-to-cash, plan-to-produce — are sequences of transformations that produce value. They have always been the natural coordination unit for enterprise operations.
Historically, these streams terminated at the firm boundary. A company’s procure-to-pay process ended when the purchase order left their system and entered the supplier’s. The hand-off — email, EDI, manual re-entry — was a friction point that the series (Post 14) identified as precisely the kind of waste that value stream mapping exposes.
A2A changes the physics of this. An SAP Joule agent managing procurement can now communicate directly with a supplier’s agent managing order fulfilment. The value stream extends beyond the SAP boundary. Agents from different vendors, running on different infrastructure, participate in the same stream.
This is the Coasean question from Post 5 made concrete. If agent protocols reduce the coordination cost of cross-firm value streams to near zero for certain tasks, where does the boundary of the firm settle? The traditional answer — “internalise activities where internal coordination is cheaper than market coordination” — assumed that cross-boundary coordination was inherently expensive. That assumption is being engineered away.
When value streams extend beyond firm boundaries through protocol-governed agent coordination, the Coasean boundary shifts. The firm does not need to own every step — it needs to govern the protocol that connects them.
Case Study 3: Atlassian — Group-Forming Networks
Atlassian’s Rovo agents demonstrate something different from Salesforce’s protocol governance or SAP’s cross-boundary streams. They demonstrate group-forming network dynamics — the Reed-scale phenomenon from Post 11 — operating inside a collaboration platform.
Rovo agents can “discover, coordinate, and reason with one another.” This is not a fixed workflow. It is ad-hoc group formation. Agents assemble around a task — a Jira ticket, a Confluence document, a code review — contribute their specialised capabilities, and dissolve when the task completes.
This is the Cyborg Cell from Post 9, extended. Instead of one human anchor with N specialist agents, Rovo enables multiple human anchors, each with their own agent capabilities, forming temporary working groups that include both human and agent participants.
The groups are dynamic. The same Rovo agent that participates in sprint planning might, an hour later, join an incident response group with an entirely different set of human and agent participants. This is delegation at scale — what Tomašev, Franklin, and Osindero formalise as intelligent AI delegation, where authority transfer, accountability chains, and trust assessment operate through the protocol layer rather than through managerial oversight.
Post 11 showed that group-forming networks have Reed-scale potential (value ∝ 2ᴺ) but face enormous coordination burden. A million possible groups means a million possible coordination failures. Atlassian’s approach suggests a partial answer: the platform constrains which groups can form (through workspace boundaries, permission models, and task context) while the protocol layer governs how they interact. The platform does not eliminate the coordination problem — it manages it by curating the activation rate.
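The gap between possible groups and permitted groups is easy to quantify. A minimal sketch, assuming (purely for illustration) that each agent is tagged with a workspace and only same-workspace groups may activate:

```python
from itertools import combinations

def possible_groups(n: int) -> int:
    """Reed-scale count: every subset of two or more participants."""
    return 2**n - n - 1

# Hypothetical roster: agent -> workspace
agents = {
    "planner": "eng", "reviewer": "eng", "incident-bot": "eng",
    "ledger": "finance", "forecaster": "finance",
}

def curated_groups(roster: dict) -> list:
    """Only groups whose members share a workspace may activate."""
    allowed = []
    for size in range(2, len(roster) + 1):
        for combo in combinations(roster, size):
            if len({roster[a] for a in combo}) == 1:  # one workspace only
                allowed.append(combo)
    return allowed

# 5 agents: 2**5 - 5 - 1 = 26 possible groups, but the workspace
# constraint cuts the activation rate to a handful.
```

With five agents the combinatorial space is 26 groups; the workspace rule permits only five. At a million agents the same curation principle is the difference between Reed-scale value and Reed-scale failure.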
Group-forming networks do not need to activate every possible group. They need to activate the right ones. The platform’s role is curation — and the protocol’s role is governance.
Case Study 4: The Consulting Firms — Institutional Validation
The most telling evidence may not come from the technology companies. It comes from the consulting firms.
BCG, McKinsey, Deloitte, PwC, Accenture, KPMG, Capgemini, Cognizant, TCS, Infosys, Wipro, HCLTech, and EPAM are all A2A partners. Thirteen major consulting and services firms, all positioning around agentic coordination as the next enterprise transformation.
Their language is revealing:
- BCG: “sustained, autonomous competitive advantage through open capabilities”
- TCS: “semantic interoperability”
- Deloitte: “foundation for evolving agentic AI architectures”
- Cognizant: “pioneer in multi-agent systems; interoperability as critical requirement”
- PwC: “seamless agent collaboration via agent OS”
These are not technology companies making product announcements. These are the firms that advise the world’s largest corporations on organisational design. They are telling their clients — the Fortune 500, the FTSE 100, the DAX 40 — that the coordination model is changing.
This is the institutional validation the series thesis requires.
Post 3 examined what happened when corporations became too big — when the coordination costs of hierarchy exceeded its benefits and firms began to fragment into networks. The consulting firms are now telling their clients that the next fragmentation is arriving: from managed networks to protocol-governed agent ecosystems.
The significance is not that these firms endorse A2A specifically. It is that they have converged independently on the same structural diagnosis: hierarchy is insufficient for the coordination problems that agentic AI creates. Whether they frame it as “semantic interoperability” (TCS), “autonomous competitive advantage” (BCG), or “evolving agentic architectures” (Deloitte), the underlying claim is the same one this series has been building toward: protocols replace hierarchy as the coordination mechanism for networked work.
When thirteen consulting firms converge on the same structural diagnosis, it is worth asking whether the diagnosis is correct. The convergence itself is evidence.
What the Scaling Research Tells Us
Not all of this will work.
Kim and Liu’s research on scaling agent systems (referenced in Posts 11 and 13) provides the empirical grounding that tempers the enthusiasm of fifty-company partner lists. Their findings, tested across 180 agent configurations, are precise:
Parallelisable tasks benefit enormously from multi-agent coordination. Financial analysis across multiple portfolios. Candidate sourcing across multiple channels. Compliance checks across multiple jurisdictions. These are tasks where the work is decomposable — where agents can operate on independent segments and a coordinator can merge results. Performance improvements of up to 80% are achievable.
Sequential reasoning tasks degrade with multiple agents. Complex legal analysis. Strategic planning. Architectural design. These are tasks where each step depends on the output of the previous one. Adding agents fragments the reasoning process. Performance drops of 39–70% are measured, not speculated.
The implications for the A2A ecosystem are direct. Many of the fifty-plus partner companies will discover that naive multi-agent deployments fail. The hype cycle will overshoot. Companies that deploy agents everywhere — without mapping which tasks are decomposable and which are sequential — will experience the agent equivalent of Brooks’s Law: adding agents will make things worse.
The companies that succeed will be the ones that do what Post 14 described: map their value streams first, identify which steps are parallelisable and which are sequential, and choose agent topologies based on task structure. The value stream is not just an organisational tool. It is a diagnostic for where agent coordination will work and where it will not.
| Task Type | Agent Topology | Expected Outcome |
|---|---|---|
| Parallel (sourcing, screening, compliance) | Distributed with coordinator | Significant improvement |
| Sequential (strategy, legal reasoning, design) | Single agent or human-led | Better than multi-agent |
| Mixed (hiring workflow, procurement) | Hybrid — parallel where decomposable, sequential where dependent | Moderate improvement with careful design |
The question is not “should we deploy agents?” It is “what is the task structure?” Kim and Liu’s research turns this from opinion into measurement.
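The table above reduces to a small decision rule. A sketch; the step representation and labels are illustrative, not drawn from Kim and Liu's paper:

```python
def choose_topology(steps: list) -> str:
    """Pick an agent topology from task structure.

    Each step is {'name': ..., 'depends_on_previous': bool}.
    Mirrors the rule: fully decomposable work -> distributed agents
    with a coordinator; fully dependent chains -> a single agent (or
    human-led); a mix -> hybrid design.
    """
    dependent = [s for s in steps if s["depends_on_previous"]]
    if not dependent:
        return "distributed-with-coordinator"
    if len(dependent) == len(steps):
        return "single-agent-or-human-led"
    return "hybrid"

# Compliance checks across jurisdictions: fully decomposable
screening = [{"name": f"jurisdiction-{i}", "depends_on_previous": False}
             for i in range(4)]

# Legal analysis: each step builds on the last
legal = [{"name": s, "depends_on_previous": True}
         for s in ["facts", "issues", "argument", "opinion"]]
```

The hard part, of course, is not the rule but the mapping: deciding honestly which steps of a value stream actually depend on their predecessors.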
Where Reality Diverges from Theory
The framework built across thirteen posts holds up better than expected against the evidence. But it is not without gaps. Four divergences are worth noting.
The Speed of Standardisation
The series implied that protocol layers would emerge gradually as firms experimented with coordination structures. In practice, MCP and A2A materialised rapidly — within months, not years. The protocol layer is forming before most firms have redesigned their internal coordination structures. Infrastructure is outpacing organisation.
This creates an inversion. The series assumed firms would develop internal protocols first and then extend them outward. Instead, the ecosystem is providing the protocol layer from the outside, and firms are adapting their internal structures to fit it. The infrastructure is shaping the organisation, not the other way around. Conway’s Law is operating in reverse — the protocol is designing the firm.
The Proprietary Layer Above the Open Layer
A2A is open. MCP is open-source. But the firms building on them — Salesforce, SAP, ServiceNow — control proprietary protocol layers above the open infrastructure. Salesforce’s Agentforce defines proprietary interaction rules for CRM workflows. SAP’s Joule defines proprietary rules for ERP processes.
The real power may not concentrate at the A2A level. It may concentrate one layer up — at the business logic layer where proprietary protocols define what agents can do within specific domains. The open infrastructure enables interoperability; the proprietary layer captures value.
Post 12 flagged the distinction between open and proprietary protocols. The evidence suggests that both will coexist — and that the power dynamics will be determined by which layer becomes the bottleneck.
The Governance Gap
Who governs cross-firm agent interactions? When a Salesforce agent invokes an SAP agent, which protocol governs the interaction? What happens when they disagree? Who arbitrates?
The series (Post 12) raised the governance question but treated it as a design challenge. The evidence suggests it is more fundamental than that. Cohere’s emphasis on “air-gapped environments” and the repeated invocation of “responsible” coordination across the partner list indicate that trust is a harder problem than the protocol layer alone can solve.
Protocols define interaction rules. But interaction rules presuppose a governance structure — someone or something that defines the rules, adjudicates disputes, and evolves the protocol over time. The A2A ecosystem does not yet have this. It has a protocol. It does not have a governance framework for the protocol.
Security as a First-Order Constraint
The theoretical framework treated security as a property of the protocol layer — something that well-designed protocols would handle. The evidence suggests it is more constraining than that.
Cohere explicitly positions its A2A participation around “air-gapped environments” — agent coordination that operates within security boundaries that the agents themselves cannot cross. This implies that for many enterprise use cases, the theoretical benefits of cross-boundary agent coordination will be limited not by protocol design but by security policy.
The boundary of the firm may not be determined by coordination costs alone. It may be determined by the trust boundary — the set of agents that an organisation is willing to let communicate without human oversight.
The Ecosystem as Network
Step back from the individual companies and examine the A2A partner list as a whole. It exhibits exactly the dynamics the series describes.
Fifty-plus companies joining creates Metcalfe-scale value at the protocol level. Each new participant can, in principle, coordinate with every other participant’s agents. The potential connections grow quadratically. This is why the ecosystem attracted so many partners so quickly — the value of joining increases with each additional member.
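The quadratic pull is simple arithmetic, and worth making explicit: the fifty-first member gains more from joining than the second did.

```python
def pairwise_connections(n: int) -> int:
    """Metcalfe-scale count of possible agent-to-agent links."""
    return n * (n - 1) // 2

# Each new partner adds n new links, so the marginal value of joining
# grows with the size of the ecosystem.
marginal_at_50 = pairwise_connections(51) - pairwise_connections(50)
```

At fifty partners there are 1,225 possible bilateral links; the next entrant adds fifty more at a stroke. This is the flywheel behind the speed of the partner list.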
But the ecosystem is not a flat connection network. It has structure. There are at least four tiers:
- Protocol governors (Google, Anthropic) define the foundational interaction rules.
- Domain controllers (Salesforce, SAP, ServiceNow, Atlassian) define proprietary business logic protocols on top.
- Infrastructure providers (LangChain, Confluent, Datadog, MongoDB) provide the execution and observability layer.
- System integrators (the consulting firms) bridge the gap between protocol and organisation.
This structure is not accidental. It mirrors Conway’s Law at ecosystem level: the companies designing the protocols are designing the future coordination topology of the enterprises that adopt them. The structure of the A2A consortium will shape the protocols, which will shape the agent systems, which will shape how work is coordinated inside and between firms.
The ecosystem is not just deploying agents. It is designing the coordination architecture of the next generation of firms. Conway’s Law applies at every level — including the level of the consortium itself.
There is a deeper pattern here. The A2A ecosystem is a group-forming network in its own right. Subgroups of partners are forming around specific use cases — hiring workflows, supply chain coordination, IT service management. These subgroups assemble, produce reference architectures, and dissolve or recombine as the protocol matures. The ecosystem exhibits the same dynamics it is engineering into its products.
What Comes Next
Four trajectories are visible from the current evidence.
Protocol competition. A2A and MCP are complementary today. They may not remain so. As both protocols mature, the boundary between “agent-to-agent communication” and “agent-to-tool connection” will blur. Other protocol proposals will emerge. The question of which protocol layer captures the most value — and whether a single dominant standard emerges or a fragmented landscape persists — will shape the ecosystem for the next decade.
Regulatory response. Autonomous agents coordinating across firm boundaries will attract regulatory attention. When an agent in one jurisdiction triggers an action in another — a procurement decision, a hiring recommendation, a financial transaction — questions of liability, jurisdiction, and accountability become urgent. The governance gap identified above is not merely a design problem. It is a regulatory one.
The emergence of protocol firms. Post 12 described the protocol firm as an entity that controls interaction rules without controlling infrastructure or labour. The A2A ecosystem is creating the conditions for this entity to emerge. A company that designs the dominant protocol for, say, cross-firm hiring workflows — defining how agents source, screen, interview, and onboard across organisational boundaries — would capture enormous value without employing a single recruiter or owning a single server.
The Coasean boundary, revisited. If agent protocols reduce coordination costs to near zero for parallelisable tasks, the boundary of the firm shifts outward — firms can coordinate more activity through the market (or through protocol-governed networks) rather than internalising it. But for sequential tasks, where Kim and Liu’s research shows multi-agent coordination degrades performance, the firm boundary holds. The result may be a new equilibrium: firms that are thin where work is parallelisable and thick where work is sequential. The boundary of the firm becomes a function of task decomposability.
The future firm may be defined not by what it owns or whom it employs, but by the set of value streams it governs through protocols — and the task structures that determine which streams can be externalised and which cannot.
References & Intellectual Lineage
- Google (2026). “A2A: A New Era of Agent Interoperability.” Google Developers Blog.
- Kim, J. & Liu, M. (2026). “Towards a Science of Scaling Agent Systems.” Google Research.
- Tomašev, N., Franklin, M. & Osindero, S. (2026). “Intelligent AI Delegation.” arXiv:2602.11865.
- Coase, R. (1937). “The Nature of the Firm.”
- Conway, M. (1968). “How Do Committees Invent?”
- Reed, D. (1999). “The Law of the Pack.”
- Metcalfe, R. (1980). Network value and telecommunications economics.
- Brooks, F. (1975). The Mythical Man-Month.
- Posts 1–14 in this series, particularly:
- Post 5: The Boundary of the Firm in a Digital Age
- Post 6: The Platform as Proto-Firm
- Post 9: The Hybrid Topology — the Cyborg Cell
- Post 11: The Geometry of Networks
- Post 12: The Protocol Layer
- Post 13: The Physics of Flow
- Post 14: Value Stream Mapping