
Agents, Routing, Patterns, and Actors

Published at 11:00 AM

Building Agent Networks that Don’t Collapse Under Their Own Cleverness

TL;DR

• Once you get past the hello-world demos, agents don't just need tools—they need infrastructure.

• And that means message routing, protocol contracts, and actors that know when to get off the stage.

Message Routing is Not Optional

Let’s say your agent asks another agent to “book a trip.” Simple, right? But now you’ve got one of three subflows:

  • A one-way travel researcher.

  • A multi-stop itinerary planner.

  • A customer with loyalty points that change pricing.

Which service (or agent) handles that message? That’s message routing, and if you don’t define the routing logic explicitly, you’ll end up reimplementing it ad hoc in every agent’s brain.

This is exactly the problem that Enterprise Integration Patterns (EIP) solved decades ago. Back then it was about backend systems; today it’s agents. The core idea holds:

Messages should go to the right place, at the right time, and in the right format.

Classic Patterns Still Apply

Gregor Hohpe’s canonical patterns are a perfect fit for agent systems:

| Pattern | Agent World Analogy |
| --- | --- |
| Content-Based Router | PlannerAgent chooses Translate vs Refund |
| Dynamic Router | ClassifierAgent hands off to a domain-specific tool |
| Aggregator | SummariseAgent compiles subtask results |
| Scatter-Gather | SearchAgent fans out to Web, API, Memory |
| Message Filter | ValidationAgent drops irrelevant inputs |

These map directly onto the coordination challenges in multi-agent setups.
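As one concrete illustration, the Scatter-Gather pattern above can be sketched in a few lines of Python. The source names and the asyncio fan-out are illustrative assumptions, not part of any particular agent framework:

```python
import asyncio

# Hypothetical sub-agents: each source answers the same query independently.
async def query_source(name: str, query: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for a network or LLM call
    return f"{name} result for {query!r}"

# Scatter-Gather: fan the query out to every source, then collect the replies.
async def scatter_gather(query: str) -> list[str]:
    sources = ["web", "api", "memory"]
    tasks = [query_source(s, query) for s in sources]
    return await asyncio.gather(*tasks)  # preserves source order

results = asyncio.run(scatter_gather("best route to Lisbon"))
```

A real SearchAgent would add per-source timeouts and partial-failure handling, but the shape is the same: scatter once, gather once, then hand the combined results to an Aggregator.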

If your planner agent always knows which sub-agent to call, fine.

But as soon as your logic becomes dynamic, you either reimplement these patterns—or you use the infrastructure built for them.

Actors: The Right Execution Model

What’s the runtime that supports this kind of messaging and state isolation? That’s where actors come in.

An actor:

  • Is a process (or task) that owns its state
  • Handles one message at a time (no concurrent access to internal state)
  • Sends messages to other actors asynchronously
  • Can create child actors (supervision hierarchies)

If you’ve worked with Erlang/OTP, Akka, or Orleans, you’ve seen this pattern. The actor model eliminates shared mutable state by design — each actor is an island that communicates only through messages. This makes reasoning about concurrency dramatically simpler.
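A minimal actor in this style can be sketched with a thread and a queue. The `Actor` class below is an illustration of the model, not any specific runtime’s API:

```python
import queue
import threading

class Actor:
    """Owns private state and drains its mailbox one message at a time."""

    def __init__(self):
        self.inbox = queue.Queue()
        self.count = 0  # private state: only the actor's own thread touches it
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        # The only way in: asynchronous message passing via the inbox.
        self.inbox.put(msg)

    def stop(self):
        self.inbox.put(None)  # poison pill ends the processing loop
        self._thread.join()

    def _run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:
                break
            self.count += 1  # no locks needed: one message at a time

counter = Actor()
counter.send("hello")
counter.send("world")
counter.stop()
```

Because only the actor’s own thread ever mutates `count`, there is nothing to lock; serialized mailbox processing is the whole concurrency story.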

This is ideal for agents:

  • Each agent instance can be an actor
  • Their inbox is the agent protocol interface (A2A, JSON, gRPC, etc.)
  • Their logic mixes deterministic workflows and LLM-powered reasoning

The actor model, when combined with message-passing routing patterns, becomes a natural foundation for building distributed, fault-tolerant, and evolvable systems, a combination explored in depth in Vaughn Vernon’s Reactive Messaging Patterns with the Actor Model.

A2A, Agent Cards, and Protocol-Driven Routing

Enter Google’s A2A spec.

This isn’t just another transport.

It brings:

  • Discovery: via Agent Cards that describe capabilities
  • Routing: via structured task types and semantic capabilities
  • Progress: via server-sent events and status updates

Using A2A, your planner agent doesn’t need to know internal URLs or auth details.

It just says: “I need an agent that can apply_discount” and the A2A runtime handles the route.

This is Content-Based Routing made dynamic: messages flow not based on hardcoded rules but on declared capabilities and metadata.
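A toy version of that capability lookup might look like this. The card fields below mirror the spirit of A2A’s Agent Cards but are simplified assumptions for illustration:

```python
# Simplified "agent cards": each agent advertises capabilities, not URLs.
agent_cards = [
    {"name": "billing-agent", "capabilities": ["apply_discount", "apply_credit"]},
    {"name": "translate-agent", "capabilities": ["translate"]},
]

def find_agent(capability: str) -> str:
    # Content-Based Routing on declared capabilities, not hardcoded rules.
    for card in agent_cards:
        if capability in card["capabilities"]:
            return card["name"]
    raise LookupError(f"no agent advertises {capability!r}")

target = find_agent("apply_discount")
```

The planner never learns the billing agent’s address or auth details; it asks for a capability and the routing layer resolves the rest.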

Actor-Style Routing in Practice

Here’s how this looks using a hybrid actor + EIP setup. Both examples below achieve the same goal — routing messages to the right agent — but represent different implementation approaches:

Option A: Custom routing logic (pseudocode inside a planner agent)

# Pull the next message from this planner's inbox
message = receive()

if message.task_type == "invoice_adjustment":
    # Hardcoded rule: adjustments go to whichever agent registered "apply_credit"
    target = registry.lookup("apply_credit")
    send(target, message)
elif message.task_type == "translate":
    # Narrow the lookup by the payload's target language
    lang_agent = registry.lookup("translate", lang=message.payload.lang)
    send(lang_agent, message)
else:
    # Unknown task types should go to a dead-letter channel, not silently vanish
    send(registry.lookup("dead_letter"), message)

Option B: Protocol-driven routing (using A2A’s declarative envelope)

{
  "task": "translate",
  "data": {
    "text": "Bonjour",
    "target_lang": "en"
  },
  "requirements": ["low_latency", "eu_region"]
}

With Option A, your code decides the routing. With Option B, you declare what you need and let the A2A runtime find an agent that matches the requirements. Both are valid — Option A gives you control, Option B gives you flexibility.
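Matching an envelope like the one above against agent cards could be sketched as follows. The requirement tags and the card schema are assumptions for illustration, not fields defined by the A2A spec:

```python
# Hypothetical cards: each declares its task type and quality-of-service tags.
cards = [
    {"name": "fast-eu-translator", "task": "translate",
     "tags": {"low_latency", "eu_region"}},
    {"name": "batch-translator", "task": "translate", "tags": {"batch"}},
]

def match(envelope: dict) -> str:
    # Pick the first card that handles the task and satisfies every requirement.
    needed = set(envelope.get("requirements", []))
    for card in cards:
        if card["task"] == envelope["task"] and needed <= card["tags"]:
            return card["name"]
    raise LookupError("no matching agent")

envelope = {"task": "translate",
            "data": {"text": "Bonjour", "target_lang": "en"},
            "requirements": ["low_latency", "eu_region"]}
chosen = match(envelope)
```

The caller’s code never names an agent; adding a new translator means publishing a new card, not editing the planner.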

This pattern scales.

Agents don’t contain the logic.

The routing layer (A2A or pub/sub + actor registry) handles dispatch.

Coordinators Are Just Process Managers

Whether you’re using LangGraph or Dapr Workflows, the same thing is happening: some actor (or step) is deciding what happens next based on what just happened.

This is the Process Manager pattern.

It:

  • Tracks state across multiple steps or agents
  • Issues commands and handles results
  • May run compensations if things go wrong

In LangGraph, the planner is a graph edge that emits new calls.

In Dapr, it’s a durable workflow.

Either way, it’s a coordinator—not unlike what you’d use in a long-lived saga.
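A bare-bones process manager for this pattern might look like the sketch below. The step functions and compensation hooks are hypothetical, standing in for agent calls:

```python
# Each step mutates shared process state; each has a compensation to undo it.
def book_flight(state):
    state["flight"] = "LIS-123"

def cancel_flight(state):
    state.pop("flight", None)

def book_hotel(state):
    raise RuntimeError("no rooms available")  # simulate a downstream failure

def run_process(steps):
    state, done = {}, []
    try:
        for step, compensate in steps:
            step(state)
            done.append(compensate)  # only completed steps get compensated
    except Exception:
        # Saga-style rollback: undo completed steps in reverse order.
        for compensate in reversed(done):
            compensate(state)
    return state

final = run_process([(book_flight, cancel_flight),
                     (book_hotel, lambda s: None)])
```

When the hotel step fails, the manager compensates the flight booking, leaving the process state empty; a durable workflow engine adds persistence and retries on top of exactly this loop.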

Summary: What We’re Actually Building

LLM agents are just services with probabilistic dispatch logic.

Once you connect more than one, you are building a distributed system.

That means:

  • Use actor runtimes to isolate logic and state.
  • Use message-routing patterns to avoid hardcoding workflows.
  • Use discovery protocols like A2A to let agents describe what they can do.
  • Use process managers or coordination graphs to control long-running flows.

If you ignore this, you’ll build brittle, unobservable spaghetti.

If you embrace it, you’ll get resilient, swappable, evolvable agent meshes.

And yes, they’ll still hallucinate. But at least they’ll do it predictably.

The future of agents is distributed. Let’s build it on purpose.