
Metrics Are Projection

TL;DR

• Metrics aren't compression — they're irreversible projections

• The real design question isn't how much to compress, but which dimensions to preserve

• Agents optimise the projection, not the underlying reality

• In reinforcement learning terms: metrics define the reward surface

• As autonomy increases, projection design matters more than model intelligence

Metrics Don’t Measure Reality

We talk about metrics as if they measure reality.

Revenue. Conversion rate. Utilisation. Churn.

But metrics don’t measure reality — they project it.

Every metric is a deliberate projection of a high-dimensional business state onto a lower-dimensional surface.

You choose which dimensions survive. The rest are discarded permanently.

That choice determines what an agent can see, what it can optimise, and what it will never know to question.

As organisations become more autonomous, model intelligence matters less than projection design.

Design the wrong projection, and even a perfect agent will optimise you into a wall.


Why Agents Need Reduction

An agent — human or machine — cannot reason over raw reality.

Reality is:

  • High-dimensional
  • Noisy
  • Partially observable
  • Full of long causal chains

Logs, events, conversations, transactions — these are not decision-friendly.

So we reduce.

A metric is a function:

High-dimensional state → low-dimensional decision surface

That looks like compression. But calling it compression is misleading.

The difference matters.
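As a minimal Python sketch of that function (the `Event` fields and the sample values are hypothetical, chosen only to show how many dimensions one scalar swallows):

```python
from dataclasses import dataclass

# Hypothetical event record: a thin slice of high-dimensional reality.
@dataclass
class Event:
    clicked: bool
    margin: float        # profit margin on the item shown
    returned: bool       # was the order later returned?
    session_depth: int   # how far into the session this impression sat

def ctr(events: list[Event]) -> float:
    """A metric: many-dimensional events in, one number out."""
    return sum(e.clicked for e in events) / len(events)

events = [
    Event(clicked=True,  margin=0.05, returned=True,  session_depth=1),
    Event(clicked=False, margin=0.40, returned=False, session_depth=7),
    Event(clicked=True,  margin=0.08, returned=False, session_depth=2),
    Event(clicked=False, margin=0.35, returned=False, session_depth=9),
]

print(ctr(events))  # 0.5 — margin, returns, and session depth are gone
```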


Compression vs Projection

When you compress an image to JPEG, you lose detail — but you retain structure. You can still see the picture. The loss is approximate, not selective.

Metrics don’t work like that.

When you reduce a complex system to:

“Click-through rate: 6.8%”

you haven’t made a blurry version of reality. You’ve selected a single axis and discarded everything else.

You cannot recover:

  • Margin variation
  • Customer lifetime value
  • Return behaviour
  • Long-term retention
  • Distribution shape

That information isn’t hidden.

It’s gone.

Compression preserves structure approximately. Projection selects structure deliberately.

A metric is a projection onto a chosen subspace of reality.
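The irreversibility is easy to demonstrate. In this toy sketch (the state vectors are invented), two very different business realities collapse to the same metric, so no inverse mapping can exist:

```python
# A 4-dimensional "business state": (ctr, margin, retention, return_rate)
state_a = (0.068, 0.22, 0.81, 0.04)   # healthy business
state_b = (0.068, 0.02, 0.30, 0.25)   # unhealthy business

def project_to_ctr(state):
    """Keep the CTR axis; discard the other three dimensions."""
    return state[0]

# Radically different realities collapse onto the same point:
assert project_to_ctr(state_a) == project_to_ctr(state_b) == 0.068

# The map is many-to-one, so it has no inverse: given only 0.068,
# margin, retention, and return rate are unrecoverable.
```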


Why This Matters for Agents

If metrics were compression, the design question would be:

“How much detail do we need?”

But metrics are projection.

The real question is:

Which dimensions should the agent be able to see?

Two projections of the same reality — with the same number of metrics — can produce radically different behaviour depending on which axes they preserve.

The projection is the architecture.


Projection Loses Twice

Projection discards dimensions. That’s the first loss.

The second loss is subtler: uncertainty disappears.

Most metrics are presented as bare numbers:

  • CTR: 6.8%
  • Churn risk: 0.23
  • Conversion rate: 2.1%

But those numbers have variance. Confidence intervals. Sample sizes.

An agent consuming only point estimates has no way to distinguish between a confident estimate and a wild guess. But the difference matters enormously:

  • CTR: 6.8% (SE ± 0.2, n = 12,400) — stable, trustworthy
  • Churn risk: 0.23 (SE ± 0.09, n = 47) — noisy, worth less
  • Conversion rate: 2.1% (SE ± 0.18, n = 11) — barely a signal at all

The first number is actionable. The third is a guess wearing a number’s clothes. But to an agent that only sees the point estimate, they look equally real.

A mean without uncertainty is not decision-ready.

When we design metric systems, we usually strip uncertainty away. We project reality onto a number — then project away confidence around that number.

The agent is left doubly impoverished:

  • It cannot see discarded dimensions.
  • It cannot tell which of its beliefs are reliable.

A well-designed metric system preserves uncertainty, not just values.
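One way to sketch that in Python. The `Estimate` shape, the sample counts, and the 10% relative-error threshold are illustrative assumptions, not a prescription:

```python
import math
from dataclasses import dataclass

@dataclass
class Estimate:
    """A metric that carries its own uncertainty, not just a point value."""
    value: float
    se: float
    n: int

def rate_estimate(successes: int, n: int) -> Estimate:
    """Point estimate of a rate, plus its standard error and sample size."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return Estimate(value=p, se=se, n=n)

ctr  = rate_estimate(successes=843, n=12_400)  # stable signal
conv = rate_estimate(successes=1,   n=11)      # barely a signal

# An agent can now weight beliefs by reliability — for example,
# discard estimates whose relative error exceeds a threshold:
def trustworthy(e: Estimate, max_rel_se: float = 0.10) -> bool:
    return e.value > 0 and e.se / e.value <= max_rel_se

print(trustworthy(ctr), trustworthy(conv))  # True False
```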


A Concrete Failure Case

Imagine an e-commerce recommendation agent optimising:

Maximise click-through rate (CTR).

Initially, performance improves. CTR rises from 4.1% to 6.8%.

Dashboards turn green.

But the projection hid critical dimensions:

  • High-CTR items are low-margin impulse buys.
  • Premium products require more consideration.
  • Long-term retention depends on satisfaction, not clicks.

The agent learns:

  • Surface discounted products.
  • Promote novelty.
  • Avoid high-consideration items.

CTR increases.

But:

  • Average order value declines.
  • Margin compresses.
  • Customer lifetime value falls.
  • Returns increase.

Nothing in the projection surface exposed that trade-off.

The agent optimised exactly what it could see.

This is not a model failure.

It is a projection failure.
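The failure fits in a few lines. Here is a toy catalogue with invented numbers, and two agents that see different projections of the same items:

```python
# Toy catalogue: (name, predicted_ctr, margin, return_rate). All invented.
items = [
    ("discount gadget",   0.09, 0.04, 0.30),
    ("novelty item",      0.08, 0.06, 0.22),
    ("premium appliance", 0.03, 0.35, 0.02),
    ("mid-range staple",  0.05, 0.20, 0.05),
]

# An agent that sees only the CTR projection picks the CTR winner.
by_ctr = max(items, key=lambda item: item[1])

# An agent whose projection preserves margin and returns picks differently.
def expected_profit(item):
    _, ctr, margin, return_rate = item
    return ctr * margin * (1 - return_rate)

by_profit = max(items, key=expected_profit)

print(by_ctr[0])     # discount gadget
print(by_profit[0])  # premium appliance
```

Both agents optimise perfectly. They just see different subspaces of the same reality.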


This Is a Reward Design Problem

In reinforcement learning terms, metrics define the reward surface.

If you mis-specify the projection, you mis-specify the reward — and the agent will learn the wrong policy perfectly.

Projection determines which policies are even discoverable.

This is the same structural issue that appears in agent orchestration systems — intelligence is constrained by interface design. I explored that more concretely in Agent Experimentation at the Edge.

Model intelligence cannot compensate for a distorted reward surface.


Goodhart at Machine Scale

When a measure becomes a target, it stops being a good measure.

Humans have always gamed metrics. But machines change the dynamics:

  • Speed: exploitation happens instantly.
  • Scale: optimisation reshapes the entire system.
  • No implicit constraints: unless encoded, nothing is “obviously wrong.”

Goodhart with humans is a slow leak.

Goodhart with machines is a burst pipe.

And because projection is irreversible, the agent has no way to notice what it cannot see.


The Projection Stack

Modern organisations operate on a projection stack. Each layer discards structure from the one below:

Raw events → structured data → aggregated metrics → targets → narrative

Each transformation is a projection:

  • Structuring removes ambiguity.
  • Aggregation removes variation.
  • Targets remove debate.
  • Narrative removes alternative explanations.

Agents mostly live in the metric layer.

Human trust lives in the narrative layer.


Designing Metrics for Agents

Never give an agent a single metric.

Single-metric optimisation produces pathological behaviour.

Instead, design with tension:

  • Speed and margin
  • Engagement and retention
  • Growth and stability

Perfect orthogonality isn’t required — only enough independence that no single degenerate strategy satisfies all metrics at once.

Eventually, agents hit the Pareto frontier — the boundary where improving one metric must degrade another.

At that point, trade-offs are unavoidable.

That’s not an engineering problem.

It’s governance.

But tension alone isn’t enough. Three metrics in pairwise tension can create a situation where every possible action degrades at least one metric. The agent oscillates or freezes. The fix is constraints first, optimisation second:

  • “Margin must stay above 15% — within that constraint, maximise growth”
  • “Churn must not exceed 3% — then optimise engagement”

Constraints give the agent a feasible region to work within, rather than an impossible surface to balance on.
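A minimal sketch of constraints-first selection, with invented strategies and scores:

```python
# Hypothetical candidate strategies, each scored on two metrics in tension.
strategies = [
    {"name": "aggressive discounting", "growth": 0.18, "margin": 0.09},
    {"name": "balanced promotion",     "growth": 0.11, "margin": 0.17},
    {"name": "premium focus",          "growth": 0.06, "margin": 0.28},
]

MARGIN_FLOOR = 0.15  # the constraint comes first...

feasible = [s for s in strategies if s["margin"] >= MARGIN_FLOOR]
best = max(feasible, key=lambda s: s["growth"])  # ...optimisation second

print(best["name"])  # balanced promotion
```

The growth-maximising strategy overall violates the margin floor, so it never enters the feasible region — the agent cannot discover it, by construction.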

Weights, thresholds, constraints — these encode business priorities. Changing metrics changes behaviour. That’s a deployment decision, not a dashboard tweak.

The Real Takeaway

Designing metrics is designing the agent.

Projection determines:

  • What the agent believes
  • What it can influence
  • Which policies it can discover

Metrics are abstractions, not reality.

Maps, not territory.

As autonomy increases, the quality of our systems will depend less on model intelligence — and more on the choice of projection.

Design metrics carefully.

They decide what your agents can see — and what they’ll never know to look for.