Big Brain
Performance Marketing · 4 March 2026 · 3 min read

The four numbers your performance team should hate

If your paid program optimizes for these, it's optimizing for the wrong thing. The replacement metrics, and what they reveal.

Most performance programs report against numbers that flatter the program. The right numbers don't flatter — they reveal. Here are four reports your performance team should treat with suspicion, and what to replace them with.

1. Last-click ROAS

The first lie. Last-click attribution rewards channels that touch users closest to the conversion — usually branded search and remarketing. It tells you the channel converted, not whether it caused the conversion. Cut the channel and the conversions don't disappear; they just shift to whichever channel is now the last touch.

What to use instead. Geo-experiments. Hold-out tests. Bayesian incrementality models. A media mix model if you have the spend volume to fit one honestly. The goal is to attribute incremental contribution — what would you lose if the channel went dark?
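A minimal sketch of the hold-out arithmetic, with illustrative numbers (real geo-experiments need randomization, matched markets, and confidence intervals; this just shows what "incremental" means):

```python
def incremental_lift(treatment_conv, control_conv, treatment_pop, control_pop):
    """Naive geo hold-out estimate: conversions observed in treated geos
    minus the counterfactual baseline implied by the hold-out geos,
    scaled to the treated population."""
    baseline = control_conv / control_pop * treatment_pop
    return treatment_conv - baseline

# Illustrative: 1,200 conversions in treated geos (1M people) vs
# 500 in hold-out geos (500k people). The hold-out implies a baseline
# of 1,000 conversions would have happened anyway, so the channel
# caused roughly 200 -- not 1,200.
lift = incremental_lift(1200, 500, 1_000_000, 500_000)
```

Last-click would have credited the channel with all 1,200; the hold-out says it earned 200.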

When we run incrementality on freshly migrated accounts, the typical pattern is that 20–40% of "ROAS" reported on lower-funnel channels evaporates. Most clients find the result uncomfortable. That's the right reaction.

2. Lead volume

A vanity metric in B2B. Filling the top of the funnel feels like motion, but if those leads convert at 2% versus the 14% your best segment converts at, you've multiplied the work sevenfold without moving the business. The marketing team gets a lead-volume bonus. The sales team quietly stops trusting marketing leads. Six months later the org rebuilds outbound.

What to use instead. Pipeline-weighted leads, segmented by ICP. A lead from your top-tier ICP is worth 7× a lead from your bottom tier. Reporting them at parity isn't simplification; it's lying to yourself.
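The weighting itself is trivial once you commit to it. A sketch with hypothetical tier weights (calibrate yours against actual close rates and deal sizes, not gut feel):

```python
# Hypothetical weights -- derive these from your own win rates and ACV.
TIER_WEIGHTS = {"tier_1": 7.0, "tier_2": 3.0, "tier_3": 1.0}

def pipeline_weighted_leads(leads_by_tier):
    """Collapse raw lead counts into a single pipeline-weighted figure."""
    return sum(TIER_WEIGHTS[tier] * count for tier, count in leads_by_tier.items())

# 10 top-tier leads (weighted: 70) carry more pipeline than
# 60 bottom-tier leads (weighted: 60), despite 6x the raw volume.
total = pipeline_weighted_leads({"tier_1": 10, "tier_3": 60})
```

Report that number alongside raw volume and the "more leads" conversation changes on its own.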

3. CTR

CTR rewards creative that grabs attention. That is one third of the job. The other two thirds — does the click convert, does the customer return — are conveniently invisible at the CTR layer.

I have personally bought hundreds of thousands of clicks that performed at 2× the account average CTR and converted at half the average rate. Clicks are cheap. Customers are not.

What to use instead. Stage-weighted creative scoring. CTR is one input. So is lander engagement, post-click conversion rate, and 30-day cohort revenue. Score creative against the journey, not the impression.
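One possible shape for that score, with hypothetical weights and metric names (the stages and weightings are assumptions; the point is that CTR is a minority input, normalized against account benchmarks before it gets here):

```python
# Hypothetical stage weights -- CTR deliberately holds a minority share.
STAGE_WEIGHTS = {
    "ctr": 0.2,
    "lander_engagement": 0.2,
    "post_click_cvr": 0.3,
    "cohort_revenue_30d": 0.3,
}

def creative_score(metrics):
    """Weighted score over journey stages.
    Each input is assumed pre-normalized to 0-1 against account averages."""
    return sum(STAGE_WEIGHTS[k] * metrics[k] for k in STAGE_WEIGHTS)

# A flashy high-CTR creative that dies post-click scores below a
# balanced one that converts and retains.
flashy = creative_score({"ctr": 0.9, "lander_engagement": 0.4,
                         "post_click_cvr": 0.2, "cohort_revenue_30d": 0.2})
solid = creative_score({"ctr": 0.5, "lander_engagement": 0.6,
                        "post_click_cvr": 0.7, "cohort_revenue_30d": 0.7})
```

Under these weights the flashy creative scores 0.38 to the solid one's 0.64, which is the inversion a CTR-only report would never show you.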

4. Spend-pacing alerts

The most insidious of the four. Spend pacing — "are we hitting budget" — is the metric that turns marketing into accounts payable. Once your team is rewarded on hitting spend, they will hit spend. Performance becomes residual.

What to use instead. Pace against marginal CAC, not budget. As long as the next dollar is being spent at acceptable marginal cost, spend more. The moment marginal CAC exceeds the bar, stop spending — even if budget is unspent. The annual budget is a guess; the marginal economics are real.
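The pacing rule fits in a few lines. A sketch, assuming you can estimate the cost of the next acquired customer per channel (the hard part in practice):

```python
def should_keep_spending(marginal_cac, cac_threshold, budget_remaining):
    """Pace on marginal economics, not budget consumption.
    Budget only matters as a hard ceiling at zero; an unspent budget is
    never, by itself, a reason to spend."""
    return budget_remaining > 0 and marginal_cac <= cac_threshold

# Next customer costs $180 against a $200 bar: keep spending.
should_keep_spending(180, 200, budget_remaining=50_000)   # True
# Next customer costs $260: stop, even with budget left over.
should_keep_spending(260, 200, budget_remaining=50_000)   # False
```

The uncomfortable consequence is symmetric: when marginal CAC is well under the bar, the same rule says to spend past the plan.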

What to do on Monday

You don't need to overhaul the program. You need to overhaul the report.

Walk into the next performance review and replace the four standard charts with these:

  1. Incremental contribution by channel (geo or hold-out evidence required)
  2. Pipeline-weighted leads, segmented by ICP tier
  3. Stage-weighted creative scoring
  4. Marginal CAC by channel, with the threshold drawn

If your team can't produce those — that is the problem. Not the campaigns. Not the creative. The measurement layer. Fix that first; the budget reallocations will write themselves.

We have rebuilt this layer for ten partnerships in the last two years. Every single time, the first month produces an uncomfortable conversation about which channels were never earning their seat. Every single time, the second quarter's revenue is materially higher.

Liked this? A 30-minute call is the next step.