Running Incrementality Tests

Incrementality tests answer the question every CFO eventually asks: "What would happen to revenue if we turned this channel off?"

Attribution and MMM give you correlations and modeled estimates. Incrementality tests give you causal evidence by deliberately withholding spend from a control group and measuring the revenue gap that opens between treatment and control.
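As a toy illustration of that gap (all numbers below are made up, not platform output):

```python
# Toy lift calculation for a finished holdout test.
treatment_revenue = 1_000_000   # group where the channel stayed on
control_revenue = 880_000       # matched holdout, scaled to the same baseline

incremental_revenue = treatment_revenue - control_revenue
lift = incremental_revenue / control_revenue   # relative to the counterfactual

print(f"Incremental revenue: ${incremental_revenue:,.0f}")   # $120,000
print(f"Lift: {lift:+.1%}")                                  # +13.6%
```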

Test types

Attribution.ai supports three incrementality test designs:

1. Geo holdout

Split US states (or other geographic units) into matched treatment and control groups, then turn the channel off in the control geos for a fixed window (typically 4–6 weeks). Best for:

  • Channels with broad reach (Meta, Google, TikTok, CTV, OOH)
  • Brands with national footprint and ≥ $20K/week spend on the test channel
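Attribution.ai's matching algorithm isn't documented here, but as a sketch of one standard approach, you can pair geos whose pre-period revenue series move together, then randomize within each pair (the function and names below are ours, not the platform's):

```python
import itertools
import numpy as np

def match_geos(pre_period: dict[str, np.ndarray]) -> list[tuple[str, str]]:
    """Greedily pair geos by pre-period weekly-revenue correlation.
    pre_period maps geo -> array of weekly revenue. One geo from each
    pair is later randomized into the control (channel-off) arm."""
    pairs: list[tuple[str, str]] = []
    used: set[str] = set()
    # Score every candidate pair by Pearson correlation, best first.
    scored = sorted(
        ((np.corrcoef(a, b)[0, 1], g1, g2)
         for (g1, a), (g2, b) in itertools.combinations(pre_period.items(), 2)),
        reverse=True,
    )
    for _, g1, g2 in scored:
        if g1 not in used and g2 not in used:
            pairs.append((g1, g2))
            used.update((g1, g2))
    return pairs
```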

2. PSA / ghost-bid

Run "public-service announcement" (PSA) ads in the control group while the treatment group sees real ads. Removes seasonal noise that geo-holdouts can't. Best for:

  • Awareness or upper-funnel channels where turning ads fully off would lose long-tail conversions
  • Brands using Meta's built-in lift test (Attribution.ai consumes the Meta Lift study output directly when you connect that surface).

3. Audience holdout (1P)

Hold out 5–10% of your owned audience (email, SMS, retargeting) from the targeted campaign and compare conversion rates. Best for:

  • Klaviyo flows
  • Retargeting campaigns
  • Lifecycle automation
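Stable holdout membership usually comes from a deterministic hash, so assignment survives list refreshes and repeat sends. A minimal sketch (our illustration, not Attribution.ai's implementation):

```python
import hashlib

def in_holdout(customer_id: str, test_name: str, pct: float = 0.10) -> bool:
    """Deterministically assign ~pct of the audience to the holdout.
    Hashing (test name + id) keeps assignment stable across sends and
    independent across concurrent tests."""
    digest = hashlib.sha256(f"{test_name}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # roughly uniform on [0, 1]
    return bucket < pct

# Example: build the send list with the holdout excluded.
audience = ["cust_001", "cust_002", "cust_003"]
send_list = [c for c in audience if not in_holdout(c, "klaviyo_flow_q3")]
```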

Setting up a test

  1. Navigate to Dashboard → Incrementality → Create Test.
  2. Pick a test type, channel, and primary KPI (usually revenue).
  3. Choose a test window. Minimum: 14 days; recommended: 28–42 days.
  4. Attribution.ai matches treatment and control on revenue history, seasonality, and channel exposure. Review the matching report before launch — a poor match invalidates the result. (A rough DIY version of the pre-period check is sketched after these steps.)
  5. Confirm the spend-off plan (geo holdouts) or audience-exclusion list (1P holdouts), then schedule the test. The platform will automatically gate spend through the connected ad platforms when a test is "running".
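For step 4, you can sanity-check the pre-period parallel-trends condition yourself. A rough sketch, simplified to a linear-trend comparison (the platform's matching report is the authoritative check):

```python
import numpy as np

def parallel_trends_ok(treat: np.ndarray, ctrl: np.ndarray,
                       tolerance: float = 0.05) -> bool:
    """Rough pre-period check: fit a linear trend to each group's
    weekly revenue and require the slopes, expressed as a share of
    mean revenue, to differ by less than `tolerance`."""
    weeks = np.arange(len(treat))
    slope_t = np.polyfit(weeks, treat, 1)[0] / treat.mean()
    slope_c = np.polyfit(weeks, ctrl, 1)[0] / ctrl.mean()
    return abs(slope_t - slope_c) < tolerance
```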

Reading the results

Each test report shows:

  • Lift point estimate — the percentage difference between treatment and control.
  • Confidence interval (90% or 95% CI) — how tight the estimate is. A lift of "+12% [–3%, +27%]" is not decision-ready; a lift of "+12% [+8%, +16%]" is.
  • iCAC — incremental customer acquisition cost (true CAC, not platform-reported).
  • iROAS — incremental return on ad spend.
  • p-value — only useful as a tie-breaker; prefer the CI.
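The spend-side metrics relate arithmetically as follows (made-up numbers; the CI itself comes from geo-level variance, which these aggregates alone can't reproduce):

```python
# Simplified arithmetic behind a test report (illustrative numbers).
treatment_revenue = 1_150_000   # test window, treatment geos
control_revenue = 1_000_000     # test window, control geos (scaled)
test_spend = 60_000             # channel spend in treatment during the window
incremental_customers = 400     # new customers attributable to the test

incremental_revenue = treatment_revenue - control_revenue
lift = incremental_revenue / control_revenue    # +15.0%
iroas = incremental_revenue / test_spend        # 2.5x
icac = test_spend / incremental_customers       # $150.00

print(f"Lift:  {lift:+.1%}")
print(f"iROAS: {iroas:.1f}x")
print(f"iCAC:  ${icac:,.2f}")
```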

A test is decision-ready when:

  • The CI excludes 0 at 90% confidence
  • iCAC is within ±25% of platform-reported CAC (otherwise the platform is mis-attributing badly)
  • The matching report's pre-period parallel-trends test passed
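Those three gates are mechanical enough to encode if you post-process results outside the dashboard; a minimal sketch with the thresholds from this page (the function and its signature are ours, not a platform API):

```python
def decision_ready(ci_low: float, ci_high: float,
                   icac: float, platform_cac: float,
                   parallel_trends_passed: bool) -> bool:
    """Apply the three decision-ready gates. ci_low/ci_high are the
    bounds of the 90% CI on lift; iCAC and platform CAC in dollars."""
    ci_excludes_zero = ci_low > 0 or ci_high < 0   # works for negative lift too
    cac_agrees = abs(icac - platform_cac) / platform_cac <= 0.25
    return ci_excludes_zero and cac_agrees and parallel_trends_passed

# Example: +12% lift with a [+8%, +16%] 90% CI, iCAC $150 vs. $130 reported.
print(decision_ready(0.08, 0.16, 150.0, 130.0, True))   # True
```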

Common pitfalls

  • Underpowered tests — small spend, short window, noisy KPI. The dashboard will warn you in the matching report when the MDE (minimum detectable effect) exceeds 30%; a back-of-envelope MDE calculation is sketched after this list.
  • Ad platform overrides — automated bidding (notably Meta's) can leak spend into held-out geos. We auto-detect this by comparing scheduled vs. delivered spend; the flag appears as "Spillover risk" on the test dashboard.
  • Holiday windows — never run a 28-day test through Black Friday. Either shorten the window or shift it past the holiday.
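For the underpowered-tests pitfall, a back-of-envelope MDE estimate shows why short windows fail. This simplification treats each week as one observation per group; the platform's MDE accounts for geo-level matching and is usually tighter:

```python
from math import sqrt
from statistics import NormalDist

def mde(weekly_revenue_cv: float, n_weeks: int,
        alpha: float = 0.10, power: float = 0.80) -> float:
    """Minimum detectable effect (as a fraction of baseline revenue)
    for a two-group test. weekly_revenue_cv is the std dev of weekly
    revenue divided by its mean, measured in the pre-period."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.645 at 90% confidence
    z_beta = NormalDist().inv_cdf(power)            # 0.842 at 80% power
    # Standard error of the treatment-control gap in group means,
    # relative to baseline: sqrt(2) * cv / sqrt(n_weeks).
    return (z_alpha + z_beta) * sqrt(2) * weekly_revenue_cv / sqrt(n_weeks)

print(f"{mde(0.30, 2):.0%}")   # ~75%: a noisy KPI over 2 weeks is hopeless
print(f"{mde(0.30, 6):.0%}")   # ~43%: even 6 weeks stays above the 30% bar
```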

How incrementality results feed the rest of the platform

When a test becomes decision-ready, Attribution.ai automatically:

  • Updates the channel's trust score in Dashboard → Performance.
  • Recalibrates the MMM's prior for that channel (applied at the next Sunday retrain).
  • Surfaces the test in the Measurement Confidence Layer summary on Dashboard → Home.
