How to Read and Interpret Attribution Data

docs@attribution.ai · Reviewed 2026-02-22 · Status: published


Understanding your Attribution.ai dashboard data is essential for making informed marketing budget decisions. This guide explains every key metric, how to read your reports, how to interpret differences between pixel and survey data, what confidence scores mean, and how to turn attribution data into action.

Key Metrics

ROAS (Return on Ad Spend)

ROAS measures how much revenue you earn for every dollar spent on advertising. It is calculated as:

ROAS = Attributed Revenue / Ad Spend

For example, a ROAS of 4.0 means you earn $4 in revenue for every $1 spent on ads.
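As a minimal sketch with made-up numbers (not a real Attribution.ai API call), the calculation is a simple division:

```python
def roas(attributed_revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue earned per dollar of ad spend."""
    if ad_spend == 0:
        raise ValueError("ad spend must be non-zero")
    return attributed_revenue / ad_spend

# $12,000 in attributed revenue on $3,000 of spend -> ROAS of 4.0
print(roas(12_000, 3_000))  # 4.0
```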

Attribution.ai calculates ROAS at multiple levels:

  • Overall ROAS: Across your entire marketing mix
  • Channel ROAS: For each ad platform (Facebook, Google, TikTok, etc.)
  • Campaign ROAS: For individual campaigns within each platform
  • Ad-level ROAS: For specific ads when click ID matching is available

How to interpret ROAS:

  • ROAS above 3.0: Generally considered healthy for most Shopify stores. You are generating strong returns.
  • ROAS between 1.5 and 3.0: May be acceptable depending on your profit margins and customer lifetime value. If your gross margin is 70%, a ROAS of 2.0 still generates profit.
  • ROAS between 1.0 and 1.5: You are spending almost as much on ads as you earn in revenue. Profitable only if you have very high margins or strong repeat purchase rates.
  • ROAS below 1.0: You are spending more on ads than the revenue they generate. This requires immediate investigation unless you are intentionally investing in long-term customer acquisition.

Important ROAS caveats:

  • ROAS values differ depending on which attribution model you use. Last-touch ROAS for Google may be higher than linear ROAS because last-touch gives Google full credit for closing sales.
  • Compare ROAS across models to get a more complete picture. If a channel has high ROAS under every model, it is consistently valuable.
  • ROAS does not account for profit margins. A channel with lower ROAS but higher AOV may still be more profitable.

AOV (Average Order Value)

AOV shows the average dollar amount per order attributed to a specific channel:

AOV = Total Attributed Revenue / Number of Attributed Orders

How to interpret AOV:

  • Compare AOV across channels to identify which sources drive higher-value purchases.
  • A channel with lower volume but significantly higher AOV may be more valuable than a high-volume, low-AOV channel.
  • AOV differences across channels often reflect different customer segments. For example, Google Shopping may drive higher AOV because customers are searching for specific products, while social media ads may drive lower-AOV impulse purchases.

CPA (Cost Per Acquisition)

CPA measures how much you spend to acquire one customer through a given channel:

CPA = Ad Spend / Number of Attributed Conversions

How to interpret CPA:

  • Lower CPA is generally better, but always consider it alongside customer lifetime value (LTV).
  • A channel with high CPA but strong repeat purchase rates may be more profitable long-term than a low-CPA channel that attracts one-time buyers.
  • CPA varies significantly by channel. Brand search campaigns on Google typically have the lowest CPA, while prospecting campaigns on social platforms have higher CPA.

Conversion Rate

The percentage of attributed visitors from a channel who complete a purchase:

Conversion Rate = (Conversions / Attributed Sessions) × 100

How to interpret conversion rate:

  • Higher conversion rates indicate that a channel is sending purchase-ready traffic.
  • Low conversion rates on a channel do not necessarily mean the channel is ineffective -- it may be a top-of-funnel awareness channel where the goal is introducing new customers, not immediate conversion.
  • Compare conversion rates across channels to identify which sources drive the most purchase-ready traffic and which are better suited for awareness.
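The three formulas above (AOV, CPA, and conversion rate) can be sketched together; the channel figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
def aov(attributed_revenue: float, attributed_orders: int) -> float:
    """Average order value: revenue per attributed order."""
    return attributed_revenue / attributed_orders

def cpa(ad_spend: float, attributed_conversions: int) -> float:
    """Cost per acquisition: spend per attributed conversion."""
    return ad_spend / attributed_conversions

def conversion_rate(conversions: int, attributed_sessions: int) -> float:
    """Percentage of attributed sessions that end in a purchase."""
    return conversions / attributed_sessions * 100

# Hypothetical channel: $9,000 revenue, 120 orders, $2,400 spend, 4,000 sessions
print(aov(9_000, 120))              # 75.0
print(cpa(2_400, 120))              # 20.0
print(conversion_rate(120, 4_000))  # 3.0
```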

Attribution Credit

Attribution credit is the fractional share of a conversion assigned to each channel by the attribution model. For any given order, the total credits across all channels sum to 1.0 (100%).

For example, under the linear model, an order with three touchpoints (Facebook ad, Google search, email) assigns 0.33 credit to each channel. Under position-based, it would assign 0.40 to Facebook (first touch), 0.40 to email (last touch), and 0.20 to Google (middle).

When viewing channel-level reports, the "Attributed Revenue" for each channel is the sum of its fractional credits multiplied by the order values. This means the sum of attributed revenue across all channels equals your total revenue -- there is no over- or under-counting.
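The linear and position-based splits described above can be sketched as follows. This is an illustrative implementation, not Attribution.ai's internal code; it assumes touchpoints are ordered first-to-last and uses the 40/20/40 position-based weights from the example:

```python
def linear_credit(touchpoints: list[str]) -> dict[str, float]:
    """Split credit equally across touchpoints; credits sum to 1.0."""
    share = 1.0 / len(touchpoints)
    credit: dict[str, float] = {}
    for tp in touchpoints:
        credit[tp] = credit.get(tp, 0.0) + share
    return credit

def position_based_credit(touchpoints: list[str]) -> dict[str, float]:
    """40% to first touch, 40% to last, 20% split among the middle."""
    n = len(touchpoints)
    if n <= 2:
        # No middle touches: split evenly so credits still sum to 1.0
        return linear_credit(touchpoints)
    credit: dict[str, float] = {}
    middle_share = 0.20 / (n - 2)
    for i, tp in enumerate(touchpoints):
        share = 0.40 if i in (0, n - 1) else middle_share
        credit[tp] = credit.get(tp, 0.0) + share
    return credit

journey = ["facebook", "google", "email"]
print(linear_credit(journey))          # each channel gets ~0.333
print(position_based_credit(journey))  # facebook 0.4, google 0.2, email 0.4
```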

Reading the Attribution Dashboard

Overview Page

The Overview dashboard provides a high-level summary:

  • Total Attributed Revenue: Revenue for the selected date range with attribution data
  • Total Orders: Number of orders processed
  • Top Channels: A ranked list of channels by attributed revenue
  • ROAS Trend: A time-series chart showing ROAS over the selected period
  • Order Usage: A progress bar showing your current order count relative to your plan limit

Attribution Tab

The Attribution tab is your primary analysis view:

  • Channel table: Shows all channels with their attributed revenue, order count, ROAS, AOV, CPA, and conversion rate
  • Model selector: Switch between attribution models (Last Touch, First Touch, Linear, Time Decay, Position-Based, Markov, Shapley) to see how credit shifts
  • Date range selector: Choose the time period to analyze
  • Chart view: Toggle between table and chart views for visual comparison

Channel Deep-Dives

Click any channel in the Attribution tab to see campaign-level breakdowns:

  • Individual campaign performance (spend, revenue, ROAS, CPA)
  • Ad set or ad group level data (when available from the platform)
  • Trend lines showing campaign performance over time
  • Click ID match rates (what percentage of conversions had a direct click ID link)

Model Comparison View

The model comparison view shows the same data through multiple attribution models side by side. This is one of the most valuable views in Attribution.ai because it reveals:

  • Channels that score high in First-Touch but low in Last-Touch: Strong discovery channels (top of funnel) that introduce new customers but do not close sales. Examples: organic social, podcast mentions, influencer content.
  • Channels that score high in Last-Touch but low in First-Touch: Strong closing channels (bottom of funnel) that convert existing prospects. Examples: branded Google search, retargeting ads, email campaigns.
  • Channels that score consistently across all models: Valuable at every stage of the funnel. These are your most reliable channels.
  • Channels where data-driven models (Markov, Shapley) disagree with rule-based models: This often reveals hidden value or overattribution that rule-based models miss. Investigate these discrepancies carefully.
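The first three patterns above amount to a simple heuristic: compare a channel's share of credit under First-Touch versus Last-Touch. A rough sketch (the 0.15 gap threshold is an assumption for illustration, not an Attribution.ai setting):

```python
def classify_channel(first_touch_share: float, last_touch_share: float,
                     gap: float = 0.15) -> str:
    """Guess a channel's funnel role from its credit share (0.0-1.0)
    under two rule-based models. `gap` is an illustrative threshold."""
    if first_touch_share - last_touch_share > gap:
        return "discovery (top of funnel)"
    if last_touch_share - first_touch_share > gap:
        return "closing (bottom of funnel)"
    return "consistent across funnel"

print(classify_channel(0.30, 0.08))  # discovery (top of funnel)
print(classify_channel(0.05, 0.35))  # closing (bottom of funnel)
print(classify_channel(0.20, 0.18))  # consistent across funnel
```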

Understanding Pixel vs. Survey Data

Attribution.ai provides two independent attribution signals. Understanding how they differ -- and when they disagree -- is key to making good decisions.

Pixel-Based Attribution

The pixel tracks digital touchpoints: ad clicks (via click IDs), UTM parameters, referral URLs, and page navigation. It is precise for trackable digital interactions but has blind spots:

  • Cannot track word-of-mouth, podcast mentions, or offline channels
  • Affected by ad blockers (15-30% of traffic may be untracked)
  • Cannot connect cross-device journeys (someone who discovers on mobile but buys on desktop)
  • Attributes organic TikTok/Instagram discovery as "direct" if no link was clicked

Survey-Based Attribution

The post-purchase survey captures self-reported data. It excels at channels the pixel misses but has its own biases:

  • Captures offline channels (word-of-mouth, podcasts, TV, etc.)
  • Not affected by ad blockers
  • Can capture the true initial discovery channel even for cross-device journeys
  • Subject to recall bias (customers may not accurately remember how they first heard about your brand)
  • Subject to recency bias (customers may report their most recent interaction, not the one that truly influenced them)
  • Completion rate is 30-60%, so not all orders have survey data

When Pixel and Survey Agree

When both signals point to the same channel (e.g., pixel shows a Facebook ad click and survey says "Social media > Facebook"), confidence in the attribution is high.

When Pixel and Survey Disagree

Disagreements are common and informative:

  • Pixel says "Google" but survey says "Friend/Family": The customer heard about you through word-of-mouth, then searched for your brand on Google. Google gets credit in pixel-based models, but the survey reveals the true catalyst was word-of-mouth.
  • Pixel says "Direct" but survey says "TikTok": The customer discovered your brand through organic TikTok content and typed your URL directly. Without the survey, this order would be attributed to "Direct/Unknown."
  • Pixel says "Facebook" but survey says "Google search": The customer may have forgotten about the Facebook ad they clicked and remembers searching for your product. This could indicate recall bias in the survey, or the Google search was genuinely more influential.

How to Use Both Signals

  1. Use Markov and Shapley as your primary decision views when available.
  2. Monitor the "Survey vs Pixel" comparison in the Surveys tab: Look for channels where the two signals consistently disagree -- these are your biggest attribution blind spots.
  3. Pay special attention to channels with high survey attribution but low pixel attribution: These are likely undervalued in pixel-only attribution tools.

Confidence Scores

Every attribution result in Attribution.ai includes a confidence score between 0 and 1. This score indicates how reliable the attribution is.

What Affects Confidence Scores

  • Number of touchpoints: More touchpoints provide more data, increasing confidence (up to a point).
  • Click ID presence: Orders with click IDs (gclid, fbclid, ttclid) have higher confidence because the click-to-conversion link is definitive.
  • Survey response availability: Orders with a completed post-purchase survey have higher confidence, especially when the survey agrees with pixel data.
  • Model type: Data-driven models (Markov, Shapley) generally have higher confidence scores than rule-based models (first-touch, last-touch) because they are validated against observed data patterns.
  • Data volume: Models become more confident as they process more data. Scores improve over time as your data accumulates.

Interpreting Confidence Scores

  • 0.80 and above: High confidence. The attribution is well-supported by multiple data signals.
  • 0.60 to 0.80: Moderate confidence. The attribution is reasonable but may be based on incomplete data (e.g., pixel only, no survey).
  • Below 0.60: Lower confidence. The order may have limited attribution data (e.g., no pixel session found, no survey response, short journey). The attribution is the system's best estimate given available information.

Using Confidence Scores in Decision-Making

When making budget allocation decisions, weight higher-confidence attributions more heavily. If a channel shows strong ROAS but most of its attributions have low confidence scores, treat the ROAS estimate with appropriate caution. Conversely, a channel with moderate ROAS but consistently high confidence scores provides a more reliable basis for investment decisions.
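One simple way to apply this weighting, sketched with made-up orders (the weighting scheme is an illustrative assumption, not the product's internal formula):

```python
def confidence_weighted_revenue(orders: list[tuple[float, float]]) -> float:
    """Sum attributed revenue with each order weighted by its
    confidence score. `orders` holds (revenue, confidence) pairs."""
    return sum(revenue * confidence for revenue, confidence in orders)

# Three hypothetical orders for one channel: (attributed revenue, confidence)
orders = [(100.0, 0.90), (250.0, 0.55), (80.0, 0.85)]
raw = sum(revenue for revenue, _ in orders)
weighted = confidence_weighted_revenue(orders)
print(raw, weighted)  # 430.0 295.5 -- low-confidence orders drag the estimate down
```

A large gap between the raw and confidence-weighted figures is itself a signal that the channel's ROAS estimate rests on shaky attribution data.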

Actionable Guidance

Weekly Review Process

  1. Check the Overview dashboard for any significant changes in total ROAS, top channels, or order volume.
  2. Review the Attribution tab under Markov and Shapley views. Look for channels where ROAS has improved or declined compared to the previous week.
  3. Compare two or three attribution models to identify any channels where credit shifts significantly. Investigate the reason.
  4. Check the Surveys tab for changes in survey-reported channels. Watch for emerging channels in the "Other" free-text responses.
  5. Use your AI assistant (via MCP) to ask targeted questions: "Which channel improved most this week?" or "What is my Facebook ROAS trend over the last 30 days?"

Common Actions Based on Data

  • Increase spend on channels with consistently high ROAS (above 3.0), high confidence, and room to scale.
  • Decrease spend on channels with declining ROAS, especially if the trend has persisted for 2+ weeks.
  • Investigate channels where pixel and survey data significantly disagree -- this often reveals your biggest optimization opportunities.
  • Expand into channels that show strong survey attribution but where you have limited paid investment.
  • Avoid making changes based on a single day's data. Daily fluctuations are normal. Weekly and monthly trends are more reliable for budget allocation.

Review Frequency Recommendations

  • Daily: Quick glance at Overview dashboard for anomalies.
  • Weekly: Full review of Attribution tab, model comparison, and survey data. This is where most optimization decisions should be made.
  • Monthly: Deep analysis of channel trends, consideration cycle data (survey question 3), and confidence score patterns. Adjust your attribution lookback window if your consideration cycle has changed.
