
📊 Understanding Metrics

Key Metrics in Abe CRO

Abe CRO focuses on metrics that actually matter to your business. Here's what each metric means and why it's important.

Core Metrics

Visitors

The number of unique visitors who viewed pages included in your test. The count is based on unique visitor IDs rather than raw page views, so you're measuring real people, not bots or repeated refreshes.

Add to Carts

The number of times visitors added items to their cart while viewing test pages. This metric is tracked via the embed block and shows engagement with your products.

Checkouts Started

The number of times visitors began the checkout process. This helps you understand how many people move from browsing to purchasing.

Orders

The total number of completed orders from visitors in your test. This is tracked via the web pixel on checkout completion.

Revenue

The total revenue generated from orders placed by visitors in your test. This is the metric that directly impacts your bottom line.

Computed Ratio Metrics

Revenue Per Visitor ($ / Visitor)

Total revenue divided by the number of visitors. This is one of the most important metrics—it shows how much revenue each visitor generates on average.

Formula: Revenue ÷ Visitors

Conversion Rate (CVR %)

The percentage of visitors who completed a purchase. This is a standard e-commerce metric that shows overall effectiveness.

Formula: (Orders ÷ Visitors) × 100

Average Order Value (AOV)

The average value of each order. Higher AOV means visitors are spending more per transaction.

Formula: Revenue ÷ Orders

Note: This metric includes significance testing to help you determine if differences are statistically meaningful.

Add to Cart Rate (ATC %)

The percentage of visitors who added items to their cart. This shows product interest and engagement.

Formula: (Add to Carts ÷ Visitors) × 100

Checkout / ATC %

The percentage of add-to-cart events that resulted in checkout starts. This shows how effective your cart experience is.

Formula: (Checkouts Started ÷ Add to Carts) × 100

Checkouts / Visitor %

The percentage of visitors who started checkout. This shows overall checkout engagement.

Formula: (Checkouts Started ÷ Visitors) × 100

Checkout Completion %

The percentage of checkout starts that resulted in completed orders. This shows how effective your checkout process is.

Formula: (Orders ÷ Checkouts Started) × 100
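
For reference, here is a small sketch of how these ratio metrics can be computed from the raw counts described above. The TestTotals interface and computeRatioMetrics function are illustrative names for this example, not part of Abe CRO's API.

    // Illustrative only: raw totals for one arm (control or variant) of a test.
    interface TestTotals {
      visitors: number;
      addToCarts: number;
      checkoutsStarted: number;
      orders: number;
      revenue: number; // in your store's currency
    }

    // Compute the ratio metrics described above from the raw counts.
    // Division by zero is guarded so an empty arm reports 0 rather than NaN.
    function computeRatioMetrics(t: TestTotals) {
      const ratio = (num: number, den: number) => (den > 0 ? num / den : 0);
      return {
        revenuePerVisitor: ratio(t.revenue, t.visitors),                   // Revenue ÷ Visitors
        conversionRate: ratio(t.orders, t.visitors) * 100,                 // (Orders ÷ Visitors) × 100
        averageOrderValue: ratio(t.revenue, t.orders),                     // Revenue ÷ Orders
        addToCartRate: ratio(t.addToCarts, t.visitors) * 100,              // (Add to Carts ÷ Visitors) × 100
        checkoutPerAtc: ratio(t.checkoutsStarted, t.addToCarts) * 100,     // (Checkouts Started ÷ Add to Carts) × 100
        checkoutsPerVisitor: ratio(t.checkoutsStarted, t.visitors) * 100,  // (Checkouts Started ÷ Visitors) × 100
        checkoutCompletion: ratio(t.orders, t.checkoutsStarted) * 100,     // (Orders ÷ Checkouts Started) × 100
      };
    }

    // Example: 10,000 visitors, 1,200 add to carts, 600 checkouts started,
    // 300 orders, and $24,000 revenue works out to $2.40 per visitor, 3% CVR,
    // $80 AOV, 12% ATC, 50% Checkout/ATC, 6% Checkouts/Visitor, 50% completion.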

How Metrics Are Attributed

Understanding attribution is crucial for interpreting your test results correctly.

Multi-Test Attribution

All metrics (Add to Carts, Orders, Revenue) are attributed to all active test plans that a visitor is enrolled in. This means:

  • If a visitor is in multiple tests, their conversions count toward all of them
  • This is intentional—it reflects the real-world scenario where multiple tests affect the same visitor
  • Each test still shows accurate performance relative to its control/variant split

Store Isolation

Attribution is strictly isolated to the specific store where the event occurred:

  • If you manage multiple stores, each store's data is completely separate
  • Tests on Store A don't affect metrics on Store B
  • This ensures accurate measurement for each store's optimization program
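
To make the two attribution rules above concrete, here is a minimal sketch of how a single conversion event might be attributed under multi-test attribution with store isolation. The types and the attribute function are illustrative assumptions for this example, not Abe CRO's actual implementation.

    interface ConversionEvent {
      storeId: string;
      visitorId: string;
      type: 'add_to_cart' | 'checkout_started' | 'order';
      revenue?: number;
    }

    interface Enrollment {
      testId: string;
      storeId: string;
      visitorId: string;
      assignment: 'control' | 'variant';
      active: boolean;
    }

    // Attribute one event: it counts toward every active test the visitor is
    // enrolled in (multi-test attribution), but only within the same store
    // (store isolation).
    function attribute(event: ConversionEvent, enrollments: Enrollment[]): Enrollment[] {
      return enrollments.filter(
        (e) =>
          e.active &&
          e.storeId === event.storeId &&   // never crosses store boundaries
          e.visitorId === event.visitorId  // only tests this visitor is in
      );
    }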

Visitor Consistency

Each visitor maintains their assignment throughout their session:

  • If a visitor is assigned to "variant" in a test, they see the variant on all relevant pages
  • Assignment is stored in cookies that persist for 365 days
  • Returning visitors maintain their original assignment
  • This ensures consistent experiences and accurate data
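
The sticky-assignment behavior described above can be pictured with a short browser-side sketch. The cookie name and the 50/50 split here are hypothetical details for illustration; Abe CRO's actual cookie keys and bucketing logic may differ.

    // Illustrative sketch of sticky assignment persisted in a cookie.
    // The cookie name "abe_test_<testId>" is a hypothetical example only.
    function getAssignment(testId: string): 'control' | 'variant' {
      const name = `abe_test_${testId}`;
      const existing = document.cookie
        .split('; ')
        .find((c) => c.startsWith(`${name}=`))
        ?.split('=')[1];

      // Returning visitors keep their original assignment for the cookie's lifetime.
      if (existing === 'control' || existing === 'variant') return existing;

      // New visitors are assigned once (a 50/50 split in this sketch), then the
      // choice is persisted for 365 days so every later page view sees the same experience.
      const assignment = Math.random() < 0.5 ? 'control' : 'variant';
      const maxAge = 365 * 24 * 60 * 60; // 365 days in seconds
      document.cookie = `${name}=${assignment}; max-age=${maxAge}; path=/`;
      return assignment;
    }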

Default Metrics

When you create a new test, the following metrics are shown by default:

  • Visitors
  • Add to Carts
  • Orders
  • Revenue

You can customize which metrics are visible in your test dashboard.

Default Ratio Metrics

New tests also show these ratio metrics by default:

  • $ / Visitor (Revenue Per Visitor)
  • Product Views (Unique) / Visitor
  • ATC % (Add to Cart Rate)
  • CVR % (Conversion Rate)

Understanding Statistical Significance

Some metrics (like Average Order Value) include significance testing. This helps you determine if differences between control and variant are statistically meaningful, not just due to chance.

What is Significance?

Statistical significance tells you whether the difference you're seeing is likely real or just random variation:

  • Significant: The difference is likely real and not due to chance
  • Not Significant: The difference could be due to random variation
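
As an illustration, a standard two-proportion z-test is one common way to check whether a difference in a rate metric such as CVR is significant. The sketch below shows the idea only; it is not necessarily the exact test Abe CRO runs.

    // Illustrative two-proportion z-test for a rate metric such as CVR.
    // A textbook method shown to explain the concept, not Abe CRO's exact math.
    function isSignificant(
      controlOrders: number, controlVisitors: number,
      variantOrders: number, variantVisitors: number,
      zThreshold = 1.96 // roughly 95% confidence, two-sided
    ): boolean {
      const p1 = controlOrders / controlVisitors;
      const p2 = variantOrders / variantVisitors;
      const pooled = (controlOrders + variantOrders) / (controlVisitors + variantVisitors);
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / controlVisitors + 1 / variantVisitors));
      const z = (p2 - p1) / se;
      return Math.abs(z) >= zThreshold;
    }

    // Example: 300/10,000 orders (3.0%) vs 360/10,000 (3.6%) gives z ≈ 2.4,
    // which clears the 1.96 threshold, so the lift would be flagged significant.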

Why It Matters

Without significance testing, you might:

  • Make decisions based on random fluctuations
  • Implement changes that don't actually improve performance
  • Waste time optimizing based on noise, not signal

Sample Size Impact

Significance depends on sample size:

  • Larger sample sizes = easier to detect significant differences
  • Smaller sample sizes = need larger differences to be significant
  • This is why we recommend letting tests run for adequate time
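
If you want a rough feel for how sample size and detectable difference trade off, the standard normal-approximation formula below is a common planning heuristic. It is a generic estimate under assumed confidence and power levels, not a figure produced by Abe CRO.

    // Rough per-arm sample size needed to detect an absolute lift in a rate
    // metric, using the normal approximation (95% confidence, 80% power).
    // A generic planning heuristic only.
    function sampleSizePerArm(
      baselineRate: number,  // e.g. 0.03 for a 3% CVR
      absoluteLift: number,  // e.g. 0.006 for a 0.6 percentage-point lift
      zAlpha = 1.96,         // two-sided 95% confidence
      zBeta = 0.84           // 80% power
    ): number {
      const variance = 2 * baselineRate * (1 - baselineRate);
      return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / absoluteLift ** 2);
    }

    // Example: detecting a lift from 3.0% to 3.6% CVR needs roughly
    // sampleSizePerArm(0.03, 0.006) ≈ 12,700 visitors per arm.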

Interpreting Results

When reviewing test results:

  • Look for metrics marked as significant
  • Consider both the difference and the significance
  • Don't ignore non-significant results—they may become significant with more data
  • Use significance as a guide, not the only factor in decision-making

Exporting Data

You can export your test results to PDF or Excel for sharing with your team or stakeholders. This includes all metrics and visualizations.