Best Practices for A/B Testing with Abe CRO

Follow these guidelines to get the most accurate and actionable results from your tests.

Test Planning

Start with Clear Hypotheses

Before creating a test, define what you're trying to learn:

  • What change are you testing?
  • Why do you think it will improve performance?
  • What metric will indicate success?

Test One Thing at a Time

While you can test multiple templates in one test, avoid bundling too many changes at once: when several things change simultaneously, it is hard to tell which change drove the results.

Plan Your Test Calendar

Use the roadmap/timeline view to:

  • Avoid overlapping tests on the same templates
  • Plan tests around seasonal events
  • Coordinate with marketing campaigns
  • Ensure adequate time between tests

Test Configuration

Traffic Split

For most tests, a 50/50 split is recommended:

  • Provides balanced sample sizes
  • Reaches statistical significance faster
  • Limits how long a poorly performing variant stays live, since the test finishes sooner

Consider a smaller variant split (e.g., 20/80) if you're testing a major change and want to limit how much traffic sees it. Keep in mind that the smaller arm gathers data more slowly, so the test will take longer to reach a conclusion, as illustrated below.
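
As a rough illustration of that trade-off (all numbers below are hypothetical), the smaller arm is the bottleneck: it only collects data at the fraction of traffic it receives, so an uneven split stretches the time needed to reach a given per-variant sample size.

```python
# Hypothetical illustration: how an uneven split slows the smaller arm.
needed_per_arm = 6000      # visitors each variant needs for a reliable read (assumed)
daily_visitors = 1000      # test-eligible visitors per day (assumed)

for variant_share in (0.50, 0.20):
    # The variant arm fills at variant_share of total traffic and is the bottleneck.
    days = needed_per_arm / (daily_visitors * variant_share)
    print(f"{int(variant_share * 100)}% variant split -> ~{days:.0f} days")

# 50% variant split -> ~12 days
# 20% variant split -> ~30 days
```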

Test Duration

Let tests run long enough to gather meaningful data:

  • Minimum: 1-2 weeks (depending on traffic)
  • Recommended: 2-4 weeks for most stores
  • Consider: Full business cycles (weekly patterns, monthly patterns)

Don't end tests too early—you need enough data for statistical confidence.
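
To get a feel for what "enough data" means, you can run a rough sample-size estimate with a standard two-proportion power calculation. The sketch below is a simplified illustration with made-up inputs (baseline conversion rate, detectable lift, and daily traffic are all assumptions), not how Abe CRO computes significance internally:

```python
import math

# Back-of-the-envelope duration estimate (95% confidence, 80% power).
baseline_cr = 0.03        # current conversion rate (assumed: 3%)
expected_lift = 0.20      # smallest relative lift worth detecting (assumed: 20%)
daily_visitors = 1500     # test-eligible visitors per day (assumed)

variant_cr = baseline_cr * (1 + expected_lift)
z_alpha, z_beta = 1.96, 0.84            # two-sided 5% alpha, 80% power
p_bar = (baseline_cr + variant_cr) / 2
effect = variant_cr - baseline_cr

# Visitors needed in EACH arm (normal approximation).
n_per_arm = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
              + z_beta * math.sqrt(baseline_cr * (1 - baseline_cr)
                                   + variant_cr * (1 - variant_cr))) ** 2
             / effect ** 2)

days = 2 * n_per_arm / daily_visitors   # 50/50 split fills both arms in parallel
print(f"~{n_per_arm:,.0f} visitors per arm, roughly {days:.0f} days of traffic")
# ~13,898 visitors per arm, roughly 19 days of traffic
```

With these assumptions the estimate lands at roughly three weeks, in line with the 2-4 week recommendation above; smaller lifts or lower traffic push the required duration up quickly.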

Targeting

Use targeting options strategically:

  • Device Type: Test mobile vs desktop experiences separately if they differ significantly
  • Login Status: Test logged-in vs logged-out experiences if they differ
  • Be Careful: More targeting means smaller sample sizes—ensure you have enough traffic

During the Test

Don't Make Changes Mid-Test

Once a test is active, avoid making changes to:

  • The tested templates or themes
  • Product information (if testing product pages)
  • Other elements that could affect test results

Monitor Regularly

Check your test results regularly but avoid making decisions too early:

  • Daily monitoring is fine for awareness
  • Wait for statistical significance before drawing conclusions
  • Look for trends, not day-to-day fluctuations

Let Tests Complete

Unless there's a critical issue, let tests run to completion:

  • Early termination can skew results
  • Full business cycles provide more accurate data
  • Patience leads to better decisions

Analyzing Results

Focus on Revenue Metrics

Prioritize metrics that impact your bottom line:

  • Revenue Per Visitor: The most important metric
  • Conversion Rate: Overall effectiveness
  • Average Order Value: Transaction value

Don't get distracted by vanity metrics that don't translate to revenue.
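
If it helps to see how these three metrics relate, here is a small, hypothetical example (the figures are made up; Abe CRO reports these metrics for you):

```python
# Hypothetical variant results, for illustration only.
visitors = 10_000
orders = 300
revenue = 21_000.00                          # total revenue from those visitors

conversion_rate = orders / visitors          # 0.03  -> 3.0%
average_order_value = revenue / orders       # 70.00 -> $70.00 per order
revenue_per_visitor = revenue / visitors     # 2.10  -> $2.10 per visitor

# Revenue Per Visitor combines both effects: a variant can win on conversion
# rate yet lose overall if its average order value drops enough.
print(f"CR {conversion_rate:.1%}, AOV ${average_order_value:.2f}, "
      f"RPV ${revenue_per_visitor:.2f}")
```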

Look for Statistical Significance

Pay attention to significance indicators:

  • Some metrics include significance testing
  • Significant differences are more reliable
  • Non-significant differences may be due to chance (one quick way to check is sketched below)
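
If you want to sanity-check a conversion-rate difference yourself, a standard two-proportion z-test is one common approach. The sketch below is a generic textbook check with hypothetical counts, not a description of how Abe CRO calculates its significance indicators:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: control converted 300/10,000, variant 360/10,000.
p = two_proportion_p_value(conv_a=300, n_a=10_000, conv_b=360, n_b=10_000)
print(f"p-value: {p:.3f}")   # 0.018 -> below 0.05 is conventionally "significant"
```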

Consider Sample Size

Ensure you have enough data:

  • More visitors = more reliable results (illustrated in the sketch below)
  • Low-traffic stores may need longer test periods
  • High-traffic stores can get results faster
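
One way to see why sample size matters: the uncertainty around a measured conversion rate shrinks with the square root of the number of visitors. A quick sketch with hypothetical numbers, assuming a ~3% conversion rate:

```python
from math import sqrt

conversion_rate = 0.03
for visitors in (1_000, 5_000, 25_000):
    # Approximate 95% margin of error for a measured proportion.
    margin = 1.96 * sqrt(conversion_rate * (1 - conversion_rate) / visitors)
    print(f"{visitors:>6,} visitors: {conversion_rate:.1%} ± {margin:.2%}")

#  1,000 visitors: 3.0% ± 1.06%
#  5,000 visitors: 3.0% ± 0.47%
# 25,000 visitors: 3.0% ± 0.21%
```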

After the Test

Document Results

Export and save your test results:

  • Use PDF or Excel export features
  • Share with your team
  • Keep records for future reference

Implement Winners

If a variant wins:

  • Make the winning variant your new default
  • Update your theme/templates accordingly
  • Consider testing further improvements

Learn from Losers

Even losing tests provide value:

  • Understand why the variant didn't perform
  • Use insights to inform future tests
  • Don't repeat the same mistakes

Common Mistakes to Avoid

Timing Mistakes

  • Ending tests too early: Wait for statistical significance and adequate sample sizes
  • Running tests during holidays only: Results may not reflect normal behavior
  • Ignoring business cycles: Account for weekly/monthly patterns in your data

Test Design Mistakes

  • Testing too many things: Focus on one significant change for clear attribution
  • Testing insignificant changes: Test changes that could actually impact conversions
  • Overlapping tests: Avoid running multiple tests on the same templates simultaneously
  • Poor variant quality: Ensure variants are fully functional and polished

Analysis Mistakes

  • Ignoring revenue metrics: Focus on Revenue Per Visitor and Conversion Rate
  • Chasing vanity metrics: Don't optimize for metrics that don't impact revenue
  • Ignoring significance: Pay attention to statistical significance indicators
  • Making decisions too quickly: Let data accumulate before drawing conclusions

Operational Mistakes

  • Making changes mid-test: Keep tests consistent throughout their duration
  • Over-targeting: Ensure adequate sample sizes when using targeting filters
  • Not documenting results: Export and save test results for future reference
  • Forgetting to implement winners: Actually make the winning variant your default

Building a Testing Culture

Successful CRO programs require ongoing commitment:

  • Regular Testing: Make testing a regular part of your optimization process
  • Document Everything: Keep records of what you tested and why
  • Learn from Failures: Even losing tests provide valuable insights
  • Share Results: Communicate findings with your team
  • Iterate: Use test results to inform future tests