
A/B Testing: A Complete Guide to Experimentation

7 min read
Updated 2026-02-01
Guide

A/B testing compares two versions to determine which performs better. It's the gold standard for data-driven decisions about products, marketing, and UX.

Key Takeaways

  • A/B tests compare control (A) vs variant (B) with random assignment
  • Statistical significance (p < 0.05) means the observed difference is unlikely to be due to chance alone
  • Sample size depends on baseline rate and minimum detectable effect
  • Run tests at least 1-2 weeks to capture full business cycles
  • Avoid peeking and stopping early when results look good

A/B Testing Basics

Process: form a hypothesis, create two versions, randomly assign users, measure the outcome, test for statistical significance, and implement the winner. Change only one thing at a time.
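
As a sketch of the final "test for significance" step, the snippet below runs a two-proportion z-test with statsmodels; all counts are hypothetical illustration numbers, not benchmarks.

```python
# Minimal significance check for a finished A/B test using a
# two-proportion z-test. All counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [520, 580]      # conversions in A (control) and B (variant)
visitors = [10_000, 10_000]   # users randomly assigned to each version

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
print("Significant at 5%" if p_value < 0.05 else "Not significant")
```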

Determining Sample Size

Sample size depends on: the baseline conversion rate (lower rates need more users), the minimum detectable effect (smaller effects need more), statistical power (80% is typical), and the significance level (95% is typical). Rule of thumb: detecting a 10% relative change in a 5% conversion rate needs roughly 31,000 users per variant at 80% power and 5% significance.
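
That rule of thumb can be checked with a standard power calculation. Here is a minimal sketch using statsmodels, assuming the parameters above (5% baseline, 10% relative effect, 80% power, 5% significance):

```python
# Pre-test sample size calculation for a two-proportion test.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05                          # 5% baseline conversion rate
variant = baseline * 1.10                # 10% relative lift -> 5.5%

effect_size = proportion_effectsize(baseline, variant)  # Cohen's h
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    power=0.80,                          # 80% power
    alpha=0.05,                          # 5% significance level
    alternative="two-sided",
)
print(f"~{n_per_variant:,.0f} users per variant")  # roughly 31,000
```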

Common Pitfalls

Peeking and early stopping (inflate the false-positive rate), running too many tests (at p < 0.05, 1 in 20 null tests will look significant by chance), ignoring practical significance (a statistically significant lift can be too small to matter), and contamination (users exposed to both variants).
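
To see why peeking tops this list, consider a toy simulation (all traffic numbers hypothetical) where both arms share the same 5% conversion rate, so every "significant" result is a false positive. A single pre-planned analysis keeps false positives near the nominal 5%; re-checking after every batch of users inflates them several-fold:

```python
# Simulation: both arms have the SAME conversion rate, so any
# "significant" result is a false positive.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)
n_experiments, n_total, check_every, p = 1_000, 20_000, 1_000, 0.05

fp_once, fp_peeking = 0, 0
for _ in range(n_experiments):
    a = rng.random(n_total) < p   # control conversions (no true effect)
    b = rng.random(n_total) < p   # variant conversions (no true effect)
    # Peeking: declare a winner the first time p < 0.05 at any checkpoint
    for n in range(check_every, n_total + 1, check_every):
        _, pval = proportions_ztest([a[:n].sum(), b[:n].sum()], [n, n])
        if pval < 0.05:
            fp_peeking += 1
            break
    # Single pre-planned analysis at the full sample size
    _, pval = proportions_ztest([a.sum(), b.sum()], [n_total, n_total])
    if pval < 0.05:
        fp_once += 1

print(f"Single look:  {fp_once / n_experiments:.1%} false positives")
print(f"With peeking: {fp_peeking / n_experiments:.1%} false positives")
```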


Frequently Asked Questions

How long should I run a test?
At minimum 1-2 weeks to capture business cycle effects, and continue until the pre-calculated sample size is reached.

Can I test more than two variants at once?
Yes, A/B/n tests work, but they need proportionally more traffic and their interpretation is more complex.
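
Part of that extra complexity: in an A/B/n test, each variant-vs-control comparison adds another chance of a false positive. One common remedy, sketched here with made-up p-values, is to adjust the pairwise p-values with a Holm (or Bonferroni) correction:

```python
# Hypothetical A/B/n follow-up: adjust pairwise p-values (each
# variant vs. control) for multiple comparisons.
from statsmodels.stats.multitest import multipletests

raw_p = [0.04, 0.20, 0.011]   # made-up p-values: B, C, D vs. control A
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for name, p_adj, sig in zip("BCD", adj_p, reject):
    print(f"Variant {name}: adjusted p = {p_adj:.3f}, significant: {sig}")
```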
