A/B Testing
A method of comparing two or more versions of a webpage or feature by randomly assigning visitors to different variants and measuring which performs better against a defined metric.
A/B testing (also called split testing) uses statistical analysis to determine whether a change improves a metric like conversion rate, click-through rate, or time on page. Visitors are randomly and persistently assigned to a control group (the original) or one or more treatment groups (the variants), so each user sees the same variant on every visit. The test runs until a predetermined sample size is reached, and the results are then evaluated for statistical significance; stopping early the moment results look significant inflates the false-positive rate.
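Persistent assignment is often implemented by hashing a stable user identifier, so no per-user state needs to be stored. A minimal sketch (the function name and the experiment/variant parameters are illustrative, not from any specific tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the experiment name together with the user ID means the same
    user always lands in the same variant for a given experiment, while
    different experiments bucket independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because the assignment is a pure function of the inputs, QA can verify persistence simply by calling it twice with the same user ID and checking that the result never changes.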
From a QA perspective, A/B tests introduce complexity: multiple code paths run simultaneously, and the experience differs per user. QA teams must test all variants independently and verify that the assignment logic, analytics tracking, and fallback behavior all work correctly.
Why It Matters for QA Teams
QA teams must ensure that all A/B test variants function correctly and that analytics are tracking accurately, because a broken variant silently skews results and leads to bad business decisions.
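One common symptom of broken assignment or tracking is a sample-ratio mismatch: the observed traffic split deviates from the planned split more than chance would allow. A hedged sketch of such a check, using a chi-square goodness-of-fit statistic computed by hand (the function name and parameters are illustrative):

```python
def has_sample_ratio_mismatch(observed: dict[str, int],
                              expected_ratio: dict[str, float],
                              critical_value: float = 3.841) -> bool:
    """Flag a sample-ratio mismatch between observed counts and the planned split.

    Computes a chi-square goodness-of-fit statistic over the variant counts.
    The default critical value 3.841 corresponds to p = 0.05 with one degree
    of freedom, i.e. a two-variant test; adjust it for more variants.
    """
    total = sum(observed.values())
    chi2 = sum(
        (observed[v] - total * expected_ratio[v]) ** 2 / (total * expected_ratio[v])
        for v in observed
    )
    return chi2 > critical_value
```

A 5000/4000 split on a planned 50/50 experiment trips the check, while a 5000/5010 split does not, which is exactly the kind of automated sanity check that catches a variant failing silently for a subset of users.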
Example
The marketing team runs an A/B test on the pricing page: Variant A shows monthly prices, Variant B shows annual prices with monthly breakdowns. QA verifies that both variants render correctly, that the assigned variant is reported accurately to the analytics tool, and that toggling between monthly and annual pricing never causes the page to flash the other variant.