An A/B test is an experiment where two versions of a page, message, design, or experience are shown to different groups of users to determine which performs better. Common synonyms include split test, variant test, and controlled experiment.
An A/B/C test extends the same logic by adding a third variant with its own set of changed elements.
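In practice, users are often assigned to variants with deterministic hash-based bucketing, so a returning user always sees the same version of an experiment. Here is a minimal sketch in Python; the function, user ID, and experiment name are hypothetical, purely for illustration:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant for a given experiment."""
    # Hash user + experiment together so each user gets a stable variant
    # within an experiment, but is bucketed independently across experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-123", "pdp-hero-image"))                   # A/B test
print(assign_variant("user-123", "pdp-hero-image", ("A", "B", "C")))  # A/B/C test
```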
Why A/B Tests Matter
A/B testing helps teams make decisions based on evidence rather than assumptions. In ecommerce and merchandising, it’s one of the most reliable ways to understand how real customers respond to changes in:
- product pages
- checkout flows
- pricing and promotions
- navigation and search
- content and creative
Because customer behaviour is often counterintuitive, A/B testing reduces guesswork and helps teams optimise for outcomes that matter: conversion, revenue, and customer confidence.
How A/B Tests Are Calculated
A/B tests typically compare performance using metrics such as conversion rate:
Conversion Rate = Conversions / Visitors, usually expressed as a percentage.
Example: If Variant A converts at 3.2% and Variant B converts at 3.8%, Variant B wins, provided the difference is statistically significant rather than random variation.
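Whether a difference like that is significant depends heavily on traffic volume. One common check (among several valid approaches) is a two-proportion z-test; below is a minimal sketch using only the Python standard library, with the 10,000-visitors-per-variant counts assumed purely for illustration:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal-tail p-value
    return z, p_value

# Hypothetical traffic of 10,000 visitors per variant at 3.2% vs 3.8%:
z, p = two_proportion_z_test(conv_a=320, n_a=10_000, conv_b=380, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 2.31, p ≈ 0.02, below the common 0.05 threshold
```

In this illustrative scenario the result clears the conventional 0.05 bar; with only 1,000 visitors per variant the same 3.2% vs 3.8% gap would not, which is why sample size matters as much as the observed uplift.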
Common Use Cases
- PDP optimisation: Testing images, copy, reviews, or layout.
- Checkout improvements: Payment options, form fields, delivery messaging.
- Promotions: Testing discount types, thresholds, or placement.
- Navigation: Menu structures, filters, or search layouts.
- Email and CRM: Subject lines, CTAs, send times, or creative.
Related Terms
- Conversion Rate
- Statistical Significance
- Multivariate Testing (MVT)
- Control Group
- Hypothesis
- Experimentation Framework
What A/B Tests Really Tell Us
When we look at A/B testing through a systems lens, it becomes more than a method. A/B tests reveal how customers actually behave, not how we expect them to behave. They expose the gap between internal assumptions and real‑world intent, turning data into a form of empathy. Every experiment is a conversation with the customer: “Did this help you? Did this make sense? Did this build trust?”
A/B tests also highlight cross‑functional interdependencies. A winning variant on the PDP might fail if supply chain can’t support the uplift. A checkout improvement might underperform if marketing traffic quality changes. The system reminds us that no experiment exists in isolation; results are shaped by the entire ecosystem.
From a storytelling perspective, A/B tests help teams move beyond opinions and toward shared narratives grounded in evidence. They create alignment, reduce friction, and build confidence in decision‑making. And when used well, they support sustainable growth by encouraging continuous learning rather than one‑off optimisation.
At its core, A/B testing is an act of humility and curiosity. It says: “Let’s test, learn, and evolve.” That iterative, human‑centred, future‑focused mindset is what separates reactive teams from resilient ones.