Peeking at A/B Tests: Why it matters, and what to do about it
David Walsh (Stanford University); Ramesh Johari (Stanford University); Leonid Pekelis (Stanford University)
Abstract
This paper reports on a novel statistical methodology that has been deployed by the commercial A/B testing platform Optimizely to communicate experimental results to its customers. Our methodology addresses the problem that traditional p-values and confidence intervals give unreliable inference, because users of A/B testing software are known to continuously monitor these measures as the experiment runs. We provide "always valid" p-values and confidence intervals that are provably robust to this effect. Not only does this make it safe for a user to continuously monitor her experiment, but it also empowers her to detect true effects more efficiently. This paper provides simulations and numerical studies on Optimizely’s data, demonstrating an improvement in detection performance over traditional methods.
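
As a rough, hedged illustration of the phenomenon the abstract describes, the sketch below simulates A/A experiments (zero true effect) in which a user peeks at the running test at regular checkpoints. It contrasts a naive fixed-horizon z-test, whose false positive rate balloons under peeking, with an always-valid p-value built from a normal-mixture sequential probability ratio test (mSPRT). The mSPRT construction and all parameters (tau2, checkpoint spacing, sample sizes) are illustrative assumptions, not taken from the paper or from Optimizely's deployed system.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

alpha = 0.05        # nominal false positive rate
n_max = 10_000      # observations per arm in each simulated A/A experiment
n_sims = 1_000      # number of simulated experiments with zero true effect
sigma2 = 1.0        # known per-observation variance (assumption)
tau2 = 0.01         # variance of the normal mixing prior (assumption)
checkpoints = np.arange(100, n_max + 1, 100)   # sample sizes at which the user peeks

naive_fp = 0
always_valid_fp = 0
for _ in range(n_sims):
    a = rng.normal(0.0, np.sqrt(sigma2), n_max)
    b = rng.normal(0.0, np.sqrt(sigma2), n_max)

    # Running difference in means at every checkpoint.
    diffs = (np.cumsum(b)[checkpoints - 1] - np.cumsum(a)[checkpoints - 1]) / checkpoints
    v = 2.0 * sigma2 / checkpoints             # Var(diff in means) under H0

    # Naive fixed-horizon z-test p-values, recomputed at every peek.
    naive_p = 2.0 * norm.sf(np.abs(diffs) / np.sqrt(v))

    # Always-valid p-values from a normal-mixture SPRT: Lambda_n is the
    # mixture likelihood ratio of "effect ~ N(0, tau2)" against "effect = 0",
    # and p_n = running minimum of 1 / Lambda_n is valid at every checkpoint.
    log_lr = 0.5 * np.log(v / (v + tau2)) + diffs**2 * tau2 / (2.0 * v * (v + tau2))
    always_valid_p = np.minimum.accumulate(np.minimum(1.0, np.exp(-log_lr)))

    # A peeking user stops at the first checkpoint that looks "significant".
    naive_fp += (naive_p < alpha).any()
    always_valid_fp += (always_valid_p < alpha).any()

print(f"false positive rate under peeking, naive p-values:        {naive_fp / n_sims:.3f}")
print(f"false positive rate under peeking, always-valid p-values: {always_valid_fp / n_sims:.3f}")
```

Under these assumptions the naive procedure rejects far more often than the nominal 5 percent when monitored continuously, while the always-valid p-value keeps the false positive rate below alpha no matter how often the user peeks, which is the guarantee the abstract refers to.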