Each A/B test requires a hypothesis about how your variations will measure up to the control – and repeated rounds of testing to gain usable insights. It’s a time-consuming process best suited to learning about your target audience in incremental steps. But remember, true lifts in conversions come from understanding the user and serving relevant, valuable offers (something that may be more easily done with data and personas).
Did you know that only 1 out of 8 A/B tests has driven significant change? A/B testing is tricky because you need to run tests until you reach statistical confidence – the probability that a test result is accurate. (Most researchers look for a 95% confidence level before reaching a conclusion.) Otherwise, results may be influenced by random factors and produce misleading data. Calculating statistical confidence can be too complex for those of us who are not statisticians.
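To make that concrete, here’s a minimal sketch of the standard significance check behind most A/B testing tools – a two-sided z-test on the difference between two conversion rates. It uses only Python’s standard library, and the visitor and conversion numbers are purely hypothetical:

```python
import math

def ab_test_significance(control_visitors, control_conversions,
                         variant_visitors, variant_conversions):
    """Two-sided z-test for the difference between two conversion rates."""
    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = ((control_conversions + variant_conversions)
              / (control_visitors + variant_visitors))
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value, p_value < 0.05  # significant at the 95% level?

# Hypothetical traffic: 10% vs. 12% conversion on 1,000 visitors each
p_value, significant = ab_test_significance(1000, 100, 1000, 120)
print(f"p-value: {p_value:.3f}, significant: {significant}")
```

Notice that even a two-point lift on 1,000 visitors per variation doesn’t clear the 95% bar here – which is exactly why so many tests end without a usable conclusion.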
Another reason A/B testing fails may be sample size. There are tools available for determining the correct sample size, but some estimates suggest that 25,000 visitors may be needed to reach statistical confidence, and smaller businesses are unlikely to have that kind of traffic. This is one of the reasons that Optimizely, one of the most popular A/B testing tools, is discontinuing its free platform in favor of one more focused on large enterprises.
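You can see where estimates like 25,000 visitors come from with a standard power calculation. This sketch uses the common two-proportion sample-size formula (95% confidence, 80% power); the 2% baseline and 20% relative lift are illustrative assumptions, not figures from any particular tool:

```python
import math
from statistics import NormalDist

def required_sample_size(baseline_rate, expected_rate, alpha=0.05, power=0.8):
    """Visitors needed *per variation* to detect the given lift."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = (baseline_rate * (1 - baseline_rate)
                + expected_rate * (1 - expected_rate))
    effect = expected_rate - baseline_rate
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Hypothetical scenario: 2% baseline, hoping for a 20% relative lift (to 2.4%)
n = required_sample_size(0.02, 0.024)
print(f"{n:,} visitors per variation ({2 * n:,} total)")
```

At low conversion rates and modest lifts, the totals land well above 25,000 visitors – a site with a few hundred daily visitors could spend months on a single test.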
Contact our team today to learn more about understanding your key audiences – and making sure your websites are living up to their promise.