This meeting has been going on for three hours, and it's one ego vs. another. You sigh as you hear another "Our users want blah blah blah."
An hour ago, you lost track of which ideas were fueled by ego and which were fueled by research. You agree to set up a test so you can get back to your desk.
Testing ideas takes personal opinions out of the equation and gives you evidence to move forward, but A/B testing is complicated. It requires you to trust the process rather than your instincts.
Since it's easy to let your instincts take over, these tips will help you run better tests.
The math involved in a statistically significant test result is intense. Don't even try it on your own. Use an online test calculator to figure out if you have enough traffic and time for a proper test. I like this calculator from CXL, and they also have an excellent guide for beginners. If you work with a data analyst, consult them as well.
If you've never played around with an A/B test calculator, you may be surprised to find that your startup doesn't have enough web traffic to run an A/B test. A good rule of thumb: if you don't have at least 1,000 conversions (signups, purchases, etc.) a month, you shouldn't A/B test [source]. Startups are usually better off with qualitative research and user testing.
Without a test calculator, you're just guessing at the math. Your results might look good graphed out, but they'll be inconclusive.
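If you're curious what those calculators are doing under the hood, here's a minimal sketch in Python of the standard two-proportion sample-size formula. The baseline rate, lift, and traffic numbers are made up for illustration:

```python
from scipy.stats import norm

def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect an absolute
    lift over a baseline conversion rate (two-sided z-test)."""
    p1, p2 = baseline, baseline + lift
    z_alpha = norm.ppf(1 - alpha / 2)   # e.g. 1.96 for 95% confidence
    z_power = norm.ppf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2

# Hypothetical numbers: a 5% signup rate, hoping for a 1-point lift.
n = sample_size_per_variant(baseline=0.05, lift=0.01)
print(f"~{n:,.0f} visitors per variant")   # roughly 8,000+

# With a hypothetical 500 visitors a day split across two variants:
print(f"~{2 * n / 500:,.0f} days to run")  # over a month
```

Rerun it with a bolder lift (say 3 points instead of 1) and the required sample drops to roughly an eighth, which is exactly why the next tip matters.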
If you make big bets with your B version, your tests are more likely to show a detectable change. You can make better use of your time by testing big ideas as often as possible.
If you've got a lot of traffic, you can detect smaller changes, and A/B tests make sense. If traffic is low, you'll need a considerable improvement to make a significant blip in your results. If learning is your goal, bet on bold changes.
For example, if you ask people which is heavier, a raccoon or an opossum, the answers will split close to 50/50.
But if you ask which is heavier, a mouse or an elephant, the answers will be nearly unanimous.
If you test a bolder idea against the control, you're more likely to get proof that one idea is preferable to the other.
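To make that "proof" concrete: once the test ends, a two-proportion z-test tells you whether the gap between A and B is bigger than chance would explain. Here's a sketch with invented results:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

# Invented results: control converts at 5.0%, the bold variant at 6.5%.
z, p = two_proportion_ztest(conv_a=250, n_a=5000, conv_b=325, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05 here, so the lift is unlikely to be chance
```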
Tests are easy to set up these days, but they still take time. Make sure you're testing things that will provide big payoffs for that time. Optimizing the button color is probably not worth it when there are more significant ideas with bigger gains for the business.
A hypothesis is a prediction you make before running a test. Without a hypothesis, it's impossible to say you proved anything. Hypotheses take many formats. In tech, the "We believe..." format is popular, but I prefer "If ____, then ____" because it sets you up for a bolder prediction and doesn't invite confirmation bias.
Testing without a hypothesis is like throwing spaghetti against the wall to see what sticks. If you don't call your shot beforehand, it doesn't matter where you end up.
Testing calculators will help you determine the timeframe for your test, but you should document your hypothesis well before you start the clock. Agreeing on the prediction beforehand keeps you from revising it as data starts to come in.
It's also a good idea to have a retro after the test is over to determine what you learned and check in on your prediction.
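One lightweight way to lock the prediction in is to write it down as a structured record before launch and only fill in the results fields at the retro. A sketch, with field names and dates that are just a suggestion, not a standard:

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    """A hypothesis documented before the test starts; results filled in at the retro."""
    hypothesis: str                  # "If ____, then ____" form
    primary_metric: str
    minimum_detectable_effect: str
    start_date: str
    end_date: str
    result: str = "TBD at retro"
    learned: str = "TBD at retro"

# Hypothetical example, written before the clock starts:
exp = ExperimentRecord(
    hypothesis="If we replace the pricing table with a single plan, "
               "then signups will rise by at least 1 point",
    primary_metric="signup rate",
    minimum_detectable_effect="+1 percentage point (5% -> 6%)",
    start_date="2023-11-01",
    end_date="2023-12-04",
)
```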
Today's newsletter is a taste of next week's workshop, Testing Product Ideas.
We only have a few spots left, so grab one if you want to design better tests.
Reserve Your Seat for November 22nd
Until next week, start predicting and measuring.
Jeff Humble
Designer & Co-Founder of the Fountain Institute
P.S. If you want help setting up experiments, download our Experiment Cards
The Fountain Institute is an independent online school that teaches advanced UX & product skills.