A guide to A/B split testing
24 Jan 2012
Want to know how to optimise your email campaign with an A/B split test? The key to A/B split testing in email is patience. The longer you wait to analyse results, the more likely you are to get a true picture of what will happen when you roll out your eventual email campaign.
Does your email service provider offer split testing?
Email deployment tools often come with built-in A/B split testing features, designed to run your programmes with the minimum of resource. Depending on the platform, the tool either runs the entire split test and deploys the “winning” campaign automatically, or simply splits the data into two random sets.
However, this automatic A/B split testing tends to be generic. If you have a specific need or want to test something that is unique to your organisation, then the manual approach is often the best method, and it’s not half as difficult as you might think…
How do I start a manual A/B split test?
To ensure that your findings are robust and easy to implement, start with your test plan. Once you’ve decided what you’re going to test, there are three main areas you need to consider: data, set-up and analysis.
How do I split my data?
The best way to test the impact of one variable on another is to do a random selection. This essentially means splitting your test group in half.
However, if you only have a small amount of data, a better approach is to break the analysis down into smaller segments, such as demographic groups. Whatever you decide, make sure that each group has equal and sufficient representation.
If you believe that one approach is likely to deliver significantly higher returns, you may choose to send it to the majority of addresses and keep back only a small sample for the other variant.
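As a sketch of the mechanics, a random split (50/50 or weighted) takes only a few lines. The function and example addresses below are illustrative, not part of any particular email platform:

```python
import random

def split_test_groups(addresses, b_fraction=0.5, seed=42):
    """Randomly split a list of email addresses into groups A and B.

    Set b_fraction below 0.5 to keep back only a small sample for
    the variant you expect to perform worse. A fixed seed makes the
    split reproducible.
    """
    shuffled = list(addresses)  # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - b_fraction))
    return shuffled[:cut], shuffled[cut:]

# Keep 90% for the favoured version, 10% for the test variant.
group_a, group_b = split_test_groups(
    [f"user{i}@example.com" for i in range(1000)], b_fraction=0.1
)
print(len(group_a), len(group_b))  # → 900 100
```

Shuffling before cutting is what makes the selection random rather than, say, alphabetical, which could skew the groups.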
How big does a representative sample need to be? The answer depends on a number of factors:
- Primarily, how statistically viable you need the results to be
- Your objectives
- How many factors you are testing.
There are differing opinions on sample size, but it is often smaller than you would think. Polls predicting the general election, for instance, are based on samples of just 2,000 people, although over 27 million people actually vote.
You can choose your sample size scientifically by using a sample size calculator. You can find one easily – just do an internet search for “sample size calculators”.
How to use a sample size calculator:
Say you wanted to find out how many people would open your email if you sent it to 10,000 people and you wanted the accuracy of your answer to be within 5% either way. For example, if 10% of your sample opened the email, you can say that 5% (10 minus 5) to 15% (10 plus 5) of your entire population would open the email.
So how many people do you need to sample? Here’s how to find out:
- Find a sample size calculator online, such as the ones from Creative Research Systems or Raosoft
- Choose the size of the margin of error you want as your answer – in this example it’s 5%.
- Type in the population – in this case it’s 10,000.
- Choose how certain you want to be. You can never have total certainty, but most calculators let you choose 95% (pretty sure) or 99% (very sure, which means you’ll need to use a larger sample) – in this example it’s 99%.
- Press calculate and the calculator will work out your sample size. In this example, your ideal sample size is 624 people.
Say 10% of your sample opens your email. On the basis of this you can conclude that you’ll get a 5% to 15% open rate when you send it out to 10,000 people.
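The arithmetic behind such calculators can be sketched in a few lines. This assumes the standard sample size formula with a finite-population correction, a worst-case response rate of 50%, and z ≈ 2.58 for 99% confidence; individual calculators may round slightly differently:

```python
def sample_size(population, margin, z=2.58, p=0.5):
    """Required sample size for a given population and margin of error.

    margin is a proportion (0.05 means ±5%). z defaults to 2.58 for
    99% confidence; p=0.5 is the worst-case assumed response rate.
    """
    # Sample size for an infinite population...
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # ...then shrink it for a finite population.
    return round(n0 / (1 + (n0 - 1) / population))

print(sample_size(10_000, 0.05))  # → 624
```

The result matches the worked example above: for a population of 10,000, a 5% margin of error and 99% confidence, you need a sample of around 624 people.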
How do I set up my mailings?
Once you’ve split your data, all you need to do is set up your mailings. If you plan on delving deeper into your results to profile different segments, the best way may be to set the messages up as one campaign or use a pre-defined naming schema and then analyse the data offline after the campaign has finished.
Though this approach requires some additional data work initially, it will keep your campaigns organised within your platform and ensure your future regular campaign reporting remains consistent.
You could:
- Test different subject lines, content or creative by using dynamic content
- Test a variety of send days by setting up a triggered campaign, based on a field you add to your data
- Determine the most effective deployment time, if your list is large enough, by rolling your campaign out gradually.
Extra tip
Track how you split your campaign and make sure that you can identify which subscribers received which test.
How do I analyse the results?
You must let your campaign run its course – I can’t stress this enough. Any results gathered in less than two days will probably turn out to be inaccurate as the testing continues. Depending on your purchasing cycle, you may need to wait as long as a month. Then gather as much information as you can to include in your analysis. This can be anything that you can attribute to an email address, such as revenue, or even web stats.
To determine the impact of each of your tests, you can then layer this information over your “sent”, “delivered”, “opened” and “clicked” activities for each email address.
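As an illustration of that layering step, here is a minimal sketch in plain Python. The records and revenue figures are made up; in practice they would come from your platform’s export and your web analytics:

```python
# Hypothetical per-address activity exported from your email platform.
activity = {
    "ann@example.com": {"test": "A", "opened": True,  "clicked": True},
    "bob@example.com": {"test": "B", "opened": True,  "clicked": False},
    "cat@example.com": {"test": "A", "opened": False, "clicked": False},
}
# Revenue attributed to an email address, e.g. from web analytics.
revenue = {"ann@example.com": 42.50}

# Layer revenue over email activity, then total it by test group.
totals = {}
for email, row in activity.items():
    totals.setdefault(row["test"], 0.0)
    totals[row["test"]] += revenue.get(email, 0.0)
print(totals)  # → {'A': 42.5, 'B': 0.0}
```

The same join-by-email-address approach works for any metric you can attribute to an address, not just revenue.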
You will then be able to easily see which campaign met your objectives best. Look at all possibilities. By splitting the data in different ways (which is easier to do offline), you may uncover some surprising findings that could guide future testing or campaign development.
Extra tip
Maintain a “results log” with all of your learning. Don’t forget to keep track of when you did the test though. Results have a habit of changing over time!
Riaz Kanani, Marketing Director, Alchemy Worx
You can find more articles like this from the Email Marketing Council.