7 ways to ensure successful A/B email testing
01 Aug 2013
Before I launch into my 7 tips to ensure successful A/B testing in email marketing, here’s a quick definition for those of you not familiar with the term.
What is an A/B email test?
An A/B test is a clean and clear experimental approach: two variations of an email, differing in just one element, are sent to a statistically valid subset of the target audience. The winning variation is then rolled out to the remaining audience with confidence, knowing that, on the day, Route A was preferred to Route B. The key is to keep all elements constant bar one. If more than one aspect is altered, it will not be possible to isolate which factor is responsible for the change in consumer behaviour.
1. Carry out the A/B tests at exactly the same time as one another
This ensures that any difference in results can be attributed to the altered variable and not to external influences on response: time of day, day of week, marketplace conditions, the economy and so on.
2. Check the size of each cell
Ensure that each cell is large enough to give statistically valid test results. This can be worked out with a sample-size tool, or by an analyst, calculating the crucial confidence level from the proposed cell size and your expected response rate. A 95% confidence level is the norm for email marketing purposes.
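As a rough illustration (not from the original article), the sketch below shows the standard two-proportion sample-size calculation a tool or analyst might run for this check; the baseline response rate, expected uplift and statistical power used here are hypothetical figures.

```python
# Illustrative sketch: minimum cell size per variant needed to detect a
# difference in response rate at a given confidence level (95% by default).
# Uses the standard two-proportion sample-size formula; the rates below
# are made-up examples, not figures from the article.
from statistics import NormalDist
from math import sqrt, ceil

def cell_size_per_variant(baseline_rate, expected_rate,
                          confidence=0.95, power=0.80):
    """Minimum recipients per cell to detect the difference between two
    response rates with the given confidence and statistical power."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (baseline_rate + expected_rate) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(baseline_rate * (1 - baseline_rate)
                                 + expected_rate * (1 - expected_rate))) ** 2
    return ceil(numerator / (baseline_rate - expected_rate) ** 2)

# Example: a 3% baseline response rate, hoping to detect an uplift to 4%
print(cell_size_per_variant(0.03, 0.04))  # roughly 5,300 recipients per cell
```

If the calculated cell size is larger than the list you can spare for testing, either test a bolder change (a larger expected uplift needs fewer recipients) or accept a lower confidence level with eyes open.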
3. Don’t collect results too early
Tests should always be allowed to run their natural course. There is a temptation to dive in and grapple with the figures as soon as they become available, but an A/B test needs to be measured over the same period as campaigns would normally be tracked. If, historically, 80% of response is captured within 48 hours, then let the A/B test run for a similar time frame, once again ensuring the statistical robustness of the results. A short sketch of how that cut-off can be read from past campaign data follows.
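As a hedged illustration (again, not from the original article), the snippet below shows one way to work out how long to let a test run before reading the results, using historical response timings; the figures in the example list are hypothetical.

```python
# Illustrative sketch: find the point after send by which a given share of
# historical responses had arrived, and let the A/B test run at least that
# long. The response times below are made-up example data.
def hours_to_capture(historical_response_hours, share=0.80):
    """Hours after send by which `share` of historical responses had arrived."""
    times = sorted(historical_response_hours)
    cutoff_index = int(len(times) * share) - 1
    return times[max(cutoff_index, 0)]

# Example: hours after send at which past responses arrived
past_responses = [1, 2, 3, 5, 8, 12, 24, 48, 72, 96]
print(hours_to_capture(past_responses))  # -> 48: run the test for at least 48 hours
```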
4. Be prepared for a spike in response
Always ensure that the conversion funnel is primed and ready to receive the increased volume of traffic that follows a successful optimisation. This is especially true at peak sales or response periods throughout the year. There is little point in working hard to optimise response if the backend systems such as call centres, websites and e-shops aren't stocked and ready to receive the increased volume of consumers being driven towards them. In essence, think past the marketing strategy and include other stakeholders in planning your campaign.
5. Share any learnings from your A/B test results
Remember to store and name your test campaigns in a structured way. You are expending energy and budget to learn something. Those learnings should be made available to all concerned to avoid the corporate marketing amnesia we all face as campaigns come and go and personnel change on a regular basis.
6. Be consistent with your customers
One other point to remember: once a consumer has been treated in a certain way, make sure you are consistent in the way you interact with them from then on. The tracking tools and campaign management systems available today should give a clear understanding of who has seen what, and deliver content accordingly.
7. Keep on testing and testing…
Continue to test, one discrete element at a time, always striving to enhance the consumer's experience and, as a direct result, elicit the behaviour you seek. Today's outright winner in an A/B test will be tomorrow's control cell.
By DMA guest blogger Lucy Acheson, Head of Data Planning at WDMP