How should you assess your email effectiveness?
11 May 2015
Once a month, the email council of the Direct Marketing Association meets to review its work promoting and improving the effectiveness of email as a marketing tool.
The May meeting, on the eve of the General Election, included an hour long debate on "Defining Email Effectiveness". Although the debate never arrived at a cast-iron 'definition', the contributions raised some important principles and explored how you should define (and hence measure) email effectiveness inside your own organisation.
This was a very rich discussion. And when pared down like this, there is a danger that what follows may come across as a statement of the blindingly obvious. But it is amazing how often the obvious gets ignored in the frenzy of everyday work! So here is what I heard:
Firstly, effectiveness requires an effect. It follows that the effectiveness of an email lies in "how far it has its intended effect on the recipient".
And that's it, in a nutshell. The rest, as Hamlet nearly said, is detail.
Commercial Effectiveness
Two things are immediately woolly in this very straightforward definition...
Firstly, permitting such a general definition of effect could, in principle, allow an email manager to define almost any outcome as their intended 'effect' at an individual email level. This effect could thus be something as woolly as 'feeling good about my brand'. Or 'reminding customers we are here'. Or even 'sharing great content'. Conversely, these goals can become so email-specific (increase my open-rates by 5 per cent in the quarter...) that they lack any commercial or social impact. The first job of the email manager is thus to tighten up these definitions of effect to create a hard, transactional outcome. For a charity these might be: "enabling learning about a project as a prelude to investment"; "making a £50 donation to a specific project"; "signing a campaign petition to boycott a particular supermarket"; or "securing renewed contact opt-ins from 100 supporters to a lapsed channel". These simpler, behavioural goals were generally felt to be good mid-level examples of tangible commercial effects.
The second problem here lies in alighting on a good way of framing the judgement 'How far?'. What sort of metrics should an email manager be looking for? Again, it should be simple. In principle, the individual effectiveness of an email can be defined by how frequently it achieves its intended effect - i.e. what proportion of emails of this given type trigger their recipient's intended action.
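The proportion-based view above can be sketched in a few lines. This is a minimal illustration, not a DMA-endorsed metric, and all figures are invented for the example:

```python
# A minimal sketch of the "how far?" metric described above: the
# effectiveness of an email type as the proportion of recipients who
# took its intended action. Names and figures are illustrative.

def action_rate(emails_sent: int, intended_actions: int) -> float:
    """Proportion of sent emails that triggered the intended action."""
    if emails_sent == 0:
        return 0.0
    return intended_actions / emails_sent

# e.g. 5,000 appeal emails, 180 resulting donations
rate = action_rate(5000, 180)
print(f"{rate:.1%}")  # prints "3.6%"
```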
However, this black and white view would risk ignoring the question of the 'degree' of action taken by recipients. Thus if your brand's goal is to have customers share a link to a new promotion, then having half a dozen brand ambassadors tweet out a web-link to their thousands of followers may be much better, commercially, than having a hundred hermits send it to their Mum. It is the quality, not the quantity of actions that will make the difference in terms of 'aggregate effect' and hence commercial effectiveness.
Finally, there is one further nuance to add here, and that's the distinction between transactions and relationships - and also between short-term and long-term value. It's no use persuading 1,000 charity supporters to give to a crisis appeal if they promptly cancel their direct debit out of affront. And it's no use bombarding them ten times a month if they then unsubscribe. Email effectiveness has to be understood as part of a communication journey. Emailers must never lose sight of the fact that the commercial ASSET at stake is the relationship with a supporter or customer, at whatever stage of the journey they may be.
Ultimately, the effectiveness we are really trying to capture here is commercial effectiveness (where 'commercial' can just as easily relate to a social impact). What we are angling for is to assess the performance of an email in terms of how well it generates the desired transactions, while also having an optimal beneficial (or at least no negative) impact on the customer relationship. This commercial effectiveness then needs to be measured in terms of the value or influence of the outcomes, compared to the unit cost. In an ideal world, it would also be contextualised by some form of engagement multiplier, or conjoined to a negative factor such as complaints or unsubscribes. Both would seek to take account of the risk being added to or removed from the relationship - and thus give a proxy for the altered propensity to stay loyal.
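One hedged way to make that calculation concrete: outcome value per unit cost, scaled by an engagement multiplier and discounted for negative events. The weighting scheme here is entirely an assumption for illustration, not a formula from the DMA discussion:

```python
# A sketch of the commercial-effectiveness idea above: value generated
# per pound spent, contextualised by an engagement multiplier and
# discounted by negatives such as unsubscribes or complaints.
# The penalty weighting is an illustrative assumption.

def commercial_effectiveness(outcome_value: float,
                             campaign_cost: float,
                             engagement_multiplier: float = 1.0,
                             negative_events: int = 0,
                             penalty_per_event: float = 0.01) -> float:
    """Outcome value per unit cost, adjusted for relationship risk."""
    base = outcome_value / campaign_cost
    risk_discount = max(0.0, 1.0 - negative_events * penalty_per_event)
    return base * engagement_multiplier * risk_discount

# e.g. £9,000 of donations from a £1,500 send, with 20 unsubscribes
score = commercial_effectiveness(9000, 1500,
                                 engagement_multiplier=1.1,
                                 negative_events=20)
print(round(score, 2))  # 6.0 * 1.1 * 0.8 = 5.28
```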
To borrow Simon Sinek's terminology, commercial effectiveness gives us the WHY? of email marketing.
Process Effectiveness
In borrowing Sinek's model, though, one immediately feels the urge to interrogate the HOW of email.
We can easily imagine this as its "process effectiveness".
To be honest, marketing emails generally work in pretty predictable ways. A proportion of them land in inboxes. A proportion get scanned. A proportion get opened. A proportion get read. A proportion get 'interacted with'. And then follow-through actions get taken, by following instructions or suggestions.
The joy of such sequential processes is that they are beautifully simple to track as journeys, funnels and the like. As email managers, many charities now understand very well the management of bounces and, for example, optimisation for mobile image-size limits, to ensure delivery. Through smart design they can then readily maximise their behavioural conversion ratios as recipients open and click through emails. All this diagnostic information then gives them great insight into process effectiveness.
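The sequential stages described above can be tracked as a simple funnel with step-by-step conversion rates. Stage names and counts below are illustrative only:

```python
# A sketch of the email funnel described above: each stage's count,
# and the conversion rate from one stage to the next.
# All counts are invented for the example.

funnel = [("sent", 10000), ("delivered", 9600), ("opened", 2400),
          ("clicked", 480), ("acted", 120)]

for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {next_count / count:.1%}")
```

Watching where the steepest drop-off occurs tells you which stage of the process to diagnose first.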
However, just as with commercial effectiveness, the most vital factor to take into account here is elapsed time, i.e. how things are actually changing. Emails will generally be of clearly identifiable 'types' - welcome emails, thank you emails, informational emails, promotional emails and so forth. The intrinsic rate of completion of these (which may be good or bad) matters much less than whether it is improving over time.
One further disruptive factor in all this, which increasingly affects process-completeness, is supporters travelling 'off-piste' when they receive an email and failing to follow the intended journey. A simple example: a supporter sees an email on their phone, which is open beside their laptop, and then just goes straight to the sender's website; or perhaps they see the subject line of an email without even opening it, and it reminds them to visit your site. Only the context and content of your email will help you estimate how much of this is actually going on, or how much of a factor it is. Some misleading influences on effectiveness cannot easily be generalised.
Another disruptive factor here is that your customer's journey is unlikely to include JUST the email itself. If the ultimate commercial or social effect lies in an action taken on a web-site, or in a shop, for example, then this completion step too should be included within your view of process effectiveness. Similarly, if the trigger to receive the email comes through a behaviour like clicking a web-link, viewing video inside social media, or ticking a contact preference, then this too is arguably "in-scope" in terms of assessing and managing email effectiveness. Certainly the email manager should be empowered to influence what triggers the email send, how soon it goes, and what constitutes its completion. Managing the timeliness of the email and setting clear expectations around its receipt can be crucial to successful action.
Finally, of course, the fact remains: a good email manager can distort all of this process effectiveness data simply by changing the people they target. Change the underlying relationship asset and you change communications performance. If you want to drive up open-rates, simply remove people from the list who haven't clicked in a while. The choice to target for volume of transactions or value per supporter is obviously dependent upon whether a brand is using cold or warm lists, and whether working on acquisition or retention, for example. But the key is to compare apples with apples.
To reiterate the point, ultimately the best way to assess the "process effectiveness" of email campaigns is the rate of improvement of transaction-completion for your emails - compared like for like with equivalent emails and equivalent audiences - over TIME.
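That like-for-like, over-time view can be sketched as a trend in completion rate for one email type and one audience segment. The figures below are illustrative assumptions:

```python
# A sketch of the like-for-like comparison above: the month-on-month
# change in completion rate for one email type sent to an equivalent
# audience. Figures are invented for the example.

monthly_completion = [0.031, 0.034, 0.038, 0.041]  # e.g. welcome emails

changes = [round(b - a, 3)
           for a, b in zip(monthly_completion, monthly_completion[1:])]
print(changes)  # [0.003, 0.004, 0.003] - a steadily improving trend
```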
Content Effectiveness
The third and final element of email effectiveness is of course the 'WHAT'. This is "content effectiveness" and comprises both a qualitative and quantitative assessment of the way an individual email is actually "built". It's about the quality of the 'stuff' that an email manager puts into an email.
It's worth realising that even if you measured nothing, you could still increase the effectiveness of your emails by THINKING MORE about their content. Having simple buttons, crisp content, clear calls to action and well-sized images are all fundamentals of content effectiveness, for example. We curate some great (and some not so great) examples relating to the not-for-profit sector at Charity Email Gallery. While it remains vital to test, test and test again to improve email effectiveness, testing alone does not give you an excuse to build bad emails.
In addition, today, content increasingly includes dynamic content taken from external databases. Not just names and financial details can be tailored, but also distinct themes and key messages by audience. Unless you are making use of these sorts of personalisation techniques, it's unlikely your email will be as effective as it could be.
You can also improve content effectiveness through effective adaptation to different channel formats. As an email manager you will need to recognise the constraints of different devices and different user email providers, or at least work with email service providers who can cover a lot of this for you.
Finally, as you construct your content for optimum effectiveness, you need to recognise that email is not just a tool to utilise insight but also a tool to gather it. At the simplest level, for example, you can increase the content-effectiveness of emails by attaching tracking codes to interactive content so that web experiences reached via emails will be intrinsically tailored to individual needs.
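The tracking-code idea above amounts to appending campaign and recipient parameters to the links inside an email, so the resulting web visit can be attributed to the send. The parameter names below follow the common UTM convention; the `rid` recipient parameter and all values are illustrative assumptions:

```python
# A minimal sketch of attaching tracking codes to email links, as
# described above. utm_* follows the common UTM convention; the
# "rid" recipient parameter is a hypothetical example.

from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def tag_link(url: str, campaign: str, recipient_id: str) -> str:
    """Append campaign and recipient tracking parameters to a URL."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))  # keep any existing parameters
    query.update({"utm_medium": "email",
                  "utm_campaign": campaign,
                  "rid": recipient_id})
    return urlunparse(parts._replace(query=urlencode(query)))

print(tag_link("https://example.org/appeal", "may-crisis", "u123"))
```

The landing page can then read these parameters back and tailor the experience to that individual, closing the loop between email and website.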
In summary, then, what did the DMA group actually conclude about effectiveness?
Firstly, that achieving your commercial (or social!) goals counts for most, but that they can be complex, can vary over time and can sometimes be very tough to measure - especially over the long haul.
Secondly, that assessing email's effectiveness as a medium is about understanding the full 'texture' of the online experience around it, not just about evaluating any single dimension like 'engagement', or, even worse, optimising a single KPI like 'open rates'.
And finally, that there are, increasingly, some fundamental principles of 'craftsmanship' that apply to constructing good email journeys. These amount to a kind of creative heuristic or rule-set for effective content. And are well worth abiding by.