Monday 3 June 2013

Six Testing Features Your E-Mail Marketing Tool Needs

“Test and Target” is the way to success – especially in e-mail marketing, where results come in particularly fast. Nevertheless, some e-mail marketing tools still make testing an overly tedious affair. So before you decide on a tool for your company, make sure it comes with these six testing features.
After six years of creating countless newsletters and other e-mail campaigns, I have come to the conclusion that, apart from the “basic e-mail marketing stuff”, it is really hard, if not impossible, to predict what makes your e-mail message perform better. By “basic e-mail marketing stuff”, I mean the rather obvious advice one usually reads about in the abundant guides that teach you how to improve your marketing e-mails: a clear call-to-action, a content-related subject line instead of just “Newsletter 7/2012”, personalizing content according to your recipient’s profile, and so on.

Is a short subject line better than a long one? Well…
With the other “stuff”, it is difficult to generalize. Is a short subject line really better than a long one? Depends. Is it better to send the campaign out at 6am or 6pm? Depends. Is a subject line with an imperative (“Apply for this Web Analytics event now!”) better than a more descriptive one (“The latest Web Analytics trends: Google is doomed”)? Depends. Is a subject line informing of a limited offer better than one that leaves this out? Probably in most cases, but if you overdo it, your users might get tired of it. So yes: it depends!

Tendencies, yes, but hardly general recipes
Christian, a colleague of mine, recently did some larger-scale e-mail campaign testing, using typical testing variables (imperative vs. descriptive subject, morning vs. evening roll-out, etc.). There were some tendencies: for example, a roll-out in the morning seemed to be better in most cases, but far from all. So it was hard to distill any company-wide guidelines from the test results. The problem seems to be that there are so many intervening variables that it is hard to control for each of them.
To name a few of these intervening variables, let’s look at some obvious ones from the recipient list. Here, we (a recruiting company) usually deal with very different demographics, e.g. the recipients’ life phase (working, student, high school graduate), their university majors, or their e-mail history (some recipient lists may contain a large portion of users who have already received a couple of mails this week, while other lists contain more users who haven’t received a mail in a week), and so on…
So yes, there are some general guidelines on what triggers user action in 60-plus percent of all cases, but every campaign is different. That is why the best way to make sure you are actually sending an effective e-mail campaign is to test at least two versions on a small sample and then roll out the winning version to the rest of your recipients.

Some e-mail marketing tools do a woeful job of facilitating tests
Of course, saying that testing leads to success has become a truism these days. But if you look at the testing tools that e-mail marketing providers offer, you get the impression that split testing is still reserved for some rare kind of overly ambitious marketing geek. Lamentably, it probably is that way. :(
So the following recommendations all hail from my daily frustrations with the overly tedious e-mail marketing tools I work with. I will shun “naming and shaming” here because I only know two larger e-mail marketing tools in depth, and it would be unfair to single them out while other companies’ products might be just as poor. I would love to read your comments, though, on how your e-mail marketing tool deals with these things. I should note that the tools I have worked with are enterprise solutions hosted by two of the many e-mail marketing service providers that call themselves “market leader in e-mail marketing” (I always wonder how there can be so many “market leaders”). To their credit, both solutions are really great in other areas like segmentation or their APIs.

So which testing features should your ideal e-mail marketing tool offer?
A: Top priority features

1. A QUICK and EASY way to compare two or more versions of a message (ideally, with a separate, even quicker feature to test only subject lines)
Ok, probably every single e-mail marketing software provider will tell you their tool offers this, i.e. a split testing tool to test different versions of a message. The real question, though, is how! Ideally, it should take no more than three minutes to set up and roll out a simple A/B subject line test. If it is not quick and easy, people won’t use it – that is what has happened in my company, because our current tool is nerve-wrackingly tedious when it comes to split tests.
So check for the following usability issues:
a) In order to send out an e-mail campaign, most e-mail marketing tools require you to set up three things: the e-mail message, the recipient list, and the campaign that ties the first two together and sets the roll-out time etc.
Now, for a single A/B subject line test, could it be that you have to set up two entire campaigns and two entire messages, including the message body even though you only want to test the subject line? In that case, you are dealing with what is depicted in the graphic below as a “Tedious Split Test Campaign”.
b) And while you are setting up the second message and campaign, could it be that you have to fill in almost all the fields again (for campaign A and B and message A and B) even though only one of them differs (the subject line)?

c) Is it easy to jump back and forth between the settings of the split run and its associated campaigns and messages – i.e. one click?
Our current tool doesn’t meet any of these criteria.
So how would the ideal tool manage this? It would offer an option in the split campaign menu that lets you add one or more messages to be tested, sparing you from creating an additional campaign for every version. The super-ideal tool, focussed on facilitating quick insights, would even distinguish between a more complex test of entire messages (where it is ok to create two messages) and a simple subject line test (where creating a single message should do). In the latter case, you would simply enter the different subject line versions in the message or campaign settings.
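To make the idea tangible, here is a purely hypothetical sketch of how such a subject-line-only test could be described with a single campaign and a single message. None of the field names or values below come from a real product; they only illustrate the structure:

```python
# Purely hypothetical sketch: a subject-line-only split test described with
# ONE campaign and ONE message. The field names do not refer to any existing
# product; they just make the idea concrete.
split_campaign = {
    "campaign": "web-analytics-event-june",
    "recipient_list": "students-business",      # a single recipient list
    "message": "event_invitation.html",         # a single message body
    "test": {
        "variable": "subject",                  # only the subject line differs
        "variants": {
            "A": "Apply for this Web Analytics event now!",
            "B": "The latest Web Analytics trends: Google is doomed",
        },
    },
    "rollout": "2013-06-10 07:00",
}
```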

2. Determine a sample size for each variation
If you want to try out something more outlandish, you’d prefer to throw it at just a tiny fraction of your recipients first instead of having to go into 50/50 mode.
So your tool should allow you to (a quick calculation follows this list):
  • set the general sample size of your test run (say, ten percent of your recipients)
  • set the sample size of each of the variations you are testing (e.g. 80 percent of the 10 percent sample get version A, 20 percent get version B)
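To make the arithmetic concrete, here is a quick back-of-the-envelope sketch of the example above, assuming a made-up list of 50,000 recipients:

```python
# Back-of-the-envelope numbers for a 10% test sample split 80/20 between
# version A and version B. The list size of 50,000 is invented.
recipients = 50_000

test_share = 0.10                 # 10% of the list take part in the test
share_a = 0.80                    # 80% of the test sample get version A

sample_size = int(recipients * test_share)   # 5,000 test recipients
size_a = int(sample_size * share_a)          # 4,000 get version A
size_b = sample_size - size_a                # 1,000 get version B
rest = recipients - sample_size              # 45,000 wait for the winning version

print(size_a, size_b, rest)       # 4000 1000 45000
```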
3. Assign different campaign tracking parameters to your links depending on the version
A successful e-mail campaign does not end with a click; it ends with a conversion. Some messages might draw a lot of clicks but few conversions (e.g. because the e-mail copy promises too much), while others draw fewer clicks but more conversions. If you want to take that into account, your e-mail marketing tool needs to allow you to do two things (see the sketch after this list):
  • automatically add campaign tracking parameters to your links (utm_source and the like for Google Analytics)
  • vary those parameters for each version to be tested (in Google Analytics, I usually use the utm_content variable for this purpose)
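As an illustration of what the tool should do automatically, here is a rough sketch that tags a link with Google Analytics campaign parameters and varies utm_content per version. The parameter values are invented for this example:

```python
# Sketch: append Google Analytics campaign parameters to a link and vary
# utm_content per tested version. The values are made up for illustration.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def tag_link(url, version):
    params = {
        "utm_source": "newsletter",
        "utm_medium": "email",
        "utm_campaign": "web_analytics_event",
        "utm_content": version,              # "A" or "B" identifies the tested version
    }
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))     # keep any parameters already on the link
    query.update(params)
    return urlunparse(parts._replace(query=urlencode(query)))

print(tag_link("https://example.com/apply", "A"))
print(tag_link("https://example.com/apply", "B"))
```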
4. Automatic roll-out
So you have spent all day writing your wonderful newsletter, and your mailing schedule says it has to go out today. It’s 6.30pm. And naturally, you really want to stay at work for another two hours to see the results of your split test before being able to roll out the winner. What? You really don’t? Ok, so what does that lead to? Right: zero split tests! Zero improvement! The solution: some tools (not ours!) offer an automatic roll-out of the winning variation (usually determined by the click rate) after a period of time you can specify.
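A minimal sketch of that logic (not any real tool’s API): once the waiting period you specified is over, the version with the higher click rate is rolled out to the remaining recipients.

```python
# Sketch of the auto roll-out idea; send_to_rest stands in for the tool's
# own send call and the result figures are invented.
def auto_rollout(results, send_to_rest):
    """results: e.g. {"A": {"sent": 4000, "clicks": 220},
                      "B": {"sent": 1000, "clicks": 41}}"""
    winner = max(results, key=lambda v: results[v]["clicks"] / results[v]["sent"])
    send_to_rest(winner)                     # roll out the winning version to the rest
    return winner
```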
B: Medium priority features

5. Multivariate tests
I have headlines H1 and H2 and images I1 and I2. With automated multivariate tests, I can find out whether H1 + I1 is a better combination than H1 + I2, H2 + I1 or H2 + I2. With a usable multivariate testing tool, I can do all of this in a single message with some markup intelligible to non-programmers, and I don’t have to set up four campaigns and four messages. Once again, the URL parameters in each variation should reflect the respective combination of “variables” (in this case, the headline and the image used). That way, you can tie them to conversions (see 3.).
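As an illustration, these are the four combinations such a test covers, each with a utm_content value identifying it for the conversion tracking described under 3. The headlines and image file names are invented:

```python
# Enumerate the four headline/image combinations of a 2x2 multivariate test
# and derive a utm_content value per combination. Content is invented.
from itertools import product

headlines = {"H1": "Apply now!", "H2": "The latest trends"}
images = {"I1": "students.jpg", "I2": "campus.jpg"}

for h, i in product(headlines, images):
    utm_content = f"{h}-{i}"       # e.g. "H2-I1" marks this combination in the links
    print(utm_content, headlines[h], images[i])
```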

6. Test the best time for roll-out
Is it better to send my newsletter at 7pm or the next morning at 7am? Especially for recurring campaigns like weekly newsletters, it is important to know the time your recipients are most responsive to your mails. That is why a great split testing tool allows you to mail samples of your campaign at different times. Again, you should be able to specify these times up front and not have to go back to your tool for each roll-out.
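A minimal sketch of the idea, with invented times and a placeholder scheduling call rather than a real API: you define the roll-out slots up front, and each sample is scheduled into its slot.

```python
# Sketch of a time-of-send test defined up front; schedule_send stands in
# for the tool's own scheduling call, and the times are invented.
from datetime import datetime

slots = {
    "evening": datetime(2013, 6, 10, 19, 0),   # 7pm
    "morning": datetime(2013, 6, 11, 7, 0),    # 7am the next morning
}

def schedule_samples(schedule_send, samples):
    """samples: {"evening": [recipients...], "morning": [recipients...]}"""
    for slot, when in slots.items():
        schedule_send(samples[slot], when)     # each sample goes out in its own slot
```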

Don’t forget to check the reports
All that said, these features should of course come with sensible reporting. So make sure to check the split testing reports: Do you understand them right away? Are the necessary metrics included (at least open and click rates for each tested version)? Are they presented visually in a way that makes comparing results easy (e.g. right next to each other)?

Discuss: How is your tool doing?
Are there any features that you think are missing on this list? Are you happy with your e-mail marketing solution’s testing features? I would be glad if you shared your experiences.
