The Pitfalls of Email Marketing Metrics & A/B Testing [Video]


“Success metrics” are tricky. In our data-flooded industry, there are lots of opportunities to go astray.

For example, sometimes marketers will use one yardstick to measure the success of all their campaigns, regardless of the goals of those campaigns. This is something that has tripped us up here at Litmus on occasion, and was part of what motivated us to share our top 10 email campaigns of 2015 by a variety of success criteria.

Another pitfall is that marketers are generally too focused on campaign-level metrics and don’t give enough attention to the channel- and customer-level metrics that are critical to long-term success. Campaign metrics are useful, but they only provide a limited snapshot in time. Broader, higher-level success metrics are vital because they reveal trends over time that are more indicative of subscriber response.

To better understand which metrics are the best indicators of optimization, health, and success at the campaign, channel, and customer level, check out this Email Metrics Matrix.

[Image: Email Metrics Matrix]

Holly Wright, Email Marketing Manager at marketing and ecommerce agency Phoenix Direct, has seen some of their clients struggle with metrics as well, whether with gauging email success or the success of A/B tests. At The Email Design Conference, I had the opportunity to sit down and interview Holly about these challenges and how email marketers can overcome them.

You can watch the full interview here, or read a transcript of it below.

I think a lot of B-to-C email marketers get a little hung up on some of the smaller intricacies of collecting data on their users. They get too concerned with, “Are we looking at unique opens or all opens? Are we looking at unique clicks or all clicks?” But ultimately, I don’t think those nitty-gritty details make that big a difference in their programs.

What I like to suggest to people is that they look for overall trends: compare what happened last month with what happened this month. If we’re running the same campaign with different subscribers, compare what happened the last time we ran it with what happened this time.

Revenue per Subscriber

Because we’re in the ecommerce space, most of our big metrics revolve around revenue numbers and dollars. So in addition to looking at the basic email metrics—like open rates, click rates, unsubscribe rates, bounce rates—we also spend a lot of time digging into dollars per email: How much revenue did this particular deployment generate for us? We also look at average order value once subscribers are on the website.

A metric that a lot of ecommerce people overlook is dollars per subscriber. I think the reason that’s tricky is that you have to consider it over a period of time—dollars per subscriber for last week, or last month, or last quarter, or for the year. But it’s one of my favorite metrics, because you can really see how your program is driving more and more value from the subscribers that you do have.
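To make the arithmetic concrete, here is a minimal sketch of both views: dollars per email for each deployment and dollars per subscriber for the month. The campaign revenue, delivery counts, and subscriber count are hypothetical placeholders, not figures from Phoenix Direct or Litmus.

```python
from datetime import date

# Hypothetical campaign records: (send date, revenue attributed to that deployment)
campaigns = [
    (date(2015, 9, 3), 12_400.00),
    (date(2015, 9, 17), 9_850.00),
    (date(2015, 9, 24), 14_200.00),
]

emails_delivered_per_send = 48_000  # assumed deliveries per deployment
active_subscribers = 50_000         # assumed subscriber count for the month

# Campaign-level view: dollars per email delivered for each deployment
for sent_on, revenue in campaigns:
    print(f"{sent_on}: ${revenue / emails_delivered_per_send:.4f} per email delivered")

# Channel-level view: dollars per subscriber across the whole month
total_revenue = sum(revenue for _, revenue in campaigns)
print(f"Revenue per subscriber (month): ${total_revenue / active_subscribers:.2f}")
```

Because the per-subscriber figure is tied to a time window, the same calculation run month over month gives you a trend you can compare, rather than a single campaign snapshot.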

Monitor List Fatigue

Because I work at an agency, we look at all the big metrics for our email campaigns for our clients. We do find that some of the executives at our client companies can be very focused on revenue, and they often want to ramp up their email frequency in order to drive more revenue, which is fine. But one of the things they often forget about is looking at other indicators that their list may be fatigued, or just disengaged. So when our clients ask us to ramp up email frequency, we often monitor things like unsubscribe rates, open rates, and click rates, just to make sure there’s not a decline in engagement.
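As a rough illustration of that kind of monitoring, the sketch below compares engagement before and after a frequency increase and flags possible fatigue signals. The rates and thresholds are made up for the example and are not recommended benchmarks.

```python
# Hypothetical engagement rates before and after a frequency increase.
baseline = {"open_rate": 0.22, "click_rate": 0.031, "unsub_rate": 0.0012}
current = {"open_rate": 0.19, "click_rate": 0.024, "unsub_rate": 0.0021}

# Illustrative tolerances: relative drop in opens/clicks, relative rise in unsubscribes.
ALERT_DROP = 0.10
ALERT_RISE = 0.25

def relative_change(before: float, after: float) -> float:
    """Fractional change from the baseline value."""
    return (after - before) / before

for metric in ("open_rate", "click_rate"):
    change = relative_change(baseline[metric], current[metric])
    if change < -ALERT_DROP:
        print(f"Possible fatigue: {metric} down {abs(change):.0%} vs. baseline")

if relative_change(baseline["unsub_rate"], current["unsub_rate"]) > ALERT_RISE:
    print("Possible fatigue: unsubscribe rate rising vs. baseline")
```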

A/B Testing

One of the biggest mistakes I see people make when they design an A/B test is that they’ll test two unrelated variables, or they’ll try to test more than one thing at a time. Then, when they don’t see conclusive results, they’re not sure why. Another common issue is not testing for long enough.

You really have to reach statistical significance before you can declare a winner, but you can’t get there if you don’t collect enough data. So a one-off test usually isn’t enough. Usually you have to come up with a concept you’re testing and implement that same concept in a variety of ways, multiple times over a period of time, until you have enough data to declare a statistically significant winner.
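For readers who want to sanity-check significance themselves, here is a minimal sketch using a two-proportion z-test on pooled click counts—a standard approximation for comparing two rates. The counts are hypothetical, and in practice your ESP or testing tool may run this calculation for you.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(clicks_a: int, sends_a: int,
                           clicks_b: int, sends_b: int) -> float:
    """Two-sided p-value for a difference in click (or open) rates,
    using the normal approximation to a two-proportion z-test."""
    rate_a, rate_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results pooled from several sends of the same test concept
p = two_proportion_p_value(clicks_a=540, sends_a=12_000,
                           clicks_b=610, sends_b=12_000)
print(f"p-value: {p:.3f}  ->  significant at 95%: {p < 0.05}")
```

If the p-value stays above your threshold (0.05 is a common choice), keep running the same concept across additional sends until the pooled sample is large enough to call a winner.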


Want to get more tips and advice like this? Subscribe to our newsletter and get the latest content for email marketing pros delivered straight to your inbox.

Chad S. White

Chad S. White is the Head of Research at Oracle Digital Experience Agency.