Posts Tagged ‘Email Timing’
Tuesday, August 26th, 2008
Many customers have asked us to help them better understand the effect marketing messages have on their customer base. Almost everyone we know uses multi-channel marketing in one form or another: whether it’s email and web ads, or email, web ads, and direct mail, most companies use more than one medium to get their message out to new and existing customers. The problem many companies face is determining how and when to target each customer with the appropriate message.
Read Multi-Channel Marketing and the Zone of Influence »
Tuesday, August 19th, 2008
One question that we constantly come up against is why online retailers shouldn’t blast their entire customer base with every email promotion they create. Granted, most companies use some form of segmentation to track email responses (the usual 0-12 month buyers vs. 12-24 month buyers is a common example), but besides being defined by arbitrary recency and monetary-spend cut-offs, these groups have no real bearing on how the customers they contain will respond to a given ad. My colleague Matt wrote a nice entry about this back in July (see Does your email response rate depend on how many emails you send?), but I recently came across some new metrics that I think help drive the point home.
Read Timing and targeting: why you shouldn’t blast all your customers with every offer »
Wednesday, August 6th, 2008
These days even the most technophobic consumers have inboxes full of marketing from companies they have interacted with. As responsible marketers, we have ensured that these customers have opted in to our communications and we know that we must promptly remove them from our house file when they no longer want to hear from us. However, according to Marketing Sherpa’s Email Marketing Benchmark Guide 2008 (summary here), ensuring opt-in may no longer be enough to keep our company’s image clean.
In a survey of over 4000 consumers, half consider email to be spam if it arrives too frequently, even if it comes from a known sender. This has serious consequences for email marketers using “carpet-bombing” strategies to spur customers to purchase. Even if consumers have opted in and know a company well, they may come to think of it as a spammer if they receive marketing emails every day or every week.
The sentiment that, regardless of permission, frequent email marketing is spam will only grow as inboxes become even more flooded. Marketers will be forced to migrate to a “surgical-strike” strategy where customers are targeted with highly personalized messages only at the most likely time to buy, and probably no more than once a month.
In an environment where consumer trust is hard to gain and can vanish with one misstep, nobody wants to be seen as a spammer. Unfortunately, the risk of marketing too frequently is now beginning to outweigh the benefit. If email marketers do not adapt through better targeting, they may find themselves relegated to the junk folder for good.
Thursday, July 24th, 2008
On Monday, I began a discussion about how Istobe evaluates the ROI from email marketing campaigns based on our predictive models. At the end of my post, I promised a discussion about other factors that we take into account when evaluating the lift. And…voila. Today we unveil those factors: the email influence zone and opt-outs, and we discuss how Istobe accounts for them in our lift calculations.
Email influence zone
Sometimes referred to as decay rate in the catalog industry, the email influence zone (EIZ) - not unlike the catalog influence zone (CIZ) - is essentially the time period after an email is sent during which the email still influences purchasing. We assume that each succeeding day after the email is received has less effect than the day before - thus the moniker decay rate. Catalogers have believed for years that their catalogs have a carry-over influence: the catalog accounts for many web purchases. In fact, this is the very reason that catalogers are loath to cut the number of catalogs they ship, even to customers who have never purchased from the catalog itself. We believe this is also true of email marketing.
Basically, the idea behind the EIZ is that an email offer has an effect on online purchases that have no other obvious origin and which relate to the product that we predicted. For example, if our models predict that shoes are the likely next product for a particular customer and that customer purchases shoes online five days after receiving an email that advertises shoes, then we can assume that the email - and our product recommendation - influenced the customer’s purchase. Our model gets credit for a small percentage of this purchase even though the purchase didn’t come directly from an email click-through. The EIZ period that we calculate differs per client depending on the frequency with which our clients send emails.
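The decay idea can be sketched in a few lines. This is a minimal illustration assuming an exponential decay; the window length, half-life, and base credit are made-up parameters for the example, not Istobe’s actual values:

```python
def eiz_credit(days_since_email, window_days=14, half_life=3.0, base_credit=0.2):
    """Fraction of a matching purchase credited to the email.

    Credit starts at base_credit on the day the email arrives, halves
    every `half_life` days, and drops to zero outside the EIZ window.
    All parameter values here are illustrative assumptions.
    """
    if days_since_email < 0 or days_since_email > window_days:
        return 0.0
    return base_credit * 0.5 ** (days_since_email / half_life)
```

Under these assumptions, the shoe purchase five days after the email would earn the model roughly a 6% credit (0.2 × 0.5^(5/3)), while a purchase three weeks later would earn nothing.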
Opt-outs on the Istobe watch
If we’re going to give ourselves some of the credit for purchases that occur in non-email channels, we also have to take a hit for bad events that occur on our watch. The bad event that Istobe tracks carefully is email opt-out. We track whether the opt-out rate goes up during our watch; if it does, we have to assume that our next-best offer has somehow turned customers off, and we deduct a portion of our lift because we believe we were responsible for that rise in the opt-out rate - for that small piece of customer attrition.
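One way such a deduction could work - a sketch only, assuming the penalty is proportional to the relative rise in opt-out rate, with a hypothetical weight rather than Istobe’s actual formula:

```python
def adjusted_lift(raw_lift, baseline_optout, observed_optout, penalty_weight=0.1):
    """Deduct a share of measured lift when opt-outs rose on our watch.

    A flat or falling opt-out rate leaves lift untouched; a rise is
    charged against lift in proportion to its relative size, scaled
    by a hypothetical penalty_weight.
    """
    rise = max(0.0, (observed_optout - baseline_optout) / baseline_optout)
    return raw_lift * max(0.0, 1.0 - penalty_weight * rise)
```

For example, if the opt-out rate climbed from 2% to 3% during the campaign (a 50% relative rise), this sketch would trim the measured lift by 5%.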
Taken together with the variables I spoke about last time, these are the four factors that we constantly adjust in determining how successful we are on behalf of clients. And we’re always looking for new ways to measure actual lift. If you have new ideas for evaluating predictive-model efficacy, please email me. I’d love to talk about them.
Wednesday, July 23rd, 2008
Collaborative filters, the heart of the recommendation engines used by companies such as Amazon and Netflix, are quite good at predicting items that might be of interest to you. Essentially, these filters work by trying to group you with people that have expressed similar preferences — whether it’s by CDs you rated, the movies you chose, or the items you bought — and then finding the items that the other people in the group like that you have not yet seen.
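For readers who haven’t seen one, the grouping step can be sketched with nothing more than set overlap. This is a deliberately naive illustration - engines at Amazon or Netflix scale use far more sophisticated similarity measures:

```python
def recommend(user, baskets, k=2):
    """Naive user-based collaborative filter over purchase sets.

    Finds the target's k most similar users by Jaccard overlap of
    their baskets, then suggests items those neighbours bought that
    the target has not, ranked by how many neighbours bought them.
    """
    target = baskets[user]

    def jaccard(other):
        return len(target & baskets[other]) / len(target | baskets[other])

    neighbours = sorted((u for u in baskets if u != user), key=jaccard, reverse=True)[:k]
    scores = {}
    for n in neighbours:
        for item in baskets[n] - target:
            scores[item] = scores.get(item, 0) + 1
    return sorted(scores, key=lambda i: (-scores[i], i))
```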
When the dimension of time is introduced into the environment, however, collaborative filters can quickly lose their predictive power. This is particularly true when filters are used in retail for regularly purchased goods. What may interest a customer at checkout time is not likely to be what interests them six months down the road.
For example, my cat has a chronic eye condition that requires semi-regular treatment. On average, we purchase eye drops about once a year though it can vary anywhere from three to 18 months. Due to the quirks of cat physiology, after application the medication drips into the cat’s mouth and, if her reaction is any indication, tastes horribly. To mitigate the yuckiness we usually give her some treats. I imagine we’re not the only ones who do this.
A good collaborative filter might find that cat treats are a good cross-sell for eye drops. I would, in fact, be likely to add cat treats to my shopping cart at check out if they were offered. A retailer using a recommendation engine would get extra business from me. A win for everybody.
Three months later, however, that same retailer is now sending me promotional emails for cat treats because their collaborative filter has no concept of time. It should be sending me offers for eye drops — that’s something I would probably be interested in buying again. Instead, after a few weeks of receiving irrelevant offers, I find myself not opening these emails or, even worse, considering opting out of receiving them. Not only has the retailer lost an opportunity to make an additional sale, but they’ve come close to losing a valuable way to communicate with me.
The key is to use recommendation engines that are specifically designed to handle the time-varying nature of email. In work we’ve been doing with customers, we’ve seen that a recommendation engine that considers when a person is most likely to purchase, coupled with the purchasing sequence of groups with similar preferences, yields results that are three times better than traditional collaborative filters. I doubt that we are the only group pioneering this concept, and I suspect that we’ll see a lot more email-specific recommendation engines in the future.
In the months and years to come, smart retailers will look past off-the-shelf recommendation engines that are optimized for cross-sell at check out. They will see that the opportunities are tremendous for those using recommendation engines that are specifically designed to understand time.
Monday, July 21st, 2008
Istobe develops predictive models that recommend which products to market to customers via email and which are the best times to market those products. But how does Istobe measure the actual ROI returned by these models? The Istobe team burns many cycles discussing measurement techniques for the lift that we are delivering to our clients. And we’re constantly updating the formulae that we use to evaluate how our predictive models actually perform in production. Ultimately, the measured lift that we generate is the result of another model where we tie in the relevant factors according to different weights. What are the relevant factors? Read on.
Our model vs. current practice or our model vs. the naive approach
This actually isn’t a debate among us but it’s the most important part of understanding what kind of monetary benefit we’re actually delivering to the customer. Oftentimes, a model’s output will simply deliver lift in contrast with the naive approach. That is, the model will assume that our client is, at worst, merely flipping a coin in terms of the next-best product for their customer. Or, at best, the model assumes that the client’s customers will likely want the most popular product. So our models self-reflexively examine their benefit against these two benchmarks. However, when it comes time to actually measure how much better our model is, we always measure against our clients’ current practices. The assumption is that our clients already have a smart strategy for targeting their customers. So we get their rules for targeting their customers and then figure out how much better our models are at generating the right type of product offering.
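The bookkeeping itself is simple percentage math. A sketch, with invented revenue figures standing in for real campaign results:

```python
def lift_vs_baselines(model_rev, most_popular_rev, current_practice_rev):
    """Express model-driven revenue as relative lift over each benchmark.

    We report against the client's current practice, but the
    most-popular-product baseline makes a useful sanity check.
    """
    return {
        "vs_current_practice": (model_rev - current_practice_rev) / current_practice_rev,
        "vs_most_popular": (model_rev - most_popular_rev) / most_popular_rev,
    }
```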
Our model’s email timing vs. typical email timing
Email timing is starting to get a lot of traction at Istobe these days. After all, if the email is never opened, then it doesn’t matter whether the product that our clients are offering is a better fit for a set of customers. And there are better and worse times to send emails if you want them to be opened. So we take into account the timing that we suggest vs. the normal send times of these emails. Basically, timing is just another part of our models’ output: the models take into account the whole path to purchasing a product, and getting an email to the right person at the right time is the first step in that process. When we track the Istobe improvement, we build email open rate into our evaluation and track how much lift we give our clients by counting how many more opens and click-throughs our models were responsible for.
That’s about enough for today but I’ll talk about two other evaluation factors on Thursday that are a little more arcane: Email influence zone and opt-out rate.
Thursday, July 17th, 2008
Maybe. But I can guarantee your revenue per customer does. And not in the way that you might believe. There is strong evidence that reducing email in an intelligent way actually increases your revenue per customer.
Just yesterday one of my colleagues asked me whether, in addition to the weekly timing of an email send, the quantity of emails sent to one person mattered. In other words, is there a limit to the email offers that a marketer should send? The intuitive answer is: of course. If we look at catalogs alone, consumer dissatisfaction with this method of direct marketing is at an all-time high. After all, no less than six websites have sprung up that allow consumers to opt out of catalogs. You’d have to have a powerful argument for me to believe that overzealous emailers are perceived any differently than overzealous catalogers.
My partner Doug Bright has already spent some time fleshing out this hidden cost of excessive email. So I’ll just add some more beef to his already meaty argument. In March, 2006, noted marketing researcher Dr. V Kumar, along with Rajkumar Venkatesan and Werner Reinartz came out with an article entitled “Knowing What to Sell, When, and to Whom.” You can see the abstract here at the Harvard Business Review. The article is utterly fantastic; you should get a hold of it.
What does this have to do with overemailing? Well, at the end of the article, the authors reveal an interesting, yet tangential, finding about email in their research. They found that purchase increases were tied to marketing communication in a strange way. It was not linear. In other words, more communication did not continually yield more purchasing. Instead, the authors found that above a certain threshold of communication, customers were put off. To quote the authors, “Clearly, many companies may be actively damaging their customer revenues in attempts to make sure that no opportunity for a sale is missed.”
The upshot is that they found that a data-driven approach to reducing marketing communication leads to “not only lower costs but to a revenue increase per customer.” They then tested this hypothesis using data-driven models and A/B testing at two client sites, and the reduced-communication strategy outperformed the traditional “blast ‘em” approach on both occasions. How much did it outperform? I’m glad you asked, because these are the truly staggering numbers. For the B2B firm they worked with, the potential profit, based on $1600 of additional revenue per customer, came to $320 million. Now the cynical might say that this was mostly a reduction in cost, and I would have to admit that’s true. However, the authors found that revenues for all product groups still increased, meaning that customers were spending, on average, $365 more under the reduced communication schedule. Similarly, at the financial services firm they worked with, the authors found an increase of $400 per customer using this data-based communication schedule.
To me, these results are unequivocal: sending too many emails not only is a waste of time and labor, it also hampers your sales. We all know it’s tempting to equate activity with results. But it may be better to turn your attention toward an intelligent use of your data to figure out who you really need to email and how many times you should email them.
Monday, July 14th, 2008
Someone asked me the other day, in response to my assertion that one-to-one marketing on a massive scale was the wave of the future, how a company could possibly send out so many personally tailored emails. Being in the local Irish pub, The Burren, I almost laughed Guinness out of my nostrils. But I couldn’t avoid the underlying message: one-to-one marketing has never really been embraced because no one thinks they do customer segmentation very well - there are too many obstacles for segmentation to be entirely useful. Ultimately, this means that few believe they have segments homogeneous enough to deliver the personalized goods.
What this also means is that one-to-one marketing is complex due to the fallacies of profiling. I once worked at a company that had such in-depth profiles for each segment that the profiles read like Faulknerian novels. At this company, I learned that our target female customer in the 35-40 range probably once wanted to visit France but was now stuck with two kids in middle America and made meatloaf once a month for a husband she rarely saw. She obviously consoled herself by buying our software.
What’s my point with all this? This kind of profiling is for low-transaction sales, nothing more. Direct marketing units with high transaction rates should never take the tack of email-blasting a segment based on its demographics - never mind writing fanciful biographies for said segment. Instead, direct marketing should ignore demographic profiles and concentrate on profiles that accomplish an immediate business goal (see below). Given the immediate needs that direct marketing normally serves, it needs a shortsighted, tactical approach, not the strategic approach that profiling represents. Below I look at the goal of getting rid of an overstock of shorts via the email channel. In doing so, I explore two important dimensions of personalization: what the segment is willing to buy, cross-referenced by when that segment most likely opens email.
The Group that Will Likely Buy Shorts Next
The truth is, the customer is not out there to buy from your company. They’re out there to purchase the product they want next, and you’re merely there as a direct marketer to insinuate yourself into the buying equation. So which segment of your customers is likely to buy shorts next? That’s the group you want to reach when your shorts have been sitting in inventory for way too long and the leaves are already falling from the trees. Is this a profiling problem? In other words, is it time to blast every demographic who might wear shorts? I suppose you could. But then you’re likely to turn some people off. If you ran your customers’ past transactions through a classification data mining task, what you’d come up with is a list of people who are likely to buy discounted shorts at that time of year. In fact, you’d probably come up with a few segments that demonstrate such a propensity, and they would definitely cut across your demographic profiles. You’ll have some moms buying shorts for their sons and some dads buying shorts for next summer’s Hawaii trip.
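A real system would train a proper classifier on rich transaction features; the toy scorer below just illustrates the shape of the idea - ranking customers by past clearance-shorts purchases near the target month. The record layout, thresholds, and names are all invented for the example:

```python
def likely_shorts_buyers(transactions, target_month=9, top_n=3):
    """Toy propensity ranking standing in for a real classifier.

    transactions: list of (customer, product, month, was_discounted).
    Each past discounted shorts purchase within a month of the target
    month adds one point; customers are ranked by score.
    """
    scores = {}
    for customer, product, month, discounted in transactions:
        if product == "shorts" and discounted and abs(month - target_month) <= 1:
            scores[customer] = scores.get(customer, 0) + 1
    return sorted(scores, key=lambda c: (-scores[c], c))[:top_n]
```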
When Is the Best Time to Reach My Shorts Group?
Almost everyone out there sends me email blasts on Tuesdays and Thursdays. Why? Well, the general belief is that this adheres to the customers’ work/open schedule. I have seen elsewhere that most emails are opened on Sundays. That’s a compelling argument. But I tend to believe that each of your potential shorts purchasers has a more personalized schedule for when they open and read emails. And that leads to the answer to the question in the subtitle: there are many best times to reach your customers who will buy your clearance shorts. The best web article I’ve read on this is by Bill Nussey of Silverpop, who argues for tuning your send times per customer based on their last-recorded response. Couldn’t agree more. In fact, I believe that timing is the hidden axis of personalization. I would actually alter Nussey’s approach just slightly: I would average a customer’s responses - giving the most recent responses just a bit more weight - to triangulate on the time your shorts buyers are most likely to open your email. For ease of use, you can bucket this into days or half-days so you don’t have to schedule an email every hour. If you record response data (opens) to your email blasts, then this really shouldn’t be a problem.
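The recency-weighted averaging I’m describing might look something like this - a sketch assuming exponential recency weights and a crude half-day bucketing. The half-life is an invented parameter, and a simple mean of hours only behaves well when a customer’s opens don’t straddle midnight:

```python
def best_send_slot(opens, half_life_days=30.0):
    """Pick a half-day send slot from a customer's past open times.

    opens: list of (days_ago, hour_of_day) pairs from recorded opens.
    Each open's weight halves every half_life_days, so recent opens
    count more; the weighted-mean hour is bucketed into half-days.
    """
    weights = [0.5 ** (days_ago / half_life_days) for days_ago, _ in opens]
    mean_hour = sum(w * hour for w, (_, hour) in zip(weights, opens)) / sum(weights)
    return "morning" if mean_hour < 12 else "afternoon/evening"
```

A customer whose recent opens cluster around 9-10 a.m. lands in the morning slot even if an old open happened in the evening, which is exactly the recency bias described above.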
So what do you ultimately have? You have customers that are most likely to want discount shorts and you have the best time to contact each of them. Now that’s personalized marketing.