As I discussed last week in Why Matchback Analysis Overstates the Importance of Catalogs, one of the most effective ways of figuring out how our direct marketing efforts drive online sales is to do holdout testing. Holdout testing is nothing more than a controlled experiment and, done correctly, is a low-risk way of producing the accurate results that matchback analysis can’t.
Let’s say we’re a cataloger and we want to know which of our online-only shoppers we can stop sending catalogs to. The simplest way to find out is to test it:
1) Separate the online-only customers into behavioral and demographic segments
If you already have a customer segmentation schema in place you can skip this step and use your existing segmentation instead. If you don’t have a schema, you have a couple of options.
You can do a manual segmentation by thinking about who your main customer groups are and what attributes they have. You can then develop rules based on those attributes to do the segmentation (e.g., Age > 55, suburban address, often buys children’s items is classified as a grandparent).
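A rule set like this can be sketched as a simple function. This is a minimal illustration, not a prescribed schema; the attribute names and the non-grandparent rules are hypothetical:

```python
def assign_segment(customer):
    """Classify a customer record into a hand-built segment.

    `customer` is a dict of hypothetical attributes; the rules below
    mirror the grandparent example and are placeholders for your own.
    """
    if (customer.get("age", 0) > 55
            and customer.get("address_type") == "suburban"
            and customer.get("buys_childrens_items", False)):
        return "grandparent"
    # A second made-up rule, just to show how the list grows.
    if customer.get("age", 99) < 30 and customer.get("address_type") == "urban":
        return "young_urban"
    return "other"

assign_segment({"age": 62, "address_type": "suburban",
                "buys_childrens_items": True})  # → "grandparent"
```

The advantage of encoding the rules this way is that every customer lands in exactly one segment, which keeps the later holdout counts clean.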
If you want a more quantitative approach and have a statistician or data miner on staff, consider using a clustering technique such as k-means or two-step. These will produce statistically sound groupings that are perfect for holdout testing. Sometimes, however, it’s not so clear what to call each group or what they look like.
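To make the k-means idea concrete, here is a toy implementation using only the standard library. The two numeric features (assumed here to be scaled age and scaled average order value) and the data points are made up; in practice you’d use a statistics package rather than hand-rolled code:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: assign each point to its nearest centroid, then
    recompute each centroid as its cluster's mean; repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(col) / len(col) for col in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Each point is (scaled age, scaled avg order value) -- made-up data.
customers = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.85, 0.8)]
centroids, clusters = kmeans(customers, k=2)
```

On this tiny example the two low-value points and the two high-value points end up in separate clusters; naming and describing those clusters is the part that still takes human judgment.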
2) Randomly choose a set of customers in each segment who will serve as the experimental group
One of the more common mistakes is selecting an experimental group that is needlessly large. We want to ensure the test doesn’t impact the business too much, so it’s important to keep these groups small. This table gives you a rough idea of how big your sample should be per segment:
| Typical Response Rate | Margin of Error |
| --- | --- |
If you typically have a higher response rate you can afford a bigger margin of error in your testing. The reverse is also true. If your response rates are smaller, you’ll need a tighter margin of error in your testing to ferret out valid results.
3) Stop sending catalogs to the randomly chosen customers in each segment and track the results
For best results, run this test over a few months and see how the response rate of the control group, which still receives catalogs, differs from that of the experimental group in each segment. If the experimental group’s response rate is only slightly lower than the control group’s, the loss in revenue may be small enough that you can save money by not sending catalogs to that segment.
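That comparison boils down to a break-even check: does the incremental revenue the catalog drives per customer exceed what it costs to send? A minimal sketch, with made-up revenue and cost figures:

```python
def catalog_worth_sending(control_rate, holdout_rate,
                          avg_order_value, catalog_cost_per_customer):
    """Return True if the incremental revenue per customer attributable
    to the catalog exceeds the cost of sending it."""
    incremental_revenue = (control_rate - holdout_rate) * avg_order_value
    return incremental_revenue > catalog_cost_per_customer

# If the holdout's response barely drops (4.8% vs 5.0%), a $1 catalog
# isn't paying for itself at an $80 average order:
catalog_worth_sending(0.050, 0.048, 80.0, 1.00)  # → False
# A bigger drop (3.0%) means the catalog is doing real work:
catalog_worth_sending(0.050, 0.030, 80.0, 1.00)  # → True
```

A real analysis would also check whether the rate difference is statistically significant before acting on it, but the decision logic is exactly this simple.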
This experimental technique succeeds where matchback fails and helps you identify segments that no longer need your marketing dollars to spur spending. Finally, you’ll know whether the catalog does indeed drive online sales.