There is no end to the improvements we can make to our sales materials. Even when we have a piece that works great, there are always further refinements we can make that will bump up the response rate.
With my own clients, I’m always testing changes to sales pieces. You have to, because with time even the best piece will begin to lose its effectiveness. Language begins to sound dated; cultural references get old; your prospects’ interests change; the makeup of your target population evolves. With a target that’s always moving, you have to keep adjusting your aim.
But one thing you have to be prepared for is that not every change brings a huge increase in response – and that's not a bad thing. Some marketers make the mistake of thinking that only a major increase in response is worthwhile. They want a home run every time, or they don't see the value in making changes at all. But in my experience, a lot of little base hits can be very effective, and may often be the best you can expect.
For example, an increase in response of half a percentage point is considered quite respectable. If your normal response rate is 2 percent, a half-point jump is a 25% increase overall. In terms of numbers, that means an additional 50 orders from a 10,000-piece mailing.
That would be great. We would all love to see results like that! But that's not always what you get. It's not unusual to make one change (like adding a second color to a piece) that leads to a bump of only 0.1% – one additional order per 1,000 pieces mailed, or 10 more orders from a 10,000-piece mailing. Okay, that doesn't sound like much, but . . .
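The arithmetic behind these figures is simple enough to sketch in code. This is just an illustration – the function name is my own, and the numbers are the article's example figures:

```python
# Extra orders produced by a lift in response rate, where the lift
# is expressed in percentage points (e.g. 0.5 means half a point).
def extra_orders(pieces_mailed: int, lift_in_points: float) -> float:
    return pieces_mailed * lift_in_points / 100

# A half-point lift on a 10,000-piece mailing:
print(extra_orders(10_000, 0.5))  # 50.0 additional orders

# A 0.1-point lift on the same mailing:
print(extra_orders(10_000, 0.1))  # 10.0 additional orders
```

Note the distinction this makes explicit: the lift is measured in percentage points of the whole mailing, not as a percentage of your current response rate.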
The Little Things Add Up!
I know an increase of a tenth of a percentage point sounds pretty small, but you should not be discouraged by this kind of outcome. In fact, you should consider it a great finding. Now you just have to find some more variables that do the same thing for you.
Let's say that in your continuous testing you carefully test a large number of variables and you find five different changes, each of which yields a 0.1% increase. These might include adding a second color, offering a bonus for ordering within 10 days, changing the headline a bit, adding a picture of the product, and adding a couple of testimonials.
You put all the changes together in one piece, and guess what? They have a cumulative effect. Five times 0.1% gives you a total increase of 0.5% – which was the home run you were looking for. But it was all because of those little base hits.
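Here's the cumulative effect as a quick sketch. It assumes the lifts simply add up, which is a simplification – in practice the changes can interact, which is exactly what the testing described below is designed to reveal:

```python
# Each value is one tested change's lift, in percentage points:
# second color, bonus, new headline, product photo, testimonials.
lifts = [0.1, 0.1, 0.1, 0.1, 0.1]

total_lift = sum(lifts)
print(round(total_lift, 1))  # 0.5 percentage points

# On a 2% baseline response rate, that is a 25% relative increase:
baseline = 2.0
print(round(total_lift / baseline * 100))  # 25 percent
```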
To have a great cumulative result like this you have to have a plan for uncovering all those powerful little pieces, and that’s where testing comes in.
How To Test
Your plan for testing is to vary just one variable at a time so that you can isolate its effect on the results. If you have a large enough list to mail to, you can carefully split your names into different groups, with each group testing a different variable, and mail all the pieces at one time so your testing isn't spread over too long a period.
If you only have a small list, you may have to use a different mailing to test each variable, which isn’t ideal. If your testing is spread over too long a time, things could change. And if you’re mailing to the same people over and over (say, your house list), that could skew results as well.
With each test you do, you want to take the piece you’re using now – what we call your control piece – and compare that to a test piece that differs in the one variable you’re testing. Send the two pieces to two groups of prospects who are equivalent in terms of demographics, etc. Then compare the results.
Here's a sample test setup; to keep it simple, we'll just look at two variables. Let's say you want to test a two-color format vs. a one-color format, and offering a bonus vs. not offering a bonus. The wrong way to run your test would be to split your list between just two pieces:
Version A: One-color format with no bonus
Version B: Two-color format with a bonus
With a setup like this, each piece changes more than one variable. If Version B does better, you won't know whether it's because of the color, the bonus, or some combination of the two. You'll still have to go back and test again.
But that doesn't mean you can't test both variables. You just have to break your list into groups logically, and with two variables, that means four separate groups are required. (Do you see why you need a larger number of names?)
Here’s how to split your test groups so you will know exactly what each variable did and how they worked together:
Version A: One-color format with no bonus
Version B: One-color format with bonus
Version C: Two-color format with no bonus
Version D: Two-color format with bonus
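Generating a version for every combination of variables is exactly what `itertools.product` does, so the four versions above can be sketched like this (the variable names are my own, and the same pattern extends to more variables):

```python
from itertools import product

# Each variable is a list of the values you want to test.
colors = ["One-color", "Two-color"]
bonuses = ["no bonus", "bonus"]

# product() yields every combination: one mailing group per version.
versions = list(product(colors, bonuses))
for label, (color, bonus) in zip("ABCD", versions):
    print(f"Version {label}: {color} format with {bonus}")
```

Running this prints the same four versions, A through D. Adding a third yes/no variable to the `product()` call would double the count to eight groups, which is why list size limits how many variables you can test at once.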
Now you’ve got it set up so you can test for each individual variable and how they interact. For example, suppose you get results like this:
Version A: 23 orders
Version C: 30 orders
Version B: 35 orders
Version D: 42 orders
Now you can see that the one-color format with no bonus did the poorest. The two-color version with no bonus did a little better. The one-color format with the bonus did better still – so the bonus is more important than color. But the two-color version with the bonus did the very best, so the two variables worked together to bring stellar results – in fact, a home run!
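You can also read each variable's individual contribution out of these four numbers by averaging over the other variable – a minimal sketch, using the article's example counts:

```python
# Order counts from the four test groups above.
orders = {
    ("one-color", "no bonus"): 23,  # Version A
    ("one-color", "bonus"): 35,     # Version B
    ("two-color", "no bonus"): 30,  # Version C
    ("two-color", "bonus"): 42,     # Version D
}

# Effect of the second color: compare the two-color groups to the
# one-color groups, averaged across bonus/no-bonus.
color_effect = ((orders[("two-color", "no bonus")] + orders[("two-color", "bonus")])
                - (orders[("one-color", "no bonus")] + orders[("one-color", "bonus")])) / 2

# Effect of the bonus: compare the bonus groups to the no-bonus
# groups, averaged across one-color/two-color.
bonus_effect = ((orders[("one-color", "bonus")] + orders[("two-color", "bonus")])
                - (orders[("one-color", "no bonus")] + orders[("two-color", "no bonus")])) / 2

print(color_effect)  # 7.0 extra orders from the second color
print(bonus_effect)  # 12.0 extra orders from the bonus
```

The bonus effect (12 extra orders) is larger than the color effect (7), which is why the bonus matters more – and starting from Version A's 23 orders, the two effects together account for Version D's 42.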
With even more variables, you'd need a more complex design. The idea is to test every possible combination against every other, and with yes/no variables each new one doubles the number of groups you need. Again, you are limited by the size of your mailing list.
But getting back to the point of this article, even small improvements, when combined with other small improvements, can really boost your sales.
So test, test, test, and turn your small base hits into home runs!
Craig Simpson is the nation's leading direct mail consultant and coach. He sends out over 200 mailings per year for his private clients. You can contact Craig at email@example.com, or to order his book, The Direct Mail Solution, go to www.TheDirectMailBook.com.