Thursday, September 19, 2024

5 tips for landing page testing

My name is Matt Roche. I am Co-President and CEO of Offermatica. Offermatica provides a scientific testing platform for applying A/B, multivariate and Taguchi testing to increase online sales.

This column shares some of our most valuable lessons from constructing and executing tests that increase the conversion of PPC traffic by testing the performance of landing pages.

We selected these five because they produced significant value and were counterintuitive. I hope that you find them useful in your planning and execution of tests to increase your conversion rates and order profitability.

#1: Never eat anything bigger than your head:

Marketers should resist the urge to test their biggest idea first.

There is pent-up demand for testing among sophisticated Internet marketers. Tests that seem like they should be easy to run are held up because they require new software or development and stagnate in the IT priority queue. When forces align and testing becomes available, marketers are often drawn to large, complex tests. One company asked us if they could “create custom landing pages for all of their 35 identified segments and test each against the default landing page.” Although that is a perfectly fine test, the amount of set-up, content creation and planning required for a test like this makes it less likely to yield useful data and a clear ROI.

An alternative test would be to identify a small number (2-3) of higher-value, higher-volume segments and design tests for them. A smaller number of tests allows for more careful thought about WHY the results are the way they are. It is very common that “sure thing” ideas turn out to be duds while strange things matter. Which brings us to our second rule:

#2: Prepare for the “Costanza effect”:

Often the EXACT OPPOSITE of what you predict will occur.

There was an episode of Seinfeld where George Costanza decided to do the exact opposite of what he would normally do. As a result, he gets a promotion and wins the girl… In testing, the “Costanza effect” dictates that a test will often produce results that are the exact opposite of what you were absolutely sure they would be.

A highly respected consultant ran a test to quantify the positive effect of including thumbnail images of his books on a page soliciting email newsletter signup. He assumed, reasonably, that the images would add credibility to the offer and would increase the likelihood of signup.

The results were the opposite. Of the 5 elements tested on the page, the presence of the book images had the greatest negative impact on sign-up. We were so surprised by the results that we ran the test again with the A and B versions flipped, and as predicted by the system, the version with no book image (now the B version) was significantly more likely to produce a sign-up.

This is not to say that a marketer’s intuition about what will work is not valuable; it is the most valuable part of the equation. When the intuition is dramatically wrong, careful analysis usually uncovers a variation that produces the desired effect. Which leads us to rule #3:

#3: 99 bottles of beer on the wall, 99 bottles of beer:

Prepare for more than one iteration if you are looking for significant lift.

The best results rarely come from the first test. Even when we bring all of our best experience to a focused experiment design, the real wins come from the second experiment that is suggested by the results of the first. In other words, designing an experiment that answers the question “why did that happen?” or builds from a “that’s interesting, I wonder what will happen when I…” typically turns out more profitably than one that starts with “we can grow sales if we…”

Every smart marketer thinks that they know how to improve conversion rate, increase response, grow sign-ups or improve average order. “If only I could remove that step from the process,” “if only I had better targeting capability,” “if only I could change the navigation from this to that.” Marketers are often held back by scarce IT resources, by conflicting demands from the BRAND or corporate marketing, or just by internal disagreement. But when they finally get a chance to run their tests, ALMOST ALWAYS, the tests yield less change or different results than expected.

The best approach comes from starting with a hypothesis like “I believe that our navigation is too complicated and if we could simplify it we would have fewer dropouts.” Next we create a series of relatively simple tests to isolate the cause of the complication and test several versions that we believe will simplify elements of the navigation. By decomposing the assumed answer into parts and trying variations of the parts we nearly always find a collection of changes that improve the overall result.

The worst-case scenario occurs when a marketer uses IT time and political capital to make a change or run a test that results in a negative or inconclusive result.

#4: Not enough monkeys:

Know how large your test population will have to be in advance.

There is a theory that if enough monkeys sat at enough typewriters, they would eventually type the complete works of William Shakespeare. Unfortunately, with fewer than enough you get mostly garbage. Take a look at the “Monkey Shakespeare Simulator” for more details. We have had several experiences where tests were run on an area of the site that received so little traffic, or had so few conversions, that it would take years to reach an answer with a reasonable level of confidence.

Fortunately it is easy to avoid this problem. At the beginning of an experiment, estimate the number of visitors and conversions on EACH BRANCH of your test. In most cases it requires between 40 and 100 conversions per branch to begin to achieve confidence. This means that if you are running an A..N test with 9 versions and your conversion rate is 2%, you will need between 18,000 and 45,000 visitors in the test to produce an accurate result.
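
To make that arithmetic concrete, here is a minimal sketch in Python. The 40-100 conversions-per-branch rule of thumb and the nine-version, 2% example come straight from the paragraph above; the function name and structure are illustrative, not a prescribed formula.

    # Rough sample-size estimate: conversions needed per branch, divided by the
    # expected conversion rate, gives visitors per branch; multiply by the number
    # of branches for the whole test.

    def visitors_needed(branches, conversion_rate, conversions_per_branch=(40, 100)):
        """Return (low, high) estimates of total visitors required for the test."""
        low, high = conversions_per_branch
        return (branches * low / conversion_rate,
                branches * high / conversion_rate)

    # The nine-version example from the text at a 2% conversion rate
    low, high = visitors_needed(branches=9, conversion_rate=0.02)
    print(f"Plan for roughly {low:,.0f} to {high:,.0f} visitors.")
    # -> Plan for roughly 18,000 to 45,000 visitors.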

For most businesses, tests should take no longer than 2 weeks and when planned properly, they can be concluded after a week.

#5: “A million here, a million there”:

If you combine enough small improvements you can create a large improvement.

There are many things that can be varied on a typical Web page to affect conversion. Unfortunately, only a small number of them will yield any significant difference, and only a few of those will IMPROVE your conversion rate or average order. As a result, marketers who have been able to construct and execute simple A/B split tests often conclude that the results do not justify the cost and time required to run them.

There is an alternative. Using relatively straightforward techniques, it is possible to test an almost unlimited number of potential page variations by only testing a few combinations or “recipes.” Imagine that you would like to test a new page treatment versus an existing “base” page. The base page has three elements, say a product image, a product description and a promotion, and the new page has the same three elements but with changes to each.

It is possible (and fairly typical) that the conversion rate will be no higher for the new page. It is also possible that one or more of the changes on the page increases conversion, but that its effect is canceled by negative effects from the other elements. For example, the new product image increases the likelihood of conversion by 10% but the new description and the new promotion each lower conversion by 5%.

We can construct a test that identifies the impact of each of the elements in their default and new versions so that you can create a new theoretical “best” page. In this example, that would be the new product image with the old description and promotion.
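
As a rough illustration of how such a test can be read, here is a minimal sketch in Python using the three elements above. The design (four “recipes” rather than all eight combinations, with each new version appearing in exactly half of them) and the conversion rates are made up to match the earlier example, in which the new image helps by about 10% and the new description and promotion each hurt by about 5%; this is not Offermatica’s actual implementation.

    # Four "recipes" (a fractional design) instead of all eight combinations;
    # each element's new version appears in exactly half of the recipes.
    # Conversion rates are illustrative, not real data.
    recipes = [
        ({"image": "old", "description": "old", "promotion": "old"}, 0.0200),  # base page
        ({"image": "new", "description": "new", "promotion": "old"}, 0.0209),
        ({"image": "new", "description": "old", "promotion": "new"}, 0.0209),
        ({"image": "old", "description": "new", "promotion": "new"}, 0.0181),
    ]

    best_page = {}
    for element in ["image", "description", "promotion"]:
        # Main effect = average rate when the element is new minus average when it is old.
        new_avg = sum(rate for combo, rate in recipes if combo[element] == "new") / 2
        old_avg = sum(rate for combo, rate in recipes if combo[element] == "old") / 2
        best_page[element] = "new" if new_avg > old_avg else "old"

    print(best_page)
    # -> {'image': 'new', 'description': 'old', 'promotion': 'old'}
    # The theoretical "best" page keeps the new image with the old description and promotion.

The point is not the particular numbers but that a handful of recipes is enough to attribute the lift (or drag) to individual elements.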

This approach is useful with three elements, but it becomes even better when you are testing 5, 7 or 10 elements or a smaller number of elements in 3 or 4 variations. By running a cycle of tests that starts with testing a large number of elements in two versions to find which make a difference and then testing the important elements in 3-4 variations, we regularly see conversion improvements of 15% – 45% and higher.
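
To see why rule #5 adds up, consider a back-of-the-envelope sketch: if several winning changes hold up independently, their lifts multiply. The individual lifts below are purely illustrative.

    # Several modest per-element lifts compound multiplicatively.
    element_lifts = [0.10, 0.07, 0.05, 0.04, 0.03]  # five hypothetical winning changes

    combined = 1.0
    for lift in element_lifts:
        combined *= 1 + lift

    print(f"Combined improvement: {combined - 1:.0%}")
    # -> Combined improvement: 32%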

Like the world of stocks and bonds and Billy Beane’s Oakland A’s, science and quantitative analysis will inevitably hit the world of online selling. I hope these tips help you to enter this next phase with great success.

Matthew Roche has a BA from Yale University and founded Fort Point Partners Inc., now called Offermatica. Under his guidance, Fort Point built ecommerce applications for Nike, J.Crew, Best Buy and many others.

Offermatica has captured the experience gained with these ecommerce leaders and produced a plug-and-play ASP testing platform that provides quantitative testing tools to boost online sales.
