Wednesday, November 13, 2024

Why 100 Percent Conversion Is a Very Bad Thing

Even within my own company, opinions vary dramatically about how useful web reporting (as opposed to analysis) actually is.

Some of us tend to think reporting is just another one of those things clients insist on and then don’t find much use for. A majority still believe that reporting is essential – but have to admit that it often falls far short of our expectations in terms of usefulness. I fall mostly into this second camp.

This means, however, that we agree that reporting is often much less useful for a client than we’d hoped. This is, frankly, puzzling. The web channel is large and fairly complicated – and it stands to reason that managers and stakeholders will be able to make better decisions if they know what’s going on in the channel. Why isn’t this always and obviously the case?

Before I delve directly into that issue, I want to talk in this post about a second issue that I think bedevils many report sets and also makes many users unreasonably negative about their importance. I call this issue the myth of actionability.

The myth of actionability is conventional wisdom in web analytics – and it suggests that you shouldn’t report on anything unless changes in the measured value can be directly addressed by specific actions. In other words, if you can’t answer the question “What would I do if the value changed up/down?” then you shouldn’t report on the measure.

This criterion is designed to eliminate useless data from report sets and ensure that what remains has substantive value.

Unfortunately, I believe the criterion of actionability is unsound in almost every way: it is both wrong-headed about the purpose of reporting and impossible to satisfy in the real world.

Let’s start with the first point. Reporting is designed to provide key decision-makers within an organization with information about the web channel. It is an article of faith – and I think a reasonable one – that the deeper the understanding those decision-makers have of the channel, the better their decisions are likely to be. In this context, the criterion of actionability can be taken to read: only measures that suggest specific actions lead to a deeper understanding of the web channel.

Put this way, it already seems a lot less convincing. It implies that all the levers in the system must already be known and widely understood. If this weren’t true, the report system would miss key pieces of information because a measurement would not appear actionable (no lever) when it actually was. I think it is fair to suggest that the key levers in the web channel are not always, or even often, known.

A proponent of the actionability criterion might argue that when the levers become known, new measures can be added to the report system – but until then the information would just be clutter. I don’t think that’s quite right. Managers make most of their decisions on instinct – and probably always will. And people can often internalize information and use it without being able to codify how it translates into a specific lever. To me, if a reporting item seems to deepen understanding, it may well play a role in some action even if the actionable levers are completely unknown.

My second objection is that unless “action” is construed quite broadly (yelling at a subordinate or commissioning a study), measurements cannot possibly tie directly to an actionable lever.

Here’s an example that shows why. Suppose an online retailer tracks the site’s average cart size as an essential measure. This seems reasonable – it surely captures an important part of the business. Now suppose that the average cart size changes for the worse. What’s the action?

There is, of course, no way to know from this information. Indeed, there may not even be a necessary action. Let me outline three scenarios that each might explain this change:

  • The business introduced a new low-end product that is generating significant sales.
  • The business’s SEO effort has significantly improved rankings – changing the mix of prospect quality and interest and bringing in more first-time, low-end shoppers, whose smaller purchases pull down the overall average cart size.
  • The company stopped a cross-sell promotion on the shopping cart page.

Now, a report set will probably capture #1 in some fashion (the new product’s sales will show up somewhere). It might capture #2, though only across a more complex set of measures. It probably won’t capture #3 at all (except through this very measure). Yet of these three scenarios, only the third is immediately actionable. The second is probably a positive, not a negative. And the first may be either positive or negative depending on other information and business goals.
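
To make this concrete, here’s a minimal sketch in Python – the order counts and prices are entirely hypothetical – showing how all three scenarios produce the same symptom (a lower average cart size) while meaning very different things for the business:

    # Hypothetical numbers only: three different causes, one identical symptom.
    def avg(carts):
        return sum(carts) / len(carts)

    baseline = [80.0] * 100                  # 100 orders averaging $80

    # 1. A popular new low-end product adds many small orders.
    new_product = baseline + [20.0] * 40

    # 2. Better SEO rankings bring in first-time, low-end shoppers.
    seo_mix = baseline + [35.0] * 30

    # 3. The cross-sell promotion on the cart page is stopped,
    #    shaving a few dollars off every order.
    no_cross_sell = [c - 12.0 for c in baseline]

    scenarios = [("baseline", baseline), ("new low-end product", new_product),
                 ("SEO traffic mix", seo_mix), ("cross-sell removed", no_cross_sell)]
    for name, carts in scenarios:
        print(f"{name:20s} avg cart = ${avg(carts):6.2f}  revenue = ${sum(carts):8.2f}")

Every scenario shows the same drop in average cart size – yet total revenue actually rises in the first two and falls only in the third. The single measure cannot tell you which world you’re in.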

In other words, a downward trend in a key measurement might be good, bad or indifferent! It might necessitate an action or it might not. And the levers for moving it might or might not be available or known.

Oddly enough, there is a close parallel to this problem in the scientific world. Once upon a time, philosophers of science advanced the idea that science worked through a process of falsification. Scientists advanced theories and then tested them. When a theory was falsified, they advanced new theories to explain the data. On this view, falsifiability was the key requirement for a scientific theory: it is the potential of a hypothesis to be falsified that gives it meaning and makes it scientific (just as we think of measures as meaningful because they are actionable). This view was enormously popular and is still the layman’s view of how science works.

Sadly, it is also completely wrong. Philosophers quickly demonstrated that no test can, in principle, falsify a single theory in isolation: any test confronts the theory together with the whole system of background assumptions it resides in, so a failed prediction can always be laid at the door of one of those assumptions instead. Science cannot possibly be an exercise in pure falsification.

The same is true of measurement. No single measure can ever suggest an action – it cannot, in fact, even be interpreted directionally as either good or bad. Only in the context of a complete view of the business system (and the knowledge that all other things are equal or heading in some specific direction) can a judgment be made about the meaning of a single measure. I think this makes it clear that no one measure can ever really be “actionable” when taken in isolation. And if no one measure is actionable, then surely the criterion of actionability is fruitless.

Let me give another example, suggested by CableOrganizer’s Paul Holstein. Paul asked me (rhetorically) whether 100% was a good conversion rate. He was annoyed by always hearing about sites with 30% or 40% conversion rates. My response was “no.” And while I’d be pretty damn certain my answer was right, I can’t prove it in every case. My thinking (and Paul’s) is that if a site has a 100% conversion rate, it is not driving enough traffic to actually maximize revenue. There must be other marketing methods that would drive additional traffic – traffic that might drop the conversion rate to 95% (or even 1%) but still lead to more profit. It’s really just another example of a Laffer curve: you don’t maximize tax revenues with a 100% tax rate. On the other hand, there is no way in purely theoretical terms to establish where the actual maximum lies. And, in fact, there is no reason to believe that the maximum won’t shift over time as markets evolve.
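
To see the Laffer-curve logic in miniature, here’s a sketch in Python. Everything in it is an assumption – the margin, the visitor cost, and especially the invented decay curve under which each marginal visitor converts less well than the last – but it shows how profit can peak at a conversion rate far below 100%:

    # All parameters are hypothetical; the decay curve is invented for illustration.
    MARGIN_PER_ORDER = 50.00   # assumed profit per converted visitor
    COST_PER_VISITOR = 0.40    # assumed acquisition cost per visitor

    def conversion_rate(visitors):
        # Invented decay: a tiny, perfectly targeted audience converts at
        # nearly 100%; each additional visitor converts less well.
        return 1.0 / (1.0 + visitors / 1000.0)

    def profit(visitors):
        orders = visitors * conversion_rate(visitors)
        return orders * MARGIN_PER_ORDER - visitors * COST_PER_VISITOR

    # Scan traffic levels and find the profit-maximizing volume.
    best = max(range(100, 50001, 100), key=profit)
    for v in (100, best):
        print(f"visitors={v:6d}  conversion={conversion_rate(v):6.1%}  profit=${profit(v):10,.2f}")

In this made-up model, letting conversion fall from roughly 91% to under 9% multiplies profit nearly tenfold. But the curve, its parameters, and therefore its peak are pure assumption – which is exactly the point: nothing in theory tells you where the real maximum sits, or keeps it from moving.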

From this, it should be clear that even the venerable “conversion rate” cannot be meaningfully tracked as directionally “good” or “bad.” A drop in conversion rate might be just what the doctor ordered to improve profitability. It should also be clear that there is no such thing as a “good” conversion rate except in the context of a specific market, specific business, specific web site and specific set of marketing initiatives.

A true contrarian might, at this point, insist that site profitability is not susceptible to interpretation in this manner. But if that means the only measure you can report on is total site profitability, then we can all pretty much pack our bags and start working on something other than web reporting. And, in any case, profitability is susceptible to exactly this type of argument. Many a business has cannibalized long-term revenue for short-term gain – and many a web site has done the same. Want to drive up short-term profitability? Add very aggressive pop-ups to your site on exit. Will this enhance your site long-term? Not very likely.

The upshot of all this is that no single measurement is ever directly actionable – and cannot, in principle, ever be.

All of which may help explain why report sets aren’t – and never will be – as immediately useful as proponents of the myth of actionability like to suppose. But none of which really explains why we, too, are often disappointed in the report sets we generate. I’ll tackle that issue next time and then start to dive into the features in the tools that actually do make reporting easier, better and more useful.


Gary Angel is the author of the “SEMAngel” blog – Web Analytics and Search Engine Marketing practices and perspectives from a guru with 10 years’ experience.
