Friday, September 20, 2024

Benchmarking, Web Analytics and Functionalism

I had a comment/question yesterday that raised an essential issue about Functionalism, web analytics, and how to think about them.

(Quick Note to Readers: you’ve probably noticed that we don’t show comments on posts – we made a decision when we began that the Blog is not intended as a discussion area but more as a corporate viewpoint. I don’t know how wise that is, but we do read and reply to every comment, and they often trigger new posts. So I do welcome comments even if we don’t show them – and feel free to comment on that as well.)

The question was this: what is a good score for a Router page – 50%, 70%, 10% – and is there a standard benchmark an analyst can use? The question applied to Router pages, but it could just as easily be asked of any other page type in the Functionalist library.

I’ll start by saying that this is a question we get frequently from our clients. And not just about Functional concepts. Clients want to know how their conversion rate compares to the competition, how their PPC performance stacks up, and so on.

These are good questions. In fact, knowing how you compare to the competition (or to some gold standard) is one of the most valuable pieces of contextual information a marketer or analyst can have. It’s also one of the most difficult to obtain and use correctly – and it’s something we’ve usually had to punt on.

Let me explain why.

Let’s start with the obvious. You can’t compare your web site to just anyone’s. A portal is a fundamentally different beast than an e-commerce site, a customer support site, an operations site or a multi-purpose corporate site. The performance of pages on these different sites will invariably differ. There is no possibility of a meaningful comparison – even at the Functional level. That’s why benchmarking firms go out of their way to compare apples to apples. Nor is it likely that a seller of a $100K service will have page performance similar to a seller of a $20 product. The sales cycles are just too different for the way visitors use your pages to be comparable.

But how similar does an apple need to be? Here, I think there’s actually some good news. Without something like Functionalism, the apples are never going to be similar enough. Even your closest competitor will have such different sourcing, online campaigns, pass-by traffic and merchandising mix that basic conversion rates will be meaningless. Let’s say your closest competitor has a 4% conversion rate and you have a 3.5% conversion rate. Is your website underperforming? Possibly. But what if you also know that you have twice the organic volume of the other site on keywords that tend not to drive conversions – and that without that extra volume, your website conversion rate would be 4.1%? Are you better? Possibly. But there are sure to be a thousand other differences that make comparisons almost useless. Nor is conversion a “gold standard” – you certainly need to understand revenue per customer and probably even net income per customer.
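
To make that arithmetic concrete, here is a minimal sketch of a traffic-mix adjustment in Python. The segment split and all counts are invented, chosen only to reproduce the 3.5% and 4.1% figures above; they are not real data.

```python
# Illustrative only: invented visit and conversion counts showing how netting
# out one low-converting traffic segment changes a sitewide conversion rate.

segments = {
    # segment name: (visits, conversions) -- assumed numbers
    "organic_low_intent": (20_000, 220),    # high-volume keywords that rarely convert
    "everything_else":    (80_000, 3_280),
}

total_visits = sum(v for v, _ in segments.values())
total_conversions = sum(c for _, c in segments.values())
print(f"Sitewide conversion rate: {total_conversions / total_visits:.1%}")   # 3.5%

# Recompute with the low-intent organic segment netted out.
v, c = segments["organic_low_intent"]
adjusted = (total_conversions - c) / (total_visits - v)
print(f"Conversion rate excluding low-intent organic: {adjusted:.1%}")       # 4.1%
```

The same kind of segment-level netting is exactly what you cannot do for a competitor, which is why their headline 4% figure is so hard to interpret without knowing their mix.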

Variations in website traffic quality are much larger than in same-store comparisons by region for traditional retailers. That means that many reasonable comparative measures in the bricks-and-mortar world don’t work well when applied to the internet.

Functionalism can help to bridge that gap. By comparing pages with specific functions, you’ve netted out a considerable quantity of the traffic quality variation and self-selection that can otherwise mar comparisons. But comparisons will still be vulnerable to many significant differences.

Indeed, this problem isn’t isolated to understanding competitive benchmarking – it makes it a continuing challenge to measure page performance over time. Because your own traffic mix is always changing, the visitors you drive to your web site are always varying in quality. If you add a PPC program, chances are that every KPI on every page is going to change – sometimes dramatically and sometimes subtly. This isn’t all a bad thing – it can help you understand how your PPC traffic differs from (and is similar to) other channels. But it also means that a simple comparison of before and after page performance won’t necessarily be meaningful.
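
Here is a small, hypothetical illustration of that mix effect: the per-channel behavior on a page is unchanged, but adding a PPC channel with different visitor quality moves the blended KPI anyway. All counts are made up.

```python
# Illustrative sketch: why a simple before/after page KPI comparison can mislead
# once a new PPC program changes the traffic mix. All numbers are invented.

before = {  # channel: (page_views, exits)
    "organic": (10_000, 3_000),   # 30% exit rate
    "direct":  (5_000, 1_000),    # 20% exit rate
}
after = {
    "organic": (10_000, 3_000),   # per-channel behavior unchanged
    "direct":  (5_000, 1_000),
    "ppc":     (15_000, 7_500),   # new, lower-quality traffic: 50% exit rate
}

def exit_rate(mix):
    views = sum(v for v, _ in mix.values())
    exits = sum(e for _, e in mix.values())
    return exits / views

print(f"Page exit rate before PPC: {exit_rate(before):.1%}")  # 26.7%
print(f"Page exit rate after PPC:  {exit_rate(after):.1%}")   # 38.3%

# Segmented by channel, the page behaves exactly as it did before;
# the apparent decline is entirely a traffic-mix effect.
for channel, (views, exits) in after.items():
    print(f"  {channel}: {exits / views:.1%} exit rate")
```

Segmenting the before/after comparison by channel is usually the first step toward separating a real page change from a change in who is arriving.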

The problem of measuring improvement when multiple variables are changing at once will always be with us (and not even multivariate testing will solve it entirely). Much of the real work for an analyst is trying to ensure that, aside from the variable you’re testing, everything else is as constant as the real world will allow.

All this being said, there are ways to think about the KPIs within Functionalism that can help you. With Routers, for instance, exits are much more sensitive to visitor quality than sideways routes. If you change sourcing and see a significant uptick in exits, this might signal a simple decline in visitor quality. And if there is a decline in visitor quality, you should see a matching decline on the landing page. If you don’t, then it may be that the Router page really doesn’t work as well for the new source. And since sideways routes are much less vulnerable to visitor quality, a Router page losing a majority of its traffic to sideways routes is nearly always in need of re-design.
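
As a rough sketch of how these Router KPIs might be tallied, the snippet below classifies where visitors go next from a Router page and applies the sideways-majority rule of thumb from the paragraph above. “Exit” and “sideways” follow the terms used here; “forward” is my assumed label for traffic routed onward, and all counts are invented.

```python
# A rough sketch of tallying Router-page outcomes. "Exit" and "sideways" follow
# the terms used in the post; "forward" (routing on to deeper content) is an
# assumed label for the remaining traffic. Counts are invented.

router_outcomes = {
    "exit":     2_400,   # left the site from the Router page
    "sideways": 5_300,   # moved to a peer page instead of deeper content
    "forward":  2_300,   # routed onward to the content the page is meant to serve
}

total = sum(router_outcomes.values())
shares = {outcome: count / total for outcome, count in router_outcomes.items()}
for outcome, share in shares.items():
    print(f"{outcome:>8}: {share:.1%}")

# The heuristic from the paragraph above: a Router losing a majority of its
# traffic to sideways routes is nearly always a re-design candidate.
if shares["sideways"] > 0.5:
    print("Majority of traffic goes sideways: likely re-design candidate")
```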

Even more important, you need to think about Functionalism as supporting a process – one of continuous measurable improvement. You use Functional KPIs to measure an existing state, suggest possible changes, and then try them. By A/B testing, you can screen off virtually every exogenous effect. But even with simple time-based rotation testing, you can be pretty sure – in the absence of dramatic changes surrounding your site or business – that you’re measuring a real effect.

Part of the reason you can be sure is that the KPIs allow you to measure relatively subtle shifts in behavior over fairly short periods of time. So you can see whether a Router is performing differently over a time frame that makes significant outside effects unlikely.
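
One simple way to ask whether a short-window shift in a Router KPI is real rather than noise is a two-proportion z-test on, say, the exit rate across two rotation periods. This is not a method prescribed by the post, just a common sanity check; the counts below are invented.

```python
# A minimal sketch of checking whether a shift in a page KPI between two
# rotation periods is likely real: a two-proportion z-test on exit rates.
# Counts are invented for illustration.

from math import sqrt
from statistics import NormalDist

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value for a difference in rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Week 1 (old page) vs. week 2 (revised page): exits out of Router page views.
z, p = two_proportion_z(successes_a=1_480, n_a=5_000, successes_b=1_320, n_b=5_000)
print(f"Exit rate A: {1_480 / 5_000:.1%}, B: {1_320 / 5_000:.1%}, z = {z:.2f}, p = {p:.4f}")
```

With a few thousand page views in each period, even a shift of a couple of percentage points in exit rate shows up clearly, which is why short time frames can still be informative.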

All of which leads me back to benchmarks. Of course, if 90% of the visitors on a Router page exit or go sideways, it needs a re-design. But 50%? 30%? That’s harder to say – indeed, impossible to say in the abstract. Perhaps by using the Functionalist paradigm and gathering information from a wide variety of sites, we might begin to see enough true patterns emerge to make a good industry-vertical benchmark practical. We, unfortunately, are well short of the amount of data necessary to see whether such benchmarks are even possible – much less publish them back to the world.

However, by isolating functions and appropriate measurements, you can get a much better sense of how the pages on your web site compare to one another. That can help you decide which pages to target for testing changes. And it can provide you with a nearly bullet-proof method for telling whether you’ve gotten better.

Naturally, you’d still like to know how you really compare. And someday, perhaps, we’ll have apples-to-apples comparisons on the web that are as useful and available as those in traditional retail. But while Functionalism may bring that day a tad closer, it is a long way from making it a reality.


Gary Angel is the author of the SEMAngel blog – Web Analytics and Search Engine Marketing practices and perspectives from a guru with 10 years of experience.
