
A is for Analytics…

Being a father of two young girls, I'm more than a little familiar with alphabet books. You know the ones I mean.

They have a page for each letter of the alphabet. In the upper-left corner of each page is the letter in question – printed in upper and lower case – usually in some ornate font that makes it difficult for small children to recognize but that looks really good. The thing about alphabet books is that while it's quite possible to write an alphabet book, it's virtually impossible to write one that's any good! The format is so inherently restrictive and artificial that it is almost certain to defeat even the best author. And if alphabet books fail as stories, they are just as flawed instructionally – lacking the simplicity and repetition (as well as the interest) necessary for learning letters, sounds and reading.

And no, I haven't gotten my entries confused and posted from my children's book blog onto my analytics blog. What actually brought alphabet books to mind was a meeting I had this past week that reminded me of a particularly insidious web analytics implementation strategy.

With so many companies rushing to acquire or build web analytics expertise and a real drought of knowledgeable practitioners, it's inevitable that agencies and internal measurement departments alike are going to struggle to find their footing. And it's probably not surprising that they will end up hiring lots of folks from the traditional BI world.

Unfortunately, web analytics practice is quite distinct from traditional BI practice – and the implementation strategy I'm talking about was a perfect example of why pulling old BI methodologies unthinkingly into the web analytics world can be both attractive and a really bad idea.

Here's the method. The business client team develops the set of reports they want. The agency/measurement group then decodes those reports into a set of data models required to support them. Having accomplished this, the vendor then develops tags that capture the data model. When the implementation is done, the client is guaranteed to get the reporting they want – because the vendor has done a vast amount of detail work to make sure that every single report item is captured in the tags.
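To make the chain concrete, here's a minimal sketch of the method in TypeScript. Every name in it (ReportRequirement, decodeReports, the report and variable names) is invented purely for illustration – no real vendor's API or tagging syntax is implied:

```typescript
// A minimal sketch of the report-driven method. All names are hypothetical;
// no real web analytics vendor API is implied.

// Step 1: the client hands over a set of report definitions.
interface ReportRequirement {
  reportName: string;   // e.g. "Orders by Campaign"
  dimensions: string[]; // what the report breaks results down by
  metrics: string[];    // what the report counts or sums
}

// Step 2: the vendor decodes the report set into the data model, i.e. the
// full list of data items the page tags must capture.
function decodeReports(reports: ReportRequirement[]): Set<string> {
  const dataModel = new Set<string>();
  for (const report of reports) {
    report.dimensions.forEach((d) => dataModel.add(d));
    report.metrics.forEach((m) => dataModel.add(m));
  }
  return dataModel;
}

// Step 3: every item in the data model becomes a slot in the page tag,
// and only report items become slots. Anything absent from the report
// set is simply never tagged.
const tagSlots = decodeReports([
  { reportName: "Orders by Campaign", dimensions: ["campaign"], metrics: ["orders", "revenue"] },
  { reportName: "Visits by Referrer", dimensions: ["referrer"], metrics: ["visits"] },
]);

console.log([...tagSlots]); // ["campaign", "orders", "revenue", "referrer", "visits"]
```

The comment on step 3 is the part to notice: under this method, anything that never appears in a report never gets tagged.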

It's all right out of Data Modeling 101 from your basic MBA BI handbook, isn't it? It's also fundamentally misguided.

The original BI implementations where this sort of methodology was used were built in a completely different world. They were extracting data from large corporate systems whose contents and uses were well known to at least a few "subject matter experts." These subject matter experts generally knew exactly what information they needed – often they were already getting that information, just not in a timely or convenient fashion. What they didn't know was how to get that information easily.

In that world, the IT analyst could rely on the "subject matter expert" to really understand the underlying data, its purposes, and their own needs. In addition, the translation from reports to data model was an essential step, because the IT analyst needed to map the necessary reports back to complex internal database systems and build a reporting system that would do the necessary translation work.

For that world, a “report-based” BI process really worked. It made sense.

That it doesn't even remotely apply to the world of web analytics is probably obvious from the description above. First, the web marketing manager is by no means a subject matter expert in web analytics the way an accounting or HR expert is on the data in their internal corporate management systems. Web product and marketing managers almost never know what data is actually available, how web analytics data translates into real-world behavior, and what kinds of reporting and analysis are necessary and effective.

Without a subject matter expert, then, the whole idea of getting requirements as a set of reports is deeply flawed. But the problems don’t end there.

The classic BI analyst had to build a data model because a BI system had to translate the data stored in some internal system into a database reporting platform. Inferring the data model from the reporting requirements was absolutely essential – it was a step that simply couldn’t be ignored.

There is no corresponding translation in web analytics. The reporting system and data model already exist – captured in the enterprise software delivered by the vendor. So most of the work done in building a data model is unnecessary, and serves only to mask the essential problem with this approach.

And that problem is that, at root, this approach confuses the purpose of web analytics with web reporting – virtually ensuring that both will be implemented poorly.

The simple fact is that most of the information you need to do analysis is not generally captured in reports. And shouldn’t be. Analysis is invariably a deep dive into the data. Reporting is a business tracking and alerting tool. The types of data necessary for reports and analysis overlap – but are often widely disparate.

The BI "report" methodology might (if you ignore the absence of a real subject matter expert) capture all of the requirements necessary for reporting – but it would still leave out a great deal that is necessary for analysis.

So why would anyone adopt this approach?

Well, there are reasons, some of which go beyond just plain old muddy thinking (and there's every reason to believe that most practitioners of the classic BI methodology don't have the faintest clue as to its roots and limitations).

I've remarked elsewhere that I think laziness is THE fundamental fact of human nature when it comes to business. And this methodology definitely plays to that trait. Because the report set is client-generated, the client is on the hook to provide virtually all the intellectual property. So an agency or internal measurement department doesn't really have to know much about the business or the analytics. Believe me, it's a lot easier to spend your time culling a data model from a report than it is to come up with a real analytic strategy. So despite looking like a lot of work, this method is intellectually easy.

In addition, a measurement system that supports nothing more than an overly complicated and unreadable reporting system is pretty much guaranteed to maintain the status quo. And not everybody has a real interest in strong, pointed measurement and analysis – there are always people benefitting from the status quo. So this method can create the appearance of exhaustive measurement without any of the discomforting reality.

Best of all, this methodology creates an environment that is nearly risk free. If and when the client needs some piece of data for analysis and it isn’t available, then who is at fault? The client! Because the client didn’t put the requirement into the report set. Which – given how risk averse big vendors are – may be the most attractive aspect of the whole approach.

Lots of hours. No thought. No risk. If it somehow involved a trip to Hawaii, it would be nirvana.

Of course, business managers aren’t completely fooled by this approach. So they try to anticipate their needs and build everything they can think of into the reports. This results in a KPI Stew (http://semphonic.blogs.com/semangel/2006/09/kpi_stew.html) – and makes a mess of the reporting too – by including so much information that the basic report set is unmanageable and complex.

From a vendor perspective, that's just fine. The more complicated the report set is, the more hours they get to bill against the decoding. And here's the really wonderful part – the client knows they need some piece of data tagged, but instead of just saying so, they have to think of a report to include it in so that the vendor can data-model it back out of the report! Think how much less time the vendor could bill if the client just listed the data items they needed.
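For contrast, here's what that direct route might look like – again a purely hypothetical sketch, with every item name invented for illustration:

```typescript
// A hypothetical direct requirements list: the data items the client
// actually wants tagged, stated plainly, with no report scaffolding.
const taggingRequirements = [
  { item: "internalSearchTerm", reason: "analyze search-to-purchase paths" },
  { item: "formAbandonStep", reason: "diagnose registration drop-off" },
  { item: "contentCategory", reason: "segment visitors by content interest" },
];

// One pass and the tagging spec is done. Under the report-driven method,
// each of these items would first have to be dressed up as a report
// nobody wants, then decoded back out of it again.
for (const req of taggingRequirements) {
  console.log(`tag ${req.item}: ${req.reason}`);
}
```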

Instead, the business user has to do more work by building the requirement into a useless report. Then the analyst spends lots of time decoding the reports into a data model to extract the data item the client had in mind originally. The analyst then wastes more time creating the report that nobody wanted in the first place. And if the client objects, they'll probably be told not to tag data unless they have a real use for it in their reporting.

Beautiful.

All of which brings me back to my alphabet books. Like an alphabet book, a tagging implementation based on a report set is structurally bound to be bad at the two functions it is designed to serve – analysis and reporting. Just as it's virtually impossible to tell a good story when you have to have 26 sections running from A to Z, it's nearly impossible to capture your analytic requirements in a report set – the two functions just aren't the same. And just as no one ever really learned their letters (much less how to read) from an alphabet book, a good report set isn't built this way either. In trying to make a report set capture all the variables you know you need, you usually end up with a lousy, complex and unmanageable report set.

The bottom line is this – a report set can be an input into the tagging implementation process (though a good vendor will help the client understand what's useful and what isn't), but it shouldn't be the bible – the definitive word on what needs to be tagged and what doesn't. And to really do the job right, you need to have carefully considered your likely analytic directions in the near future and ensured that your implementation supports those needs. Anything less is a prescription for failure.

Borrowing old BI methods for data modeling and applying them to web analytics tagging implementations isn't just sloppy thinking – it has real advantages. Unfortunately, all of them accrue to the vendor, not the client. It gives the vendor a formal process, takes lots of work, makes them look smart, and lets them use bodies like interchangeable pieces. In the end, that's what makes it so dangerous. An unsatisfactory process with fewer advantages to the vendor would probably be quickly discarded. But in a world where web analytics is poorly understood and greatly in demand, it is definitely a case of buyer beware!

Gary Angel is the author of the SEMAngel blog – web analytics and search engine marketing practices and perspectives from a guru with ten years of experience.
