
Web Traffic Analytics and User Experience

“Analytics.” The word sounds technical, number-crunchy, maybe even a bit boring. We information architects and user experience folks tend to prefer dealing with the real users, the designs, and the creative expression of our ideas, and not so much with the numbers. We spend our time developing prototypes, testing designs with users, and then interpreting those results for a creative solution that provides outstanding user experiences. But our exposure to the data and measurement end can be limited, or nonexistent.

By looking at the data on what users do on the site, however, you can enhance your effectiveness as a specialist in the user. You already have information and knowledge gained through observation and direct questioning of individual users. Now, you can add to that insights gained from the broad swath of information pulled during their actions on the site. These numbers represent the real-world behavior and interests of the user.

By looking at this information you will get a fuller picture of the user’s behavior, not in a lab, but in the true user environment. Looking at this information in conjunction with user experience research can yield tangible benefits. It can transform your role as the “research guy/gal” at your organization into a strategic position. It can provide quantitative data that backs up your qualitative findings, since there are some people out there who still don’t “believe” qualitative research. Finally, web analytics can help you learn whether your recommendations really work and further cement your value as a user experience expert.

“But where does data come from?”

This real behavior data is generated by software programs that read and interpret the many browser “requests” that pour into the site server every second of every day. Viewed broadly, Web Traffic Analysis (WTA) can be considered a form of data mining. Data mining is the process of delving into large databases and looking for patterns and relationships in order to better understand customer behavior. Other types of data that can be processed to glean customer insights include loyalty program data (such as frequent flyer or club rewards), purchase data, customer service contacts, and response to promotional activity (i.e., web registration, email response, mail-in response).

Web traffic analytics refers specifically to looking at the data that comes from user activity on a given website. It may be used in combination with other data but, for the sake of simplicity, we’re just going to look at user data on a website in order to focus on the most popular area of user experience research.

In most cases, web data comes from log files, which were originally used by webmasters to monitor the level of traffic on their servers. Today, log file software serves more of a business and marketing purpose, catching all (or most of) the requests that users make to the website’s servers and dumping those requests into a database. The software then translates the coded requests into tables, charts, and graphs that give a picture of what users are doing on the site. Other types of web measurement tools exist that embed a “tag” into the code of each page and then track the activity on each page through the server. This data is then pulled into a software program or through an Application Service Provider (also known as an ASP) that allows the analyst to interpret the data.
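If you’re curious what those tools are actually chewing on, here is a minimal sketch of the idea in Python. It is an illustration only, not any vendor’s method: it assumes log lines in the common Apache/NGINX “combined” format, and the sample lines and regular expression are invented for the example. Real packages do far more, such as filtering out search-engine robots and grouping requests into visits.

# Minimal sketch: tallying successful page requests from a web server access log.
# Assumes the common Apache/NGINX "combined" log format; real tools do much more.
import re
from collections import Counter

LOG_LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+) [^"]*" (\d{3})')

sample_log = [  # invented log lines for illustration
    '203.0.113.5 - - [12/Jun/2003:10:15:32 -0500] "GET /index.html HTTP/1.1" 200 5120',
    '203.0.113.5 - - [12/Jun/2003:10:16:01 -0500] "GET /products.html HTTP/1.1" 200 7300',
    '198.51.100.9 - - [12/Jun/2003:10:16:40 -0500] "GET /index.html HTTP/1.1" 200 5120',
]

page_views = Counter()
for line in sample_log:
    match = LOG_LINE.match(line)
    if match and match.group(2) == "200":   # count only successful requests
        page_views[match.group(1)] += 1

for page, views in page_views.most_common():
    print(f"{page:20s} {views} views")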

Where to begin? At the beginning – with objectives

Don’t worry – I won’t end this article without providing some suggestions on what metrics user experience people should measure. However, it’s always critical to keep in mind the main purpose of the website you’re working on. Is it a marketing site, mainly existing to provide basic information to customers about the company? Is it a content site, deep with information and articles? Is it a commerce site? Are there any key calls to action that you want site visitors to answer? Business to business? Customer service? Websites can and usually do have multiple objectives. Your future analysis will be tied directly back to these objectives, so it is critical to gain agreement on the website’s specific objectives prior to measurement efforts.

Write down the key objective(s) of your site. As user experience folks, that should be a breeze. Of course, sometimes website management gets so diffuse that it can be difficult to get everyone to agree on what the site needs to achieve. Setting up a measurement plan provides a good opportunity to revisit and confirm those objectives. Without objectives, you will be looking at piles and piles of reports, graphs and numbers, and you will have a tough time culling down the data into something manageable.

Once you have the objectives in hand, it is much easier to identify what you need to measure. For example, if a “call-to-action” objective is getting visitors to sign up for email updates, then you want to be sure to measure all the site activity that leads up to and includes that sign-up.
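As a simple illustration, the little sketch below treats that sign-up path as a funnel and reports how many visitors survive each step. The page names and visit counts are invented; the point is only that every step on the way to the sign-up gets measured, so you can see exactly where people drop off.

# Hypothetical funnel for an "email sign-up" call to action.
# Page names and visit counts are invented for illustration.
funnel = [
    ("Homepage",             10_000),
    ("Newsletter info page",  2_400),
    ("Sign-up form",            900),
    ("Thank-you page",          400),   # completed sign-ups
]

total_entries = funnel[0][1]
previous = total_entries
for step, visits in funnel:
    print(f"{step:22s} {visits:6d}  "
          f"{visits / total_entries:6.1%} of entries, "
          f"{visits / previous:6.1%} of the previous step")
    previous = visits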

A goal that is common to most web measurement programs is to keep track of the data on an ongoing basis. This is typically called “trending,” which is usually done on a month-by-month basis, but depending on the issues you are working with you may want to trend data on a daily or weekly basis. Trending is useful to be able to track the baseline activity, and then to note when there are ups and downs. These can often be used to link to other online efforts (such as advertising or promotions) as well as to outside or “offline” marketing or business efforts that might affect website traffic.
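Here is one way the trending itself could look in a few lines of Python. The monthly totals are invented; in practice they would come straight out of your monthly reports. Anything that jumps or drops by more than some threshold (20 percent here, purely as an assumption) gets flagged for a closer look.

# Hypothetical monthly visit totals, trended month over month.
# The figures and the 20 percent threshold are assumptions for illustration.
monthly_visits = {
    "Jan": 6_800, "Feb": 7_100, "Mar": 6_900,
    "Apr": 7_000, "May": 7_000, "Jun": 10_000,
}

months = list(monthly_visits)
for prev, cur in zip(months, months[1:]):
    change = (monthly_visits[cur] - monthly_visits[prev]) / monthly_visits[prev]
    note = "  <-- investigate (promotion? press? seasonality?)" if abs(change) > 0.20 else ""
    print(f"{prev} -> {cur}: {change:+.0%}{note}")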

Some very useful metrics

There are some basic metrics that I always like to look at, no matter what site I’m analyzing. Typically, you will receive data in the form of monthly summary reports. This is standard practice, so unless you have some reason to look at shorter or longer periods of time, start out using monthly reports.

I’ve sorted these general metrics into two types:

Overall site metrics: This is data that aggregates all the site activity over the time period. It is looked at on an ongoing basis and is the core overview of data trending. It also helps in deriving metrics.

Page-level metrics: This is data that is examined for specific pages. Usually you would look at page-level metrics for all the important pages of the site, such as the homepage, key sub-navigation pages, and all the pages identified in your objectives as important to achieving the site’s goals.

The table below describes some of the metrics typically available for website analysis.


Table 1. Basic Site Metrics

Once I’ve looked at these general metrics, I start doing some very simple number crunching. I call this analysis derived metrics, since it involves using the data that already exists and doing some calculations and comparisons.

Deriving metrics involves a bit of spreadsheet finesse. At this point, I copy the page-level and site-level metrics from the web-based report and paste them into a spreadsheet. This method provides all the raw data about the number of site visits, page visits, page views, and so on. What I can then do is calculate percentages in order to get an idea of the relative importance of each page. This weighting helps because the raw numbers are often hard to interpret, while percentages are easy to compare to each other and to rank.
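If you prefer a script to a spreadsheet, the same arithmetic takes only a few lines. The counts below are invented, chosen only to echo the fictional June figures discussed under Table 2; the calculation is simply each page’s visits divided by the total visits to the site.

# Derived metrics: turn raw page visits into percentages of all site visits.
# Counts are invented, loosely echoing the fictional June data discussed below.
site_visits = 10_000                    # total visits to the site for the month
page_visits = {
    "Homepage":            6_000,
    "Giveaway Promo Page": 5_000,
    "Product Page":        2_200,
    "Registration Form":     500,
}

shares = {page: visits / site_visits for page, visits in page_visits.items()}
for page, share in sorted(shares.items(), key=lambda item: item[1], reverse=True):
    print(f"{page:22s} {page_visits[page]:6d} visits   {share:5.1%} of all visits")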

Derived metrics can provide some amazing insights into what’s really going on with your site. Typically, this is the methodology that is most proprietary and protected in the world of web traffic analytics. But, as a way to illustrate this technique, I have created data for a fictional site that shows a point-in-time analysis, as well as how data could be trended over time.


Table 2. Sample Site Activity Data

In this example, the website received 7,000 visits in May and 10,000 visits in June. I note that the percentage of all visits that went to the homepage was fairly stable, from 57 percent in May to 60 percent in June. From this I conclude that about the same number of people are getting to the website from the usual sources – search engines, advertising, and so on.

However, in June, the Giveaway Promo Page received 5,000 visits, which means that half of all visits to the site went to this page. This is good news, as it was the key promotional event. But if we go down the page, we see that the percentage of visits to the Registration Form went from only 1 percent in May to 5 percent in June. We would of course want to look at the actual number of registrations, but even without access to that data, we could assume that the promotion was encouraging visitors to register.

There also seems to be a lift from the promotion to the product areas of the site: in May the product page received only 16 percent of visits, and the two featured products each reached only 2 percent of visits. But in June, the product page reached 22 percent of visits, and the single featured product reached 6 percent. This shows that not only did the promotion bring in more registrations, but it also generated a higher interest in the products featured on the site.


Graph 1. Sample Trend Graph

In this example, I hope you can see how much more valuable it is to calculate the percentage of visits rather than rely on the raw numbers alone. While the raw numbers contain the same information, it is hard to compare data from month to month, and page to page, without a common baseline.

Graph 1 makes the point visually. This type of trend graph would be important to keep up as part of an ongoing effort. In fact, for any registration page, it is wise to track visits and page views for each time period, and to know what might contribute to their rise and fall. Graph 2 shows an example of this for the first half of 2003.


Graph 2. Sample Trend Graph

You have the technology, you have the data – now what?

OK, so now that you’ve put all this data together, it’s time to do the analysis. The first time you look at website data, it will be hard to know exactly what to look at. I usually look for patterns, and then look for breaks or exceptions in those patterns. I look at the data from those pages that were established in the objective-setting stage as critical to the website’s success.

I also like to understand whether there is a big jump or drop-off at the entry or exit pages. This helps me understand how the site retains visitors, and where visitors leave, whether they’ve met their objective or lost interest. For example, if out of 1,000 people visiting your site, only 20 (2 percent) get beyond the homepage, this suggests that the interior content of the site is not drawing people in.

I look to see if the percentage of people leaving the site from a particular page is higher than the site-wide average. So, if about 30 percent of visitors typically leave the site from any given page, and 60 percent of visitors leave from a particular page, I want to know more about what’s happening on that page. I may then do some clickstream analysis to see how people arrived at that page. It may be that people completed a task at that page, and then left the site. Or, it may be that the page was too complex, and users were frustrated.
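A sketch of that exit-rate check might look like the following. The visit and exit counts are invented, and the “flag anything at double the site average” rule is just an assumption; the idea is to surface pages whose exit rate stands well above the norm so you can dig into why.

# Flag pages whose exit rate is well above the site-wide average.
# Visit and exit counts, and the 2x threshold, are assumptions for illustration.
pages = {
    # page:            (visits, exits)
    "Homepage":        (6_000, 1_200),
    "Product Page":    (2_200,   700),
    "Order Confirmed": (  400,   240),   # a task ends here, so exits are expected
    "Help / FAQ":      (  900,   540),   # high exits here may signal frustration
}

total_visits = sum(v for v, _ in pages.values())
total_exits = sum(e for _, e in pages.values())
average_exit_rate = total_exits / total_visits

for page, (visits, exits) in pages.items():
    rate = exits / visits
    note = "  <-- look closer (clickstream? task completion?)" if rate >= 2 * average_exit_rate else ""
    print(f"{page:16s} exit rate {rate:5.1%} (site average {average_exit_rate:5.1%}){note}")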

Even though this is working with numbers, there are no hard and fast answers. I do a lot of comparisons. I sort and resort. I reexamine the site content and strategy. I check the site objectives. And usually some interesting things emerge.

A good example of a useful look at web logs prior to redesign was a project I worked on for a nonprofit organization. The site’s mission is to provide help to parents on early childhood development. They provide online content, and produce and sell books and videos directly to parents. The site’s objectives could be articulated as: 1) to provide compelling online content for the target audience; and 2) to sell the organization’s books and videos. The organization already knew they weren’t selling a lot of books and wanted to improve in that area.

The website itself had a high average visit time – just over nine minutes per visit. In looking at the data, we saw that people spent a lot of time on three key pages: one page on parenting information for dads, one on advice about getting children to sleep, and one other that contained a parenting quiz. We also found out that a large percentage of people were arriving and exiting through these pages.

This data suggested at least two critical insights: First, the site had compelling content about child-rearing that people enjoyed interacting with. So we could check off objective one – almost. The data showed that people would read a single article, then leave the site. This pattern indicated the site was not doing the best job it could at either suggesting additional interesting content or connecting that content to the products they offered.

There was a strong opportunity to feature more links and navigational elements that would draw the site visitor to similar content, and point them to related books or videos on the topic. Without the web traffic analysis, we may have addressed some of these issues, but the WTA gave us specific direction as well as backup for our design recommendations. The results were impressive – an increase in traffic up to three times the level prior to the redesign, and a 300 percent increase in sales.

When is WTA most useful?

I’ve found that WTA helps the most in the following situations:

1. Site redesigns
2. Zeroing in on challenging issues
3. Confirming the value of user experience

Data-driven redesign

Redesign is probably the bread and butter of the IA community. We are constantly challenged with how to make existing sites better serve current or new customers. Before undertaking traditional user experience activities for redesign (i.e., heuristic analysis, user testing, wireframe development), it’s worth checking existing log files to see what the user activity has been. The WTA can help establish specific objectives for the redesign, determine what the site is already doing well, and where it is weak. WTA often provides good evidence for what areas of a site need to be fixed; while the conclusions of a heuristic analysis can be seen as subjective, web traffic logs are more objective.

Zeroing in

WTA also helps when you are struggling to understand a particular problem or issue on your site, or when there may be disagreement about how to address it.

I faced this problem some time ago when I worked on a measurement project for a car manufacturer. In this situation, the page featured two large and awkwardly designed buttons that linked to a brochure download and a test drive, respectively. We felt strongly that the goal of the site should be to drive visits to the dealers, and that the large brochure link would siphon users off from the more critical link to sign up for a test drive. This thought did occur to us before looking at the data, and in fact was discussed as the page was in development. A quick deadline kept the team from pushing the client on this issue. But once the page launched, everyone wanted to make sure it was as successful as possible, and we came back to the initial question.

The WTA confirmed our suspicions. It showed that only a handful of users were signing up for the test drive, but the brochure download was getting a lot of activity. We also learned that after the brochure was downloaded, users typically left the site, rather than coming back to sign up for a test drive.

Our recommendation was to make the test drive button much more prominent, and to feature the brochure download deeper within the site. We were convinced that the number of visits to the test drive page would increase.

The nice thing about WTA when combined with user experience smarts? We could measure it! And indeed, after the redesign, we did see an increase in the percentage of users who registered. Which brings us to the last area where WTA comes in handy…

Showing that a focus on user experience works

Nobody said that UX and IA folks are perfect. Fortunately, we have the ability to check whether our recommendations have been successful or not. The critical factor in knowing whether what you’ve done has worked is to set up a simple pre- and post-redesign measurement.

Be sure to hold onto those log files that you looked at prior to your redesign. Look at the recommendations you made and what aspect of the design was changed. Then, after the site is redone, wait a month or so and check the log files again. Did you solve that problem where users dropped off before registering for the site? The data should tell you.
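The before-and-after comparison itself can be as simple as the sketch below. The figures are invented; the real ones would come from the log-file reports you saved before the redesign and the reports you pull a month or so afterward.

# Simple pre/post check of a redesign, one month of data from each period.
# The figures are invented; pull the real ones from your saved reports.
before = {"site_visits": 9_500, "registrations": 95}    # month before the redesign
after  = {"site_visits": 9_800, "registrations": 310}   # month after the redesign

rate_before = before["registrations"] / before["site_visits"]
rate_after = after["registrations"] / after["site_visits"]

print(f"Registration rate before: {rate_before:.1%}")
print(f"Registration rate after:  {rate_after:.1%}")
print(f"Relative change:          {(rate_after - rate_before) / rate_before:+.0%}")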

This can be useful when either internal team members or clients start to question the value of user experience. If you can pull up a little bit of information that shows how registration levels improved following user testing and redesign, then you have a powerful argument that is hard to ignore.

You can also maintain a small web traffic database that notes how site activity changes when designs change. You may want to check the website traffic each time the home page is tweaked or the navigation is adjusted to see how site activity changes. After any design decision is made, it is useful to keep some data at hand to note corresponding changes in user activity. With this data, you can increase your understanding of how design affects user behavior and response.
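That running record does not need to be anything fancy. Here is one possible shape for it, sketched as a plain CSV file; the file name, the columns, and the sample figures are all assumptions, and the example row loosely echoes the test-drive story above.

# A tiny "design change log": one CSV row per design change, with the metric you watch.
# The file name, columns, and figures are assumptions for illustration.
import csv
import os
from datetime import date

LOG_FILE = "design_change_log.csv"
row = {
    "date": date.today().isoformat(),
    "change": "Made the test-drive button more prominent on the model page",
    "metric": "test_drive_signup_rate",
    "value_before": 0.010,
    "value_after": 0.032,
}

write_header = not os.path.exists(LOG_FILE)
with open(LOG_FILE, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=row.keys())
    if write_header:
        writer.writeheader()
    writer.writerow(row)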

Make it so

Web Traffic Analytics not only keeps us honest, it can, in the best possible sense, help justify our existence and our value. When clients can see that user experience recommendations lead to specific, positive changes in the behavior on their site, that’s powerful stuff. And the user experience expert becomes someone to consult not just when testing is needed, but as an integral part of the design and development process.

*Originally published at Boxes and Arrows

An experienced researcher, moderator and user-experience specialist, Fran Diamond develops and designs customized experience-research projects through her company Firstwater, which she founded in 1999. She applies both qualitative and quantitative techniques that bring user insights into the development and design of Internet and web projects. With a background in traditional marketing and advertising as well as in digital solutions and technology, Fran bridges both online and offline marketing, communications and business strategy. Fran has a master’s degree in Integrated Marketing Communication from Northwestern University. She can be reached at fran (at) firstwater.net.
