
Comparing Search Engine Results – My Experiment

A couple of months back I was covering the launch of a new shopping search engine. As part of my event coverage I was invited into the beta group; prior to the launch, only a handful of people [mostly employees and their family members] were able to take part in the Become beta test.

Read Jason’s coverage of Become.com’s launch.

But Michael gave me an open invite and said he would welcome my feedback, good or bad. With that I said, hmmmm, can I break this? After I played around trying to break it, I realized my time might be better spent doing some actual research.

So that’s what I did. Initially, I created a spreadsheet comparing the search results of the top search engines for a generic shopping-related term, “furniture”. But when I saw what the data looked like, I decided to dig a bit deeper: I limited the scope of the engines I compared but broadened the scope of the terms I used in the comparison.

Before I tell you what terms I used and how I picked them, I’d like to first let you in on a little secret. I had a hypothesis. Yes, that’s right, the guy who never brought his high school lab notebook to class actually had a hypothesis before he conducted an experiment. Dr. Scott McCord would be proud.

My Hypothesis On Shopping Search Result Comparison
I believe there is no significant difference in the search results of Google, Yahoo and Microsoft. This statement applies to their search results in general, but for the sake of limiting the scope of this experiment, I will be focusing solely on shopping-related queries.

The Method
Step 1: Getting a List of Objective Shopping-Related Search Terms

First, I selected keyword phrases from Froogle’s year-end Zeitgeist list for 2004. This consisted of 6 different categories of queries with 10 keyword phrases per category, giving me a total of 60 terms and [counting the top 10 results per term] 600 results to use in my experiment.

Second, I used the top 20 queries on Shopping.com for each week from June 2004 through February 2005. Even though there are 20 queries for each week, there are a lot of duplicates across those lists, so in order to get 60 unique queries from Shopping.com I had to go all the way back to June 2004 [I sketch this dedup step in code below]. I thought that was pretty funny in and of itself.

Lastly, I used the top 100 searches on MySimon for the week ending February 25, 2005. This list contained 100 queries that were completely unique.
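For those curious about the mechanics, here’s a rough Python sketch of that dedup step. The weekly lists below are hypothetical placeholders, not the actual Shopping.com queries, and this shows the idea rather than the exact tooling I used.

    # A minimal sketch of the dedup step, assuming the weekly top-20
    # lists were already collected into a list of lists, newest week
    # first. The queries below are hypothetical placeholders.

    def first_n_unique(weekly_lists, n=60):
        # Walk the weekly lists in order, keeping only the first
        # occurrence of each query, until n unique queries are found.
        seen = set()
        unique_queries = []
        for week in weekly_lists:
            for query in week:
                if query not in seen:
                    seen.add(query)
                    unique_queries.append(query)
                    if len(unique_queries) == n:
                        return unique_queries
        return unique_queries  # ran out of weeks before reaching n

    weekly_top_queries = [
        ["ipod", "digital camera", "mp3 player"],  # hypothetical week 1
        ["ipod", "laptop", "digital camera"],      # hypothetical week 2
    ]
    print(first_n_unique(weekly_top_queries, n=4))
    # -> ['ipod', 'digital camera', 'mp3 player', 'laptop']

The point is simply that each new week contributes fewer and fewer fresh queries, which is why I had to reach back so many months to hit 60.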

Step 2: Data Aggregation and Comparison of Search Results
In order to determine the similarity of the engines’ search results, it’s important to know how unique each engine’s results are. To accomplish this, I ran 3 different sets of comparisons [one for each of the 3 sources of keywords noted in step 1 above]. Then, within each report, I compared the top 10 search results per keyword phrase across the search engines.

If a URL was found within the search results of a single search engine [engine A] but didn’t exist in the top 10 results of any of the other engines [engines B, C & D], then that URL was counted as unique for the engine in which it was found [engine A]. This comparison could be drilled down even further if people request that.

For example, I could compare the URLs in the search results of Search Engine A to the URLs of Search Engine B to see how 2 specific engines fare head to head. For my purposes, though, I only wanted to compare each engine’s results against all of the other engines so I’d have an unbiased report.

Let me give an example to clarify that.
For the term furniture, the URL “http://www.nhfa.org/consumer.asp” was found within the results of Become but not in the results of Google, Yahoo or MSN, so it counted as a “unique result” for Become but not for any of the other search engines.
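If you want that logic spelled out, here’s a rough Python sketch of the uniqueness count. The engine names match the ones I tested, but aside from the nhfa.org example above the URLs are placeholders, and this is the idea rather than my actual tooling.

    # A minimal sketch of the uniqueness count, assuming each engine's
    # top 10 results for a given term are stored as lists of URLs.
    # All example.com URLs are placeholders, not real result data.

    def count_unique_results(results_by_engine):
        # For each engine, count URLs that appear in its top 10 but
        # in no other engine's top 10 for the same term.
        counts = {}
        for engine, urls in results_by_engine.items():
            others = set()
            for other_engine, other_urls in results_by_engine.items():
                if other_engine != engine:
                    others.update(other_urls)
            counts[engine] = sum(1 for url in urls if url not in others)
        return counts

    # Hypothetical top-result slices for the term "furniture":
    results = {
        "Become": ["http://www.nhfa.org/consumer.asp", "http://example.com/a"],
        "Google": ["http://example.com/a", "http://example.com/b"],
        "Yahoo":  ["http://example.com/b", "http://example.com/c"],
        "MSN":    ["http://example.com/c", "http://example.com/a"],
    }
    print(count_unique_results(results))
    # -> {'Become': 1, 'Google': 0, 'Yahoo': 0, 'MSN': 0}
    # The nhfa.org URL counts as unique for Become, just like in the
    # furniture example above.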

The graphs below show the uniqueness of each search engine per keyword set. The keyword sets [explained in step 1] are labeled Froogle, Shopping.com and MySimon for clarification. Some terms overlapped between those 3 sources of keywords, but for objectivity’s sake I felt it was important not to merge the lists and run a single report. I wanted to see how each search engine fared when compared on 3 completely different sets of shopping-related keyword searches, without trying to create one master list.

[Chart: Froogle comparison of 60 shopping terms and 600 results]
[Chart: mySimon.com comparison of 100 shopping terms and 1,000 results]
[Chart: Shopping.com comparison of 60 shopping terms and 600 results]
From these charts I think it’s fairly clear that the top search engines have a long way to go before their results can be considered truly unique. On the other hand, since I started this little experiment as a way to test Become’s algorithm and whether or not they were bringing something truly unique to the table, I think the results speak for themselves.

When compared to the major search engines, Become delivered the most unique URLs on every list of keywords tested. They also tended to return more deep results in their top 10 than the other engines. By deep results, I mean pages other than the homepage or main domain of a web site.
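To make that concrete, here’s one way you could test whether a URL counts as a deep result in Python, based on my definition above [a path that goes beyond the root of the domain]. It’s a simplification for illustration, not Become’s actual classifier.

    # A minimal sketch of a "deep result" check: a URL whose path
    # goes beyond the root of the domain. This is my reading of the
    # definition above, not any engine's internal logic.

    from urllib.parse import urlparse

    def is_deep_result(url):
        # The homepage has an empty path or just "/"; anything
        # longer points below the top of the site.
        path = urlparse(url).path
        return path not in ("", "/")

    print(is_deep_result("http://www.nhfa.org/consumer.asp"))  # True: a deep page
    print(is_deep_result("http://www.nhfa.org/"))              # False: the homepage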

When I did a “spot check” of the actual quality of the results from Become versus the Big 3 search engines, I felt [I know, I know, you’re not supposed to have feelings when you’re conducting an experiment] that Become’s results were much better from the standpoint of shopping research. I was frustrated, however, that I couldn’t do any comparison shopping once I found a product that met my needs, but I’m told that feature is currently in development.

On the “big picture” side of things, it’s a bit concerning that Google had the fewest unique results of all the engines compared. My guess is that the other engines do indeed crawl Google’s index from time to time, both to seed their own indexes and to conduct qualitative checks of their own.

The mentality at most search engines is “we need to beat Google, team, now go do it!” and I’m certain many a search engineer has crawled the results of their competitors as a gauge of how well they’re actually doing at uncovering those hidden gems [URLs] of the web.

Another culprit behind Google’s low uniqueness factor is that they tend to return the primary domain of a site in their results rather than the deep pages found within it. I’m not sure why this behavior exists, but I’ve spoken with many an SEO who said their client was in the first position for their main keyword, but only for the homepage and not for the page they want visitors to land on.

I’m predicting there will be a change along these lines at Google and the other major engines, because it would result in higher conversion rates for their natural results, which means their advertisers and users would be happier as well.

Advertisers will think “heck, if my natural results did that good, just think how good my PPC campaign will do,” and consumers will be more satisfied in general because they’d be getting better results right from the start, instead of having to drill down into the sites found in the results and hunt for what they were looking for all over again.

I hope you find this data and my interpretation of it interesting. Feel free to comment on this post or to email me directly [marketingshift at gmail.com] with your thoughts. As always, I’m open to criticism. If you’d like a copy of the 3 lists of terms I used to conduct my experiment, just let me know.

Jason Dowdell is a technology entrepreneur and operates the Marketing Shift blog.
