Tuesday, November 5, 2024

Escaping Google’s Supplemental Dead Zone

A discussion has recently been gaining steam throughout the blogosphere about exactly how Google’s supplemental results are determined, and what steps webmasters can take to rescue these left-for-dead pages and return them to the main index.

Something of a black hole of search results, Google’s supplemental index appears to be growing at a steady, almost ominous rate. As the company refines how it ranks and indexes pages, the pages left behind are finding themselves relegated to the “dead zone” of search.

What is a Supplemental Result? According to Google:

A supplemental result is just like a regular web result, except that it’s pulled from our supplemental index. We’re able to place fewer restraints on sites that we crawl for this supplemental index than we do on sites that are crawled for our main index. For example, the number of parameters in a URL might exclude a site from being crawled for inclusion in our main index; however, it could still be crawled and added to our supplemental index.

I want to meet the PR person who wrote this, because this is some of the best spin I have ever seen. Google wants webmasters to be happy that the supplemental index places “fewer restraints” on sites – never mind the fact that there’s virtually no way a site sitting in the supplemental index will rank highly for any keyword.

Let’s just say, for argument’s sake, that a webmaster would want his/her site to rank highly in Google’s main index. (This is a pretty bourgeois concept, I know.) What steps would need to be taken to avoid ending up in the dead zone?

First, we should look at how sites end up there in the first place. Search Engine Guide’s Matt McGee outlines some trouble spots that could contribute to a site’s inclusion in the supplemental index:

•   Duplicate content. This is often the main reason a page ends up in the supplemental index.

•   Too many variables (parameters) in the URL. Google mentions this on the help page quoted above.

•   Poor overall link profile. Matt Cutts specifically mentioned earlier this year that the Bigdaddy software upgrade would result in more supplemental results for “sites where our algorithms had very low trust in the inlinks or the outlinks of that site. Examples that might cause that include excessive reciprocal links, linking to spammy neighborhoods on the web, or link buying/selling.”

•   The page is buried. Orphaned pages are candidates to go supplemental. These are pages which can only be reached by a deep crawl of your site’s internal links, or pages which can’t be reached at all.
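
On that last point, orphaned pages can be tough to spot by eye. Just as a rough illustration – my own sketch, not something McGee prescribes – here’s a bit of Python that crawls your internal links from the homepage and flags any known pages it never reaches. The starting URL and the “known pages” list are placeholders you would swap in from your own site or sitemap.

    # Rough sketch: crawl internal links breadth-first from the homepage and
    # report any "known" pages that are never reached (i.e. orphaned pages).
    # Requires the third-party requests and beautifulsoup4 packages.
    from collections import deque
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    START_URL = "http://www.example.com/"          # placeholder homepage
    KNOWN_PAGES = {                                # placeholder page inventory,
        "http://www.example.com/widgets.html",     # e.g. pulled from your sitemap
        "http://www.example.com/old-press-release.html",
    }

    def crawl_internal(start_url, max_pages=500):
        """Return the set of internal URLs reachable by following links."""
        site = urlparse(start_url).netloc
        seen, queue = {start_url}, deque([start_url])
        while queue and len(seen) < max_pages:
            url = queue.popleft()
            try:
                html = requests.get(url, timeout=10).text
            except requests.RequestException:
                continue
            for tag in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                link = urljoin(url, tag["href"]).split("#")[0]
                if urlparse(link).netloc == site and link not in seen:
                    seen.add(link)
                    queue.append(link)
        return seen

    reachable = crawl_internal(START_URL)
    for page in sorted(KNOWN_PAGES - reachable):
        print("possibly orphaned:", page)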

There doesn’t seem to be a big mystery here. These types of improvements are common when it comes to SEO, and you would be hard-pressed to find anyone knowledgeable about search who didn’t already understand the importance of these fundamental concepts.

In a Google Groups post critiquing a site, however, Vanessa Fox lets us in on a tip that may not be so obvious:

Looking at your site in the search results, it appears that your pages would be well served by meta description tags. For most queries, the generated snippet is based on where the query terms are found on the page, and in those cases, your results are fine. But for some more generic queries, where a logical snippet isn’t found in the text, the generated snippet seems to be coming from the first bits of text from the page – in this case, boilerplate navigation that is the same for every page.

Rusty Brick at Search Engine Roundtable breaks down Vanessa’s suggestion:

In summary, by adding a meta description tag, a unique one, for each page, Google will use that information as extra criteria to determine the uniqueness of the page. That is how I understand it. Otherwise, Google will use the top text of your page’s content, and that can potentially be your top navigation or worse. This comes in handy for conducting site: command searches with no keyword specific data given after the site command.
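
To make that concrete, here’s a quick Python sketch – my own illustration, not code from Vanessa or Rusty Brick – that stamps a unique meta description tag onto each page from a short, page-specific summary. The page names and summaries are invented examples.

    # Rough sketch: build a unique meta description tag for each page, so
    # Google has page-specific text to draw a snippet from instead of
    # boilerplate navigation. Page names and summaries are made-up examples.
    from html import escape

    PAGES = {
        "widgets.html": "Hand-made blue widgets, with specs, pricing and photos.",
        "about.html": "Who we are, where we are located and how to contact us.",
    }

    def meta_description(summary, max_len=155):
        """Return a <meta> description tag, trimmed to a snippet-friendly length."""
        text = " ".join(summary.split())          # collapse stray whitespace
        if len(text) > max_len:
            text = text[:max_len].rsplit(" ", 1)[0] + "..."
        return '<meta name="description" content="%s">' % escape(text, quote=True)

    for page, summary in PAGES.items():
        print(page, "->", meta_description(summary))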

That’s a great start to the process, but what other techniques can webmasters employ to avoid being sucked into the dead zone?

Joe Whyte offers these tips:

•   Get inbound links to your orphaned pages, and make sure they are quality links that are relevant to your page’s topic.

•   Remove all duplicate content and write your own copy for your pages.

•   If these supplemental pages have little or no content, create more.

Once you make these changes, your pages will be revisited and re-cached. You can also create a Google sitemap to help Google crawl more pages of your site.
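
On the sitemap point, a Google sitemap is just an XML file listing the URLs you want crawled. Here’s a minimal Python sketch that writes one – the URLs are placeholders, and you would still need to submit the finished sitemap.xml to Google.

    # Rough sketch: write a basic XML sitemap (sitemap.xml) listing the pages
    # you want Google to crawl. The URLs below are placeholders.
    from xml.sax.saxutils import escape

    URLS = [
        "http://www.example.com/",
        "http://www.example.com/widgets.html",
        "http://www.example.com/about.html",
    ]

    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url in URLS:
        lines.append("  <url><loc>%s</loc></url>" % escape(url))
    lines.append("</urlset>")

    with open("sitemap.xml", "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")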

The Google Success blog adds:

•   Rewrite your page title and description tags so that they are descriptive and relevant to your site, taking care that they are not too long and don’t contain repetitive keywords.

•   For e-commerce websites, you may need to rewrite your PHP code and use mod_rewrite to simplify the cryptic URLs, and also add unique meta tags to each page.
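
The real URL clean-up would be done with Apache mod_rewrite rules rather than anything in Python, but as a rough sketch of the idea, here’s what mapping a cryptic, parameter-heavy product URL onto a clean, crawlable path looks like (the product.php pattern and the cat/id parameters are made up for illustration):

    # Rough sketch of the URL clean-up idea behind mod_rewrite: turn a cryptic,
    # parameter-laden e-commerce URL into a simple, crawlable path.
    # The parameter names (cat, id) and the slug format are invented examples.
    import re
    from urllib.parse import urlparse, parse_qs

    def clean_url(cryptic_url):
        """Map e.g. /product.php?cat=widgets&id=42 to /widgets/42/ ."""
        query = parse_qs(urlparse(cryptic_url).query)
        cat = query.get("cat", ["misc"])[0]
        pid = query.get("id", [""])[0]
        slug = re.sub(r"[^a-z0-9]+", "-", cat.lower()).strip("-")
        return "/%s/%s/" % (slug, pid)

    print(clean_url("http://www.example.com/product.php?cat=Blue+Widgets&id=42"))
    # -> /blue-widgets/42/

On the server side, a matching mod_rewrite rule would translate the clean path back to the real PHP script and its parameters.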

Hmm, maybe there’s something to this meta tag thing after all, as this is the second mention of the importance of having unique tags on each and every page. I guess it really is the “little things” that make all the difference.

Of course, don’t tell that to Jason Calacanis; he still thinks that SEO is bullsh*t.

In any event, if you’re finding that your page is trapped in the desolate landscape of Google’s dead zone, you might try applying some of these optimization techniques in your efforts to escape.

My apologies in advance to Stephen King; I swear I’m not ripping you off.


Joe is a staff writer for Murdok. Visit Murdok for the latest ebusiness news.
