The concept of page cloaking has come under fire again because the technique is being used by a number of legitimate sites to protect or hide their content from users and/or search engine bots. The fact that these sites are not punished for using cloaking techniques has become a sore spot with some bloggers.
Wikipedia defines cloaking as:
Cloaking is a black hat search engine optimization (SEO) technique in which the content presented to the search engine spider is different from that presented to the users’ browser. This is done by delivering content based on the IP addresses or the User-Agent HTTP header of the user requesting the page. When a user is identified as a search engine spider, a server-side script delivers a different version of the web page, one that contains content not present on the visible page. The purpose of cloaking is to deceive search engines so they display the page when it would not otherwise be displayed.
Basically, you are presenting search engine bots with a certain kind of content while delivering different content to the site visitor. Normally, the cloaked pages are created to fool search engines in order to get better result rankings. However, what if you are using cloaking procedures for legitimate reasons like protecting paid content or serving different content based on the visitor’s IP address? Should sites doing this be subject to the same penalties? It depends on whom you ask.
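To make the mechanics concrete, here is a minimal sketch in Python (the bot signatures and page text are invented for illustration and are not taken from any site discussed here) of the kind of server-side check the Wikipedia definition describes: the server decides which version of a page to return by inspecting the User-Agent header of the incoming request.

# Illustrative sketch of User-Agent-based cloaking; bot list and page content are hypothetical.
# Uses only the Python standard library so it can be run as-is.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Substrings commonly found in search engine crawler User-Agent strings (not exhaustive).
CRAWLER_SIGNATURES = ("Googlebot", "Slurp", "bingbot", "msnbot")

FULL_ARTICLE = b"<html><body><p>Full article text, served to crawlers.</p></body></html>"
PAYWALL_PAGE = b"<html><body><p>Please log in or subscribe to read this article.</p></body></html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_agent = self.headers.get("User-Agent", "")
        # The cloaking decision: crawlers get the full content, everyone else gets the paywall.
        is_crawler = any(sig in user_agent for sig in CRAWLER_SIGNATURES)
        body = FULL_ARTICLE if is_crawler else PAYWALL_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CloakingHandler).serve_forever()

A check like this is trivial to spoof by changing the browser's User-Agent string, which is one reason sites that cloak also key on the requester's IP address, as the Wikipedia definition notes.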
On the Graywolf SEO blog, readers are asked how they save New York Times articles, since those articles are only freely available to the public for a limited time. Once an article reaches a certain age (two weeks), the NYT hides it unless the Google crawler (or another search engine bot) requests it – behavior that fits the definition of cloaking, and something Graywolf takes the search engines (and the NYT) to task over.
Philipp Lenssen of Google Blogoscoped also takes issue with Google seemingly allowing WebmasterWorld to cloak its pages, which goes against the search engine’s webmaster guidelines. For his post, Lenssen conducted a search related to CMS and PHP, and a WebmasterWorld post held the first position. However, when he tried to access the page from the search results, he was taken to a login page – another example of cloaking in action (unfortunately, when I try to duplicate the search, I am taken directly to the content).
Both Lenssen and Graywolf wonder how these otherwise legitimate sites get away with these cloaking exercises when Google and the rest are explicitly against the practice. However, the examples given by both bloggers represent the “white-hat” side of cloaking, in the sense that they are not trying to game the search engines. These sites and companies are merely trying to protect their content.
However, this does not matter to either Lenssen or Graywolf. Because Google has explicitly addressed the issue in its guidelines, both believe there should be no quarter when it comes to punishing the guilty parties, whether or not a site has a legitimate reason for cloaking. They also feel Google’s Matt Cutts should address the situation so there is no more confusion.
At the Chicago SES, while it was never explicitly stated (at least in the sessions I attended), there seems to be a growing sentiment that as long as the webmaster isn’t trying to be deceptive, search engines will tolerate some cloaking. The Wikipedia page discusses delivering content based on a visitor’s IP location (IP delivery) as one of the instances where cloaking is indeed accepted, although the explanation also points out that IP delivery isn’t the best example of cloaking, because the content in question is not being hidden from search engines or users; it’s just being tailored to the visitor’s location.
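For contrast, here is a rough sketch of IP delivery under the same caveat (the address ranges and regional text are invented, using reserved documentation networks): the branch keys on where the visitor appears to be coming from, not on whether the request is from a crawler, so neither search engines nor users are shown something the other cannot see.

# Illustrative sketch of IP delivery (geotargeting); the mapping below is invented.
import ipaddress

# Hypothetical mapping of network ranges to regional editions of a page.
REGION_BY_NETWORK = {
    ipaddress.ip_network("192.0.2.0/24"): "us",
    ipaddress.ip_network("198.51.100.0/24"): "uk",
}

REGIONAL_CONTENT = {
    "us": "Prices shown in USD.",
    "uk": "Prices shown in GBP.",
    "default": "Prices shown in your local currency at checkout.",
}

def content_for(ip_string: str) -> str:
    """Pick a regional edition based on the visitor's IP; bots and humans are treated alike."""
    addr = ipaddress.ip_address(ip_string)
    for network, region in REGION_BY_NETWORK.items():
        if addr in network:
            return REGIONAL_CONTENT[region]
    return REGIONAL_CONTENT["default"]

print(content_for("192.0.2.10"))   # falls in the "us" range
print(content_for("203.0.113.5"))  # no match, default edition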
The question remains, however – should the search engines punish pages that are cloaked for content-protection reasons? If you follow the two bloggers cited in this article, then yes, all sites doing so should be punished. And if the search engines are not going to punish these sites, then their spokespeople should speak up and clear up the confusion.
Chris Richardson is a search engine writer and editor for Murdok. Visit Murdok for the latest search news.