Saturday, December 14, 2024

The Dilemma Of Duplicate Content


Everyone wants their website to rank as high as possible in search results.

But many people face the problem of not having their site show up at all. One cause of this can be duplicate content, which search engines try to weed out.

Duplicate content occurs when more than one page has identical or highly similar material. Search engines tend to disregard all but one version of duplicate content so as not to inundate users with repeat results. Not indexing alike pages also saves resources.
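To illustrate how "highly similar" pages might be compared, here is a minimal sketch using word shingles and Jaccard similarity, one well-known near-duplicate detection technique. This is purely illustrative; search engines' actual methods are proprietary and far more sophisticated, and the sample pages and threshold below are hypothetical.

```python
# Illustrative near-duplicate check: compare two pages by the overlap
# of their k-word shingles (overlapping word n-grams).

def shingles(text, k=3):
    """Return the set of k-word shingles of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets: |A & B| / |A | B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two hypothetical product descriptions that differ by one word.
page_a = "our widget is the best widget on the market today"
page_b = "our widget is the best widget available on the market today"

similarity = jaccard(shingles(page_a), shingles(page_b))
# A crawler might treat pages above some threshold (say 0.9) as duplicates.
```

Even a one-word difference leaves most shingles shared, which is why boilerplate-heavy pages can look alike to an automated comparison.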

William Slawski discusses the issue in great depth here.

One problem he cites is the great similarity between pages within a single company’s site. Having the same header, footer, and other features on every page could conceivably lead to a search engine perceiving some of the pages as duplicate content.

A related issue is also within your control to fix: if a number of product descriptions are nearly identical, they too could be interpreted as duplicate content.

“Nadir,” when responding to Slawski’s post, suggests avoiding formatted texts provided by manufacturers.

He also recommends using things like customer reviews to help provide identifiably unique content.

Mirrored sites are another sticking point for Slawski. “Search engines may be able to recognize duplicated URL structures . . . and may ignore some mirrored sites that they find.”

Most companies are shifting away from mirroring in favor of multiple servers and load balancing, however.

This problem affects many sites, and is still a mystery to some; people have even created duplicate content in the hopes of raising their search ranking.

There are ways to address the issue, though, and it’s in every site owner’s best interest to do so.


Doug is a staff writer for Murdok. Visit Murdok for the latest eBusiness news.
