Telegraph Cluelessly Attacks Google News Indexing


This time it is the UK-based Daily Telegraph that is complaining about a need to protect their content from search engines.


That makes us wonder whether they, like Copiepresse and AFP before them, are willfully clueless about implementing robots.txt properly on the Telegraph’s web servers. Doing so would be trivial for any competent webmaster, and guidance on robots.txt is easy to find online.

We have to think Google would help out with this. Oh wait, they do. Google offers a whole bunch of tools on their Webmaster Central pages. Among those, guess what, a robots.txt analysis tool.
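For the record, shutting search engines out of a site takes only a few lines in robots.txt. A minimal sketch of what the Telegraph could publish if it meant what it says (the /premium/ directory is an invented example, not a real Telegraph path):

```
# Bar Google's and Yahoo's crawlers from the entire site
User-agent: Googlebot
Disallow: /

User-agent: Slurp
Disallow: /

# Or, less drastically, keep every crawler out of paid content only
# (crawlers obey their most specific matching group, so Googlebot and
# Slurp would still follow their own blanket blocks above)
User-agent: *
Disallow: /premium/
```

Either version takes minutes to deploy and requires no lawyers.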

Instead of simply updating robots.txt and moving on, the Daily Telegraph offers bluster from its editor Will Lewis, acting as mouthpiece for CEO Murdoch MacLennan. Journalism.co.uk has more, citing Lewis’s and MacLennan’s remarks:

“Our ability to protect that content is under consistent attack from those such as Google and Yahoo who wish to access it for free.

“These companies are seeking to build a business model on the back of our own investment without recognition. All media companies need to be on guard for this.

“Success in the digital age, as we have seen in our own company, is going to require massive investment – [this needs] effective legal protection for our content, in such a way that allows us to invest for the future.”

That massive investment could be avoided, or redirected elsewhere (how about across-the-board raises for reporters and editors instead of feeding it to lawyers?). Rather than quickly putting updated technological protection in place, MacLennan wants the lawyers involved.

It is difficult for us to see this as anything more than a cash grab. Google’s fight with Copiepresse, and its settlement with AFP, give the Telegraph hope of scoring a nice sack of cash for doing nothing more than it already does today.

The refusal to put an effective robots.txt on its website is the most damning evidence that the Telegraph wants to force a settlement. If the paper cared about protecting its content, its robots file wouldn’t look like this:

# Robots.txt file
# All robots will spider the domain

User-agent: *

Disallow: */ixale/

That’s it, start to finish. It doesn’t look like the Telegraph wants to keep out Google, Yahoo, MSN, Ask, or any other engine. The paper benefits from the traffic they receive. By rumbling about legal protection, it sounds like they want to benefit just a little bit more from search traffic.
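To make that concrete, Python’s standard-library robots.txt parser can be run against the file quoted above. This is a sketch, and the article URL is an invented example, but the result is the point: the file keeps nobody out.

```python
from urllib import robotparser

# The Telegraph's robots.txt directives, as quoted above.
telegraph_robots = """\
User-agent: *
Disallow: */ixale/
"""

rp = robotparser.RobotFileParser()
rp.parse(telegraph_robots.splitlines())

# Classic robots.txt rules are plain path prefixes, so the leading "*"
# is not a wildcard here; it matches nothing a crawler would request.
# Every major engine of the day is free to fetch a typical article URL
# (the path below is invented for illustration):
article = "http://www.telegraph.co.uk/news/story.html"
for bot in ("Googlebot", "Slurp", "msnbot"):
    print(bot, rp.can_fetch(bot, article))  # True for all three
```

In other words, by the one mechanism search engines actually honor, the Telegraph has told them all: come on in.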

The Telegraph has every right to protect its content. What it lacks, as its robots.txt shows, is the willingness to do so.
