“Digital terrorism” isn’t a phrase one hears often. There might be good reason for that: it’s not abundantly clear what digital terrorism entails. Is it hacking into air traffic control to give dangerous instructions to pilots? Is it using YouTube to promote a violent, hateful cause? Is it setting up a Facebook group to give members a chance to voice a yea in favor of something offensive? Is it trolling comment areas and flaming an author?
Using Google’s define: operator brings back nothing. Google doesn’t know what digital terrorism is, and the top search result for the phrase lands at HomelandSecurityWeekly, where the phrase is used in the title of an article about hackers but is not revisited in the body.
Also high in the search results is a press release regarding a study conducted by the Simon Wiesenthal Center, an organization devoted to fighting anti-Semitism. The study, titled Facebook, YouTube+: How Social Media Outlets Impact Digital Terrorism and Hate, found that 30 percent of new postings on Facebook are extremist in nature. To date the organization has identified 10,000 “problematic hate and terrorist websites, hate games and other internet postings,” many of them on user-generated sites that allow “the viral spread of extremism online,” where “expressions of hate can easily flow unchallenged.”
The Los Angeles-based Wiesenthal Center, known for its Museum of Tolerance, includes the usual hate-group and terrorism suspects: neo-Nazis, the Ku Klux Klan, Hamas, Hezbollah, and the Taliban. It also includes South American Communist revolutionaries who, though the governments they fight label them “terrorist groups,” might not entirely fit that definition, digital or otherwise.
The problem is the fine line between terms. What separates a revolutionary from a terrorist? If Britain had labeled American settlers terrorists instead of rebels, would France have been so eager to lend a hand? Could the settlers have garnered such moral support for their insurrection?
That’s a whole pile of apples and oranges, maybe.
There was a big stink regarding Holocaust denial groups peppering Facebook, and Facebook heads took a stand for freedom of speech, a stand that surprised many considering the social network’s aggressive deletion of breastfeeding images upon offended user request. The slightest hint of areola perhaps provided a clear demarcation line for identifying what is obscene, but Facebook needed actual threats or advocacy of violence before it would pull the terms-of-use plug on hate groups.
At 225 million members, Facebook needs a sort of governance, and the company recently revealed the buds of such governance through new quasi-democratic processes. Any site reaching that level of mainstreaming requires some mirror of polite society’s prohibitions, but prohibition in “real-world” polite society was always dicey, its boundaries between acceptable and offensive blurry and shifting with the regional topography. Imagine how much finer the lines, how many more toes are smashed by insensitive boots, in a virtual world without the physical restraints of geography.
Facebook is just the popular target of the day, though. YouTube has faced similar quandaries regarding the same fringe undesirables. Luckily for both, private companies don’t have the same free-speech obligations as the governments under which they operate*. They can turn anyone away at the door. But it’s not always so easy. A jihadist or a white supremacist? Sure: easy to spot, easy to toss out, and only those who think like them complain. But what should Twitter do with the influx of tweets praising the assassination of an abortion doctor?
Yes, dicey waters indeed, this censorship business.
At the moment it’s of little consequence. These are private companies with terms of use to enforce and the right to do so. Alienated groups are pushed out to the fringes of the Net, never without an outlet for their brands of speech so long as they can pay to host their own content or find a public site sympathetic to them. One wonders if the pressures of the mainstream will one day convince Google to de-index them and hosting companies to kick them off, and what criteria will be used to do so. One also wonders what kind of entity eventually will be tasked with policing all dicey Internet waters, who will become the ultimate arbiters of acceptable speech, who will decide the difference between art and pornography, between a sexy nipple and a maternal one, between dangerous speech and offensive speech.
Last week, Websense Security Labs cited the Wiesenthal study of extremist communities proliferating on social networks and issued its own report. Websense found a threefold increase over the past year in material it categorizes as “Militancy and Extremist” and “Racism and Hate” on social sites like Yahoo Groups, Google Groups, and YouTube.
Websense has devoted some of its security research to the identification and classification of extremist groups, analyzing symbols and content to separate extremists from cultists and “legitimate” news sites from “propaganda” sites. From Websense analysts Ruth Mastron and Eva Cihalova’s blog post on the subject:
A wide breadth of peripheral information sends a site to our “police lineup.” User profile, channel subscribers, links in and out of a page, connections among users, the architectural style of a page, textual and multimedia content, among other attributes, all feed further into processes that help us uncover their “buddies.” We have learned that birds of a feather flock together. Once we identify one such site, we almost always find many more.
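The “birds of a feather” approach the analysts describe is, at bottom, guilt by association over a link graph: flag a few known sites, then let suspicion flow outward along links and user connections. Here is a minimal sketch of that idea, emphatically not Websense’s actual method; the site names, damping factor, and scoring rule are all hypothetical:

```python
# Hypothetical sketch of guilt-by-association over a link graph.
# Seed a few known-flagged sites, then propagate a damped suspicion
# score to their neighbors. Everything here is invented for illustration.

from collections import defaultdict

# Toy link graph: site -> sites it links to.
links = {
    "known-hate-site.example": ["buddy-a.example", "buddy-b.example"],
    "buddy-a.example": ["known-hate-site.example", "news-site.example"],
    "buddy-b.example": ["known-hate-site.example", "buddy-a.example"],
    "news-site.example": ["buddy-a.example"],
}

def propagate_suspicion(links, seeds, damping=0.5, rounds=3):
    """Spread a suspicion score outward from seed sites.

    Each round, every site inherits a damped fraction of the highest
    score among its neighbors, so sites densely connected to flagged
    sites score high while loosely connected ones fade toward zero.
    """
    scores = defaultdict(float)
    for seed in seeds:
        scores[seed] = 1.0

    # Treat links as undirected: "links in and out of a page."
    neighbors = defaultdict(set)
    for src, dsts in links.items():
        for dst in dsts:
            neighbors[src].add(dst)
            neighbors[dst].add(src)

    for _ in range(rounds):
        updates = {}
        for site, nbrs in neighbors.items():
            inherited = max((scores[n] for n in nbrs), default=0.0) * damping
            updates[site] = max(scores[site], inherited)
        scores.update(updates)  # synchronous update per round
    return dict(scores)

scores = propagate_suspicion(links, seeds=["known-hate-site.example"])
for site, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{site}: {score:.2f}")
```

Run against the toy graph, the two “buddies” inherit half the seed’s score while the loosely connected news site fades toward zero, which is exactly the shape of the problem: the lines such an algorithm draws are only as good as the seeds and thresholds somebody chooses.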
So now we have an example of Web security ballooning to include the tracking and identifying of hate groups and extremists and what they say. In Salem, more than three centuries ago, nobody felt sorry for the “witches,” and very few today are going to feel sorry for racists, hate-mongers, and terrorists, so why not keep a good, close eye on them? But then again, whom do we trust to draw the right lines?
It’s not a new idea that “good” causes are eventually abused by those with moral, utopian desires. In fact, that idea is inked into the US Constitution, along with the ideal that innocence is not to be jeopardized in the pursuit of corruption. Mere suspicion and dislike are not just cause to deprive a person of his rights. Ideally, that’s the rule of law in America, even if, over and over again, the ideal is not carried out.
But things change, especially when security is at stake, eh? One wonders how long before a person who wants a modicum of freedom of speech must keep that speech off the Internet completely. Not to get all 1984 on you, but Big Brother is polishing up his computer monitor. A term like “digital terrorism” provides a blanket just vague enough to throw over lots of people.
*Though, as we’ve seen with telecoms, the government may rely on private companies to collect information it can’t legally collect itself.