Law in the Internet Society
Cyber Sticks and Stones

When the hit internet series Awkward Black Girl won the online Shorty Award, one tweeter responded, "Anthony Cumia lost to a N*ggerette!!!" Another wrote, "#ThingsBetterthanAwkwardBlackGirl: The smell coming from Trayvon Martin." Noting the difference between these comments and her real-life experiences, Issa Rae, the show's creator, wrote:

The crazy thing is, I've actually never been called a n*gger in real life...In the online world, however, which is where most of my social life resides…[users] hide comfortably behind their computer screens and type the most obnoxious, offensive things that they can think of…(cite).

Speech such as that above occurs in various online forums and falls under the much broader definition of trolling: online speech designed to provoke an emotional response. In the US, such expression is generally protected under the First Amendment's guarantee of freedom of speech. This paper highlights some of the more problematic aspects of trolling with a view toward exploring solutions that can curtail the negative externalities it presents while also protecting freedom of speech online.

Real Life: Broken Bones

Those who have come across troll posts may agree with Rae that both the frequency and intensity of inflammatory comments are heightened online as compared to offline. Although the online world cannot perfectly mirror the offline, we might, as one method of inquiry, examine how we have evolved as a society to deal with similar social behaviors in real life. Indeed, in real life, few individuals would be willing to make any of the statements discussed in the first paragraph simply for the purpose of evoking a response. Not only is the threat of physical violence a potent deterrent, but the possibility of other social repercussions also constrains such behavior. An individual might be concerned, for instance, that the comment would reflect poorly on his character or jeopardize his job. Even where this is not the case, the deliverer of such speech in real life would likely have to look at his listeners and personally witness (and perhaps experience) their reactions.

How the Words Can Hurt

On the internet, however, society's ability to respond to such behavior is substantially limited. For one, trolls can "hit and run" with relative ease. Also, some users consciously decide not to "feed the trolls" and do not reply to troll posts. Assuming that there is some value in the free exchange of ideas, erroneously dismissing an individual as a troll could mean that an opportunity for useful social exchange is missed. Many users can recall coming across a thread where the original poster asked a question or advanced a controversial idea and was readily dismissed, accurately or not, as a "troll."

In other instances, users might change their behavior to avoid this kind of speech in the future. Wouldn't this inevitably result in negative externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? At present, a user who browses user comments simply to gauge public reaction to a funny video or to participate in an online political debate, for instance, will almost certainly encounter inflammatory language.

So? You still haven't explained why this is a problem. If it's the word itself on the page, modify your browser so it won't show you some seven dirty words. If it's the knowledge that people have such thoughts, or at least make such utterances, how is that a harm we have any reason, let alone right, to prevent?

Even if one were to try simply to avoid troll posts, she would likely find this difficult. That most of us know about the crudity of internet commentary perhaps attests to this. Indeed, the ease of access to the internet also means that inflammatory language can be, and often is, found in the midst of otherwise useful, entertaining, or informative content. Adjusting one's browser or filter settings is a viable option. However, some of the most inflammatory language is highly dependent on interpretation and social context; "The smell coming from Trayvon Martin's grave" does not use offensive language per se.

Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If younger individuals, for example, witness such speech without also witnessing the social response, couldn't this negatively affect their behavior down the road in real life? We might also ask whether we would simply ignore such behavior if it were occurring on the sidewalk in front of us. If not, why? Is the impact of the behavior on society or on the receiver any less damaging online?

Happy Medium?

The concepts of pseudonymity and anonymity also contribute to the discussion of trolling. In posting anonymously, an individual can avoid personal accountability and thus some of the negative consequences of his actions. The ability to use a pseudonym similarly allows an individual to assume an online mask of sorts. To be sure, anonymity and pseudonymity serve important functions both online and off. The Supreme Court, in McIntyre v. Ohio Elections Commission, articulated this view when it wrote, "Anonymity is a shield from the tyranny of the majority...It thus exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular individuals from retaliation..." Freedom of speech is not absolute, however, and the type of speech referred to by the Court can arguably be distinguished from trolling, which is primarily designed to garner an emotional response.

Companies like YouTube have already begun to consider possible solutions. One optional new policy prompts users to change their username to their real name. Such measures perhaps assume that individuals who elect to post identifying information may be less likely to troll. It might be useful for sites to go a step further and give users the option to see only posts from users who choose to identify themselves. While steps such as these are unlikely to eliminate trolling, they may help users distinguish such behavior from other types of expression, such as art or simply controversial opinions, and may also enhance freedom of speech by reducing the efficacy of the movement by some to ban trolling altogether.

What is the real problem? Neither this nor the prior draft has explained why "inflammatory language" is something society needs to worry about. Conduct (the putting in fear that is assault, for example) we regulate extensively, and language we let be, in the interest of freedom of thought. Why should we be concerned to move this line, or more riskily yet, endow others with power to move the line for us?



r6 - 23 Aug 2014 - 19:33:51 - EbenMoglen