Law in the Internet Society

ShakimaWellsSecondPaper 6 - 23 Aug 2014 - Main.EbenMoglen
Line: 1 to 1
Changed:
<
META TOPICPARENT name="SecondPaper"
>
META TOPICPARENT name="SecondEssay"
 Cyber Sticks and Stones

When the hit internet series Awkward Black Girl won the online Shorty Award, one tweeter responded, "Anthony Cumia lost to a N*ggerette!!!" Another wrote, "ThingsBetterthanAwkwardBlackGirl: The smell coming from Treyvon Martin." Noting the difference between these comments and her real-life experiences, Issa Rae, the show's creator, wrote,


ShakimaWellsSecondPaper 5 - 31 Mar 2013 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="SecondPaper"
Cyber Sticks and Stones
Line: 19 to 19
In other instances, users might change their behavior to avoid this kind of speech in the future. Wouldn't this inevitably result in negative externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? At present, a user who browses user comments simply to gauge public reaction to a funny video or to participate in an online political debate, for instance, will almost certainly encounter inflammatory language.
Added:
>
So? You still haven't explained why this is a problem. If it's the word itself on the page, modify your browser so it won't show you some seven dirty words. If it's the knowledge that people have such thoughts, or at least make such utterances, how is that a harm we have any reason, let alone right, to prevent?

Even if one were to try simply to avoid troll posts, she would likely find this difficult. That most of us are familiar with the crudity of internet commentary perhaps attests to this. Indeed, the ease of access to the internet also means that inflammatory language can be, and often is, found in the midst of otherwise useful, entertaining or informative content. Adjusting one’s browser or filter settings is a viable option. However, some of the most inflammatory language can be highly dependent on interpretation and social context; “The smell coming from Trayvon Martin’s grave” does not use offensive language per se.

Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If younger individuals, for example, witness such speech without also witnessing the social response, couldn't this negatively affect their behavior down the road in real life? We might also ask whether we would simply ignore such behavior if it were occurring on the sidewalk in front of us. If not, why? Is the impact of the behavior on society or on the receiver any less damaging online?

Line: 28 to 37
The concepts of pseudonymity and anonymity also contribute to the discussion of trolling. In posting anonymously, an individual can avoid personal accountability and thus some of the negative consequences of their actions. The ability to use a pseudonym similarly allows an individual to assume an online mask of sorts. To be sure, anonymity and pseudonymity serve important functions both online and off. The Supreme Court, in McIntyre v. Ohio Elections Commission, articulated this view when it wrote, "Anonymity is a shield from the tyranny of the majority...It thus exemplifies the purpose behind the Bill of Rights and the First Amendment in particular: to protect unpopular individuals from retaliation..." Freedom of speech is not absolute, however, and the type of speech referred to by the Court can arguably be distinguished from trolling, which is primarily designed to garner an emotional response.

Companies like YouTube have already begun to consider possible solutions. One optional new policy prompts users to change their username to their real name. Such measures perhaps assume that individuals who elect to post identifying information may be less likely to troll. It might be useful for sites to go a step further and give users the option to see only posts from users who choose to identify themselves. While steps such as those above are unlikely to eliminate trolling, they may help users distinguish such behavior from other types of expression, such as art or simply controversial opinions, and may also enhance freedom of speech by reducing the efficacy of the movement by some to ban trolling altogether.

Added:
>

What is the real problem? Neither this nor the prior draft has explained why "inflammatory language" is something society needs to worry about. Conduct (the putting in fear that is assault, for example) we regulate extensively, and language we let be, in the interest of freedom of thought. Why should we be concerned to move this line, or more riskily yet, endow others with power to move the line for us?


ShakimaWellsSecondPaper 4 - 05 Mar 2013 - Main.ShakimaWells
Line: 1 to 1
 
META TOPICPARENT name="SecondPaper"
Added:
>
Cyber Sticks and Stones
 
Changed:
<
--+Trolls
>
When the hit internet series Awkward Black Girl won the online Shorty Award, one tweeter responded, "Anthony Cumia lost to a N*ggerette!!!" Another wrote, "ThingsBetterthanAwkwardBlackGirl: The smell coming from Treyvon Martin." Noting the difference between these comments and her real-life experiences, Issa Rae, the show's creator, wrote,
 
Changed:
<
-- ShakimaWells - 15 November 2012
>
The crazy thing is, I've actually never been called a n*gger in real life...In the online world, however, which is where most of my social life resides…[users] hide comfortably behind their computer screens and type the most obnoxious, offensive things that they can think of…(cite).
 
Changed:
<
When the hit internet series Awkward Black Girl won the Shorty Award, one user tweeted in response, "Anthony Cumia lost to a N*ggerette!!!" Another wrote, "ThingsBetterthanAwkwardBlackGirl: The smell coming from Treyvon Martin." And still another quipped, "Congrats on winning, do you get 3/5 of an award?" Noting the difference between these comments and her real-life experiences, Issa Rae, the show's creator, wrote,
>
Speech such as that above occurs in various online forums and can be said to fall under the much broader definition of trolling, or online speech designed to provoke an emotional response. In the US, such expression is generally protected under the First Amendment right to freedom of speech. This paper highlights some of the more problematic aspects of trolling with a view towards exploring solutions that can curtail any negative externalities it presents while also protecting freedom of speech online.
 
Added:
>
Real Life: Broken Bones
 
Changed:
<
Is it possible that we could take too seriously the stuff put in a microblogging service by people idly commenting on some entertainment industry hype "event"? It's a little like paying any attention whatever to comments on YouTube, don't you think? Is it a step up or a step down from being concerned with the dialog in Quentin Tarantino movies?
>
Those who have ever come across troll posts may agree with Rae that both the frequency and intensity of inflammatory comments are heightened online as compared to offline. Although the online cannot perfectly mirror the offline, we might, as one method of enquiry, examine how we have evolved as a society to deal with similar social behaviors in real life. Indeed, in real life, fewer individuals would be willing to make any of the statements discussed in the first paragraph simply for the purpose of evoking a response. Not only is the threat of physical violence a potent deterrent, but the possibility of other types of social repercussions also constrains such behavior. An individual might be concerned, for instance, that the comment might reflect poorly on his character or jeopardize his job. Even where this is not the case, the deliverer of such speech in real life would likely have to look at his listeners and personally witness (and perhaps experience) their reactions.
 
Changed:
<
The crazy thing is, I've actually never been called a n*gger in real life...In the online world, however, which is where most of my social life resides, the word is tossed about as freely as it was in the 50s and 60s. Users hide comfortably behind their computer screens and type the most obnoxious, offensive things that they can think of and wait for a response (cite).
>
How the Words Can Hurt
 
Changed:
<
The racially inflammatory speech directed toward Rae can be said to fall under the much broader definition of trolling, or online speech designed to provoke an emotional response.
>
On the internet, however, the ability of society to respond to such behavior is substantially limited. For one, trolls can "hit and run" with relative ease. Also, some users consciously decide to not "feed the trolls" and don't reply to troll posts. Assuming that there is some value in the free exchange of ideas, erroneously dismissing an individual as a troll could mean that an opportunity for useful social exchange is missed. Many users may be able to recall a time when they came across a thread where the original poster asked a question or posted a controversial idea and was readily dismissed, whether accurately or not, as a “troll.”
 
Changed:
<
Is it "directed" towards anybody? These are the disembodied voices of a bullshit universe, where sensible people do not look for conversation. Whatever is there, whatever its degree of offensiveness, hatred, stupidity, insensitivity, or vernacular genius, we are in no danger whatever of being forced to interact with it, or subjected to any power it represents. So why is it a problem at all?
>
In other instances, users might change their behavior to avoid this kind of speech in the future. Wouldn't this inevitably result in negative externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? At present, a user who browses user comments simply to gauge public reaction to a funny video or to participate in an online political debate, for instance, will almost certainly encounter inflammatory language.
 
Added:
>
Even if one were to try simply to avoid troll posts, she would likely find this difficult. That most of us are familiar with the crudity of internet commentary perhaps attests to this. Indeed, the ease of access to the internet also means that inflammatory language can be, and often is, found in the midst of otherwise useful, entertaining or informative content. Adjusting one’s browser or filter settings is a viable option. However, some of the most inflammatory language can be highly dependent on interpretation and social context; “The smell coming from Trayvon Martin’s grave” does not use offensive language per se.
 
Changed:
<
In the US, such expression (assuming that it doesn't venture into the realm of illegality) is protected under the First Amendment right to freedom of speech. In light of the crudity of some internet commentary, this paper argues that 1) trolling as exhibited by the above comments is problematic and 2) private entities should take measures to encourage users to post limited identifying information in an effort to enhance personal responsibility for such behavior.

Many individuals, particularly those on the receiving end of trolling,

There isn't any receiving end. You're talking about Twitter, which is a service no one needs and no one is compelled to receive any particular communications on. It does not matter what is written by whom on the wall of a bathroom you will never have to use. You are arguing, you say, that this is "problematic." So what is the argument?

may agree with Rae that both the frequency and intensity of inflammatory comments are heightened online as compared to real life. Indeed, in real life, fewer individuals would be willing to make any of the statements discussed in the first paragraph simply for the purpose of evoking a response. Not only is the threat of physical violence a potent deterrent, but the possibility of other types of social repercussions also constrains such behavior. An individual might be concerned, for instance, that the comment might reflect poorly on his character or jeopardize his job. Even where this is not the case, real life has an interpersonal element where the deliverer of such speech would likely have to look at his listeners and personally experience their reactions.

This is one set of points about why context matters. There are many other respects in which context matters, too. If you are going to argue that there is a problem, you need to be attentive to the context where the problem is supposed to be. Arguing that there would be less or more of a problem in a different context is not sufficient to establish this point. What one cannot say on the street one can say through digital media. This is both good and bad, depending on context. What a fool can shout down a rain barrel does not seem to me to be an important question of social policy or legal theory, because the context is trivial. Why isn't the context trivial here?

On the internet, however, the ability of society to respond to such behavior is substantially limited. For one, trolls can "hit and run" with relative ease. Also, some users consciously decide to not "feed the trolls" and don't reply to troll posts. Assuming that there is some value in the free exchange of ideas, erroneously dismissing an individual as a troll could mean that an opportunity for useful social exchange is missed.

What is potentially erroneous about deciding to ignore people using stupid offensive language on a microblogging service? Might the speaker actually be Henry James or Virginia Woolf, and we missed it?

This is a poor argument because no better argument was available to you. In fact, ignoring the people who shout obscenities in the digital street is costless, harmless, and effective. Because both negative and positive reinforcement increases the frequency of behavior, ignoring is precisely the behavior that represses the undesired phenomenon. It has no drawbacks and works perfectly. So it should be adopted. But if that's the conclusion, then your argument has developed in a direction you did not intend. So a bad argument is used to stop the gap. Better would be a broader reconsideration.

In other cases, the sheer distance between the speaker and the listener online encourages bystanders to be apathetic. Would these same individuals ignore this behavior if it were occurring on the sidewalk in front of them? If not, why? Is the impact of the behavior on society or on the receiver any less damaging online? What if users changed their behavior to avoid this kind of speech in the future? Wouldn't this inevitably result in negative externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If small children, for example, witness such speech without also seeing the social response, couldn't this negatively affect their development and behavior in real life?

Even if one were to try simply to ignore troll posts, as one might try to avoid inflammatory material found in other forms, she would likely find this difficult. The ease of access to the internet also means that inflammatory language can be, and often is, found in the midst of otherwise useful, entertaining or informative content. At present, a user who browses user comments simply to gauge public reaction to a funny video or political debate, for instance, will almost certainly encounter inflammatory language.

Unless she decides to modify either her browser or her browsing. But in any event, the strategy of ignoring isn't made less useful by being used. Evidently if there were nothing to ignore no ignoring would be necessary. Why is that an argument against ignoring?
>
Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If younger individuals, for example, witness such speech without also witnessing the social response, couldn't this negatively affect their behavior down the road in real life? We might also ask whether we would simply ignore such behavior if it were occurring on the sidewalk in front of us. If not, why? Is the impact of the behavior on society or on the receiver any less damaging online?
 
Added:
>
Happy Medium?
The concepts of pseudonymity and anonymity also contribute to the discussion of trolling. In posting anonymously, an individual can avoid personal accountability and thus some of the negative consequences of their actions. The ability to use a pseudonym similarly allows an individual to assume an online mask of sorts. To be sure, anonymity and pseudonymity serve important functions both online and off. The Supreme Court, in McIntyre v. Ohio Elections Commission, articulated this view when it wrote, "Anonymity is a shield from the tyranny of the majority...It thus exemplifies the purpose behind the Bill of Rights and the First Amendment in particular: to protect unpopular individuals from retaliation..." Freedom of speech is not absolute, however, and the type of speech referred to by the Court can arguably be distinguished from trolling, which is primarily designed to garner an emotional response.
Deleted:
<
The correct answer to this proposition was given by Justice Holmes in Gitlow v. New York: "Every idea is an incitement."

In light of the problematic aspects of trolling noted above, companies like YouTube have already begun to brainstorm solutions.

Does that mean "to consider"?

One new policy prompts commenters to change their username to their real name. Currently, it is still optional and users can decide to continue to post under their usernames if they provide a reason. The explanation “I’m not sure, I’ll decide later” is presently accepted as sufficient (cite).

Saying you're going to make a link is not the same as making one.

Such measures perhaps assume that individuals who elect to post identifying information may be less likely to post comments simply to get a response, and that those who do not elect to do so and proceed to post inflammatory material will be more easily and correctly identified as trolls. It might be useful for sites to go a step further and give users the option to see only posts from users who choose to identify themselves.

If you want to censor your reading, you can have your browser do it for you. You are not required to read anything you don't want to read on the Web, including ads.

While steps such as those above are unlikely to totally eliminate trolling, they may help users distinguish such behavior from other types of expression and may actually enhance freedom of speech by reducing the efficacy of the movement by some to ban trolling altogether.

The "type" of expression you are discussing is the type whose purpose is to create an emotional reaction? That "type" includes all art. Or did you mean the type that uses offensive words, which includes only some art? But it seems to me that the type is stupid offensive commentary in a locale no one needs to use and that no one has to read unfiltered and in which, accordingly, there isn't any problem anyway. If I'm wrong about that, the essay has to explain why.


You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable. To restrict access to your paper simply delete the "#" character on the next two lines:

Note: TWiki has strict formatting rules for preference declarations. Make sure you preserve the three spaces, asterisk, and extra space at the beginning of these lines. If you wish to give access to any other users simply add them to the comma separated ALLOWTOPICVIEW list.

Added:
>
Companies like YouTube have already begun to consider possible solutions. One optional new policy prompts users to change their username to their real name. Such measures perhaps assume that individuals who elect to post identifying information may be less likely to troll. It might be useful for sites to go a step further and give users the option to see only posts from users who choose to identify themselves. While steps such as those above are unlikely to eliminate trolling, they may help users distinguish such behavior from other types of expression, such as art or simply controversial opinions, and may also enhance freedom of speech by reducing the efficacy of the movement by some to ban trolling altogether.

ShakimaWellsSecondPaper 3 - 28 Jan 2013 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="SecondPaper"
Added:
>
--+Trolls
 
Changed:
<

Trolls

-- ShakimaWells - 15 November 2012

>
-- ShakimaWells - 15 November 2012
When the hit internet series Awkward Black Girl won the Shorty Award, one user tweeted in response, "Anthony Cumia lost to a N*ggerette!!!" Another wrote, "ThingsBetterthanAwkwardBlackGirl: The smell coming from Treyvon Martin." And still another quipped, "Congrats on winning, do you get 3/5 of an award?" Noting the difference between these comments and her real-life experiences, Issa Rae, the show's creator, wrote,
Deleted:
<
The crazy thing is, I've actually never been called a n*gger in real life...In the online world, however, which is where most of my social life resides, the word is tossed about as freely as it was in the 50s and 60s. Users hide comfortably behind their computer screens and type the most obnoxious, offensive things that they can think of and wait for a response (cite).
 
Changed:
<
The racially inflammatory speech directed toward Rae can be said to fall under the much broader definition of trolling, or online speech designed to provoke an emotional response. In the US, such expression (assuming that it doesn't venture into the realm of illegality) is protected under the First Amendment right to freedom of speech. In light of the crudity of some internet commentary, this paper argues that 1) trolling as exhibited by the above comments is problematic and 2) private entities should take measures to encourage users to post limited identifying information in an effort to enhance personal responsibility for such behavior.
>
Is it possible that we could take too seriously the stuff put in a microblogging service by people idly commenting on some entertainment industry hype "event"? It's a little like paying any attention whatever to comments on YouTube, don't you think? Is it a step up or a step down from being concerned with the dialog in Quentin Tarantino movies?

The crazy thing is, I've actually never been called a n*gger in real life...In the online world, however, which is where most of my social life resides, the word is tossed about as freely as it was in the 50s and 60s. Users hide comfortably behind their computer screens and type the most obnoxious, offensive things that they can think of and wait for a response (cite).

The racially inflammatory speech directed toward Rae can be said to fall under the much broader definition of trolling, or online speech designed to provoke an emotional response.

Is it "directed" towards anybody? These are the disembodied voices of a bullshit universe, where sensible people do not look for conversation. Whatever is there, whatever its degree of offensiveness, hatred, stupidity, insensitivity, or vernacular genius, we are in no danger whatever of being forced to interact with it, or subjected to any power it represents. So why is it a problem at all?

In the US, such expression (assuming that it doesn't venture into the realm of illegality) is protected under the First Amendment right to freedom of speech. In light of the crudity of some internet commentary, this paper argues that 1) trolling as exhibited by the above comments is problematic and 2) private entities should take measures to encourage users to post limited identifying information in an effort to enhance personal responsibility for such behavior.

Many individuals, particularly those on the receiving end of trolling,

There isn't any receiving end. You're talking about Twitter, which is a service no one needs and no one is compelled to receive any particular communications on. It does not matter what is written by whom on the wall of a bathroom you will never have to use. You are arguing, you say, that this is "problematic." So what is the argument?

may agree with Rae that both the frequency and intensity of inflammatory comments are heightened online as compared to real life. Indeed, in real life, fewer individuals would be willing to make any of the statements discussed in the first paragraph simply for the purpose of evoking a response. Not only is the threat of physical violence a potent deterrent, but the possibility of other types of social repercussions also constrains such behavior. An individual might be concerned, for instance, that the comment might reflect poorly on his character or jeopardize his job. Even where this is not the case, real life has an interpersonal element where the deliverer of such speech would likely have to look at his listeners and personally experience their reactions.

This is one set of points about why context matters. There are many other respects in which context matters, too. If you are going to argue that there is a problem, you need to be attentive to the context where the problem is supposed to be. Arguing that there would be less or more of a problem in a different context is not sufficient to establish this point. What one cannot say on the street one can say through digital media. This is both good and bad, depending on context. What a fool can shout down a rain barrel does not seem to me to be an important question of social policy or legal theory, because the context is trivial. Why isn't the context trivial here?

On the internet, however, the ability of society to respond to such behavior is substantially limited. For one, trolls can "hit and run" with relative ease. Also, some users consciously decide to not "feed the trolls" and don't reply to troll posts. Assuming that there is some value in the free exchange of ideas, erroneously dismissing an individual as a troll could mean that an opportunity for useful social exchange is missed.

What is potentially erroneous about deciding to ignore people using stupid offensive language on a microblogging service? Might the speaker actually be Henry James or Virginia Woolf, and we missed it?

This is a poor argument because no better argument was available to you. In fact, ignoring the people who shout obscenities in the digital street is costless, harmless, and effective. Because both negative and positive reinforcement increases the frequency of behavior, ignoring is precisely the behavior that represses the undesired phenomenon. It has no drawbacks and works perfectly. So it should be adopted. But if that's the conclusion, then your argument has developed in a direction you did not intend. So a bad argument is used to stop the gap. Better would be a broader reconsideration.

 
Deleted:
<
Many individuals, particularly those on the receiving end of trolling, may agree with Rae that both the frequency and intensity of inflammatory comments are heightened online as compared to real life. Indeed, in real life, fewer individuals would be willing to make any of the statements discussed in the first paragraph simply for the purpose of evoking a response. Not only is the threat of physical violence a potent deterrent, but the possibility of other types of social repercussions also constrains such behavior. An individual might be concerned, for instance, that the comment might reflect poorly on his character or jeopardize his job. Even where this is not the case, real life has an interpersonal element where the deliverer of such speech would likely have to look at his listeners and personally experience their reactions.
 
Changed:
<
On the internet, however, the ability of society to respond to such behavior is substantially limited. For one, trolls can "hit and run" with relative ease. Also, some users consciously decide to not "feed the trolls" and don't reply to troll posts. Assuming that there is some value in the free exchange of ideas, erroneously dismissing an individual as a troll could mean that an opportunity for useful social exchange is missed. In other cases, the sheer distance between the speaker and the listener online encourages bystanders to be apathetic. Would these same individuals ignore this behavior if it were occurring on the sidewalk in front of them? If not, why? Is the impact of the behavior on society or on the receiver any less damaging online? What if users changed their behavior to avoid this kind of speech in the future? Wouldn't this inevitably result in negative externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If small children, for example, witness such speech without also seeing the social response, couldn't this negatively affect their development and behavior in real life?
>
In other cases, the sheer distance between the speaker and the listener online encourages bystanders to be apathetic. Would these same individuals ignore this behavior if it were occurring on the sidewalk in front of them? If not, why? Is the impact of the behavior on society or on the receiver any less damaging online? What if users changed their behavior to avoid this kind of speech in the future? Wouldn't this inevitably result in negative externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If small children, for example, witness such speech without also seeing the social response, couldn't this negatively affect their development and behavior in real life?
Even if one were to try simply to ignore troll posts, as one might try to avoid inflammatory material found in other forms, she would likely find this difficult. The ease of access to the internet also means that inflammatory language can be, and often is, found in the midst of otherwise useful, entertaining or informative content. At present, a user who browses user comments simply to gauge public reaction to a funny video or political debate, for instance, will almost certainly encounter inflammatory language.
Added:
>
Unless she decides to modify either her browser or her browsing. But in any event, the strategy of ignoring isn't made less useful by being used. Evidently if there were nothing to ignore no ignoring would be necessary. Why is that an argument against ignoring?

The concepts of pseudonymity and anonymity also contribute to the discussion of trolling. In posting anonymously, an individual can avoid personal accountability and thus some of the negative consequences of their actions. The ability to use a pseudonym similarly allows an individual to assume an online mask of sorts. To be sure, anonymity and pseudonymity serve important functions both online and off. The Supreme Court, in McIntyre v. Ohio Elections Commission, articulated this view when it wrote, "Anonymity is a shield from the tyranny of the majority...It thus exemplifies the purpose behind the Bill of Rights and the First Amendment in particular: to protect unpopular individuals from retaliation..." Freedom of speech is not absolute, however, and the type of speech referred to by the Court can arguably be distinguished from trolling, which is primarily designed to garner an emotional response.
Changed:
<
In light of the problematic aspects of trolling noted above, companies like YouTube have already begun to brainstorm solutions. One new policy prompts commenters to change their username to their real name. Currently, it is still optional and users can decide to continue to post under their usernames if they provide a reason. The explanation “I’m not sure, I’ll decide later” is presently accepted as sufficient (cite). Such measures perhaps assume that individuals who elect to post identifying information may be less likely to post comments simply to get a response, and that those who do not elect to do so and proceed to post inflammatory material will be more easily and correctly identified as trolls. It might be useful for sites to go a step further and give users the option to see only posts from users who choose to identify themselves. While steps such as those above are unlikely to totally eliminate trolling, they may help users distinguish such behavior from other types of expression and may actually enhance freedom of speech by reducing the efficacy of the movement by some to ban trolling altogether.
>
The correct answer to this proposition was given by Justice Holmes in Gitlow v. New York: "Every idea is an incitement."

In light of the problematic aspects of trolling noted above, companies like YouTube have already begun to brainstorm solutions.

Does that mean "to consider"?

One new policy prompts commenters to change their username to their real name. Currently, it is still optional and users can decide to continue to post under their usernames if they provide a reason. The explanation “I’m not sure, I’ll decide later” is presently accepted as sufficient (cite).

Saying you're going to make a link is not the same as making one.

Such measures perhaps assume that individuals who elect to post identifying information may be less likely to post comments simply to get a response, and that those who do not elect to do so and proceed to post inflammatory material will be more easily and correctly identified as trolls. It might be useful for sites to go a step further and give users the option to see only posts from users who choose to identify themselves.

If you want to censor your reading, you can have your browser do it for you. You are not required to read anything you don't want to read on the Web, including ads.

While steps such as those above are unlikely to totally eliminate trolling, they may help users distinguish such behavior from other types of expression and may actually enhance freedom of speech by reducing the efficacy of the movement by some to ban trolling altogether.

The "type" of expression you are discussing is the type whose purpose is to create an emotional reaction? That "type" includes all art. Or did you mean the type that uses offensive words, which includes only some art? But it seems to me that the type is stupid offensive commentary in a locale no one needs to use and that no one has to read unfiltered and in which, accordingly, there isn't any problem anyway. If I'm wrong about that, the essay has to explain why.

 
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable.

ShakimaWellsSecondPaper 2 - 15 Nov 2012 - Main.ShakimaWells
Line: 1 to 1
 
META TOPICPARENT name="SecondPaper"
Changed:
<
-- By ShakimaWells - 14 Nov 2012
>

Trolls
 
Changed:
<
When the hit internet series Awkward Black Girl won the Shorty Award, one user tweeted in response, "Anthony Cumia lost to a N*ggerette!!!" Another wrote, "ThingsBetterthanAwkwardBlackGirl: The smell coming from Treyvon Martin." And still another quipped, "Congrats on winning, do you get 3/5 of an award?" Noting the difference between these comments and her real-life experiences, Issa Rae, the show's creator, wrote,
>
-- ShakimaWells - 15 November 2012
 
Changed:
<
The crazy thing is, I've actually never been called a n*gger in real life...In the online world, however, which is where most of my social life resides, the word is tossed about as freely as it was in the 50s and 60s. Users hide comfortably behind their computer screens and type the most obnoxious, offensive things that they can think of and wait for a response (cite).
>
When the hit internet series Awkward Black Girl won the Shorty Award, one user tweeted in response, "Anthony Cumia lost to a N*ggerette!!!" Another wrote, "ThingsBetterthanAwkwardBlackGirl: The smell coming from Treyvon Martin." And still another quipped, "Congrats on winning, do you get 3/5 of an award?" Noting the difference between these comments and her real-life experiences, Issa Rae, the show's creator, wrote,
 
Changed:
<
The racially inflammatory speech directed toward Rae can be said to fall under the much broader definition of trolling, or online speech designed to provoke an emotional response. In the US, such expression (assuming that it doesn't venture into the realm of illegality) is protected under the First Amendment right to freedom of speech. In light of the well-known crudity of some internet commentary, this paper argues that 1) trolling as exhibited by the above comments is problematic and 2) private entities should take measures to encourage users to post identifying information in an effort to enhance personal responsibility for such behavior.
>
The crazy thing is, I've actually never been called a n*gger in real life...In the online world, however, which is where most of my social life resides, the word is tossed about as freely as it was in the 50s and 60s. Users hide comfortably behind their computer screens and type the most obnoxious, offensive things that they can think of and wait for a response (cite).
 
Changed:
<
Many individuals, particularly those on the receiving end of trolling, may agree with Rae that both the frequency and intensity of inflammatory comments are heightened online as compared to real life. Indeed, in real life, fewer individuals would be willing to make any of the statements discussed in the first paragraph simply for the purpose of evoking a response. Not only is the threat of physical violence a potent deterrent, but the possibility of other types of social repercussions also constrains such behavior. An individual might be concerned, for instance, that the comment might get back to his place of work or affect his standing within his community. Even if this were not the case, delivery in real life might be constrained by the interpersonal fact that the deliverer of such speech would likely have to look at his listeners and personally experience their reactions.
>
The racially inflammatory speech directed toward Rae can be said to fall under the much broader definition of trolling, or online speech designed to provoke an emotional response. In the US, such expression (assuming that it doesn't venture into the realm of illegality) is protected under the First Amendment right to freedom of speech. In light of the crudity of some internet commentary, this paper argues that 1) trolling as exhibited by the above comments is problematic and 2) private entities should take measures to encourage users to post limited identifying information in an effort to enhance personal responsibility for such behavior.
 
Changed:
<
On the internet, however, the ability of society to respond to such behavior is substantially limited. For one, trolls can effectively hit and run with relative ease. Also, some users consciously decide to "not feed the trolls" and don't reply to troll posts. Assuming that there is some value in the free exchange of ideas, erroneously dismissing an individual as a troll could mean that an opportunity for useful social exchange is missed. In other cases, the sheer distance between the speaker and the listener online encourages bystanders to be apathetic. Would these same individuals ignore this behavior if it were occurring on the sidewalk in front of them? If not, why? Is the impact of the behavior on society or on the receiver any less damaging online than in real life? What if listeners decided to change their behavior to avoid this kind of speech in the future? Couldn't this also result in negative social externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If small children witness such speech without also witnessing the social response, couldn't this negatively affect their behavior and development?
>
Many individuals, particularly those on the receiving end of trolling, may agree with Rae that both the frequency and intensity of inflammatory comments are heightened online as compared to real life. Indeed, in real life, fewer individuals would be willing to make any of the statements discussed in the first paragraph simply for the purpose of evoking a response. Not only is the threat of physical violence a potent deterrent, but the possibility of other types of social repercussions also constrains such behavior. An individual might be concerned, for instance, that the comment might reflect poorly on his character or jeopardize his job. Even where this is not the case, real life has an interpersonal element where the deliverer of such speech would likely have to look at his listeners and personally experience their reactions.
 
Changed:
<
Even if one were to try simply to ignore troll posts, as one might try to avoid inflammatory material found in other forms, online users will find that such posts can be hard to escape. The ease of access to the internet also means that inflammatory language can be, and often is, found in the midst of otherwise useful, entertaining or informative content. A user who browses user comments simply to gauge public reaction to a funny video or political debate, for instance, will almost certainly encounter inflammatory language.
>
On the internet, however, the ability of society to respond to such behavior is substantially limited. For one, trolls can "hit and run" with relative ease. Also, some users consciously decide to not "feed the trolls" and don't reply to troll posts. Assuming that there is some value in the free exchange of ideas, erroneously dismissing an individual as a troll could mean that an opportunity for useful social exchange is missed. In other cases, the sheer distance between the speaker and the listener online encourages bystanders to be apathetic. Would these same individuals ignore this behavior if it were occurring on the sidewalk in front of them? If not, why? Is the impact of the behavior on society or on the receiver any less damaging online? What if users changed their behavior to avoid this kind of speech in the future? Wouldn't this inevitably result in negative externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If small children, for example, witness such speech without also seeing the social response, couldn't this negatively affect their development and behavior in real life?
 
Changed:
<
The concepts of pseudonymity and anonymity also contribute to the discussion of trolling. By posting anonymously, an individual can avoid personal accountability and thus some of the abovementioned negative consequences of their actions. The ability to use a pseudonym similarly allows an individual to assume an online mask of sorts. To be sure, anonymity and pseudonymity serve important sociopolitical functions. The Supreme Court, in McIntyre v. Ohio Elections Commission, articulated this view when it wrote, "Anonymity is a shield from the tyranny of the majority...It thus exemplifies the purpose behind the Bill of Rights and the First Amendment in particular: to protect unpopular individuals from retaliation..." Freedom of speech is not absolute, however, and the type of speech referred to by the Court can arguably be distinguished from trolling, which is primarily designed to garner an emotional response.
>
Even if one were to try simply to ignore troll posts, as one might try to avoid inflammatory material found in other forms, she would likely find this difficult. The ease of access to the internet also means that inflammatory language can be, and often is, found in the midst of otherwise useful, entertaining or informative content. At present, a user who browses user comments simply to gauge public reaction to a funny video or political debate, for instance, will almost certainly encounter inflammatory language.
 
Changed:
<
In light of the problematic aspects of trolling noted above, it is perhaps unsurprising that companies like YouTube have already begun to brainstorm solutions. One new policy prompts commenters to change their username to their real name. Currently, it is still optional and users can decide to continue to post under their usernames if they provide a reason. The explanation “I’m not sure, I’ll decide later” is presently accepted as sufficient (cite). Such measures perhaps assume that individuals who elect to post identifying information may be less likely to post comments simply to get a response, and that those who do not elect to do so and proceed to post inflammatory material will be more easily distinguished from other users. While steps such as those above are unlikely to totally eliminate online behavior that is simply intended to inflame other users, they may help users distinguish such behavior from other types of expression and may actually enhance freedom of speech by reducing the push by some to ban trolling.
>
The concepts of pseudonymity and anonymity also contribute to the discussion of trolling. In posting anonymously, an individual can avoid personal accountability and thus some of the negative consequences of their actions. The ability to use a pseudonym similarly allows an individual to assume an online mask of sorts. To be sure, anonymity and pseudonymity serve important functions both online and off. The Supreme Court, in McIntyre v. Ohio Elections Commission, articulated this view when it wrote, "Anonymity is a shield from the tyranny of the majority...It thus exemplifies the purpose behind the Bill of Rights and the First Amendment in particular: to protect unpopular individuals from retaliation..." Freedom of speech is not absolute, however, and the type of speech referred to by the Court can arguably be distinguished from trolling, which is primarily designed to garner an emotional response.
 
Added:
>
In light of the problematic aspects of trolling noted above, companies like YouTube have already begun to brainstorm solutions. One new policy prompts commenters to change their username to their real name. Currently, it is still optional and users can decide to continue to post under their usernames if they provide a reason. The explanation “I’m not sure, I’ll decide later” is presently accepted as sufficient (cite). Such measures perhaps assume that individuals who elect to post identifying information may be less likely to post comments simply to get a response, and that those who do not elect to do so and proceed to post inflammatory material will be more easily and correctly identified as trolls. It might be useful for sites to go a step further and give users the option to see only posts from users who choose to identify themselves. While steps such as those above are unlikely to totally eliminate trolling, they may help users distinguish such behavior from other types of expression and may actually enhance freedom of speech by reducing the efficacy of the movement by some to ban trolling altogether.
 
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable.

ShakimaWellsSecondPaper 1 - 15 Nov 2012 - Main.ShakimaWells
Line: 1 to 1
Added:
>
META TOPICPARENT name="SecondPaper"

-- By ShakimaWells - 14 Nov 2012

When the hit internet series Awkward Black Girl won the Shorty Award, one user tweeted in response, "Anthony Cumia lost to a N*ggerette!!!" Another wrote, "ThingsBetterthanAwkwardBlackGirl: The smell coming from Treyvon Martin." And still another quipped, "Congrats on winning, do you get 3/5 of an award?" Noting the difference between these comments and her real-life experiences, Issa Rae, the show's creator, wrote,

The crazy thing is, I've actually never been called a n*gger in real life...In the online world, however, which is where most of my social life resides, the word is tossed about as freely as it was in the 50s and 60s. Users hide comfortably behind their computer screens and type the most obnoxious, offensive things that they can think of and wait for a response (cite).

The racially inflammatory speech directed toward Rae can be said to fall under the much broader definition of trolling, or online speech designed to provoke an emotional response. In the US, such expression (assuming that it doesn't venture into the realm of illegality) is protected under the First Amendment right to freedom of speech. In light of the well-known crudity of some internet commentary, this paper argues that 1) trolling as exhibited by the above comments is problematic and 2) private entities should take measures to encourage users to post identifying information in an effort to enhance personal responsibility for such behavior.

Many individuals, particularly those on the receiving end of trolling, may agree with Rae that both the frequency and intensity of inflammatory comments are heightened online as compared to real life. Indeed, in real life, fewer individuals would be willing to make any of the statements discussed in the first paragraph simply for the purpose of evoking a response. Not only is the threat of physical violence a potent deterrent, but the possibility of other types of social repercussions also constrains such behavior. An individual might be concerned, for instance, that the comment might get back to his place of work or affect his standing within his community. Even if this were not the case, delivery in real life might be constrained by the interpersonal fact that the deliverer of such speech would likely have to look at his listeners and personally experience their reactions.

On the internet, however, the ability of society to respond to such behavior is substantially limited. For one, trolls can effectively hit and run with relative ease. Also, some users consciously decide to "not feed the trolls" and don't reply to troll posts. Assuming that there is some value in the free exchange of ideas, erroneously dismissing an individual as a troll could mean that an opportunity for useful social exchange is missed. In other cases, the sheer distance between the speaker and the listener online encourages bystanders to be apathetic. Would these same individuals ignore this behavior if it were occurring on the sidewalk in front of them? If not, why? Is the impact of the behavior on society or on the receiver any less damaging online than in real life? What if listeners decided to change their behavior to avoid this kind of speech in the future? Couldn't this also result in negative social externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If small children witness such speech without also witnessing the social response, couldn't this negatively affect their behavior and development?

Even if one were to try simply to ignore troll posts, as one might try to avoid inflammatory material found in other forms, online users will find that such posts can be hard to escape. The ease of access to the internet also means that inflammatory language can be, and often is, found in the midst of otherwise useful, entertaining or informative content. A user who browses user comments simply to gauge public reaction to a funny video or political debate, for instance, will almost certainly encounter inflammatory language.

The concepts of pseudonymity and anonymity also contribute to the discussion of trolling. By posting anonymously, an individual can avoid personal accountability and thus some of the abovementioned negative consequences of their actions. The ability to use a pseudonym similarly allows an individual to assume an online mask of sorts. To be sure, anonymity and pseudonymity serve important sociopolitical functions. The Supreme Court, in McIntyre v. Ohio Elections Commission, articulated this view when it wrote, "Anonymity is a shield from the tyranny of the majority...It thus exemplifies the purpose behind the Bill of Rights and the First Amendment in particular: to protect unpopular individuals from retaliation..." Freedom of speech is not absolute, however, and the type of speech referred to by the Court can arguably be distinguished from trolling, which is primarily designed to garner an emotional response.

In light of the problematic aspects of trolling noted above, it is perhaps unsurprising that companies like YouTube have already begun to brainstorm solutions. One new policy prompts commenters to change their username to their real name. Currently, it is still optional and users can decide to continue to post under their usernames if they provide a reason. The explanation “I’m not sure, I’ll decide later” is presently accepted as sufficient (cite). Such measures perhaps assume that individuals who elect to post identifying information may be less likely to post comments simply to get a response, and that those who do not elect to do so and proceed to post inflammatory material will be more easily distinguished from other users. While steps such as those above are unlikely to totally eliminate online behavior that is simply intended to inflame other users, they may help users distinguish such behavior from other types of expression and may actually enhance freedom of speech by reducing the push by some to ban trolling.


You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable. To restrict access to your paper simply delete the "#" character on the next two lines:

Note: TWiki has strict formatting rules for preference declarations. Make sure you preserve the three spaces, asterisk, and extra space at the beginning of these lines. If you wish to give access to any other users simply add them to the comma separated ALLOWTOPICVIEW list.


Revision r6 - 23 Aug 2014 - 19:33:51 - EbenMoglen
Revision r5 - 31 Mar 2013 - 23:00:52 - EbenMoglen
Revision r4 - 05 Mar 2013 - 17:12:03 - ShakimaWells
Revision r3 - 28 Jan 2013 - 17:40:49 - EbenMoglen
Revision r2 - 15 Nov 2012 - 07:14:23 - ShakimaWells
Revision r1 - 15 Nov 2012 - 04:47:15 - ShakimaWells