Law in the Internet Society

Legal Internet Use by Terrorists

Introduction

One of the greatest features of the internet is that it is a powerful tool available to everyone. However, when such a tool falls into the wrong hands, there is little that regulators can do without threatening the very features that give the internet its value. This short essay considers the inherent dilemma of choosing between an internet that is free and open, and therefore available for evil purposes, and one that is regulated to prevent such abuse.

I’m not talking about actual attacks carried out via information networks (i.e. cyberterrorism), as such behavior is simply destructive regardless of who is doing it. The question really gets interesting where regulation would require some kind of discrimination between users: for instance, allowing online community organization efforts by the Obama campaign but not by Al Qaeda (see the comparison below). The ultimate question I seek to answer is whether peaceful and otherwise legal uses of the internet should be regulated based on the user’s purpose. I believe that unless violence or other extreme circumstances are involved, the internet should remain as open and powerful a tool as possible.

Features of the Internet

The following list briefly considers (a) the internet’s value to both good and evil organizations and (b) what may be lost or significantly altered by counterterrorist regulation efforts.

  1. Neutrality: There are far fewer discriminatory institutional and technological hurdles online than there are elsewhere. Size and stature do not matter; a personal blogger is as “online” as powerful corporate and government bodies.
  2. Vast audiences: Once online, any website is available to all of the 1.5 billion people on the internet.
  3. Financial and media capabilities: Online interactions are not limited to text or even hypertext. The internet offers a complete multimedia environment, as well as the ability to conduct financial transactions.
  4. Cheapness and anonymity: An online presence does not cost very much to create or maintain, and can be done by virtually anyone in an automated and anonymous manner.

Legal Uses of the Internet

Below is a breakdown of some legal uses of the internet that can be either devastating or beneficial depending on the user. For comparison, I discuss how terrorists use the internet to further their violent goals in largely the same manner that grassroots campaigns further democratic principles during elections.

  1. Recruitment: Terrorists conduct a large and increasing percentage (80%[1]) of recruitment online; the internet allows recruiters to connect personally with individuals across the globe, channeling rage that would otherwise have dissipated in isolation. Campaigns, as the recent Obama effort showed, use the internet to cheaply and effectively harness the will of thousands of volunteers, many of whom would have wanted to help but simply would not have known how.
  2. Publicity/Propaganda: Both kinds of groups use the internet to enhance their image and argue their side, with terrorists often adopting the language of persecution and nonviolent resolution to lend themselves legitimacy. This is particularly useful for those who lack access to traditional forms of media.
  3. Fundraising: Both solicit financial donations online, either directly or through legitimate allies.
  4. Networking/Coordination: Complex, real-time communication allows for a decentralized structure, leading to efficient outsourcing and greater cooperation between different groups. It also lets specific activities be coordinated more precisely.

Discussion and Conclusion

Counterterrorism is in many ways simply a line-drawing exercise: deciding what we are willing to sacrifice in order to preserve our safety. For instance, while we are not ready to ban air travel completely, we are willing to submit to restrictions on what we can carry onto a plane.

In a recent microcosm of the issue, Senator Joe Lieberman demanded that YouTube remove all videos posted by identified terrorist organizations. YouTube responded by removing only those videos violent enough to violate its own community standards, defending both the organizations’ right to post legal, nonviolent material and the benefit society derives from a diverse range of views.

I tend to agree with YouTube’s approach of filtering by content rather than by user, and believe that content should be regulated only where there is violence, or an imminent threat of violence, in relation to that content. This is an extremely narrow standard, and I do not believe recent history offers a clear example of content that would have met it. A hypothetical example of something I would block: the Department of Defense has clear intelligence that a certain terrorist strike will be triggered by the appearance of certain text on a website. Content alone would rarely if ever satisfy the standard; even if the text read "ATTACK NOW!", that by itself would not give us the requisite intelligence to conclude beyond a reasonable doubt that it is in fact the trigger for a real attack.

First, when it comes to criminal and terrorist activity, we should regulate the violence offenders may use to achieve their goals, rather than their beliefs alone. We do not fear people for what they believe or desire; we fear what they might do to us in pursuit of those goals.

Second, a line drawn too far would simply do more harm than good for national security. Regardless of intent, prohibiting even peaceful communications amounts to outright censorship, which would only strengthen the resolve and hatred of terrorists and their potential supporters. Also, the automated and decentralized nature of the internet would turn most efforts at regulation into a futile game of Whac-A-Mole, with new sites popping up every time another is quashed. Any gain in security achieved by such measures would likely be far outweighed by the sympathy and support terrorists would receive as a result of such censorship.

Third, I believe the freedom and neutrality that the internet represents are worth protecting. In many ways the internet is a technological embodiment of democratic and free-speech ideals, right now allowing me to publish my uncensored thoughts for the whole world to see, regardless of who I am or what I believe. Moreover, any filtering method that successfully blocked all content from certain users would have to be extremely invasive and likely overbroad (given the “Whac-A-Mole” problem stated above). And at the end of the day, the internet would be subject to the discretion of an unelected and largely unregulated body of decisionmakers.

Finally, from a constitutional standpoint, such a line does not tread very much on First Amendment free speech. In fact, the standard requires positive intelligence that the blocked "speech" serves mainly, if not solely, as a mechanism rather than as the expression of a personal belief or opinion. This type of speech is far from the core of what the Constitution attempts to protect. The terrorists could use numbers or random coded symbols to achieve the same goal; any "speech-like" characteristics serve a subordinate if not irrelevant purpose. Given (a) that the number of cases of blockable speech would be extremely small (if not zero), (b) that the speech itself would not be for the purpose of expressing opinions or ideas, and (c) that blocking it, given the requisite intelligence, may help prevent serious harm and death to many citizens, I believe this extremely narrow standard falls far from the core protections of the First Amendment.

National security is and should remain a top priority of our nation. However, any line drawn beyond violent content or imminent attack would not only be futile, but actually detrimental to both our security and the principles we’ve fought for since the inception of our country.


[1] - See Bobbitt, Philip, Terror and Consent: The Wars for the Twenty-First Century (New York: Alfred A. Knopf, 2008), p. 56.

-- StevenHwang - 18 Nov 2008

Steven, I'm a bit confused by where you draw the line. You seem to be saying that violent footage should be taken down, but you're also arguing that filtering would be overbroad and subject to the abuse of an unregulated body. How do you reconcile these two statements? Even if you're only filtering for violent content, you still face the same general ills of filtration. Furthermore, how does a filter distinguish between CNN footage and an Al Qaeda beheading distributed for fear-mongering purposes? And in that case, should there necessarily be any filtration at all? Isn't that a fact for the world to see and scholars and politicians to cite?

Or maybe you want this line to apply only to content aggregators like YouTube, but not to the Internet at large? That would clear up the logical inconsistency in your statements, but I'm not sure that's a proper solution, either.

-- KateVershov - 05 Dec 2008

You're right, I don't think I was very clear on the line I drew.

I think the line should be drawn where the content actually causes or creates imminent physical violence (as opposed to simply depicting it). It would be very difficult to meet this standard; it would have to be something on the magnitude of outright instruction. For instance, if a terrorist organization were to post a call for its followers to riot at a particular time on a particular street, or to assassinate a certain official, we should be able to take that down. I would take this even further: if the message were hidden, e.g. if the content were ostensibly innocuous but the government had very good reason to believe it was some kind of "trigger" for a coordinated attack, we should also be able to take that down. On the other hand, if they say that America is bad or simply depict violent actions, I might dislike them, but I do not think the content should be taken down.

YouTube's own community standard is much stricter than mine, but I agree with their filtering by CONTENT rather than by USER.

I'll clear this up in my rewrite. Thanks for the comment!

-- StevenHwang - 06 Dec 2008

Steven, have you considered the locality of the servers in your argument? If the content in question is located on a server outside of the United States, should the government still be able to take it down (assuming arguendo that it could do so)? Should other nations have the right to do the same with content on US servers that they feel incites violence against or by their citizens? What about differing views on what constitutes "imminence"? That is a concept that already leads to difficulties, both in our own criminal justice system's analysis of self-defense and in regard to the United Nations Charter and lawful military action. See Bobbitt, Philip, Terror and Consent: The Wars for the Twenty-First Century (New York: Alfred A. Knopf, 2008), p. 452.

-- JohnPowerHely - 09 Dec 2008

Steven, I hope you won't mind if I follow up on your response to Kate above. In your comment, you say that you would take down content that creates imminent physical violence. This made me think of Jyllands-Posten's cartoons (http://en.wikipedia.org/wiki/Jyllands-Posten_Muhammad_cartoons_controversy), which sparked riots resulting in over 100 deaths. Would you take these down? Does the user/intent ever matter? What if similar cartoons were posted by a terrorist group in hopes of causing rioting?

-- DavidHambrick - 10 Dec 2008

Steve, I posted a comment a few days ago, but for some reason it did not show up on the page. I don't remember exactly what I wrote (perhaps it is recoverable from the server?), but I will try again here. For the most part I agree with your paper. I do think that regulation would be mostly futile and could also lead us down a slippery slope. Who is deciding what to censor? I do, however, think that clear attempts to instigate violence can be easily identified. But, as David writes above, there can certainly be gray areas. The beauty of an open-forum website like YouTube is that people can leave comments or post video responses to debunk hate speech, correct misinformation, etc. The problem is that video posters can choose to not allow comments and can block video responses. Do you think that a site like YouTube could effectively deal with controversial videos (that might not be inherently violent) by mandating that all comments be allowed and all video responses be linked?

-- MarcRoitman - 10 Dec 2008

I have no idea why my comments aren't posting. Here is my response to John's and David's comments:

Thanks for the comments guys.

John, regarding the locality of the servers, I think it would depend on how serious the threat is, i.e. the ever-elusive concept of "imminence." I would say that if the content were "the signal" for a coordinated attack or something like that, by all means we should block it to the extent we can. Even if we don't have access to foreign servers, we could block the incoming traffic a la China/Google.

As for imminence, I think we really have to decide it on a case-by-case basis, within certain guidelines. It absolutely does lead to difficulties, but such difficulties are not foreign to criminal justice or counterterrorism. That does not mean it should be a blank slate, either. I think "imminent violence," as it relates to online content, should be defined as violence that will take place in the future (especially the near future) with a high degree of certainty unless the content is removed.

As alluded to above, I think the trigger scenario (think "Relax" in Zoolander) is the clearest example of something that falls into this category. An angry person or group saying that America deserves to be bombed does not. Websites that recruit terrorists tread the line: recruitment is not violence itself, but it might lead to violence in the future. For this scenario, I would say we should not block general recruiting, but we should block recruiting that our other intelligence tells us is for the specific purpose of something big that will happen soon. A lot of the line-drawing really depends on what other information we have, and how certain we are, given all the circumstances, that taking down the content will save lives that may otherwise be lost.

David, regarding the cartoons: thanks for bringing these up. I did not know about the situation, but it is a very interesting question given my paper. As stupid as I think the cartoons are, such speech should generally be protected absent an imminent threat. It's a complex question and a real slippery slope. The violence was an unfortunate and unintended consequence of the paper's actions. If they had planned to repeat the cartoons or make a series out of them, then it would have been appropriate for the powers that be to have a talk with them and let them know that that's not a good idea. Once you've done something that has already caused violence, you don't have the same liberty to do it again; you can't claim non-intent if you know what will happen. But it looks like the cartoons were printed only once, and beyond that it was other outlets that were reprinting them. The riots were sudden outbursts rather than planned attacks. The situations I have in mind for taking down content are those where you know something will surely happen unless the content comes down; posting tasteless political cartoons does not fall into this category absent that knowledge.

With regard to your question about user and intent, I think actual effect is always more important. For instance, I might protect terrorists' cartoons that were trying to start violence (but failing miserably), yet take down political cartoons if we knew they were definitely going to cause violence.

-- StevenHwang - 12 Dec 2008

Marc, thanks for the comment, and yeah, I don't know why, but half of my comments never show up either. I've just decided to edit the pages directly from now on. Really irritating...

Anyway, as for your question about who decides what to censor, I think that should be the Department of Defense. They are the ones with the intelligence needed to determine what is and is not actually potentially violent. Under "my regime," at least, most if not all cases of violent content will require some other intelligence to prove that there is an imminent threat. Rarely could anything meet this standard by its content alone; even a direct instruction posted online would need some kind of intelligence to establish whether it's credible.

I think allowing comments on content that is not violent enough to necessitate taking down, but is still quite controversial, is a very creative and perhaps effective approach to the problem. If a place like YouTube removed users' ability to block all comments and video responses, it would definitely increase the amount of discourse around controversial postings and topics. It might even turn people's reactive anger (e.g. over the cartoons described in David's post) toward a more positive method of discourse.

That being said, there may be some other concerns involved as well. For instance, if a terrorist organization were extremely offended by the video responses to its content, that might simply fuel its fire. Also, if I were to post a personal video from my wedding for my friends and family to see, and someone out there linked a crazy NSFW video to it, I might want the ability to unlink those kinds of videos and comments.

All in all, it would be good for the ideal of community and discourse if sites like YouTube and other content-hosting services had at least some form of commenting available for all of their postings. Personally, I would propose the following: you cannot disable comments and video responses; however, if a response arises that offends you or that you just don't want up, you can hide it. The poster of that comment can then appeal, and someone from YouTube makes the decision (with a very strong bias toward respecting the author's wishes).

-- StevenHwang - 12 Dec 2008

  • So after you finish advocating military censorship of websites, are we going to discuss the United States Constitution eventually?

I have edited my paper to include some of the discussion in the comments as well as a brief discussion of the First Amendment.

-- StevenHwang - 09 Feb 2009

 
