Law in the Internet Society

Does Censorship with Artificial Intelligence kill our internet communities?

-- By TorahikoMasutani - 09 Dec 2021

1. Censorship by private companies in internet communities

In the past, censorship in internet communities was mainly conducted by government authorities. These days, however, many private companies also conduct censorship in the internet communities that they provide to the public. For instance, Facebook removes millions of policy-violating posts and accounts from its app every day, and most of those removals are performed automatically with artificial intelligence technology (Meta Platforms, Inc., “Community Standards Enforcement Report Q3 2021”, 2021; Meta Platforms, Inc., “AI advances to detect hate speech better”, May 12, 2020; Khari Johnson, “Facebook is using more AI to detect hate speech”, May 12, 2020). Moreover, Intel announced “Bleep”, an artificial intelligence program that censors hate speech in real-time voice chat during gameplay (Ana Diaz, “Intel responds to hate speech tool getting roasted by the internet”, April 9, 2021). Almost all of us, especially those who live in countries with highly developed internet infrastructure, rely heavily on services provided by such tech giants. We use Google's search engine and Facebook, Instagram, Twitter, and other social media every day to receive information from others and to transmit our own. Censorship conducted by such private companies therefore also does serious harm to our rights.

Even in the past, some traditional media, such as newspapers, practiced self-censorship when publishing information. The censorship now conducted by social media, however, differs from such traditional self-censorship. The First Amendment includes the right to remain silent and thus supports self-censorship in the sense that a media organization may decide to refrain from speaking or publishing; it does not, however, include a right to force other parties to be silent (Symposium on Censorship & the Media: Media Self-Censorship: Self-Censorship and the First Amendment, 25 Notre Dame J.L. Ethics & Pub. Pol'y 13). Social media platforms outsource the vast majority of moderation to armies of overseas contractors who screen flagged information and make judgment calls based on guideline compliance; the rest is left to algorithms (Free Speech on Privately-Owned Fora: A Discussion on Speech Freedoms and Policy for Social Media, 28 Kan. J.L. & Pub. Pol'y 113, 120).

The Supreme Court regards social media as the most powerful free speech vehicle available to citizens and as forums deserving of constitutional protection (Packingham v. North Carolina, 137 S. Ct. 1730, 1735 (2017)). Some commentators describe social media as a modern-age public forum (28 Kan. J.L. & Pub. Pol'y 113). At the same time, the Court does not allow viewpoint discrimination: "[t]he public expression of ideas may not be prohibited merely because the ideas are themselves offensive to some of their hearers." E.g., Street v. New York, 394 U.S. 576, 592 (1969). Social media platforms nevertheless depart from this doctrine and remove, for example, nude photographs, which can impact the arts, sexual education, gender politics, and other meaningful speech (Marjorie Heins, The Brave New World of Social Media Censorship, 127 Harv. L. Rev. F. 325, 326 (2014)). On the other hand, conferring free speech rights on social media users may allow indecent and hateful expression to reach sensitive ears and eyes, and such speech could negatively affect users and, thus, business (28 Kan. J.L. & Pub. Pol'y 113). The Communications Decency Act even encourages the restriction of constitutionally protected speech. 47 U.S.C. § 230(c)(2) (2018).

What is the distinction between censorship and editing? Is every newspaper's "letters to the editor" censored? Without some precision here the argument loses most of its credibility.

2. Censorship with Artificial Intelligence by private companies

Recently, many social media platforms have introduced artificial intelligence technologies to conduct censorship in the communities they manage. For instance, Mike Schroepfer, Facebook's chief technology officer, states that "[a] central focus of Facebook's AI efforts is deploying cutting-edge machine learning technology to protect people from harmful content…Our goal is to spot hate speech, misinformation, and other forms of policy-violating content quickly and accurately, for every form of content, and for every language and community around the world." (Sam Shead, “Facebook claims A.I. now detects 94.7% of the hate speech that gets removed from its platform”, November 19, 2020). Such censorship is indeed helpful to private companies as “forum providers.” But is it good for internet communities and for us as “forum participants”? I do not believe so.

It is true that, as social media has expanded, more and more people have raised concerns about witnessing and experiencing the toxicity of hate speech, sexual expression, and other offensive expression, and it is essential to address those concerns. Such expression sometimes leads to severe problems like cyberbullying (Cyberbullying Research Center, “Tween Cyberbullying in 2020”, 2021). It is also acceptable for forum providers to ask their participants to engage in moderated conversation.

However, if forum providers are allowed to decide what can and cannot be expressed on their platforms, expression in the forum becomes something quite different from free and liberal expression. Is there any free and liberal expression in an internet community whose users can access only a highly censored, monitored, and manipulated version of one another's expression? I think not. A platform censored by its providers can easily become a forum controlled by the providers' self-interest, distorting discussion among people in internet society.

This issue becomes much more severe when speech is restricted on the basis of viewpoint. Artificial intelligence enables providers to censor expression in the forum on a larger scale than ever before, and its use also carries the risk of broad viewpoint discrimination. Moreover, the risk of over-broad censorship is not currently mitigated by after-the-fact review. For instance, Facebook allows appeals for censored pages and profiles, but not for posts, and its response to an appeal is a terse, vague explanation of the violative content and of the company's quest to promote an inclusive environment (28 Kan. J.L. & Pub. Pol'y 113, 120).
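To make the over-blocking risk concrete, consider a deliberately simplified sketch of the crudest form of automated moderation, a keyword filter. The blocklisted word and the sample posts are hypothetical, and real platform systems are far more sophisticated, but the sketch illustrates the structural problem: a filter that cannot read context removes counter-speech condemning a slur along with the slur itself.

```python
# Toy illustration (hypothetical blocklist and posts, not any
# platform's actual system): a naive keyword filter flags any post
# containing a blocklisted word, regardless of how the word is used.
BLOCKLIST = {"vermin"}  # hypothetical slur appearing in both posts below

def flags_post(text: str) -> bool:
    """Return True if any blocklisted word appears in the post."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

attack = "Immigrants are vermin."                          # hate speech
counter = 'Calling immigrants "vermin" is dehumanizing.'   # criticism of it

print(flags_post(attack))   # True
print(flags_post(counter))  # True: the counter-speech is removed too
```

Both posts are flagged, so the viewpoint that criticizes the hate speech is suppressed together with the hate speech itself; only human judgment or education of participants, not scale, resolves that ambiguity.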

Once again, moderated conversation is hardly censorship, or the absence of "free and liberal" thought, unless classrooms are to be regarded as unfree and illiberal. More precision is necessary.

So far as "AI" (which is not artificial intelligence) is concerned, what difference does it make what tools the editor uses? If I use a pattern-marching search or concordance tool in the course of editing this wiki, does that change anything significant, and if so, why?

3. Conclusion - Community without Censorship by providers / with self-regulation by participants

Thus, I think censorship in internet forums should be rejected when it is conducted under the self-imposed control of forum providers. I do believe that hate speech and other harmful expression should be excluded from internet forums, but moderated conversation should be promoted not through technologies such as artificial intelligence but primarily through the education of all forum participants. Forum providers should conduct only the minimum censorship essential to prevent harms such as hate speech, and should not impose broad restrictions on expression in the forum by using artificial intelligence.



r3 - 24 Jan 2022 - 01:59:34 - TorahikoMasutani