Law in the Internet Society
-- MaxE - 20 Nov 2023


The U.S. government should regulate bot presence on social media to combat content oversaturation and censorship on social media platforms before establishing other, less important, regulations. This term, the Supreme Court will decide whether social media sites have a First Amendment right to choose which information they publish on their websites. Currently, social media companies remove, demote, or hide lawful content to minimize speech (i) that the business does not want to be associated with, (ii) that puts off consumers or advertisers, and (iii) that is of little interest or value to users. There is a perception among consumers that social media platforms disproportionately silence conservative voices, so some states want to regulate platforms’ curatorial decisions to “prohibit censorship.” If the Supreme Court rules against social media companies, lawmakers will replace the private entities’ editorial voice with government-dictated preferences. This is the wrong issue for the Supreme Court to decide regarding social media because state-imposed censorship does nothing to solve the bot infestation problem online. In this paper, I argue that (i) bots constitute a significant issue driving content oversaturation and censorship on social media platforms today, and (ii) lawmakers should impose maximum bot-presence thresholds and audit social media platforms with bot-detecting algorithms to enforce those restrictions.

Bots constitute a significant issue driving content oversaturation and censorship on social media platforms.

Today, the abundance of fake accounts on social media intrudes upon the free exchange of ideas, particularly when users cannot tell whether they are talking to other organic users or to manipulative bots. Twitter and Facebook have publicly claimed that false or spam accounts make up a mere 5% of daily users. In 2022, Professor Soumendra Lahiri and Dhrubajyoti Ghosh tested this representation using an algorithm that identifies inorganic users with up to 98% accuracy. Although their data set was limited to two million tweets collected over a six-week period, Ghosh and Lahiri found that bots account for 25–68% of Twitter users, depending on the time and the issues being discussed. Brenda Curtis, an investigator for the NIH’s Intramural Research Program, conducted another study in which she asked humans and algorithms to detect social bots online. Social bots are bots that pose as humans and mimic human behaviors, like excessive posting or tagging other users, to emulate and alter the behavior of real people. Human participants correctly identified social bots less than 25% of the time, far less often than the algorithms did. Curtis’s results suggest two things. First, organic users do not always know whether they are interacting with fake or real accounts. Second, bot-detection algorithms can address bot-driven content oversaturation and censorship on social media platforms better than humans can. Therefore, the federal or state governments should mandate disclosure requirements that permit third-party audits and produce bot/human ratios on social media platforms instead of creating biased preference lists. Preference lists make for good optics in regulating social media, but they will not fix the real issue. Furthermore, the Constitution empowers legislators to regulate bot-generated content on social media because the First Amendment expressly protects the rights “of the people,” including free speech. Bots are not people. Therefore, they have no right to free speech, and lawmakers should incentivize their removal from online platforms.
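The kind of audit proposed above can be pictured with a minimal sketch: score accounts on behavioral features (posting volume, cadence, tagging), then report the fraction flagged as a bot/human ratio. The feature thresholds and weights below are illustrative guesses, not values drawn from the Lahiri/Ghosh or Curtis studies.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Account:
    posts_per_day: float
    mean_seconds_between_posts: float
    tags_per_post: float

def bot_score(a: Account) -> float:
    """Toy heuristic score in [0, 1]; higher means more bot-like.
    All thresholds are hypothetical, chosen only for illustration."""
    score = 0.0
    if a.posts_per_day > 50:                 # excessive posting volume
        score += 0.4
    if a.mean_seconds_between_posts < 30:    # inhumanly fast cadence
        score += 0.3
    if a.tags_per_post > 5:                  # aggressive tagging of other users
        score += 0.3
    return score

def bot_human_ratio(accounts: Iterable[Account], threshold: float = 0.5) -> float:
    """Fraction of audited accounts flagged as likely bots."""
    accounts = list(accounts)
    flagged = sum(1 for a in accounts if bot_score(a) >= threshold)
    return flagged / len(accounts)
```

A real auditor would replace the hand-set rules with a trained classifier, but the disclosure requirement argued for here only needs the final ratio such a tool produces.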

Lawmakers should mandate disclosure requirements and conduct third-party algorithmic audits on social media platforms to decrease bot presence online.

Social media platforms already have their own systems for identifying and removing bot accounts, and governments can learn from these models to audit bot presence online effectively. Specifically, social media companies reverse engineer the data sets of known bots and use them to train new bot-detection algorithms. One bot-detection technique is the social honey pot: a host-site-generated fake account that mimics the profile of an organic user to lure spam bots. When spam bots interact with honey pots, company researchers identify and deactivate them. Unfortunately, the current regulatory landscape for social media permits the concealment of bot-protection measures when it should promote practices that increase company-investor transparency and user safety. Companies have many reasons to misrepresent the prevalence of bots on their platforms and will not stop doing so without the threat of sanctions. For example, Elon Musk famously tried to pull out of his $44 billion acquisition of Twitter because he believed that “20% fake/spam accounts, while 4 times what Twitter claims, could be much higher” than expected. His “offer was based on Twitter’s SEC filings being accurate,” and he considered Twitter’s misrepresentation materially adverse to his $44 billion bid. If the SEC imposed disclosure requirements and mandated regulatory approvals involving bot/human ratios, Musk might have paid less to acquire Twitter or terminated the agreement entirely after the SEC’s audit. Additionally, if the SEC found Twitter liable for misrepresenting its bot/user ratio to the public, the agency could sanction Twitter. Regulatory action like this could incentivize host-company bot extermination and encourage user protection. For these reasons, lawmakers should regulate the industry accordingly to protect future investors and users rather than handing the reins over to biased legislatures that answer to constituents.
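The honey-pot mechanism described above reduces to a simple pattern: decoy accounts offer nothing a human would engage with, so any account that interacts with one is presumptively automated and queued for review. The sketch below captures only that luring step; the class and account names are hypothetical, not drawn from any platform’s actual tooling.

```python
class HoneyPotAudit:
    """Minimal sketch of social honey-pot bot detection: interactions
    with decoy accounts flag the interacting account as a suspect."""

    def __init__(self, decoy_ids):
        self.decoys = set(decoy_ids)    # host-site-generated fake profiles
        self.suspects = set()

    def observe(self, actor_id: str, target_id: str) -> None:
        # Record any account that follows, replies to, or tags a decoy.
        if target_id in self.decoys:
            self.suspects.add(actor_id)

    def flagged(self) -> list:
        # Candidates for researcher review and deactivation.
        return sorted(self.suspects)
```

In practice, platforms combine honey-pot hits with trained classifiers before deactivating anything; an auditor given the disclosure access argued for here could run the same loop from outside the company.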


In conclusion, the U.S. government should regulate bot presence on social media to combat bot-driven content oversaturation, censorship, and propaganda before establishing other, less important, regulations. The current preference-list approach at the state level will fail to solve these issues because social bots can amplify or suppress content volume on social media platforms. Additionally, the current regulatory landscape forecloses practical regulatory tools for auditing and imposing bot-presence thresholds on social media companies to protect investors and users. Researchers at MIT note that the problem of inaccurate bot detection stems from a lack of transparency. The federal and state governments should require companies like Twitter, Facebook, and Instagram to be more transparent with their data so that auditors can create reliable, comprehensive bot-detection algorithms. This way, the government could successfully regulate social media by decreasing bot presence and protecting the free speech of organic users.

Word Count: 996

Sources
1) Jennifer Stisa Granick & Vera Eidelman, The Supreme Court Will Set an Important Precedent for Free Speech Online, ACLU (Oct. 19, 2023)
2) U.S. Const. amend. I.
3) Shawn Ballard, Are bots winning the war to control social media?, Wash. Univ. Dep’t of Political Science (Nov. 1, 2022)
4) Brenda Curtis, McKenzie Himelein-Wachowiak & Salvatore Giorgi, Bots and Misinformation Spread on Social Media: Implications for COVID-19, Journal of Medical Internet Research (Dec. 5, 2021)
5) Dylan Walsh, Study finds bot detection software isn’t as accurate as it seems, MIT Sloan School of Management (June 12, 2023)
6) Jon Porter, Elon Musk says Twitter deal “cannot move forward” until it proves bot numbers: Tesla CEO says fake / spam accounts could make up “much more” than 20 percent of users, The Verge (May 17, 2022) https://www.t

Not mentioning the First Amendment is something of a drawback to the credibility of the argument, Max. As is retailing bullshit from Elon Musk. If no one uses a stupid service except bots, why exclude the bots?

The simple question at least deserves an answer: Who cares? Not using platform services is obviously better than using them. No one makes you use them. Why should we allow ourselves to waste the public force on the regulation of conduct we could simply replace non-coercively with better alternatives?

In future, please just replace old revisions with new ones, rather than making a new topic. The wiki preserves all history of every page, and changing topics from revision to revision breaks my tools.



r3 - 08 Jan 2024 - 18:43:39 - EbenMoglen