Law in the Internet Society

Regulating Bot Use on Media Platforms

-- By KjSalameh - 20 Nov 2020

Introduction

As online bot use has increased in recent years, so has national concern about its harmful applications. There has been mounting pressure for legislation to regulate the use of bots on social media platforms, and rightfully so; it is worthwhile to mitigate harm to unaware Internet users. The general principle motivating proposed and enacted legislation is to reduce the deceptiveness of social bots. Without an established classification scheme for bot use, however, policy is likely to be either underinclusive or overbroad. Below, I discuss obstacles to current approaches, identify what is needed to progress, and offer a classification system to guide effective regulation.

Current Approaches and Issues

Current and proposed legislation targeting bot use broadly implicates social media platforms, placing the onus on platforms to identify and remove bot accounts. Other legislation requires no such action from the platforms, but instead forces bot operators to disclose their bot use. Both routes are flawed.

The former runs into issues with §230 of the Communications Decency Act. The sweeping discretion §230 gives media platforms to control what content is allowed to exist on their sites makes it extremely difficult for the law to intervene effectively. One solution would be to amend §230 and carve out an exception for bot use. While such carve-outs are not unprecedented, the law should proceed with appropriate caution here. An exception must be narrow enough not to demand unreasonable foresight from platforms; otherwise, unsophisticated media providers could be held liable simply because they lack the technical capacity to adequately remove malicious bots.

The second route runs into what I call the remote origination problem: the actors behind online bot use may be so distant from any infringing bot activity that regulating the actors themselves is practically impossible. It is unclear how a regulatory agency could impose liability on remote actors or foreign entities. Russian interference in the 2016 U.S. election, and the Department of Justice's futile attempts to prosecute the responsible entities, epitomize this problem.

The Way Forward

The only viable solution to the remote origination problem is to enforce regulation through the social media platforms themselves. One method is to develop public reporting guidelines that incentivize platforms to self-regulate harmful bot use. These could require platforms to publish annual reports on the prevalence of content that violates their own self-developed content policies. This approach would still allow providers to moderate their own platforms, but would hold them to higher standards of reporting on the results of that moderation. The guidelines could account for the scale of regulated platforms, placing a heavier burden on those more capable of self-regulation; in effect, more pressure on platforms such as Facebook and Twitter, while sparing "Joe's Blog." While this option avoids §230 issues altogether, it relies on a potentially tenuous deference to platforms' willingness to self-regulate, and so risks being underinclusive and inconsistent across platforms.

Another method of enforcing regulation through media platforms while solving the remote origination problem is to require design features on social networks that minimize undesired bot use. This would not run afoul of §230 protections for social media providers, but would require platforms to build structural design systems that conspicuously disclose malicious bots. The FTC could enforce such a requirement by alleging that a failure to implement it constitutes an unfair practice under Section 5 of the FTC Act.

Enforceability

A combination of the above two methods seems most appropriate to pursue. Public reporting standards call attention to the distortions of public discourse over social media and place pressure on media providers not to fall short in the public eye. Design disclosure requirements help cure discourse distortions and can be enforced through the FTC. Critically, however, the FTC should adopt a clear classification system for online bots that successfully communicates to social media providers what kinds of bot uses are targeted.

A classification system is crucial because not all bot use is malicious, and there is an astonishing lack of clarity about what current legislative measures are actually attempting to regulate. I argue that the following tripartite classification system should be implemented:

Define:
(1) A software agent is any automated program that runs on the Internet as an agent of the program's writer.
(2) A commercial bot is a software agent whose principal purpose is broadly grounded in economic motives.
(3) A political bot is a software agent whose principal purpose is grounded in deceptively manipulating public opinion.
(4) A creative bot is any software agent that falls under neither the commercial nor the political classification.

These definitions are necessarily broad. The field of automated software agency is too uncertain and fast-moving for overly specific regulatory definitions. Instead, the goal should be for the FTC to oversee public reporting and disclosure guidelines and to work with social media platforms rather than against them. The more co-regulation between the FTC and media providers, the better; should conflicts arise, the law can rely on the FTC's interpretive expertise. Under this classification system, political bots and commercial bots should be the targets of regulation.

Conclusion

There is no denying the danger of deception and manipulation of public discourse through automated software agents. Democracy relies on the open and accessible exchange of ideas, but the law should be wary of treating all automated speech as if the free speech interest were the same; false amplification, artificial dilution of public opinion, deceptive commercial inducement, and similar harms run afoul of basic notions of fairness and transparency. The relatively novel arena of online bots has created difficult problems demanding effective regulation that does not unduly burden media providers. The law must provide clarity about the kinds of bots in online discourse, and look to the FTC and to social media platforms themselves to best regulate harmful bot use.

