Law in the Internet Society

KjSalamehSecondEssay 3 - 23 Jan 2021 - Main.KjSalameh
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Line: 11 to 11
 

Introduction

Changed:
<
<
As online bot use has increased in recent years, so has national concern about their negative functions. There has been mounting pressure for legislation to regulate the use of bots on social media platforms, and rightfully so; it is worthwhile to mitigate harm to unaware Internet users. The general motivating principle behind proposed and enacted legislation is to reduce the deceptiveness of social bots. However, without an established classification scheme for bot use, policy is likely to be under inclusive or over encompassing. I mention obstacles to current approaches, discuss what is needed to progress, and offer a classification system to guide effective regulation.
>
>
As the use of online bots has increased in recent years, so has national concern about their negative effects.[_Endnote 1_] There has been mounting pressure for legislation to regulate the use of bots on social media platforms, and rightfully so; it is worthwhile to mitigate harm to unaware Internet users. The general motivating principle behind proposed and enacted legislation is to reduce the deceptiveness of social bots.[_Endnote 2_] However, without an established classification scheme for bot use, policy is likely to be underinclusive or overinclusive. I outline obstacles to current approaches, discuss what is needed to make progress, and offer a classification system to guide effective regulation.
 

Current Approaches and Issues

Line: 19 to 19
The former runs into issues with §230 of the CDA. The sweeping flexibility that §230 gives media platforms to control what content may exist on their sites makes it incredibly difficult for the law to intervene effectively. One solution would be to amend §230 and carve out an exception for bot use. While such carve-outs are not unprecedented, the law should proceed with appropriate caution here. An exception must be narrow enough not to demand unreasonable foresight from platforms; otherwise, unsophisticated media providers could easily be held liable simply because they lack the technical capacity to adequately remove malicious bots.
Changed:
<
<
The second route runs into what I call the remote origination problem. Namely, that the actors behind online bot use may be so distant from any infringing bot use that attempting to regulate the actors themselves would be practically impossible. It is patently unclear how a regulatory agency could impose liability on remote actors or foreign entities. Russian interference in the 2016 U.S. election and the Department of Justice’s futile attempts to prosecute the responsible entities epitomizes this issue.
>
>
The second route runs into what I call the remote origination problem: the actors behind online bots may be so far removed from any infringing use that attempting to regulate them directly would be practically impossible. It is patently unclear how a regulatory agency could impose liability on remote actors or foreign entities. Russian interference in the 2016 U.S. election and the Department of Justice’s futile attempts to prosecute the responsible entities epitomize this issue.
 

The Way Forward

Line: 43 to 43
 

Conclusion

Changed:
<
<
There is no denying the danger of deception and manipulation in the public discourse through automated software agents.
>
>
There is a present danger of deception and manipulation of public discourse through automated software agents. As the digital sphere continues to evolve, the law should consider the ramifications of unbridled opportunism in the online setting; false amplification, artificial dilution of public opinion, deceptive commercial inducement, and similar problems run afoul of notions of fairness and transparency. The relatively novel arena of online bots has created difficult problems, demanding effective regulation that does not unduly burden media providers. The law must provide clarity with respect to the kinds of bots in online discourse, and look to the FTC and to social media platforms themselves to best regulate harmful bot use.
 
Changed:
<
<
Why? Of course it can be denied. There's nothing about an automated speaker that makes it more dangerous than a non-automated speaker, and some obvious qualities that might be said to make it less dangerous. You can't just assert something that a reader actively doubts, unless you're not writing for that reader.
>
>

Endnotes

 
Added:
>
>
[1] For a variety of sources highlighting the problems posed by bot use, see Varol et al., Online Human-Bot Interactions: Detection, Estimation, and Characterization, CCNSR and ISN, March 2017 (https://arxiv.org/pdf/1703.03107.pdf) (estimating that up to 15% of Twitter profiles, roughly 50 million of 330 million, are bots); How much to fake a trend on Twitter? In one country about £150, BBC News, March 2018 (https://www.bbc.com/news/blogs-trending-43218939) (showing how Twitter trends can be bought through bot use); Study finds quarter of climate change tweets from bots, BBC News, Feb. 2020 (https://www.bbc.com/news/amp/technology-51595285) (finding that 38% of “fake science” tweets were written by bots, and that 28% of tweets related to Exxon Mobil were generated by bots); Tess Owen, Nearly 50% of Twitter Accounts Talking About Coronavirus Might Be Bots, Vice, April 2020 (https://www.vice.com/en_us/article/dygnwz/if-youre-talking-about-coronavirus-on-twitter-youre-probably-a-bot) (finding that 45.5% of tweets concerning the coronavirus were likely generated by bots); and Defining Russian Election Interference: An Analysis of Select 2014 to 2018 Cyber Enabled Incidents, Atlantic Council, Sept. 2018 (https://www.atlanticcouncil.org/wp-content/uploads/2018/09/Defining_Russian_Election_Interference_web.pdf) (finding that bots have been used to sow discord by impersonating extreme opinions, amplifying particular political sentiments, posting fabricated content on media platforms, and circumventing security measures in electronic elections to manipulate votes).
 
Changed:
<
<
Democracy relies on the open and accessible exchange of ideas, but the law should be wary to fall into a fallacy of free speech; false amplification, artificial dilution of public opinion, deceptive commercial inducement, and similar problems run afoul of notions of fairness and transparency.

This is only rhetoric. If we believe in free speech, and we think false advertising can be regulated as it is currently regulated, you must show there is a problem that is somehow different in quality in order to begin a conversation in which you can realistically propose to regulate how software I make and run for myself in my own computer should be designed.

The relatively novel arena of online bots has created difficult problems, demanding the need for effective regulation without unduly subduing media providers. The law must provide clarity with respect to the kinds of bots in online discourse, and look to the FTC and social media platforms themselves to best regulate harmful bot use.

The most important route to improvement, in my view, is to replace the rhetoric about the problem with evidence of a problem. The notion that programs making statements on platforms I don't use is a serious social problem that cannot be dealt with inside the existing First Amendment paradigms is an extraordinary claim requiring at least more than no evidence. Not one citation, not one fact, not one scintilla of actual evidence is present here, which should be relatively easy to rectify, if there is indeed a problem that so far exceeds the scope of our First Amendment understanding that we should be prepared to alter it.
>
>
[2] See, e.g., Pair of Hertzberg Technology Bills Signed by Governor, Sept. 2018 (https://sd18.senate.ca.gov/news/9282018-pair-hertzberg-technology-bills-signed-governor); S. 2125, Bot Disclosure and Accountability Act of 2019 (https://www.govtrack.us/congress/bills/116/s2125).
 

KjSalamehSecondEssay 2 - 28 Dec 2020 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Line: 43 to 43
 

Conclusion

Changed:
<
<
There is no denying the danger of deception and manipulation in the public discourse through automated software agents. Democracy relies on the open and accessible exchange of ideas, but the law should be wary to fall into a fallacy of free speech; false amplification, artificial dilution of public opinion, deceptive commercial inducement, and similar problems run afoul of notions of fairness and transparency. The relatively novel arena of online bots has created difficult problems, demanding the need for effective regulation without unduly subduing media providers. The law must provide clarity with respect to the kinds of bots in online discourse, and look to the FTC and social media platforms themselves to best regulate harmful bot use.
>
>
There is no denying the danger of deception and manipulation in the public discourse through automated software agents.
 
Added:
>
>
Why? Of course it can be denied. There's nothing about an automated speaker that makes it more dangerous than a non-automated speaker, and some obvious qualities that might be said to make it less dangerous. You can't just assert something that a reader actively doubts, unless you're not writing for that reader.

Democracy relies on the open and accessible exchange of ideas, but the law should be wary to fall into a fallacy of free speech; false amplification, artificial dilution of public opinion, deceptive commercial inducement, and similar problems run afoul of notions of fairness and transparency.

This is only rhetoric. If we believe in free speech, and we think false advertising can be regulated as it is currently regulated, you must show there is a problem that is somehow different in quality in order to begin a conversation in which you can realistically propose to regulate how software I make and run for myself in my own computer should be designed.

The relatively novel arena of online bots has created difficult problems, demanding the need for effective regulation without unduly subduing media providers. The law must provide clarity with respect to the kinds of bots in online discourse, and look to the FTC and social media platforms themselves to best regulate harmful bot use.

The most important route to improvement, in my view, is to replace the rhetoric about the problem with evidence of a problem. The notion that programs making statements on platforms I don't use is a serious social problem that cannot be dealt with inside the existing First Amendment paradigms is an extraordinary claim requiring at least more than no evidence. Not one citation, not one fact, not one scintilla of actual evidence is present here, which should be relatively easy to rectify, if there is indeed a problem that so far exceeds the scope of our First Amendment understanding that we should be prepared to alter it.
 

KjSalamehSecondEssay 1 - 20 Nov 2020 - Main.KjSalameh
Line: 1 to 1
Added:
>
>
META TOPICPARENT name="SecondEssay"

Regulating Bot Use on Media Platforms

-- By KjSalameh - 20 Nov 2020

Introduction

As the use of online bots has increased in recent years, so has national concern about their negative effects. There has been mounting pressure for legislation to regulate the use of bots on social media platforms, and rightfully so; it is worthwhile to mitigate harm to unaware Internet users. The general motivating principle behind proposed and enacted legislation is to reduce the deceptiveness of social bots. However, without an established classification scheme for bot use, policy is likely to be underinclusive or overinclusive. I outline obstacles to current approaches, discuss what is needed to make progress, and offer a classification system to guide effective regulation.

Current Approaches and Issues

Current and proposed legislation targeting bot use broadly implicates social media platforms, placing the onus on platforms to identify and remove bot accounts. Other legislation does not require such action from the platforms, but instead requires bot operators to disclose their use of bots. Both of these routes are flawed.

The former runs into issues with §230 of the CDA. The sweeping flexibility that §230 gives media platforms to control what content may exist on their sites makes it incredibly difficult for the law to intervene effectively. One solution would be to amend §230 and carve out an exception for bot use. While such carve-outs are not unprecedented, the law should proceed with appropriate caution here. An exception must be narrow enough not to demand unreasonable foresight from platforms; otherwise, unsophisticated media providers could easily be held liable simply because they lack the technical capacity to adequately remove malicious bots.

The second route runs into what I call the remote origination problem: the actors behind online bots may be so far removed from any infringing use that attempting to regulate them directly would be practically impossible. It is patently unclear how a regulatory agency could impose liability on remote actors or foreign entities. Russian interference in the 2016 U.S. election and the Department of Justice’s futile attempts to prosecute the responsible entities epitomize this issue.

The Way Forward

The only viable solution to the remote origination problem is to enforce regulation through social media platforms. One method is to develop a series of public reporting guidelines that merely incentivize the platforms to self-regulate harmful bot use. This could include requiring platforms to publicize annual reports on the prevalence of content that violates their own self-developed content policies. This approach would still allow providers to moderate their own platforms, but would simply require higher standards of reporting on the results of their moderation. Such guidelines could take into account the scale of regulated platforms, placing a higher burden on platforms more capable of self-regulation. This would in effect place more pressure on platforms such as Facebook and Twitter while avoiding the same kind of pressure on “Joe’s Blog.” While this option avoids §230 issues altogether, it relies on what may be a tenuous deference to social media platforms to self-regulate. Hence, this method runs the risk of being underinclusive and inconsistent across platforms.
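
For concreteness, the following is a minimal sketch of what such an annual report record could contain; every field name and the one-million-user threshold are assumptions of mine for illustration, not part of any enacted guideline:

    # A sketch of what a public bot-prevalence report might contain.
    # Field names and the user threshold are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class AnnualBotReport:
        platform: str
        year: int
        monthly_active_users: int     # used to scale the reporting burden
        accounts_reviewed: int
        bot_accounts_removed: int
        policy_violations_found: int  # violations of the platform's own content policies

    def reporting_tier(report: AnnualBotReport) -> str:
        # Larger platforms bear a heavier self-regulation burden than "Joe's Blog".
        return "full" if report.monthly_active_users > 1_000_000 else "basic"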

Another method to solve the remote origination problem and enforce regulation through media platforms is to require design features on social networks that minimize undesired bot use. This would not run afoul of §230 protections for social media providers, but would require the platforms to develop a structural design system that conspicuously discloses malicious bots. The FTC can enforce this by alleging that a failure to implement such a system is an unfair practice under Section 5 of the FTC Act.
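
As a purely illustrative sketch of such a design feature, a conspicuous disclosure might amount to a machine-readable label attached to automated content; the field names below are my own assumptions, not any platform's actual API:

    # A minimal sketch of a hypothetical machine-readable bot-disclosure label.
    # All field names are illustrative assumptions, not any platform's real API.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class BotDisclosure:
        account_id: str           # the posting account
        is_automated: bool        # whether the content was posted by software
        operator_disclosed: bool  # whether the operator self-identified as required
        classification: str       # e.g. "commercial", "political", or "creative"

    # A platform could attach a label like this to automated content, making the
    # disclosure conspicuous and auditable in a Section 5 enforcement action.
    label = BotDisclosure("example-account", True, False, "political")
    print(json.dumps(asdict(label)))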

Enforceability

A combination of the above two methods seems most appropriate to pursue. Public reporting standards call attention to the distortions of public discourse over social media and place pressure on media providers not to fall short in the public eye. Design disclosure requirements help cure discourse distortions and can be enforced through the FTC. Critically, however, the FTC should adopt a clear classification system for online bots that successfully communicates to social media providers what kinds of bot uses are targeted.

A classification system is crucial since not all bot use is malicious, and there is an astonishing lack of clarity as to what current legislative measures are actually attempting to regulate. I argue that the following tripartite classification system should be implemented:

Define: (1) a software agent as any automated program that runs on the internet as an agent of the program’s writer; (2) commercial bots as software agents whose principal purpose is broadly grounded in economic motives; (3) political bots as software agents whose principal purpose is grounded in deceptively manipulating public opinion; and (4) creative bots as all software agents that do not fall under the commercial or political classifications.
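
To show how the taxonomy fits together, here is a minimal encoding of the scheme as a data structure; the names and the regulated() test are illustrative assumptions, and nothing here attempts to detect a bot's purpose automatically:

    # A sketch encoding the proposed tripartite classification. The class
    # assignment is supplied by a human or regulator, not inferred by software.
    from dataclasses import dataclass
    from enum import Enum, auto

    class BotClass(Enum):
        COMMERCIAL = auto()  # principal purpose broadly grounded in economic motives
        POLITICAL = auto()   # principal purpose is deceptive manipulation of public opinion
        CREATIVE = auto()    # residual category: neither commercial nor political

    @dataclass
    class SoftwareAgent:
        """Any automated program running on the internet as an agent of its writer."""
        name: str
        classification: BotClass

    def regulated(agent: SoftwareAgent) -> bool:
        # Under the proposed scheme, only political and commercial bots are targets.
        return agent.classification in (BotClass.COMMERCIAL, BotClass.POLITICAL)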

These definitions are necessarily broad. The field of automated software agency is too uncertain and too innovative for regulation to rely on overly specific definitions. Instead, the goal should be for the FTC to oversee public reporting and disclosure guidelines and to work with social media platforms rather than against them. The more co-regulation there is between the FTC and media providers, the better. Should conflicts arise, the law can rely on the interpretive expertise of the FTC. Under this classification system, political bots and commercial bots should be the targets of regulation.

Conclusion

There is no denying the danger of deception and manipulation in the public discourse through automated software agents. Democracy relies on the open and accessible exchange of ideas, but the law should be wary to fall into a fallacy of free speech; false amplification, artificial dilution of public opinion, deceptive commercial inducement, and similar problems run afoul of notions of fairness and transparency. The relatively novel arena of online bots has created difficult problems, demanding the need for effective regulation without unduly subduing media providers. The law must provide clarity with respect to the kinds of bots in online discourse, and look to the FTC and social media platforms themselves to best regulate harmful bot use.



