
Bad Robot

-- By SamSchaffer - 05 Dec 2019

Though I continue to wait for my robot butler, internet bots are already here, and they are ubiquitous, infiltrating even our dating lives. My most recent encounter was with a chatbot for a certain cellular service provider. Despite the bot’s best intentions in trying to resolve my billing issue, the dialogue ended with a string of regrettable words uttered by one of the parties (words that were apparently recorded and alluded to during a subsequent phone call with a human customer service representative). Clearly there is room for improvement on the automated customer service front.

But chatbots are not the only frustrating form these internet bots take. Anyone who followed the 2016 American presidential election has no doubt heard of the influence of "social media bots" on the results. Research suggests that social bots are responsible for much of the spread of low-credibility content, and one report estimated that between 9% and 15% of Twitter accounts are automated.

These bots also undermine the credibility of earnest political activists, who may be mistaken for bots and banned from social media platforms. In 2018, for instance, Facebook deleted the page of Shut It Down DC, an organization formed to combat white supremacist activity in Washington, D.C. Online discussion forums such as Reddit have also been disrupted, as users sometimes doubt whether they are conversing with people who genuinely hold the views they express. First Amendment scholar Jared Schroeder observes that these bots tend to undermine the “marketplace of ideas,” the notion that, in the words of Supreme Court Justice Oliver Wendell Holmes, “the best test of truth is the power of the thought to get itself accepted in the competition of the market.” _Abrams v. United States_, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting). Schroeder argues that the internet has already eroded this marketplace by fragmenting communities, and that social bots – which he calls “AI communicators” – accelerate the erosion because they can generate content far faster than humans can.

We should keep in mind that not everyone who encounters a social bot is fooled into believing it’s human. The Pew Research Center found that 66% of Americans had heard of social bots, and of those, 47% were either somewhat or very confident they could identify one. But that still leaves the 34% who have never heard of social bots, plus the majority of those who have heard of them but are not at least somewhat confident they could spot one. If these figures are any indication of who could be fooled by a social bot, then a large portion of the American public is susceptible to their influence.
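To make the arithmetic explicit (treating the two Pew percentages as shares of the same survey population, an assumption the figures invite but that the report itself should be consulted to confirm):

  aware and at least somewhat confident: 66% × 47% ≈ 31% of all Americans
  unaware, or aware but not confident: 34% + (66% × 53%) ≈ 69% of all Americans

In other words, roughly two out of three Americans are, by their own account, poorly positioned to recognize a social bot.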

California, in an effort to address this problem, passed the Bolstering Online Transparency Act, also known as the "B.O.T. bill" (SB 1001). The law, which took effect on July 1, 2019, forbids the use of a bot “to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” A bot is defined as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” Bots remain permissible, however, if their non-human nature is disclosed.

Currently, there is no federal legislation in the US that restricts the use of social bots. In June 2018, however, Senator Dianne Feinstein introduced a bill known as the Bot Disclosure and Accountability Act. On its face, the bill is much broader than California’s law: it proscribes all social media bots that pose as humans without disclosure, whereas California forbids only those bots designed to induce the purchase of goods or services or to influence a vote in an election. Another distinction is that the federal bill enlists social media providers in enforcement, making them responsible for the disclosure or discovery of bots on their platforms. The federal bill also prohibits political candidates, parties, corporations, and labor organizations from using misleading bots, and it tasks the Federal Trade Commission with working out the details, including the precise definition of an “automated software program or process intended to impersonate or replicate human activity online”.

As Slate noted, the federal bill creates some tension with the First Amendment, as its language seems to reach even apolitical "creative bots". A friend’s self-built Facebook Messenger bot that offers movie and documentary recommendations comes to mind. If the bot were forced to disclose that it wasn’t human, some of the charm would be lost. Disclosure also raises practical questions: would such a bot have to identify itself each and every time I say something to it? Only at the start of the relationship? Or only at the start of each conversation?

Whether the existing and pending legislation will have the intended effect remains to be seen. Regardless, in my opinion the 2016 presidential election provides sufficient evidence that something needs to be done, and Senator Feinstein’s legislation is on the right track. Shifting some of the load onto social media providers is a wise choice, as they possess the behavioral data that can help catch these bad bots. If the government were left as the sole enforcer of the new law, it would have a difficult time curtailing malignant social bots: government lawyers would have to develop greater expertise in the field, monitor social media data continuously (an expensive prospect), and work with the social media companies anyway to obtain access to that data. Costs could be minimized by shifting the initial burden to the social media companies, which already monitor their data and police their platforms for inappropriate content.
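To make the point about behavioral data concrete, here is a deliberately toy sketch of the kind of first-pass screen a platform could run. The function name, features, and thresholds are all invented for illustration and do not describe any real platform’s system:

    def looks_automated(post_times, texts):
        """Illustrative heuristic only: flag accounts that post at a
        sustained, inhuman rate with mostly repeated content.
        post_times: list of datetime objects; texts: list of strings."""
        if len(post_times) < 50:  # too little history to judge
            return False
        times = sorted(post_times)
        span_hours = (times[-1] - times[0]).total_seconds() / 3600
        posts_per_hour = len(times) / max(span_hours, 0.001)
        unique_ratio = len(set(texts)) / len(texts)
        # Invented thresholds: high volume plus low content diversity.
        return posts_per_hour > 30 and unique_ratio < 0.2

Real detection systems are of course far more sophisticated, but even this crude version depends on posting histories that only the platforms hold, which is precisely the asymmetry the federal bill acknowledges.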

What is the problem to which the essay is addressed? Which deceptive practices that are not already unlawful are at stake, and why are they harder to make unlawful than other deceptive practices? Why is a "bot" the proper subject of regulation rather than the practices of the business or person using the software? Why does the issue focus on whether the "bot" is known to be software? It isn't that easy to pass the Turing test, under most conditions, so in what proportion of the non-Twitter cases is the "bot" not known to be software? Who other than the platform companies is or could be responsible for whatever the "bot" problem is, given that they are the people who create the larger unsafe software environments in which whatever the "bot" problem is develops itself? Standards for cellular carrier customer service are directly set by interaction with the captured FCC. If FCC were controlled by customers instead of providers, you wouldn't have a cellular customer service bot problem, right?

These are basic questions that should have basic answers. The draft will be stronger when it provides them.
