Law in the Internet Society

Bad Robot

-- By SamSchaffer - 05 Dec 2019

Though I continue to wait for my robot butler, internet bots are already here, and they are ubiquitous, infiltrating even our dating lives. My most recent interaction with an internet bot was with a chatbot for a certain cellular service provider. Despite the bot’s best intentions in trying to resolve my billing issue, the dialogue ended with a string of regrettable words uttered by one of the parties (words that were apparently recorded and alluded to during a subsequent phone call with a human customer service representative). Clearly there is room for improvement on the automated customer service front.

But chatbots and "girls" who ask me to send them money before our first date are not the only frustrating iterations of internet bots. Anyone who followed the 2016 American Presidential Election has no doubt heard of the influence of "social media bots" on the election results. Research suggests that social bots are responsible for much of the spread of low-credibility content. One report estimated that between 9% and 15% of Twitter accounts are automated.

These bots also undermine the credibility of earnest political activists, who may be mistaken for bots and banned from social media platforms. In 2018, for instance, Facebook deleted the page of Shut It Down DC, an organization formed to combat the spread of white supremacists in Washington, D.C. Online discussion forums such as Reddit have also been disrupted, as users sometimes doubt whether they are conversing with others who hold genuine beliefs.

California, in an effort to remedy this new-age problem, passed the Bolstering Online Transparency Act, also known as the "B.O.T. bill" (SB 1001). The law, which took effect on July 1st, 2019, forbids the use of a bot “to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” A bot is defined as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” However, bots are permissible if their non-human nature is disclosed.

Currently, there is no legislation in the US at the federal level that restricts the use of social bots. However, in June 2018, Senator Dianne Feinstein introduced a bill known as the Bot Disclosure and Accountability Act. On its face, the bill is much broader than California’s law, as it proscribes all social media bots that pose as humans without disclosure. California’s law forbids only those bots that are designed to induce the purchase of goods or services or to influence a vote in an election. Another distinction is that the federal bill enlists social media providers to enforce the disclosure or discovery of the bots. Additionally, the federal bill prohibits political candidates, parties, corporations, and labor organizations from using the misleading bots. The federal bill tasks the Federal Trade Commission with figuring out the details, including the precise definition of “automated software program or process intended to impersonate or replicate human activity online”.

As Slate.com noted, the federal bill creates some tension with the First Amendment, as its language seems to reach even apolitical "creative bots". My friend’s self-developed Facebook Messenger bot that offers movie and documentary recommendations comes to mind. If the bot were forced to disclose that it wasn’t human, some of the charm would be lost. It also raises a practical question: would such a bot have to disclose its true self to me each and every time I say something to it? Only at the start of the relationship? Or only at the start of each conversation?
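If the answer turned out to be "once per conversation," the disclosure logic itself would be trivial to implement. The following is a hypothetical sketch only: the conversation IDs, the handler, and the recommendation stub are all invented for illustration and do not reflect any real platform's API or the bill's actual requirements.

```python
# Hypothetical sketch of a bot that discloses its artificial identity
# the first time it speaks in each conversation. Names and IDs here
# are illustrative, not any real messaging API.

DISCLOSURE = "Heads up: I'm an automated bot, not a human."

class DisclosingBot:
    def __init__(self):
        self.disclosed = set()  # conversation IDs already given the disclosure

    def reply(self, conversation_id, message):
        """Prepend the disclosure the first time this conversation hears from the bot."""
        response = self.recommend(message)
        if conversation_id not in self.disclosed:
            self.disclosed.add(conversation_id)
            return f"{DISCLOSURE}\n{response}"
        return response

    def recommend(self, message):
        # Stand-in for the actual recommendation logic.
        return f"You might enjoy a documentary about {message!r}."

bot = DisclosingBot()
first = bot.reply("conv-1", "space")    # disclosure included
second = bot.reply("conv-1", "jazz")    # same conversation, no repeat
```

The harder question, of course, is not the code but the rule: per-message disclosure would be safest legally and most charmless, while per-relationship disclosure preserves the illusion at the cost of users who join a conversation late.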

Whether the existing and pending legislation will have the intended effect remains to be seen. Regardless, my opinion is that the 2016 Presidential Election provides sufficient evidence that something needs to be done, and Senator Feinstein’s legislation is on the right track. Shifting some of the load onto social media providers is a wise choice, as they possess the behavioral data that can help catch these bad bots.
