Computers, Privacy & the Constitution

The Character Set of an Algorithmic Opus: Rethinking the Relationship Between User-Generated Content, Algorithmic Recommendation Systems, and Associated Harms

-- By AndrewIwanicki - 13 Apr 2020

Overview

Below, I explore potential methods of curbing certain harms of algorithmic amplification while preserving CDA §230 in its present form.

Introduction

The flow of information online is increasingly shaped by proprietary algorithms that rank search results, recommend content, and assemble personalized content feeds. Though intermediary platforms claim their systems are “content-neutral,” their algorithms are specifically designed to serve their business goals and, consequently, favor particular types of content. Whether or not this content bias is intentional, the prioritization of engagement maximization has myriad harmful consequences.

Many presume that CDA §230 grants platforms blanket immunity from liability for the third-party content they select, arrange, and promote (subject to narrow exceptions for sex trafficking, child protection, and intellectual property). Understandably, many are calling for amendment of §230 to expand platform accountability.

However, §230 is the backbone of online free-speech protection. Accordingly, we should consider alternative methods of reform that leave §230 intact.

Specifically, I argue that these creative, sophisticated, and impactful algorithms produce unique harms that should be recognized as distinct from underlying third-party content. To place responsibility for these harms solely upon third parties and ignore algorithms’ critical roles is dangerously reductionist. Such harms should not qualify for §230 immunity.

Recent Harms

Platforms’ algorithms play a central role in the sensemaking, decision-making, and social dynamics of modern society as they warp information flows to serve commercial ends. Billions depend on these services daily to learn about current events, survey associated perspectives, establish their own beliefs, and take related actions.

To produce desired effects, platforms have knowingly designed their tools to exploit human weaknesses. In particular, platforms commonly “enrage to engage.” The 25 most common verbs among YouTube’s recommended video titles include: dismantles, debunks, snaps, screams, obliterates, shreds, defies, owns, insults, destroys, stuns, smashes, and crushes (algotransparency.org). Zuckerberg admits that “borderline” content triggers maximum engagement (https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/). To exploit this innate psychological inclination, algorithms disproportionately favor the extreme and the anti-moral.
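To make the mechanism concrete, the following is a minimal, hypothetical sketch in Python, not any platform’s actual code, of a recommender that ranks candidates purely by predicted engagement. The item names, weights, and predicted-engagement fields are invented assumptions for illustration.

```python
# Hypothetical sketch, not any platform's actual code: a recommender that
# ranks candidate items purely by predicted engagement. The item names,
# weights, and engagement estimates below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_watch_minutes: float   # model-estimated engagement signals (assumed)
    predicted_shares: float
    predicted_comments: float

def engagement_score(c: Candidate) -> float:
    # The objective rewards time-on-app and interaction; nothing here measures
    # accuracy, civility, or the user's stated preferences.
    return (1.0 * c.predicted_watch_minutes
            + 2.0 * c.predicted_shares
            + 1.5 * c.predicted_comments)

def rank_feed(candidates: list[Candidate], k: int = 10) -> list[Candidate]:
    # Pure engagement ranking: the most provocative items rise to the top.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

feed = rank_feed([
    Candidate("Calm explainer on vaccine safety", 3.0, 0.1, 0.2),
    Candidate("Pundit OBLITERATES opponent in shouting match", 8.0, 1.2, 2.5),
])
print([c.title for c in feed])   # the outrage item ranks first
```

Nothing in such a ranker inspects the truth, civility, or social cost of what it promotes; the bias toward the extreme follows directly from the objective function rather than from the underlying uploads themselves.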

We need not look any further than Google’s own services to see this distortion. In 2019, a Google search for “flat earth” returned ~20% pro-flat-earth results, a YouTube search returned ~35%, and YouTube recommendations on pro-flat-earth pages returned ~90%.

Similar recommendation patterns on Facebook are implicated in the rise of the anti-vaccination movement: from Q1 2018 to Q1 2019, global measles cases increased 300% (700% in Africa).

Facebook’s echo-chamber effects are also implicated in political polarization: its algorithmic ad pricing charges more to target users across party lines (not to mention charging unequal prices to competing candidates) (https://arxiv.org/pdf/1912.04255.pdf).

Foundation of Immunity and Why Algorithms Should Be Excluded

Platform immunity is founded upon §230(c)(1): “no provider . . . shall be treated as the publisher or speaker of any information provided by another information content provider.” However, the aforementioned effects are not the inevitable result of user-generated content; platforms’ biased algorithm designs are a direct, if not the primary, cause.

As the intended scope of immunity is relatively vague, the context provided in §230(b) should be considered. §230(b)(3) clearly expresses the foundational intent to “encourage the development of technologies which maximize user control over what information is received.” Yet many algorithms are specifically designed to limit choice and control behavior.

Algorithms are protected as proprietary trade secrets, obscuring valuable context regarding why particular results and advertisements are served. User controls are extremely limited, generally forcing users to simply accept platforms’ preferences.

Distinguishing Algorithms from User Content

Because the harms are so closely associated with user-generated content, many fail to recognize the critical role that algorithms play, or assume that §230 preempts intermediary liability.

However, but for platforms’ deliberate design and deployment of the systems through which billions engage with information online, the above harms would not exist or would be greatly diminished. These sophisticated works, which intentionally distort information flows, should be recognized as distinct from the mere “content provision” contemplated in §230.

U.S. law clearly recognizes that the “collection and assembling of preexisting materials or of data that are selected, coordinated, or arranged” may “constitute an original work of authorship” (17 U.S.C. §101, enacted by the 1976 Copyright Act). One who “chooses which facts to include, in what order to place them, and how to arrange the collected data” with a “minimal degree of creativity” is recognized as the author of an original compilation (Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991)).

This recognition of compilers as authors provides support for the distinction of algorithmic compilation (and its effects) from its underlying components.

Compilation authorship is recognized for works as simple as an arrangement of ten preexisting songs, yet the originality involved in algorithm authorship and the complexity of the resultant works (personalized content feeds) exceed those of most commonly recognized compilations. A 40,000-word book is merely a combination of ~300,000 characters drawn from a ~50-character set; YouTube selects from several billion videos to provide recommendations that account for ~70% of the ~1 billion hours watched daily; Facebook selects from billions of posts to arrange personalized feeds for ~1 billion daily users.

Considering the incredible scale of platforms’ original works, granting blanket immunity based upon the third-party origin of the underlying components is absurd. It would be more reasonable to claim that a book is not an original work because it is built from ~50 preexisting characters. It is far more appropriate to characterize user-generated content as the character set used to construct original content feeds. Through creative content configurations, platforms author infinitely scrolling opuses of propaganda intended to compel users to stay on-application, serve aggressive growth goals, and maximize profits at all costs.
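To make the scale comparison explicit, here is a small back-of-the-envelope calculation. It is a hedged sketch: the catalog size, feed length, and characters-per-word figures are assumed round numbers consistent with the estimates above, not measured platform data. It contrasts the ~50-symbol “alphabet” available to a book author with the billions-of-items “alphabet” from which a feed-ranking algorithm selects.

```python
# Illustrative arithmetic only; the round figures below are assumptions drawn
# from the estimates in the text, not measured platform data.

words = 40_000
chars_per_word = 7.5                      # rough average, including spaces and punctuation
book_positions = words * chars_per_word   # ~300,000 character positions
book_alphabet = 50                        # approximate size of a book's character set

catalog = 2_000_000_000                   # assumed order-of-magnitude size of a video catalog
feed_slots = 20                           # items arranged into one personalized feed (assumed)

print(f"book: ~{book_positions:,.0f} positions, {book_alphabet} choices each")
print(f"feed: {feed_slots} slots, ~{catalog:,} choices each")
print(f"per-choice 'alphabet' ratio: ~{catalog / book_alphabet:,.0f}x")
```

The point is not the precise numbers but the asymmetry: each slot in a personalized feed reflects a selection from an “alphabet” tens of millions of times larger than the one available to a book author whose work is indisputably original.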

Relevant Factors for Reformed Liability

“He who controls the menu controls the outcome” (Tristan Harris).

If platforms wish to minimize their liability for user-generated content (and we do not establish blanket liability for all algorithmically amplified content), they should be obliged to serve §230’s stated objective of “maximiz[ing] user control over what information is received.” The scope of platform liability should be tied to platforms’ objectives (commercial vs. public-interest), their transparency, and the extent and accessibility of user controls.

