Computers, Privacy & the Constitution

Combatting Digital Misinformation to Protect Democracy

-- By MartinMcSherry - 12 Mar 2020

I. How Social Media Threatens Democracy

A. The "Post-Truth" Era

The Oxford English Dictionary named “post-truth” the word of the year for 2016, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” Of course, 2016 was the year an outsider candidate, armed with modern tools of communication, capitalized on anti-elite fervor to fuel a successful White House run built on a foundation of falsehoods.

The democratization of information on the internet has empowered individuals to seek out content that reaffirms their own views. Social media users, including leaders at the highest levels, can directly communicate to millions, bypassing legacy media organizations that once served as gatekeepers. Instead of preventing falsehoods from entering the national conversation or presenting newsworthy statements through a critical lens, gatekeepers find themselves defending facts from a popular revolt.

According to a BuzzFeed News analysis, in the final months of the 2016 election, hoax election stories -- almost entirely supporting Donald Trump and opposing Hillary Clinton -- outperformed actual news on social media. The 20 top-performing false election stories generated nearly 9 million engagements, far more than the 7.3 million earned by actual news. These false stories, memes, and ads can be weaponized even further by leveraging the vast amount of personal data social media companies mine from their users.

B. Cambridge Analytica and the Power of Microtargeting

In 2018, it was revealed that Cambridge Analytica, a data firm hired by the Trump campaign, had harvested the personal data of 50 million Americans without their consent to build psychological profiles used to microtarget political propaganda to individuals. Cambridge said it had between three and five thousand data points on each individual, including age, income, debt, hobbies, criminal history, purchase history, religious leanings, health concerns, gun ownership, homeownership, and more. It used this data to create so-called “dark posts,” or messages seen only by the users predisposed to agree with their content.
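The mechanism behind a "dark post" is, at bottom, a filter: a message is shown only to the subset of profiles matching an attribute predicate. The sketch below is purely illustrative; the field names, thresholds, and data are invented for this example and do not reflect any real platform's API or Cambridge Analytica's actual models.

```python
# Toy model of attribute-based microtargeting. Everything here is
# hypothetical -- invented fields and thresholds for illustration only.
from dataclasses import dataclass, field

@dataclass
class Profile:
    age: int
    income: int
    gun_owner: bool
    interests: set = field(default_factory=set)

def dark_post_audience(profiles, predicate):
    """Return only the users to whom a 'dark post' would be shown."""
    return [p for p in profiles if predicate(p)]

users = [
    Profile(age=58, income=45_000, gun_owner=True, interests={"hunting"}),
    Profile(age=23, income=80_000, gun_owner=False, interests={"tech"}),
    Profile(age=61, income=38_000, gun_owner=True, interests={"veterans"}),
]

# Target older, lower-income gun owners; everyone else never sees the ad.
audience = dark_post_audience(
    users, lambda p: p.age > 50 and p.income < 50_000 and p.gun_owner
)
print(len(audience))  # 2
```

With thousands of data points per person, predicates like this can be made arbitrarily narrow, which is what makes such messaging invisible to outside scrutiny.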

In addition to using data from companies like Facebook, political operatives can and often do use the practice of geofencing, defined as “technology that creates a virtual geographic boundary, enabling software to trigger a response when a cellphone enters or leaves a particular area.” For example, one group, Catholic Vote, used geofencing to identify over 90,000 Catholics not registered to vote in Wisconsin, a key battleground state, based on their Mass attendance. The group intends to tailor messages to this untapped electoral resource.
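At its simplest, the "virtual geographic boundary" in that definition is a point-in-circle test on a device's reported coordinates. The following sketch shows one common way such a test works, using the haversine great-circle distance; the coordinates and the 100 m radius are made up for illustration, and real geofencing vendors use device ad IDs and far more elaborate polygons.

```python
# Illustrative point-in-circle geofence test. Coordinates and radius
# are hypothetical; real systems match device ad IDs against polygons.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    """True if the device location falls within the virtual boundary."""
    return haversine_m(lat, lon, center_lat, center_lon) <= radius_m

# A 100 m fence around a hypothetical church location in Milwaukee.
fence = (43.0389, -87.9065, 100)
print(inside_geofence(43.0390, -87.9066, *fence))  # True: inside fence
print(inside_geofence(43.0500, -87.9065, *fence))  # False: ~1.2 km away
```

Any phone whose location data crosses the boundary during, say, Sunday morning hours can then be added to an advertising audience, which is how attendance at a physical place becomes a targetable attribute.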

Digital advertising of this kind allows candidates and organizations to run highly effective, personalized, and cost-efficient shadow campaigns on social media using highly sensitive and private information. In doing so, they avoid fact-checking, standards of decency, and government oversight. In response, several leaders have announced plans of varying strength and effectiveness to hold social media companies accountable.

II. Assessing Plans to Fight Digital Misinformation

A. Senator Elizabeth Warren’s Plan

In October 2019, Senator Elizabeth Warren (D-MA) released a plan to combat digital misinformation. The plan urges social media companies to alert users affected by disinformation campaigns, ban accounts that knowingly disseminate false information, open up data to researchers, and share information about their algorithms. Warren also controversially called for “criminal penalties for knowingly disseminating false information about when and how to vote in U.S. elections,” noting that suppressing turnout among key voters is a particularly invidious tactic of shadow campaigns.

Immediately, conservatives denounced the plan as unconstitutional. National Review editor Charles Cooke opined that the plan amounts to a repeal of the First Amendment. This criticism has no basis in law. The civil and criminal sanctions in Warren’s plan are narrowly tailored to address the spread of one kind of misinformation: falsehoods about voting requirements and procedures. As recently as 2018, a 7-2 majority of the Supreme Court wrote in dicta, “We do not doubt that the State may prohibit messages intended to mislead voters about voting requirements and procedures.” Indeed, states like Virginia, Illinois, and Minnesota have statutes on the books prohibiting a person from knowingly deceiving another person about election information.

Though the announcement of criminal penalties made waves, their application would do nothing to combat disinformation beyond the narrow scope of voting requirements and procedures. Indeed, the rise of shadow campaigns and microtargeting suggests such communications would rarely, if ever, come to the attention of the relevant authorities. While the rest of her plan offers helpful suggestions for companies to adopt, those suggestions lack any real enforcement mechanism and are unlikely to be implemented without incentives and penalties.

B. Senator Josh Hawley’s Plan

In 2019, Senator Josh Hawley (R-MO) introduced legislation that would revoke the liability protection platforms enjoy for user-posted content under Section 230 of the Communications Decency Act. Companies could earn immunity back by submitting to government audits and proving that they are “politically neutral.” The move is motivated by the perception that social media companies are biased against conservatives, despite well-documented evidence that right-wing groups and the Trump campaign dominate digital campaigning and are responsible for the majority of widely shared fake news pieces.

Hawley’s bill presents a far greater risk of violating the First Amendment and raises a host of questions about who decides what is and is not politically neutral. It may also worsen the problem: companies would fear taking down false messages from either side lest the removal be perceived as targeting posts reflecting a particular ideology.

C. Alternatives

Neither Warren’s plan nor Hawley’s goes far enough to address the scourge of digital misinformation. Leaders should consider funding media-literacy programs in schools, requiring platforms to label fake accounts, and offering consumers new rights similar to those in the European Union’s General Data Protection Regulation (GDPR). Such rights could include enforceable consent requirements for data collection, a right to be forgotten, and even a monetary valuation of an individual user’s data.


