The Misinformation Crisis and Intermediary Liability

The role of Big Tech in spreading misinformation has long been in the public eye, and increasingly so in light of the mass hysteria surrounding the pandemic, the 2021 Capitol Hill riots, the Cambridge Analytica scandal, and the mob lynchings in India. In November 2021, Nobel Peace Prize laureate Maria Ressa said that social media is creating a “virus of lies”, “manipulating our minds insidiously, creating alternate realities, making it impossible for us to think slow”. In several APAC countries, such as Singapore and Indonesia, ‘fake news’ statutes have been enacted that hold intermediaries liable for the spread of falsehoods.

In the US, internet service providers (ISPs) are afforded broad immunity from liability for hosting harmful content posted by third parties. § 230 of the Communications Decency Act (CDA), which, inter alia, provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”, is often said to have “created the internet” as we know it. Given the impact of the rising wave of misinformation, however, there have been several calls to limit the scope of the immunity granted by § 230 of the CDA and to hold social media intermediaries or ISPs liable for the content they host. Yet holding ISPs liable for failing to regulate third-party content runs the risk of ISPs over-regulating and taking down more content than necessary, thereby chilling free speech and expression.

The balancing act between the independence of ISPs, the First Amendment, the right to privacy, and the harms of unregulated misinformation has been rendered particularly difficult by the ubiquity of data and the ease with which it can be accessed. While the role of intermediaries has come under scrutiny more and more often lately, the question of intermediary liability is certainly not new. In 1964, in New York Times Co. v. Sullivan, the United States Supreme Court (SCOTUS) held that, to sustain a claim of defamation or libel concerning a public official, the First Amendment requires the plaintiff to prove ‘actual malice’, that is, that the defendant published the statement knowing it was false or with reckless disregard for whether it was false.

A decade later, in Miami Herald Publishing Co. v. Tornillo, the SCOTUS struck down a Florida statute that granted political candidates a right to demand equal space in a newspaper to reply to attacks on their character or record. The SCOTUS held that because the statute “exacts a penalty on the basis of the content”, it would disincentivize and dissuade newspapers from publishing any content likely to cause controversy, which would in turn chill free speech and expression. Publishers, and later intermediaries, have thus long been shielded from such liability in the interest of free speech. It would be untoward if that protection were taken away from them now.

What, then, can be done to curb misinformation? There is, of course, no foolproof way of ending the spread of misinformation once and for all, nor should there be, given the dynamic nature of information and facts as well as the fundamental right to speech and expression. But there are smaller measures that may be taken.

As per a study conducted at MIT Sloan, people do not want to share misinformation or false news; the sensory experience of scrolling through a news feed simply leaves them too overwhelmed to remember or focus on whether something they read was accurate. Professor David Rand of MIT Sloan observed that, “…if the social media platforms reminded users to think about accuracy—maybe when they log on or as they’re scrolling through their feeds—it could be just the subtle prod people need to get in a mindset where they think twice before they retweet”. In line with this, social media platforms and other ISPs could implement simple accuracy prompts that shift users’ attention to the reliability of a post before they share it, prompting them to pause and consider the content they are about to publish. This approach relies not on over- or under-screening by ISPs, but on users’ own autonomy and ability to discern misinformation before they share it.
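To make the mechanism concrete, the following is a minimal, purely illustrative sketch in TypeScript of how a platform might insert such an accuracy prompt into its share flow. All names here (DraftPost, showAccuracyPrompt, shareWithAccuracyNudge) are hypothetical and do not correspond to any real platform’s API; the MIT Sloan research describes the behavioral nudge, not this implementation.

```typescript
// Illustrative sketch only: a hypothetical pre-share accuracy prompt.
// None of these names correspond to a real platform API.

interface DraftPost {
  authorId: string;
  body: string;
}

// Hypothetical UI step: ask the user to pause and consider accuracy.
// Resolves to true if the user still chooses to share.
async function showAccuracyPrompt(draft: DraftPost): Promise<boolean> {
  const message =
    "Before you share: have you checked whether this information is accurate?";
  // A real client would render a modal or interstitial here;
  // this sketch simply simulates the user's confirmation.
  console.log(message, `\n> "${draft.body}"`);
  return true; // simulated confirmation
}

// The prompt sits before submission, leaving the final decision with
// the user rather than with the platform.
async function shareWithAccuracyNudge(
  draft: DraftPost,
  submitPost: (d: DraftPost) => Promise<void>
): Promise<void> {
  const confirmed = await showAccuracyPrompt(draft);
  if (confirmed) {
    await submitPost(draft);
  }
  // If the user declines, nothing is posted and nothing is blocked or
  // flagged by the platform.
}

// Example usage with a stubbed submit function:
shareWithAccuracyNudge(
  { authorId: "u123", body: "Breaking: a claim I saw somewhere…" },
  async (d) => console.log("posted:", d.body)
);
```

The point of the sketch is the placement of the prompt rather than its wording: the platform neither removes content nor decides what is true, it merely interposes a moment of reflection before the user’s own choice to share.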

Lastly, while it may be harder to organize and achieve, it would also help to invest in ground-level media literacy programs nationally. In 2017, the Yale Information Society Project and the Floyd Abrams Institute for Freedom of Expression hosted a workshop on fighting fake news, which suggested that consumers must be “better educated, so that they are better able to distinguish credible sources and stories from their counterparts.” While media literacy programs have already been introduced through legislation in states such as California and Washington, they ought to be adopted at the national level, following the lead of countries like Canada and Australia.

In summary, while the threat of misinformation looms large, the liability regime for intermediaries should remain as it is. Design changes that remind users to consider the veracity of the content they are posting, along with vigorous media literacy programs undertaken by governments and private entities alike, will assist in, at the very least, the early identification of misinformation. In turn, the harmful effects of such misinformation can be minimized without either arming private ISPs with unfettered power to control speech or holding them liable for their inability to do so.