Law in the Internet Society

Immunity for Platforms and its Utility in 2017

-- By EddyBrandt - 10 Nov 2017

Is Section 230 Suited for 2017?

In 1995, amid the internet's explosive growth, a shifting legal landscape produced the decision in Stratton Oakmont v. Prodigy Services Co., which was promptly overruled by Section 230 of the Communications Decency Act of 1996 (the Act). The Act's rationale was clear: if internet platforms are treated as publishers, and exposed to liability for the illegal acts of their third-party users, merely because they choose to filter content, then providers will have little incentive to screen offensive material at all.

Society has certainly held up its half of the bargain - providers of all sorts have escaped liability for many a tortious act committed by their users on their platforms. Goddard v. Google; Barnes v. Yahoo!, Inc. And while complaints have been filed in the courts as well as with the FEC, companies like Facebook have yet to face any liability as publishers. But much like the child explorers of the internet whom the Act sought to shield from indecent images in 1996, young people and adults alike today browse platforms awash in a greater, and likely unforeseen, form of danger: false information that masquerades as truth - fake news.

The depth of Russia's interference in the 2016 US presidential election, involving thousands of paid-for ads on Facebook, is now public knowledge. Less well known are the stories from the developing world. In India, false information, shielded from government oversight by encryption, has set off mob attacks that killed several people. In Myanmar, where Facebook lacks an office, the platform has become a breeding ground for hate speech and virulent posts about the Rohingya. Political institutions and real lives are at stake. Where is the bargained-for filtering that justifies the immunity granted to these platforms?

The Path Forward

It has been argued that leniency was crucial to Silicon Valley's explosion - that legal immunity subsidized a nascent industry, much as nineteenth-century common law embraced industrial development. We need not disavow entirely the benefits of flexible regulation for online service providers to acknowledge that new circumstances demand change. Whether through Facebook's ad-targeting algorithms or Twitter's disposition toward soundbite-style communication, which allows bots and trolls to drown out more reasoned debate, curated social feeds are being manipulated to devastating effect on public discourse. And, as never before, tremendous power to direct the flow of information now belongs to a very small group of private individuals, whose decisions will have far-reaching consequences for the whole world, and life-or-death consequences for many.


One response to the mayhem - the one the platform giants advocate - is to let the companies self-regulate. There are arguments for at least some level of self-regulation: Professor Urs Gasser of Harvard University contends that platforms have not only the incentives to clean up their act, but also reservoirs of data and the capacity to combine those incentives and resources into effective action. To this point, Facebook is responding: amid public outcry over the 2016 election, the company has embarked on a public relations campaign and implemented various features to combat the spread of false information.

Unfortunately, the available context is less reassuring than Facebook's public statements on the matter. Facebook's responsive efforts in countries like Bolivia and Cambodia have, in a sense, backfired. A new "Explore" tab, which filters more "professional" information away from the content of friends and family, has had the effect of cutting off traffic to legitimate news sources. In places like Cambodia and Bolivia, where independent media outlets - sometimes the only voices of opposition to dangerous governments - depend on Facebook to subsist, the danger is apparent. If a government can buy its way back onto the feeds of its citizens while less wealthy independent news outlets are shut out, the ability of individuals to remain informed suffers a serious blow. The status quo of loose regulation and widespread legal immunity has already fostered damaging outcomes for many, and with a rapidly shifting news cycle that threatens to leave these failures in the past, there is little reason to believe that society can rest solely on these assurances.

Time itself presents another problem. When Section 230 was passed, individual harms like defamation, which lawsuits are well suited to correct, made up a larger share of public concern. But as time passes, the internet's composition, its inhabitants, and its uses all evolve. Diffuse problems like fake news now pose collective-action issues: who is the proper plaintiff to vindicate structural damage to a political institution? That question is a difficult one, and it militates for a legislative answer.


As Gasser points out, total deference is insufficient on its own; gap-filling regulation with an eye toward transparency will be needed where self-regulation falls flat. Across the Atlantic, some have begun to act proactively and shown a willingness to encroach upon the immunity of the platform giants: a new German law that fines social networks large sums for failing to remove hate speech posted by users recently went into effect, and Prime Minister Theresa May has said that Britain is examining the role of Google and Facebook, and the publisher/platform distinction that has so far immunized both from much liability. Even in the United States, legislative measures that until recently would not have drawn a sliver of support from online platforms now garner their approval.

The regime of immunity has, in some relevant sense, failed. Society's response, still in the making, is unclear. But one thing is certain: without public discourse on the question of legal liability for online platforms, it becomes much harder to imagine any serious change to a status quo that has facilitated a structural blow to American political institutions, and spurred several horrors in the developing world, right before our eyes.

I don't know whether this was a response to my comments or another approach to revision. I do think that much good work went into a better introduction. If the best case for immunity, however, is some gas from Urs Gasser, that doesn't seem like a hell of an improvement over what you had before. The conclusion that says immunity has failed and we don't know what to do about that but if we don't talk about it we won't do anything is not the summing up of a strong case, either.

Perhaps the draft prefigures, then, the current state after Cambridge Analytica, in which everyone knows that the radical critics of the platforms (myself included), have been completely right, but—for a variety of reasons—people resist accepting that those who have been right about the problem all along also had the right solutions all along, too.




r7 - 31 Mar 2018 - 14:37:22 - EbenMoglen