Law in the Internet Society

EddyBrandtFirstEssay 6 - 16 Jan 2018 - Main.EddyBrandt
 -- By EddyBrandt - 10 Nov 2017
Is Section 230 suited for 2017?

In 1995, in the midst of the internet's explosion of growth, a changing legal landscape led to the decision in Stratton Oakmont v. Prodigy Services Co. and its prompt overruling via Section 230 of the Communications Decency Act of 1996 (the Act). The Act's rationale was clear: if internet platforms are held liable, and treated as publishers for legal purposes, whenever third-party users commit illegal acts on their services simply because the platforms chose to filter content, then providers will have less incentive to screen offensive material. Society has certainly held up its half of the bargain: providers of all sorts have escaped liability for many a tortious act committed by their users on their platforms. See, e.g., Goddard v. Google; Barnes v. Yahoo!, Inc. And, while complaints have been filed in the courts as well as with the FEC, companies like Facebook have yet to face any liability as publishers. But much like the child explorers of the internet whom the Act sought to shield from indecent images in 1996, today young people and adults alike browse platforms awash in a greater, and likely unforeseen, form of danger: false information that masquerades as truth - fake news. The depth of Russia's interference in the 2016 US presidential election, involving thousands of paid-for ads on Facebook, has become public knowledge. Less well known are the news stories from the developing world. False information, shielded from government oversight by encryption, has set off mob attacks in India, killing several people. Facebook, which has no office in Myanmar, has become a breeding ground for hate speech and virulent posts about the Rohingya. Political institutions and real lives are at stake. Where is the bargained-for filtering that justifies the immunity granted to these platforms?
 

The Path Forward

It has been argued that leniency was crucial to Silicon Valley's explosion - that legal immunity subsidized a nascent industry, much as nineteenth-century common law embraced industrial development. We need not disavow the benefits of flexible regulation for online service providers to acknowledge that new circumstances demand change. Whether through Facebook's ad-targeting algorithms or Twitter's disposition toward soundbite-style communication, which allows bots and trolls to drown out reasoned debate, curated social feeds are being manipulated to devastating effect on public discourse. And, as never before, tremendous power to direct the flow of information now belongs to a very small group of private individuals, whose decisions will have far-reaching consequences for the whole world, and life-or-death consequences for many.

Deference

One response to the mayhem - the one the platform giants advocate - is to allow the companies to self-regulate. There are arguments for at least some self-regulation; Professor Urs Gasser of Harvard University contends that platforms have not only the incentives to clean up their act, but also reservoirs of data and the capacity to combine those incentives and resources into effective action. To this point, Facebook is responding: amid public outcry over the 2016 election, the company has embarked on a public relations campaign and implemented several features to combat the spread of false information. Unfortunately, the available context is less reassuring than Facebook's public statements on the matter. Facebook's efforts in countries like Bolivia and Cambodia have, in a sense, backfired. A new "Explore" tab, which filters more "professional" information away from the content of friends and family, has had the effect of cutting off traffic to legitimate news sources. In places like Cambodia and Bolivia, where independent media outlets - sometimes the only voices of opposition to dangerous governments - need Facebook to subsist, the danger is apparent. The prospect that a government could buy its way back onto the feeds of its citizens while less wealthy independent news outlets are shut out threatens a serious blow to the ability of individuals to remain informed. And the status quo of loose regulation and widespread legal immunity has already fostered damaging outcomes for many; with a rapidly shifting news cycle that threatens to leave these failures in the past, there is little reason to believe that society can rest solely on these assurances.

Time itself presents another problem. When Section 230 was passed, individual harms like defamation, which are more readily corrected through lawsuits, occupied a larger share of public concern. But as time passes, the internet's composition, its inhabitants, and its uses all evolve. Now, diffuse problems like fake news pose collective-action issues. The question of who is the proper plaintiff to vindicate structural damage to a political institution is a difficult one, and it militates in favor of a legislative answer.

 

Proactivity

As Gasser points out, total deference is insufficient on its own; gap-filling regulation with an eye toward transparency will be needed where self-regulation falls flat. Across the Atlantic, some have begun to act proactively and have shown a willingness to encroach upon the immunity of the platform giants: a new German law fining social networks large sums for failing to remove hate speech posted by users recently went into effect, and Prime Minister Theresa May has stated that Britain is examining the role of Google and Facebook, and the publisher/platform distinction that has so far immunized both from much liability. Now, even in the United States, legislative measures that until recently would have drawn not a sliver of support from online platforms garner approval from those very platforms.
