Law in the Internet Society

Immunity for Platforms and its Utility in 2017

-- By EddyBrandt - 10 Nov 2017

Is 230 suited for 2017?

In 1995, in the midst of the internet's explosive growth, a changing legal landscape produced the decision in Stratton Oakmont v Prodigy Services Co., which was promptly overruled by Section 230 of the Communications Decency Act in 1996 (the Act). The Act's rationale was clear: if platforms are treated as publishers for legal purposes, and held liable whenever third-party users commit illegal acts, simply because the platforms chose to filter content, then providers will have less incentive to screen offensive material. Society has certainly held up its half of the bargain - providers of all sorts have escaped liability for many a tortious act committed by their users. See Goddard v Google; Barnes v Yahoo!, Inc. And while complaints have been filed in the courts and with the FEC, companies like Facebook have yet to face any liability as publishers. But much like the child explorers of the internet whom the Act sought to shield from indecent images in 1996, young people and adults alike now browse platforms awash in a greater, and likely unforeseen, danger: false information that masquerades as truth - fake news. The depth of Russia's interference in the 2016 US presidential election, involving thousands of paid-for ads on Facebook, has become public knowledge. Less well known are the stories from the developing world. False information, shielded from government oversight by encryption, has set off mob attacks in India, killing several people. Facebook, which has no office in Myanmar, has become a breeding ground for hate speech and virulent posts about the Rohingya. Political institutions and real lives are at stake. Where is the bargained-for filtering that justifies the immunity granted to these platforms?
 

The Path Forward

It's been argued that leniency was crucial to Silicon Valley's explosion - that legal immunity subsidized a nascent industry, much as 19th-century common law embraced industrial development. And we need not disavow entirely the benefits of flexible regulation for online service providers to acknowledge that new circumstances necessitate change. Whether through Facebook's ad-targeting algorithms or Twitter's disposition toward soundbite-style communication, which allows bots and trolls to drown out more reasoned debate, curated social feeds are being manipulated to devastating effect on public discourse. And, as never before, tremendous power to direct the flow of information belongs to a very small group of private individuals, whose decisions will have far-reaching consequences for the whole world and life-or-death consequences for many.

Deference

One response to the mayhem - the one the platform giants advocate - is to let the companies regulate themselves. There are arguments for at least some level of self-regulation; Professor Urs Gasser of Harvard University contends that platforms have not only the incentives to clean up their act, but also reservoirs of data and the capacity to combine those incentives and resources into effective action. To this point, Facebook is responding: amid public outcry over the 2016 election, the company has embarked on a public relations campaign and implemented various features for combating the spread of false information. Unfortunately, the available context is less reassuring than Facebook's public statements. Facebook's efforts in countries like Bolivia and Cambodia have, in a sense, backfired. A new "explore" tab, which filters more "professional" information away from the content of friends and family, has had the effect of cutting off traffic to legitimate news sources. In places like Cambodia and Bolivia, where independent media outlets - sometimes the only voices of opposition to dangerous governments - depend on Facebook to subsist, the danger is apparent: a government might buy its way back onto the feeds of its citizens while less wealthy independent outlets are shut out, a serious blow to the ability of individuals to remain informed. And the status quo of loose regulation and widespread legal immunity has already fostered damaging outcomes for many; with a rapidly shifting news cycle that threatens to leave these failures in the past, there is little reason to believe that society can rest solely on these assurances.

Time itself presents another problem. When Section 230 was passed, individual harms like defamation, which are more readily corrected through lawsuits, made up a larger share of public concern. But as time passes, the internet's composition, its inhabitants, and its uses all evolve. Now, diffuse problems like fake news raise collective-action issues: it is difficult to say who the proper plaintiff is to vindicate structural damage to a political institution, and that difficulty militates in favor of a legislative answer.

 

Proactivity

As Gasser points out, total deference is insufficient on its own; gap-filling regulation with an eye toward transparency will be needed where self-regulation falls flat. Across the Atlantic, some have begun to act and have shown a willingness to encroach upon the immunity of the platform giants: a new German law fining social networks large sums for failing to remove hate speech posted by users recently went into effect, and Prime Minister Theresa May has said that Britain is examining the role of Google and Facebook, including the publisher/platform distinction that has so far immunized both from much liability. Even in the United States, legislative measures that until recently would not have drawn a sliver of support from online platforms now garner their approval.

The regime of immunity has, in some relevant sense, failed. Society's response, still in the making, remains unclear. But one thing is certain: without public discourse about legal liability for online platforms, it becomes much more difficult to imagine any serious change to the status quo - a status quo that has facilitated a structural blow to American political institutions and spurred several horrors in the developing world, right before our eyes.

Your draft adequately introduces the issues, though the introduction to the introduction is too long; everything from Stratton Oakmont to the 2016 election can be put succinctly in three or four sentences.

You haven't shown why there is any argument for continued "safe harbor" immunity at all. The "infant industry" subsidy makes no sense with respect to companies strong enough to control elections and sway governments. They are media companies, equipped not only with their own First Amendment rights like the publishers with whom they compete, but with special immunities that others don't have. They can no longer claim that they don't edit or shape content; that's the source of immense market power for them. They can only depend on the idea, statutorily defined, that the user is "another content provider," just like themselves for all the difference it makes to section 230.

So the place to begin is: the Web was centralized by the platform companies based on an extraordinary subsidy to centralization of function without aggregation of responsibility. A range of bad social effects immediately followed. Now the platform companies ask that "self-regulation" be decreed for the perpetuation of the subsidy, for which no sufficient argument has been given unless one believes that they are not powerful enough already, and need dispensation from the requirements of the rule of law as it applies to everyone else. Capacious as the First Amendment's protections of the media are, they need more. Just as Mr Zuckerberg bought all the houses around his own because he needed more privacy, right?

I think the best route to improvement here is to take the bull by the horns and give the best case you can for offering the legal immunity subsidy to the companies as they are now. If you can do that, and the case is any good, you have a Hell of an essay, and Kevin Martin of Facebook has a job for you.

-- EbenMoglen - 04 Dec 2017

I don't know whether this was a response to my comments or another approach to revision. I do think that much good work went into a better introduction. If the best case for immunity, however, is some gas from Urs Gasser, that doesn't seem like a hell of an improvement over what you had before. The conclusion that says immunity has failed and we don't know what to do about that but if we don't talk about it we won't do anything is not the summing up of a strong case, either.

Perhaps the draft prefigures, then, the current state after Cambridge Analytica, in which everyone knows that the radical critics of the platforms (myself included) have been completely right, but - for a variety of reasons - people resist accepting that those who have been right about the problem all along also had the right solutions all along, too.

-- EbenMoglen - 31 Mar 2018

 

