Law in the Internet Society

The Misinformation Crisis and Intermediary Liability

 
-- By AmishiMagdani - 23 Oct 2021
 
The role of Big Tech in spreading misinformation has long been in the public eye, and increasingly so in light of the mass hysteria surrounding the pandemic, the January 2021 Capitol riot, the Cambridge Analytica scandal, and the mob lynchings in India. In November 2021, Nobel Peace Prize laureate Maria Ressa said that social media is creating a “virus of lies”, “manipulating our minds insidiously, creating alternate realities, making it impossible for us to think slow”. In various APAC countries, such as Singapore and Indonesia, ‘fake news’ statutes have been enacted, holding intermediaries liable for the spread of falsehoods.
 
In the US, internet intermediaries, from internet service providers (ISPs) to the social media platforms that courts have treated as providers of “interactive computer services”, are afforded broad immunity from liability for hosting harmful content posted by third parties. § 230 of the Communications Decency Act (CDA) provides, inter alia, that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”, and is often said to have “created the internet” as we know it. Given the impact of the rising wave of misinformation, there have been several calls to narrow the immunity granted by § 230 and to hold social media intermediaries and ISPs liable for the content they host. However, holding intermediaries liable for failing to regulate content posted by third parties runs the risk that they will over-regulate and take down more content than necessary, thereby chilling free speech and expression.
 
The balancing act between the independence of intermediaries, the First Amendment, the right to privacy, and the harms of leaving misinformation unregulated has been rendered particularly difficult by the ubiquity of data and the ease of access to it. While the role of intermediaries has come into question more and more often lately, the question of liability for published speech is certainly not new. In 1964, in New York Times Co. v. Sullivan, the United States Supreme Court (SCOTUS) held that the First Amendment requires a public official suing for defamation or libel to prove “actual malice”, that is, that the defendant knew the statement was false or published it with reckless disregard for whether it was true.
 
A decade later, in Miami Herald Publishing Co. v. Tornillo, the SCOTUS struck down a Florida “right of reply” statute which required newspapers that criticized a political candidate to give the candidate equal space to respond. The SCOTUS reasoned that because the statute “exacts a penalty on the basis of the content”, it would dissuade newspapers from publishing anything likely to cause controversy, which would in turn chill free speech and expression. Publishers, and by extension intermediaries, have thus long been shielded from such liability in the interest of free speech. It would be untoward if that protection were taken away from them now.
 
What, then, can be done to curb misinformation? There is, of course, no foolproof way of ending its spread once and for all, nor should there be, given the dynamic nature of information and facts as well as the fundamental right to speech and expression. But smaller measures may be taken.
 
According to a study conducted at MIT Sloan, most people do not want to share misinformation or false news; the sensory overload of scrolling through a news feed simply leaves them too distracted to focus on whether what they read was accurate. Professor David Rand of MIT Sloan observed that “…if the social media platforms reminded users to think about accuracy—maybe when they log on or as they’re scrolling through their feeds—it could be just the subtle prod people need to get in a mindset where they think twice before they retweet”. In line with this, social media platforms and similar intermediaries could implement simple accuracy prompts that shift users’ attention to the reliability of a post before they share it. Such prompts nudge users to think about the content they are posting. This approach relies not on over- or under-screening by ISPs but on users’ autonomy and their ability to discern misinformation before they share it.
 
Lastly, though harder to organize and achieve, it may also help to invest in ground-level media literacy programs nationally. In 2017, the Yale Information Society Project and the Floyd Abrams Institute for Freedom of Expression hosted a workshop on fighting fake news, which suggested that consumers must be “better educated, so that they are better able to distinguish credible sources and stories from their counterparts.” While media literacy programs have already been introduced through legislation in states such as California and Washington, they ought to be adopted at the national level, following countries like Canada and Australia.
 
In summary, while the threat of misinformation looms large, intermediary liability should continue to be treated as it is today. Design changes that remind users to consider the veracity of what they post, together with aggressive media literacy programs undertaken by governments and private entities alike, will assist in, at the very least, the early identification of misinformation. Its harmful effects can then be minimized without either arming private ISPs with unfettered power to control speech or holding them liable for their inability to exercise such control.

I'm not sure I understand the genre of this draft. It isn't actually policy analysis, I don't think. What policy-maker, even in a system that didn't have a first amendment, would consider a requirement for publishers of any other kind to "take down" misinformation? How could there be a regime of such a kind, whether called censorship or self-censorship, for Facebook if there could not (as there obviously could not ever be) for Fox News, NBC, or the New York Times? This is law school, so presumably it would not be out of place to mention Miami Herald Publishing Co. v. Tornillo.

But I don't think this is satire, either. So the best route to improvement certainly begins by clarifying whether "Recommendations" is in the nature of Swift's "Modest Proposal." If not, I think it would probably be helpful to boil down all the recitation, which we don't need in addressing such a knowledgeable readership as this class. That space is needed to present the analysis, and perhaps to learn some of the relevant law, that actual policy-making would require.
