Law in the Internet Society


Big Tech and the Misinformation Crisis: Policy Recommendations

-- By AmishiMagdani - 23 Oct 2021

A few days ago, Frances Haugen, a former Facebook data scientist, gave damning whistleblower testimony that brought to the fore Facebook’s knowledge of, and indifference towards, its own wrongdoings. News also broke that leaked internal documents showed employees at Facebook repeatedly raising red flags about misinformation surrounding the 2020 U.S. elections, flags that Facebook executives ignored.

The role of Big Tech in spreading misinformation has long been in the public eye, and increasingly so amid the mass hysteria surrounding the pandemic. Studies demonstrate that those who rely on social media for news are less likely to get the facts right about the coronavirus and politics, and more likely to encounter unproven claims. While this may seem intuitive, it is extremely concerning given that one in five Americans claim to get their political news primarily through social media. In 2020, there were several clampdowns globally on WhatsApp for its role in spreading misinformation about the pandemic. Various APAC countries, such as Singapore and Indonesia, have enacted ‘fake news’ statutes holding intermediaries liable for the spread of fake news and falsehoods.

The obvious impact of social media on the human psyche has thus not gone unnoticed by governments and courts globally. Until very recently, the main issue was how sheltered intermediaries tended to be under the ‘safe harbor’ provisions of most statutes governing them. Now, given the shifting global digital landscape, the question is not whether social media intermediaries ought to be held liable, but to what extent they should be held liable for the harms they cause. Should the role of intermediaries be reactive and ad hoc in curbing misinformation and content that harms individuals and society, or should they be more proactive and preemptive, even at the risk of compromising neutrality and endangering free speech and data privacy? I believe a good approach strikes a fine balance between the two, combining self-regulation with external checks and balances, including appellate mechanisms for misinformation.

Self-regulation in any behemoth can be tricky given the sheer number of people who use it (reason enough to disband them). However, it is also easiest for such behemoths to devise systems and formulate policies for self-regulation, given the vast array of resources at their disposal. There are a few steps Big Tech companies can take to combat misinformation: (a) putting efficient Notice and Takedown systems in place, (b) establishing an internal appellate mechanism, (c) formulating externally vetted, publicly available policies for the treatment of misinformation (including identifying its sources), and (d) preparing periodic, publicly available reports on compliance with such misinformation policies. In this model, (a) and (b) follow the reactive, ad hoc approach, while (c) and (d) are more preemptive.

(a) Notice and Takedown: While most Big Tech companies already have content removal teams, these ordinarily focus on takedown requests concerning hate speech, copyright infringement, harassment, and the like. Big Tech should form dedicated misinformation teams within these existing content removal teams to assess each takedown request alleging misinformation. Admittedly, what counts as an “efficient” Notice and Takedown system may vary, given that misinformation itself is difficult to characterize as such. Once a notice alleging misinformation has been received, fact-checking and attesting to the veracity of the information can be a drawn-out and complicated process. However, a dedicated team of specialists may significantly cut down the time between notice and takedown.

(b) Internal appellate mechanisms: Of course, any Notice and Takedown system will always run the risk of over-removal, given that most firms follow a ‘when in doubt, remove’ policy for content removal requests. To combat this, and in the interest of free speech, each Big Tech company should establish an appellate system that allows users to challenge takedowns when they believe the removed content was not misinformation.

(c) Formulating publicly available policies: Most Big Tech companies already have publicly available policies for combating misinformation, especially following the Cambridge Analytica scandal (see, for example: https://transparency.fb.com/policies/community-standards/false-news/), but these policies prioritize user- and reader-friendliness over detail. They should instead be elaborate, externally vetted documents that explain how sources of misinformation are identified and how such content is treated.

(d) Publicly available compliance reports: Given that one of the biggest criticisms of Big Tech (especially Facebook) today is its lack of transparency about compliance with internal policies (such as the policy described in (c) above), rules should also be put in place to ensure that periodic compliance reports are made publicly available. While such reports could be fabricated, doing so carries the greater risk of misrepresentation to the public and to shareholders, which increases each company’s accountability.

In addition to this system of self-regulation, another method of checking misinformation is co-regulation, i.e., a regulator overseeing the companies’ internal systems and acting as a final appellate authority. While different forms of co-regulation between intermediaries and governments have been proposed in certain jurisdictions (such as France), I believe this may not be desirable, owing to the impact of legislation and government involvement on free speech and data privacy. Co-regulation also leaves questions of jurisdiction unanswered.

Of course, each of the suggestions above poses its own set of challenges, and some may not apply to social media platforms operating on a smaller scale. Nevertheless, I believe they serve as a good starting point for combating misinformation without infringing free speech or data privacy.



