EstherStefaniniSecondEssay 3 - 05 Jan 2021 - Main.EstherStefanini
Should social media platforms be held accountable for false news?

-- By EstherStefanini - 22 Nov 2020
Introduction
Whilst I have recently been able to pry myself away from Instagram (I abandoned Facebook several years ago), I have failed to reduce my Twitter usage. It acts as a messaging platform, a source of entertainment and, most importantly, my source for news and commentary on global affairs. During the current pandemic, the site has been flooded with tweets about COVID-19: suddenly every user is an expert on transmission rates and the apparent uselessness of mask-wearing. The amount of conflicting information on such an important topic troubled me and led me to start double-checking the accuracy of tweets against more reputable news sites.
Many other young people rely on social media for news consumption – about 48% of 18-29-year-olds in the US primarily consume news this way. In my experience, I resorted to Twitter because of its short, snappy tweets, the variety of topics and the ability to comment instantly and interact with other users – I can consume as much news in as little time as possible and discuss it with people my age around the globe. But this pandemic has shown me the risk of relying on a site like Twitter for accurate information – the platform is saturated with misinformation, to the extent that there have been calls for such platforms to be held accountable for being “responsible for thousands of [COVID] deaths”. However, this is a dangerous demand to make. I will explore the effects of holding social media platforms accountable and demonstrate that other solutions should be utilised.
Should we censor misinformation?
Social media platforms currently bear no responsibility for falsehoods posted on their sites, as Section 230 of the Communications Decency Act of 1996 protects them from liability for content published by third parties. Nevertheless, platforms do impose some restrictions: they typically prohibit threatening language and hate speech, including discrimination on the basis of race, age and disability. This type of censorship is perfectly legal, as private platforms are not subject to the First Amendment, and it is generally expected by the public. As such, it is not unreasonable to demand that platforms also censor content that is deemed to be false, especially if it relates to politics or the COVID-19 pandemic. The business model of social media (relying on shares and likes), its almost-addictive nature and the propensity of users to take everything at face value create a platform on which users are easily influenced, even without realizing it. Hence, there is a genuine, life-threatening risk if we do not limit posts claiming that the COVID vaccine contains tracking microchips or that the pandemic is a hoax and hospitals are in fact empty. Despite this, I do not think social media platforms should censor such content, and we should be very hesitant before concluding that censorship of misinformation is the best solution.
Mass censorship of false news on social media may be ideal in theory, but it is impractical. Platforms would have to rely on user reports and detection algorithms to recognise ‘fake’ posts and then run them through a fact-checker. However, algorithms cannot pick up every questionable post – many will slip under the radar due to the sheer volume of content uploaded every day. Even now, death threats, which have been banned for some time and are meant to be removed instantly, sometimes go undetected. Conversely, some posts may be flagged incorrectly, causing content that is actually factual, or merely opinion, to be wrongly removed from the platform. Moreover, if platforms commit to banning misinformation, any inaccurate posts that are not flagged and remain on the site will then be treated as reliable and accurate, which defeats the purpose of such censorship. Realistically, such censorship would not inspire confidence in users or create a more reliable platform. Rather, it would likely lead to mass migration to other platforms such as Parler, the controversial meeting place of many far-right and Nazi-affiliated users. Instead, we must emphasise that social media platforms are not reliable news sources. We must educate, and we must highlight the falsehoods that are posted. Flagging such posts with warning labels is one way to achieve this.
What is the other solution?
Platforms have arrived at a halfway solution, which is probably the most appropriate method: flagging fake news. Instagram, a subsidiary of Facebook, marks suspicious posts with a “false information” label. Anyone can report a post that sounds suspicious, which will cause it to be checked by an independent party. Instagram states they “work with 45 third-party fact-checkers across the globe who are certified through the non-partisan International Fact-Checking Network to help identify, review and label false information”. Instagram’s labelling goes a step further than Facebook’s – marked posts are blurred out until users click ‘view post’ after acknowledging that the post may contain false information. Instagram also makes such posts harder to find by hashtag search, and they will not show up on the ‘Explore’ page.
Twitter caught up with the other major platforms this year, introducing labels, warning messages and links for users to find out more. Tweets mentioning COVID-19 in particular came with links to global and public health information. Unlike Facebook and Instagram, Twitter does flag tweets made by politicians and public officials – dozens of tweets published by Trump have been marked as ‘disputed’ or ‘false’, including his recent claim that he “won the election”.

Conclusion
Social media platforms should not be held accountable for false news or made to censor such content. Fake news existed before social media and it will continue to exist no matter what solution we implement. Instead, we should teach future generations to be extra prudent on the internet: do not believe everything you read, be wary, and double-check statistics that sound wrong against a more reputable site. Perhaps avoid reading news on social media altogether – I, personally, no longer look for COVID updates on Twitter, as I found the proliferation of warning labels and links to further information off-putting, and I now rely on traditional media outlets instead.
EstherStefaniniSecondEssay 2 - 31 Dec 2020 - Main.EbenMoglen
“This post may contain misleading information” – an analysis of the use of warning labels on social media posts
Introduction
Whilst I have recently been able to pry myself away from Instagram (I had abandoned Facebook several years ago), I have failed to reduce my Twitter usage. It acts as a messaging platform, a source of entertainment and, most importantly, my source for news and global affairs.
What do you think it would take to help you rebalance the media diet by getting actual news from actual news organizations? What tools or forms of presentation would reduce the dependence on Twitter for what highly sophisticated social structures have evolved to do better?
During the recent election frenzy, I found myself on the app a lot more than usual. I noticed, however, content warnings on political posts, particularly those posted by President Trump. The warning message – “this may contain misleading information” – followed by a link to sites with supposedly more impartial information, appeared under quite a few of his posts. I was intrigued: since taking this course I had spent some time thinking about how social media has detrimentally changed society, and its power to spread ‘false news’ is one such issue. Yet it seemed as if a solution had been found. Unfortunately, this is not the case. I will analyze how social media platforms have recently utilized such warning labels and demonstrate that they are merely a band-aid and not a cure.
Why Warning Labels are Needed
The success of Trump’s 2016 campaign has been partly attributed to its warm embrace of Facebook and Twitter as an advertising tool.
Mostly by them. I don't think the persistence of large TV advertising expenditures dwarfing the ad spending on social media suggests that the professionals who run that and other campaigns believe any very strong form of the proposition.
Most notably, thousands of these ads were affiliated with Russians, spreading exaggerated and often false information. It is not just the US that has seen Facebook being used to erode democracy; the now-defunct Cambridge Analytica used social media data to influence elections in dozens of other nations, including Kenya, the UK and Trinidad and Tobago, and the platform has even been implicated in triggering a genocide in Myanmar. Whilst political ad campaigning is normal and expected, the business model of social media (relying on shares and likes), its almost-addictive nature and the propensity of users to take everything at face value create a platform on which users are quite easily influenced, often without realizing it. Facebook, Twitter and Instagram were pressured to acknowledge their role in undermining electoral democracy in the 2016 elections and have since adopted similar methods to address the spread of misinformation on their platforms.

When Did They Start?
In December 2016, Facebook announced the introduction of ‘disputed flags’ – red badges which appeared under articles that had been checked by third-party fact-checkers. This was replaced a year later: instead of red flags, Facebook provided links under questionable posts to more reputable sites. The company argued that related articles were more effective than disputed flags in discouraging users from sharing false news (although it failed to explain how it reached this conclusion; I am inclined to think the new initiative was more palatable to the right, who complain that flagging censors a lot of their promotional media). As of September 2019, Facebook continues to address misinformed posts in this way, as well as removing fake accounts (bots) and using AI systems to recognise such content. However, politicians are exempt from its fact-checking program.

Instagram, a subsidiary of Facebook, marks false information in a similar way. Anyone can report a post that sounds suspicious, which will cause it to be checked by an independent party. Instagram states they “work with 45 third-party fact-checkers across the globe who are certified through the non-partisan International Fact-Checking Network to help identify, review and label false information”. Instagram’s labelling goes a step further than Facebook’s – marked posts are blurred out until users click ‘view post’ after acknowledging that the post may contain false information. Instagram also makes such posts harder to find by hashtag search, and they will not show up on the ‘Explore’ page.

Twitter caught up with the other major platforms this year, introducing labels, warning messages and links for users to find out more. Tweets mentioning COVID-19 in particular came with links to global and public health information. Unlike Facebook, Twitter does flag tweets made by politicians and public officials, which sparked my interest in the topic – dozens of tweets published by Trump have been marked as ‘disputed’ or ‘false’, including his recent claim that he “won the election”.

How Effective Are the Warnings?

A study conducted in January 2019 found that “the “Disputed” tag [on Facebook posts] reduced the mean proportion of respondents who accept a headline as “Somewhat accurate” or “Very accurate” when no general warning was provided from 29% in the baseline condition to 19%, a ten-percentage point decline”. A later study in July 2019 also concluded that warning labels on Facebook posts reduced the likelihood of fake news being shared – after being presented with a fabricated Facebook post accompanied by a warning label, 23% of respondents said they were generally likely to share the fabricated post. However, the comments sections below articles discussing the introduction and efficacy of misinformation labels revealed quite a different sentiment. One Guardian reader believes that “fake news is a left wing boogie man scapegoat for their failures”. Another said it is “pointless, no one reads the original article nevermind the related ones. The battle is won and lost with memes”.

Despite these attempts, it is clearly difficult to assess whether the warning labels work effectively and protect easily influenced minds, and democracy more generally. Instagram and Facebook have admitted that they cannot find and label all suspicious posts; many will slip under the radar due to the sheer volume of content uploaded every day. Furthermore, the readers most likely to align themselves with exaggerated right-wing propaganda are unlikely to be deterred by a warning sign, as the comment above suggests.

Conclusion
Fake news existed before social media and it will continue to exist. In my opinion, the warning labels are an admirable addition, but they will make little to no difference to the spread of fake news and do not address any of the other problems of social media platforms. I think the real issue is that people (including me) are over-reliant on social media for keeping up with the news.
Hence my question at the top. If that's the real issue, let's see what we can learn about it by studying you.
I think the best route to the improvement of the present draft is to take a lawyer's view of the labeling behavior. What are the companies trying to achieve by way of reducing their legal and/or social liabilities? What legal measures are they trying to avoid? How can legal and technical measures be devised that accord with the political effort to reduce their distorting effects on the epistemic confidence that we are conducting democratic self-governance with due regard to the establishment of shared social "facts"?