Law in the Internet Society

LouisEnriquezSaranoSecondEssay 4 - 08 Jan 2021 - Main.LouisEnriquezSarano
Is Facebook a Threat to National Security?

 

-- By LouisEnriquezSarano - 22 Nov 2020

Facebook and other behavior-capture firms (BCFs) have become a weapon that malign actors, Russia most prominently, use to attack American democracy. Facebook makes money by aggregating its users’ behavior (on Facebook but also across the net) and using that behavioral data to (1) keep users on the platform by presenting them with content tailored to appeal to them and (2) present them with “targeted” advertisements. Because Facebook makes money when its users interact with ads, it wants to keep users on Facebook for as long as possible. And because people are attracted to extreme and divisive content, Facebook feeds its users such content to keep them on the platform. Russia famously spread false and divisive content on Facebook for the express purposes of electing Donald Trump and weakening the United States. Facebook was not a passive victim in this project; by spreading the Russian content, it kept its users online and maximized the number of ads they viewed.
 
This threat is woefully asymmetrical: those seeking to destabilize the US are quasi-invulnerable to reciprocal counterattacks because they have few intact democratic institutions to target and are already flooding their own networks with disinformation and propaganda. Thus, the US cannot induce Russia to stop its efforts by starting American troll farms. Instead, we must protect ourselves by passing legislation that prohibits BCFs from using aggregated behavioral data to present their users with content that will keep them engaged. Doing so would not limit the quantity of disinformation on the web, but it would necessarily inhibit its power, since disinformation would no longer be systematically presented to the individuals predisposed to believe it.
 

The Call is Coming from Inside the House

 
Alarm surrounding China’s projection of global technological power, embodied in debates about Tik Tok and Huawei, signals growing awareness of the power of aggregated behavioral data. But few commentators have connected the dots: if Tik Tok is a threat, then so is Facebook. The problem is not Tik Tok’s behavioral aggregation as such, but its ability to use that data to influence Americans. Facebook’s data can do that too; our adversaries need only master our own BCF platforms to attack American democracy.
 
Facebook’s business model is well adapted to the goals of Russia and its fellow travelers. As one internal Facebook report put it: “Our algorithms exploit the human brain’s attraction to divisiveness.” The logical progression is simple: (1) people are attracted to divisive content; (2) Russia publishes false and divisive content on Facebook to destabilize American democracy; (3) Facebook uses the behavioral data it aggregates to feed people this content, which keeps their attention on Facebook; and (4) advertisers pay for this attention. It’s a symbiotic relationship: when Russia publishes divisive and false content on Facebook, it is leveraging the platform while providing it with a direct benefit.
 
This feedback cycle is readily observable. Russia, in its campaign to destabilize American democracy, actively supported Donald Trump’s candidacy by spreading misinformation on Facebook. At the 2020 election cycle’s zenith, the FBI warned that other actors were engaged in similar activities. Extremist political ideologies often go hand in hand with viral misinformation, and Facebook not only "provid[es] a platform to QAnon [(a viral network of extreme conspiracy theories)] groups … [but also] actively recommend[s] them to users who may not otherwise have been exposed to them." As many as 64% of the members of online extremist groups join only after Facebook actively recommends the group. The QAnon conspiracy has been promoted by Russian organizations and has been tied to several violent crimes. Recently, Trump himself exploited this symbiosis in an attempt to undo the 2020 election.
 
Facebook and other BCFs are therefore a threat to American democracy and national security. Public opinion drives voting and political engagement, and public opinion is increasingly shaped by media consumed primarily on BCF platforms. Thus, our adversaries are able to influence and distort public opinion to weaken American democracy and power.
 

The Asymmetry of Destabilization Warfare

 
Behavioral aggregation is not the nuclear arms race; there is no mutually assured destruction because we have no equivalent target. Our opponents have already dismantled and coopted their democratic institutions. If Russia thought its social media was under attack, it could simply shut it down or overwhelm its own public forums with propaganda-spouting trolls. So, unless the US is willing to start its own troll farms to combat misinformation attacks, the only viable defense is to dismantle the BCF business model.
 

How to Deweaponize the Net

 
To deweaponize the net, BCFs must be forbidden by law from aggregating behavioral surplus and using it to target their users with content and advertising. As long as that business model remains in place, the BCFs’ profits are tied to their ability to influence people, and their power to distort public opinion will only grow. The same profit motive ensures that the BCFs will never truly police the content on their platforms, whatever fig leaves they may publicly adopt. Only legislative intervention can end this destructive cycle. Though such a law would have no direct effect on individuals’ capacity to create divisive misinformation, it would necessarily reduce that misinformation’s power by eliminating its primary vector: the BCF algorithms that target particular users with the misinformation most likely to appeal to them.
 
The intrusion of “national security” into discussions on the vitality of the public forum is worrisome but likely inevitable. Moreover, those who are less inclined to support laws protecting individual privacy as such may be more supportive of reining in the BCFs in the name of national security. Privacy advocates would therefore do well to address the issue head on and use it to protect individual privacy while they still can.

Revision r4 - 08 Jan 2021 - 05:39:09 - LouisEnriquezSarano
Revision r3 - 31 Dec 2020 - 17:23:52 - EbenMoglen