Computers, Privacy & the Constitution

The Political Objective of Disinformation

-- By MadihaZahrahChoksi - 26 Apr 2018

After all of my reading and thinking about the debacle that was the 2016 US presidential campaign and election, I am using this opportunity to sum up my thoughts in 1,000 words or less, and then to move on from it.

A Great Deception

Jürgen Habermas defines the “public sphere” as a place where private citizens come together to form the “public.” In the 21st century, however, following a few hundred years of continued economic, industrial, and political development, an entirely new public sphere exists on the World Wide Web. The “virtual sphere,” or the “networked public sphere” as defined by Zeynep Tufekci, constitutes an open, shared social space in the digital realm. Like the Habermasian public sphere, the networked public sphere encourages the free exchange of ideas—however, embedded within its seamless and intuitive digital interfaces are deceptive mechanisms controlling that exchange. Habermasian theory fails to account for the ways in which differentiated public spheres emerge or develop over time, a valid criticism in a society where human-to-information interaction is increasingly curated. This criticism applies to the case study of the 2016 U.S. Presidential Campaign (USPC), in which dialogue on the Internet was artificially shifted and manipulated. Social media platforms, search engines, and web pages, acting as gatekeepers for expression on the net, heavily mediate user experience in this newly networked public space. They prescribe an information diet that confirms and conforms to the patterns of belief and taste gleaned from users’ behavior on the platform.
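The curated "information diet" described above can be sketched in miniature. The following is a purely illustrative toy model, not any platform's actual ranking code: the topic labels, data shapes, and scoring rule are all invented for the example. It shows how a ranker that simply favors topics a user has already clicked on will keep serving more of the same.

```python
# Toy illustration of engagement-based curation (not any platform's real code):
# posts whose topic matches the user's past clicks are ranked first.

from collections import Counter

def rank_feed(candidate_posts, click_history):
    """Order candidate posts by how often their topic appears in past clicks."""
    affinity = Counter(post["topic"] for post in click_history)
    return sorted(candidate_posts,
                  key=lambda post: affinity[post["topic"]],
                  reverse=True)

history = [{"topic": "politics"}, {"topic": "politics"}, {"topic": "sports"}]
candidates = [{"id": 1, "topic": "science"},
              {"id": 2, "topic": "politics"},
              {"id": 3, "topic": "sports"}]

feed = rank_feed(candidates, history)
# The politics post surfaces first; the never-clicked science post sinks to
# the bottom, narrowing the user's "information diet" with every iteration.
```

Even this crude sketch exhibits the feedback loop the essay describes: whatever the user has engaged with is shown more prominently, which generates more engagement with it, which promotes it further.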

The deceptive nature of information disseminated during the USPC is epitomized by the relationship between the dubious nature of facts and algorithmic bias. The first, more tangible characteristic of illusory information is the unreliability of facts themselves and their immediate ability to guide partisan politics via the networked public sphere. For example, in the context of the USPC, a false perception among the majority of Trump supporters was that he is a committed anti-urbanist. Trump’s focus on the working class in rural areas enabled him to paint his political identity as that of a candidate fundamentally concerned with the issues faced by communities in geographically remote areas. Although Trump has never lived outside of a city during his adult life, his campaign turned to social media platforms whose seamless integration into everyday life spread these alternative facts as truths, ultimately reaching those targeted, marginalized rural supporters.

In Pax Technica, Philip Howard describes how in “emerging democracies” political groups rely on the networked public sphere “…to raise funds, rally supporters, and outmaneuver opponents in policy debates.” In this way, political objectives like Mr. Trump’s, which might otherwise struggle to attract support, can find momentum through the manipulation of web-based spaces where there are few financial consequences. Howard claims that “[i]n more and more elections, political victory goes to the most tech-savvy campaigner. … [An] impressive party machine is one that uses social media to create a bounded news ecology for supporters. It mines data on shared affinity networks, and otherwise mobilizes voters on election day.” In this sense, Mr. Trump’s exasperating opinions, packed into 140 characters or fewer on Twitter, not only attracted media attention across all platforms but became a critical tool through which he further connected with his supporters.

Secondly, the illusory character of digital interactions further constrained the networked public sphere as a space for the free exchange of ideas by introducing artificial forms of manipulation such as algorithms and bots. Social media platforms in the mold of Twitter and Facebook, which depend on algorithmic control for an “optimal user experience,” alongside Russia’s algorithmic manipulation, injected specious information about political ideologies into the networked public sphere. While platform companies argue that their algorithms are not trained to influence political dialogue one way or the other, the experiential reality of these features demonstrates otherwise.

Immeasurable Outcomes

Algorithms and Society

The trending algorithm does much more than spread information based on popularity; it becomes a symbolic representation with cultural significance. Regardless of the topic, the fact that a certain story or idea is “trending” or “viral” is influential in and of itself. A group of trending topics taken together can represent a pseudo-collective consciousness, a moment in time in which a certain set of ideas defined the outlook of the public. While the process of content reaching this status is often spontaneous, the ability to dependably promote content has spawned an industry. Using technology such as “bots,” or zombie accounts that automatically like and retweet content, these trends can be controlled by experienced actors—usually marketing firms with relatively innocuous motives. However, this space is increasingly occupied by actors affiliated with national governments, whose objectives include manipulating public opinion.
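The bot-amplification mechanism described above can be made concrete with a toy model. This is a deliberately simplified sketch under invented assumptions—a single engagement count per topic and a fixed popularity threshold—not a description of how any real platform computes trends. It shows how a botnet's automated likes can push an otherwise marginal topic over the trending threshold.

```python
# Toy model of trending manipulation (purely illustrative; the threshold,
# counts, and topic names are invented for the example).

def trending(engagement, threshold):
    """Return the topics whose engagement count meets the trending threshold."""
    return [topic for topic, count in engagement.items() if count >= threshold]

engagement = {"local-news": 120, "fringe-claim": 30}

# Organic activity alone: only the genuinely popular topic trends.
before = trending(engagement, threshold=100)

# 200 zombie accounts each "like" the fringe topic once.
engagement["fringe-claim"] += 200

# After amplification, the fringe topic trends alongside the real story,
# and the trend itself then lends it an air of legitimacy.
after = trending(engagement, threshold=100)
```

The point of the sketch is that the algorithm itself is indifferent to the source of engagement: manufactured clicks and organic clicks are indistinguishable once aggregated, which is precisely what makes the "trending" signal manipulable.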

Algorithms and the Controlled Urban Experience

Algorithms and algorithmic infrastructure have become unassailable truth mechanisms. Coupled with the dissemination of untruthful and misleading information online, they hold such extraordinary power that “[t]he internet has become not just a weapon in the world’s great political battles. It has become the weapon for ideological influence, and careful use can mean the difference between winning and losing.” Whereas the boundaries of physical spaces are concrete, those of online spaces are fluid and are not subject to the kinds of outspoken, critical behavior (such as public shaming or protesting) found in physical public spaces.

Social media usage is frequently described as taking place outside of reality, but people increasingly confront the truth that the nature of users’ experience on these platforms affects not only sociopolitical life but national security as well. Networked “urban” spaces are designed to connect like-minded people, and in the case of neo-fascists and geographically isolated Americans, algorithms reached across societal norms and physical distance to encourage interactions between users who shared similar beliefs. Rather than fulfilling their promise to foster diverse dialogue, these platforms artificially concentrate and segment users without regard for the basis of their union—be it environmentalism or eugenics. The effective control thrust upon communities who blindly trust virtual sources such as Facebook and Twitter as disseminators of truth is immeasurable. Ceding control to these digital spaces is a decision made for the public without its consent, but its consequences are felt directly by that same public, not by the digital mechanisms that facilitate the flow of information on their networks.

Thinking Beyond the Blame Game

Whereas in the 1990s the World Wide Web was championed for its democracy-producing effects, less than three decades later, in an era of fake news, stolen elections, and the impending death of privacy, the web’s human-interest sales pitch has been betrayed by the privatization of the net. In the context of the USPC, user agency was sidelined and wholly corrupted by the very firms that professed to enhance it through tools encouraging communication and dialogue between citizens.

The aftermath of the 2016 presidential campaign and election urges us to reflect on the controlling features that have become an unalterable reality of the web’s infrastructure, and perhaps also to reconsider our role as agentless bystanders, since there is, and always was, more that we could do.

Response - Joe Bruner

Madiha, this was very thoughtful and covered a lot of ground in a concise way. Using social media's role in the 2016 election to set up a starting point sends me in some interesting directions.

The first interesting direction to me is thinking more specifically about manipulation and algorithms. To me, bots are a less interesting form of manipulation for now - when we get more realistic-seeming bots, that will be more dangerous, but for now they are primarily interesting because they interact with algorithms to make things trend and to make pages appear popular and legitimate. In the same way you spend more money to buy your ad a better place in the local newspaper, now you buy bots to make your post appear more popular and significant. What is ironic to me, and what this reveals, is that bots may be a better way to pay your way to notability than the "promotion" function Facebook and Twitter have tried to monetize. This has several advantages: if you already have a spare botnet and dummy accounts, it's free; the post does not carry that delegitimizing little "promoted" label; and it builds a greater screen of plausible deniability between who is posting and who is spending to promote the post. On the one hand we might expect Twitter and Facebook to be angry about this behavior and create algorithms to ban bots, but on the other hand, assuming Eben's ideas about behavior collection are correct, annoying fake comments that provoke all the real people in the machine into reacting may be worth more to them than the lost promotion revenue. At least it may be enough of a counter-benefit that they do not find it cost-effective to try to ban all bots.

Everyone is talking a lot about algorithms but I think the discussion rarely gets specific enough about how the algorithms actually affect our behavior and thinking. In the winter I met and was really impressed by Miranda Fricker. I'll give you the Wikipedia blurbs on her biggest idea, the two forms of epistemic injustice:

"Testimonial injustice consists in prejudices that cause one to "give a deflated level of credibility to a speaker's word": Fricker gives the example of a woman who due to her gender is not believed in a business meeting. She may make a good case, but prejudice causes the listeners to believe her arguments to be less competent or sincere and thus less believable. In this kind of case, Fricker argues that as well as there being an injustice caused by possible outcomes (such as the speaker missing a promotion at work), there is a testimonial injustice: "a kind of injustice in which someone is wronged specifically in her capacity as a knower."

Here, this seems relevant to me because likes, shares, and views on the platform become a de facto currency of credibility. I recently saw a message calling for bringing together left-wing "social media influencers." The importance and credibility of a speaker are in large part a function of their following on the platform, which is hypermediated (you love that word) and demarcated by the terms of interaction which the platform companies determine for us. If all our political organizing and discussion is moving into the platforms, the ability to be believed or have a say is diminished, almost certainly unjustly, if you do not use the platform and command a sufficiently large following. And obviously if your messages are too long, nuanced, or sophisticated for the terms of interaction a platform provides, you will have no following. Even more interestingly, not having enough pictures or information about your personal life linked to the platform also diminishes your credibility in the new form of social interaction. Who is this strange person with a flag as their profile picture? Facebook and Twitter have successfully inverted the dynamics of the old forms of internet political discussion on 4chan, Something Awful, and Slashdot, where too much information about yourself was seen as foolish vanity and a sign someone was a self-promoter, not to be trusted, rather than someone authentically expressing their thoughts and interacting for the interaction's own sake. But this is actually the smaller of the two ideas for our purposes, in my opinion.

"Hermeneutical injustice, then, describes the kind of injustice experienced by groups who lack the shared social resources to make sense of their experience. One consequence of such injustice is that such individuals might be less inclined to believe their own testimony. For example, Fricker describes a woman attending a meeting in the late 1960s at which post-partum depression was discussed; in this case, the shared social resource - a linguistic label and sharing of experiences - enabled an understanding of a condition she had experienced and was previously blamed for."

Your own Zeynep Tufekci is undoubtedly correct when she points out that Youtube is now a huge radicalization engine, but I think the sketch of how this happens is usually incomplete. Sophisticated ideology to interpret one's experiences is hard to come by, and it requires a lot of dedicated focus, ability to read and study, and so on, especially when you do not have a teacher teaching you firsthand. When I was a teenager I was reading Hayek and Engels and so on, but if my ability to pay attention had been compromised, I might have instead defaulted to sources that were easier to pay attention to. It's a lot easier for Youtube videos to hold a kid's attention, and they give clearer answers. Antisemitism is so easy: you just tell people that the Jews are behind everything that sucks and that they only care about themselves, so the solution is either "another shoah" or "itbah al-yahud," depending on what language the video is in. The communism and socialism that goes around is a horrible cut-rate version, because instead of answering for the horrors of Stalin, it's much easier to say that was all bullshit. We're making it really hard for kids to use good tools to interpret their world by diminishing their attention span and providing flashy hypermediated alternatives, and we're giving them easy-to-use, CONVENIENT, shitty tools. But it's not only kids - the whole Trump/Hillary interplay was so shallow it left most Americans feeling totally dissatisfied, because everyone was just hitting back stupid little talking points on their social media pages, and the majority of Americans who did not vote called it out. If not voting were a candidate, it would have won by a landslide.

I suppose the next big lingering question is how to structure a networked public sphere that facilitates real, meaningful interactions and study of the questions, rather than facilitating vaguely political behavior which is reactionary in every sense of the word. Social media platforms don't engage the rational mind and aren't conducive to healthy, reasoned debate and discussion. This isn't even really controversial now, but the hard question to me is what structural arrangement of humans interacting over a computing network does a good job, rather than a bad one, of approaching the Habermasian ideal. If you like Jürgen Habermas, you understand his shame at the fact that more Germans were attending the Reichsparteitag in the Luitpoldarena in Nuremberg than were ever discussing bourgeois ideas in his precious Öffentlichkeit salons. Everyone having a networked computer should give everyone a chance to participate in that kind of idealized Öffentlichkeit culture and allow us to take back power from the old broadcasters whom Eben called the Eyeball Merchants. And I actually think the tech for that is mostly fine already, but we have to facilitate a transition towards people using it.

140 characters is not enough, but this Wiki is pretty nice when people use it, don't you think?





r3 - 17 May 2018 - 22:50:43 - JoeBruner