Law in the Internet Society

Our Digital Ecology of Deception

-- By PatricioMartinezLlompart - 04 Nov 2016

“Technology is neither good nor bad; nor is it neutral.” – Melvin Kranzberg, 1986

Something’s wrong with what we have made of the Net. In the aftermath of the presidential election, reckoning with how our direct bond to the Net, particularly through Twitter and Facebook, makes some kinds of evil easy has been all the rage. From the proliferation of fake news to the indifference toward systemic user harassment, the social networks we presumably turn to for knowledge and connection have become safe spaces for misinformation and hate speech. The norms of civility, truthfulness, and respect that bind together a vibrant democratic society are eroding fast on the Net, and we are alarmed at what this will mean for our non-digital lives.

This collective outrage at how the Net deceives us to the beat of fake news, however, is also deeply ironic. Deception powers our relationship with many of the “services” we obtain through the Net. We don’t go online only to simplify daily errands, obtain information, and pursue human connection; we are also in the Net to self-promote and police our peers. In turn, whether we are aware of it or not, service providers surveil each of our digital steps and bathe us in untruthful news. Our digital lives navigate a multidimensional highway of deception, and before the current moment’s moral panic over fake news, many of us didn’t seem to care.

Fake News Gets Real

If the system thrives on deceiving its users and allowing them to deceive one another, what explains our indignation at fake news? Why do we care more about untruths going viral than about the fundamental injustice of our digital lives being constantly surveilled and commodified? An immediate answer may lie in the behavior that fake news has recently unleashed in our physical world:

A man armed with an assault rifle stormed a popular pizza restaurant in Washington DC to investigate “Pizzagate,” a conspiracy theory widely shared across social media alleging that Hillary Clinton’s campaign operatives procured child prostitutes from the restaurant.

Around the same time, a Huffington Post journalist learned he was under FBI investigation for tweeting about engaging in voter fraud as a joke in response to a fake news story.

And most recently, Florida authorities charged a woman, who believes the Sandy Hook massacre was a hoax staged to advance gun-control legislation, with threatening a parent of one of the massacre’s victims.

These three incidents exemplify how the Machine not only collects but also generates behavior, and the tangibility and concreteness of that behavior perhaps explain why we are more scandalized by fake news than by a regime of surveillance and data mining that, however pervasive, feels “invisible.” Moreover, the first large-scale study of the effect of fake news reveals that American adults are deceived by untrue news articles 75% of the time, and that they are mostly unable to distinguish real stories from fake ones. Such a phenomenon may call for rethinking what constitutes censorship in our age of Big Fake News: although the “Net interprets censorship as damage and routes around it,” aren’t Facebook and Twitter censoring the truth by allowing it to drown in a sea of misinformation?

The Front to Misinform and the Back to Surveil

In light of these recent events, then, it should be no surprise that over the past year we have been more interested in learning about fake news than in the collection of our online behavior. Meanwhile, we remain placated residents of a “curiously fabricated privatised commons of data and surveillance.”

In-Q-Tel-backed Dataminr retains its monopoly over the mining of our tweets. Thanks to our Millennial distaste for carrying cash, Venmo’s default sharing features and limited privacy controls expose the financial transaction data of over two million users. A recent study shows that Venmo behaves just like Facebook, choosing the profit of user growth over addressing identified privacy vulnerabilities. And the ISPs, our entryway to Facebook, Twitter, and Venmo, can still see everything that enters and exits their customers’ computers (although, in a welcome change, broadband providers will soon need to obtain permission before collecting and sharing data on consumers’ online activities once new FCC rules take effect).

As such, deception in the form of fake news is not only a product of these services; deception is also their business model. The trick? Luring user traffic with the promise of hyper-connectivity and awareness of what’s going on. The real deal? Packaging our behavior for advertising. Think Facebook is the product? You are deceived; the Machine's most prized product is you.

Awakening to Deception is Liberation: A Return to First Principles

Our freak-out over fake news ultimately makes me hopeful, because it suggests we are no longer oblivious to what is happening around us. It recognizes that the sense of being informed we get from Facebook and other such media is an illusion; it means we have awakened to the reality that the Net is not serving us as it should; and it may signal that we are ready to rein in both the Machine’s viral production of untruths and its commodification of our behavior.

This past September, 91% of adults told Pew that “consumers have lost control of how personal information is collected and used by companies.” Pew also noted that privacy is no longer a “condition” of American life but a “commodity to be purchased.” Only 9% of those surveyed believe they retain significant control over how their data is collected and used. We have taken a hard look at our Black Mirror, and we are conscious of the urgency of pushing back.

As we ready ourselves to take action, the Net’s founding principles of interoperation and collaboration should serve as guides. Professor Lessig identified law, norms, markets, and architecture (code) as the four regulators of online conduct. Given the Net’s current political economy, law and markets offer no way forward. But in the spirit of the command for collaboration embedded in the norms and code of the Net’s original structure, we should first use our knowledge of the Machine’s deceit to alert others who remain in the dark. Our mandate is clear: now that we know we can be surveilled at all times, we have a responsibility to resist.

