Law in the Internet Society

The Means of Prediction

-- By LiasBorshan - 21 Nov 2020

Setting the Scene: The Means of Behavioral Prediction

The uninhibited rise of surveillance capitalist enterprises over the last two decades has created an immense imbalance with respect to who controls one of humanity’s most prized resources: knowledge. Companies like Amazon, Facebook, and Google have at their fingertips “configurations of knowledge” (to borrow from Shoshana Zuboff), or “data,” about individuals and society at large that are historically unprecedented. All the while, the individuals whose data is being collected have virtually no access to this proprietary knowledge. To make things worse, this knowledge gives surveillance capitalists (or rather their machine learning algorithms) the ability to predict and shape people’s behaviors surreptitiously, without those people ever becoming consciously aware that it is occurring. Currently, there is no real accountability or regulatory framework governing the trading of this knowledge. These companies wield an uncontrolled power over our behavior that threatens to consume human agency and subjugate organized society in the pursuit of capital.

The Pandemic: An Opportunity

In the midst of the SARS-CoV-2 global pandemic, all of these problems are being exacerbated, as these companies have seen an opportunity to drastically expand the kinds of data they collect into a new domain: medical data. And yet, there can be no doubt as to the potential benefits that these companies and the behavioral data they collect could provide in helping governments deal with the pandemic. The ability, for instance, to use location data collected from mobile phones to support contact tracing programs could allow governments to manage their responses to the virus more easily, and could ultimately be life-saving. The same data could also be used for social mobilization, for health promotion and communication with the public about the virus, and for evaluating public health interventions. In fact, one need only look to the UK to see that these options are being considered and, to a certain extent, implemented. A number of UK medical researchers requested that tech companies share their data and become involved in the pandemic response. Boris Johnson invited many of these companies to share their resources with the NHS, and several, including Amazon and Microsoft, have agreed. As a result, the NHS now has a data sharing agreement under which it can share any patient data it wants with any organization it wants.

A Familiar Narrative: The Greater Good

But do the potential benefits of having these companies participate actively in the Covid-19 response justify the furtherance of the surveillance capitalist regime?

Two arguments are typically made for why these companies should share their data and develop programs in collaboration with government organizations to help with the response effort, despite the threat of surveillance capitalism. The first is that the companies are already collecting this data anyway; the argument goes that, rather than using the data solely for surveillance capitalism, these companies would also help governments and their medical services contain an extremely dangerous virus. The second is that desperate times call for desperate measures. Of course, one could still have very serious objections to these companies having collected the data in the first place, but if it means that lives could be saved, why should we not involve them and let them use these technologies for the greater good?

Arguments like these have swayed many people, but they require one to make the untenable assumption that the various surveillance capitalist companies can be trusted to do this work with public health as their actual objective. While it is true that the technologies, as well as the data, that these companies have at their disposal could prove useful, we have no way of controlling or regulating how that data gets used. The kind of undertakings I described above would ultimately involve tech companies like Google expanding their data collection to include our medical and health data. Inevitably, that process would involve the commercialization of our medical information, which would become yet another (perhaps the most) intrusive data point for these machine learning algorithms to use to predict our behaviors, monitor us, advertise to us, capture our attention, and shape our beliefs. The fact of the matter is that, without regulations limiting how behavioral data is used, there is currently no mechanism through which these companies could be compelled to use this medical data only for Covid-19 related research. Given this lack of regulation, the pandemic presents the perfect opportunity for these companies to fast-track access to huge swaths of new data that they can use to deprive us further of our agency.

Ground Rules

As a society in the midst of this pandemic, we stand at the precipice of what could be the next phase in the obliteration of human freedom and autonomy. This does not mean that we should not embrace the technologies available to us and the myriad benefits they may confer on the human race. We must, however, tread carefully. The free, unrestricted use of our personal information, behavioral data, and medical data by these companies allows their algorithms to mold our behaviors in ways that we cannot fully comprehend. Before we begin considering how all of this data could be useful, we need to set the ground rules for how it can be used and who can use it. In doing so, the focus must be on democratizing this form of knowledge, to combat the immense imbalance that a monopoly on it grants. The pandemic has made clear just how urgently countries need to define the limits on what can be done with data and the extent to which it should be anonymized. Only once such regulations are developed can we begin re-contextualizing these technologies and their uses without compromising our freedom and privacy.

Set the scene you do, very well. But having opened with a broad general argument, your movement to the particulars of the data-miners' response to the epidemic is too abrupt. The draft would benefit from having that transition occur in a more articulated manner.

It would be better to put the epidemic-related activities in their larger context. With respect only to Google, for example, you should point out that health information is already an entire letter of the Alphabet. A little more clarity about what is already going on under the V would set the scene better in another way. Instead of one item, about the UK NHS, the essay should have command of a broader view of activities of the companies. Again, with respect only to Google, a collation of its own statements is a good place to start.

