The Means of Prediction

-- By LiasBorshan - 21 Nov 2020

Setting the Scene: The Means of Behavioral Prediction

The uninhibited rise of surveillance capitalist enterprises over the last two decades has created an immense imbalance with respect to who controls one of humanity’s most prized resources: knowledge. Companies like Amazon, Facebook, and Google have at their fingertips “configurations of knowledge” (to borrow from Shoshana Zuboff), or "data," about individuals and society at large that are historically unprecedented. All the while, the individuals whose data is collected have virtually no access to this proprietary knowledge. Worse still, this knowledge gives surveillance capitalists (or rather their machine-learning algorithms) the ability to predict and shape people’s behavior surreptitiously, without their conscious awareness. There is currently no real accountability or regulatory framework governing the trading of this knowledge. These companies wield an uncontrolled power over our behavior that threatens to consume human agency and subjugate organized society in the pursuit of capital.

The Pandemic: An Opportunity

In the midst of the SARS-CoV-2 global pandemic, all of these problems are being exacerbated, as these companies have seized the opportunity to expand their data collection into a new domain: medical data. To be sure, the effort to enter the medical field is not new for these companies. Examples include Verily Life Sciences, the research organization of Alphabet (Google’s parent company); Alphabet’s pending acquisition of Fitbit; Apple’s Health app, which promises to aggregate patient health records from multiple institutions alongside the patient-generated data on Apple products; and Amazon’s recent and disturbing announcement of HealthLake, which promises to aggregate and centralize patient data so that corporate customers can more easily analyze and use it. The pandemic, however, has created the opportunity to collect vast amounts of medical data in the name of curbing the impact of the virus. In April, Google and Apple announced they were suspending their rivalry to work with countries on a new mobile contact-tracing API that would alert users when they had been near the device of someone infected with the virus. In June, Verily launched its Healthy at Work program. Programs like these require vast amounts of medical data. That data, once handed over, will likely become just another source of knowledge for companies to use in predicting and shaping our behavior.

A Familiar Narrative: The Greater Good

And yet, there can be no doubt about the potential benefits of allowing these companies access to this data to help countries deal with the pandemic. The ability, for instance, to use location and medical data to support contact-tracing programs, social mobilization campaigns, apps for health promotion and communication with the public about the virus, and evaluations of public health interventions could ultimately be lifesaving.

But do the potential benefits of having these companies participate in the COVID-19 response as they are doing currently justify the furtherance of surveillance capitalism? Two types of arguments are often made for why these companies should be given access to health data to help with the pandemic response effort, despite the threat of surveillance capitalism. The first is that the companies are already collecting exhaustive amounts of data anyway. Rather than using the data for surveillance capitalism, these companies will help governments or their medical services contain an extremely dangerous virus. The second argument is that desperate times call for desperate measures. Of course, one could still have serious objections to their collecting this data in the first place, but if it means that lives could be saved, why should we not involve these companies and let them use these technologies for the greater good? Arguments like these may seem persuasive, but they require the untenable assumption that the various surveillance capitalist companies can be trusted to do this work with public health as their actual objective. While the technologies—as well as the data these companies have at their disposal—could prove useful, we have no way of controlling or regulating how all of this data gets used. Inevitably, the bulk of our medical data would become just another commodity and source of knowledge for surveillance capitalists: yet another intrusive data point for machine-learning algorithms to predict our behaviors, monitor us, advertise to us, capture our attention, and shape our beliefs. Without regulations limiting how behavioral data is used, there is currently no mechanism through which companies could be compelled to use this medical data only for COVID-19 related research.
Given this lack of regulation, the pandemic presents the perfect opportunity for companies to fast track the process of accessing huge swaths of new data that they can use to ultimately deprive us further of our agency.

Ground Rules

As a society in the midst of this pandemic, we stand at the precipice of what could be the next phase in obliterating human freedom and autonomy. This does not mean we should not embrace the technologies available to us and the myriad benefits they may confer on humanity. We must, however, tread carefully. The free, unrestricted use of our personal information, behavioral data, and medical data by surveillance capitalists allows their algorithms to mold our behaviors in ways that we cannot fully comprehend. Before we begin considering how all of this data could be useful, we need to set the ground rules for how it can be used and who can use it. In doing so, the focus must be on democratizing this form of knowledge, to combat the immense imbalance that a monopoly on it grants. The pandemic has made clear just how urgently countries need to decide the limits on what can be done with data and the extent to which it should be anonymized. Only once regulations are developed can we begin re-contextualizing these technologies and their uses without compromising our freedom and privacy.

Set the scene you do, very well. But having opened with a broad general argument, your movement to the particulars of the data-miners' response to the epidemic is too abrupt. The draft would benefit from having that transition occur in a more articulated manner.

It would be better to put the epidemic-related activities in their larger context. With respect only to Google, for example, you should point out that health information is already an entire letter of the Alphabet. A little more clarity about what is already going on under the V would set the scene better in another way. Instead of one item, about the UK NHS, the essay should have command of a broader view of activities of the companies. Again, with respect only to Google, a collation of its own statements is a good place to start.

COMMENT ON DRAFT 2: I have taken your words to heart and tried to paint a broader picture of the issue. Rather than explaining Verily's particular efforts since 2015, however, I gave a brief overview of some of the efforts from multiple companies. I am a little bit torn by this choice as I feel like the paper would perhaps benefit from a more systematic explanation of the progression of Google/Verily, since it perfectly exemplifies everything mentioned in this paper. Unfortunately, I ultimately could not find a way to pay lip service to all of the big tech companies, while also including this within the word limit. I would love your response to this so that I can continue polishing this even after January 8th.

