Law in the Internet Society

Your profile defines your future.

The emergence of individual connected devices such as personal computers, smartphones, smartwatches and tablets introduced spies into our lives. The "smart" objects that are supposed to make our lives easier are designed to collect our behavior. We generate more information in two days than humanity did in two million years. Our travels, our purchases, our internet searches, our interests, our friendships, the length of our sleep, our political opinions, our heart rate and the features of our faces are a few examples of the data collected by our new tech-toys/behavioral trackers. In 2010, there was one connected device per human on earth. Today, there are more than six connected devices per person. The "smart" objects that brought the "Internet of Things" revolution track our behavior in return for services supposed to make our lives more convenient.

At the collection stage, our behavioral data is a huge, incoherent, decontextualized mass. Once collected, the data must be processed: data about an individual is compiled in order to define his profile, for instance "white male going to law school at Columbia, Catholic, homosexual, etc." Once you are categorized into a profile, algorithms will predict and influence your behavior on the basis of how you and other people with similar profiles have behaved.

There are multiple (potential) applications of profiling: marketing, surveillance, city management, security, etc. They all have one thing in common: learning about and influencing our behavior. Once an individual's profile is compiled, algorithms are able to guide his behavior through incentives, personalized recommendations, suggestions or alerts. Such influence on our behavior has at least three undesirable consequences: (1) a new non-democratic normativity regime; (2) a new conception of the human; (3) locking people into their profiles.

New non-democratic normativity regime

Because algorithms are capable of influencing people's behavior, they have normative power. Contrary to classic laws and other forms of governmental regulation, algorithmic normativity does not act by way of constraint, but rather makes disobedience to the norm unlikely. For instance, an algorithm can predict which drivers are likely to drink alcohol and, by way of alerts, discourage them from driving. A "smart" car could also potentially refuse to start if it detects that the driver has alcohol in his or her blood.

This new form of normativity is not exercised in the name of shared values, a philosophy or an ideology. Algorithms claim to govern society objectively and efficiently, with the sole aim of optimizing social life as much as possible, without bothering to ask whether the norm is fair, equitable or legitimate. Nor does this algorithmic normativity have any democratic legitimacy, as algorithms are currently mostly deployed by private actors to serve their private monetary interests (at least in Western liberal democracies).

The algorithmic human

The second issue caused by data processing and profiling is of a philosophical nature. The philosophy of the Enlightenment envisages the modern human as a free, responsible individual endowed with reason and free will.

The "algorithmic human", whose behaviors are collected, processed, compiled into a profile and influenced, shares one feature with the free individual conceived by the philosophy of the Enlightenment: a logic of individualization. Insofar as profiling allows the environment to adapt to each profile in all its singularity (for example, the individualization of advertising), the "algorithmic human" is as individualized as the "free human". In every other respect, however, the "algorithmic human" is far from the modern man as conceived by the philosophy of the Enlightenment. He is surveilled. His behaviors are tracked and influenced. Eric Schmidt, CEO of Google from 2001 to 2011, said: "I actually think most people don't want Google to answer their questions. They want Google to tell them what they should be doing next. (...) The technology will be so good it will be very hard for people to watch or consume something that has not in some sense been tailored for them".

The profile prison

The third regrettable consequence of data profiling is the reproduction of class systems. The "free human" should be able to become the person he or she freely chooses to become. Liberal democracies traditionally strive to give each individual the tools necessary to achieve his or her potential and to emancipate himself or herself. Individuals should be free to practice the sport they like, listen to and play the music they like, take an interest in the languages and cultures they admire, etc. Such life choices should ideally be made freely.

The "algorithmic human" does not make these choices freely. The "algorithmic human" is predictable: he will typically behave as he did in the past and as people with similar profiles have behaved before.

If you are a white male studying law at Columbia, you are also likely to vote Democratic, travel to Europe, eat salads for lunch, be interested in wine, play tennis and listen to rock music. If you are an unemployed black male born in Harlem, you will also be likely to vote Democratic, but you will more likely not travel, eat junk food for lunch, drink beer or soda rather than wine, play soccer or video games rather than tennis, and listen to hip-hop instead of rock music. And this is what will be suggested to each of them, as well as to others with similar profiles. The algorithm locks people into their profiles. It makes people become what they were already likely to become and, in this sense, prevents individuals from freely realizing themselves and reproduces existing social patterns.

Begin by telling people what they will get from reading. Starting out with two paragraphs of exposition before we even begin to find out what your contribution is will lose readers you could have kept.

Because you have seen deeply so far, advice on the improvement of the essay involves specifics.

You attribute to "algorithms" the various effects in "guiding" behavior you describe. This error is becoming so cheap and easy that reality will never disturb it. Not only does it distract people from actually thinking about technology, it introduces into policy the idea of "algorithmic transparency," which is a favorite recommendation of tech-adjacent rather than technically expert people.

An algorithm is, strictly speaking, a procedure for making a computation, a computer program stricto sensu. An efficient method for sorting a list or computing the transcendental arccosine of floating-point data is an algorithm.
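A minimal illustration of an algorithm in this strict sense (the Python rendering below is my own, not anything drawn from the essay): a sorting procedure whose entire behavior is visible in its few lines of text, with no state left behind by any data it has previously seen.

```python
def insertion_sort(items):
    """Sort a list in place. Everything this procedure will ever do is
    visible here; it carries no state accumulated from past inputs."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger elements one slot to the right.
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items

print(insertion_sort([3.1, -0.5, 2.7, 0.0]))  # [-0.5, 0.0, 2.7, 3.1]
```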

Generally speaking, the algorithms involved in ad-tech targeting and public-order "nudging" are pattern recognition algorithms. They are simple and general: if you look at them and you're a proficient reader of program code, you can see everything about them very easily and in a short time. Those programs are exposed to "training data." This data trains the program to recognize patterns. Any given set of training data will cause the simple pattern recognition program being trained on it to recognize its relevant patterns in slightly different (in the concrete technical sense, differently biased) ways.

The pattern-recognition program, as it has been trained, is then given a flow of "real" input, on which it is trained in turn, so that it "improves" its ability to find the pre-established patterns as they are modified by the training process. The whole state of the model at any moment is the product of all its previous states. Knowing "the algorithm" is more or less useless in knowing what is actually occurring in the model.
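A hedged sketch of that point, not a description of any real platform's system: the toy online perceptron below is a pattern recognizer whose code fits on a screen, yet what it actually does lives entirely in the weights it has accumulated from the stream of examples fed to it.

```python
class Perceptron:
    """A toy online pattern recognizer. The code is trivially readable;
    the behavior lives in self.weights, the product of every example
    the model has been shown, in the order it was shown."""
    def __init__(self, n_features, lr=0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = lr

    def predict(self, x):
        score = self.bias + sum(w * xi for w, xi in zip(self.weights, x))
        return 1 if score > 0 else 0

    def train_on(self, x, label):
        # Online update: nudge the weights whenever the prediction is wrong.
        error = label - self.predict(x)
        if error:
            self.weights = [w + self.lr * error * xi
                            for w, xi in zip(self.weights, x)]
            self.bias += self.lr * error

# Two copies of the *same* algorithm, trained on different data streams,
# end up as differently biased recognizers.
a, b = Perceptron(2), Perceptron(2)
for x, y in [([1, 0], 1), ([0, 1], 0)] * 20:
    a.train_on(x, y)
for x, y in [([1, 0], 0), ([0, 1], 1)] * 20:
    b.train_on(x, y)
print(a.predict([1, 0]), b.predict([1, 0]))  # 1 0: identical code, opposite behavior
```

Reading the class tells you nothing about how either trained copy will classify a given input; for that you need the data it has seen, in order.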

Think of a spam filter, trying to do an intelligent job filtering your email for you. It begins by training on lots of spam, presumably including the things you too consider spam, so that when you begin receiving email for it to filter, it catches a bunch of spam, not much of which you considered not-spam (called "ham" in hacker jargon). Over time, as you mark spam you didn't want to receive and ham you regret was sidelined in transit, the filter gets better at doing the job for you. (Because "spam" is actually just mail you didn't want and "ham" is actually just stuff you did, this simple Bayesian probability calculator that could be whipped up in an afternoon does a pretty good imitation of an impossible job, namely mirroring your semi-conscious subjective decisions about what email you want to see.)
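A sketch, in the same spirit and assuming nothing about any real mail filter's internals, of the kind of Bayesian probability calculator that could be whipped up in an afternoon: its "judgment" is nothing but the word counts accumulated from the messages you have marked.

```python
import math
from collections import defaultdict

class BayesFilter:
    """A naive Bayes spam filter in miniature. Its judgment is nothing but
    the running word counts accumulated from the user's own markings."""
    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.msg_counts = {"spam": 0, "ham": 0}

    def mark(self, text, label):
        # Each correction by the user becomes part of the model's state.
        self.msg_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def spam_score(self, text):
        # Smoothed log-probability ratio; above zero means "looks like spam".
        score = math.log((self.msg_counts["spam"] + 1) / (self.msg_counts["ham"] + 1))
        for word in text.lower().split():
            p_spam = (self.word_counts["spam"][word] + 1) / (self.msg_counts["spam"] + 2)
            p_ham = (self.word_counts["ham"][word] + 1) / (self.msg_counts["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score

f = BayesFilter()
f.mark("cheap pills buy now", "spam")
f.mark("lunch tomorrow with the reading group", "ham")
print(f.spam_score("buy cheap pills") > 0)      # True: resembles marked spam
print(f.spam_score("reading group lunch") > 0)  # False: resembles marked ham
```

The names and the scoring details here are invented for illustration; the point is only that the procedure is transparent while the filtering it actually performs is whatever the accumulated counts make it.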

So, to be brief about it, it's not the algorithms, it's the data over time and the evolving state of the model. To be even briefer, if less comprehensible, it's the Gestalt of neural networks. Algorithmic transparency is mostly useless. People who are essentially applying our free software philosophy to this problem are unaware of the differences that matter.

Behavior guiding, then, is based not on some "algorithm for guiding people" that we should be studying, but on another simple principle, the basic life-rule of the Parasite With the Mind of God: reinforce patterns that benefit you, and discourage patterns that might benefit the human, but don't benefit you. This is the basic life rule of all parasites. This also leads to reinforcing patterns that benefit the human but also benefit the parasite. That's why parasitism can be evolutionarily advantageous. We have eukaryotes and photosynthesis for this reason, after all.

In this instance, the parasite guides first of all by reinforcing patterns of engagement. Where those patterns of engagement reduce anxiety responses in the human, the patterns reinforced are experienced as "convenient" by the human.

The parasite guides secondarily by reducing patterns that reduce engagement. The negative reinforcement structure is accomplished by transferring anxiety back to the human: this is experienced by the human as FOMO, or fear of social isolation, encouraging negative internal feedback experienced as depression that can be alleviated by re-engaging.

It's only at the tertiary level that the platforms to which humans allow themselves to be connected then guide behavior by presenting particular stimuli known to elicit specific consumptive responses, a process variously described as "advertising," or "campaigning," or "activating." Whether this is democratic depends on the definition of democracy that you do not give. But one might ask whether the inquiry itself rests on a category error or a tautology: both predation and parasitism are processes to which the concept of democracy is not applicable.

Blaming the existence of pattern-matching software for the patterns is also a category error, known colloquially among humans as "shooting the messenger." If human behavior is calculable and can be nudged in these ways, then our Enlightenment account of human-ness is incomplete, or perhaps our account of the Enlightenment and its relation to our existence after the phenomena we call Freud, Lenin, Bernays, Hitler, Skinner, Mao Zedong, Pablo Picasso and the King of the Undead Now Dead is not quite perfect. I knew that there were problems in our conception of free will before the Apple ][ existed, let alone Facebook.

But there is a prison being built. You just don't say anything about how we can walk out from it while it is still unfinished. Now that would be one hell of an essay.
