Computers, Privacy & the Constitution

The Tragedy of the ML Commons

draft two

-- By KoljaVerhage - 5 May 2021

Principles of AI Governance

Over the past ten years we have seen an acceleration in the impact of machine learning algorithms on human civilization. This has led companies and governments across the world to think about how to govern these algorithms and control their impact on society. But the motivations of these actors, and the ways in which they want to use the algorithms, vary greatly. We can draw three major distinctions. First, there is the Chinese way, which I'll call "monarchism with machine learning". Put simply, the state's understanding of control, when it comes to machine learning, is to perfect the technology's ability to reinforce state power and its control over society. The second group of actors, multinational companies, falls into the category of "surveillance capitalists". Their aim is primarily to maximize shareholder value by collecting as much behavioral data as they can and commodifying it in order to sell physical or software products. Finally, there is the group I'll call the digital democracies. These actors have a genuine interest in protecting human values such as freedom of thought and expression, privacy, and autonomy.

The Tragedy of the ML Commons

The first step towards "AI governance" has been for organizations within these three groups to publish abstract, high-level principles of what they consider ethical or trustworthy artificial intelligence. Over the past five years, hundreds of organizations have published such principles. In what can only be described as an act of irony, even the Beijing Academy of Artificial Intelligence (BAAI) has released a set of principles in support of privacy, freedom, and the like. Yet despite all these publicized principles, there has been little progress, between or within groups, towards agreement on how to operationalize any of them into actual policies or technical standards. The fact that countries that generally disagree on just about everything show striking similarities in their principles should be evidence enough of the principles' vacuity. This shallowness persists largely because diverging interests create strong incentives for non-cooperation: between the digital democracies and the Chinese state, but also among the digital democracies themselves. The proposals to operationalize the principles often lack any effective mechanism to enforce their normative claims. This situation constitutes a social dilemma: no one has an individual incentive to cooperate, even though mutual cooperation would lead to the best outcome for all. If the current misuse of machine learning technologies continues and becomes the long-term status quo, it will destroy public trust (thereby erasing the technology's potential to improve the human condition) and, more importantly, impair civil liberties across the world for generations to come. Given these diverging interests, it is no surprise that reaching agreement is difficult. The important point is that even digital democracies with similar interests have been unable to come to any comprehensive agreement.
All the while, the Chinese state and multinational companies have been plundering the world's pastures. This failure should come as no surprise to game theorists. It constitutes a tragedy of the ML commons.

              Cooperate   Defect
  Cooperate   4, 4        -2, 6
  Defect      6, -2        0, 0
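The payoff structure above is the classic prisoner's dilemma. A minimal sketch (using only the payoff values from the matrix above; the function name is my own) shows why defection dominates even though mutual cooperation is better for everyone:

```python
# Payoff matrix from the table above: (row player, column player)
# Strategies: "C" = cooperate, "D" = defect
payoffs = {
    ("C", "C"): (4, 4),
    ("C", "D"): (-2, 6),
    ("D", "C"): (6, -2),
    ("D", "D"): (0, 0),
}

def best_response(opponent):
    """Row player's best reply to a fixed opponent strategy."""
    return max(["C", "D"], key=lambda s: payoffs[(s, opponent)][0])

# Whatever the other side does, defecting pays more for you:
assert best_response("C") == "D"   # 6 > 4
assert best_response("D") == "D"   # 0 > -2
# ...yet mutual defection (0, 0) leaves both worse off than
# mutual cooperation (4, 4) — the tragedy of the commons.
```

Mutual defection is the unique Nash equilibrium, which is why, absent enforcement, each actor keeps grazing the commons.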

Rising Waters

Using the analogy of environmental degradation, and the difficulty of agreeing on any international norms to prevent it, we start to see how the lack of action on machine learning is leading to an acidification of human freedoms. As Columbia professor Scott Barrett has observed regarding environmental policy, the ability of countries to organize to avoid catastrophe depends critically on the uncertainty surrounding the tipping point for catastrophic change. When this uncertainty is large, as it is with climate change and algorithmic governance, collective action requires enforcement of a cooperative agreement; the difficulty of formulating such an agreement brings us back to our tragedy. When this uncertainty is small, by contrast, the catastrophic threat transforms from a problem of cooperation into a problem of coordination. Once that stage is reached, a solution is usually not far off, as humans are quite good at working together when the water is at the dyke.

Green Pastures

As we carefully proceed towards thinking about solutions there are a few general lessons that the history of climate cooperation has taught us about getting out of collective action problems. First of all, it is important that we accurately define both the risks and the consequences of non-cooperation. At the very least this will help conceptualize our tipping point to catastrophe. Secondly, the idea of combining many abstract proposals into one may undermine their prospects for success. Before getting to a proposal, we must figure out the dimensions along which disagreement exists and work on getting a better understanding of the interests of the individual digital democracies. Finally, lowering the cost of cooperation may increase the likelihood of cooperative success. Creating small, decentralized groups, made up of representatives from the individual countries may help to provide insights into the conditions under which we could expect proposals to be successful.

The Brussels Effect?

The recent proposal by the European Commission on "AI governance" shows that we still have a long way to go. Its lack of operable proposals, vague wording, and weak enforcement shows that European countries are not willing to cede sovereignty on this matter. The proposal underscores its own limitations by failing to define "artificial intelligence" before attempting to regulate it. Unlike climate change, where efforts to turn the problem into one of coordination are well underway, we have hardly begun teaching citizens what the serious risks of algorithms are. Without this education, any hope of transforming the problem into one of coordination remains distant. Alas, Brussels clings to the hope that the "Brussels Effect" will bridge the gap and bring the digital democracies together. But while the EU had a first-mover advantage on data protection with the GDPR, many more actors are working on "AI governance", leading us back to the tragedy and making wholesale adoption of EU rules much less likely.




r4 - 06 May 2021 - 01:09:18 - KoljaVerhage