Computers, Privacy & the Constitution

The Tragedy of the ML Commons

draft two
 
-- By KoljaVerhage - 5 May 2021
 

Principles of AI Governance

Over the past ten years we've seen an acceleration in the impact of machine learning algorithms on human civilization. This has led both companies and governments across the world to think about how to govern these algorithms and control their impact on society. But the motivations of these actors, and the ways in which they want to use the algorithms, vary greatly. We can distinguish three major groups. First, there is the Chinese approach, which I'll call "monarchism with machine learning". Put simply, the Chinese state's understanding of control, when it comes to machine learning, is to perfect its ability to reinforce state power and its grip on society. The second group of actors, multinational companies, falls into the category of "surveillance capitalists". Their aim is primarily to maximize shareholder value by collecting as much behavioral data as they can and commodifying it in order to sell either physical or software products. Finally, there is the group I'll call the digital democracies. These actors have a genuine interest in protecting human values such as freedom of thought and expression, privacy, and autonomy.
 

The Tragedy of the ML Commons

The first step towards "AI Governance" has been for organizations within these three groups to publish abstract, high-level principles of what they consider ethical or trustworthy artificial intelligence. Over the past five years, hundreds of organizations have published such principles. In an ironic twist, even the Beijing Academy of Artificial Intelligence (BAAI) has released a set of principles in support of privacy, freedom, and the like. Yet despite all these publicized principles, there has been little progress towards any agreement, between or within groups, on how to operationalize them into actual policies or technical standards. That countries which generally disagree on just about everything produce strikingly similar principles should be evidence enough of their vacuity.
This shallowness persists largely because diverging interests create strong incentives for non-cooperation. That is the case between the digital democracies and the Chinese state, but also among the digital democracies themselves. Proposals to operationalize the principles often lack any effective mechanism to enforce their normative claims. The situation constitutes a social dilemma: no one has an individual incentive to cooperate, even though mutual cooperation would lead to the best outcome for all. If the current misuse of machine learning technologies continues and becomes the long-term status quo, it will destroy public trust (thereby erasing any potential the technology has to improve the human condition) and, more importantly, erode civil liberties across the world for generations to come. Given the divergence of interests among the digital democracies, it is no surprise that reaching agreement is difficult. The important point is that even digital democracies with similar interests have been unable to come to any comprehensive agreement. All the while, the Chinese state and the multinationals have been plundering the world's pastures. This failure should come as no surprise to game theorists. It constitutes a tragedy of the ML commons.
 
             Cooperate    Defect
  Cooperate    4, 4       -2, 6
  Defect       6, -2       0, 0
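The table is, of course, a prisoner's dilemma. As a minimal sketch of why the principles-without-enforcement equilibrium is so sticky (the payoffs come straight from the table above; the helper function and its naming are my own), the following Python snippet checks each strategy pair and confirms that mutual defection is the only Nash equilibrium, even though mutual cooperation pays both players more:

    from itertools import product

    ACTIONS = ("Cooperate", "Defect")

    # Row player's payoffs, copied from the table above. The game is
    # symmetric, so the column player's payoff at (row, col) is
    # PAYOFF[(col, row)].
    PAYOFF = {
        ("Cooperate", "Cooperate"):  4,
        ("Cooperate", "Defect"):    -2,
        ("Defect",    "Cooperate"):  6,
        ("Defect",    "Defect"):     0,
    }

    def is_nash(row, col):
        """True if neither player gains by unilaterally switching actions."""
        row_ok = all(PAYOFF[(row, col)] >= PAYOFF[(alt, col)] for alt in ACTIONS)
        col_ok = all(PAYOFF[(col, row)] >= PAYOFF[(alt, row)] for alt in ACTIONS)
        return row_ok and col_ok

    for row, col in product(ACTIONS, repeat=2):
        if is_nash(row, col):
            print(row, col)  # prints only: Defect Defect

Only (Defect, Defect) survives the check, although (Cooperate, Cooperate) Pareto-dominates it: the tragedy in miniature.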
Rising Waters

Through the analogy of environmental degradation, and the difficulty of agreeing on any international norms to arrest it, we start to see how the lack of action on machine learning is leading to an acidification of human freedoms. As Columbia Professor Scott Barrett has observed, when it comes to environmental policy, the ability of countries to organize to avoid catastrophe depends critically on uncertainty about the tipping point for catastrophic change. When that uncertainty is large, as it is with both climate change and algorithmic governance, collective action requires an enforceable cooperative agreement, and the difficulty of formulating one brings us back to our tragedy. When the uncertainty is small, by contrast, the catastrophic threat transforms from a problem of cooperation into a problem of coordination. Once that stage is reached, a solution is usually not far off: humans are quite good at working together when the water is at the dyke.
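Barrett's point about uncertainty can be made concrete with a toy threshold game. The sketch below is mine, not Barrett's model, and every number in it is hypothetical: each unit of abatement costs 2, crossing the tipping point costs every actor 10, and we compare a threshold known with certainty against one drawn uniformly from a range:

    # All numbers are hypothetical, chosen only to illustrate the mechanism.
    DAMAGE = 10.0  # loss each actor suffers if the tipping point is crossed
    COST = 2.0     # cost to an actor of contributing one unit of abatement

    def p_safe(total, thresholds):
        """Probability the tipping point is avoided at `total` units of
        abatement, when the true threshold is uniform over `thresholds`."""
        return sum(total >= t for t in thresholds) / len(thresholds)

    def marginal_benefit(others, thresholds):
        """Expected damage avoided by contributing one more unit."""
        return DAMAGE * (p_safe(others + 1, thresholds) - p_safe(others, thresholds))

    # Threshold known with certainty to be 5 units: the fifth contributor
    # is pivotal, and the benefit of contributing dwarfs the cost.
    print(marginal_benefit(4, [5]))           # 10.0 > COST: coordinate

    # Threshold uncertain, uniform over 1..10: one extra unit buys only a
    # tenth of the safety, and free-riding becomes the best response.
    print(marginal_benefit(4, range(1, 11)))  # 1.0 < COST: defect

With a known threshold, the pivotal contributor's expected benefit (10.0) dwarfs the cost, so contributing is a best response and the game is one of coordination; with a wide uncertainty band, the expected marginal benefit (1.0) falls below the cost and free-riding dominates, returning us to the dilemma.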
Green Pastures

As we proceed carefully towards solutions, the history of climate cooperation offers a few general lessons about escaping collective action problems. First, it is important to define accurately both the risks and the consequences of non-cooperation; at the very least, this will help us conceptualize the tipping point to catastrophe. Second, combining many abstract proposals into one may undermine their prospects for success. Before settling on a proposal, we must identify the dimensions along which disagreement exists and develop a better understanding of the interests of the individual digital democracies. Finally, lowering the cost of cooperation may increase the likelihood of cooperative success. Small, decentralized groups made up of representatives from the individual countries may help reveal the conditions under which proposals could be expected to succeed.

The Brussels Effect?

The recent proposal by the European Commission on "AI governance" shows that we still have a long way to go. Its lack of operable proposals, its vague wording, and its weak enforcement mechanisms show that European countries are not willing to cede sovereignty on this matter. The proposal underscores its own limitations by failing to define "artificial intelligence" before attempting to regulate it. Unlike climate change, where efforts to turn the problem into one of coordination are well underway, here we have hardly begun teaching citizens what the serious risks are. Without this education, any hope of transforming the problem of algorithms into one of coordination seems ever more distant. Alas, Brussels clings to the hope that the "Brussels Effect" will bridge the gap and bring the digital democracies together. But while the EU had a first-mover advantage on data protection with the GDPR, many more actors are working on "AI governance", which leads us back to the tragedy and makes the wholesale adoption of EU rules much less likely.
 
