Computers, Privacy & the Constitution

Letting in the Sunshine: Algorithmic Governance and the Need for Explainable AI

 
-- By JackFurness - 29 Apr 2021
 

Algorithmic Governance

Algorithms have infiltrated nearly every aspect of modern life. Like other actors, courts rely on algorithms for a variety of uses, and the ease and efficiency with which machine learning can be deployed ensures that it will proliferate in the future. While this new era of algorithmic governance—government by computer—offers exciting potential for the creation of new forms of digital rights management, machine learning has the potential to work harms as well.
 
One obvious drawback of algorithmic governance is the opacity of the calculations behind it. With ‘bottom-up’ algorithms in particular, in which a computer program is ‘taught’ a learning rule and then trained on large datasets to develop its own set of rules, often only someone with sophisticated knowledge of the program’s design can understand and interpret the outcome. When judges are asked to adjudicate legal rights and entitlements on the basis of such algorithms, it is therefore essential that there be transparency and clarity throughout the process.
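
To make the opacity point concrete, consider a minimal sketch, using only invented feature names and synthetic data, of how a ‘bottom-up’ system behaves: the program is handed a learning rule and training examples, derives its own internal rules, and can then score a new case without offering any account of why.

```python
# Minimal, hypothetical illustration of a 'bottom-up' algorithm: the program
# is given a learning rule (a random-forest learner) plus training data, and
# derives its own internal decision rules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented case records for illustration: [prior_offenses, age, months_employed]
X = rng.integers(low=0, high=60, size=(500, 3))
y = (X[:, 0] > 5).astype(int)          # stand-in outcome label, purely synthetic

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# The trained model will happily score a new case...
new_case = [[3, 27, 14]]
print("predicted class:", model.predict(new_case)[0])
print("score:", model.predict_proba(new_case)[0])

# ...but its 'rules' are spread across 200 learned decision trees, and the
# output carries no explanation of why the score came out as it did.
print("number of learned trees:", len(model.estimators_))
```

The score is reproducible, but the reasoning is buried inside hundreds of learned structures; that gap is what the rest of this essay is concerned with.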
 
‘Explainable AI’ (xAI) offers a solution to these problems, one that would enable judges and litigants better to understand the algorithmic process and ensure that future legislators are able to define and evaluate algorithmic governance more accurately. A better understanding of these algorithms will lead to better rules governing their use.
 
The Transparency Problem

“Sunlight,” the saying goes, “is said to be the best of disinfectants.” Transparency is an essential component of ordered government and fair and open adjudication. Algorithms, when shielded by claims of proprietorship, trade secret, or privilege, can lead to oppression, particularly in the judicial context. Without transparency there can be no accountability.
 
The Growing Use of Algorithms in Governance

 
While most judicial applications of machine learning are not outcome-determinative, the potential for fully automated adjudication is not entirely remote. For example, local governments can easily use a high-speed camera and a simple algorithm to detect and ticket speeding drivers without any human input. In this example, the algorithmic process plays only a minor role; a litigant wishing to challenge his ticket would have no need to know how the camera calculated his speed. But suppose that the judge presiding over traffic court is instead a computer that automatically determines culpability on the basis of the offender’s digital profile. It is easy to see how a system like this could be constitutionally problematic.
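
The speed-camera example can be made concrete with a short, purely hypothetical sketch; the sensor spacing, speed limit, and fine schedule below are all invented for illustration.

```python
# Hypothetical automated speeding-ticket pipeline: sensor readings in,
# citation out, with no human review at any step.
from dataclasses import dataclass
from typing import Optional

SENSOR_GAP_METERS = 50.0   # distance between the two road sensors (assumed)
SPEED_LIMIT_KPH = 60.0     # posted limit (assumed)
FINE = 120.00              # flat fine, invented for the sketch

@dataclass
class Citation:
    plate: str
    measured_kph: float
    fine: float

def check_vehicle(plate: str, t_first: float, t_second: float) -> Optional[Citation]:
    """Compute speed from two sensor timestamps (in seconds) and issue a
    citation automatically if the limit is exceeded, with no human review."""
    elapsed = t_second - t_first
    if elapsed <= 0:
        return None                                  # bad sensor reading; skip
    kph = (SENSOR_GAP_METERS / elapsed) * 3.6
    if kph > SPEED_LIMIT_KPH:
        return Citation(plate, round(kph, 1), FINE)
    return None

print(check_vehicle("ABC-1234", t_first=0.0, t_second=2.4))  # ~75 km/h: ticketed
print(check_vehicle("XYZ-9876", t_first=0.0, t_second=3.6))  # ~50 km/h: None
```

Here the calculation is trivial and verifiable; the constitutional difficulty begins when the inputs are not physical measurements but a profile of the person being judged.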
 
Machine Learning in the Criminal Context

 
Authorities routinely use computer programs in the criminal justice system. For example, judges use algorithms to predict an individual’s likelihood of recidivism and to determine whether to set or withhold bail. But these algorithms are only as good as the programmers who create them, and it is easy for flaws such as racial and economic bias to work their way into the system. These concerns were evident in State v. Loomis, a recent case in which the Wisconsin Supreme Court upheld the constitutionality of a sentence imposed by a judge who relied on a profiling algorithm that classified the defendant as having a “high risk of recidivism.” In her concurring opinion, Justice Shirley Abrahamson noted that the court’s lack of understanding of the profiling algorithm was a “significant problem,” and the government conceded in an amicus brief that some uses of algorithmic profiling systems would clearly raise due process concerns.
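
To illustrate how such bias can creep in without any explicit reference to race or income, here is a deliberately simplified, entirely synthetic sketch (not any real scoring instrument): a model trained on historical records in which one group was policed more heavily reproduces that disparity in its supposedly neutral scores.

```python
# Synthetic illustration only: a disparity baked into historical training
# data reappears in a model's risk scores, even when group membership is
# never used as an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

group = rng.integers(0, 2, size=n)            # two hypothetical groups, 0 and 1
priors = rng.poisson(lam=1.0 + 1.5 * group)   # group 1 was arrested more often
                                              # (an enforcement artifact, not a
                                              #  difference in behavior)
age = rng.integers(18, 60, size=n)

# The historical "re-arrest" label inherits the same enforcement skew.
rearrest = (rng.random(n) < 0.2 + 0.15 * group).astype(int)

X = np.column_stack([priors, age])            # note: 'group' is NOT a feature
model = LogisticRegression().fit(X, rearrest)

scores = model.predict_proba(X)[:, 1]
print("mean risk score, group 0:", round(scores[group == 0].mean(), 3))
print("mean risk score, group 1:", round(scores[group == 1].mean(), 3))
# Group 1 receives higher scores even though group membership was never an
# input, because prior-arrest counts act as a proxy for it.
```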
 
Letting in the Sunshine Through xAI

 
The primary goal of xAI in the judicial context is to help individuals better understand how a decision is reached, allowing judges and litigants to make better, more informed decisions. When this primary goal is met, secondary goals such as allowing defendants to contest these outcomes and enabling lawmakers better to regulate this activity will follow.
 
In this context, xAI works not by reciting the mathematical calculations of the machine learning algorithm, but by providing relevant information about how the model behaves, expressed through variables that are extrinsic to the algorithm and easier to digest. Two approaches, one centered on the model itself and the other on the subjects of its decisions, can be used in tandem to ensure that judges and criminal defendants have access to a full range of interpretive material when evaluating algorithms.
 
Model-Centered Transparency

 
Model-centered xAI can answer questions about the programmer’s intent, the type of model used, and the parameters chosen in training the system. This approach breaks the algorithm into its constituent parts so that judges and advocates can see and understand what went into making it work. An explainable algorithm could even be taught to generate this evidence itself, making it available to judges and to parties on both sides of the ‘v.’ Doing so may or may not create new legal entitlements, but it would at the very least enhance transparency in the criminal justice system.
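
What a model-centered disclosure might contain can be sketched, under stated assumptions, against a generic scikit-learn classifier with invented feature names: the type of model, the parameters its developers chose, the size of the training set, and the relative weight the model places on each input.

```python
# Hypothetical 'model-centered' disclosure: an explainable system could emit
# a report like this about itself, readable by judges and parties.
from pprint import pprint
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented feature names and synthetic training data, for illustration only.
FEATURES = ["prior_offenses", "age_at_offense", "months_employed"]
rng = np.random.default_rng(2)
X = rng.integers(0, 60, size=(300, 3))
y = rng.integers(0, 2, size=300)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def model_centered_report(model, feature_names, training_rows):
    """Assemble the kinds of facts a court might want about the model itself."""
    params = model.get_params()
    return {
        "model_type": type(model).__name__,
        "training_examples": training_rows,
        "developer_chosen_parameters": {
            "n_estimators": params["n_estimators"],
            "max_depth": params["max_depth"],
        },
        "relative_feature_weights": dict(
            zip(feature_names, model.feature_importances_.round(3))
        ),
    }

pprint(model_centered_report(model, FEATURES, training_rows=len(X)))
```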
 
Subject-Centered Transparency

 
Subject-centered xAI provides the subject of a decision, such as a criminal defendant, with relevant information about other individuals in the system. Automatically generating a list of similarly situated defendants would foster transparency across all facets of the adjudicative process, from plea bargaining to sentencing. Algorithms could also provide counterfactuals that test whether, and to what extent, particular individual characteristics influenced an outcome. Even more than model-centered xAI, this information would be accessible to criminal defendants and would make it easier to determine when a fundamental right has been violated. The result would be greater efficiency in the criminal justice system: defendants would have fewer reasons to appeal decisions that appear fair and reasonable, and judges would be discouraged from taking action that is not.
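
Both ideas can be sketched together, again with wholly synthetic data and invented attributes: given a fitted risk classifier and a pool of past cases, the system lists the most similar prior defendants and then varies one attribute at a time to show which changes would have flipped the predicted outcome.

```python
# Hypothetical 'subject-centered' disclosure: similar past cases plus simple
# one-feature counterfactuals for a single defendant. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

FEATURES = ["prior_offenses", "age_at_offense", "months_employed"]
rng = np.random.default_rng(3)
past_cases = rng.integers(0, 60, size=(400, 3)).astype(float)
outcomes = (past_cases[:, 0] > 10).astype(int)        # synthetic labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_cases, outcomes)

defendant = np.array([[12.0, 24.0, 3.0]])             # invented profile

# 1. Similarly situated defendants: nearest past cases by feature distance.
nn = NearestNeighbors(n_neighbors=5).fit(past_cases)
_, idx = nn.kneighbors(defendant)
print("similar past cases (row indices):", idx[0])
print("their outcomes:", outcomes[idx[0]])

# 2. Counterfactuals: vary one feature at a time and see if the label flips.
baseline = model.predict(defendant)[0]
print("predicted outcome for defendant:", baseline)
for i, name in enumerate(FEATURES):
    for delta in (-5, +5):
        variant = defendant.copy()
        variant[0, i] += delta
        if model.predict(variant)[0] != baseline:
            print(f"changing {name} by {delta:+} would flip the prediction")
```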
 
Transparency is an essential feature of the criminal justice system. With the advent of machine learning algorithms in this space comes a need for better and easier access to information about how they operate. xAI might just be the answer.
 