Computers, Privacy & the Constitution

Letting in the Sunshine: Algorithmic Governance and the Need for Explainable AI

-- By JackFurness - 29 Apr 2021

Algorithmic Governance

Algorithms have infiltrated nearly every aspect of modern life. Like many other institutions, courts rely on algorithms for a variety of tasks, and the ease and efficiency with which machine learning can be deployed ensures that it will proliferate in the future. While this new era of algorithmic governance (government by computer) offers exciting potential for the creation of new forms of digital rights management, machine learning has the potential to work harm as well.

One obvious drawback of algorithmic governance is the opacity of the underlying calculations. ‘Bottom-up’ algorithms, in which a computer program is given a learning rule and then trained on large datasets in order to develop its own set of decision rules, compound the problem: often only someone with sophisticated knowledge of the program’s design can understand and interpret the outcome. When judges are asked to adjudicate legal rights and entitlements on the basis of such algorithms, it is therefore important to ensure that there is transparency and clarity throughout the process.
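What this looks like in practice can be shown with a deliberately simplified Python sketch: the programmer supplies only a learning rule (here, decision-tree induction) and a dataset, and the decision rules emerge from the data. Every feature and data point below is invented for illustration.

    # A minimal sketch of a 'bottom-up' algorithm. All data hypothetical.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical training records: [prior_offenses, age] -> reoffended?
    X = [[0, 45], [5, 22], [1, 37], [7, 19], [0, 60], [3, 28]]
    y = [0, 1, 0, 1, 0, 1]

    # The programmer writes no decision rules; the tree derives its own
    # from the training data.
    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The learned rules are legible only to someone who can inspect the
    # trained model, hence the opacity described above.
    print(export_text(model, feature_names=["prior_offenses", "age"]))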

‘Explainable AI’ (xAI) offers a solution to these problems, one that would enable judges and litigants better to understand the algorithmic process and ensure that future legislators are able to define and evaluate algorithmic governance more accurately. A better understanding of these algorithms will lead to better rules governing their use.

The Transparency Problem

“Sunlight,” Justice Brandeis famously observed, “is said to be the best of disinfectants.” Transparency is an essential component of ordered government and of fair and open adjudication. Algorithms, when shielded by claims of proprietorship, trade secret, or privilege, can lead to oppression, particularly in the judicial context. Without transparency there can be no accountability.

The Growing Use of Algorithms in Governance

While most judicial applications of machine learning are not outcome-determinative, the potential for fully automated adjudication is not entirely remote. For example, a local government can easily use a high-speed camera and a simple algorithm to detect and ticket speeding drivers without any human input. In this example the algorithmic process plays only a minor role; a litigant wishing to challenge his ticket would have little need to know how the camera calculated his speed. But suppose instead that the judge presiding over traffic court is a computer that automatically determines culpability on the basis of the offender’s digital profile. It is easy to see how such a system could be constitutionally problematic.
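The camera half of this example is simple enough to sketch concretely. Nothing below reflects any actual enforcement system; the distance, speed limit, and names are assumptions chosen for illustration.

    # Hypothetical sketch of fully automated speeding enforcement: two
    # timestamped sensor readings over a known distance yield a speed,
    # and a ticket issues mechanically if the limit is exceeded.
    DISTANCE_METERS = 50.0    # fixed distance between the two sensors
    SPEED_LIMIT_KPH = 60.0    # posted limit (assumed for illustration)

    def measured_speed_kph(t_first: float, t_second: float) -> float:
        """Average speed over the fixed interval, in km/h."""
        return (DISTANCE_METERS / (t_second - t_first)) * 3.6

    def should_ticket(t_first: float, t_second: float) -> bool:
        return measured_speed_kph(t_first, t_second) > SPEED_LIMIT_KPH

    # A driver covering 50 m in 2.4 s is travelling 75 km/h.
    print(should_ticket(0.0, 2.4))  # True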

Machine Learning in the Criminal Context

Authorities routinely use computer programs in the criminal justice system. For example, judges use algorithms to predict an individual’s likelihood of recidivism and to determine whether to set or withhold bail. But these algorithms are only as good as the programmers who create them and the data on which they are trained, and it is easy for flaws like racial and economic bias to work their way into the system. These concerns were evident in State v. Loomis, a recent case in which the Wisconsin Supreme Court upheld the constitutionality of a sentence imposed by a judge who relied on a profiling algorithm that classified the defendant as having a “high risk of recidivism.” In her concurring opinion, Justice Shirley Abrahamson noted that the court’s lack of understanding of the profiling algorithm was a “significant problem” for the court, and the government conceded in an amicus brief that some uses of algorithmic profiling systems would clearly raise due process concerns.
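A toy sketch may help show how such bias enters the system. This is emphatically not the method of the tool at issue in Loomis or of any deployed system; it is a hypothetical logistic-regression model on invented data, offered only to show that a score presented to a court as a neutral number inherits whatever patterns, biased or not, the historical data contains.

    # Hypothetical risk-score model. NOT any real tool's actual method.
    from sklearn.linear_model import LogisticRegression

    # Invented historical records: [prior_arrests, age, employed(0/1)]
    X = [[0, 40, 1], [6, 23, 0], [2, 31, 1],
         [8, 20, 0], [1, 55, 1], [4, 27, 0]]
    y = [0, 1, 0, 1, 0, 1]    # 1 = rearrested within two years

    model = LogisticRegression().fit(X, y)

    # If the arrest records themselves reflect biased policing, that
    # bias is now baked into every score the court sees.
    defendant = [[3, 26, 0]]
    print(f"risk score: {model.predict_proba(defendant)[0][1]:.0%}")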

Letting in the Sunshine Through xAI

The primary goal of xAI in the judicial context is to help individuals better understand how a decision is reached, allowing judges and litigants to make better, more informed decisions. When this primary goal is met, secondary goals, such as enabling defendants to contest these outcomes and equipping lawmakers better to regulate this activity, will follow.

In this context, xAI works not by reciting the mathematical calculations of the machine learning algorithm, but by providing relevant information about how the model works using independent variables that are extrinsic to the algorithm and easier to digest. Two approaches, one centered on the model itself and the other on the subjects the algorithm evaluates, can be used in tandem to ensure that judges and criminal defendants have access to a full range of interpretive material when evaluating algorithms.
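One illustrative way to generate such digestible, extrinsic explanations, offered here as a possibility rather than a prescription, is permutation importance: perturb each human-readable input in turn and measure how much the model’s accuracy degrades, revealing which facts drove the score without reciting the model’s internal mathematics. The model and features below are hypothetical.

    # Sketch: explaining a model through its inputs, not its math.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    features = ["prior_arrests", "age", "employed"]
    X = np.array([[0, 40, 1], [6, 23, 0], [2, 31, 1],
                  [8, 20, 0], [1, 55, 1], [4, 27, 0]])   # hypothetical
    y = np.array([0, 1, 0, 1, 0, 1])

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=20,
                                    random_state=0)

    # The output is stated in terms a judge or litigant can digest:
    # which characteristics influenced the score, and by how much.
    for name, score in zip(features, result.importances_mean):
        print(f"{name}: {score:.3f}")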

Model-Centered Transparency

Model-centered xAI can answer questions about a programmer’s intent, the type of model used, and the parameters used in training the system. The model-centered approach breaks the algorithm into its constituent parts in a way that allows judges and advocates to see and understand what steps went into making the algorithm work. An explainable algorithm could be taught to generate this evidence itself, making it available to judges and to parties on both sides of the ‘v.’ Doing so may or may not give rise to new legal entitlements, but at a minimum it would enhance transparency in the criminal justice system.
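As a rough sketch of what such self-generated evidence might look like, the hypothetical function below asks a trained model to emit a plain record of its type and training parameters; the fields and wording are assumptions, not a description of any existing system.

    # Hypothetical 'model card' a system could generate as evidence.
    import json
    from sklearn.tree import DecisionTreeClassifier

    def model_card(model, intent: str, data_desc: str) -> str:
        """Summarize the model's constituent parts for judges and counsel."""
        return json.dumps({
            "stated_intent": intent,
            "model_type": type(model).__name__,
            "training_parameters": model.get_params(),
            "training_data": data_desc,
        }, indent=2, default=str)

    model = DecisionTreeClassifier(max_depth=3).fit(
        [[0], [1], [2], [3]], [0, 0, 1, 1])
    print(model_card(model,
                     intent="Estimate likelihood of failure to appear",
                     data_desc="County court records, 2015-2020 (hypothetical)"))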

Subject-Centered Transparency

Subject-centered xAI provides the subject of a decision, such as a criminal defendant, with relevant information about other individuals in the system. Automatically generating a list of similarly situated defendants would foster transparency across all facets of the adjudicative process, from plea bargaining to sentencing. Algorithms could also provide counterfactuals to test whether, and to what extent, individual characteristics influenced a particular outcome. Even more than model-centered xAI, this information would be accessible to criminal defendants and would make it easier to determine when a fundamental right has been violated. The result would be greater efficiency in the criminal justice system: defendants would have fewer reasons to appeal decisions that appear fair and reasonable, and judges would be discouraged from issuing ones that are not.
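Both ideas can be sketched on wholly hypothetical data: a nearest-neighbors query surfaces similarly situated defendants, and a simple counterfactual, changing a single characteristic and re-scoring, tests that characteristic’s influence on the outcome.

    # Sketch of subject-centered xAI: comparables and counterfactuals.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    # Invented defendant records: [prior_arrests, age, employed(0/1)]
    X = np.array([[0, 40, 1], [6, 23, 0], [2, 31, 1],
                  [8, 20, 0], [1, 55, 1], [4, 27, 0]])
    y = np.array([0, 1, 0, 1, 0, 1])
    model = LogisticRegression().fit(X, y)

    defendant = np.array([[3, 26, 0]])

    # Similarly situated defendants: the three closest records.
    _, idx = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(defendant)
    print("comparable cases:", idx[0])

    # Counterfactual: would the score change if the defendant were employed?
    counterfactual = defendant.copy()
    counterfactual[0, 2] = 1
    print("actual:        ", model.predict_proba(defendant)[0][1])
    print("counterfactual:", model.predict_proba(counterfactual)[0][1])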

Transparency is an essential feature of the criminal justice system. With the advent of machine learning algorithms in this space comes a need for better and easier access to information about how they operate. xAI might just be the answer.

