Law in the Internet Society
The Problem with RAIs

Increasingly implemented but persistently misunderstood, risk assessment instruments (RAIs) have become a common presence in the criminal legal process: judges in all fifty states use them in decisions that set bail amounts, assess flight risk, and even determine sentence lengths for criminal defendants. A major problem with this software, however, is a fundamental misunderstanding by courts and legislators alike of how these algorithmic tools actually work. From legislation to court decisions on the topic, there persists a misconception that an RAI acts as an infallible legal arbiter, capable of rendering more precise decisions from the data at its disposal. The evidence suggests otherwise. Last August, the Journal of Criminal Justice published a study of the validation techniques used by private companies to measure both the accuracy and the risk of bias of nine of the most popular RAIs in use nationwide. Evaluating the efficacy measurements reported for each of the nine tools, the study concluded that the “extent and quality of the evidence in support of the tools” was typically “poor.”

As Shoshana Zuboff argues in her book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, the illusion of accuracy created by decision-support software causes avoidable public harm, and that harm is distributed unequally across class and race. The courts’ misunderstanding of RAIs is a fundamental example of Zuboff’s argument. Take, for example, the courts’ treatment of legal challenges to RAIs. Although fairly new, risk assessment instruments have already faced accusations of opacity, discrimination, and potentially rights-infringing practices. Most notable is State v. Loomis, a 2016 discrimination and due process challenge to the technology before the Wisconsin Supreme Court.

The case arose from the early 2013 arrest of Wisconsin resident Eric Loomis, who, after pleading guilty to two charges in connection with a drive-by shooting, was sentenced to six years in prison based in large part on the findings of COMPAS, the risk assessment tool used to evaluate him. Loomis moved for post-conviction relief on the ground that, because the source code behind COMPAS’s risk assessment is a trade secret and therefore unknowable to the defendant, he could not properly challenge the evidence against him, in violation of his due process rights to be sentenced on accurate information, to know the evidence against him, and to receive an individualized sentence. Loomis additionally argued that the trial court violated his due process rights by allowing the RAI to consider gender in its methodology, thereby introducing an impermissible consideration of gender into sentencing. The trial court denied the motion, and the Wisconsin Court of Appeals certified the case to the Wisconsin Supreme Court, which affirmed, rejecting Loomis’s arguments on several grounds. Most important here was the court’s holding that the algorithm’s use of gender served a nondiscriminatory purpose, namely accuracy.

Here it is clear that the court has fallen for the software’s “illusion of precision”: it signals a willingness to permit discrimination against private citizens in order to preserve what it believes to be a more accurate, more precise decisionmaker. The Wisconsin Supreme Court thus errs in precisely the way that Zuboff predicts. Avoidable discrimination on the basis of gender, race, and socioeconomic status is, and will continue to be, enforced so long as there remains a fundamental misunderstanding of what AI is, what it entails, and how decision-support software like RAIs differs from artificial intelligence.

I think a better draft would be more than 640 words long. Almost a quarter of the draft is spent laying out the procedural history of a case you want to discuss, but whose specific holding you compress into half a sentence, and whose reasoning, of which you are dismissive, you do not analyze or quote. Zuboff, too, is mentioned, but not actually cited, quoted or discussed. Any statistical argument about the real world can be harmed by excess precision, and "computer says no" reasoning has been the hallmark of bureaucracy (with or without computers) since Mesopotamia, at least. And to say that anyone in particular "predicts" that the incidence of power will tend to disfavor those who are already disfavored (Xs in a world of Ys, whatever X and Y may be) is rather like awarding credit for predicting sunrise.

Clearer argument would therefore help, too. Can we create systems for assessing risk of future criminal behavior, or is such an effort inherently impossible, such that even if the law requires our public agencies to do so they should refuse to make such systems? If they are possible, among the flaws from which they can be expected to suffer are inaccuracy and bias. Both inaccuracy (for example, forecasting 50% chance of rain 50% of the time when it only rains 10% of the time in fact), and bias (forecasting rain twice as often on Mondays as on Fridays) can exist without making weather forecasts a bad idea. We can also suppose that there are situations in which measures to improve accuracy will also increase bias, or in more general terms, that improvement doesn't always mean achievement of the optimal. So some principles should emerge from the inquiry. If "due process" and "equal protection" are the labels we apply to those principles, what should be in each jar?
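The weather-forecasting analogy above can be made concrete. The following sketch is illustrative only: the rain rates and the day-dependent forecast frequencies are invented for the example, not drawn from any real forecaster or RAI. It simulates one forecaster that is miscalibrated (always says 50% when it rains 10% of the time) and one that is biased (predicts rain twice as often on Mondays as on Fridays, though the weather does not differ by day):

```python
import random

random.seed(0)
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]
TRUE_RAIN_RATE = 0.10  # assumed: it actually rains 10% of the time, on every day alike

# Assumed day-dependent forecast frequencies for the "biased" forecaster:
# it predicts rain on 20% of Mondays but only 10% of Fridays.
forecast_rate = {"Mon": 0.20, "Tue": 0.15, "Wed": 0.15, "Thu": 0.12, "Fri": 0.10}

n = 100_000
predicted, observed = [], []
monday_preds = friday_preds = 0
for i in range(n):
    day = DAYS[i % 5]
    rained = random.random() < TRUE_RAIN_RATE
    predicted.append(0.50)  # miscalibrated forecaster: always "50% chance of rain"
    observed.append(rained)
    if random.random() < forecast_rate[day]:  # biased forecaster's yes/no call
        if day == "Mon":
            monday_preds += 1
        elif day == "Fri":
            friday_preds += 1

# Calibration gap: average stated probability minus observed frequency.
calibration_gap = sum(predicted) / n - sum(observed) / n
print(f"calibration gap: {calibration_gap:.2f}")  # ~0.40 (says 50%, rains ~10%)
print(f"Mon/Fri forecast ratio: {monday_preds / friday_preds:.1f}")  # ~2.0
```

Both defects are measurable, and neither, by itself, makes forecasting useless; the question for RAIs is which measured gaps are tolerable and which are not.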



r4 - 21 Jan 2024 - 17:33:21 - EbenMoglen