Law in the Internet Society

Should the criminal justice system be governed by algorithms?

-- By CarlaDULAC - 27 Nov 2019

My concerns about artificial intelligence that claims to predict the future behavior of defendants

In our daily lives, we are frequently subject to predictive algorithms that determine music recommendations, product advertising, university admissions, and job placement. Until recently, I knew of such algorithms only in those contexts. But in the American criminal justice system, predictive algorithms are used to forecast where crimes are most likely to occur, who is most likely to commit a violent crime, who is likely to fail to appear at a court hearing, and who is likely to re-offend at some point in the future. When I heard about this, I was quite surprised: I am French, and such tools do not exist in our judicial system. I did not see the purpose of the predictive algorithms and risk assessment tools used by US police departments and judges. How could machines be "better" or more efficient than humans, since they are made by flawed people?

A few years ago, the Los Angeles Police Department adopted a predictive-policing system called PredPol. Its goal was to direct police efforts more effectively in order to reduce crime. It raised the question of whether such systems merely displace criminal activity from one place to another. Systems that predict court appearance do not always result in incarceration; they can be seen as a tool for courts to bring defendants before them. Their use nonetheless raises concerns about how judges rely on such algorithms.

What would you do if you were a United States judge who has to decide bail for a black man, a first-time offender, accused of a non-violent crime? An algorithm has just told you there is a 100 percent chance he will re-offend. Would you rely on the answer provided by the algorithm, even if your personal judgment leads you to another answer? Is the algorithm's answer more reliable than your opinion? One could argue that the algorithm's answer is based on a 137-question quiz. The questions are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). The problem with relying on such algorithms is their opacity.

How could a judge rely on such a tool when he does not even know how it works? Concerns have been raised about the nature of the inputs and how the algorithm weights them. We all know that human decisions are not always good ones, let alone perfect; they depend on the person deciding, on their opinions, ideas, and background. But at least a human decides, individualizing each case and each person, whereas an algorithm is incapable of doing such a thing.

Do we prefer human or machine bias?

Human decision-making in criminal justice settings is often flawed, and stereotypical arguments and prohibited criteria, such as race, sexual preference, or ethnic origin, often creep into judgments. The question is whether algorithms can help prevent disproportionate and often arbitrary decisions. One could argue that no bias is desirable and that a computer can offer such an unbiased decision architecture. But algorithms are fed with data that is not clean of social, cultural, and economic circumstances.

Natural language necessarily contains human biases, and training machines on language entails that artificial intelligence will inevitably absorb the existing biases of a given society. Furthermore, judges already carry out risk assessments on a daily basis, for example when weighing the probability of recidivism. This process always draws on human experience, culture, and even biases. Human empathy and other personal qualities are, in fact, kinds of bias that go beyond statistically measurable equality.
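The mechanism by which a statistical model absorbs societal bias can be illustrated with a minimal, entirely hypothetical sketch: a model trained on text does nothing more than reproduce the association patterns of its training corpus, so a skewed corpus yields skewed associations. The toy corpus and the word pairs below are invented for illustration only.

```python
# Toy, hypothetical corpus. A model trained on text like this will
# simply reproduce the association patterns the text contains.
corpus = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a doctor",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "he is a nurse",
]

def cooccurrence(word, context):
    """Count sentences in which `word` and `context` appear together.

    Co-occurrence counts like this are the raw material of statistical
    language models; whatever skew the corpus has, the counts inherit.
    """
    return sum(1 for s in corpus
               if word in s.split() and context in s.split())

# The "learned" association mirrors the skew of the training data:
# "doctor" co-occurs with "he" three times as often as with "she".
print(cooccurrence("doctor", "he"), cooccurrence("doctor", "she"))
```

Nothing in the counting procedure is itself prejudiced; the bias enters entirely through the data, which is the essay's point.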

Should not such personal traits, skills, and emotional intelligence be actively supported in judicial settings? I think human traits are essential to the justice process. Freedom from bias is not attainable, whether decisions are made by humans or by algorithms. What matters is fairness, a test the COMPAS system failed, since its predictions proved racially biased.
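The kind of unfairness found in COMPAS can coexist with apparently good performance: a tool can be equally accurate for two groups while making very different kinds of errors for each. The sketch below uses invented numbers, not COMPAS data, to show how identical overall accuracy can hide a much higher false positive rate (people wrongly labeled high-risk) in one group.

```python
# Hypothetical data, invented for illustration. Each record is
# (predicted_high_risk, actually_re_offended), with 1 = yes, 0 = no.
group_a = [(1, 1)] * 40 + [(1, 0)] * 20 + [(0, 1)] * 10 + [(0, 0)] * 30
group_b = [(1, 1)] * 40 + [(1, 0)] * 5 + [(0, 1)] * 25 + [(0, 0)] * 30

def accuracy(records):
    """Fraction of predictions that matched the actual outcome."""
    return sum(pred == actual for pred, actual in records) / len(records)

def false_positive_rate(records):
    """Among people who did NOT re-offend, fraction labeled high-risk."""
    predictions_for_non_reoffenders = [
        pred for pred, actual in records if actual == 0
    ]
    return (sum(predictions_for_non_reoffenders)
            / len(predictions_for_non_reoffenders))

# Both groups: 70% accuracy. But group A's non-re-offenders are wrongly
# flagged high-risk at 40%, against roughly 14% for group B.
print(accuracy(group_a), accuracy(group_b))
print(false_positive_rate(group_a), false_positive_rate(group_b))
```

A defender of the tool can truthfully say it is "equally accurate" for both groups, while members of group A who would never re-offend bear far more of the wrongful high-risk labels; this is why a single accuracy figure cannot settle the fairness question.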

Towards a computerized justice: what about due process?

Let's focus on another algorithm, created by researchers from the universities of Sheffield and Pennsylvania, which is capable of guessing court decisions from the parties' arguments and the relevant positive law. Its results, published in October 2016, were outstanding: the algorithm reached the same conclusion as the human judge eight times out of ten. The roughly 21 percent margin of error does not necessarily mean the robot judge was wrong, but rather that it stated the law differently than a human judge would.

Such results could encourage us to think that the algorithm-judge is the future of predicting judicial decisions. Such algorithms, if perfectly reliable ones could be built, would let us obtain the decision most conformable to positive law. They would be a guarantee of legal certainty and would allow judicial decisions to be standardized. A computerized justice system could help judges focus on complicated cases, relieve congested courts, and make justice faster. Yet even granting all these possible advantages, can justice reasonably be reduced to an implacable mathematical logic? I do not think so: it depends on the facts and on the defendants. We cannot rely on an algorithm, and we should not. Algorithms will never be able to replace humans, because they lack the human ability to individualize: they cannot see each person as unique.

Risk assessment algorithms in sentencing are also dangerous for due process. The Loomis case challenged the use of COMPAS as a violation of the defendant's due process rights. The defendant argued that the risk assessment score violated his constitutional right to a fair trial: his right to be sentenced based on accurate information and his right to an individualized sentence. The court held that Loomis's challenge did not clear the constitutional hurdles. When an algorithm is used to produce a risk assessment, the due process question should be whether and how it was used. The Wisconsin Supreme Court stated that the sentencing court "would have imposed the exact same sentence without it."

This reasoning raises concerns: if the use of a risk assessment tool at sentencing is appropriate only when the same sentencing decision would be reached without it, then the risk assessment effectively plays no role in probation or sentencing decisions at all.

You might want to go back and look at Jerome Frank's 1949 argument in Courts on Trial about why computational judging is impossible. He was right then and nothing has changed, even though there were only a couple of digital computers in the world and no software to speak of when he wrote. But as I said, disproving the utility of algorithms for judging is trivial anyway: that's not what we need computer programs for in a justice system. So it would be better either to figure out what computational improvements we can make in the justice system and whether they are worth it, or look for other sources of injustice in the justice system, out of which we are unlikely to run, because—as you say—justice is a human project and we are very fallible.


