Law in the Internet Society

CarlaDULACSecondEssay 2 - 06 Dec 2019 - Main.CarlaDULAC

Should the criminal justice system be governed by algorithms?

 -- By CarlaDULAC - 27 Nov 2019

My concerns about artificial intelligence used to predict the future behavior of defendants

In our daily lives, we are frequently subject to predictive algorithms that determine music recommendations, product advertising, university admissions and job placement. Until recently, I only knew of algorithms used for those purposes. But in the American criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to re-offend at some point in the future. When I heard about this, I was quite surprised: I am French, and no such tools exist in our judicial system. I did not see their purpose, or how machines could be “better” than humans. I think I would not rely on such a tool, because the decision belongs to the judge, and machines are flawed since they are made by humans.

And what would you do if you were a United States judge deciding bail for a black man, a first-time offender accused of a non-violent crime? An algorithm has just told you there is a 100 percent chance he will re-offend. With no further context, what do you do? Would you rely on the answer provided by the algorithm, even if your own judgment leads you to a different one? Is the algorithm's answer more reliable than your opinion? One could argue that the algorithm's answer is based on a 137-question quiz. The questions are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). The problem with relying on such algorithms is their opacity.

How could a judge rely on such a tool when he does not even know how it works? Concerns have been raised about the nature of the inputs and how the algorithm weights them. We all know that human decisions are not always good, let alone perfect. They depend on the person deciding: their opinions, ideas, and background. But at least a human decides, individualizing each case and each person, whereas an algorithm is incapable of doing such a thing.

Do we prefer human or machine bias?

Human decision-making in criminal justice settings is often flawed, and stereotypical arguments and prohibited criteria, such as race, sexual preference or ethnic origin, often creep into judgments. The question is whether algorithms can help prevent such disproportionate and often arbitrary decisions. We can argue that no bias is desirable, and that a computer can offer such an unbiased choice architecture. But algorithms are fed with data that is not clean of social, cultural and economic circumstances.

In analyzing language, we can see that natural language necessarily contains human biases. Training machines on language entails that artificial intelligence will inevitably absorb the existing biases of a given society. Furthermore, judges already carry out risk assessments on a daily basis, for example when deciding on the probability of recidivism. This process always draws on human experience, culture, and even biases. Human empathy and other personal qualities are in fact kinds of bias that go beyond statistically measurable equality.

 
Should not such personal traits, skills and emotional intelligence be actively supported in judicial settings? I think human traits are essential to the justice process. Freedom from bias is not attainable, whether decisions come from humans or from algorithms. What matters is fairness, and this is precisely where the COMPAS system failed, since its predictions were racially biased.
 
Towards a computerized justice: what about due process?

 
Let us focus on another algorithm, created by researchers from the universities of Sheffield and Pennsylvania, which is capable of predicting court decisions from the parties' arguments and the relevant positive law. Its results, published in October 2016, were outstanding: the algorithm reached the same decision as the human judge about eight times out of ten. The 21 percent margin of error of the robot judge does not necessarily mean it was wrong, but rather that it stated the law differently than a human judge would.
 
Such results could encourage us to think that the algorithm-judge is the future of predicting judicial decisions. Such algorithms, if perfectly reliable ones could be built, would let us obtain the decision most consistent with positive law. That would be a guarantee of legal certainty and would help standardize judicial decisions. A computerized justice system could let judges focus only on complicated cases, and would help unclog the courts and make justice faster. Yet with all these possible advantages in mind, can justice reasonably be reduced to an implacable mathematical logic? I do not think so: everything depends on the facts and on the defendants. We cannot rely on an algorithm, and we should not. Algorithms will never be able to replace humans, because they lack the human ability to individualize; they cannot see each person as unique.
 
Risk assessment algorithms in sentencing are also dangerous for due process. The Loomis case challenged the use of COMPAS as a violation of the defendant's due process rights. In that case, the defendant argued that the risk assessment score violated his constitutional right to a fair trial: it violated his right to be sentenced based on accurate information, and also his right to an individualized sentence. The court held that Loomis's challenge did not clear the constitutional hurdles. When an algorithm is used to produce a risk assessment, the due process question should be whether and how it was used. The Wisconsin Supreme Court stated that the sentencing court “would have imposed the exact same sentence without it.”
 
This raises concerns: if the use of a risk assessment tool at sentencing is appropriate only when the same sentencing decision would be reached without it, then the risk assessment plays no real role in probation or sentencing decisions.
 


Revision 2r2 - 06 Dec 2019 - 14:33:07 - CarlaDULAC
Revision 1r1 - 27 Nov 2019 - 13:54:31 - CarlaDULAC
All material on this collaboration platform is the property of the contributing authors.
All material marked as authored by Eben Moglen is available under the license terms CC-BY-SA version 4.