Law in the Internet Society

CarlaDULACSecondEssay 6 - 03 Feb 2020 - Main.CarlaDULAC
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Line: 17 to 17
 If we think about systems predicting court appearance, their use does not always result in incarceration. They can be seen as a tool for courts to bring defendants before them. However, their use raises concerns about how judges rely on such algorithms.

What would you do if you were a United States judge who has to decide bail for a black man, a first-time offender, accused of a non-violent crime?

Changed:
<
<
An algorithm just told you there is a 100 percent chance he'll re-offend. Would you rely on the answer provided by the algorithm, even if your own judgment leads you to another answer? Is the algorithm's answer more reliable than your opinion? One could argue that the answer provided by the algorithm is based on a 137-question quiz.
>
>
An algorithm just told you there is a 100 percent chance he'll re-offend. Would you rely on the answer provided by the algorithm? Is the algorithm's answer more reliable than your opinion? One could argue that the answer provided by the algorithm is based on a 137-question quiz.
 The questions are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions: COMPAS. The problem when we rely on such algorithms is their opacity.
Changed:
<
<
How could a judge rely on such a tool when he does not even know how it works? Concerns have been raised about the nature of the inputs and how they are weighted by the algorithm. We all know that human decisions are not always good, let alone perfect. They depend on the person deciding, on his or her opinions, ideas, and background. But at least it is a human who decides, individualizing each case and each person, something an algorithm is incapable of doing.
>
>
Concerns have been raised about the nature of the inputs and how they are weighted by the algorithm. We all know that human decisions are not always good, let alone perfect. They depend on the person deciding, but at least he or she individualizes each case and each person, something an algorithm is incapable of doing.
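
To make the opacity concrete, here is a deliberately simplified sketch of how a questionnaire-based risk score could work. COMPAS's actual features, model, and weights are trade secrets, so everything below is invented for illustration; the point is that the judge sees only the final number, while the weights that produce it stay hidden.

```python
# Hypothetical sketch of a questionnaire-based risk score.
# COMPAS's real model is a trade secret; these features and weights are invented.
import math

WEIGHTS = {"age_at_first_arrest": -0.04,   # invented weights for three of the
           "prior_arrests": 0.30,          # many items such a quiz might contain
           "unstable_housing": 0.55}
BIAS = -1.2

def risk_score(answers: dict) -> float:
    """Logistic model: a weighted sum of answers squashed into a 0-1 'risk'."""
    z = BIAS + sum(WEIGHTS[item] * value for item, value in answers.items())
    return 1 / (1 + math.exp(-z))

defendant = {"age_at_first_arrest": 19, "prior_arrests": 0, "unstable_housing": 1}
print(f"risk = {risk_score(defendant):.2f}")  # the court sees only this number
```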
 

Line: 47 to 47
 

Towards a computerized justice: what improvements can we make?

Deleted:
<
<
Let’s focus on another algorithm, created by researchers from the universities of Sheffield and Pennsylvania, which is capable of guessing court decisions from the parties’ arguments and the relevant positive law. Its results were published in October 2016 and were outstanding: the algorithm reached the same decision as the human judge eight times out of ten. The judge-robot’s 21% margin of error does not necessarily mean it was wrong, but rather that it stated the law differently than a human judge would.
 
Changed:
<
<
Such results could encourage us to think that the algorithm-judge is the future of predicting judicial decisions. Such algorithms, if we could make them perfectly reliable, would let us obtain the decision most conformable to positive law. They would be a guarantee of legal certainty and would allow judicial decisions to be standardized. A computerized justice could help judges focus only on complicated cases, and would help unclog tribunals and make justice faster. With respect to all those possible advantages, can justice reasonably answer to an implacable mathematical logic? I don’t think so: it depends on the facts, on the defendants. We can’t rely on an algorithm, and we should not. Algorithms will never be able to replace humans, because they lack the human ability to individualize: they cannot see each person as unique.

If we think about risk assessment algorithms in sentencing, they are also dangerous for due process. The Loomis case challenged the use of COMPAS as a violation of the defendant’s due process rights. The defendant argued that the risk assessment score violated his constitutional right to a fair trial: it violated his right to be sentenced based on accurate information, and his right to an individualized sentence. The court held that Loomis’s challenge did not clear the constitutional hurdles. When an algorithm is used to produce a risk assessment, the due process question should be whether and how it was used. The Wisconsin Supreme Court stated that the sentencing court “would have imposed the exact same sentence without it.”

>
>
Algorithms are mostly used in two ways: to estimate a defendant’s flight risk, and to assess his or her threat to public safety. Their utility can be disputed, because algorithms are biased.
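
That claim can be made concrete. One empirical test, in the spirit of ProPublica's 2016 analysis of COMPAS, is to ask whether the tool's errors fall evenly across groups, for example by comparing false positive rates. A minimal sketch on invented data:

```python
# Toy bias check: compare false positive rates across two groups.
# All records below are invented; only the shape of the test is real.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, True), ("B", False, False),
]

def false_positive_rate(group):
    fp = sum(1 for g, pred, actual in records if g == group and pred and not actual)
    negatives = sum(1 for g, _, actual in records if g == group and not actual)
    return fp / negatives

for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(g):.2f}")
# Unequal FPRs mean one group is wrongly flagged "high risk" more often.
```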
 
Changed:
<
<
This raises concerns: if the use of a risk assessment tool at sentencing is only appropriate when the same sentencing decision would be reached without it, then the risk assessment plays absolutely no role in probation or sentencing decisions.
>
>
One way to try to fix algorithms is an administrative solution that includes regulatory oversight. It was implemented in 2018 in New York, with the first algorithmic accountability law in the nation. The goal of the law is fairness, accountability, and transparency. To that end, the law creates a group of experts who will identify automated decision systems’ disproportionate impacts.

The law will allow anyone affected by an automated decision to request an explanation of the decision, and will require a path to redress for anyone harmed by one. However, the law has a significant loophole: compliance is not required where it would result in the disclosure of proprietary information.

One of the biggest problems with algorithms is that they are based on math, not justice. If we want an algorithm to predict something, we have to represent concepts as numbers for it to process. Issues arise because concepts like fairness and justice cannot be fully captured in math: they keep evolving and are subject to debate and public opinion. There is no single metric that determines fairness. We can change the way we design algorithms, by using gender-specific risk assessments for example, but disparities in the treatment of different groups will always exist. It must be decided, when designing an algorithm, what kind of fairness to prioritize, what threshold of risk should be used for release, and what kind of risk it makes sense to measure in this context.
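
A small illustration of why no single metric settles the question: the sketch below applies two standard fairness definitions, demographic parity (the share of each group flagged) and equal opportunity (the true positive rate), to the same invented scores. Scores, labels, and the threshold are all made up; what matters is that the two definitions can diverge on identical data.

```python
# Two fairness definitions applied to the same invented risk scores.
scores = {
    # (risk score, actually reoffended) per group; data is invented
    "A": [(0.9, True), (0.8, True), (0.7, False), (0.4, False), (0.2, False)],
    "B": [(0.9, True), (0.6, False), (0.5, False), (0.3, False), (0.2, False)],
}
THRESHOLD = 0.65  # the "high risk" cutoff is itself a design decision

for group, rows in scores.items():
    flagged = [(s, y) for s, y in rows if s >= THRESHOLD]
    parity = len(flagged) / len(rows)                  # demographic parity
    true_pos = sum(1 for _, y in flagged if y)
    recidivists = sum(1 for _, y in rows if y)
    opportunity = true_pos / recidivists               # equal opportunity (TPR)
    print(f"group {group}: flagged {parity:.0%}, TPR {opportunity:.0%}")
# Here the groups tie on one definition (TPR) and differ sharply on the
# other (share flagged); choosing which gap matters is policy, not math.
```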

Finally, the main issue remains the lack of transparency. Private providers of algorithms have asserted trade secret protections in criminal cases to protect their intellectual property. As a result, defendants are denied the right to examine the data and source code being used to keep them incarcerated. Without real access to the inside of these systems, we can say what is wrong, but it is much harder to say how things should be changed, since we do not clearly know how these programs run.

 
Deleted:
<
<
 
Deleted:
<
<
You might want to go back and look at Jerome Frank's 1949 argument in Courts on Trial about why computational judging is impossible. He was right then and nothing has changed, even though there were only a couple of digital computers in the world and no software to speak of when he wrote. But as I said, disproving the utility of algorithms for judging is trivial anyway: that's not what we need computer programs for in a justice system. So it would be better either to figure out what computational improvements we can make in the justice system and whether they are worth it, or look for other sources of injustice in the justice system, out of which we are unlikely to run, because—as you say—justice is a human project and we are very fallible.
 

CarlaDULACSecondEssay 5 - 03 Feb 2020 - Main.CarlaDULAC
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Line: 7 to 7
 -- By CarlaDULAC - 27 Nov 2019
Changed:
<
<

My concerns about artificial intelligence which is supposed to predict the future behavior of defendants

>
>

My concerns about artificial intelligence that is supposed to predict behavior and deliver public safety

 

Line: 44 to 44
 
Changed:
<
<

Towards a computerized justice: what about due process?

>
>

Towards a computerized justice: what improvements can we make?

 

Let’s focus on another algorithm, created by researchers from the universities of Sheffield and Pennsylvania, which is capable of guessing court decisions from the parties’ arguments and the relevant positive law. Its results were published in October 2016 and were outstanding: the algorithm reached the same decision as the human judge eight times out of ten.
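
For a sense of the mechanics: the published study reportedly represented case texts as n-gram features fed to a linear support vector machine. The sketch below shows a minimal pipeline of that kind; the training texts and labels are invented stand-ins, not the researchers' actual code or data.

```python
# Minimal text-classification pipeline of the kind used in such studies
# (n-gram features + linear SVM). Case texts and labels here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

cases = [
    "applicant detained without review for months",
    "prompt judicial review of detention was provided",
    "no effective remedy was available to the applicant",
    "domestic courts examined the complaint thoroughly",
]
labels = ["violation", "no_violation", "violation", "no_violation"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(cases, labels)
print(clf.predict(["applicant held without any judicial review"]))
# The model matches word patterns to past outcomes; it gives no reasons.
```

Note the design point: such a classifier outputs a label, never a justification, which is part of why "saying the law differently" hides inside the margin of error.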


CarlaDULACSecondEssay 4 - 26 Jan 2020 - Main.CarlaDULAC
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Line: 11 to 11
 

Changed:
<
<
In our daily life, we are frequently subject to predictive algorithms that determine music recommendations, product advertising, university admission and job placement. Until recently, I only knew such algorithms were used for those purposes. But in the American criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to re-offend at some point in the future. When I heard about it, I was quite surprised, since I am French and such tools don’t exist in our judicial system. I didn’t see the purpose, or how machines could be “better” than humans. I think I would not rely on such a tool if I were the judge, because machines are flawed, since they are made by humans.

What does "rely" mean, and why are we assuming that these are tools for judges? Systems being used to allocate police resources, shift by shift or in real time, may be predicting the need for policing on the basis of patterns in social activity that would not be evidence of any kind in a prosecution, but which would be "better" than other forms of resource management according to some metrics. Models predicting court appearance don't necessarily have to be designed to result in incarceration pending trial. Knowing as we do that there are interventions that will help to bring defendants to court, we can once again use pattern-analyzing software to help improve the allocation of resources for those interventions.

Having too judge-focused a view may be part of the problem. A criminal justice system has many parts, only a small one of which is judges. And making judicial determinations may be the object of the system, but is only one detail of its operation.

"Predictive algorithms," like "artificial intelligence" are pretentious phrases to describe computer programs that do pattern-matching. The form of matching involved is sophisticated, and the patterns exist in approximate and shifting forms in complex data, whether the data represent CAT images of lungs that might have tumors in them or train station crowds that might contain pickpockets, or train operation data that might make commuter rail services a little more efficient. Belaboring the strengths and weaknesses of the algorithms is not the same as making thoughtful public policy, no matter what the algorithms are about, or what their weaknesses are.

>
>
In our daily life, we are frequently subject to predictive algorithms that determine music recommendations, product advertising, university admission and job placement. Until recently, I only knew such algorithms were used for those purposes. But in the American criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to re-offend at some point in the future. When I heard about it, I was quite surprised, since I am French and such tools don’t exist in our judicial system. I didn’t see the purpose of the predictive algorithms and risk assessment tools used by US police departments and judges. How could machines be “better”, more efficient than humans, since they are made by flawed people?
 
Added:
>
>
A few years ago, the Los Angeles Police Department adopted a predictive-policing system called PredPol. Its goal was to better direct police efforts in order to reduce crime. It raised questions about whether it merely reduced criminal activity in one place only to have it occur elsewhere. If we think about systems predicting court appearance, their use does not always result in incarceration. They can be seen as a tool for courts to bring defendants before them. However, it raises concerns about how judges use such algorithms.
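
To see the general shape of such a tool: below is a rough sketch of grid-based hotspot ranking on invented incident data. PredPol's real model (reportedly a self-exciting point process) is more elaborate; this toy version only shows how past reports become tomorrow's patrol map, and why displacement and feedback loops are worries.

```python
# Toy hotspot ranking: score map cells by recency-weighted incident counts.
# Incident data is invented; PredPol's actual model is more sophisticated.
from collections import defaultdict

DECAY = 0.9  # recent incidents count more than old ones

incidents = [
    # (grid_cell, days_ago)
    ((3, 4), 1), ((3, 4), 2), ((3, 4), 10), ((7, 1), 3), ((2, 8), 30),
]

cell_scores = defaultdict(float)
for cell, days_ago in incidents:
    cell_scores[cell] += DECAY ** days_ago  # exponentially decayed count

patrol_plan = sorted(cell_scores, key=cell_scores.get, reverse=True)
print(patrol_plan[:2])  # cells suggested for extra patrols tomorrow
# Feedback caveat: patrolling the top cells produces more recorded incidents
# there, which raises those cells' scores again the next day.
```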
 
Changed:
<
<
And you, what would you do if you were a United States judge who has to decide bail for a black man, a first-time offender, accused of a non-violent crime? An algorithm just told you there is a 100 percent chance he'll re-offend. With no further context, what do you do? Would you rely on the answer provided by the algorithm, even if your own judgment leads you to another answer? Is the algorithm's answer more reliable than your opinion? One could argue that the answer provided by the algorithm is based on a 137-question quiz.
>
>
What would you do if you were a United States judge who has to decide bail for a black man, a first-time offender, accused of a non-violent crime? An algorithm just told you there is a 100 percent chance he'll re-offend. Would you rely on the answer provided by the algorithm, even if your own judgment leads you to another answer? Is the algorithm's answer more reliable than your opinion? One could argue that the answer provided by the algorithm is based on a 137-question quiz.
 The questions are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions: COMPAS. The problem when we rely on such algorithms is their opacity.

CarlaDULACSecondEssay 3 - 11 Jan 2020 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Deleted:
<
<
It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted.
 

Should the criminal justice system be governed by algorithms?

Line: 16 to 15
 But in the American criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to re-offend at some point in the future. When I heard about it, I was quite surprised, since I am French and such tools don’t exist in our judicial system. I didn’t see the purpose, or how machines could be “better” than humans. I think I would not rely on such a tool if I were the judge, because machines are flawed, since they are made by humans.
Added:
>
>
What does "rely" mean, and why are we assuming that these are tools for judges? Systems being used to allocate police resources, shift by shift or in real time, may be predicting the need for policing on the basis of patterns in social activity that would not be evidence of any kind in a prosecution, but which would be "better" than other forms of resource management according to some metrics. Models predicting court appearance don't necessarily have to be designed to result in incarceration pending trial. Knowing as we do that there are interventions that will help to bring defendants to court, we can once again use pattern-analyzing software to help improve the allocation of resources for those interventions.

Having too judge-focused a view may be part of the problem. A criminal justice system has many parts, only a small one of which is judges. And making judicial determinations may be the object of the system, but is only one detail of its operation.

"Predictive algorithms," like "artificial intelligence" are pretentious phrases to describe computer programs that do pattern-matching. The form of matching involved is sophisticated, and the patterns exist in approximate and shifting forms in complex data, whether the data represent CAT images of lungs that might have tumors in them or train station crowds that might contain pickpockets, or train operation data that might make commuter rail services a little more efficient. Belaboring the strengths and weaknesses of the algorithms is not the same as making thoughtful public policy, no matter what the algorithms are about, or what their weaknesses are.

 And you, what would you do if you were a United States judge who has to decide bail for a black man, a first-time offender, accused of a non-violent crime? An algorithm just told you there is a 100 percent chance he'll re-offend. With no further context, what do you do? Would you rely on the answer provided by the algorithm, even if your own judgment leads you to another answer? Is the algorithm's answer more reliable than your opinion? One could argue that the answer provided by the algorithm is based on a 137-question quiz. The questions are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions: COMPAS.
Line: 63 to 71
 
Added:
>
>
You might want to go back and look at Jerome Frank's 1949 argument in Courts on Trial about why computational judging is impossible. He was right then and nothing has changed, even though there were only a couple of digital computers in the world and no software to speak of when he wrote. But as I said, disproving the utility of algorithms for judging is trivial anyway: that's not what we need computer programs for in a justice system. So it would be better either to figure out what computational improvements we can make in the justice system and whether they are worth it, or look for other sources of injustice in the justice system, out of which we are unlikely to run, because—as you say—justice is a human project and we are very fallible.

 

CarlaDULACSecondEssay 2 - 06 Dec 2019 - Main.CarlaDULAC
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Deleted:
<
<
 It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted.
Line: 4 to 3
 It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted.
Changed:
<
<

Paper Title

>
>

Should the criminal justice system be governed by algorithms?

 -- By CarlaDULAC - 27 Nov 2019
Changed:
<
<

Section I

>
>

My concerns about artificial intelligence which is supposed to predict the future behavior of defendants

In our daily life, we are frequently subject to predictive algorithms that determine music recommendations, product advertising, university admission and job placement. Until recently, I only knew such algorithms were used for those purposes. But in the American criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to re-offend at some point in the future. When I heard about it, I was quite surprised, since I am French and such tools don’t exist in our judicial system. I didn’t see the purpose, or how machines could be “better” than humans. I think I would not rely on such a tool if I were the judge, because machines are flawed, since they are made by humans.

And you, what would you do if you were a United States judge who has to decide bail for a black man, a first-time offender, accused of a non-violent crime? An algorithm just told you there is a 100 percent chance he'll re-offend. With no further context, what do you do? Would you rely on the answer provided by the algorithm, even if your own judgment leads you to another answer? Is the algorithm's answer more reliable than your opinion? One could argue that the answer provided by the algorithm is based on a 137-question quiz. The questions are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions: COMPAS. The problem when we rely on such algorithms is their opacity.

How could a judge rely on such a tool when he does not even know how it works? Concerns have been raised about the nature of the inputs and how they are weighted by the algorithm. We all know that human decisions are not always good, let alone perfect. They depend on the person deciding, on his or her opinions, ideas, and background. But at least it is a human who decides, individualizing each case and each person, something an algorithm is incapable of doing.

Do we prefer human or machine bias?

Human decision-making in criminal justice settings is often flawed, and stereotypical arguments and prohibited criteria, such as race, sexual preference or ethnic origin, often creep into judgments. The question is: can algorithms help prevent disproportionate and often arbitrary decisions? We can argue that no bias is desirable, and that a computer can offer such an unbiased choice architecture. But algorithms are fed with data that is not free of social, cultural and economic circumstances.

In analyzing language, we can see that natural language necessarily contains human biases. Training machines on language entails that artificial intelligence will inevitably absorb the biases existing in a given society. Furthermore, judges already carry out risk assessments on a daily basis, for example when deciding on the probability of recidivism. This process always draws on human experience, culture, and even biases. Human empathy and other personal qualities are in fact types of bias that go beyond statistically measurable equality.

 
Changed:
<
<

Subsection A

>
>
Should not such personal traits and skills, or emotional intelligence, be actively supported in judicial settings? I think human traits are essential in the justice process. A bias-free goal is not attainable, whether with humans or with algorithms. What matters is the idea of fairness, which the COMPAS system failed to deliver, since its predictions were racially biased.
 
Added:
>
>
 
Deleted:
<
<

Subsub 1

 
Deleted:
<
<

Subsection B

 
Added:
>
>

Towards a computerized justice: what about due process?

 
Changed:
<
<

Subsub 1

>
>
Let’s focus on another algorithm, created by researchers from the universities of Sheffield and Pennsylvania, which is capable of guessing court decisions from the parties’ arguments and the relevant positive law. Its results were published in October 2016 and were outstanding: the algorithm reached the same decision as the human judge eight times out of ten. The judge-robot’s 21% margin of error does not necessarily mean it was wrong, but rather that it stated the law differently than a human judge would.
 
Added:
>
>
Such results could encourage us to think that the algorithm-judge is the future of predicting judicial decisions. Such algorithms, if we could make them perfectly reliable, would let us obtain the decision most conformable to positive law. They would be a guarantee of legal certainty and would allow judicial decisions to be standardized. A computerized justice could help judges focus only on complicated cases, and would help unclog tribunals and make justice faster. With respect to all those possible advantages, can justice reasonably answer to an implacable mathematical logic? I don’t think so: it depends on the facts, on the defendants. We can’t rely on an algorithm, and we should not. Algorithms will never be able to replace humans, because they lack the human ability to individualize: they cannot see each person as unique.
 
Changed:
<
<

Subsub 2

>
>
If we think about risk assessment algorithms in sentencing, they are also dangerous for due process. The Loomis case challenged the use of COMPAS as a violation of the defendant’s due process rights. The defendant argued that the risk assessment score violated his constitutional right to a fair trial: it violated his right to be sentenced based on accurate information, and his right to an individualized sentence. The court held that Loomis’s challenge did not clear the constitutional hurdles. When an algorithm is used to produce a risk assessment, the due process question should be whether and how it was used. The Wisconsin Supreme Court stated that the sentencing court “would have imposed the exact same sentence without it.”
 
Added:
>
>
This raises concerns: if the use of a risk assessment tool at sentencing is only appropriate when the same sentencing decision would be reached without it, then the risk assessment plays absolutely no role in probation or sentencing decisions.
 
Added:
>
>
 
Deleted:
<
<

Section II

 
Deleted:
<
<

Subsection A

 
Deleted:
<
<

Subsection B

 



CarlaDULACSecondEssay 1 - 27 Nov 2019 - Main.CarlaDULAC
Line: 1 to 1
Added:
>
>
META TOPICPARENT name="SecondEssay"

It is strongly recommended that you include your outline in the body of your essay by using the outline as section titles. The headings below are there to remind you how section and subsection titles are formatted.

Paper Title

-- By CarlaDULAC - 27 Nov 2019

Section I

Subsection A

Subsub 1

Subsection B

Subsub 1

Subsub 2

Section II

Subsection A

Subsection B



Revision 6r6 - 03 Feb 2020 - 10:03:52 - CarlaDULAC
Revision 5r5 - 03 Feb 2020 - 07:58:14 - CarlaDULAC
Revision 4r4 - 26 Jan 2020 - 09:08:30 - CarlaDULAC
Revision 3r3 - 11 Jan 2020 - 16:21:04 - EbenMoglen
Revision 2r2 - 06 Dec 2019 - 14:33:07 - CarlaDULAC
Revision 1r1 - 27 Nov 2019 - 13:54:31 - CarlaDULAC