Computers, Privacy & the Constitution

ShotaSugiuraFirstPaper 2 - 04 Apr 2022 - Main.EbenMoglen
Issues on Facial Recognition Technologies and Regulations Needed on Profiling

-- By ShotaSugiura - 11 Mar 2022
Introduction

Facial recognition technology has become a topical privacy issue. In 2020, IBM announced the cancellation of its facial recognition program. In the accompanying statement, IBM's CEO, Arvind Krishna, expressed deep concern about "mass surveillance, racial profiling, violations of basic human rights and freedoms" [1]. In the same month, Amazon and Microsoft also announced that they would stop selling their facial recognition products to the police [2][3]. The tech giants voluntarily withdrew from the facial recognition business, even though facial recognition was thought to be an emerging business with applications in many places and services. This issue intrigues me because the withdrawal of the tech giants implies the necessity of regulation in this area. This short essay discusses the problems of facial recognition technologies and whether regulation is necessary in the US.

[1] https://www.cnn.com/2020/06/09/tech/ibm-facial-recognition-blm/index.html
[2] https://www.cnn.com/2020/06/10/tech/amazon-facial-recognition-moratorium/index.html
[3] https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/
Why are these URLs strewn in the text? Just make them links.

Background of Withdrawal from Facial Recognition Technologies

News reports noted that Amazon decided to bar the police from using its facial recognition system "Rekognition" for at least a year. By that time, the risks of police use of facial recognition systems had become widely recognized. For example, an innocent man was arrested by the Detroit Police Department in January 2020 because a facial recognition system misidentified him [4]. Even before that event, research institutions and citizen groups had warned about the inaccuracy of facial recognition systems and the danger of mass surveillance.

[4] https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
Problems on Facial Recognition

Inaccuracy of Facial Recognition Technologies

What are the problems of facial recognition technologies? One apparent problem is inaccuracy. In 2018, a researcher at the MIT Media Lab published a study showing that facial recognition systems make more errors when identifying darker-skinned women than when identifying lighter-skinned men [5]. Because facial recognition identifies a person through machine learning on patterns of human faces, a system that has fewer opportunities to learn from pictures of darker-skinned women will also be less accurate at identifying them. An inaccurate match from a facial recognition system can lead to a mistaken arrest, as happened with the Detroit Police Department in 2020. Beyond law enforcement, it can also cause serious problems in airports, political conferences, or even hospitals.

[5] https://www.media.mit.edu/articles/facial-recognition-software-is-biased-towards-white-men-researcher-finds/
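The mechanism described above can be illustrated with a toy simulation. This is not a real face recognition pipeline; the "face vectors," noise levels, and group names are all invented. It only sketches the statistical point: a nearest-neighbor matcher whose reference templates are built from fewer example images of one group will tend to misidentify members of that group more often.

```python
# Toy simulation (hypothetical numbers throughout): identification error of a
# nearest-neighbor matcher rises for a group that contributed fewer
# enrollment photos, because that group's gallery templates are noisier.
import math
import random

random.seed(0)
DIM = 16  # dimensionality of the toy "face vector"

def noisy(vec, scale):
    """Simulate one photo of a face: the true vector plus capture noise."""
    return [x + random.gauss(0, scale) for x in vec]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def simulate(group_sizes, people_per_group=40, probes_per_person=20):
    people, gallery = {}, {}
    for group, n_photos in group_sizes.items():
        for i in range(people_per_group):
            pid = f"{group}-{i}"
            true_vec = [random.gauss(0, 1) for _ in range(DIM)]
            people[pid] = true_vec
            # Gallery template: average of however many photos the group had.
            photos = [noisy(true_vec, 0.9) for _ in range(n_photos)]
            gallery[pid] = [sum(c) / n_photos for c in zip(*photos)]
    errors = {g: 0 for g in group_sizes}
    trials = {g: 0 for g in group_sizes}
    for pid, true_vec in people.items():
        group = pid.split("-")[0]
        for _ in range(probes_per_person):
            probe = noisy(true_vec, 0.9)
            match = min(gallery, key=lambda q: dist(probe, gallery[q]))
            trials[group] += 1
            if match != pid:
                errors[group] += 1
    return {g: errors[g] / trials[g] for g in group_sizes}

# One group enrolled with 25 photos per person, the other with only 2.
rates = simulate({"well_sampled": 25, "under_sampled": 2})
# The under-sampled group's templates are noisier, so that group is
# typically misidentified more often.
```

The exact error rates are an artifact of the invented parameters; only the direction of the gap is the point.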
Profiling

Another issue is the profiling of people based on their gender, race, and other traits through facial recognition technologies. Profiling is typically defined as the automated processing of data to evaluate certain personal aspects of a person, in particular to analyze and predict a person's preferences, interests, economic situation, reliability, or movements.
Since facial recognition identifies a person only by appearance, it can easily lead to biased decisions and inappropriate discrimination based on appearance, such as gender or skin color. Take, for example, a smart security camera that identifies suspicious behavior in a scene. Such a camera may tend to single out people with certain traits, such as a particular gender or race, based on machine-learned patterns from scenes of criminal behavior. In this situation, a kind of bias or discrimination could be justified by artificial intelligence.

Is regulation needed?
Of the two major problems with facial recognition technologies, the second, profiling, is much more serious. Inaccuracy currently draws more public attention because its harm is apparent, as in the Detroit Police Department case. But inaccuracy may not remain a crucial problem in the long run. Any new, relatively undeveloped technology makes mistakes. DNA testing led to false arrests and even false judgments in the 20th century, but the technology has since improved dramatically, and DNA testing has become a powerful tool in criminal proceedings. The accuracy of a new technology can be improved and its mistakes corrected, and this should be true of facial recognition systems as well. The accuracy problem may be only a matter of the current transition period. Profiling is not. It is not a matter of the immaturity of the technology. It needs regulation so that these technologies are used appropriately.
The EU's General Data Protection Regulation ("GDPR") offers an example of regulation of profiling. Under the GDPR, any collection, use, or disclosure of personal data must meet certain requirements: (i) the controller must have a lawful basis, such as the data subject's consent (Article 6); (ii) business operators must describe and notify people of how their personal data is processed (Articles 13 and 14); and (iii) data subjects have the right not to be subject to a decision based solely on automated processing, including profiling (Article 22). Enacted in 2016, the GDPR already provides several protections against the dangers of profiling.
The US currently has no comprehensive data protection law at the federal level. Personal information is regulated only by sector (for example, the Health Insurance Portability and Accountability Act, or "HIPAA," for medical data, and the Children's Online Privacy Protection Act, or "COPPA," for children's data). Some states, such as California, Virginia, and Colorado, have established data protection regulations at the state level. Among them, the California Privacy Rights Act (CPRA) stands out because it gives consumers opt-out rights against profiling. However, regulation of facial recognition technologies must be enforced nationwide; otherwise, people living in less regulated states of the same country would suffer disadvantages because of the different standards of protection.
The best routes to improvement here begin by reducing technical confusion.

Facial recognition is software, of a simple kind—pattern matching—now grandly and ridiculously referred to as "artificial intelligence" and "machine learning," actually a staple of what computer programs have been used to do since the beginning of digital computing. The program compares a target image against a large collection of other pictures of faces, in order to identify the person whose face is in the target image. Finding the needle in the haystack is comparatively easy. Building and maintaining the haystack, on the other hand, is harder. So the most obvious technology pattern in the current context is for "platform" companies that build and maintain immense data stores on the world's human beings to run the pattern matching programs "as a service," accepting target data from people paying for the service, giving them the output from pattern matching, and remembering forever who was looking for whom, when, and inferring why.
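The "as a service" pattern described above can be sketched in a few lines. Everything here is hypothetical for illustration: the class and method names, the toy feature vectors standing in for face templates, and the client identifier. The point of the sketch is structural: the nearest-neighbor search over the gallery is the easy part, while the gallery itself and the permanent query log are the platform's real assets.

```python
# Hypothetical sketch of face matching "as a service": the platform holds
# the haystack (a gallery of templates), answers identification queries,
# and remembers forever who was looking for whom, and when.
import math
import time

class RecognitionService:
    def __init__(self):
        self.gallery = {}    # person id -> face template (a feature vector)
        self.query_log = []  # permanent record of every lookup

    def enroll(self, person_id, template):
        """Add a person's template to the haystack."""
        self.gallery[person_id] = template

    def identify(self, client_id, probe):
        """Return the gallery identity nearest to the probe, and log the query."""
        best_id, best_dist = None, math.inf
        for person_id, template in self.gallery.items():
            d = math.dist(probe, template)  # Euclidean nearest-neighbor search
            if d < best_dist:
                best_id, best_dist = person_id, d
        # The business asset: who asked, about whom, and when.
        self.query_log.append((time.time(), client_id, best_id))
        return best_id

svc = RecognitionService()
svc.enroll("alice", [0.1, 0.9, 0.3])
svc.enroll("bob", [0.8, 0.2, 0.7])
match = svc.identify(client_id="police-dept-17", probe=[0.15, 0.85, 0.35])
# match == "alice", and the lookup is now recorded in svc.query_log
```

Note that nothing in the sketch requires the client to justify the search; the only durable trace of the transaction lives with the platform, which is precisely the regulatory problem the text identifies.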

The various moratoriums, pauses, decisions not to sell for now, etc. that you mention but do not actually analyze, answer your rhetorical question: obviously the business of selling the ability to identify anyone anywhere in real time will be regulated. "Respectable" parties know that stealing a haystack is cheaper than building and maintaining one, hence Clearview AI. They all know that broad dispersal of the power involved is inevitable, so it is the terms on which this new danger will be added to society on which they want to focus, naturally so that the terms can be advantageous to them, rather than to less respectable parties. Governments at all levels also have both the public and their parochial interests at heart. Various civil society and astroturf entities purport to represent the public interest as well.

But the software that finds needles in haystacks is not the cameras everywhere that acquire all the images, nor the social processes that use or misuse the outputs. The business of selling real-time identification is not the only regulable entity, or even the one in which we would be most interested in this course. Perhaps we should ask instead a narrower and more productive question: What should be the limits on the State's complete real-time control over its citizens' identities? That will affect issues of constitutional limitation on cashless payments design and genomic identification as well as facial searching, but at the core it hinges on what we mean by freedom.

 
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable. To restrict access to your paper simply delete the "#" character on the next two lines:
