Computers, Privacy & the Constitution

Issues with Facial Recognition Technologies and the Need to Regulate Profiling

-- By ShotaSugiura - 11 Mar 2022

Introduction

Facial recognition technologies have become a prominent privacy issue. In 2020, IBM announced that it was ending its facial recognition program, and IBM's CEO, Arvind Krishna, expressed deep concern about "mass surveillance, racial profiling, violations of basic human rights and freedoms" (https://www.cnn.com/2020/06/09/tech/ibm-facial-recognition-blm/index.html). In the same month, Amazon (https://www.cnn.com/2020/06/10/tech/amazon-facial-recognition-moratorium/index.html) and Microsoft (https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/) announced that they would not sell their facial recognition products to the police. These tech giants withdrew voluntarily from the facial recognition business, even though it was regarded as an emerging business with applications in many places and services. The issue intrigues me because the withdrawal of these giants implies the necessity of regulation in this area. This essay discusses the problems of facial recognition technologies and what type of regulation is necessary.

Background of the Withdrawal from Facial Recognition Technologies

As reported in 2020, Amazon decided to bar the police from using its facial recognition system, "Rekognition," for at least a year. By that time, the risks of police use of facial recognition systems had become widely recognized. In January 2020, for example, the Detroit Police Department arrested an innocent man because of a misidentification by a facial recognition system (https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html). Even before that arrest, research institutions and citizen groups had warned about the inaccuracy of facial recognition systems and the danger of mass surveillance.

Problems with Facial Recognition

Inaccuracy of Facial Recognition Technologies

What are the problems with facial recognition technologies? One apparent problem is inaccuracy. In 2018, a researcher at the MIT Media Lab published a study showing that facial recognition systems make more errors when identifying darker-skinned women than when identifying lighter-skinned men (https://www.media.mit.edu/articles/facial-recognition-software-is-biased-towards-white-men-researcher-finds/). Because facial recognition identifies a person from patterns machine-learned from images of human faces, a system trained on fewer pictures of darker-skinned women will be less accurate at identifying them. Inaccurate output from a facial recognition system can lead to a mistaken arrest, as it did in Detroit in 2020. Beyond law enforcement, inaccuracy can also cause serious problems at airports, political conferences, and hospitals.
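
This mechanism can be illustrated with a toy experiment. The following Python sketch is purely synthetic (the feature vectors, group sizes, and labels are all invented for illustration; no real images or demographic data are involved): a classifier trained on far fewer examples of one group will usually show a higher error rate for that group, because its decision boundary is fitted almost entirely to the majority group's data.

<verbatim>
# Toy illustration of representation bias: the model sees 5,000 examples
# of group A but only 250 of group B, so its decision boundary is fitted
# almost entirely to group A. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, center):
    # Synthetic "face features" clustered around a group-specific center,
    # with labels that depend on the features plus noise.
    X = rng.normal(loc=center, scale=1.0, size=(n, 8))
    y = (X.sum(axis=1) - center.sum() + rng.normal(scale=2.0, size=n) > 0).astype(int)
    return X, y

center_a = rng.normal(size=8)        # majority group
center_b = rng.normal(size=8) + 3.0  # minority group, differently distributed
Xa, ya = make_group(5000, center_a)
Xb, yb = make_group(250, center_b)

model = LogisticRegression(max_iter=1000)
model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: the error rate is typically
# much higher for the underrepresented group.
for name, center in [("A (majority)", center_a), ("B (minority)", center_b)]:
    X_test, y_test = make_group(2000, center)
    err = (model.predict(X_test) != y_test).mean()
    print(f"group {name}: error rate = {err:.3f}")
</verbatim>

The exact numbers vary with the random seed; the point is the tendency, not the figures.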

Profiling

Another issue is the profiling of people based on gender, race, and other traits through facial recognition technologies. Profiling is typically defined as the automated processing of data to evaluate certain personal aspects of a person, in particular to analyze and predict the person's preferences, interests, economic situation, reliability, or movements.
Since facial recognition identifies a person only by facial appearance, it can easily lead to biased decisions and inappropriate discrimination based on external characteristics such as gender or skin color. Take, for example, a smart security camera that identifies suspicious behavior in a scene. Such a camera may tend to single out people with certain traits, such as a specific gender or race, because of the machine-learned patterns in the scenes of criminal behavior it was trained on. A kind of bias or discrimination could thus be justified by machines that have learned the patterns of human behavior in our society.
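
A second toy sketch (again entirely synthetic; every variable is invented) shows the label-bias version of the same problem: if the historical "suspicious" labels used for training were themselves skewed against one group, a model learns the protected trait itself as if it were evidence.

<verbatim>
# Toy illustration of label bias: behavior is identical across groups, but
# the historical labels flagged group 1 more often for the same behavior.
# A model trained on those labels learns the trait itself as a "risk" signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
behavior = rng.normal(size=n)        # underlying behavior, same for both groups
group = rng.integers(0, 2, size=n)   # protected trait, independent of behavior
flagged = (behavior + 1.5 * group + rng.normal(scale=0.5, size=n) > 1.5).astype(int)

model = LogisticRegression(max_iter=1000)
model.fit(np.column_stack([behavior, group]), flagged)
print("learned weight on behavior:", round(model.coef_[0][0], 2))
print("learned weight on group:", round(model.coef_[0][1], 2))  # large and positive
</verbatim>
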
The best routes to improvement here begin by reducing technical confusion.

Facial recognition is software, of a simple kind—pattern matching—now grandly and ridiculously referred to as "artificial intelligence" and "machine learning," actually a staple of what computer programs have been used to do since the beginning of digital computing. The program compares a target image against a large collection of other pictures of faces, in order to identify the person whose face is in the target image. Finding the needle in the haystack is comparatively easy. Building and maintaining the haystack, on the other hand, is harder. So the most obvious technology pattern in the current context is for "platform" companies that build and maintain immense data stores on the world's human beings to run the pattern-matching programs "as a service," accepting target data from people paying for the service, giving them the output from pattern matching, and remembering forever who was looking for whom, when, and inferring why.
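
To make that pattern concrete, here is a schematic Python sketch of identification "as a service" (all names and thresholds are hypothetical; a real deployment would use a learned face-embedding network and an approximate nearest-neighbor index over millions of vectors):

<verbatim>
# Schematic "identification as a service": the provider holds the haystack
# (an index of face embeddings) and logs every query it answers.
import numpy as np
from dataclasses import dataclass, field
from datetime import datetime, timezone

def embed(image: np.ndarray) -> np.ndarray:
    # Stand-in for a face-embedding model: maps a fixed-size image to a
    # unit vector so that similar faces land near each other.
    v = image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

@dataclass
class IdentificationService:
    names: list = field(default_factory=list)      # known identities
    vectors: list = field(default_factory=list)    # their embeddings: the haystack
    query_log: list = field(default_factory=list)  # the part that remembers forever

    def enroll(self, name: str, image: np.ndarray) -> None:
        self.names.append(name)
        self.vectors.append(embed(image))

    def identify(self, client: str, target: np.ndarray) -> str:
        q = embed(target)
        sims = np.stack(self.vectors) @ q          # cosine similarity of unit vectors
        best = int(np.argmax(sims))
        match = self.names[best] if sims[best] > 0.9 else "unknown"
        # The provider learns who was looking for whom, and when.
        self.query_log.append((datetime.now(timezone.utc).isoformat(), client, match))
        return match
</verbatim>

The needle-finding in identify() is the easy part; the valuable, and dangerous, assets are the enrolled haystack and the query log.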

The various moratoriums, pauses, and decisions not to sell for now that you mention but do not actually analyze answer the rhetorical question whether regulation is needed: obviously the business of selling the ability to identify anyone anywhere in real time will be regulated. "Respectable" parties know that stealing a haystack is cheaper than building and maintaining one, hence Clearview AI. They all know that broad dispersal of the power involved is inevitable, so they want to focus on the terms on which this new danger will be added to society, naturally so that the terms are advantageous to them rather than to less respectable parties. Governments at all levels have both the public's interests and their own parochial interests at heart. Various civil society and astroturf entities purport to represent the public interest as well.

But the software that finds needles in haystacks is not the cameras everywhere that acquire all the images, nor the social processes that use or misuse the outputs. The business of selling real-time identification is not the only regulable entity, nor even the one of most interest in this course. Perhaps we should instead ask a narrower and more productive question: what should be the limits on the State's complete real-time control over its citizens' identities? That question bears on constitutional limitation of cashless payment design and genomic identification as well as facial searching, but at its core it hinges on what we mean by freedom.

Who Should Be Regulated?

Regulation of Providers and Users of Facial Recognition Software

In considering regulation of facial recognition, there are two possible targets. The first is those who sell or use facial recognition software: providers such as IBM, Amazon, and Microsoft, and users such as law enforcement agencies. As mentioned, some providers have already acted on their own in response to public criticism, but many others have not. Not all companies can regulate themselves, nor is all self-regulation sufficient. We therefore need general standards for the sale and use of facial recognition software to prevent misuse and undesirable results. A good example can be found in the EU's General Data Protection Regulation ("GDPR"), under which people have the right not to be subject to a decision based solely on automated processing, including profiling (Article 22(1)). Profiling is among the most serious harms facial recognition software can cause, so regulation targeting profiling is an effective and less suppressive way to restrict the software's most invasive function.
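
As a rough illustration of what a ban on solely automated decisions means in engineering terms (a sketch of one possible safeguard, not a statement of what GDPR compliance actually requires), a decision pipeline can be structured so that an automated match alone never triggers action against a person:

<verbatim>
# Hypothetical human-in-the-loop gate: an automated facial-recognition
# match is treated only as a lead, never as a decision by itself.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MatchResult:
    subject_id: str
    confidence: float

def act_on_match(match: MatchResult, reviewer: Optional[str]) -> str:
    if reviewer is None:
        # No named human reviewer: the solely automated result is not acted on.
        return "no_action"
    return f"open_case:{match.subject_id} (reviewed_by={reviewer})"
</verbatim>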

Regulation of Collection and Distribution of Facial Images

Regulating the sale and use of the software blocks the problem downstream, at its exit. The other approach is to control the upstream side of the system. Facial recognition technologies rest on massive data sets of facial images, and such large-scale collection can be conducted only by a handful of tech companies, chiefly the digital platforms, which typically monopolize their privileged position to gather data from people. Controlling these upstream companies could therefore be more effective than regulating downstream players.

How can such massive collection of facial images be regulated? A typical measure is to require companies to explain how the data will be used and to obtain consent when they collect it. This type of regulation is already in place in many countries, where companies cannot use personal data, including facial images, outside the scope of the informed consent obtained in advance. I am skeptical of this approach, however, because in real life people hand their data to the platforms in exchange for "free" services, usually without serious consideration. We cannot expect average consumers to take the time to read and understand lengthy privacy policies and then decide whether to provide their personal data. Leaving the decision to consumers alone can never be sufficient. We should therefore also take a more direct approach: restricting collection and use of personal data that is conducted in unjustified ways, regardless of whether consumers have consented.
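
One technical idea that a more direct approach could still borrow from the consent model is purpose limitation enforced in code: data tagged with the purposes for which it was collected can only be released for those purposes, no matter how broadly a privacy policy was worded. A hypothetical sketch (the purpose names and record format are invented):

<verbatim>
# Hypothetical purpose-binding check for collected facial images.
ALLOWED_PURPOSES = {"device_unlock"}  # what the person actually agreed to

def release(record: dict, purpose: str) -> bytes:
    if purpose not in record["purposes"] or purpose not in ALLOWED_PURPOSES:
        raise PermissionError(f"use for {purpose!r} is not permitted")
    return record["data"]

photo = {"data": b"...", "purposes": {"device_unlock"}}
release(photo, "device_unlock")              # permitted
# release(photo, "train_recognition_model")  # would raise PermissionError
</verbatim>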
 
