Law in Contemporary Society


Facial Recognition: Privacy and Bias

-- By GabrielaFloresRomo - 18 May 2021

Facial Recognition

Before fingerprints or passport verification, my father, a 6-foot, bald, blue-eyed Latino man and U.S. citizen, was mistaken for someone else by the facial recognition technology at an airport security checkpoint. We left, furious.

Biometrics—the use of AI for verification—has filtered into our lives, ranging from facial recognition used to unlock cellphones and computers to more general identification systems used by police departments and the FBI. Individual screening processes for identification, formerly performed by humans, have been taken over by algorithms. With facial recognition, however, bias and privacy concerns go hand-in-hand; the technology cannot reliably distinguish between facial features, skin tones, and genders. If it is unreliable in high-security settings, why use it in the first place?

Bias

Facial recognition, the least accurate form of biometrics, has grown from amusing face “filters” into programs that assist law enforcement. Once a face is linked to a platform, that platform may continue automatically linking new pictures to the face. By 2016, through both private and third-party companies, law enforcement face recognition affected over 117 million American adults.

A study by the National Institute of Standards and Technology of 18.27 million images of 8.49 million people found that in one-to-one photo matching, Black women had the highest false-positive rates, with similarly elevated rates for Asian, Black, and Native American women and men. Another study showed that for darker skin tones, Microsoft’s error rate was 12.9%, compared to 0.7% for lighter skin tones, and IBM’s was 22.4%, compared to 3.2%. These discrepancies lead to false accusations, racial profiling, inaccurate matching of individuals, and the unnecessary deprivation of rights, as happened with my father. The individuals whose photos were used in these studies were most likely unaware their images were being used. Currently, this collection and use remains unregulated.
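To make the scale of those gaps concrete, the quoted figures can be reduced to simple ratios. The short Python sketch below uses only the error rates cited in this paragraph; the vendor labels and the ratio arithmetic are illustrative and are not part of either study's methodology.

# A minimal sketch: how much higher the quoted error rates are for darker
# skin tones. The percentages are the figures cited above; everything else
# is illustrative.
error_rates = {
    "Microsoft": {"darker": 12.9, "lighter": 0.7},   # percent
    "IBM":       {"darker": 22.4, "lighter": 3.2},   # percent
}

for vendor, rates in error_rates.items():
    ratio = rates["darker"] / rates["lighter"]
    print(f"{vendor}: {rates['darker']}% vs {rates['lighter']}% "
          f"(roughly {ratio:.0f}x higher for darker skin tones)")

# Approximate output:
#   Microsoft: 12.9% vs 0.7% (roughly 18x higher for darker skin tones)
#   IBM: 22.4% vs 3.2% (roughly 7x higher for darker skin tones)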

Privacy Concerns

Bias is not the only problem; there are also privacy concerns. Consumer habits and facial recognition work in tandem: algorithms are generated from our preferences. The Internet was once a place of anonymity; now, with facial recognition technology, companies have both a name and a linked face behind our Internet activity.

Companies may sell the consumer patterns and facial recognition data they compile to third parties without individuals’ express consent, leaving that information open to data breaches and making it increasingly difficult both to protect one’s data and to track down who holds it. This surveillance and data collection must be regulated.

Regulation?

To prevent further injustice, police use of facial recognition must end. Given the extremely disproportionate number of Black individuals in the criminal justice system and the disproportionate concentration of surveillance cameras in minority neighborhoods, the individuals represented in these databases skew heavily in one direction. Algorithmic bias in the criminal justice system has been heavily documented, including bias tied to skin color and eye shape. While affluent neighborhoods allocate money to rehabilitation programs, minority neighborhoods see increased spending on biometric surveillance, producing mistaken identities rather than precise, expedient policing. When a system is built by humans with preexisting biases, those biases will only be replicated. AI is flawed, and it is perpetuating the cycle of injustice in our country’s law enforcement. Different perspectives do not exist within this system; it must cease.

If facial recognition must continue outside of policing, it should be stringently regulated—both in its scope and purpose—at the state or federal level and, at most, limited to simple tasks. While the United States “does not have a single, comprehensive federal law governing biometric data,” some states, such as Illinois, have enacted laws regulating biometric privacy, and others are in the process of doing so (New York introduced its Biometric Privacy Act in January of this year). To protect informational privacy, “a private person's right to choose to determine whether, how, and to what extent information about oneself is communicated to others, especially sensitive and confidential information,” regulation is necessary if facial recognition is to continue.

As biometric privacy laws develop, a wave of lawsuits against large corporations for privacy infringement is expected. In January 2021, Facebook settled a suit for approximately $650 million for collecting biometric data without plaintiffs’ prior consent, in violation of Illinois’ biometric privacy law. In late 2020, plaintiffs filed a lawsuit in California against Instagram, alleging its collection of biometric data. As suits multiply, it will be interesting to see whether, because of conflicts of interest and other implications, individuals challenging large corporations will turn to independent, specialized practitioners rather than large law firms. It is not only necessary to ensure that laws keep pace with our increasingly virtual world but also important to look at who develops those laws.

Conclusion

It is obvious that bias exists, and both legal and tech giants are pitching in. The ACLU is petitioning the Biden administration to halt these dangerous technologies in order to “uphold his [Biden’s] commitment to racial equity and civil liberties for all.” Likewise, Microsoft, Amazon, and IBM, among others, have banned or halted sales of facial recognition technology to police, and related research, until regulations are put in place. If some of the staunchest advocates of technological advancement are banning facial recognition technology, what do they know that we do not?

Our “benign” uses of facial recognition in everyday technology add to our own personal databases, which are often sold to third parties who profit off of our search history or use inaccurate methods to “identify” individuals. The only way to ensure that privacy concerns prevail over private, biased interests is to end facial recognition. While that is highly unlikely in the short term, we should start by restricting facial recognition to uses—without information sharing—that simplify activity, such as aiding those with disabilities.

The facial recognition market is estimated to reach U.S. $9.78 billion by 2023; its popularity and perceived usefulness are clearly growing. It is vital to combat privacy and bias concerns by proactively educating others, limiting our personal use of these technologies, working alongside the ACLU and the Algorithmic Justice League (among others), and pushing for federal or state biometrics regulations. A system created from skewed data can only produce skewed results.

I think this is a very substantial improvement. Having your ideas lucidly stated helps us to understand where we may go next.

I think first it is important to reflect that the public order-keepers and intelligence services were the very first adopters of facial recognition. The Israelis were already using a system to detect possible crossings of the Allenby Bridge border with Jordan by certain PLO adversaries in 1989. At the other end of the population scale, the PRC was manufacturing hardware in vast quantities and developing at Huawei software compatible with IBM SmartCities to enable mass-population citywide facial recognition by 2010. As usual, the surveillance capitalists in the US largely made use of technologies developed by socialism in our defense industrial sector and applied them on a massive scale to new purposes. But in thinking about this tech, it's very helpful to remember that the policing and spying applications aren't peripheral to the development, but central.

Second, as you have some experience with this technology, you understand that the executable software itself is not the source of the social biases discussed, but rather the training data. The same two-cent neural network cookbook trained, let us say, on the FairFace dataset will show different recognition and categorization outputs entirely. This presents a significant internal conflict for those who discuss the policy implications of this sort of tech: if the bugs are fixed by "better" training, is the tech then okay to use? If not, what's the other reason? The recent "Coded Bias" documentary is a good example of powerful and lucid presentation of the issue you discuss, terrific for introducing the issues and hamstrung for experts by its inability to clarify its ambivalence on this point.
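To illustrate that point under stated assumptions, the Python sketch below trains an identical model "recipe" on two differently composed training sets and reports per-group error on the same held-out data. The data is randomly generated stand-in data, not faces, and no real dataset such as FairFace is actually loaded; it only shows that the disparity lives in the training distribution rather than in the executable code.

# A minimal, synthetic sketch: same model class and settings, two training
# mixes, very different per-group error rates. All data is artificial.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_group(n, informative_feature):
    # Stand-in data: each group's label depends on a different feature.
    X = rng.normal(size=(n, 5))
    y = (X[:, informative_feature] > 0).astype(int)
    return X, y

def build_training_set(n_a, n_b):
    X_a, y_a = sample_group(n_a, informative_feature=0)   # "group A"
    X_b, y_b = sample_group(n_b, informative_feature=1)   # "group B"
    return np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])

# Fixed evaluation sets, one per group.
X_test_a, y_test_a = sample_group(1000, 0)
X_test_b, y_test_b = sample_group(1000, 1)

# Identical model "cookbook"; only the training mix changes.
for name, (n_a, n_b) in {"mostly group A": (2000, 50),
                         "mostly group B": (50, 2000)}.items():
    model = LogisticRegression().fit(*build_training_set(n_a, n_b))
    err_a = 1 - model.score(X_test_a, y_test_a)
    err_b = 1 - model.score(X_test_b, y_test_b)
    print(f"{name}: error on group A = {err_a:.2f}, error on group B = {err_b:.2f}")

Rebalancing the training set in a sketch like this shrinks the gap, which is exactly the internal conflict noted above: it does not by itself answer whether a "better"-trained system is then acceptable to deploy.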

For me, as you'll see if we have a chance to work together in LawNetSoc, better clarity is gained by starting at another vantage. If we begin from close definition of rights, seeing what rights are coiled up inside the bag we call "privacy," it turns out that the problems of bias are bugs in what is already a privacy violation, so fixing them doesn't constitute remedy.

On the other hand, "just say no" structures are not the only way to ensure the rule of law. And they are politically the most difficult, precisely because the organs of order-keeping and security will always oppose them.



