Law in Contemporary Society

GabrielaFloresRomoFirstEssay

  The facial recognition market is estimated to reach U.S. $9.78 billion by 2023. Facial recognition's popularity and perceived usefulness are clearly growing. It is vital to combat privacy and bias concerns by proactively educating others, limiting our personal use of these technologies, working alongside the ACLU and the Algorithmic Justice League (among others), and pushing for federal and/or state biometrics regulations. A system created using skewed data can only produce skewed results.
I think this is a very substantial improvement. Having your ideas lucidly stated helps us to understand where we may go next.

I think first it is important to reflect that the public order-keepers and intelligence services were the very first adopters of facial recognition. The Israelis were already using a system in 1989 to detect possible crossings of the Allenby Bridge border with Jordan by certain PLO adversaries. At the other end of the population scale, the PRC was manufacturing hardware in vast quantities and developing, at Huawei, software compatible with IBM SmartCities to enable mass-population citywide facial recognition by 2010. As usual, the surveillance capitalists in the US largely made use of technologies developed by socialism in our defense industrial sector and applied them on a massive scale to new purposes. But in thinking about this tech, it's very helpful to remember that the policing and spying applications aren't peripheral to the development, but central.

Second, as you have some experience with this technology, you understand that the executable software itself is not the source of the social biases discussed, but rather the training data. The same two-cent neural network cookbook trained, let us say, on the FairFace dataset will show entirely different recognition and categorization outputs. This presents a significant internal conflict for those who discuss the policy implications of this sort of tech: if the bugs are fixed by "better" training, is the tech then okay to use? If not, what's the other reason? The recent "Coded Bias" documentary is a good example: a powerful and lucid presentation of the issue you discuss, terrific for introducing the issues but hamstrung for experts by its inability to resolve its ambivalence on this point.
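
To make the point concrete, here is a minimal sketch in Python with scikit-learn, using entirely synthetic "embedding" data (nothing below is FairFace or any real recognition system; every name and threshold is illustrative). It shows one fixed architecture producing different per-group error rates depending only on the composition of its training set:

<verbatim>
# Hypothetical demonstration: same model, different training data,
# different per-group outcomes. Synthetic data throughout.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_faces(n_a, n_b):
    """Synthetic 'face embeddings': groups A and B differ in distribution."""
    X_a = rng.normal(0.0, 1.0, size=(n_a, 16))
    X_b = rng.normal(0.5, 1.5, size=(n_b, 16))
    # Binary label ("match" / "no match") depends on the features,
    # with a different decision surface for each group.
    y_a = (X_a.sum(axis=1) > 0).astype(int)
    y_b = (X_b.sum(axis=1) > 8).astype(int)
    X = np.vstack([X_a, X_b])
    y = np.concatenate([y_a, y_b])
    g = np.array(["A"] * n_a + ["B"] * n_b)
    return X, y, g

def per_group_accuracy(model, X, y, g):
    pred = model.predict(X)
    return {grp: float((pred[g == grp] == y[g == grp]).mean())
            for grp in ("A", "B")}

# Identical architecture ("the same two-cent cookbook") for both runs.
def fresh_model():
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=0)

# Balanced evaluation set.
X_test, y_test, g_test = make_faces(1000, 1000)

# Run 1: skewed training data, group B badly underrepresented.
X_skew, y_skew, _ = make_faces(5000, 100)
skewed = fresh_model().fit(X_skew, y_skew)

# Run 2: balanced training data.
X_bal, y_bal, _ = make_faces(2500, 2500)
balanced = fresh_model().fit(X_bal, y_bal)

print("skewed training:  ", per_group_accuracy(skewed, X_test, y_test, g_test))
print("balanced training:", per_group_accuracy(balanced, X_test, y_test, g_test))
# Typically, the skewed run's accuracy on group B lags well behind group A,
# while the balanced run narrows the gap -- same code, different data.
</verbatim>

The architecture, hyperparameters, and code path are identical across the two runs; only the training-set composition changes. That is exactly why "better" training data can patch the bias bug without touching the underlying surveillance question.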

For me, as you'll see if we have a chance to work together in LawNetSoc, better clarity is gained by starting from another vantage. If we begin from a close definition of rights, seeing what rights are coiled up inside the bag we call "privacy," it turns out that the problems of bias are bugs in what is already a privacy violation, so fixing them doesn't constitute a remedy.

On the other hand, "just say no" structures are not the only way to ensure the rule of law. And they are politically the most difficult, precisely because the organs of order-keeping and security will always oppose them.

 
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable.

Revision r4 - 21 May 2021 - 19:08:12 - EbenMoglen
Revision r3 - 18 May 2021 - 22:03:44 - GabrielaFloresRomo