Computers, Privacy & the Constitution

Mass Facial Recognition and the First Amendment

-- By MitchellKohles - 13 Apr 2021


Following the Capitol riot of January 6th, 2021, law enforcement agencies relied heavily on facial recognition technology to identify and locate the hundreds of people who stormed the Capitol. Given the day’s repugnant attack on democracy, it can be difficult to criticize even particular aspects of the effort to hold the rioters accountable.

Nevertheless, mass facial recognition tools present a unique danger to the First Amendment value of political deliberation. To avert that danger, in an era of technology permitting mass facial recognition, state actors have an obligation to secure public life from constant visual surveillance, aggregation, and analysis.

Facial Recognition and Consent

The ACLU is currently litigating a case against Clearview AI over its dissemination of facial recognition software. The ACLU alleges that Clearview built its database by scraping the web for photos of all kinds, then analyzing them to build "faceprints" of millions of individuals, which Clearview's customers can use to identify and track those individuals. In doing so, the ACLU argues, Clearview violated Illinois's Biometric Information Privacy Act (BIPA), which requires anyone selling a person's biometric information to obtain that person's consent. Clearview's product is not entirely novel: the FBI and law enforcement agencies across the country employ facial recognition technology of various kinds, and those agencies are now, according to Clearview, the company's sole clients.

However, there is a meaningful difference, one turning on Fourth Amendment logic, between the use of photos secured pursuant to arrests, border crossings, or the state's issuance of a driver's license and what Clearview does.

In its briefs attacking BIPA, Clearview argues that because its technology merely sweeps up information that is “public”—e.g., photos posted to Facebook—its service doesn’t implicate privacy interests. Motion to Dismiss at 22. But Clearview’s invocation of the Fourth Amendment’s third-party doctrine is not only flawed; it shows why the doctrine itself fits so poorly with the Fourth Amendment.

The third-party doctrine ultimately depends on consent, whether consent to share with your carrier the phone number you dial or consent to tell your credit card issuer when and where you ate lunch. But a photo on the internet does not necessarily convey any consent by the photographed person, not even the low bar of consent contemplated by Smith v. Maryland, 442 U.S. 735 (1979). Consent in this context breaks down entirely. In Clearview's product, surveillance "via consent" is accomplished only by proclaiming the entire internet an open field, in which the selfie John Doe posts to Instagram implies no greater privacy interest (and no less consent) than a photo of a young woman in her bedroom uploaded by her stalker. Clearview's argument, for all its flaws, reveals how cartoonishly ill-suited the third-party doctrine is to modern life.

Facial Recognition and Expression

Facial recognition tools have implications for the rights of expression, assembly, and association because, insofar as metadata powers the insights these tools promise, they can chill the exercise of those rights by people who want to keep their expressions and associations anonymous.

With mass facial recognition tools, the state gains the power to identify individuals based on their possible affinity for one viewpoint or another, or on their decision to walk down a certain street or visit a certain shop. Metadata makes this possible: where the technology is trained on visual data that includes location, time of day, the presence of other known "faceprints," and similar information, these tools can correlate one's face with innumerable privacies of life (a doctor's visit, a church service, a rally) simply because physical movements can be caught on camera. Nothing in the Constitution, for example, prevents Clearview from contracting with every private entity that installs cameras inside and outside its business to acquire that footage, the kind prosecutors obtain while investigating a crime.

On the other hand, where faceprints merely assist prosecutors in identifying criminal suspects (for example, by comparing photos to the name associated with a driver’s license), no impermissible chilling occurs. Ironically, collection of the nearly immutable aspect of one’s identity—the face—does not by itself greatly threaten privacy. But a unique faceprint that conveys information correlated to countless data and metadata erodes the privacy of one’s person over time.

An index of faceprints, where training metadata is accessible to users or otherwise informs the insights available to them, means that state actors can identify not only the speaker at a public rally in the park but also passive participants who may want to express support for the speaker's message, evaluate it, or develop their own criticisms. And the state could do so on the basis of nothing more than a participant's decision to attend the rally. Such bystander activity is critical to the discussion and deliberation the Founders sought to protect in the First Amendment. Deliberation among viewpoints is a means of ensuring that individuals can hold their governments accountable in ways the community deems appropriate.

This danger animates the Supreme Court’s chilling effect doctrine, wherein a state can be found to violate the First Amendment by acting in a way that chills the exercise of rights, even where the aggrieved party has not been found to violate the state’s laws. To be clear, what matters is a state actor’s purpose in using a deep database of faceprints. Profiling individuals outside the context of a lawfully predicated criminal investigation is an impermissible purpose. But such purposes are made possible by the unrestricted hoovering up of so-called “public” data, and the chilling effect of making metadata-infused insights available to state actors is constitutionally cognizable.

If the incorporation of the First Amendment's non-abridgement principle against the states demands that the state tolerate minority viewpoints, and if the use of these surveillance technologies can meaningfully chill deliberation of those viewpoints, then the Constitution requires that state actors refrain from using these tools to build a surveillance apparatus that delivers insights beyond the mere fact of one's faceprint.


