Law in the Internet Society

Making Citizens Free: Privacy and the First Amendment

-- By JohnClayton - 1 Feb 2020

The argument goes like this: Company X is analogous to a news publisher. It gathers information, distills its findings, and delivers insights to its customers. Never mind that the “information” it collects consists of three billion human images scraped from the web; or that the “insights” it creates are biometric faceprints of millions of unconsenting individuals. Because the First Amendment protects “the creation and dissemination of information,” Company X’s facial recognition services must be immune from all but the most narrowly tailored regulations.

The challenge by Company X—real name, Clearview AI—against an Illinois privacy statute exposes a tension in First Amendment law. The amendment offers a bulwark against state suppression of political, social, and scientific knowledge. But the freedom of speech also depends upon privacy: one cannot think or act autonomously while being watched. Tech companies have increasingly wielded the First Amendment as a sword against laws that threaten their efforts to learn about us. The result is a seeming paradox, with the amendment threatening privacy measures necessary to preserve its vitality.

This paradox can be reconciled. I proceed from a basic premise: the state has a compelling interest in enacting privacy laws that help create and preserve an environment where free thought and inquiry—the preconditions of self-governance—can flourish among the citizenry. Neutral, narrowly tailored regulations meant to tame our ecological privacy disaster do not violate the First Amendment.

The Scope of “Speech”

Start with an acknowledgment: collecting and deciphering human behavioral insights falls within the ambit of the First Amendment. This is true of Clearview’s faceprints; or academic research; or online browsing histories sold by data brokers. Nor is such expansive coverage necessarily a bad thing. Modes of knowledge creation constantly evolve—defining the precise borders of the “freedom of speech” is impossible. Data scraping feels icky when it maps our faces, but journalists also use such methods to study Facebook’s news feed. A default presumption in favor of protecting new tools of expression or learning makes sense.

But this only begins the First Amendment inquiry. What recognized government interests can privacy laws pursue? And what methods are permissible to achieve those goals?

Free Speech Environmentalism

Louis Brandeis wrote in a famous First Amendment opinion that the “final end of the state was to make men free to develop their faculties.” Brandeis, a noted privacy advocate, frequently voted to strike down government speech regulations. But he also worried about private power’s potential to stunt citizens’ intellectual and civic development.

To borrow language from Brandeis, I believe that in pursuing privacy regulations, the government has a compelling interest in helping to make citizens free by creating the conditions where speech and thought can flourish. This includes shielding citizens from the most intrusive means of privacy-destroying private surveillance. Facial recognition databases are one example. The larger ecological disaster of online privacy has spawned externalities—invasive advertisements, data leaks, the atrophying of human attention. These surveillance byproducts imperil free thought and threaten democracy. Successful self-governance depends on citizens’ ability to think, learn, and make decisions based on that thinking and learning. The freedom of speech, in other words, is both an end of government and a means.

Such an environmental government interest in culling data pollution is distinct from the individualized privacy interests usually advanced—for example, that citizens may prevent misuse of their data. This transactional approach implies that consent can cure privacy invasions. But the harm of data collection can rarely be so cabined; track one person’s phone, and you can learn who they call or email or visit.

Law isn’t the only available remedy. Technologies like email encryption can help individuals better protect their data. But addressing data pollution on a societal scale requires decisive, collective action.

Stemming Data Pollution

Certainly there are limits to what means the government may employ to pursue the speech environment necessary to protect the freedom of thought. Privacy laws cannot target “core” political speech. Nor can they single out viewpoints or speakers. Commercial speech—that is, uses of data for purely economic gain—seems a more attractive target for regulation. But tech companies are not alone in exploiting our data. An especially popular facial recognition photo data set was created by university researchers.

I argue, then, that there are some private data collection practices that so injure privacy that they can be presumptively prohibited without offending the First Amendment—regardless of who the “speaker” is. Smartphone location tracking, for example, allows companies to assemble comprehensive snapshots of where individuals go and who they associate with. And the proliferation of facial recognition technologies threatens to chill expressive and associational conduct in person and online.

Admittedly, blanket bans run the risk of preventing socially beneficial means of knowledge creation that use locational, biometric, or other data. One solution could be to employ an institutional review board model—in which a government agency must review and approve activities that entail the sale or dissemination of certain types of data. The burden would rest on the applicant to show the public benefits of their activities outweigh privacy harms.

It bears repeating that in a scenario where the lone interest in pursuing privacy laws is transactional—that is, protecting one’s right to one’s data—presumptive bans on these practices likely could not survive narrow tailoring unless they included a safe harbor for securing user consent. The Illinois Biometric Information Privacy Act, which Clearview is challenging, includes such a provision.


The First Amendment belongs to Clearview as much as any citizen. But notwithstanding the First Amendment’s command to “make no law,” speech can, under some circumstances, be abridged. Determining whether and how speech can be regulated requires, first, an understanding of what governmental interests we recognize as compelling. The goal of government to make citizens free—free to think, learn, and associate—demands that the state be allowed to craft neutral, narrowly tailored laws to curb certain privacy-destroying practices. Even when those practices look like “speech.”

On this logic, political speech designed to “imperil the foundational principles ... that the First Amendment sits atop” should also no longer enjoy the protection of strict scrutiny. But that’s not “the traditions of our people and our law,” as it were.

You have seen clearly the tension between protecting the rights to learn, to think, and to speak and the protection of individual privacy against people who learn, think, and sell the results of their thinking about others. Whether the particular case of Clearview AI deserves the illustrative prominence you give it depends not on whether they are sleazy, which they are, but on how clearly the principles in conflict can be seen from a single illustration.

The route to improvement here, I think, is to honor fully the complexity of the phenomenon. As you rightly say, the US First Amendment is a deregulatory principle: that's what happens when Congress is told to make no law, while other societies are busily making laws that Congress and the State legislatures clearly cannot make.

But perhaps the issue isn't whether the First Amendment over-protects freedom of thought against privacy regulation. Perhaps the question is instead how to make privacy regulation consistent with the overarching commitment to freedom of thought. That would be the problem, at any rate, for those whose commitments to both are very strong, and who therefore are likely to reject an approach that requires them to be in conflict unnecessarily. That's my situation, just to take one guy at random, so in the next course I have to do that work. Wherever you want to take this essay, I hope that I will be able to respond to it, and to keep the conversation growing, in Part One of CPC.


r5 - 04 Feb 2021 - 04:27:21 - JohnClayton