Threshold Questions for Developing Privacy Protections Modeled on Environmental Law

-- By ZaneMuller - 10 Jan 2020

Anyone who has clicked an “I accept” button can understand the futility of a contractual or consent-based regime for online privacy protection. So while the EU pushes ahead with the GDPR and American states consider adopting versions of the CCPA, privacy advocates should begin thinking seriously about more effective reform. My purpose here is to take a step further than we did in class on the idea of privacy protections modeled on environmental law.

The status quo is bad because mass surveillance harms everyone’s privacy – it reduces individuals’ capacity for secrecy and anonymity in ways that subvert their autonomy. And because of the architecture of the network, your failure to maintain secrecy or anonymity can harm my autonomy. That external harm cannot be remedied by consent, and so we need social standards of care.

If we accept that externality regulation is the appropriate model, the question becomes: what would such privacy protections actually look like? Curiously few students have tried to answer – only Zain Haq in 2017 – despite the class’s seemingly ready acceptance of the environmental-regulation analogy and the relative lack of writing anywhere on how exactly such regulation would work. A few legal scholars have written in this vein, but they offer incremental, market-oriented and generally flaccid proposals. Some make Zain’s mistake of taking the environmental analogy too far, using metaphors like “data spills” or “spam emissions” to characterize privacy harms.

A comprehensive prescription or model statute is well beyond the scope of this essay. I will instead try to identify and develop a few key questions that proponents of surveillance-externality regulation should consider at the outset.

What exactly are the harmful externalities we want to reduce?

Even after we have divided privacy into secrecy, anonymity and autonomy, the harms of digital surveillance remain somewhat difficult to articulate. There is the general unease stirred by too-specific targeted ads, the wasted hours of clicking and scrolling, the atomization of online communities into echo chambers, the spread of disinformation and resulting deterioration of democratic politics. For the law to address these, they need to be recharacterized in legally cognizable terms. For example, advocates could point to the chilled freedoms of speech and association that result from ubiquitous cameras, audio recorders and location tracking. Arguments that targeted advertising manipulates consumer behavior may not win easy acceptance given our national commitment to the idea of personal responsibility, but the insights of behavioral economics, choice architecture and design ethics offer useful conceptual frameworks. Adverse mental health impacts from digital surveillance could amount to emotional distress, and election law already regulates speech by political actors.
 

What metrics, if any, will we use to measure their reduction?

It is here that the environmental analogy becomes, I think, most troublesome. Early environmental regulations such as the Clean Air Act specified acceptable levels of pollution, typically measured in parts per million. This made them easy to comply with and enforce, and they were grounded in solid scientific evidence about exactly how much mercury poisons a rat, or how much of a certain pesticide in a water supply raises the rate of birth defects. Some pollutants, like CFCs, were deemed too harmful relative to their usefulness to be permitted at all.

It is not so simple with privacy. It may be that certain practices – say, deliberately spreading misinformation to sway an election – are so deleterious that they could be banned without serious good-faith opposition. But many of the externalities described in the previous section defy easy measurement. Furthermore, while toxic chemicals have fairly steady and uniform effects on those exposed to them, the harms from privacy violations are unlikely to be so evenly distributed. Proponents of confused ideas like a spam tax fall prey to the desire for the reliability and empiricism that quantitative measurements seem to guarantee. The answer instead may be qualitative or objective standards: practices that unreasonably chill freedom or knowingly harm mental health may be forbidden. But such regimes rely on community standards, and our society is already dangerously accustomed to comprehensive surveillance. Is a practice unreasonable if we tolerate it unthinkingly?

 

To whom do we assign duties and liabilities?

Existing environmental regulations fall into two broad categories: command-and-control rules placing strictures on polluting activity, and so-called “second generation” regulations that use markets as a tool for environmental protection. Command-and-control rules take the form of strict limits and prohibitions: auto emissions must not exceed such-and-such a level, you may not dump this substance into that body of water. Second-generation regulations are more along the lines of cap-and-trade schemes or environmental impact statements.

Obviously, those who would spy on the general public, or who make the tools that enable the practice, are the logical parties to hold liable, existing law notwithstanding. Market-based regulations are likely to fail for the same reasons that the current consent regime is ineffective – sophisticated companies will work around them. Existing proposals for the externality-regulation framework, however, hew exclusively to the market-based approach: Michael Froomkin, for example, suggests that anyone undertaking private digital surveillance of the public be required to release something akin to an environmental impact statement. It is difficult to imagine this having any real effect.

 
