Law in the Internet Society

NEPA as a Model for a National Privacy Protection Act

 
-- By ZaneMuller - 3 Feb 2020
 
Anyone who has clicked “I accept” understands the futility of a contractual or consent-based regime for online privacy protection. So while the EU pushes ahead with GDPR and other states consider adopting a version of the CCPA, privacy advocates should begin thinking seriously about more effective reform.
 The status quo is bad because mass surveillance harms everyone’s privacy – it reduces individuals’ capacity for secrecy and anonymity in ways that subvert their autonomy. And because of the architecture of the network, your failure to maintain secrecy or anonymity can harm my autonomy. That external harm cannot be remedied by consent, and so we need social standards of care.
If we accept that externality regulation is the appropriate model, the question becomes: what would such privacy protections actually look like? The 1970 National Environmental Policy Act offers one model, and a potentially useful template for those who support a comprehensive national remedy for the harm that surveillance capitalism has done to privacy and democracy. An examination of its key features suggests what a fully realized National Privacy Protection Act should look like.
 

The History and Structure of NEPA

 
NEPA was the most pronounced legislative expression of an environmental movement in the United States that traced its roots to the conservationism of Teddy Roosevelt, was bolstered by Cold War dread of nuclear winter, and was catalyzed by a growing public sense of environmental crisis made vivid by oil spills, burning rivers, and the threatened extinction of charismatic fauna. What is striking about the statutory language fifty years later is its tenor and clarity; no lip service whatever is paid to the virtues of industrial capitalism or the sanctity of jobs. Improving the quality of the environment by declaring a “basic national charter for protection of the environment” is the explicit, unqualified goal. Importantly, the statute articulates a democratic vision for environmental regulation: to “encourage and facilitate public involvement in decisions which affect the quality of the human environment.”
 
NEPA is notable for both its simplicity and its breadth of coverage. At its core, it is a procedural requirement applying to any and all action by the Federal government, from building interstates to approving the use of Federal funds by state and local governments. Yet the entire statute fills no more than seven pages, and requires simply that Federal actions be accompanied by a review process providing the public and its representatives with “high quality information… [comprising] accurate scientific analysis, expert agency comments, and public scrutiny.” This information comes in the form of Environmental Impact Statements, accompanied by a rigorous procedure for identifying and evaluating the effects of any action and allowing the public to weigh in. This was a departure from prior, more granular environmental legislation, such as the Clean Air Act, which prescribed limits on specific sources of pollution but was of limited effect in combating systemic environmental problems and novel sources of degradation.
 
Despite its emphasis on procedure and its lack of specific prohibitions, the strength and clarity of the Congressional mandate have allowed Federal agencies, including the EPA and the CEQ, to develop robust and comprehensive regulations that have withstood challenge in federal courts. For example, broad meaning is given to terms like “effects,” so that they include “indirect effects,” “induced” effects, and “related effects on natural systems including ecosystems.”
 
The environmental review process itself restrains environmental damage by the simple fact of imposing delays and costs on almost anyone who would alter the environment, and by holding public decision makers accountable for their approval of harmful projects. But crucially, it also gives standing to citizens and environmental groups who can use litigation as a tool of environmental protection. Combining public and private enforcement of environmental law makes it that much more effective.
 
 

Lessons for a National Privacy Protection Act

 
An effective NPPA would need a similar groundswell of public support across the political spectrum, as well as firm grounding in American traditions of privacy, free association, and free thought. Ideally it too would have a clear purpose: a “basic national charter for the protection of privacy,” unqualified by any nonsense about the virtues of big data or specious affirmations of the First Amendment. Democratic accountability would likewise be essential for restoring autonomy to the public and counterbalancing the asymmetries of information and power enjoyed by the likes of Google and Facebook.
 
Simplicity and breadth would be cardinal virtues of an effective NPPA; privacy harms, like environmental harms, are various, dynamic, and often difficult to pinpoint absent mandatory public disclosure and scientifically informed review. Michael Froomkin has proposed Privacy Impact Notices as an analog to the EIS, arguing that requiring them would “ignite a regulatory dynamic by collecting information about the privacy costs of previously unregulated activities.” Put simply, companies like Facebook would have to admit exactly what it is that they are doing with user data and defend its impact on the public.
 
A strong statement of Congressional intent, combined with authorization and funding for the creation of a Privacy Protection Agency, would empower the Federal government to regulate and rein in harmful and heretofore unregulated private surveillance practices. Ideally, the ambit of such a law would be identical to that of NEPA, covering any Federal action, including the domestic spying conducted by the NSA. But its reach to private companies would be assured by attaching the review process to Federal actions such as the licensing of the wireless spectrum or communications infrastructure spending.
 
The mere existence of such a law would cause reputation-sensitive large public companies to think harder about whether they really want to admit to their surveillance practices. But crucially, it would give standing to private actors who seek to defend privacy and create a legal environment where surveillance capitalism as currently practiced is no longer viable. The long-term success of NEPA in delivering on its ambition is cause for optimism.
 
