Computers, Privacy & the Constitution

UdiKarklinskyFirstPaper 14 - 30 Apr 2017 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="OldPapers"

Complementing Notice with Periodic Disclosures


UdiKarklinskyFirstPaper 13 - 22 Mar 2017 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="OldPapers"

Complementing Notice with Periodic Disclosures


UdiKarklinskyFirstPaper 12 - 11 May 2016 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="OldPapers"

Complementing Notice with Periodic Disclosures


UdiKarklinskyFirstPaper 11 - 26 Jun 2015 - Main.MarkDrake
Line: 1 to 1
Changed:
<
<
META TOPICPARENT name="FirstPaper"
>
>
META TOPICPARENT name="OldPapers"
 

Complementing Notice with Periodic Disclosures


UdiKarklinskyFirstPaper 10 - 14 May 2015 - Main.UdiKarklinsky
Line: 1 to 1
 
META TOPICPARENT name="FirstPaper"
Changed:
<
<

Complementing Notice with Periodical Disclosures

>
>

Complementing Notice with Periodic Disclosures

 

Introduction

Changed:
<
<
Privacy policies or terms of use agreements (“notices”) are too long, time-consuming, and complicated for most people, and therefore do not produce truly informed consent from those who click “agree”. To make things worse, notices often require you to consent at an early stage to various collections and uses of data that span a long period of time and are very hard to assess in advance. This essay suggests a framework, drawn from the field of behavioral economic analysis of consumer protection, that I found helpful in thinking about these problems and, most importantly, in generating ideas for a solution. It should be noted that I am familiar with a classmate’s interesting essay on notices, but I address the issue from a very different angle.
>
>
Privacy policies or terms of use agreements (“notices”) are too long, time-consuming, and complicated for most people, and therefore do not produce truly informed consent from those who click “agree”. To make things worse, notices often require you to consent at an early stage to various collections and uses of data that span a long period of time and are very hard to assess in advance. This essay considers whether these problems could be solved by complementing the notice with ongoing periodic disclosures that provide information about the data the user has given away and, to the extent possible, the risks involved.
 
Added:
>
>
At least theoretically, such an idea shows great promise. Ideally, we could imagine disclosures that provide each user with a periodic review of the data acquired from him specifically, and a general explanation of how this data has been used. Such personalized disclosures could demonstrate to people what they have been sacrificing, and enable a more informed reassessment of personal risks. Because many users consider privacy risks remote and highly unlikely to affect their lives, such reports, which would emphasize each individual’s specific use patterns and risks, could have a major impact that is unattainable through notices.
 
Added:
>
>
Theoretically, such disclosures could become reality in one of two ways: through regulator-mandated disclosures, or through third parties that would provide such periodic “disclosures” to interested users.
 
Deleted:
<
<

The Framework

 
Changed:
<
<
Bar-Gill and Ferrari discuss the issue of “consumer mistakes,” where imperfect information and imperfect rationality lead consumers to misperceptions about the products they use. In certain cases this harms consumers, and the authors argue that the most harmful mistakes are those concerning the individual consumer’s product use pattern, as opposed to mistakes about the product’s attributes or about average use patterns (because the latter are easier to identify and correct quickly). As a solution, they suggest that when the seller has a long-term relationship with the consumer and is therefore voluntarily collecting individual use information, regulation should mandate certain seller disclosures of the consumer’s product use pattern. For instance, credit card customers tend toward “optimism” and often fail to take into account the probability that they personally will end up paying over-limit and late fees. Requiring credit card issuers to disclose individual fee-paying patterns could help gradually correct individual consumers’ misperceptions.
>
>

Regulatory Solutions

 
Changed:
<
<
This framework, I argue, could be applicable to notices. In some sense, consumers’ automatic consent to notices, and their continued “pay-with-data” exchanges, reflect a “consumer mistake” that stems from information asymmetries and imperfect rationality (optimism, neglect of small probabilities, and myopic behavior). To be clear, I do not argue that mistakes about overpaying a few dollars a month are of the same harm and magnitude as the loss of privacy; only that, from a pragmatic standpoint, such framing could be insightful and productive. Like credit card customers, consenting visitors in various online “pay-with-data” exchanges fail to grasp the long-term consequences of their consent to the initial “contract”. Mechanisms designed to improve the effectiveness of notices could certainly raise people’s awareness, but they may be inherently limited by their timing, usually at the beginning of the relationship. At that stage, even if the notice is perfectly comprehensible, all one can truly learn about is the “product’s attributes” – what data a certain website collects, for what purposes, and so on. Because of consumers’ imperfect information and propensity toward optimism (“this wouldn’t happen to me”), such “general” notices fail to get through.
>
>
The idea of forcing all websites to provide such periodic disclosures might sound tempting, but there are several serious issues that should be taken into consideration.
 
Added:
>
>
First, in the age of Big Data, and given most people’s limited technical capabilities, one could worry that such disclosures would still be too complicated for users, who would find themselves clueless when deciphering masses of data thrown at them. This, I believe, could be solved by requiring websites to provide users with automated “summaries” or “highlights” of their recent privacy exposure. For example, a user might benefit from a brief periodic report explaining that the application possesses data about his whereabouts on X days over the last year/month/week. An even more effective disclosure would highlight certain personal details. Such “personalization” makes it more likely that the individual will pay attention, as it brings to mind more realistic scenarios.
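To make this concrete, here is a minimal sketch, in Python, of how a site that already logs collection events might generate the kind of automated “summary” described above. The event log, categories, and field names are hypothetical illustrations for this essay, not any real site’s API.

```python
from collections import defaultdict
from datetime import date

# Hypothetical collection-event log: (day, category, detail).
# A real site would pull these from its own databases.
EVENTS = [
    (date(2015, 5, 1), "location", "New York, NY"),
    (date(2015, 5, 1), "reading", "article #4521"),
    (date(2015, 5, 3), "location", "Boston, MA"),
    (date(2015, 5, 9), "reading", "article #4533"),
]

def periodic_summary(events, since):
    """Summarize, per category, on how many days data was collected."""
    recent = [e for e in events if e[0] >= since]
    days_seen = defaultdict(set)
    for day, category, _ in recent:
        days_seen[category].add(day)
    lines = [f"Your privacy summary since {since}:"]
    for category, days in sorted(days_seen.items()):
        lines.append(f"- {category}: data collected on {len(days)} day(s)")
    if recent:  # "highlight" one concrete item to personalize the report
        day, category, detail = recent[-1]
        lines.append(f"- for example, on {day} we recorded your {category}: {detail}")
    return "\n".join(lines)

print(periodic_summary(EVENTS, since=date(2015, 5, 1)))
```

The point of the final “highlight” line is exactly the personalization argued for above: one concrete recorded detail is more likely to get a user’s attention than an aggregate count.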
 
Added:
>
>
Second, for websites that collect and store personal data, I do not think it would be too much of a technical or financial burden to provide such summarized reports, but there are very clear limits to their ability to convey the full extent of the privacy exposure. For instance, on many websites we are monitored not only by the website itself but also by other companies providing ad servers. The original website might be able to report what personal information could have been collected, but it would be limited in its ability to say what the other companies actually collected, and especially what they did with the information. Also, when data is collected and then sold to “data brokers” of all sorts, the original website will not be able to say what ultimately happened to the information. This places a very clear limit on websites’ ability to reflect the full extent of the user’s risk exposure.
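The third-party limit is easy to see mechanically: a disclosure tool can observe which outside hosts a page pulls resources from and match them against a list of known trackers, but it cannot see onward sales. A rough sketch, assuming a hypothetical resource log such as a browser extension might record (the tracker list here is a tiny illustrative sample, not a real blocklist):

```python
from urllib.parse import urlparse

# A tiny, illustrative sample; real blocklists (like those used by
# ad blockers) contain thousands of tracker domains.
KNOWN_TRACKERS = {"doubleclick.net", "scorecardresearch.com", "adnxs.com"}

def third_party_hosts(page_url, resource_urls):
    """Return third-party hosts a page loads, flagging known trackers."""
    site = urlparse(page_url).hostname
    report = []
    for url in resource_urls:
        host = urlparse(url).hostname or ""
        if host and host != site:
            base = ".".join(host.split(".")[-2:])  # crude eTLD+1 guess
            report.append((host, base in KNOWN_TRACKERS))
    return report

# Hypothetical resource list, as a browser extension might observe it.
resources = [
    "https://news-site.example/story.css",
    "https://ad.doubleclick.net/pixel.gif",
    "https://sb.scorecardresearch.com/beacon.js",
]
for host, tracked in third_party_hosts("https://news-site.example/story", resources):
    print(f"{host:35} known tracker: {tracked}")
```

Note what the sketch cannot do: it identifies who else was watching, but says nothing about what those companies collected or where the data went afterward, which is precisely the limitation described above.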
 
Changed:
<
<

Thinking About Solutions

>
>
Third, mandatory ongoing disclosures, even if designed thoughtfully by the regulator, might not be as effective as hoped. Companies are likely to make disclosures as “dry” as possible, and it would be difficult to require them to highlight the individual risks effectively.
 
Changed:
<
<
Bar-Gill and Ferrari argue in favor of mandating ongoing individual use-pattern disclosures when the seller has a long-term relationship with the consumer and is voluntarily collecting individual use information. Obviously, websites that present notices (for the collection and use of data) fit this description perfectly.
>
>
Fourth, such regulation would require a very significant shift from the existing regulatory regime for data privacy. The FTC Act and most other US privacy laws do not give individuals a right to access the collected data, and in my research I could not identify any law requiring similar privacy-related periodic disclosures. California enacted a security breach notification law (California Civil Code §1798.82), which offers a partial parallel, but it deals with “breaches”, while the problems I describe concern consented collection and use of data, an entirely different matter.
 
Changed:
<
<
Alongside “improved” notices, there could also be great benefit in an ongoing individualized use-pattern disclosure mechanism that gives people a chance to gradually “correct their privacy mistakes.” Ideally, a website’s disclosure should provide each user with a periodic review of the data it acquired from him specifically, and a general explanation of how this data has been used. Such personalized disclosure could demonstrate to people what information they have been giving up, and enable a more informed reassessment of personal risks.
>
>
Also, looking forward, it does not appear that Congress is moving in the direction discussed here, as reflected in the recent federal privacy bills S.1995 (Personal Data Protection and Breach Accountability Act of 2014) and S.2025 (Data Broker Accountability and Transparency Act) (though I cannot attest to their chances of moving forward, or to whether they reflect a “wider” interest in Congress).
 
Deleted:
<
<
In the age of Big Data, and given most people’s limited technical capabilities, one could worry that such disclosures would still be too complicated for consumers, but in my opinion this depends on design. Throwing masses of data at consumers would probably be ineffective, but an automatic “summary” or “highlights” could be very helpful. For example, a user might benefit from a brief periodic report explaining that the application possesses data about his whereabouts on X days over the last year/month/week. An even more effective disclosure would highlight certain personal details that were collected about you, and provide some explanation of their use. A more personalized disclosure is more likely to reach people, demonstrating what personal information is exposed and making them think twice about whether it is worth it.
 
Changed:
<
<
The big question is how such disclosures could become reality. Regulator-mandated disclosures could, in my opinion, be an effective solution for the “use of data” as well. However, it is important to note that personal data privacy is less regulated than general consumer protection, and applying this idea here is therefore somewhat more “ambitious”. Also, mandatory ongoing disclosures, even if designed thoughtfully by the regulator, might not be as effective as hoped. Companies are likely to make disclosures as “dry” as possible, and it would be difficult to require them to highlight the individual risks effectively. In that regard, technical solutions, putting the “disclosure” in the hands of an independent third party with better-aligned incentives, might have some advantages over regulator-mandated solutions. Perhaps, just as tosdr.org provides accessibility at the notice stage, others could assist on an ongoing basis, providing automatic periodic reports that identify the information you provide to a certain website and, more importantly, reflect the risks involved in a comprehensible manner. For instance, such software could provide automated, simple explanations about the “worst-case scenarios” it deduces: “news website Y holds a list of all articles you read this year, including this one about ‘how to hide that you cheated on your wife.’ This information has probably been sold to Z and W and could end up…”. Although there are technical measures that allow users to understand, in some circumstances, what data they provided, in my research I did not find software that offers ongoing, potential-risk-oriented “disclosures” that deal exactly with the informational limitations so prevalent among users.
>
>

Technical Solutions

 
Changed:
<
<
>
>
Alternatively, we could think of non-regulatory, technical means of providing such “disclosure”. An independent third party might have better-aligned incentives than the notice-providing website, and could therefore present information in a more comprehensible format and stress, rather than play down, the individual risks. As discussed above, such third parties would not be able to tell you where the information goes after its initial collection, but they might at least be able to monitor what information you gave away. Perhaps, if this data-mining were coupled with some general expertise about how certain websites operate, such third parties could present an educated assessment of the individual’s risks. For instance: “news website Y probably holds a list of all articles you read this year, including ‘how to hide that you cheated on your wife.’ In our assessment, this information could end up…” Such assessments are surely much less effective than solid information, but could still have some, limited, impact on people’s awareness. An additional issue is that technical solutions require each individual to approach (register with or download) the third party at some point, and many are unlikely to make the effort.
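As a sketch of what such a third-party report might look like, assuming (hypothetically) that a browser extension has logged what the user handed to each site, and that the third party maintains general “expertise” profiles of sites’ retention practices, neither of which exists in this form today:

```python
# Hypothetical site-expertise profiles: what each site is believed to retain.
SITE_PROFILES = {
    "news-site.example": "keeps a per-user history of every article read",
    "shop.example": "sells purchase histories to marketing partners",
}

# Hypothetical data-mining output: site -> items the user gave away.
OBSERVED = {
    "news-site.example": ["article: 'how to hide that you cheated'"],
    "shop.example": ["purchase: pregnancy test", "address: 5th Ave"],
}

def risk_report(observed, profiles):
    """Combine observed disclosures with site expertise into warnings."""
    for site, items in observed.items():
        practice = profiles.get(site, "retention practices unknown")
        print(f"{site}: {practice}.")
        for item in items:
            print(f"  - you disclosed {item}")

risk_report(OBSERVED, SITE_PROFILES)
```

The design choice here mirrors the argument in the text: the observed-data half is solid (the third party saw what you sent), while the profile half is only an educated assessment, which is why such reports would be less authoritative than a site’s own disclosure.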
 
Deleted:
<
<
This is responsive to one aspect of my comments last time around. This draft presents the problem that the obscurity of the last draft obscured. Now, given an "application" of one article, we can perceive clearly what nonsense the whole proposition was from the beginning. This is progress.
 
Changed:
<
<
So—on the basis of some nonsense some people said once, which we don't actually analyze but just sort of assume must be correct and meaningful because they said it—we can imagine a regulatory intervention that would require data-miners to show the ore what was made from it. Never mind, as the draft itself notes, that no factual similarity exists between the credit card transaction log and the Facebook weblog. Never mind that in one case the intervention requires the consumer to be notified about his own spending, and in the other case the requirement would be for disclosure and analysis of third-party activity. Never mind the differences between the regulation of banking and the regulation of speech. Never mind, in fact, anything that would distinguish between the nonsense we are supposed to assume wasn't nonsense in its original context solely—so far as the draft gives us reason to believe it at all—because it was published once, and its importance as an "application" in this context.

It is almost as though the goal were to avoid thinking. Two other fellows thought about something once, and if I simply mechanically "apply" their thinking to the current completely different situation at least I won't have to do any thinking of my own.

Let's go from drafts that tell me some theory is in general useful, and drafts that tell me that someone else used them once, to a thought of your own which isn't recommended by its generation in any particular school of dogma, or by being lifted from something that someone else thought. State your idea simply at the beginning of the draft. Show how that idea develops from the facts you have learned about the world. Answer some of the obvious questions or objections. Leave the reader with an implication she can explore on her own given what you have thought so far for her.

>
>

Conclusion

 
Added:
>
>
The idea of complementing notices with periodic disclosures could seem promising: theoretically, it provides individuals with ongoing information that would allow them to gradually “correct” unwise consents. However, in practice there are significant limitations that reduce both the likelihood of such an idea coming to life and its potential effectiveness.
 
META TOPICMOVED by="UdiKarklinsky" date="1425659661" from="CompPrivConst.TWikiGuestFirstPaper" to="CompPrivConst.UdiKarklinskyFirstPaper"

UdiKarklinskyFirstPaper 9 - 12 May 2015 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="FirstPaper"

Complementing Notice with Periodical Disclosures

Line: 28 to 28
The big question is how such disclosures could become reality. Regulator-mandated disclosures could, in my opinion, be an effective solution for the “use of data” as well. However, it is important to note that personal data privacy is less regulated than general consumer protection, and applying this idea here is therefore somewhat more “ambitious”. Also, mandatory ongoing disclosures, even if designed thoughtfully by the regulator, might not be as effective as hoped. Companies are likely to make disclosures as “dry” as possible, and it would be difficult to require them to highlight the individual risks effectively. In that regard, technical solutions, putting the “disclosure” in the hands of an independent third party with better-aligned incentives, might have some advantages over regulator-mandated solutions. Perhaps, just as tosdr.org provides accessibility at the notice stage, others could assist on an ongoing basis, providing automatic periodic reports that identify the information you provide to a certain website and, more importantly, reflect the risks involved in a comprehensible manner. For instance, such software could provide automated, simple explanations about the “worst-case scenarios” it deduces: “news website Y holds a list of all articles you read this year, including this one about ‘how to hide that you cheated on your wife.’ This information has probably been sold to Z and W and could end up…”. Although there are technical measures that allow users to understand, in some circumstances, what data they provided, in my research I did not find software that offers ongoing, potential-risk-oriented “disclosures” that deal exactly with the informational limitations so prevalent among users.
Added:
>
>

This is responsive to one aspect of my comments last time around. This draft presents the problem that the obscurity of the last draft obscured. Now, given an "application" of one article, we can perceive clearly what nonsense the whole proposition was from the beginning. This is progress.

So—on the basis of some nonsense some people said once, which we don't actually analyze but just sort of assume must be correct and meaningful because they said it—we can imagine a regulatory intervention that would require data-miners to show the ore what was made from it. Never mind, as the draft itself notes, that no factual similarity exists between the credit card transaction log and the Facebook weblog. Never mind that in one case the intervention requires the consumer to be notified about his own spending, and in the other case the requirement would be for disclosure and analysis of third-party activity. Never mind the differences between the regulation of banking and the regulation of speech. Never mind, in fact, anything that would distinguish between the nonsense we are supposed to assume wasn't nonsense in its original context solely—so far as the draft gives us reason to believe it at all—because it was published once, and its importance as an "application" in this context.

It is almost as though the goal were to avoid thinking. Two other fellows thought about something once, and if I simply mechanically "apply" their thinking to the current completely different situation at least I won't have to do any thinking of my own.

Let's go from drafts that tell me some theory is in general useful, and drafts that tell me that someone else used them once, to a thought of your own which isn't recommended by its generation in any particular school of dogma, or by being lifted from something that someone else thought. State your idea simply at the beginning of the draft. Show how that idea develops from the facts you have learned about the world. Answer some of the obvious questions or objections. Leave the reader with an implication she can explore on her own given what you have thought so far for her.

 
META TOPICMOVED by="UdiKarklinsky" date="1425659661" from="CompPrivConst.TWikiGuestFirstPaper" to="CompPrivConst.UdiKarklinskyFirstPaper"

UdiKarklinskyFirstPaper 8 - 12 May 2015 - Main.UdiKarklinsky
Line: 1 to 1
 
META TOPICPARENT name="FirstPaper"
Changed:
<
<

Law and Economics Applications to Privacy Law

>
>

Complementing Notice with Periodical Disclosures

 

Introduction

Changed:
<
<
Professor Moglen has been accusing us students, and the vast majority of society in general, of voluntarily surrendering freedom for the limited benefits of “shiny” products. According to Moglen, by inviting these products into our lives we self-inflict harm, ranging from the “trivial” use of private information for commercial purposes to the grave risk of persecution by those with power.
>
>
Privacy policies or terms of use agreements (“notices”) are too long, time-consuming, and complicated for most people, and therefore do not produce truly informed consent from those who click “agree”. To make things worse, notices often require you to consent at an early stage to various collections and uses of data that span a long period of time and are very hard to assess in advance. This essay suggests a framework, drawn from the field of behavioral economic analysis of consumer protection, that I found helpful in thinking about these problems and, most importantly, in generating ideas for a solution. It should be noted that I am familiar with a classmate’s interesting essay on notices, but I address the issue from a very different angle.
 
Deleted:
<
<
Under Moglen’s premise, I would like to suggest relevant theories from the fields of classical and behavioral law and economics (“L&E”) that are useful in understanding why people still choose to use these products. Furthermore, such analysis has the potential to contribute to activists’ discussions about how to raise awareness of the harm people inflict upon themselves and society, and how to persuade people to amend their behavior.
 
Changed:
<
<

Classical L&E

>
>

The Framework

 
Changed:
<
<
One possibly relevant theory, drawn from the field of classical L&E, is rational ignorance (or apathy) (Lemley 2001, Rational Ignorance at the Patent Office). Consumers might be rationally ignorant if they make an informed decision that the expected cost of further educating themselves about certain risks exceeds the expected harm. One might argue that ours is indeed such a case of economically rational behavior: the harm from the use of personal information for commercial purposes does not bother many consumers, and the risk of persecution is extremely low. However, an economically rational actor operates on the basis of the full information available to him (at no cost), and that does not seem to be the case here. The majority of consumers are not even aware of any risks involved, and surely cannot perceive the harsh long-term societal risks they impose upon themselves. Therefore, many consumers face no dilemma at all about how much to invest in exploring whether the product is “worth the risks.”
>
>
Bar-Gill and Ferrari discuss the issue of “consumer mistakes,” where imperfect information and imperfect rationality lead consumers to misperceptions about the products they use. In certain cases this harms consumers, and the authors argue that the most harmful mistakes are those concerning the individual consumer’s product use pattern, as opposed to mistakes about the product’s attributes or about average use patterns (because the latter are easier to identify and correct quickly). As a solution, they suggest that when the seller has a long-term relationship with the consumer and is therefore voluntarily collecting individual use information, regulation should mandate certain seller disclosures of the consumer’s product use pattern. For instance, credit card customers tend toward “optimism” and often fail to take into account the probability that they personally will end up paying over-limit and late fees. Requiring credit card issuers to disclose individual fee-paying patterns could help gradually correct individual consumers’ misperceptions.
 
Changed:
<
<
Therefore, I think other theories – from the field of behavioral L&E, which sets aside the assumption of rationality – could provide better insight here.
>
>
This framework, I argue, could be applicable to notices. In some sense, consumers’ automatic consent to notices, and their continued “pay-with-data” exchanges, reflect a “consumer mistake” that stems from information asymmetries and imperfect rationality (optimism, neglect of small probabilities, and myopic behavior). To be clear, I do not argue that mistakes about overpaying a few dollars a month are of the same harm and magnitude as the loss of privacy; only that, from a pragmatic standpoint, such framing could be insightful and productive. Like credit card customers, consenting visitors in various online “pay-with-data” exchanges fail to grasp the long-term consequences of their consent to the initial “contract”. Mechanisms designed to improve the effectiveness of notices could certainly raise people’s awareness, but they may be inherently limited by their timing, usually at the beginning of the relationship. At that stage, even if the notice is perfectly comprehensible, all one can truly learn about is the “product’s attributes” – what data a certain website collects, for what purposes, and so on. Because of consumers’ imperfect information and propensity toward optimism (“this wouldn’t happen to me”), such “general” notices fail to get through.
 
Deleted:
<
<

Behavioral L&E

 
Changed:
<
<

Consumers’ misperceptions

>
>

Thinking About Solutions

 
Changed:
<
<
Consumers tend toward myopic behavior, neglect of small probabilities (Sunstein 2001, Probability Neglect: Emotions, Worst Cases, and Law), and optimism (Jolls 1998, A Behavioral Approach to Law and Economics). These well-documented tendencies could lead consumers to underestimate and disregard the risks of using such products. Consumers prone to myopic behavior might disregard the long-term effects of their decisions, and therefore will not fear for their personal freedom (even if they understand the gravity of such harm) as long as they consider it a longer-term threat. The tendency to ignore small probabilities could have a similar result, if consumers regard a future in which they are under imminent threat of persecution as a low-probability “state of nature”. Also, even if consumers realize that the threat already exists, they might still disregard it if they live where there is currently a very small chance of such an occurrence: even one who fully understands the gravity of being persecuted will not take it into account if he assigns it a probability of zero. Similarly, optimism could lead a consumer who understands the general grave risks involved to think that “it wouldn’t happen to me.”
>
>
Bar-Gill and Ferrari argue in favor of mandating ongoing individual use-pattern disclosures when the seller has a long-term relationship with the consumer and is voluntarily collecting individual use information. Obviously, websites that present notices (for the collection and use of data) fit this description perfectly.
 
Changed:
<
<
This leads to the conclusion that perhaps, in order for things to get better, they first have to get worse. As the situation deteriorates, more and more people will be exposed to stories about those harmed, and only such “availability” could eliminate these tendencies. If the government took your neighbor, you know that it might eventually come for you as well. Is it possible to fix these misperceptions in less painful ways? In the context of crime deterrence, it has been argued that in order to eliminate criminals’ perception that “I’ll never get caught”, the police should make arrests as “loud” as possible, so as to reach other criminals’ attention and affect their perceived probability of arrest. Applying this logic here provides an additional justification for activists to communicate the risks effectively to as many consumers as possible.
>
>
Alongside “improved” notices, there could also be great benefit in an ongoing individualized use-pattern disclosure mechanism that gives people a chance to gradually “correct their privacy mistakes.” Ideally, a website’s disclosure should provide each user with a periodic review of the data it acquired from him specifically, and a general explanation of how this data has been used. Such personalized disclosure could demonstrate to people what information they have been giving up, and enable a more informed reassessment of personal risks.
 
Changed:
<
<
So the result of the application of all this rhetoric to the situation is to confirm our view that people should be better informed and the people who are better informed should work as hard as possible to inform others.

Contract Design

Another contribution of behavioral L&E is to the understanding of information asymmetries and contract design. Research on the exploitative nature of consumer contracts has had a significant impact on regulation and practices in various markets, such as home loans and mobile telecommunications (Bar-Gill 2009, The Law, Economics and Psychology of Subprime Mortgage Contracts). As far as I know, despite growing attention to the relations between companies such as Google and Facebook and their users, these contracts/privacy policies are still not taken as seriously (by researchers and regulators) as more “classic” consumer contracts. The reason, I suspect, is the lack of money consideration; a very unpersuasive justification. To the contrary – precisely because no money changes hands, these cases are even more prone to abusive practices: consumers are much better “trained” to identify how much they are paying in a deal than what they are giving away. Therefore, greater efforts are required to ensure disclosures sufficient to overcome information asymmetries and allow consumers to truly realize what they sacrifice.

It is also important that consumers understand that the freedoms they give up have value (even monetary value!). Researchers have identified an endowment effect: people tend to ascribe more value to what they own (Knetsch 1989, The Endowment Effect and Evidence of Nonreversible Indifference Curves). This effect means that if consumers had a better understanding of the freedoms they lose and their worth, and developed a sense of ownership over them, Google would have a much harder time taking them. In that regard, perhaps even information about how much I, as a consumer, am worth to Google could affect my decision.

Here the outcome is that if people knew their value they wouldn't sell too low. But presumably we should not run an active slave market so that people would know what their bodies were worth, and wouldn't accept low wages. Or should we?

Conclusion

The application of (mainly behavioral) L&E to privacy law could improve our understanding of why consumers disregard the harms of their choices, and raise interesting ideas about how to promote change. Obviously, this paper has only touched upon several applications, and there is much room for further thinking.

I'm not sure what "applications" we got here. I think what was demonstrated instead is that a vocabulary could be applied. We now know that it is possible to use some pre-existing system of description to describe also this stuff. There is absolutely no intellectual or social payoff yet for doing so, so far as the essay extends. We haven't learned anything new, or been presented with a new idea, yet.

I think the way to improve this draft is by having an actual good idea to offer the reader up front. You could then show the reader how you got this good idea through behavioral economics, if that's in fact how you did it, which would be an advertisement for behavioral economics, if your purpose is to advertise it. Those of us who are more interested in the idea itself than we are in how you came by it could pay attention to that too, of course.

>
>
In the age of Big Data, and given most people’s limited technical capabilities, one could worry that such disclosures would still be too complicated for consumers, but in my opinion this depends on design. Throwing masses of data at consumers would probably be ineffective, but an automatic “summary” or “highlights” could be very helpful. For example, a user might benefit from a brief periodic report explaining that the application possesses data about his whereabouts on X days over the last year/month/week. An even more effective disclosure would highlight certain personal details that were collected about you, and provide some explanation of their use. A more personalized disclosure is more likely to reach people, demonstrating what personal information is exposed and making them think twice about whether it is worth it.
 
Added:
>
>
The big question is how such disclosures could become reality. Regulator-mandated disclosures could, in my opinion, be an effective solution for the “use of data” as well. However, it is important to note that personal data privacy is less regulated than general consumer protection, and applying this idea here is therefore somewhat more “ambitious”. Also, mandatory ongoing disclosures, even if designed thoughtfully by the regulator, might not be as effective as hoped. Companies are likely to make disclosures as “dry” as possible, and it would be difficult to require them to highlight the individual risks effectively. In that regard, technical solutions, putting the “disclosure” in the hands of an independent third party with better-aligned incentives, might have some advantages over regulator-mandated solutions. Perhaps, just as tosdr.org provides accessibility at the notice stage, others could assist on an ongoing basis, providing automatic periodic reports that identify the information you provide to a certain website and, more importantly, reflect the risks involved in a comprehensible manner. For instance, such software could provide automated, simple explanations about the “worst-case scenarios” it deduces: “news website Y holds a list of all articles you read this year, including this one about ‘how to hide that you cheated on your wife.’ This information has probably been sold to Z and W and could end up…”. Although there are technical measures that allow users to understand, in some circumstances, what data they provided, in my research I did not find software that offers ongoing, potential-risk-oriented “disclosures” that deal exactly with the informational limitations so prevalent among users.
 
META TOPICMOVED by="UdiKarklinsky" date="1425659661" from="CompPrivConst.TWikiGuestFirstPaper" to="CompPrivConst.UdiKarklinskyFirstPaper"

UdiKarklinskyFirstPaper 7 - 29 Apr 2015 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="FirstPaper"

Law and Economics Applications to Privacy Law

Line: 26 to 26
This leads to the conclusion that perhaps, in order for things to get better, they first have to get worse. As the situation deteriorates, more and more people will be exposed to stories about those harmed, and only such “availability” could eliminate these tendencies. If the government took your neighbor, you know that it might eventually come for you as well. Is it possible to fix these misperceptions in less painful ways? In the context of crime deterrence, it has been argued that in order to eliminate criminals’ perception that “I’ll never get caught”, the police should make arrests as “loud” as possible, so as to reach other criminals’ attention and affect their perceived probability of arrest. Applying this logic here provides an additional justification for activists to communicate the risks effectively to as many consumers as possible.
Added:
>
>
So the result of the application of all this rhetoric to the situation is to confirm our view that people should be better informed and the people who are better informed should work as hard as possible to inform others.

 

Contract Design

Line: 33 to 38
It is also important that consumers understand that the freedoms they give up have value (even monetary value!). Researchers have identified an endowment effect: people tend to ascribe more value to what they own (Knetsch 1989, The Endowment Effect and Evidence of Nonreversible Indifference Curves). This effect means that if consumers had a better understanding of the freedoms they lose and their worth, and developed a sense of ownership over them, Google would have a much harder time taking them. In that regard, perhaps even information about how much I, as a consumer, am worth to Google could affect my decision.
Added:
>
>
Here the outcome is that if people knew their value they wouldn't sell too low. But presumably we should not run an active slave market so that people would know what their bodies were worth, and wouldn't accept low wages. Or should we?

 

Conclusion

The application of (mainly behavioral) L&E to privacy law could improve our understanding of why consumers disregard the harms of their choices, and raise interesting ideas about how to promote change. Obviously, this paper has only touched upon several applications, and there is much room for further thinking.

Changed:
<
<
>
>
I'm not sure what "applications" we got here. I think what was demonstrated instead is that a vocabulary could be applied. We now know that it is possible to use some pre-existing system of description to describe also this stuff. There is absolutely no intellectual or social payoff yet for doing so, so far as the essay extends. We haven't learned anything new, or been presented with a new idea, yet.

I think the way to improve this draft is by having an actual good idea to offer the reader up front. You could then show the reader how you got this good idea through behavioral economics, if that's in fact how you did it, which would be an advertisement for behavioral economics, if your purpose is to advertise it. Those of us who are more interested in the idea itself than we are in how you came by it could pay attention to that too, of course.

 
META TOPICMOVED by="UdiKarklinsky" date="1425659661" from="CompPrivConst.TWikiGuestFirstPaper" to="CompPrivConst.UdiKarklinskyFirstPaper"

UdiKarklinskyFirstPaper 6 - 06 Mar 2015 - Main.UdiKarklinsky
Line: 1 to 1
 
META TOPICPARENT name="FirstPaper"
Changed:
<
<
*Data protection in France following the terrorist attacks in Paris: Nothing new under the sun*
>
>

Law and Economics Applications to Privacy Law

Introduction

Professor Moglen has been accusing us students, and the vast majority of society in general, of voluntarily surrendering freedom for the limited benefits of “shiny” products. According to Moglen, by inviting these products into our lives we self-inflict harm, ranging from the “trivial” use of private information for commercial purposes to the grave risk of persecution by those with power.

Under Moglen’s premise, I would like to suggest relevant theories from the fields of classical and behavioral law and economics (“L&E”) that are useful in understanding why people still choose to use these products. Furthermore, such analysis has the potential to contribute to activists’ discussions about how to raise awareness of the harm people inflict upon themselves and society, and how to persuade people to amend their behavior.

Classical L&E

 
Added:
>
>
One possibly relevant theory, drawn from the field of classical L&E, is rational ignorance (or apathy) (Lemley 2001, Rational Ignorance at the Patent Office). Consumers might be rationally ignorant if they make an informed decision that the expected cost of further educating themselves about certain risks exceeds the expected harm. One might argue that ours is indeed such a case of economically rational behavior: the harm from the use of personal information for commercial purposes does not bother many consumers, and the risk of persecution is extremely low. However, an economically rational actor operates on the basis of the full information available to him (at no cost), and that does not seem to be the case here. The majority of consumers are not even aware of any risks involved, and surely cannot perceive the harsh long-term societal risks they impose upon themselves. Therefore, many consumers face no dilemma at all about how much to invest in exploring whether the product is “worth the risks.”
 
Changed:
<
<
Introduction
>
>
Therefore, I think other theories – from the field of behavioral L&E, which sets aside the assumption of rationality – could provide better insight here.
 
Deleted:
<
<
The terrorist attacks that occurred in Paris between the 7th and the 9th of January 2015 have been called “France’s 9/11” by certain international and French newspapers. If, from an operational point of view, these attacks differ greatly from what happened in the United States in 2001, from a legal standpoint they may trigger the same consequences, especially with regard to data protection. Indeed, the French government, as it has already announced, intends to reform several aspects of its national legislation relating to data protection, and has called on the European Union to act similarly. Listening to the speech of the French Prime Minister, one could not miss the references to the Patriot Act enacted by the United States after 9/11. Although no legislative step has been undertaken yet, I believe this situation deserves an overview. Therefore, I will address in turn (i) the current French law on data protection and (ii) the purported reforms announced by the French government.
 
Changed:
<
<
i.   The current French law on data protection
>
>

Behavioral L&E

 
Changed:
<
<
On 6th January 1978, France enacted the Law on Information Technology, Data Files and Civil Liberties. France was thus one of the first countries to adopt a law dealing with computers and freedoms. This law established the Commission Nationale de l’Informatique et des Libertés (“CNIL”), an independent administrative authority whose main goal is to ensure the protection of personal data. Since then, the Law of 1978 has been modified by the Law of 6th August 2004, which transposed Directive 95/46/EC into national legislation. The CNIL is one of the leading European authorities fighting for data protection. For instance, in its highly publicized decision against Google Spain on 13th May 2014, the European Court of Justice took up the economic approach adopted by the CNIL in its decision against Google on 3rd January 2014. Moreover, the French legal regime of data protection is of a criminal nature: violations of the Law of 1978, as modified, may trigger fines of up to €1.5 million for legal persons and up to €300,000 for natural persons, as well as up to 5 years of imprisonment. In addition, regarding the transfer of personal data outside the European Union, the CNIL, pursuant to Directive 95/46/EC, has set out different procedures depending on the country to which personal data are transferred. Transfers of personal data to countries whose level of data protection is deemed equivalent to that of the European Union are easily accomplished by filing a form with the CNIL. However, it is strictly forbidden to transfer such data to countries where the level of protection is considered insufficient, unless the company undertaking a data transfer within its own structure has adopted Binding Corporate Rules, or the two companies that are parties to a data transfer have signed standard contractual clauses adopted by the European Commission. The terrorist attacks in Paris have not called into question the missions or the existence of the CNIL. However, one could expect that the potential enactment of a purported French “Passenger Name Record” (“PNR”) system, as announced by the Prime Minister, would broaden the possibilities for collecting, processing and transferring personal data.
>
>

Consumers’ misperceptions

 
Changed:
<
<
ii.   The purported reforms announced by the French government
>
>
Consumers tend toward myopic behavior, neglect of small probabilities (Sunstein 2001, Probability Neglect: Emotions, Worst Cases, and Law), and optimism (Jolls 1998, A Behavioral Approach to Law and Economics). These well-documented tendencies could lead consumers to underestimate and disregard the risks of using such products. Consumers prone to myopic behavior might disregard the long-term effects of their decisions, and therefore will not fear for their personal freedom (even if they understand the gravity of such harm) as long as they consider it a longer-term threat. The tendency to ignore small probabilities could have a similar result, if consumers regard a future in which they are under imminent threat of persecution as a low-probability “state of nature”. Also, even if consumers realize that the threat already exists, they might still disregard it if they live where there is currently a very small chance of such an occurrence: even one who fully understands the gravity of being persecuted will not take it into account if he assigns it a probability of zero. Similarly, optimism could lead a consumer who understands the general grave risks involved to think that “it wouldn’t happen to me.”
 
Changed:
<
<
In fact, the purported adoption of a PNR system by France would be nothing more than a sham, since such a system is already in place. Not surprisingly, the speech given by the Prime Minister following the terrorist attacks was merely political. Indeed, Article 7 of the Law on the fight against terrorism of 23rd January 2006 expressly authorized the collection and processing of PNR data, even though “sensitive” data, as defined by the Law of 6th January 1978, could not be used. Thus, for instance, the use of data revealing what meals passengers ordered was forbidden. Although this article has been abrogated, an equivalent provision remains in force in France. Moreover, the French Code of Customs has long enabled customs officers to “require the submission of papers and documents of any kind, relating to operations of interest for their services, regardless of the medium”. Consequently, customs officers are authorized to collect PNR data. Furthermore, the Prime Minister also mentioned that France was contemplating the adoption of a “French Patriot Act”. Considering the domestic and European obstacles that the government would have to surmount, the transposition into French law of a US-inspired Patriot Act is not really on the agenda. Nevertheless, a controversial step toward the reinforcement of mass surveillance has already been made through the enactment of the Law on military programming of 18th December 2013. Accordingly, where authorized by the Prime Minister, police officers are allowed to collect any electronic data and to intercept any electronic communications for the purpose of identifying and preventing – in real time – potential threats to national security. Despite the apparent violation of both the right to liberty and security and the right to privacy, enshrined in the European Convention on Human Rights and given constitutional value by the Conseil Constitutionnel (the French constitutional council), the latter has upheld the preventive aspect of the Law, as described above. This Law entered into force on 1st January 2015. Yet this legislative arsenal did not prevent the terrorist attacks. Stiffening mass surveillance did not work in the first place, but there is no doubt that further legislative steps in this direction will be taken.
>
>
This leads to the conclusion that perhaps, in order for things to get better, they first have to get worse. As the situation deteriorates, more and more people will be exposed to stories about those harmed, and only such “availability” could eliminate these tendencies. If the government took your neighbor, you know that it might eventually come for you as well. Is it possible to fix these misperceptions in less painful ways? In the context of crime deterrence, it has been argued that in order to eliminate criminals’ perception that “I’ll never get caught”, the police should make arrests as “loud” as possible, so as to reach other criminals’ attention and affect their perceived probability of arrest. Applying this logic here provides an additional justification for activists to communicate the risks effectively to as many consumers as possible.
 
Deleted:
<
<
Conclusion
 
Changed:
<
<
Surveillance has certainly been strengthened since the terrorist attacks but French law on data protection has not been formally impacted yet. However, it would be naïve to believe that the assault on privacy is over. Here is the paradox: rather than reinforcing our liberties whose scope has been temporarily reduced by the terrorist attacks, the legislative reforms will follow the same path.
>
>

Contract Design

Another contribution of behavioral L&E is to the understanding of information asymmetries and contract design. Research on the exploitative nature of consumer contracts has had a significant impact on regulation and practices in various markets, such as home loans and mobile telecommunications (Bar-Gill 2009, The Law, Economics and Psychology of Subprime Mortgage Contracts). As far as I know, despite growing attention to the relations between companies such as Google and Facebook and their users, these contracts/privacy policies are still not taken as seriously (by researchers and regulators) as more “classic” consumer contracts. The reason, I suspect, is the lack of money consideration; a very unpersuasive justification. To the contrary – precisely because no money changes hands, these cases are even more prone to abusive practices: consumers are much better “trained” to identify how much they are paying in a deal than what they are giving away. Therefore, greater efforts are required to ensure disclosures sufficient to overcome information asymmetries and allow consumers to truly realize what they sacrifice.

It is also important that consumers understand that the freedoms they give up have value (even monetary value!). Researchers have identified an endowment effect: people tend to ascribe more value to what they own (Knetsch 1989, The Endowment Effect and Evidence of Nonreversible Indifference Curves). This effect means that if consumers had a better understanding of the freedoms they lose and their worth, and developed a sense of ownership over them, Google would have a much harder time taking them. In that regard, perhaps even information about how much I, as a consumer, am worth to Google could affect my decision.

Conclusion

The application of (mainly behavioral) L&E to privacy law could improve our understanding of why consumers disregard the harms of their choices, and raise interesting ideas about how to promote change. Obviously, this paper has only touched upon several applications, and there is much room for further thinking.

META TOPICMOVED by="UdiKarklinsky" date="1425659661" from="CompPrivConst.TWikiGuestFirstPaper" to="CompPrivConst.UdiKarklinskyFirstPaper"

UdiKarklinskyFirstPaper 5 - 04 Mar 2015 - Main.ArthurMERLEBERAL
Line: 1 to 1
 
META TOPICPARENT name="FirstPaper"
Changed:
<
<

Minor League Snoops

The spying of the federal government on American citizens is well documented. The revelations of Edward Snowden have filled news archives with articles detailing the snooping of the United States government. This is accepted and understood. To what extent, though, does an individual have to anticipate surveillance at a more local level? Is there a difference in the ability of local governments versus the federal government to spy on citizens, and should one be more acceptable than the other?
>
>
*Data protection in France following the terrorist attacks in Paris: Nothing new under the sun*
 
Deleted:
<
<

The Dragnet

Earlier this year, it was revealed that police in Oakland, California were employing Automatic License Plate Readers (ALPRs) to track license plates. The Electronic Frontier Foundation (EFF), which analyzed the data, found that as few as two patrol cars equipped with ALPRs were responsible for collecting 63,272 data points. The EFF found that individual plates were recorded 1.3 times on average. With the sheer number of plates being collected, it seems safe to say that this is not a targeted approach. The movements of individuals are being tracked with little regard for reasonable suspicion, for the sole sake of data collection.
 
Changed:
<
<

The Technology

According to the American Civil Liberties Union (ACLU), ALPR technology consists of high-speed cameras that capture license plates and then add and compare them to a growing database of previously recorded plates. The cameras may also capture more than just license plates, sometimes recording vehicle occupants and location. Oakland is not the only city to employ the technology, but it is among the most willing to divulge its use of the collected data. Police in Los Angeles have been reluctant to be completely open regarding their use of ALPRs. A district court in San Diego ruled last year that the collected license plate data did not constitute public information and that the holders of the information were not required to turn it over. The judge cited concerns that criminals could use the information to locate the ALPR cameras or areas of police investigation.
>
>
Introduction
 
Changed:
<
<

How is it Different?

Americans are accustomed to submitting to various forms of surveillance without a second thought. Many Americans have accepted that a camera may photograph them if they run a red light, and security cameras are now just a fact of life. What can be said, though, is that for the most part these forms of surveillance are used only after the fact. Traffic violators are snapped only after they have run a red light, and security footage is often used to identify criminals after they have committed a crime. On the other hand, ALPRs are used to actively create a database of not only who is on the road, but also where they are going.
>
>
The terrorist attacks that occurred in Paris between the 7th and the 9th of January 2015 have been called “France’s 9/11” by certain international and French newspapers. If, from an operational point of view, these attacks differ greatly from what happened in the United States in 2001, from a legal standpoint they may trigger the same consequences, especially with regard to data protection. Indeed, the French government, as it has already announced, intends to reform several aspects of its national legislation relating to data protection, and has called on the European Union to act similarly. Listening to the speech of the French Prime Minister, one could not miss the references to the Patriot Act enacted by the United States after 9/11. Although no legislative step has been undertaken yet, I believe this situation deserves an overview. Therefore, I will address in turn (i) the current French law on data protection and (ii) the purported reforms announced by the French government.
 
Changed:
<
<

Facial Recognition

The use of facial recognition technology further complicates the matter. It is not far-fetched to imagine government authorities coupling ALPR technology with facial recognition software to more precisely identify their targets. While it has long been hinted that license plates were not the only target of ALPRs, a Freedom of Information Act request by the ACLU to the Drug Enforcement Administration (DEA) revealed that in some cases, “Occupant photos are not an occasional, accidental byproduct of the technology, but one that is intentionally being cultivated.” In fact, this pairing makes so much sense that some ALPR manufacturers are already looking to incorporate facial recognition technology into their new models. This allows law enforcement to attach a current visual profile to the already large amount of information they are able to gather about an individual just from automobile data. While there is some "research" going toward thwarting facial recognition software, it seems likely that the average driver will generally be defenseless against the tracking of ALPR cameras.
>
>
i.   The current French law on data protection
 
Changed:
<
<

Just Don't be a Criminal

The much-touted mantra of those who would have us accept government surveillance is that if we're doing nothing wrong, we have nothing to worry about. But even if this is assumed to be true, the specter of human error looms large. With courts deciding that citizens have no right to know specifically what sort of data is being kept in databases, a driver flagged on the basis of a mistaken read has little way even to discover the error, let alone correct it.
>
>
On 6th January 1978, France enacted the Law on Information Technology, Data Files and Civil Liberties, making it one of the first countries to adopt a law dealing with computers and freedoms. This law established the Commission Nationale de l’Informatique et des Libertés (“CNIL”), an independent administrative authority whose main goal is to ensure the protection of personal data. Since then, the Law of 1978 has been modified by the Law of 6th August 2004, which transposed Directive 95/46/EC into national legislation. The CNIL is one of the leading European authorities fighting for the protection of personal data: in its highly publicized decision against Google Spain on 13th May 2014, for instance, the European Court of Justice took up the economic approach the CNIL had adopted in its own decision against Google on 3rd January 2014.

Moreover, the French legal regime of data protection is of a criminal nature. Violations of the Law of 1978, as modified, may trigger fines of up to €1.5 million for legal persons and up to €300,000 for natural persons, as well as up to 5 years of imprisonment. In addition, regarding the transfer of personal data outside the European Union, the CNIL, pursuant to Directive 95/46/EC, has set out different procedures depending on the destination country. Transfers to countries whose level of data protection is deemed equivalent to that of the European Union are easily accomplished by filing a form with the CNIL. It is strictly forbidden, however, to transfer such data to countries where the level of protection is considered insufficient, unless the company transferring data within its own corporate structure has adopted Binding Corporate Rules, or the two companies party to the transfer have signed standard contractual clauses adopted by the European Commission.

The terrorist attacks in Paris have not called into question the missions or the existence of the CNIL. However, the potential enactment of a French “Passenger Name Record” (“PNR”) system, as announced by the Prime Minister, could be expected to broaden the possibilities of collecting, processing and transferring personal data.
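To summarize the branching rule just described, here is a minimal Python sketch. It is my illustration of the paragraph only, not the CNIL's actual procedure or any real adequacy list: the country names and the function are hypothetical.

# A minimal sketch of the transfer rule described above.
# The adequacy list here is hypothetical; the real list is maintained
# by the European Commission.
ADEQUATE_COUNTRIES = {"Switzerland", "Canada"}

def transfer_basis(destination, has_bcr, has_scc):
    """Return the mechanism, if any, that makes a transfer outside the EU lawful."""
    if destination in ADEQUATE_COUNTRIES:
        # adequate protection: a simple declaration form filed with the CNIL
        return "allowed: file a declaration form with the CNIL"
    if has_bcr:
        # intra-group transfers under Binding Corporate Rules
        return "allowed: Binding Corporate Rules"
    if has_scc:
        # standard contractual clauses adopted by the European Commission
        return "allowed: standard contractual clauses"
    return "forbidden: destination's level of protection is insufficient"

print(transfer_basis("Canada", has_bcr=False, has_scc=False))
print(transfer_basis("United States", has_bcr=False, has_scc=True))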
 
Changed:
<
<
Data Privacy
>
>
ii.   The purported reforms announced by the French government
 
Changed:
<
<
Sharing
>
>
In fact, the purported adoption of a PNR system by France would be nothing more than a sham, since such a system is already in place; not surprisingly, the speech given by the Prime Minister following the terrorist attacks was merely political. Indeed, Article 7 of the Law on the fight against terrorism of 23rd January 2006 expressly authorized the collection and processing of PNR data, even though “sensitive” data, as defined by the Law of 6th January 1978, could not be used. Thus, for instance, the use of data revealing what meals passengers ordered was forbidden. Although this article has been repealed, an equivalent provision remains in force in France. Moreover, the French Customs Code has long enabled customs officers to “require the submission of papers and documents of any kind, relating to operations of interest for their services, regardless of the medium”. Consequently, customs officers are authorized to collect PNR data.

Furthermore, the Prime Minister also mentioned that France was contemplating the adoption of a “French Patriot Act”. Considering the domestic and European obstacles that the government would have to surmount, the transposition into French law of a US-inspired Patriot Act is not really on the agenda. Nevertheless, a controversial step towards the reinforcement of mass surveillance has already been taken through the enactment of the Law on military programming of 18th December 2013. Under that Law, where authorized by the Prime Minister, police officers may collect any electronic data and intercept any electronic communications for the purpose of identifying and preventing, in real time, potential threats to national security. Despite the apparent violation of the right to liberty and security and the right to privacy, both enshrined in the European Convention on Human Rights and given constitutional value by the Conseil Constitutionnel (the French constitutional court), the Conseil upheld the Law's preventive provisions. The Law entered into force on 1st January 2015. Yet this legislative arsenal did not prevent the terrorist attacks: stiffening mass surveillance did not work in the first place, but there is no doubt that further legislative steps in the same direction will be taken.

Conclusion

Surveillance has certainly been strengthened since the terrorist attacks, but French law on data protection has not yet been formally impacted. It would be naïve, however, to believe that the assault on privacy is over. Herein lies the paradox: rather than restoring the liberties whose scope the terrorist attacks temporarily reduced, the coming legislative reforms will follow the same path.


UdiKarklinskyFirstPaper 4 - 03 Mar 2015 - Main.AndrewChung
Line: 1 to 1
 
META TOPICPARENT name="FirstPaper"
Changed:
<
<
Minor League Snoops
>
>

Minor League Snoops

The spying of the federal government on American citizens is well documented. The revelations of Edward Snowden have filled news archives with articles detailing the snooping of the United States government. This is accepted and understood. To what extent, though, does an individual have to anticipate surveillance at a more local level? Is there a difference in the ability of local governments versus the federal government to spy on citizens, and should one be more acceptable than the other?
Changed:
<
<
The Dragnet
>
>

The Dragnet

Earlier this year, it was revealed that police in Oakland, California were employing Automatic License Plate Readers (ALPRs) to track license plates. The Electronic Frontier Foundation (EFF), which analyzed the data, found that as few as two patrol cars equipped with ALPRs were responsible for collecting 63,272 data points, with individual plates recorded 1.3 times on average, implying reads of roughly 48,700 distinct vehicles (63,272 / 1.3). Given the sheer number of plates being collected, it seems safe to say that this is not a targeted approach. The movements of individuals are being tracked with little regard for reasonable suspicion, for the sole sake of data collection.
Deleted:
<
<
The Technology According to the ACLU, ALPR technology consists of high-speed cameras which capture license plates and then add and compare them to a growing database of previously recorded plates. The cameras may also capture more than just license plates, sometimes recording vehicle occupants and location. Oakland is not the only city to employ the technology, but it is among the most willing to divulge its use of the collected data. Police in Los Angeles have been reticent to be completely open regarding their use of ALPRs. A district court in San Diego ruled last year that collected license plate data did not constitute public information and that the holders of the information were not required to turn it over. The judge cited concerns that criminals could use the information to locate the ALPR cameras or areas of police investigation.
Added:
>
>

The Technology

According to the American Civil Liberties Union (ACLU), ALPR technology consists of high-speed cameras that capture license plates, then add them to and compare them against a growing database of previously recorded plates. The cameras may also capture more than just license plates, sometimes recording vehicle occupants and location. Oakland is not the only city to employ the technology, but it is among the most willing to divulge its use of the collected data. Police in Los Angeles have been reticent to be completely open regarding their use of ALPRs. A district court in San Diego ruled last year that collected license plate data did not constitute public information and that the holders of the information were not required to turn it over. The judge cited concerns that criminals could use the information to locate the ALPR cameras or areas of police investigation.
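To make the capture-and-compare loop concrete, here is a minimal Python sketch of how such a pipeline could work. It is an illustration under stated assumptions, not any agency's or vendor's actual system; the names (PlateRead, PlateDatabase, the hot list) are hypothetical. What it demonstrates is this essay's point: every read is retained, so the database becomes a location history for every vehicle, not just for suspects.

# Minimal sketch of an ALPR capture-and-compare pipeline (hypothetical names).
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class PlateRead:
    plate: str          # normalized plate string produced by the camera's OCR
    seen_at: datetime   # timestamp of the capture
    location: str       # GPS fix or camera identifier

class PlateDatabase:
    """Accumulates every read, so each plate's entries form a movement track."""
    def __init__(self):
        self._reads = defaultdict(list)  # plate -> list of PlateRead

    def record(self, read):
        """Store the new read and return all prior sightings of this plate."""
        prior = list(self._reads[read.plate])
        self._reads[read.plate].append(read)
        return prior

db = PlateDatabase()
hot_list = {"7ABC123"}  # hypothetical stolen-vehicle watch list

read = PlateRead("7ABC123", datetime.now(), "patrol-car-2: Broadway & 14th")
history = db.record(read)
if read.plate in hot_list:
    print("ALERT: hot-list hit at", read.location)
for old in history:
    # even non-suspects accumulate a browsable location history
    print("previously seen:", old.seen_at, old.location)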

How is it Different?

Americans are accustomed to submitting to various forms of surveillance without a second thought. Many Americans have accepted that a camera may photograph them if they run a red light, and security cameras are now simply a fact of life. For the most part, though, these forms of surveillance are used only after the fact: traffic violators are snapped only after they have run a red light, and security footage is typically used to identify criminals after they have committed a crime. ALPRs, by contrast, are used to actively create a database of not only who is on the road, but also where they are going.

Facial Recognition

The use of facial recognition technology further complicates the matter. It is not far-fetched to imagine government authorities coupling ALPR technology with face recognition software to more precisely identify their targets. While it has long been hinted that license plates were not the only target of ALPRs, a Freedom of Information Act request by the ACLU to the Drug Enforcement Administration (DEA) revealed that in some cases, “Occupant photos are not an occasional, accidental byproduct of the technology, but one that is intentionally being cultivated.” In fact, this pairing makes so much sense that some ALPR manufacturers are already looking to incorporate facial recognition technology into their new models. This allows law enforcement to attach a current visual profile to the already large amount of information they are able to gather about an individual just from their automobile data. While there is some "research" going into thwarting facial recognition software, it seems likely that the average driver will generally be defenseless against the tracking of ALPR cameras.

Just Don't be a Criminal

The much-touted mantra of those who would have us accept government surveillance is that if we're doing nothing wrong, we have nothing to worry about. But even if this is assumed to be true, the specter of human error looms large. With courts deciding that citizens have no right to know specifically what sort of data is being kept in databases, a driver flagged on the basis of a mistaken read has little way even to discover the error, let alone correct it.

Data Privacy

Sharing


UdiKarklinskyFirstPaper 3 - 03 Mar 2015 - Main.AndrewChung
Line: 1 to 1
Changed:
<
<
META TOPICPARENT name="WebPreferences"

"Exceptions" to First Amendment

>
>
META TOPICPARENT name="FirstPaper"
Minor League Snoops The spying of the federal government on American citizens is well documented. The revelations of Edward Snowden have filled news archives with articles detailing the snooping of the United States government. This is accepted and understood. To what extent, though, does an individual have to anticipate surveillance at a more local level? Is there a difference in the ability of local governments versus the federal government to spy on citizens, and should one be more acceptable than the other?
 
Added:
>
>
The Dragnet Earlier this year, it was revealed that police in Oakland, California were employing Automatic License Plate Readers (ALPRs) to track license plates. The Electronic Frontier Foundation (EFF), which analyzed the data, found that as few as two patrol cars equipped with ALPRs were responsible for collecting 63,272 data points, with individual plates recorded 1.3 times on average. Given the sheer number of plates being collected, it seems safe to say that this is not a targeted approach. The movements of individuals are being tracked with little regard for reasonable suspicion, for the sole sake of data collection.
 
Changed:
<
<

- viewed as separate rights of religion, speech, press, assembly and petition as opposed to a general freedom against an anti-totalitarian regime

- What is the fear in looking at it holistically?

- Parent/child

- "Perverts" - Terrorists - General crime? - -> Is this what we are coming towards? Other three have been generally accepted reasons to surveil -- but Minority Report type situation is when people are astounded and offended -- because then they become potential targets.
>
>
The Technology According to the ACLU, ALPR technology consists of high-speed cameras which capture license plates and then add and compare them to a growing database of previously recorded plates. The cameras may also capture more than just license plates, sometimes recording vehicle occupants and location. Oakland is not the only city to employ the technology, but it is among the most willing to divulge its use of the collected data. Police in Los Angeles have been reticent to be completely open regarding their use of ALPRs. A district court in San Diego ruled last year that collected license plate data did not constitute public information and that the holders of the information were not required to turn it over. The judge cited concerns that criminals could use the information to locate the ALPR cameras or areas of police investigation.

UdiKarklinskyFirstPaper 2 - 28 Feb 2015 - Main.LeylaHadi
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Added:
>
>

"Exceptions" to First Amendment

 
Added:
>
>

- viewed as separate rights of religion, speech, press, assembly and petition as opposed to a general freedom against an anti-totalitarian regime

- What is the fear in looking at it holistically?

- Parent/child

- "Perverts" - Terrorists - General crime? - -> Is this what we are coming towards? Other three have been generally accepted reasons to surveil -- but Minority Report type situation is when people are astounded and offended -- because then they become potential targets.

UdiKarklinskyFirstPaper 1 - 28 Feb 2015 - Main.CarolineVisentini
Line: 1 to 1
Added:
>
>
META TOPICPARENT name="WebPreferences"

Revision 14r14 - 30 Apr 2017 - 22:11:13 - EbenMoglen
Revision 13r13 - 22 Mar 2017 - 13:36:18 - EbenMoglen
Revision 12r12 - 11 May 2016 - 21:05:14 - EbenMoglen
Revision 11r11 - 26 Jun 2015 - 20:26:31 - MarkDrake
Revision 10r10 - 14 May 2015 - 19:50:24 - UdiKarklinsky
Revision 9r9 - 12 May 2015 - 21:07:02 - EbenMoglen
Revision 8r8 - 12 May 2015 - 19:49:37 - UdiKarklinsky
Revision 7r7 - 29 Apr 2015 - 23:32:57 - EbenMoglen
Revision 6r6 - 06 Mar 2015 - 17:04:25 - UdiKarklinsky
Revision 5r5 - 04 Mar 2015 - 18:30:37 - ArthurMERLEBERAL
Revision 4r4 - 03 Mar 2015 - 23:20:04 - AndrewChung
Revision 3r3 - 03 Mar 2015 - 20:39:09 - AndrewChung
Revision 2r2 - 28 Feb 2015 - 20:05:47 - LeylaHadi
Revision 1r1 - 28 Feb 2015 - 14:44:35 - CarolineVisentini