Computers, Privacy & the Constitution

Correction

-- By MatthewEckman - 15 May 2009

I made a mistake in class one day. Professor Moglen, as I recall, was speaking in ominous tones about the fact that grocery stores use supersaver discount cards to track customer purchases. He noted that they presumably do this in order to market products more efficiently and increase profit. The professor was also alarmed that Gmail scans email messages sent and received for keywords so that firms that advertise on Gmail can more accurately target their market. When Professor Moglen finished I leapt to my feet.

"Oh NO!" I cried, in mock horror. "THE END OF DAYS IS COME, JUST AS THE ANCIENTS PREDICTED!"

I didn't really, but I wanted to. Instead I said something like, "but isn't perfect information necessary for a well-functioning market?" Though I may not have articulated my point very well, that wasn't the mistake that's bothering me. My mistake was in not shutting up. After I asked the question, Professor Moglen seemed suddenly and strangely reticent, and so I, already somewhat embarrassed and increasingly nervous, answered my own question. I said something like, "Oh, I see. It's a one-sided well-functioning market." I didn't know then what I meant, and I still don't. But Professor Moglen seemed to approve of my answer and I acquiesced in it. Maybe he was doing me a favor, or thought that he was. Maybe his intention was to spare me further embarrassment. Throughout the course it has often occurred to me that there is something—something important, something fundamental but not terribly difficult to understand—that much of the class "gets" whereas I do not.

In a term paper the internal barriers to free expression are lower than they are in a classroom setting. Even when that paper is posted on a publicly accessible wiki, the opportunity to embarrass oneself is relatively limited. So I will ask my question again and attempt to provide a more satisfactory answer. Anyone who wants to chime in is welcome.

The point that I tried to make in class isn't subtle or profound. I was simply asking: "And what's wrong with that? Don't consumers obviously benefit?" (I had the same reaction to certain passages from "No Place to Hide.") The simple answer seems to be: "There's nothing wrong with it per se, but it's somewhat alarming. We need to ensure that the information collected is used for good and not evil." I might add that the People need to know that and how technology is being used to gather information about them, and what sort of information is being gathered. Then we can have a public discussion about what safeguards are appropriate, and of course this will be eminently debatable. I don't have a very definite idea about the kinds of safeguards that need to be in place--and that is because I am for the most part still one of the naive People--but the path down which to proceed I see clearly enough, and I imagine that just about everyone would agree that it's the proper path. (Or at least they would behind a veil of ignorance, in which no one knows whether she is an executive or director of a telecom company--or similarly situated in a position in which her profit motive might overwhelm her civic scruples--in our information society.)

I trust that people who are very much concerned with privacy issues have my best interests at heart, more or less. They provide, or work to provide, the necessary assurance. I am glad they are there to play the roles of privacy advocate and industry watchdog, but for whatever reason I simply do not feel compelled to join them. I am able to sleep through the wake-up calls that some privacy advocates seem to consider so urgent. (I sometimes seem to hear them shouting: "Wake up, Sheeple!") My spidey-sense is tingling, but it isn't ringing off the hook. Maybe it's not that there is something I don't "get" but rather that I lack a certain temperament.

All this strikes me as painfully platitudinous. It boils down to a simple admonition to be smart about all this privacy and technology stuff. Obviously there is no algorithm for how to proceed "smartly," but I am confident that the general direction and appropriate methods are clear enough and that there is widespread agreement on what they are. Hashing out the specifics of what to do in response to any particular privacy concern will require an informed public debate. I trust that a well-functioning democracy (or a well-enough-functioning one) will reach a satisfactory and largely rational consensus. And I trust that there are sufficiently influential lobbies, litigators, judges, legislators, etc. who effectually (more or less) act as my proxy in the democratic process.

In his book O’Harrow discusses the case of James Turner. Turner rented a car and paid for it using a debit card. On his trip he sometimes drove 80 mph, and he eventually discovered that the car rental company, Acme, had taken the unusual measure of charging speeding fines to his debit card. Acme had been using the GPS system installed in the car to monitor Turner’s position and velocity at all times. This is of course outrageous. I expect most anyone would agree that this kind of thing cannot stand, even if Turner’s rental agreement included a waiver. And of course it did not stand:

Turner sued, charging Acme with invading his privacy. Included in the court records were maps that showed the exact longitude and latitude of Turner's Voyager, down to six decimal places, and the exact time he was speeding, down to the second. "Said surveillance by Defendant seriously interfered with the Plaintiff's solitude, seclusion and in his private affairs," said the papers his attorney filed in Connecticut Superior Court. The trial date was set for the spring of 2004. "The Defendant knew or should have known the use of a 'GPS' system would be offensive to persons of ordinary sensibilities."

Turner also submitted a claim to the state Department of Consumer Protection. After an investigation, the department ordered Acme to stop fining people, and to return the speeding fees. Turner wasn't the only one watched and tagged by the company. "It's horrible. It's like I was some sort of an animal that was tagged by scientists so they could observe my mating habits," another driver told Wired.

Privacy violations, just like torts, should be addressed as they arise. Is there some general reason for thinking that they cannot or will not be addressed? Is there something inherently evil about putting a GPS system in a rental car or using electronic data to target consumers? If there is, I really don't get it. We did alright by Turner.


Customers may benefit as customers from the market efficiencies enabled by profiling and data mining. But it's not clear that customers benefit as citizens when the government subpoenas those same records, doing an end-run around the judicial safeguards against direct surveillance.

When you enjoy no privacy from commercial entities, perhaps the worst you have to fear is price discrimination. The danger when the government and governmental agencies enter the picture is authoritarianism.

-- AndreiVoinigescu - 16 May 2009

I agree with Andrei's points, and I would also emphasize something Andrei only implied: the harm of data aggregation. No, there is nothing evil about putting a GPS system in a rental car, or using your purchase history on Amazon to generate suggestions about what to buy. The evil comes when someone can own your identity by pulling all the information about you together. With aggregation, each individual act of collection falls below your common-sense threshold of "what cannot stand," yet taken together those acts can be used for evil. This can be done by an authoritarian state (as Andrei points out), or by a private investigator.

Each and every one of us, consciously or unconsciously, spends a large amount of time crafting a persona or personae that we present to the world. We tell the story of ourselves, usually emphasizing the good and often leaving out the bad. When someone tracks (or pays someone who has already tracked) your cell phone, your car, your text messages and emails, your purchases and your salary, they are the ones who get to tell the story of you, complete with footnotes, and your ability to control your sense of 'self' is wrested away.

-- JustinColannino - 16 May 2009

Hey Matthew: the issue of Google knowing enough about you to target detailed advertising was a topic that we looked at in "Law and the Internet Society." I think it is much easier to see the dangers of this information existing in a context of easily obtainable subpoenas and the like, but I agree, the commercial point (that Google's intimate knowledge of your life is a negative force) strikes many people as absurd. I personally do see Google's possession of this information as problematic; however, (in my opinion) we never developed an argument capable of swaying those who did not already intuitively see the situation as dangerous.

Although I still don't have the perfect argument, I think I see to some degree where the weaknesses and points of contention lie in the arguments that are usually made against information and identity collection by Google and its kin.

1) The argument is often going to be primarily moral. Although you could make a non-moral argument, I don't think it would be convincing to someone who entered the discussion with a strong belief in capitalist economics (as discussed later). Because the argument is likely going to revolve around considerations of human dignity and the like, I think it is almost necessarily going to be intuitive to a certain degree.

2) People will take issue with the effectiveness of the information in driving demand and causing sales. Part of the argument that people usually make against Google ads assumes that they are, or have the potential to be, extremely effective. This argument depends on a contention that, even if the ads are not effective right now, some future iteration of advertising will be: the information in Google's hands will be saved forever, and waits only for an advance in behavioral psychology to become an unstoppable instrument of economic demand generation. As far as I can see, the argument against information collection will always rest on this assertion to some degree, and although it does not seem to me like an overly controversial assumption, it is difficult to actually prove to a skeptic.

3) Similar to #1, a consequentialist argument is probably not going to work so long as we are basing the discussion within the framework of capitalist economic doctrine. As you said in your essay, it doesn't seem (at least superficially) that Google being able to drive demand with targeted messages is a negative thing. Even if we assume #2 to be true (that Google can learn enough about you to present ads in such a context and with such timing as to be basically irresistible), it is not clear why this is a bad thing. If Google leveraged this hypothetical irresistible ad technology, at its worst it would create a marketplace where demand was ratcheted up to the maximum. While one could argue that this might cause negative economic effects (for example, people over-leveraging themselves to purchase advertised goods), many people approaching from a free-market perspective are going to fail to see anything wrong with this state of affairs. The debate will turn on hypothetical effects and economic doctrine: the argument against information collection will be tied to the old, familiar, and generally unwinnable conflict between different economic ideologies.

4) People's belief in personal autonomy generally prevents them from seeing the group autonomy problem as affecting themselves. Even if we imagine a Google capable of turning the great host of people into predictable and behaviorally transparent purchasing machines, this does not strike home as a moral issue for many people. If one believes that this state of affairs (ramped-up demand creation) is not economically problematic, it is typically difficult to see it as morally or otherwise problematic. Although I imagine that most people would have a problem seeing themselves as autonomy-less purchasing machines driven by Google's deep and intimate knowledge of their psychology, people generally refuse to think such a thing is possible for them personally. Even if one is able to see a lack of autonomy in others, it is difficult (maybe impossible) to truly believe that you are capable of being played in the same manner as those around you. People will often accept, from a behavioral perspective, that it is possible to know others "better than they know themselves." People believe in the efficacy of advertising and the possibility of a weakened autonomy in others, but every individual will insist that they are the exception. The difficulty of seeing yourself as a behavioral machine, and the assumption that you are making choices to some degree independent of context, makes it very hard to present the information and identity gathering of Google as a personal threat.

Wow. Too much writing - I should have made that my paper! These are the problems I have come across when trying to make the argument you address in your paper, and why I think it is such a hard argument to make to someone who does not have an intuitive belief in the problematic nature of information collection on the internet. The constitutional and criminal problems raised by Justin and Andrei should not be underestimated; however, the pure economic autonomy argument is often very unconvincing to someone skeptical of the basic assumptions of the claim.

-- TheodoreSmith - 18 May 2009

Matt,

I think Andrei, Justin and Ted have done a nice job of presenting a more academic response to your paper, and while I generally agree with their premises, I still found them a bit unsatisfactory in addressing the problem I have with your paper. I’ve given a great deal of thought to the question you pose and also to why I find the academic response unsatisfactory, and I think that perhaps it’s that these responses lack, well, a certain Wyoming-ness that seems to me essential in these debates. Why is it a problem that the grocery store tracks my purchases or my rental company my car? It could be, as Andrei suggests, because the government, too, could access this information and use it coercively, or, as Justin suggests, because in aggregate this data gives its holder a great deal of power over its originators. But there’s something else crucial underlying people’s reactions, which I think, very deeply, is this: it’s just none of their goddamn business. It’s no one else’s business where I go or what I do when I go there, and it is this intrinsic violation that I, at least, am railing against when I talk about privacy invasion.

In his comment, Ted hinted at something that I think needs to be stated more explicitly, which is that we are clearly coming from entirely different normative systems, a fact which is probably obvious, but not unimportant to recognize openly. The competing paradigms make having a discussion about this topic extremely difficult, because the paradigm acts as a filter which alters everything we see. In this case, as long as you remain locked in a capitalist framework, you will not be able to recognize or understand the problem we are talking about, because your overriding value is efficient markets (though, as Ted points out, even that value may not be served by the current system) and not, what I will unsatisfactorily call “personhood” values, which would be served by a different system. These personhood values are what underlie and inform my “none of your goddamn business” response. People resist the notion that “perfect information [is] necessary for a well-functioning market” because you aren’t talking about “information” in a vacuum--- you’re talking about information about people, about their inner recesses and intimate lives, their garbage cans and diaries.

To this you say, so what? Don’t we want to be sold things we like? I’m aware there’s a value difference between us that may make this answer difficult or even impossible for you to hear, but I will say it anyway: I am not a machine that exists merely to be sold things. Again: I am not a machine that exists merely to be sold things. My values, dreams, beliefs, ideas, thoughts, hopes, fears, anguishes, nightmares--- the things that make up my very soul, if you want to call it that--- do not exist for the benefit of becoming commodities in your perfect market. I resist your resistance because I see a person's place in the world you yearn for, and it is a place of valuelessness and waste, of your human worth being reduced to nothing more than the dollar that an advertiser will pay to get a good look at you. It is a place I don’t want to go, and neither should you, though your essay supposes that you do indeed want to go there, that having the perfect shampoo for your hair advertised to you is worth the price of giving up on personhood, because aren’t well-functioning markets just so great? But, Matt, there should be, and is, much, much more to life than that, and if we say in the name of capitalism that all is well, I fear we’ve lost something essential to our being men at all: a precious and private soul.

I realize that it’s perhaps a fool’s errand to counter normative statements with normative statements, but I felt that, important and logical as the other comments were, we are, at heart, having a discussion about what we value, and so it seemed necessary to use the language of values with you. I hope you find what you value in the world you seem to be arguing for, because I’m quite sure I won’t…

-- DanaDelger - 19 May 2009

r6 - 19 May 2009 - 03:25:58 - DanaDelger