The Student Privacy Pledge: A Dangerous Political Facade

-- By AlexandraRosen - 09 Dec 2015


Since January 2015, over 200 companies—including Google, Apple, and Microsoft—have signed the Student Privacy Pledge (the "Pledge"). The Pledge, written by the Future of Privacy Forum (FPF) and the Software and Information Industry Association (SIIA), is a "vow to collect, store or use student data only for educational purposes." While on its face the Pledge seems like a positive step toward stricter student privacy protection on the Web, and it did quell public (particularly parents') fears regarding the potential exploitation of students' information in the education context, the Pledge's impact on companies' collection and use of personal information should not be overstated.

In the following paper, I discuss whether the existence of the Pledge, and the wide support lauding it as an effective mechanism for protecting student privacy on the Web, creates a false sense of security within the status quo and, in turn, distracts from real progress toward more effective privacy protections.

The Pledge is an Empty Promise

Given my strong skepticism of the Pledge's ability to affect company behavior, I was surprised to see that on December 1, 2015, the Electronic Frontier Foundation (EFF) filed a complaint with the FTC alleging that Google's Apps for Education (GAFE) violates the company's pledge to limit its use of student data. In light of this complaint, I consider whether my critique of the Pledge is misplaced: does the Pledge actually protect students' privacy? Not really...

If anything, EFF’s complaint is evidence that the Pledge is not preventing the collection and use of student personal information.

1. Limited scope

EFF's complaint against Google arguably misreads the Pledge as applying to a range of GAFE activities it does not cover. Shortly after the complaint was filed, both FPF and SIIA issued statements criticizing it as a fundamental misunderstanding of the Pledge itself. For example, FPF Executive Director Polonetsky said "[w]e have reviewed the EFF complaint but do not believe it has merit." Similarly, SIIA's MacCarthy noted that the complaint contains important misunderstandings about the Pledge that largely stem from EFF interpreting its prohibitions too broadly.

The Pledge does not impose a blanket prohibition on companies' collection and use of student information. Instead, it obligates signatories to “be transparent about collection and use of data” when conducting certain activities in particular contexts. The scope of covered activities is explicitly limited so as not to include “the use of student information for purposes of adaptive learning or customized education.” Thus, the Pledge restricts Google’s use of data collected from students in the classroom to educational purposes, but does not restrict Google’s use of data collected from users generally—even though those users may also be “students” for certain parts of their day, depending on where they are and which Google product (or app) they are using. For example, if a child uses GAFE in the classroom during school, Google cannot, consistent with the Pledge, sell information collected through this "educational service" to third-party advertisers. However, if that same child is at home browsing for leisure in Chrome, Google may sell the child’s information to a third-party advertiser because the information was not collected through an “educational/school service.” The limits the Pledge places on Google, therefore, are narrow.

2. Limited enforcement

The Pledge does not include a specific enforcement provision, “nor is there an enforcement regime behind the effort that monitors compliance, and takes disciplinary action or informs the FTC when a company is not compliant.” Despite the absence of a specific enforcement provision, in the U.S. a company’s security and other commitments made under the Pledge are legally enforceable by the FTC and state attorneys general under Section 5 of the Federal Trade Commission Act. However, even if noncompliance triggers FTC enforcement, the FTC’s enforcement powers are limited. For example, in 2012, Google paid $22.5 million to settle an FTC complaint that it violated an advertising industry pledge by misrepresenting the way it tracked Web users. While the FTC’s $22.5 million fine was record-setting, it was arguably only a slap on the wrist for Google: the fine represented a negligible fraction of Google’s income in 2012.

The Pledge Creates a False Sense of Security

EFF attorney Cardozo, who wrote the complaint, told the WSJ that “[t]he best way for Google to comply would be to simply not collect any data on the activities of logged-in [GAFE] users.” Even if that is factually accurate, the self-regulation regime established by the Pledge neither requires nor even suggests that signatories take such extreme measures to limit their data collection and use. EFF’s request that the FTC “require Google to destroy all student data it has collected and used in violation of the Pledge and to prevent [Google] from collecting such data in the future” is outside the scope of the Pledge, and Google (and other signatories) will likely shrug off EFF’s demands as a lofty “suggestion.”

The Pledge Does More Harm Than Good

Considering that the Pledge applies to only a narrow set of activities in limited contexts, and that even within that narrow scope the FTC lacks sufficient teeth to deter noncompliance, I circle back to my initial perspective and ask: does the Pledge do any good?

It is important to consider the context in which the Pledge was introduced and widely adopted. The initiative came at a time when state legislatures were rushing to enact restrictions on data collection and data mining. Supporters of education technology, like SIIA, which had “long resisted efforts to strengthen federal privacy law,” feared that legislative proposals would hurt their business models and wrote the Pledge hoping to preempt stricter rules. The impetus for the Pledge, therefore, was to limit future restraints on companies’ data collection.

Whatever the perverse incentives that led to the regime's creation, if the Pledge improved privacy for students, then the initial motives would be irrelevant. However, the existence of the Pledge, and the propaganda-like support for it as sufficient protection, makes the likelihood of any real rules passing slim, at best. The 200+ signatories are free to continue collecting and mining student data largely without limit (except for the very narrow constraints the Pledge imposes) and without privacy advocates and politicians constantly looking over their shoulders and questioning their every move.

Furthermore, by explicitly permitting the collection, maintenance, use, and sharing of student personal information “needed for authorized educational/school purposes,” the Pledge leaves the door open for companies (like Google) to continue, if not increase, their data mining of students by claiming that various uses and services are needed for educational purposes. The Pledge was (and continues to be) a carefully crafted distraction from public scrutiny. The fact that EFF filed a complaint against Google, and that critics have responded with legitimate arguments that the complaint itself is beyond the scope of the Pledge's protections, is evidence that the Pledge is not only relatively useless as privacy protection; its very existence enables further intrusions on students’ information privacy.

Why bother? That's always precisely the point of these self-regulatory exercises. We can take that for granted once we have seen who is sponsoring the show and who wants to be seen in the front row of the audience. Naturally there will also be non-profit do-gooder "intellectual" participation, accompanied by professional rhetoric and moderate fully-disclosed payment. Naturally the point of it all is a false sense of security.

Lawyers are realists, at least around here. So telling them that is like telling them that the sky is either blue or gray and the sun rises in the east every morning. The real questions are:

  1. Who is supposed to feel this sense of security, and what business models depend on their not feeling an insecurity instead?
  2. At whom is it aimed? That is to say, whose business are these people virtuously forswearing going into? As usual, it will help to assume that they are going to benefit as investors and partners from the businesses they virtuously aren't going to go into, so you can gain some insight into this situation by watching what they are doing with the other hand while they are waving the brightly-colored silk handkerchief you are presently paying too much attention to.
  3. What unintended consequences can this be made to have by forgoing attention paid to the brightly-colored silk handkerchief being waved and thinking instead about the tired old magic trick actually being worked by the clever magician and her sexy assistant? How does the nonsense at which everybody else is looking give you a chance to foil the trick that both this bunch of smoothies and the other bunch they are investing in and trying to screw are either too virtuous to be doing or too virtuous not to be doing?

Yes, but. See top. You circle around getting to the outer suburbs of #1, but #2 is invisible, which means you don't ever really figure out what the trick is, which means that #3 isn't yet thinkable. Instead you do the thing you're supposed to do which is to follow the handkerchief and harrumph the harrumph you're supposed to have, which means you were so busy being all lawyerly about the details of enforceability and feeling superior to them on the fake reveal that you never even had a chance to look over to where they were preparing the real trick they're going to be playing on human civilization the next time the lights go out momentarily. That's the space to cover in thinking your way to draft 2.
