Law in the Internet Society
Within the discussion of whether informed consent in the privacy context can even be a reasonable framework for regulating collectors of data, I was interested in how users perceive the collection of their data. This isn’t about their reaction when asked to agree to a privacy policy (people aren’t reading those anyway) but rather their reaction when faced with the results of data that was collected and used. What I’ve observed is that users aren’t averse to the actual harvesting of their data so much as to the perception that the collector will do something nefarious with it.

My personal contradiction is illustrated by the way that I view certain apps, namely maps services (I use Google Maps) versus Apple’s Find My Friends. From a rational perspective, these two apps aren’t that different. Both have access to fairly detailed data, including not only my current location but also my speed and direction of movement. If anything, Find My Friends might be less of a privacy liability, since Apple’s revenue stream is less dependent on the selling of data than Alphabet’s is.

Nevertheless, I willingly use Google Maps but consider Find My Friends too “creepy” to sign up for. I think the difference lies in how salient the process of data collection and recording is. If I can see where my friends and family are at all times, it seems totally obvious that Apple also has access to that same information. That same kind of feedback doesn’t exist in Google Maps, where I simply input an address and get navigated to my destination. There’s no indication that Google collects where I’m coming from, going to, and how I get there, even though I’m peripherally aware that it’s happening.

The disparity between how consumers perceive invasions of privacy and how privacy is actually violated has implications for both companies collecting data and users attempting to protect their own data.

For companies, this has led to a push to frame their services as helpful or benign rather than invasive. One article discusses the public response to the use of data by two media streaming companies, Netflix and Spotify. In 2017, Netflix tweeted “To the 53 people who've watched A Christmas Prince every day for the past 18 days: Who hurt you?” and was criticized. Meanwhile, Spotify released playlists for each user of their most played songs along with recommendations based on their listening habits, largely to positive reception. Again, it isn’t apparent why Netflix’s actions are considered “creepier,” given that both companies openly track their users’ viewing and listening habits.

Part of this difference is likely explained by whether users perceive that they’re getting something of value in return for the use of their data. In the above example, Netflix is merely exposing the extent to which it collects granular data, while Spotify has used that data to create something consumers see as valuable: a curated list of their favorite tracks. Users also prefer a close nexus between the act of collection and the company’s response. A 2012 study found that 72% of respondents thought it was appropriate for an online grocery delivery service to use a shopper’s order history to recommend recipes. Conversely, 68% felt it was inappropriate to shop for a product online and then see advertisements for that product on a different website.

For consumers, a critical look at possible privacy liabilities can be meaningful. In this course, we’ve discussed how cell phones are intentionally designed with multiple data-collection features – cameras, microphones, accelerometers, etc. A few weeks after that class, I had the realization that the AirPods I recently bought were no different. What are portrayed as features in Siri integration and the automatic pausing of music when the AirPods are removed are also platforms for further data collection. The auto-pause function is driven by accelerometers and optical sensors that can detect when the earphones are taken out of the ear, but that presumably can be used for other purposes. Meanwhile, Siri integration relies on built-in microphones, which are now positioned inches from the user’s mouth instead of buried in whatever pocket or bag phones are normally kept in. The use case for AirPods makes them more invasive than an iPhone itself. While phones are interacted with briefly and then often put aside, earphones remain out and exposed for hours at a time.

Admittedly, I haven’t radically changed the way in which I interact with technology. The next time I drive, I’ll probably fire up Google Maps and let it tell me whether to stick with local roads or take the interstate. But I do so aware that I’m basically letting Google take a ride in the passenger seat. Is not getting lost worth letting someone else know where I’m going? Some privacy tradeoffs might be worth it to me, but some might not. An optical sensor in the AirPods doesn’t make the audio quality better, and the microphones certainly don’t make the battery last longer. Is it really worth it to give Apple eyes and ears into my personal space just so the music pauses every time I take the earphones out?

Is that really a question? To me, the answer seems so obvious the question is unnecessary. (But of course I do not touch anything made by the King of the Undead, Now Dead, ever.)

The draft has a single point to make: that perceptions of invasiveness are based not on knowledge, but on the "friend or foe" intuitions evolution has embedded in human psychology. Capitalizing on this fallible trust heuristic empowers both commercial pursuit of vast wealth and the authoritarian perfection of despotism in the Chinese style.

But the way of improving all the heuristics of our fallible social perceptions is universal education. Teaching one another what is actually going on helps us to decide whether the interactive mapping in the car is better for us than cultivating our sense of direction and our ability to plan a trip ahead of time, as it helps us to decide whether we should have our music collection on a disk drive at home and stream it to ourselves or give a streaming service access to our emotions from moment to moment, to remember forever. Learning what the technical choices really are, and how our minds actually develop and deploy themselves in relation to the alternatives, means that we can free ourselves from the automatic and dismayingly self-harmful intuitions that, as you say in your collection, had been baked durably in while we were still Australopithecines. If there is a route to improvement in this already good draft, it is to consider the role of education as well as technical brainwashing in the eventual shape of the human spirit under the conditions of presently-existing "social media."


r2 - 18 Jan 2020 - 14:21:12 - EbenMoglen