Law in the Internet Society

Voice Activation and Its Fundamental Disagreements with Privacy

-- By MattDial - 29 Nov 2019

A voice-activated digital assistant is a trope of science fiction, a staple of nearly any imagined futuristic society- the utopian vision of technology moves our interaction with it from our hands to our voices. The modern realization of this idea, however, has come with myriad privacy concerns- namely, in order to activate such an assistant with your voice, it has to be listening. Herein I will consider whether a voice-activated digital assistant is possible within a legal framework prioritizing privacy.

"It" has to be listening, and it has to be a third-party service platform, because we don't make this obviously federated service a federated service. If the assistant were running on your server, it would be listening to you within your network, not listening to your and putting its data elsewhere. That would change the calculus significantly.

The Current (Broad) Legal Framework

Assuming operation within a legal system prioritizing privacy is perhaps the first hurdle to overcome. We do not currently operate under an environmental-law model of data privacy, at least here in the United States: privacy harms are treated as discrete transactions between an individual and a company rather than as collective harms to a shared environment. While the Supreme Court has recently held, in the most analogous case, that Fourth Amendment search-and-seizure protections can apply to collected data such as cell-site location information, police can still access that data with a warrant supported by probable cause. Carpenter v. United States, 138 S. Ct. 2206 (2018). Never mind that the digital assistants are collecting this data either way.

Outside the Fourth Amendment, there is the more general concern that our voice data will be used or sold by the developers themselves. Apple and Google have denied doing so, and both offer internal settings to turn off voice activation of their assistants. While there is some litigation seeking to stop automatic audio recording, there has been no definitive ruling. The centralization of the tech industry around the few companies that produce digital assistants, combined with their hesitance to address the underlying privacy concerns, raises the more central question- is there any way to make this technology work?

Potential Solutions

Opting In

As the tech giants producing these products have had more of their eavesdropping habits exposed, they have responded. But their baby steps come in a field in need of huge leaps. Amazon has said that human review of Alexa voice recordings will be opt-in instead of opt-out, but this does not follow an ecosystem legal model of privacy: it lets the device's owner waive rights that belong to more people than the owner alone. If a guest in your home doesn't want their words reviewed by an Amazon employee but you've opted in, too bad for them. Furthermore, it is unclear whether declining to opt in would prevent third-party applications running through the speaker or smartphone from surreptitiously recording voice data on their own. What could help is a form of specifically enforced oversight in which the tech giants disallow third-party apps from accessing microphones and cameras, beyond the current posture of "we can terminate a developer's access if we think they have impermissibly used user data." They have this power now, but they have no incentive to investigate violations by third-party apps or to limit connections to their own servers.

The Kill-Switch

Another proposed solution is a hardware "kill-switch": a switch built into the product that physically disconnects the microphone, camera, or internet connection. But there are several problems with this feature. First, it has so far been offered mostly in smart speakers, with some overtures from HP and Apple toward including a version of it in future laptops. Yet the feature is arguably most needed in smartphones, which follow users everywhere and are therefore the richest target for assembling an all-inclusive picture of a user's data. These features borrow from the hardware developer Purism, which has built kill-switches into its laptops starting in 2014. Purism's implementation was a last line of defense against hackers and malware, and ostensibly the switches in mainstream devices serve the same purpose. But another issue with kill-switches as a privacy solution is that most of the trust problems come from the developers of the devices themselves. Consider Google's Nest home security device, which contained a microphone the company failed to disclose to consumers for two years. Consumers are thus left trusting less-than-trustworthy companies about the efficacy of their kill-switches. Last and most important, while the kill-switch acts as a physical line of defense against unwanted listening and recording, it undermines the entire purpose of a device that takes commands from your voice. To control the device with your voice, you must reconnect the microphone and send your voice data to the developer's servers; the protection negates the product's stated "functionality." A kill-switch is only a physical embodiment of the opt-in method: once you do opt back into the system, you are right back in the underlying privacy mess.

Conclusion

Attempts to balance privacy concerns against the basic functions of a smart speaker's or smartphone's digital assistant fail with or without "protections." As currently implemented, the products function as intended and track your voice data, but afford little to no privacy. Engage a physical barrier like the kill-switch, however, and the product ceases to provide its stated function. The smart speaker or digital assistant is a design incompatible with the basic secrecy and anonymity a private existence requires. These defects are baked into the very design of all these digital assistants, and there is currently no defense against them other than simply not using the devices that contain them. Science fiction predicted a future in which technological commands move out of our hands and into our voice-boxes, but it did not fully predict what would follow along with that future. The basic idea of a digital assistant with which a human user can interact and interface is perhaps still promising, but as for the initial activation method- tried-and-true mechanical activation is the best bet.

Activation is only one aspect of the problem once the assistant is a service delivered centrally from a platform rather than running on a Raspberry Pi of your own. But on the other side of that architecture, the problems don't look too bad at all. So why don't we have federated voice recognition and "skills" that implement service models for voice browsing through personal hardware? It's that Google et al. consume free software and emit some, but don't emit the software that runs their services. A competitor built around designing free software services that also scale for industry, whether in Europe or in India, would be a geopolitical force for change. We've done it before.
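What a federated "skill" might look like is not mysterious. The sketch below is purely illustrative: the phrases and handler names are hypothetical, and it assumes transcribed text arriving from a local recognizer like the one sketched above. Commands are matched and executed on the user's own hardware, and their results stay on the user's own disk rather than a platform's servers.

    # A hypothetical local "skill" registry: recognized phrases dispatch to
    # handlers on the user's own machine. All names here are illustrative.
    from datetime import datetime
    from typing import Callable, Dict

    def tell_time(args: str) -> str:
        return datetime.now().strftime("It is %H:%M.")

    def take_note(args: str) -> str:
        with open("notes.txt", "a") as f:   # stays on the owner's disk
            f.write(args + "\n")
        return "Noted."

    SKILLS: Dict[str, Callable[[str], str]] = {
        "what time is it": tell_time,
        "take a note": take_note,
    }

    def dispatch(utterance: str) -> str:
        # Match the start of an utterance against each registered phrase.
        for phrase, handler in SKILLS.items():
            if utterance.startswith(phrase):
                return handler(utterance[len(phrase):].strip())
        return "No local skill matched."

    print(dispatch("take a note buy milk"))   # -> "Noted."

Nothing in such a design requires a central platform; federation is a business-model choice, not a technical necessity.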

