Computers, Privacy & the Constitution

Splitting The Red Herring Sea

A Different Model For Cybersecurity

-- By MenahemGrossman - 06 Mar 2015

Introduction

Conventional cybersecurity efforts focus largely on keeping intruders out, using firewalls and similar techniques, and on encrypting data so that it is difficult for thieves to use even when intercepted. However, experience has taught cybersecurity experts never to assume that their networks are impenetrable to determined hackers. I would suggest a different approach to protecting sensitive data, to supplement existing methods. (I would be surprised if no one has thought of this before. It may be that it is unworkable, or that I simply did not look hard enough; but as far as I know, it is my own idea.) My idea is that rather than focusing exclusively on trying to prevent thieves from getting their hands on our data, we can let them help themselves to lots and lots of it, more than they can handle. In other words, I propose to protect data by hiding it like the proverbial needle in a haystack, surrounding it with enormous amounts of real-seeming, but actually false, data.

If we are encrypting our data, how can the data look "like" data that it isn't? If we are not encrypting our data, why did we give up doing something that works in order to depend instead solely upon confetti?

For example, to nullify monitoring of our web searches and browsing habits, bots could be set to enter searches and surf the web endlessly in our names.

You could look at the TrackMeNot add-on for Firefox, which does this.

This could be a simple tool that anyone could download and set to run quietly in the background without much drain on bandwidth, and it would make us seem to data miners like completely omnivorous creatures. Eventually, the data miners would catch on, but the result would be the same: our true activities would be buried in an avalanche of information overload. No doubt they would try to develop algorithms to pick genuine human activity out of the noise, but it should not be too hard to develop a tool that mimics human behavior convincingly, and does so many times over. In fact, I would bet that this already exists somewhere.
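
To make the idea concrete, here is a minimal sketch of what such a noise generator might look like. The search endpoint, word list, and timing are placeholder assumptions for illustration, not a description of TrackMeNot or any deployed tool.

# Illustrative sketch of a query-noise generator: it issues harmless cover
# searches at irregular intervals so that a user's real searches are buried
# in machine-generated ones. Endpoint and word list are placeholders.
import random
import time
import urllib.parse
import urllib.request

SEARCH_ENDPOINT = "https://search.example.com/search"   # placeholder engine
WORDLIST = ["weather", "recipes", "train schedule", "garden tools",
            "local news", "movie times", "used cars", "language classes"]

def fake_query():
    """Send one cover query built from randomly chosen innocuous terms."""
    terms = " ".join(random.sample(WORDLIST, k=random.randint(1, 3)))
    url = SEARCH_ENDPOINT + "?" + urllib.parse.urlencode({"q": terms})
    try:
        urllib.request.urlopen(url, timeout=10).read()
    except OSError:
        pass  # network errors are ignored; this is background noise

if __name__ == "__main__":
    while True:
        fake_query()
        # exponential spacing, roughly one query every 90 seconds on average,
        # so the traffic does not look clockwork-regular
        time.sleep(random.expovariate(1 / 90))

In practice the vocabulary would need to be far larger and the query phrasing far more human, or the cover traffic would be easy to filter out.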

You could have found what you were looking for. So in the next draft, you should.

Technical Difficulties

More complex by several orders of magnitude would be a similar effort to camouflage two-way communications such as e-mail or sensitive documents stored on a company server. The obstacles are technical, sociological, and possibly legal.

Maybe you're looking at the wrong layer. And you aren't really thinking about confetti in relation to encryption. You could, for example, think about adding entropy to VPNs, which is a place where confetti has a high functionality.

The biggest problem is the technical one: if we want to camouflage our true communications by surrounding them with fake ones, we need a way for the recipient to tell the difference and pay attention only to the real messages. (Documents stored on a non-public server are also essentially communications between the creator and the authorized readers, and I expect they could be dealt with in a fundamentally similar way.) This would presumably be accomplished by transmitting a “pointer” message telling the recipient’s email client which messages to ignore and hide from the recipient. (If the messages are stored on Google’s servers, the system would need to be such that the separation and sorting occur only on the recipient’s local machine.) This creates the obvious potential for a snoop to intercept the pointer itself and see through the camouflage.

This circular problem may indeed be insurmountable, but I have a hunch it is not. A first step might be to protect the pointers themselves in the same way, by sending numerous false pointers. In effect, there would be a continuous cycle in which true pointers indicate the subsequent true pointers, while false pointers point to further false pointers. The trick is to get the system rolling by having the sender and recipient on the same page at the inception. This would require some sort of digital handshake at the outset to synchronize sender and receiver to the correct stream of pointers, but from there on, the system would sustain itself. Granted, that initial handshake might still be snooped, but it can be done very carefully, even in person or through some other channel, making it very hard to trace.
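
One way to picture the mechanics, assuming the sender and recipient already share a secret from the initial handshake, is to have each real message carry a tag derived from that secret and a running counter, while decoys carry random tags; without the secret the two look alike. The sketch below is only an illustration of that idea under my own assumptions, not a specification of the pointer scheme.

# Minimal sketch: real messages are marked with a keyed tag over a counter;
# decoys carry random tags. Only a holder of the shared secret can sift the
# real messages out of the stream. Names and scheme are illustrative.
import hmac
import hashlib
import os

SHARED_SECRET = os.urandom(32)   # stands in for the secret agreed at the handshake

def tag_for(counter: int) -> bytes:
    """Tag that marks the counter-th real message."""
    return hmac.new(SHARED_SECRET, counter.to_bytes(8, "big"), hashlib.sha256).digest()

def make_real(counter: int, body: str) -> dict:
    return {"tag": tag_for(counter), "body": body}

def make_decoy(body: str) -> dict:
    return {"tag": os.urandom(32), "body": body}   # indistinguishable without the secret

def sift(messages: list, next_counter: int) -> list:
    """Recipient side: keep only messages whose tags match the expected
    counters. Assumes real messages arrive in order."""
    real = []
    for m in messages:
        if hmac.compare_digest(m["tag"], tag_for(next_counter)):
            real.append(m["body"])
            next_counter += 1
    return real

Rolling the counter forward plays the role of the self-sustaining pointer stream: each genuine message implicitly points to the tag the next genuine message will carry.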

Logistical and Legal Difficulties

The sociological and logistical problem is that when something like this is adopted specifically between a pair of people, it may arouse suspicion and cause more harm than good. The answer is to scale the system up and have a network of people using it with each other, so that they cannot all be pursued. In such a multi-lateral system, however, the handshake problem becomes far more difficult. I still hope that a system could be developed in which a central server enables the process among multiple users by creating a separate stream of pointers for each pair of users who communicate. Of course, trusting an external server with this job is itself a vulnerability. Furthermore, generating a critical mass of users is likely to prove difficult in the present environment of ignorance and apathy regarding security. Finally, creating real-seeming but fake communications and documents is significantly more complex than mimicking browsing and searching patterns, and would need to be tailor-made for different applications. (E.g., hospitals would create fake medical records, Target would create fake customer financial data sheets, etc.) Again, this may be unrealistic in an environment where companies do not even bother encrypting sensitive material.
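
Purely for illustration, decoy records might be generated along these lines; the field names and value pools are invented for the example, and real decoys would have to match the statistical texture of the genuine data far more closely to fool anyone.

# Illustrative sketch of tailor-made decoy data: plausible-looking but fake
# customer records used to bury the genuine ones. All fields are invented.
import random

FIRST = ["Anna", "David", "Maria", "James", "Sara", "Michael"]
LAST = ["Klein", "Rivera", "Chen", "Okafor", "Schmidt", "Carter"]
STREETS = ["Oak St", "Maple Ave", "2nd St", "Park Rd", "Lake Dr"]

def fake_customer_record() -> dict:
    """One fake record shaped like a genuine customer entry."""
    return {
        "name": f"{random.choice(FIRST)} {random.choice(LAST)}",
        "card_last4": f"{random.randint(0, 9999):04d}",
        "address": f"{random.randint(1, 999)} {random.choice(STREETS)}",
        "balance": round(random.uniform(0, 5000.0), 2),
    }

# A table could be padded with, say, a hundred decoys for every genuine row;
# an index kept off the server (or protected like the pointer stream above)
# would record which rows are real.
decoys = [fake_customer_record() for _ in range(100)]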

An additional wrinkle in protecting communication is that government surveillance uses automated processes to search for and flag messages meeting particular criteria. To defeat that selective searching, the fake messages would need to be designed to attract government interest. Sending fake messages with the words “kill” and “Obama” in the same sentence, so as to inundate government surveillance and overtax the government’s ability to follow up on suspicious communication, might violate the law. Even if it is not presently illegal, there would presumably be attempts to legislate in response. Public sentiment would also likely be unsympathetic to efforts designed specifically to blind government anti-terrorism programs. Still, if done more subtly, it would be hard to prove that the fake messages are intended to interfere with the government specifically, and there would then be a strong First Amendment defense.

The primary direction of improvement for the essay is to get in touch with the relevant computer science and current tech, rather than trying to make it all up at once as it goes along.


