Law in the Internet Society

Artificial intelligence and mass surveillance

-- By RaulMazzarella; December 6, 2019

The technological development of Artificial Intelligence

Mass surveillance for illegitimate purposes has been widely discussed since Edward Snowden's first revelations in 2013. The topic is now becoming even more pressing because of a new technology that is making the practice more powerful: artificial intelligence.

According to Professor Nils J. Nilsson, Artificial Intelligence (“AI”) is the activity devoted to making machines intelligent, where intelligence is the quality that enables an entity to function appropriately and with foresight in its environment. These intelligent machines are nowadays helping run smartphones, internet search engines, digital voice assistants, and Netflix movie queues. They are also capable of scanning video footage uploaded to the cloud from any CCTV system to search for specific things (such as a particular type of car), of detecting the difference between ordinary behavior and someone acting out of bounds in a bank, and, more generally, of identifying people using facial recognition and 5G networks.

Its use in mass surveillance

States are also using this technology for their own benefit, and specifically for surveillance purposes, with China as a leading example. Some reports say that China uses this technology to collect massive amounts of data on how, where, and with whom ethnic groups spend their time, and that it can eventually feed these data into algorithms that predict the everyday life of its population. On the same note, China has set up the world’s most sophisticated surveillance state in Xinjiang province, tracking citizens’ daily movements and smartphone use. But China is not the only country that has been using AI: according to some studies, at least seventy-five countries globally are actively using AI technologies for surveillance purposes, including smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries), and everything indicates that the spread of AI will keep growing.

The problems of AI

The potential problems of this kind of technology are almost unimaginable. AI facial recognition would practically end anonymity and privacy, and some experts have expressed concerns about error rates and increased false positives for minorities. On the same note, there are fears that algorithmic bias in AI training will have a harmful influence on predictive policing algorithms and other analytic tools used by law enforcement, among other potential problems.

Additionally, AI can help autocratic governments obtain and manipulate truthful information in order to spread erroneous or false news. These tactics may be used to send personalized messages aimed at (or against) specific persons or groups. The use of this technology raises ethical questions even in the most democratic countries, and some believe it could be a threat to democracy itself.

In a recent interview on this very topic, Edward Snowden stated that “before 2013 (…) it was very expensive (…) and that created a natural constraint on how much surveillance was done. The rise of technology meant that, now, you could have individual officers who could now easily monitor teams of people and even populations of people, entire movements, across borders, across languages, across cultures, so cheaply that it would happen overnight.”

Two possible countermeasures

The political and academic proposal

Meaningful controls on the ability of States to use this technology should be applied wherever possible. The problem is not only that States are watching their people, but how they are doing it and what they can achieve with it. According to Snowden, “the invention of artificial general intelligence is opening Pandora’s Box and I believe that box will be opened. We can’t prevent it from being opened. But what we can do is, we can slow the process of unlocking that box (…) until the world is prepared to handle the evils that we know will be released into the world from that box.”

I agree with this approach. It is the task of policymakers to prevent the use of this technology from going out of control, as is starting to happen around the world, and it is the role of academia to warn the general public and policymakers about the risks of its application. It is also the role of the companies that develop this technology to adopt strict ethical codes to avoid its misuse.

Self-education

It is always possible that the combined efforts of policymakers, academia, and companies will not be enough, or will not come fast enough, to stop or diminish the evils that accompany this technology. For that reason, the general public should be educated about this issue and should adopt technological and practical countermeasures wherever possible. I agree with Professor Eben Moglen that these countermeasures should include implementing strong encryption everywhere possible, preferring decentralized services, choosing free software modifiable by its users, avoiding technological equipment that could be tracking you, and using private server systems such as FreedomBox. In my view, however, the main responsibility of the general public is to protest and let their governments know that the right to privacy must not be surrendered in any case.
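The “strong encryption” point above can be made concrete with a toy sketch. The following is a minimal illustration, not a recommendation of any particular tool: a one-time pad, which is provably secure when the key is truly random, as long as the message, and never reused. (All names here are invented for the example; in practice one would rely on a vetted library and protocol rather than hand-rolled primitives.)

```python
import secrets


def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a one-time pad: XOR the message with a freshly
    generated random key of the same length. Secure only if the key
    is random, kept secret, and never reused."""
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return key, ciphertext


def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR with the same key recovers the original plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))


key, ct = otp_encrypt(b"meet at noon")
assert otp_decrypt(key, ct) == b"meet at noon"
```

The point of the sketch is only this: an eavesdropper who sees the ciphertext but not the key learns nothing about the message, which is why widespread encryption raises the cost of the cheap mass surveillance Snowden describes.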

Conclusion

This paper discussed the development of AI and its potential use and misuse in relation to mass surveillance. This kind of technology is rapidly improving and spreading across the globe, without any clear countermeasures to date. For this reason, people have begun to worry, and policymakers, academia, the companies that develop this technology, and the general public should start doing something about it before it is too late. At the end of the day, it is the responsibility of society as a whole to protect the human right to privacy, for all of us.

This is a pretty good start at collecting some material. But it's just some Googling.

A central problem is AI-hype-avoidance, which you don't do well enough. All this AI promotion is a little grain of truth and a lot of bullshit. You quote Snowden without hearing what he's saying. "General artificial intelligence," which he rightly describes as still in the future (as it has been since very smart, capable people began trying to convince me it was around the corner nearly half a century ago) is not the stuff being called AI right now. That's made of something also called "machine learning" (which it also isn't), which means pattern recognition. Building programs of a kind called "neural networks" and "training" them to approximately recognize complex, shifting patterns in huge quantities of data generated by dynamic processes is not a small achievement by any means. It can be used to find small tumors in vast amounts of CAT scan imagery. It can help to find pickpockets in train station crowds. It can make buildings operate much more energy-efficiently. It can do small wonders for public transport resource planning. But it isn't "general artificial intelligence," which is Snowden's point.
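The distinction can be made concrete with a toy sketch: a single perceptron, the simplest "neural network," trained to separate two clusters of points. This is pattern recognition in miniature — a hypothetical illustration invented for this page, not any system described above, and obviously nothing like general intelligence.

```python
import random

random.seed(0)

# Toy training data: two clusters of 2-D points, labeled +1 and -1.
data = [((random.gauss(1, 0.3), random.gauss(1, 0.3)), 1) for _ in range(50)]
data += [((random.gauss(-1, 0.3), random.gauss(-1, 0.3)), -1) for _ in range(50)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias


def predict(point):
    """Classify a point by the sign of a weighted sum — all a
    single artificial 'neuron' ever does."""
    activation = w[0] * point[0] + w[1] * point[1] + b
    return 1 if activation >= 0 else -1


# Perceptron learning rule: nudge the weights toward each
# misclassified point until the pattern is separated.
for _ in range(20):
    for point, label in data:
        if predict(point) != label:
            w[0] += 0.1 * label * point[0]
            w[1] += 0.1 * label * point[1]
            b += 0.1 * label

accuracy = sum(predict(p) == y for p, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The program learns to recognize a statistical pattern in its training data and nothing else; scaled up by many orders of magnitude, that is what today's "AI" surveillance systems do.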

What would most help to improve the draft is to be inside the ideas, instead of outside them. More reading of real sources, not Googling for quotes, is in order. Then perhaps we can ask harder questions. Why is predicting the daily life of predictable people more dangerous than keeping everyone under real-time surveillance all the time? What does "Artificial intelligence" add to the danger posed by fully-surveilled learning, which you do now every time you take a course mounted on this university's Canvas software, without "AI"? In other words, how does our policy analysis get sharper if we add a hypodermic full of AI concern?

r3 - 12 Jan 2020 - 14:14:21 - EbenMoglen