Law in the Internet Society

Artificial intelligence and mass surveillance

-- By RaulMazzarella; Final version

The technological development

Mass surveillance for illegitimate purposes has been a widely discussed topic since Edward Snowden's first revelations in 2013. The topic is now becoming even more pressing because of the alleged use of a technology that makes this practice far more powerful: artificial intelligence. Just a few days ago, Edward Snowden was interviewed on this very subject and stated that “the invention of artificial general intelligence is opening Pandora’s Box and I believe that box will be opened.”

What is artificial intelligence?

According to Professor Nils J. Nilsson, Artificial Intelligence (“AI”) is the activity devoted to making machines intelligent, and intelligence is the quality that enables an entity to function appropriately and with foresight in its environment. Other renowned experts, such as Marvin Minsky and Seymour Papert, have debated at length whether, among other questions, computers can be made to think like human beings by giving them structures like human brains. We agree with Professor Papert's and Eben Moglen’s knowledge-oriented approach to AI. To use an example given by Professor Moglen: a computer does a good job playing chess, and it wins chess games, but there are two problems: (1) it does not know that chess is a game; and (2) it does not know what it is playing. It is therefore important to understand in which respects a machine is intelligent and in which respects it is not. If playing chess involves knowing that chess is a game, the computer is hopelessly failing to do it.

We also believe that Nilsson and Snowden are not talking about the “AI” that we have now, in which machines are, for example, helping to run smartphones and internet search engines, reading video footage to search for specific things (such as a particular type of car), or identifying people through facial recognition and 5G networks. We are not talking about mere machine learning, neural networks, or the like; we are talking about “General Artificial Intelligence,” something much bigger that has not happened yet, in which a machine will have the capacity to understand what it is doing and to generate new ideas about it. This is why, notwithstanding the hype around self-driving cars, they are still at an early stage of development.
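The distinction can be illustrated with a small, purely hypothetical sketch (not drawn from any real system): a present-day "AI" of the pattern-matching kind simply finds the closest match to a numeric fingerprint and outputs a label, with no notion of what a face, a person, or surveillance is.

    # Minimal sketch of a pattern-matching "recognizer"; data and labels are invented.
    import numpy as np

    # Hypothetical "face embeddings": each row is a numeric fingerprint of a photo.
    known_faces = np.array([
        [0.1, 0.9, 0.3],   # person_a
        [0.8, 0.2, 0.5],   # person_b
    ])
    known_labels = ["person_a", "person_b"]

    def identify(new_face):
        """Return the label of the closest known face (nearest-neighbor match)."""
        distances = np.linalg.norm(known_faces - new_face, axis=1)
        return known_labels[int(np.argmin(distances))]

    # The system "recognizes" only in the sense of minimizing a distance;
    # it has no concept of identity, just as the chess program has no concept of a game.
    print(identify(np.array([0.12, 0.85, 0.31])))  # -> person_a

The point of the sketch is that such a system does exactly one thing, optimize a mathematical criterion, which is intelligence in a narrow sense only.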

Its use in mass surveillance

Despite the aforementioned distinction, States are using the “AI” that we have today for their own benefit, and specifically for surveillance purposes, with China as a prominent example. Some reports say that China uses this technology to collect massive amounts of data on how, where, and with whom ethnic groups spend their time, and that it can eventually feed these data into algorithms that allow it to predict the everyday life of the country's population. On the same note, China has set up the world’s most sophisticated surveillance state in Xinjiang province, tracking citizens’ daily movements and smartphone use.

But China is not the only country that has been using “AI”: according to some studies, at least seventy-five countries around the world are actively using AI technologies for surveillance purposes.

The problems of "AI" and General AI

The possible issues of this kind of technology are almost unimaginable. If this easy monitoring culminates in a “General Artificial Intelligence” capable of predicting the behavior of predictable people across the world, as Snowden fears is starting to happen in China, we will end up imprisoned by our past actions: we will be predetermined by models of behavior constructed from past experiences. If predictive models become the standard for selecting candidates for jobs, university admissions, or even medical services, the world will enter dangerous territory, where you are predetermined to have, or not to have, a certain life, not by your own present choices, but by your past choices or the choices of people before you. As Snowden says, we will become prisoners of the past, which would be terrible for freedom of speech, freedom of thought, and any other kind of freedom you could imagine.

The present “AI” has its problems too. For example, “AI” could replace credit scores and reshape how we get loans, which would bring a democratic society closer to the social credit system of the Chinese Communist Party. “AI” could also help autocratic governments obtain and manipulate otherwise truthful information in order to spread erroneous or false news. These tactics may be used to send personalized messages aimed at (or against) specific persons or groups of people. The implementation of “General Artificial Intelligence” in these areas would be even more disastrous. The use of this technology raises ethical questions in even the most democratic of countries, and some even believe that it could be a threat to democracy.

On the same note, facial recognition practically ends anonymity and privacy, and some experts have expressed concerns about error rates, increased false positives for members of minority groups, and algorithmic bias.

Two possible countermeasures

The political and academic proposal

Relevant controls on the ability of States to use this technology should be applied wherever possible. The problem is not only that States are watching and beginning to predict the behavior of their people; it is how they are doing it and what they can achieve with it. I believe it is the task of policymakers to prevent the use of this technology from going out of control, as is starting to happen everywhere in the world, and it is the role of academia to warn the general public and policymakers about the risks of applying this kind of technology. It is also the role of the companies that develop this technology to adopt strict ethical codes to prevent its misuse.

Policies should focus on preventing the potential damages discussed here and should allow cooperation between humans and machines to maximize efficiency, because we believe that machines will never be able to completely replace human intelligence, which is vastly different. It is important to do this in the right way, and not to let big corporations with their own agendas write the regulations; in particular, we should avoid any limitation of liability for potential damages within these new regulations.

Self-education

It is always possible that the combined efforts of policymakers, academia, and the companies will not be enough, or will not come fast enough, to stop or diminish the evils that come with this technology. For that reason, the general public should be educated about this issue and should try to adopt technological and practical countermeasures wherever possible. I agree with Professor Eben Moglen that these measures should include implementing strong encryption wherever possible, preferring decentralized services, choosing free software modifiable by its users, avoiding the use of technological equipment that could be tracking you, and using private server systems such as FreedomBox. However, in my view, the main responsibility of the general public is to protest and to let their governments know that the right to privacy must not be surrendered in any case.
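As a minimal sketch of what "strong encryption" can mean for an ordinary user, the following example encrypts and decrypts a message with the third-party Python `cryptography` package; the message and the key handling are illustrative only, not a recommendation of any particular tool.

    # Minimal symmetric-encryption sketch (requires: pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, the key must be stored securely
    cipher = Fernet(key)

    token = cipher.encrypt(b"a private message")   # ciphertext safe to transmit
    print(cipher.decrypt(token))                   # -> b'a private message'

Only someone holding the key can read the message, which is the practical sense in which encryption keeps communications out of the reach of mass surveillance.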

Conclusion

This paper discussed what AI is, the development of the technologies currently called AI, and their potential use and misuse in relation to mass surveillance. Keeping in mind the distinction between what is and what is not AI, we have to be aware that this kind of technology is rapidly improving and spreading across the globe, without any clear countermeasures to date. For this reason, people have begun to worry about it, and policymakers, academia, the companies that develop this technology, and the general public should start doing something about it, in the right way and for the right people, before it is too late.
