Law in the Internet Society

RaulMazzarellaSecondEssay 4 - 27 Jan 2020 - Main.RaulMazzarella

Artificial intelligence and mass surveillance

-- By RaulMazzarella; Final version
 
The technological development
 
Mass surveillance for illegitimate reasons has been a widely discussed topic since Edward Snowden's first revelations in 2013. Now, the topic is becoming even more relevant because of the alleged use of a technology that makes this practice even more powerful: artificial intelligence. Just a few days ago, Edward Snowden was interviewed about this very topic, stating that “the invention of artificial general intelligence is opening Pandora’s Box and I believe that box will be opened.”
 

What is artificial intelligence?

According to Professor Nils J. Nilsson, Artificial Intelligence (“AI”) is the activity devoted to making machines intelligent, and intelligence is the quality that enables an entity to function appropriately and with foresight in its environment. Other renowned experts, such as Marvin Minsky and Seymour Papert, have discussed at length, among other topics, whether computers can be made to think like human beings by giving them structures like human brains. We agree with Professor Papert and Eben Moglen’s knowledge-oriented approach to AI. To borrow an example from Professor Moglen: a computer does a good job playing chess, and it wins chess games, but there are two problems: (1) it does not know that chess is a game; and (2) it does not know that it is playing. So it is important to understand in which respects a machine is intelligent and in which respects it is not. If playing chess involves knowing that chess is a game, the machine is hopelessly failing to do it. We also believe that Nilsson and Snowden are not talking about the “AI” we have now, in which machines are, for example, helping run smartphones and internet search engines, reading video footage to search for specific things (like a particular type of car), or identifying people using facial recognition and 5G networks. That is mere machine learning, neural networks and the like; what they mean is “General Artificial Intelligence,” something much bigger that has not happened yet, in which a machine would have the capacity to understand what it is doing and to generate new ideas about it. This is why, notwithstanding the hype around self-driving cars, they are still at an early stage of development.
 

Its use in mass surveillance

Despite the aforementioned distinction, States are using the “AI” we have today for their own benefit, and specifically for surveillance purposes, with China as a prime example. Some reports say that China uses this technology to collect massive amounts of data on how, where, and with whom ethnic groups spend their time, and that it can eventually feed these data into algorithms that allow it to predict the everyday life of the country's population. On the same note, China has set up the world’s most sophisticated surveillance state in Xinjiang province, tracking citizens’ daily movements and smartphone use.
 
But China is not the only country that has been using “AI”: according to some studies, at least seventy-five countries globally are actively using AI technologies for surveillance purposes.
 

The problems of "AI" and General AI

 
The possible issues of this kind of technology are almost unimaginable. If this easy monitoring culminates in a “General Artificial Intelligence” capable of predicting the behavior of predictable people across the world, as is starting to happen in China and as Snowden fears, we will end up imprisoned by our past actions, predetermined by models of behavior constructed on past experiences. If the prediction model becomes the standard for selecting candidates for jobs, for university admission or even for medical services, the world will enter dangerous territory, where you are predetermined to have or not to have a certain life, not by your own present choices, but by your past choices or the choices of people before you. As Snowden says, we will become prisoners of the past, which would be terrible for freedom of speech, freedom of thought and any other kind of freedom you could imagine.
 
The present “AI” has its problems too. For example, “AI” could replace credit scores and reshape how we get loans, which would bring a democratic society closer to the credit system of the Chinese Communist Party. “AI” could also help autocratic governments obtain and manipulate available truthful information to spread erroneous or false news. These tactics may be used to send personalized messages focused on (or against) specific persons or groups of people. The application of “General Artificial Intelligence” to these kinds of issues would be even more disastrous. The use of this technology raises ethical questions in even the most democratic of countries, and some even believe that it could be a threat to democracy.
 
On the same note, facial recognition practically ends anonymity and privacy, and some experts have expressed concerns about error rates, increased false positives for minority people and algorithmic bias.
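The disparity in error rates that these experts point to can be made concrete with a small sketch. The sketch below uses hypothetical audit data (the group names and numbers are invented for illustration) to show how a facial-recognition system's false positive rate is measured separately for each demographic group, which is how studies detect that one group is misidentified more often than another.

```python
# A minimal sketch, using hypothetical numbers, of measuring a face-matching
# system's false positive rate (FPR) per demographic group.

def false_positive_rate(results):
    """results: list of (predicted_match, actual_match) boolean pairs."""
    false_positives = sum(1 for pred, actual in results if pred and not actual)
    true_negatives = sum(1 for pred, actual in results if not pred and not actual)
    negatives = false_positives + true_negatives
    return false_positives / negatives if negatives else 0.0

# Hypothetical audit data: each pair is (system said "match", truly a match).
# Everyone here is a non-match, so every "match" is a false positive.
group_a = [(False, False)] * 97 + [(True, False)] * 3    # 3 errors in 100
group_b = [(False, False)] * 88 + [(True, False)] * 12   # 12 errors in 100

print(false_positive_rate(group_a))  # 0.03
print(false_positive_rate(group_b))  # 0.12
```

A fourfold gap like this one is the kind of finding behind the concern the essay describes: the same deployed system exposes one group to far more wrongful "matches" than another.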
 

Two possible countermeasures

The political and academic proposal

Relevant controls on the ability of States to use this technology should be applied wherever possible. The problem is not only that States are watching and beginning to predict the behavior of their people; it is how they are doing it and what they can achieve with it. I believe that it is the job of policymakers to prevent the use of this technology from going out of control, as is starting to happen everywhere in the world, and it is the role of academia to warn the general public and policymakers about the risks of applying this kind of technology. It is also the role of the companies that develop this technology to adopt strict ethical codes to avoid its misuse.
 
Policies should focus on preventing the potential damages discussed above and should allow cooperation between humans and machines to maximize efficiency, because we believe that machines will never be able to completely replace human intelligence, which is vastly different. It is important to do this in the right way, and not to let big corporations with their own agendas write the regulations; in particular, we should avoid any limitation of liability for potential damages within these new regulations.
 

Self-education

 

Conclusion

This paper discussed what AI is, the development of the technologies currently called AI, and their potential use and misuse in relation to mass surveillance. Keeping in mind the distinction between what is and what is not AI, we have to be aware that this kind of technology is rapidly improving and growing across the globe, without any clear countermeasures to this date. For this reason, people have begun to worry about it, and policymakers, academia, the companies that develop this technology and the general public should start doing something about it, in the right way and for the right people, before it’s too late.
 

Revision 4r4 - 27 Jan 2020 - 22:21:38 - RaulMazzarella
Revision 3r3 - 12 Jan 2020 - 14:14:21 - EbenMoglen