Law in the Internet Society

RaulMazzarellaSecondEssay 3 - 12 Jan 2020 - Main.EbenMoglen

Conclusion

This paper discussed the development of AI and its potential use and misuse in relation to mass surveillance. This technology is improving rapidly and spreading across the globe, with no clear countermeasures to date. People have therefore begun to worry, and policymakers, academia, the companies that develop this technology, and the general public should act before it's too late. At the end of the day, protecting the human right to privacy is a responsibility that society as a whole owes to all of us.


This is a pretty good start at collecting some material. But it's just some Googling.

A central problem is AI-hype-avoidance, which you don't do well enough. All this AI promotion is a little grain and a lot of bullshit. You quote Snowden without hearing what he's saying. "General artificial intelligence," which he rightly describes as still in the future (as it has been since very smart, capable people began trying to convince me it was around the corner nearly half a century ago) is not the stuff being called AI right now. That's made of something also called "machine learning" (which it also isn't), which means pattern recognition. Building programs of a kind called "neural networks" and "training" them to approximately recognize complex, shifting patterns in huge quantities of data generated by dynamic processes is not a small achievement by any means. It can be used to find small tumors in vast amounts of CAT scan imagery. It can help to find pickpockets in train station crowds. It can make buildings operate much more energy-efficiently. It can do small wonders for public transport resource planning. But it isn't "general artificial intelligence," which is Snowden's point.

What would most help to improve the draft is to be inside the ideas, instead of outside them. More reading of real sources, not Googling for quotes, is in order. Then perhaps we can ask harder questions. Why is predicting the daily life of predictable people more dangerous than keeping everyone under real-time surveillance all the time? What does "artificial intelligence" add to the danger posed by fully-surveilled learning, which you do now every time you take a course mounted on this university's Canvas software, without "AI"? In other words, how does our policy analysis get sharper if we add a hypodermic full of AI concern?


Revision 3 - 12 Jan 2020 - 14:14:21 - EbenMoglen
Revision 2 - 07 Dec 2019 - 04:51:08 - RaulMazzarella