Law in the Internet Society

Coding and Controlling Thought

-- By JustinFlaumenhaft - 25 Nov 2020

The History of Artificial Intelligence (AI) and its Limitations

The Origins of AI

The first conference to study “artificial intelligence” was held at Dartmouth College in the summer of 1956. It was a small conference attended by just eleven people, but it proposed an enormous undertaking:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves [1].

Thus, the field of artificial intelligence was born. It became clear, however, that the project of endowing a machine with intelligence would require much more than a single summer at Dartmouth to accomplish. In the years that followed, AI researchers set out to conquer various domains of human competence with computers. Early successes in machine-driven translation and pattern recognition stoked the optimism of the AI research community. Marvin Minsky, head of the MIT AI Lab, announced that “within a generation we will have intelligent computers like HAL in the film, 2001” [2].

The Gadfly of AI

Among the burgeoning field of AI’s staunchest critics was Hubert Dreyfus. Dreyfus was an unlikely figure to emerge as an AI commentator: he was not only a philosopher, but a continental philosopher, interested primarily in existentialism and phenomenology. This was an obscure area of expertise even by the standards of his philosopher colleagues. It is hardly surprising, then, that the AI community paid little heed to Dreyfus’s criticism—many derided it as foolish [3].

In Dreyfus’s view, the AI researchers fundamentally misunderstood human intelligence, the phenomenon they were attempting to simulate. According to Dreyfus, the AI researchers tended to think of the human mind in much the same way as they thought of computers: as “general-purpose symbol manipulators.” On this view, the human mind was continuous with even a simple digital calculator, in that both worked by processing information, in the form of binary bits (via neurons or transistors), according to formal rules. This view contemplated a world organized neatly into a set of independent, determinate facts and governed by strict rules—the perfect substrate for a computer-like mind [2].

Drawing from phenomenology, Dreyfus highlighted some crucial differences between the ways that humans and computers functioned. Dreyfus stressed that humans, unlike computers, are embodied beings that participate in a world of relevance, meaning, and goals. On Dreyfus’s view, these distinct aspects of human existence were essential to human-like intelligence. He doubted that a disembodied machine detachedly manipulating symbols and following instructions could adequately emulate intelligent behavior [2].

In particular, Dreyfus pointed to how human judgments are informed by context-dependent factors whose nuances and indeterminacy would elude even a very comprehensive set of instructions. Dreyfus contended that a person’s very useful sense of what is “relevant” to a particular situation could not be reduced to a system of formal rules. Intelligence also relies upon common sense, social practices, and tacit skills, which are extremely difficult, if not impossible, to commit to rigid rules. Dreyfus predicted that the AI research program would soon face insurmountable obstacles if it continued on its course [2].

Tree Climbing with One's Eyes on the Moon

By the mid-1970s, after enduring significant ridicule, Dreyfus seemed to have been vindicated. The early successes in areas like machine translation and pattern recognition were followed by significant stagnation. In the realm of machine translation, the fuzzy line between semantics and syntax proved to be a serious challenge for computers. When it came to improving chess programs, sifting through the vast number of possible move sequences was difficult without a chess player’s learned intuition for which lines are relevant. Formal, explicit instructions for computers faltered in the face of ambiguity and vagueness [2].

The initial results had buoyed unrealistically high expectations about the progress of AI research. While Dreyfus acknowledged the brilliance of the AI researchers’ work, he suggested that their efforts had brought them no closer to AI than climbing a tree brought one to the moon. The quixotic quest to formalize all of human understanding and knowledge—a pursuit stretching back to Plato—had reached a dead end.

From Symbol Manipulation to Behavior Manipulation

The Rise of Big Data

The preceding history is useful for putting contemporary AI into perspective. Amid calls by high-profile individuals to take precautions against super-intelligent AI, it is worth bearing in mind the history of overzealous predictions about the capabilities of AI, as well as its proven limitations. The real threat posed by “AI” is not the achievement of super-intelligence, but its use by surveillance capitalists to monetize the data extracted from users and to manipulate their behavior.

The eventual disillusionment with the grand AI ambitions hatched at the Dartmouth conference led the field in a different direction. Interest turned from symbolic AI to perceptrons, which were loosely modeled on neurons and ultimately gave rise to machine learning models such as artificial neural networks.
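
As a rough illustration of this shift, and not drawn from the sources cited above, the following Python sketch shows a minimal single-layer perceptron. The function name and toy dataset are hypothetical; the point is that the program acquires its decision rule by adjusting numeric weights against labeled examples rather than by following hand-written rules, which is why models in this lineage depend so heavily on training data.

def train_perceptron(examples, epochs=20, learning_rate=0.1):
    # examples: list of (inputs, label) pairs, where label is 0 or 1.
    n_inputs = len(examples[0][0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in examples:
            # Predict: weighted sum of the inputs, thresholded at zero.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            # Learn: nudge the weights in proportion to the error.
            error = label - prediction
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Toy training data: the logical AND of two binary inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
print(weights, bias)

Running the sketch produces a weight vector and bias that reproduce the logical AND of its two inputs; scaling the same idea up to millions of weights and examples is, loosely speaking, what modern neural networks do.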

Conclusion

These events brought us to our current paradigm of AI, defined by machine learning. If symbolic AI relied upon the cleverness of its programmers, machine learning algorithms rely just as heavily upon their training data. Machine learning requires vast quantities of training data to function adequately. This fact plays a critical role in incentivizing internet companies to extract data from users: Facebook and Google need data to train the algorithms whose services they sell to advertisers.

Thus, in a certain sense, the failures of symbolic AI, by giving way to the alternative approaches offered by machine learning, fueled the demand for data and ushered in a new chapter of surveillance capitalism. The quest to build computers that emulated human thought ended with computer programs used to control human thought.

[1] http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf

[2] Dreyfus, Hubert L. What Computers Can't Do: The Limits of Artificial Intelligence. The MIT Press, 1984.

[3] Dreyfus, Hubert L. “Standing Up to Analytic Philosophy and Artificial Intelligence at MIT in the Sixties.” Proceedings and Addresses of the American Philosophical Association, Vol. 87 (November 2013), pp. 78–92.

