Law in the Internet Society

From Intelligence to Influence

-- By JustinFlaumenhaft - 25 Nov 2020

The History of Artificial Intelligence (AI) and its Limitations

The Origins of AI

The first conference dedicated to the study of “artificial intelligence” was held at Dartmouth College in the summer of 1956. It was a small conference, attended by just eleven people, but it proposed an enormous undertaking:
 The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves [1].
Thus, the field of artificial intelligence was born. It became clear, however, that the project of endowing a machine with intelligence would require much more than a single summer to accomplish. In the years that followed, AI researchers set out to conquer many different domains of human competence with computers. Early successes in machine-driven translation and pattern recognition stoked the optimism of the AI research community. Marvin Minsky, head of the MIT AI Lab, announced that “within a generation we will have intelligent computers like HAL in the film, 2001” [2].
 

The Gadfly of AI

Among the burgeoning field of AI’s staunchest critics was Hubert Dreyfus. Dreyfus was an unlikely figure to emerge as an AI commentator: he was not only a philosopher, but a continental philosopher, interested primarily in existentialism and phenomenology. This was an obscure area of expertise even by the standards of his philosopher colleagues. It is hardly surprising, then, that the AI community paid little heed to Dreyfus’s criticism—many derided it as foolish [3].
  In Dreyfus’ view, the AI researchers fundamentally misunderstood the phenomenon they were attempting to emulate. According to Dreyfus, the AI researchers tended to think of the human mind in much the same way as they thought of computers: as “general-purpose symbol manipulators.” On this view, the human mind was continuous with a simple digital calculator. Both worked by processing information, in the form of binary bits (via neurons or transistors), according to formal rules. This view contemplated a world organized neatly into a set of independent, determinate facts and governed by strict rules—the perfect substrate for a computer-like mind [2].
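
To make that picture concrete, here is a minimal Python sketch of what “general-purpose symbol manipulation” amounts to; it is invented for illustration, not drawn from any historical system. Facts are inert symbols, knowledge is a list of explicit if-then rules, and reasoning is the mechanical application of those rules:

    # A toy symbolic-AI program: all "knowledge" is an explicit list of
    # if-then rules over symbols, and "reasoning" is mechanical rule-chaining.
    # The rules and facts are invented purely for illustration.
    RULES = [
        ({"socrates_is_human"}, "socrates_is_mortal"),
        ({"socrates_is_mortal", "all_mortals_die"}, "socrates_will_die"),
    ]

    def forward_chain(facts):
        """Derive every symbol the rules permit, pass after pass."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return known

    print(forward_chain({"socrates_is_human", "all_mortals_die"}))
    # prints a set including "socrates_is_mortal" and "socrates_will_die"

The program derives “socrates_will_die” without knowing anything about Socrates or death; whatever the programmer has not anticipated and formalized simply does not exist for it. Dreyfus’s contention, developed below, was that relevance and common sense cannot be exhaustively spelled out this way.
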
Drawing from phenomenology, Dreyfus highlighted some crucial differences between humans and computers. Dreyfus stressed that humans, unlike computers, are embodied beings that participate in a world of relevance, meaning, and goals. On Dreyfus’ view, these characteristics of human existence were essential to human-like intelligence. He doubted that a disembodied machine detachedly manipulating symbols and following instructions could exhibit genuinely intelligent behavior and predicted that the AI research program would soon face insurmountable obstacles if it continued on its course [2].
 

Tree Climbing with One's Eyes on the Moon

By the mid-1970s, after enduring a decade of ridicule, Dreyfus seemed to have been vindicated. The early successes in areas like machine translation and pattern recognition were followed by significant stagnation. In the realm of machine translation, the fuzzy line between semantics and syntax proved to be a serious challenge for computers. Formal, explicit instructions for computers faltered in the face of ambiguity, vagueness, and complexity.
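
A toy sketch suggests why. The word-for-word “translator” below is a deliberate caricature, invented for illustration rather than reconstructed from any real system; it applies one purely formal substitution rule and therefore cannot notice that a word demands different renderings in different contexts:

    # A toy word-for-word English-to-French "translator": one formal rule,
    # applied uniformly. The mini-dictionary is invented for illustration.
    LEXICON = {
        "the": "le", "box": "boîte", "was": "était", "in": "dans",
        "pen": "stylo",  # "pen" can equally mean an enclosure ("enclos")
    }

    def translate(sentence):
        # Substitute each word independently: syntax without semantics.
        # (Even gender agreement, "la boîte", is beyond such a rule.)
        return " ".join(LEXICON.get(w, "?") for w in sentence.lower().split())

    # Bar-Hillel's famous pair: in "the box was in the pen", "pen" must mean
    # an enclosure, yet both sentences come out with "stylo". Choosing
    # correctly requires knowing how the world works, not more grammar.
    print(translate("the pen was in the box"))
    print(translate("the box was in the pen"))
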
  Promising initial results had buoyed unrealistically high expectations about the progress of AI research. While Dreyfus acknowledged the ingenuity of the AI researchers' work, he suggested that their efforts had brought them no closer to artificial intelligence than climbing a tree brought one closer to the moon. The quixotic quest to formalize all of human understanding and knowledge—a pursuit stretching back to Plato—had reached a dead end.
From Symbol Manipulation to Behavior Manipulation

The Rise of Big Data
The preceding history is useful for putting contemporary AI into perspective. Amid calls to take precautions against super-intelligent AI, it is worth bearing in mind the history of overzealousness about the capabilities of AI, as well as AI’s proven limitations. The real threat posed by “AI” is not the fantasy of super-intelligence, but rather the use of the technology by surveillance capitalists to mine users’ data and influence their behavior.
The ultimate disillusionment with the grand AI ambitions hatched at the Dartmouth conference led the field of AI in a different direction. Interest turned from symbolic AI to perceptrons—loosely modeled on neurons—and ultimately gave rise to machine learning models like artificial neural networks, which define the contemporary paradigm of AI. These models mark a distinct shift from symbolic AI. Instead of utilizing a set of predetermined, explicit rules, artificial neural networks “learn” by being fed large volumes of data. These algorithms use statistical and mathematical methods to identify patterns in the data through a process of iterative adjustments.
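
The shift can be made concrete with the simplest such model, a single perceptron. In this minimal sketch (the data is synthetic and the numbers are invented for illustration), no rule about the task appears anywhere in the program; after training, the model’s entire “knowledge” consists of three fitted numbers:

    import random

    # A single perceptron, the 1950s' "loosely neuron-like" unit from which
    # modern neural networks descend. No rule about the task is programmed;
    # the model fits numeric weights to examples by iterative adjustment.
    # The data is synthetic: label a point 1 when x1 + x2 > 1, else 0.
    random.seed(0)

    def make_example():
        x1, x2 = random.random(), random.random()
        return (x1, x2), int(x1 + x2 > 1.0)

    data = [make_example() for _ in range(200)]

    w1 = w2 = bias = 0.0
    lr = 0.1  # learning rate: the size of each adjustment

    for epoch in range(25):
        for (x1, x2), label in data:
            predicted = int(w1 * x1 + w2 * x2 + bias > 0)
            error = label - predicted
            # The whole of "learning": nudge the weights after each mistake.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error

    correct = sum(int(w1 * x1 + w2 * x2 + bias > 0) == label
                  for (x1, x2), label in data)
    print(f"w1={w1:.2f} w2={w2:.2f} bias={bias:.2f} "
          f"accuracy={correct / len(data):.0%}")

That the learned behavior lives wholly in weights fitted to examples is why, in this paradigm, the scarce resource is no longer programmer cleverness but data.
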
 
If symbolic AI relied upon the cleverness of its programmers, machine learning relies equally upon the quantity of its training data: machine learning models require vast volumes of training data to work well. For example, Google’s AlphaGo, which famously won four out of five go matches against the reigning go world champion, was trained on “30 million board positions from 160,000 real-life games taken from a go database” [4]. While mastery of go is a notable achievement, it provides scant evidence of human-like general intelligence. A machine learning algorithm can glean patterns from large datasets to accomplish narrowly defined tasks, but it does not have anything like a mind of its own. AlphaGo cannot even be said to know that it is playing go.
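
That dependence on volume shows up even in the simplest learner. In the sketch below (the task and the sample sizes are invented for illustration), a nearest-neighbor classifier improves merely by being handed more examples; the algorithm itself never changes:

    import random

    # A 1-nearest-neighbor classifier on an invented task (label = point
    # lies inside the unit circle). The algorithm never changes; only the
    # amount of training data it is handed does.
    random.seed(1)

    def example():
        x, y = random.uniform(-1.5, 1.5), random.uniform(-1.5, 1.5)
        return (x, y), int(x * x + y * y < 1.0)

    def predict(train, point):
        px, py = point
        # Copy the label of the closest stored example: pure memorization.
        _, label = min(train,
                       key=lambda ex: (ex[0][0] - px) ** 2 + (ex[0][1] - py) ** 2)
        return label

    test = [example() for _ in range(500)]
    for n in (10, 100, 5000):
        train = [example() for _ in range(n)]
        accuracy = sum(predict(train, p) == lbl for p, lbl in test) / len(test)
        print(f"{n:>5} training examples -> accuracy {accuracy:.0%}")

Scaled up from toy geometry to behavioral prediction, the same appetite for examples is what gives user data its commercial value.
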
 
However, companies like Facebook and Google, searching for more profitable business models, looked to machine learning algorithms to turn their vast stores of data into valuable commodities to be sold to advertisers. Thus, the quest to build computers that emulated human thought ended not with intelligent computers, but with computers that preyed on human intelligence by monitoring and influencing it.
  [1] http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf
[2] Dreyfus, Hubert L. What Computers Can't Do: The Limits of Artificial Intelligence. The MIT Press, 1984.
[3] Dreyfus, Hubert L. “Standing Up to Analytic Philosophy and Artificial Intelligence at MIT in the Sixties,” Proceedings and Addresses of the American Philosophical Association, Vol. 87 (November 2013), pp. 78–92.
[4] https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/

Only if the desires of AI researchers are what "fuel" such developments. More likely, and in fact what historically I saw happen, was that companies who had acquired lots of data but had very narrow business models tried to find other ways to monetize the data by processing it further, ending up with what we see in the form of presently-existing surveillance capitalism. This highly processed human behavior pattern matching is to general artificial intelligence what Cheez Whiz and Velveeta are to scalable artisanal cheese-making.

The history is very useful in explaining what's happened, as is usual with history. Some remarks on the structure of other "AI" "successes" such as programs that win chess and go games without knowing that chess and go are games, or that they are playing, would be helpful. This helps to show what Dreyfus was talking about.

But it would also strengthen the draft to discuss what the CS people who didn't believe in AI did believe in, and what they did with it. What Seymour Papert was doing with Logo, what Sherry Turkle's thinking and writing achieved, how that affected what Richard Stallman and I were thinking, and what we did with it—the other story that runs alongside the AI idea is about the humanities and the technology. Its finest day has not yet come, either.
 
