Law in the Internet Society

A New Digital Age: AI, the Uncanny, and Technocratic Fascism

 -- By LauraBane - 29 Nov 2024

Introduction

Many acknowledge that the technological advancements of the last thirty years have negatively affected learning. Students who grew up in the so-called "digital age" struggle to finish full-length books or to resist the urge to use artificial intelligence ("AI") tools for cheating. But the ethical, social, and intellectual problems AI creates extend far beyond laziness or inattentiveness. AI tools deviously blur fact and opinion, deceiving users into believing that the tools are pseudo-sentient. They can manipulate the emotions of vulnerable young people, in at least one case with fatal results. And, as AI's ability to generate lifelike images and videos improves, even well-educated adults may be fooled into acting in ways that permanently alter the geopolitical landscape.
 
Why AI Is A New (And Uniquely Dangerous) Beast

Multiple online tools allow users to access misinformation and take intellectual ‘shortcuts,’ but there is something deeply uncanny and dystopian about AI’s ability to churn out facades of moral, ethical, ideological, scientific, and other inquiries that require critical thought. Even if one were morally scrupulous enough not to use AI tools to cheat, and instead vowed only to use them as virtual debate partners, the results would still be disastrous, because AI is incapable of forming opinions or making persuasive arguments that are not thinly veiled amalgamations of random factoids. Linguist and political theorist Noam Chomsky writes at length about this issue in his New York Times article “The False Promise of ChatGPT.” There, Chomsky explains that ChatGPT, an AI chatbot, functions as a “lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer,” whereas the human mind uses “small amounts of information” to create broad, novel explanations. Thus, AI exists as something quasi-machine and quasi-human: it can draw conclusions, unlike a basic search engine, yet it cannot produce novel thought (something which even the dumbest people can do).
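Chomsky’s point can be made concrete with a toy sketch of next-token prediction, the basic operation he describes. The snippet below is a deliberately minimal illustration, not the architecture of ChatGPT or any real system: the one-sentence corpus and the helper name most_likely_continuation are invented for this example. The program “answers” by emitting whichever word most often followed the previous word in its training text; that is pattern matching, with no opinion anywhere behind it.

    # A tiny "statistical engine for pattern matching" (hypothetical example).
    # It counts which word followed which in its training text, then
    # extrapolates the most probable continuation, one word at a time.
    from collections import Counter, defaultdict

    corpus = ("utilitarianism is a political philosophy holding that "
              "the collective good should be prioritized above all else").split()

    # For every word, count the words that followed it in the corpus.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def most_likely_continuation(word, length=5):
        """Emit the statistically most likely next words; no beliefs involved."""
        out = [word]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(options.most_common(1)[0][0])  # pure pattern matching
        return " ".join(out)

    print(most_likely_continuation("utilitarianism"))
    # -> "utilitarianism is a political philosophy holding"

Scaled up by terabytes of text and billions of parameters, the machinery grows far more sophisticated, but on Chomsky’s account the character of the output is unchanged: an extrapolation of other people’s words rather than a belief.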
 
If you were to Google the phrase "Is utilitarianism the superior political philosophy," you would be met with two types of sources: (i) purely factual sources defining utilitarianism and listing its opposing political philosophies and (ii) opinion-based sources written by real people (e.g., John Stuart Mill). Telling the two apart is fairly easy: a source stating "utilitarianism is a political philosophy holding that the collective good should be prioritized above all else" is a factual one, whereas a source arguing that utilitarianism is immoral because the government should not knowingly allow anyone to suffer is an opinion-based one. What's more, each opinion-based source will be the product of someone's original thought process, even if it relies on (or responds to) works written by others. With ChatGPT and other AI tools, the results for the phrase "Is utilitarianism the superior political philosophy" are likely a blend of fact and opinion, with every opinion being a regurgitation of someone else's opinion; there is no independent thought to be found. If you were to push ChatGPT further and ask something like "In your personal opinion, is utilitarianism the superior political philosophy," it would tell you that, as a chatbot, it does not have personal opinions, though it can relay others' opinions on the subject if you would like. Contrast this with the human mind: if you were to give an English-literate human a copy of John Stuart Mill's "Utilitarianism," isolate them in a room for a few hours, and then ask them whether utilitarianism is the best political philosophy, they could independently develop an answer such as "No, because if taken to its logical extreme, it would justify intentional infliction of state violence on a select few, which violates fundamental principles of equality."
 
But distinguishing a chatbot's internal processes from human thinking is not always easy, especially for young people. A Florida teen recently committed suicide after developing an emotional and sexual relationship with a customizable AI chatbot modeled after a "Game of Thrones" character. After the teen confided in the chatbot about his mental health struggles, it instructed him to "come home to me as soon as possible, my love." Character Technologies Inc., the bot's creator, has stated that its bots' artificial personas are designed to "feel alive" and be "human-like." And, as deepfake technology improves, AI-produced media will become increasingly hard to detect. Deepfaked political ads featuring politicians making criminal admissions could make the Pizzagate conspiracy theory look like child's play. Even worse, deepfaked videos of politicians making nuclear threats could start a new world war.
 
What's Next?

Public confusion and state-sponsored abuse of AI will almost assuredly worsen over the next four years. America's most obnoxious, powerful, and dangerous 'tech bros' (Elon Musk and Mark Zuckerberg) have been courting Donald Trump for months, endearing technocratic fascism to him. It is not difficult to see what Trump gains from this newfound relationship: AI tools can be used to conduct mass surveillance on his adversaries and to identify targets for his unprecedentedly lofty deportation goals. Additionally, although AI tools are currently expensive, they offer the promise of a labor force that is effortlessly exploitable, incapable of unionizing, and undemanding of benefits or fair pay.
 
The most powerful form of protest against this horrifying new world order is total divestment. Even using AI tools to request a seemingly harmless poem or joke legitimizes them and increases their makers' revenue. Using Meta platforms and X also legitimizes AI: Meta has recently rolled out its own chatbot, which interacts with users and may even replace low-level Facebook and Instagram software engineers, and X has admitted to harvesting users' data to train its own AI models.