
Classrooms in the Digital Age: The False Equivalence of AI and the Internet

-- By LauraBane - 29 Nov 2024

Introduction

It is becoming increasingly clear that typical use of the Internet and AI has had negative effects on learning, especially among young children. Brain imaging research shows that handwriting facilitates information retention and recall better than typing does for students of all ages. Anecdotally, professors have observed that students who grew up in the so-called ‘digital age’ struggle to finish full-length books and to digest information that has not been synthesized and broken down for efficiency’s sake. And, in perhaps the most stunning display of technology-induced brain rot, students have begun turning to AI en masse to cheat. But is another world possible? In other words, can we integrate the Internet and AI into everyday classroom activities in a way that benefits, rather than harms, students and academia as a whole?

Popular Arguments in Support of AI and Internet Use in Classrooms

The primary argument for encouraging Internet use in connection with schoolwork is that it allows students to (i) engage with a wide variety of topics quickly and cheaply and (ii) discuss these topics with peers around the world without ever leaving the classroom. These benefits are undeniable: the ease with which information can now be posted online, rather than being subjected to the throes of the publishing process, would have been unfathomable a century ago. Going further, some researchers have argued against the prevailing view that book-based learning is most conducive to skill mastery. They claim that learning from Internet-based sources “enable[s] better mastery through distributed (shorter, more frequent) practice rather than massed (longer, less frequent) practice; . . . optimize[s] performance [by] allow[ing] students to learn at their peak time of their day; . . . deepen[s] memory [by] requir[ing] cheat-proof assignments and tests; . . . [and] promote[s] critical thinking [by] necessitat[ing] intellectual winnowing and sifting.”

Proponents of AI use in classrooms and academia at large tend to argue that pushback against AI in such spaces fails to account for the importance of AI literacy: if students are AI literate, then AI will not pose any serious risk to their intellectual development. Some thinkers in this camp claim that allowing children to interact with AI chatbots offers “benefits . . . [that are] similar to [those they gain] from interacting with other people.” They acknowledge, however, that the two kinds of benefit are not identical in every respect: interpersonal interaction alone breeds the kind of “deeper engagement and relationship-building” that is “important for language and social development.” So long as users understand these limitations and set “healthy boundaries” when using AI learning tools, the argument goes, they will not fall prey to the temptation to use those tools as a substitute for independent, critical thought.

Potential Refutations (And Why Equating AI and the Internet in Terms of Utility and Danger Is Intellectually Dishonest)

In my view, AI poses a far greater threat to academia and the learning process than the Internet does. Although both tools allow users to access misinformation and take intellectual ‘shortcuts,’ there is something deeply uncanny and dystopian about AI’s ability to churn out facades of moral, ethical, ideological, scientific, and other inquiries that require critical thought. Even if one were morally scrupulous enough not to use AI tools to cheat, and instead vowed only to use them as virtual debate partners, the results would still be disastrous, because AI is incapable of forming opinions or making persuasive ‘arguments’ that are not thinly veiled amalgamations of random factoids (hence the scare quotes). Linguist and political theorist Noam Chomsky addresses this issue at length in “The False Promise of ChatGPT,” a New York Times essay he co-authored with Ian Roberts and Jeffrey Watumull. There, Chomsky explains that ChatGPT functions as a “lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer,” whereas the human mind uses “small amounts of information” to create broad, novel explanations. AI thus exists as something quasi-machine and quasi-human: it can draw conclusions, unlike a conventional search engine, yet it cannot produce novel thought (something even the dumbest people can do).
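To see what “extrapolating the most likely conversational response” means at its simplest, consider the following toy sketch. It is a minimal, hypothetical illustration only: the tiny corpus and the bigram method are my own assumptions for demonstration, and they bear no resemblance to ChatGPT’s actual scale or architecture. The point it illustrates is Chomsky’s: such a system can only echo the continuations it has already seen; it cannot take a position.

```python
# A deliberately crude sketch of a "statistical engine": a bigram model
# that, given a word, emits whichever word most often followed it in its
# training text. Hypothetical teaching code, not how ChatGPT is built
# (real models use neural networks trained on vast corpora), but the
# core move is the same: pick the most probable continuation.

from collections import Counter, defaultdict

corpus = (
    "utilitarianism is a political philosophy "
    "utilitarianism is immoral because suffering matters "
    "utilitarianism is the pursuit of the collective good"
).split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent successor of `word`."""
    successors = following.get(word)
    if not successors:
        return "<no data>"
    return successors.most_common(1)[0][0]

# The model can only regurgitate patterns present in its training data:
print(most_likely_next("utilitarianism"))  # -> "is"
print(most_likely_next("is"))              # -> "a" (the most frequent continuation)
```

Asked whether utilitarianism is correct, a model of this kind, however enormous, outputs whatever sequence of words is statistically likeliest to follow the question; it never forms the opinion itself.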

If you were to Google the phrase “Is utilitarianism the superior political philosophy?”, you would be met with two types of sources: (i) purely factual sources defining utilitarianism and listing its opposing political philosophies, and (ii) opinion-based sources written by real people (e.g. John Stuart Mill). Telling the two apart is fairly easy: a source stating that “utilitarianism is a political philosophy holding that the collective good should be prioritized above all else” is a factual one, whereas a source arguing that utilitarianism is immoral because the government should not knowingly allow anyone to suffer is an opinion-based one. What’s more, each opinion-based source is the product of someone’s original thought process. Pose the same question to ChatGPT or another AI tool, and the response is likely a blend of fact and opinion, with every opinion being a regurgitation of someone else’s, and no independent thought to be found.

My Proposition

The use of AI tools in academic settings ought to be actively discouraged. Even if one were to use AI solely to gather factual information, an ordinary Internet search can accomplish the same thing without the risk of AI framing its answer in a way that seems deceptively argumentative. Additionally, the risk that young, impressionable students will use AI to write entire essays is too great, and too well established, to ignore. Although the Internet has its own perils (e.g. misinformation, sensationalized information, and distractingly presented information), its benefits far outweigh its risks; the same cannot be said for AI tools. Internet use in connection with schoolwork should be accepted and encouraged, on the condition that students are taught to be media literate, critical of online sources, and aware that some online material designed to maximize efficiency, such as a SparkNotes summary of War and Peace, will prove more harmful than beneficial.
