Law in Contemporary Society

Should Lawyers Fear ChatGPT?

-- By HoDongChyung - 07 Apr 2023

The “How” Matters

There’s buzz and anxiety about ChatGPT’s ability to replace human lawyers. The chatbot can summarize cases and even draft memos on legal issues with remarkable accuracy and inhuman speed. While ChatGPT’s performance on these legal tasks is certainly impressive, the “how” behind that performance illuminates its limitations and thereby invites us to evaluate for ourselves what it means to be an effective lawyer.

ChatGPT’s “Brain”

ChatGPT is powered primarily by two things: its algorithm and the training of that algorithm. Training involves setting the algorithm’s parameters through a combination of statistical formulas and human feedback.

The Algorithm

The specific type of algorithm that powers ChatGPT’s outputs is the transformer architecture. There are several kinds of machine learning processes, including linear regression, nearest neighbors, neural networks, and others. The transformer architecture is a type of neural network, a machine learning process modeled after the human brain. In a simple neural network, the algorithm (1) ingests a series of inputs like text, (2) transforms those inputs using formulas with adjustable weights, and (3) produces the desired outputs. Each formula that transforms the inputs is called a neuron. In a deeper neural network, one neuron passes its output to another neuron, which is connected to several other neurons, each performing its own transformation on the input it receives, until the last layer of neurons produces the final, comprehensible output. Though the anatomy of this neural network resembles that of the human brain, what the formulas inside each neuron do is hardly human.
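The layered structure described above can be sketched in a few lines of Python. This is a toy illustration, not ChatGPT’s actual code, and the weights are hypothetical numbers chosen only to show how data flows from layer to layer:

```python
import math

def neuron(inputs, weights, bias):
    # A neuron: weighted sum of its inputs, passed through a nonlinearity (tanh).
    return math.tanh(sum(x * w for x, w in zip(inputs, weights)) + bias)

def forward(inputs, layers):
    # Each layer's neurons consume the previous layer's outputs.
    activations = inputs
    for layer in layers:
        activations = [neuron(activations, w, b) for w, b in layer]
    return activations

# Hypothetical hand-picked weights, purely for illustration.
hidden = [([0.5, -0.2], 0.1), ([0.3, 0.8], -0.1)]  # two neurons, two inputs each
output_layer = [([1.0, -1.0], 0.0)]                # one final neuron
print(forward([1.0, 2.0], [hidden, output_layer]))
```

Nothing in this sketch “understands” text; it is arithmetic all the way down, which is the point the essay goes on to make.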

The primary type of formula that powers each of these neurons is the “self-attention” technique. This technique consists of taking a sequence of words (i.e. a paragraph), breaking that sequence into individual words (a process called tokenization), and then reducing those words to numerical representations (a process called embedding). These representations aren’t simple numbers; each word’s embedding is multiplied by three learned weight matrices – multi-dimensional arrays of numbers – producing “query,” “key,” and “value” representations, which are then combined in various ways to produce a final output. This final output represents the model’s inferred meaning of each word within a sentence upon analyzing its relationship with all the other words, which undergo the same numeric transformations. Other machine learning elements aid the production of ChatGPT’s output, including a feed-forward neural network and a decoder. Without overcomplicating the description, these tools perform additional mathematical transformations, all of which help approximate the best meaning of each word in the context of a paragraph or a group of words. This precise representation of each word is what enables ChatGPT to provide its impressively tailored response to a user’s question. Needless to say, this is not how humans respond to questions.
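The query/key/value mechanics can likewise be sketched in Python. This is a minimal illustration of scaled dot-product self-attention only; the embeddings Q, K, and V below are made-up two-dimensional vectors for a hypothetical three-word sequence, not real model weights:

```python
import math

def softmax(xs):
    # Turn raw scores into weights that sum to 1.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    """Each word's query is scored against every word's key; the softmaxed
    scores then weight a sum of the value vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = softmax([dot(q, k) / math.sqrt(d) for k in keys])
        outputs.append([sum(s * v[i] for s, v in zip(scores, values))
                        for i in range(len(values[0]))])
    return outputs

# Hypothetical 2-dimensional representations of a three-word sequence.
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(self_attention(Q, K, V))
```

Each word’s output is just a weighted blend of every word’s “value” vector: contextual meaning as arithmetic.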

The Training

ChatGPT’s training consists of both unsupervised and supervised learning techniques. The unsupervised part consists of the aforementioned algorithm tailoring billions of parameters (i.e. weights and coefficients) for its formulas on its own, based on its formulaic processing of a large amount of textual data – potentially 560 GB of data composed of books, articles, and other textual materials available on the web.

The supervised part consists of human beings writing desirable responses and ranking model outputs. ChatGPT then tweaks its parameters to comport more closely with these desirable responses and rankings. How much, and in what way, ChatGPT tweaks itself is further controlled by statistical formulas that calculate a “reward” assessing ChatGPT’s current performance; the model then makes small, incremental updates to increase that reward.

In a nutshell, ChatGPT is a program that produces its response to a user’s query through a transformer architecture, which recognizes words contextually using numerical representations of those words. The robustness of this recognition comes from a large body of existing textual data. ChatGPT then adjusts how it performs this pattern recognition by processing, through statistical formulas, feedback provided by human beings.

Thus, ChatGPT doesn’t deduce that two plus two equals four like we do; it rather guesses that the answer is four by computing numerical relationships between the user prompt and its vast data repository to then infer that four is likely the answer. As Chomsky noted, ChatGPT cannot explain with causal reasoning.

ChatGPT vs. Lawyer

Yes, ChatGPT can produce memos on legal issues and research findings on case law. But how it does so is distinctly not human. ChatGPT produces the words of the memo through contextual pattern recognition, performing mathematical calculations on numerical representations of words. By contrast, human beings produce the words of a memo with an instinctive understanding of language, a range of reasoning skills, and a sensitivity to how the words we put in the memo will affect a client’s life.

ChatGPT also isn’t capable of legal creativity. It may, for example, produce a novel argument for protecting digital security under the Equal Protection Clause, but it does so only by referencing existing arguments for current fundamental interests and other related discussions. It does not make this argument from an inspired mix of original observation, emotion, and imagination – like the creativity behind the long-term litigation strategy that set the stage for overturning national segregation. To the extent that the chatbot proposes something new from old information, it is creative. But the phenomenon of human creativity is far more multi-dimensional and intangible than such a reductionist definition.

Perhaps the world holds that a successful lawyer does not require a dynamic toolkit of creativity, emotional capacity, and legal reasoning skills. All that matters is the end product, not the means – as long as we get a good memo, who cares? If we answer “exactly!” to this question, ChatGPT isn’t the reason for our anxiety about the future of lawyers. It is instead the paltry regard with which the world holds lawyers and their role in society.

r4 - 26 May 2023 - 21:21:02 - HoDongChyung