Law in Contemporary Society

HoDongChyungSecondEssay 4 - 26 May 2023 - Main.HoDongChyung
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Line: 6 to 6
 -- By HoDongChyung - 07 Apr 2023
Added:
>
>

The “How” Matters

 There’s buzz and anxiety about ChatGPT’s ability to replace human lawyers. The chatbot can summarize cases and even draft memos on legal issues with remarkable accuracy and inhuman speed. While ChatGPT’s ability to perform these legal tasks is certainly impressive, the “how” behind its performance illuminates its limitations and thereby invites us to evaluate for ourselves what it means to be an effective lawyer.

ChatGPT’s “Brain”

Changed:
<
<
The primary statistical operations that power ChatGPT are its algorithm and the training for that algorithm. Training involves setting the parameters for the algorithm after referencing large amounts of data to produce test responses and subsequently assessing whether those test responses were indeed accurate. After this assessment, the algorithm tweaks its parameters to improve its performance.
>
>
The primary statistical operations that power ChatGPT are its algorithm and the training for that algorithm. Training involves setting the parameters for the algorithm through a combination of statistical formulas and human feedback.
 

The Algorithm

Changed:
<
<
The specific type of algorithm that powers ChatGPT’s outputs is the transformer architecture. There are several kinds of machine learning processes, including linear regression, nearest neighbors, neural networks, and others. The transformer architecture is a type of neural network, a machine learning process that is modeled after the human brain. In a simple neural network, the algorithm ingests a series of inputs like text, makes some transformations and alterations to it with a formula with weights, and then produces outputs. This formula that performs the alterations to the inputs is called a neuron. In a deeper neural network, one neuron spits out its output to another neuron, which is connected to several other neurons, each doing its own alterations to the input it receives, until eventually it reaches the last layer of neurons that produces final, comprehensible outputs. In ChatGPT’s neural network, these outputs are responses to user prompts. Though the anatomy of this neural network resembles that of the human brain, what these formulas do within each neuron hardly maintain that resemblance.
>
>
The specific type of algorithm that powers ChatGPT’s outputs is the transformer architecture. There are several kinds of machine learning processes, including linear regression, nearest neighbors, neural networks, and others. The transformer architecture is a type of neural network, a machine learning process that is modeled after the human brain. In a simple neural network, the algorithm (1) ingests a series of inputs like text, (2) makes some transformations to that input with a weighted formula, and (3) then produces the desired outputs. The formula that performs these transformations is called a neuron. In a deeper neural network, one neuron passes its output to another neuron, which is connected to several other neurons, each performing its own transformation on the input it receives, until eventually the last layer of neurons produces the final, comprehensible outputs. Though the anatomy of this neural network resembles that of the human brain, what these formulas do within each neuron is hardly human.
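To make the anatomy described above concrete, here is a minimal sketch of a tiny feed-forward neural network in Python. The layer sizes and weights are invented for illustration only; a real network like ChatGPT's has billions of weights that are learned, not randomly drawn.

```python
import numpy as np

def relu(x):
    # a common "activation" that each neuron applies to its weighted sum
    return np.maximum(0, x)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first layer: 4 neurons, each weighing 3 inputs
W2 = rng.normal(size=(1, 4))   # last layer: 1 neuron producing the final output

def forward(inputs):
    hidden = relu(W1 @ inputs)  # each neuron transforms the inputs it receives
    return W2 @ hidden          # the last layer produces the final output

output = forward(np.array([1.0, 0.5, -0.2]))
```

Stacking more layers of `W`-style matrices between the input and the output is what makes the network "deeper."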
 
Changed:
<
<
The primary type of formula that powers each of the neurons is called the “self-attention” technique. This technique consists of taking a sequence of words (i.e. a paragraph), breaking that sequence into individual words (a process called tokenization), and then reducing those words into numerical representations (a process called embedding). These representations aren’t simple numbers; each word is represented by three sets of matrices – multi-dimensional arrays of numbers – which are then multiplied in various ways to produce a final matrix of numbers. This final output represents the model’s inferred meaning of each word within a sentence upon analyzing its relationship with all the other words’ numeric representations. There are other machine learning elements that aid the production of ChatGPT’s output, including a feed-forward neural network and a decoder. Without overcomplicating my description, what all these tools do in tandem is reduce single words into a series of numbers, which in turn undergo a series of mathematical transformations to then approximate the best meaning of each word in the context of a paragraph or a group of words. This precise understanding of each word is what enables ChatGPT to provide its impressively tailored response to a user’s question. Needless to say, this is not how humans respond to questions.
>
>
The primary type of formula that powers each of the neurons is called the “self-attention” technique. This technique consists of taking a sequence of words (e.g., a paragraph), breaking that sequence into individual words (a process called tokenization), and then reducing those words into numerical representations (a process called embedding). These representations aren’t simple numbers; each word is represented through three matrices (the query, key, and value matrices) – arrays of numbers – which are then multiplied in various ways to produce a final output matrix of numbers. This final output represents the model’s inferred meaning of each word within a sentence upon analyzing its relationship with all the other words, which undergo the same numeric transformations. There are other machine learning elements that aid the production of ChatGPT’s output, including a feed-forward neural network and a decoder. Without overcomplicating my description, these tools perform additional mathematical transformations, all of which help approximate the best meaning of each word in the context of a paragraph or a group of words. This precise understanding of each word is what enables ChatGPT to provide its impressively tailored response to a user’s question. Needless to say, this is not how humans respond to questions.
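The self-attention arithmetic sketched above can be written in a few lines. The embeddings and the query, key, and value matrices below are random stand-ins for values a real model learns; the point is only the shape of the computation, in which each word's output is a context-weighted blend of every word in the sentence.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
X = rng.normal(size=(3, d))     # 3 word embeddings, each a row of d numbers

Wq = rng.normal(size=(d, d))    # query matrix
Wk = rng.normal(size=(d, d))    # key matrix
Wv = rng.normal(size=(d, d))    # value matrix

Q, K, V = X @ Wq, X @ Wk, X @ Wv          # three projections of each word
scores = Q @ K.T / np.sqrt(d)             # how strongly each word relates to the others
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1
context = weights @ V                     # each word's meaning, informed by its context
```

Each row of `context` is the model's contextual representation of one word, built from its relationships with all the other words.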
 

The Training

Changed:
<
<
ChatGPT also relies on both unsupervised and supervised learning techniques. The unsupervised part consists of the aforementioned algorithm calculating billions of parameters (i.e. weights and coefficients) for its formulas on its own by ingesting a large amount of textual data – potentially 560 GB of data comprised of books, articles, and other textual materials available on the web.
>
>
ChatGPT’s training consists of both unsupervised and supervised learning techniques. The unsupervised part consists of the aforementioned algorithm tuning billions of parameters (i.e., weights and coefficients) for its formulas on its own by statistically processing a large amount of textual data – potentially 560 GB of data composed of books, articles, and other textual materials available on the web.
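As a crude illustration of setting parameters from raw text alone, the toy model below tunes its "parameters" (simple word-pair counts) purely by reading an unlabeled corpus; no human tells it the right answers. The corpus and the prediction rule are drastically simplified stand-ins for ChatGPT's actual training.

```python
from collections import Counter, defaultdict

# Unlabeled "training data" - no human-written answers, just raw text.
corpus = "the court held that the statute was valid and the court affirmed"
words = corpus.split()

# The "parameters": counts of which word follows which, learned from the text.
counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # guess the follower seen most often in the data
    return counts[word].most_common(1)[0][0]

guess = predict_next("the")  # "court" follows "the" most often in this corpus
```

Real pretraining replaces these counts with billions of neural-network weights, but the objective is similar in spirit: predict the next word from patterns in the text itself.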
 
Changed:
<
<
The supervised part consists of human beings writing desirable responses and ranking model output responses. ChatGPT then tweaks its parameters to more closely comport with these desirable responses and rankings.
>
>
The supervised part consists of human beings writing desirable responses and ranking model output responses. ChatGPT then tweaks its parameters to more closely comport with these desirable responses and rankings. The amount and manner of that tweaking are further controlled by statistical formulas that calculate a “reward” assessing ChatGPT’s current performance; the model then makes small incremental updates to itself to increase that reward.
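This reward-driven tweaking can be caricatured as hill-climbing: score the current parameter with a reward, then keep whichever small adjustment scores higher. The one-parameter reward function below is invented for illustration; in the real system the reward model is itself learned from human rankings, and the updates are computed with far more sophisticated statistics.

```python
def reward(param):
    # invented stand-in: pretend human feedback favors a parameter near 3.0
    return -(param - 3.0) ** 2

param, step = 0.0, 0.1
for _ in range(200):
    # take a small incremental update only if it increases the reward
    if reward(param + step) > reward(param):
        param += step
    elif reward(param - step) > reward(param):
        param -= step
```

After enough small updates, `param` settles near the value the reward favors, which is the basic logic behind tuning the model toward human-preferred responses.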
 
Changed:
<
<
In a nutshell, ChatGPT is a program produces its response to a user’s query through its transformer architecture that recognizes words contextually and using numerical representations of those words. The robustness of this recognition ability is borne from reference to a large database of existing textual data. ChatGPT then adjusts how it performs this pattern recognition based on feedback provided by human beings.
>
>
In a nutshell, ChatGPT is a program that produces its response to a user’s query through its transformer architecture, which recognizes words contextually using numerical representations of those words. The robustness of this recognition ability is born of reference to a large database of existing textual data. ChatGPT then adjusts how it performs this pattern recognition by processing, through statistical formulas, feedback provided by human beings.
 
Changed:
<
<
Thus, ChatGPT doesn’t deduce that two plus two equals four like we do; it rather guesses that the answer is four by computing numerical relationships between the user prompt and its vast data repository to then infer that four is likely the answer.
>
>
Thus, ChatGPT doesn’t deduce that two plus two equals four like we do; it rather guesses that the answer is four by computing numerical relationships between the user prompt and its vast data repository and inferring that four is the likely answer. As Chomsky noted, ChatGPT cannot explain through causal reasoning.
 

ChatGPT vs. Lawyer

Changed:
<
<
Yes, ChatGPT can produce memos on legal issues and research findings on case law. But how it does so is distinctly not human. ChatGPT would produce the words of the memo via contextual guesses on what the words on the memo should be based on textual data available on the web and by performing mathematical calculations on numerical representations of words. By contrast, human beings are producing the words of a memo with an instinctive understanding of words, a range of reasoning skills, and a sensitivity as to how the words we put on the memo will affect a client’s life.
>
>
Yes, ChatGPT can produce memos on legal issues and research findings on case law. But how it does so is distinctly not human. ChatGPT would produce the words of the memo using contextual pattern recognition and by performing mathematical calculations on numerical representations of words. By contrast, human beings are producing the words of a memo with an instinctive understanding of words, a range of reasoning skills, and a sensitivity as to how the words we put on the memo will affect a client’s life.
 
Changed:
<
<
ChatGPT also isn’t capable of human creativity. It may, for example, produce a novel argument for protecting digital security under the equal protection clause but it is only doing so by referencing existing arguments for current fundamental rights and other related discussions. It does not make this argument from an inspired mix of observation, emotion, and imagination – like the creativity behind the long-term litigation strategy that set the stage for overturning national segregation. To the extent that the chatbot propose something new from old information, it is creative. But the phenomenon of human creativity is far more multi-dimensional and intangible than such a reductionist definition.
>
>
ChatGPT also isn’t capable of legal creativity. It may, for example, produce a novel argument for protecting digital security under the equal protection clause but it is only doing so by referencing existing arguments for current fundamental interests and other related discussions. It does not make this argument from an inspired mix of original observation, emotion, and imagination – like the creativity behind the long-term litigation strategy that set the stage for overturning national segregation. To the extent that the chatbot proposes something new from old information, it is creative. But the phenomenon of human creativity is far more multi-dimensional and intangible than such a reductionist definition.
 
Changed:
<
<
Perhaps the world thinks that a successful lawyer does not require a dynamic toolkit of creativity, emotional capacity, and legal reasoning skills. All that matters is the end-product, not the means and as long as we get the right answer or write a good memo, who cares? If we answer “exactly” to this question, ChatGPT isn’t the reason for our anxiety for the future of the legal profession. It is instead the paltry regard with which the world holds lawyers and their role in society.
>
>
Perhaps the world holds that a successful lawyer does not require a dynamic toolkit of creativity, emotional capacity, and legal reasoning skills. All that matters is the end-product, not the means: as long as we get a good memo, who cares? If we answer “exactly!” to this question, ChatGPT isn’t the reason for our anxiety about the future of lawyers. It is instead the paltry regard with which the world holds lawyers and their role in society.
 
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable.

HoDongChyungSecondEssay 3 - 22 May 2023 - Main.HoDongChyung
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Changed:
<
<

ChatGPT, ChatGPT, Who's the Smartest of Them All?

>
>

Should Lawyers Fear ChatGPT?

 -- By HoDongChyung - 07 Apr 2023
Added:
>
>
There’s buzz and anxiety about ChatGPT’s ability to replace human lawyers. The chatbot can summarize cases and even draft memos on legal issues with remarkable accuracy and inhuman speed. While ChatGPT’s ability to perform these legal tasks is certainly impressive, the “how” behind its performance illuminates its limitations and thereby invites us to evaluate for ourselves what it means to be an effective lawyer.
 
Changed:
<
<

An Early Application of Legal Tech

>
>

ChatGPT’s “Brain”

The primary statistical operations that power ChatGPT are its algorithm and the training for that algorithm. Training involves setting the parameters for the algorithm after referencing large amounts of data to produce test responses and subsequently assessing whether those test responses were indeed accurate. After this assessment, the algorithm tweaks its parameters to improve its performance.
 
Changed:
<
<
Legal technology is not new. Heck, the pencil is arguably legal technology because it was a piece of technology that enabled lawyers to write briefs and judges to pen opinions. Our notion of technology has evolved over time from fires and wheels to rockets and disruptive software. But all aforementioned forms fit the dictionary's definition of technology, which is the “practical application of knowledge especially in a particular area.” ChatGPT, a statistical application to generate probably accurate outputs, might feel like a novel form of legal technology but it's better characterized as a difference in degree. In 1990, judge Robert parker, a federal district court judge in the Eastern District of Texas, used crude statistics to try approximately 3000 lawsuits regarding deaths due to asbestos exposure. This became known as the bellwether trials. In order to try the cases within a reasonable timeframe, he selected 160 cases and then allocated them into five disease categories and then determined the average amount of damages for each category based on the cases allocated to those categories. He then determined that these cases were representative of the rest of the cases and assigned the remaining cases to each of these categories. This was an example of the use of statistics - albeit rudimentary averaging and sampling - to facilitate a legal outcome. ChatGPT is an invention that is different mostly in degree; it's built from machine learning, which is a far more advanced form statistics than averaging and sampling.
>
>

The Algorithm

The specific type of algorithm that powers ChatGPT’s outputs is the transformer architecture. There are several kinds of machine learning processes, including linear regression, nearest neighbors, neural networks, and others. The transformer architecture is a type of neural network, a machine learning process that is modeled after the human brain. In a simple neural network, the algorithm ingests a series of inputs like text, makes some transformations and alterations to it with a formula with weights, and then produces outputs. This formula that performs the alterations to the inputs is called a neuron. In a deeper neural network, one neuron spits out its output to another neuron, which is connected to several other neurons, each doing its own alterations to the input it receives, until eventually it reaches the last layer of neurons that produces final, comprehensible outputs. In ChatGPT’s neural network, these outputs are responses to user prompts. Though the anatomy of this neural network resembles that of the human brain, what these formulas do within each neuron hardly maintain that resemblance.
 
Changed:
<
<

A Threat Framework for Legal Tech

>
>
The primary type of formula that powers each of the neurons is called the “self-attention” technique. This technique consists of taking a sequence of words (i.e. a paragraph), breaking that sequence into individual words (a process called tokenization), and then reducing those words into numerical representations (a process called embedding). These representations aren’t simple numbers; each word is represented by three sets of matrices – multi-dimensional arrays of numbers – which are then multiplied in various ways to produce a final matrix of numbers. This final output represents the model’s inferred meaning of each word within a sentence upon analyzing its relationship with all the other words’ numeric representations. There are other machine learning elements that aid the production of ChatGPT’s output, including a feed-forward neural network and a decoder. Without overcomplicating my description, what all these tools do in tandem is reduce single words into a series of numbers, which in turn undergo a series of mathematical transformations to then approximate the best meaning of each word in the context of a paragraph or a group of words. This precise understanding of each word is what enables ChatGPT to provide its impressively tailored response to a user’s question. Needless to say, this is not how humans respond to questions.
 
Deleted:
<
<
WestLaw? is also another example of legal technology but it's not perceived as a threat to the legal profession. When then is legal technology perceived as a threat? While certainly not a comprehensive list, I propose three main factors: exclusivity to the legal profession, the scope of the technology, and the autonomy of the technology.
 
Changed:
<
<
Exclusivity means how much of that technology is exclusively concerned with the legal profession. For example, a fence is arguably legal technology because it was a means to assert possession claims, a domain in property law. However, a fence is not exclusively concerned with property law. It also serves non-legal practical ends like safety. We, therefore, refrain from attaching the label of legal technology onto the fence.
>
>

The Training

 
Changed:
<
<
Scope of the technology means how many areas of the legal profession does the technology touch and with what depth. For example, WestLaw? can assist in a wide range of legal subject matter – intellectual property, criminal law, and even case citation. WestLaw? even provides a synopsis of cases but can't write briefs or memos. As powerful as WestLaw? is, there are limits to its scope for occupying the legal profession. In addition, WestLaw? requires manual inputs in various stages for it to be a legal tool – a user has to input a search query and a user has to apply search filters.
>
>
ChatGPT also relies on both unsupervised and supervised learning techniques. The unsupervised part consists of the aforementioned algorithm calculating billions of parameters (i.e. weights and coefficients) for its formulas on its own by ingesting a large amount of textual data – potentially 560 GB of data comprised of books, articles, and other textual materials available on the web.
 
Changed:
<
<
In other words, WestLaw? is not autonomous. Autonomy, the third threat factor, is the degree in which technology can independently conduct the task at hand. For example, ChatGPT is unprecedentedly autonomous. It can write an articulate poem with just a few words to direct it. Not only is it autonomous but its scope in the legal field is wide. It can take the LSAT and perform at the 95th percentile and it can write a legal brief with coherence but many inaccuracies. ChatGPT isn't exclusively concerned with the legal profession but perhaps that deficiency doesn't quell the threat much when the technology can still so expansively occupy the legal profession.
>
>
The supervised part consists of human beings writing desirable responses and ranking model output responses. ChatGPT then tweaks its parameters to more closely comport with these desirable responses and rankings.
 
Changed:
<
<

ChatGPT Rings a False Alarm of Fear

>
>
In a nutshell, ChatGPT is a program produces its response to a user’s query through its transformer architecture that recognizes words contextually and using numerical representations of those words. The robustness of this recognition ability is borne from reference to a large database of existing textual data. ChatGPT then adjusts how it performs this pattern recognition based on feedback provided by human beings.
 
Changed:
<
<
I am excited about ChatGPT's impact on the legal profession. There's a lot of fear behind it but I think the fear betrays the limited confidence we hold of the human mind. As Jaron Lanier remarked, just because a car runs faster than we do, we don't say it's a better runner than us. Although artificial intelligence possesses a very different nature than a car, the analogy still holds – just because a machine can produce outcomes faster, more accurately, and more prolifically, that doesn't make it better than the human mind. A lawyer's fear of ChatGPT is proportionate to the low esteem he holds of legal competencies. Human legal competency is comprised of more than the breadth of legal matters or how many legal tasks can be done independently. Success in the legal profession includes understanding people and harnessing emotions productively for representation. The Robinsons of the world battle for their clients; a soulless machine does not. The legal profession, at its core, is about helping people, and machines, axiomatically, cannot do this better than human beings. As Jaron Lanier also noted, “we have to say consciousness is a real thing and there is a mystical interiority to people that's different from other stuff because if we don't say people are special, how can we make a society or make technologies that serve people?”
>
>
Thus, ChatGPT doesn’t deduce that two plus two equals four like we do; it rather guesses that the answer is four by computing numerical relationships between the user prompt and its vast data repository to then infer that four is likely the answer.
 
Changed:
<
<
There are other non-threatening ways to perceive ChatGPT. One way to think about ChatGPT (or its progeny) is that it's simply an interdisciplinary tool. As mentioned above, ChatGPT is really a statistical tool and the legal profession is no stranger to using statistics to aid legal outcomes.
>
>

ChatGPT vs. Lawyer

 
Changed:
<
<
But more pointedly, as Noam Chomsky commented, ChatGPT runs on human-generated data. It looks to news articles, musical lyrics, and case opinions that were generated by human beings to produce these quasi-human outputs. In other words, AI can approximate us only insofar as we enable it to. If we cease to be creative or intelligent because we become complacent with AI's seeming ability to replace these faculties, we not only stunt our own growth but starve AI from growing as well.
>
>
Yes, ChatGPT can produce memos on legal issues and research findings on case law. But how it does so is distinctly not human. ChatGPT would produce the words of the memo via contextual guesses on what the words on the memo should be based on textual data available on the web and by performing mathematical calculations on numerical representations of words. By contrast, human beings are producing the words of a memo with an instinctive understanding of words, a range of reasoning skills, and a sensitivity as to how the words we put on the memo will affect a client’s life.
 
Changed:
<
<
I don't understand the point of the essay. You appear to be creating a theory to explain why something that is not a threat is "perceived as a threat." You then have multiple factors combining to determine whether something is wrongfully perceived as a threat. What use is such a theory?
>
>
ChatGPT also isn’t capable of human creativity. It may, for example, produce a novel argument for protecting digital security under the equal protection clause but it is only doing so by referencing existing arguments for current fundamental rights and other related discussions. It does not make this argument from an inspired mix of observation, emotion, and imagination – like the creativity behind the long-term litigation strategy that set the stage for overturning national segregation. To the extent that the chatbot propose something new from old information, it is creative. But the phenomenon of human creativity is far more multi-dimensional and intangible than such a reductionist definition.
 
Changed:
<
<
You do not correctly explain at the close of the draft why ChatGPT is not a threat to lawyers. You don't summarize well Chomsky's explanation of why an artificial general intelligence isn't possible based on a language model alone, nor do you actually describe what a generative large language model can and can't write to assist lawyers. What would be most helpful to the reader is clear technical explanation; she can decide for herself what is and what is not a threat to whom if she has a good understanding of how things work. The next draft should provide that.
>
>
Perhaps the world thinks that a successful lawyer does not require a dynamic toolkit of creativity, emotional capacity, and legal reasoning skills. All that matters is the end-product, not the means and as long as we get the right answer or write a good memo, who cares? If we answer “exactly” to this question, ChatGPT isn’t the reason for our anxiety for the future of the legal profession. It is instead the paltry regard with which the world holds lawyers and their role in society.
 

HoDongChyungSecondEssay 2 - 19 Apr 2023 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="SecondEssay"
Changed:
<
<

ChatGPT? , ChatGPT? , Who's the Smartest of Them All?

>
>

ChatGPT, ChatGPT, Who's the Smartest of Them All?

 -- By HoDongChyung - 07 Apr 2023
Line: 10 to 10
 

An Early Application of Legal Tech

Changed:
<
<
Legal technology is not new. Heck, the pencil is arguably legal technology because it was a piece of technology that enabled lawyers to write briefs and judges to pen opinions. Our notion of technology has evolved over time from fires and wheels to rockets and disruptive software. But all aforementioned forms fit the dictionary’s definition of technology, which is the “practical application of knowledge especially in a particular area.” ChatGPT? , a statistical application to generate probably accurate outputs, might feel like a novel form of legal technology but it’s better characterized as a difference in degree. In 1990, judge Robert parker, a federal district court judge in the Eastern District of Texas, used crude statistics to try approximately 3000 lawsuits regarding deaths due to asbestos exposure. This became known as the bellwether trials. In order to try the cases within a reasonable timeframe, he selected 160 cases and then allocated them into five disease categories and then determined the average amount of damages for each category based on the cases allocated to those categories. He then determined that these cases were representative of the rest of the cases and assigned the remaining cases to each of these categories. This was an example of the use of statistics - albeit rudimentary averaging and sampling - to facilitate a legal outcome. ChatGPT? is an invention that is different mostly in degree; it’s built from machine learning, which is a far more advanced form statistics than averaging and sampling.
>
>
Legal technology is not new. Heck, the pencil is arguably legal technology because it was a piece of technology that enabled lawyers to write briefs and judges to pen opinions. Our notion of technology has evolved over time from fires and wheels to rockets and disruptive software. But all aforementioned forms fit the dictionary's definition of technology, which is the “practical application of knowledge especially in a particular area.” ChatGPT, a statistical application to generate probably accurate outputs, might feel like a novel form of legal technology but it's better characterized as a difference in degree. In 1990, Judge Robert Parker, a federal district court judge in the Eastern District of Texas, used crude statistics to try approximately 3000 lawsuits regarding deaths due to asbestos exposure. This became known as the bellwether trials. In order to try the cases within a reasonable timeframe, he selected 160 cases, allocated them into five disease categories, and then determined the average amount of damages for each category based on the cases allocated to those categories. He then determined that these cases were representative of the rest of the cases and assigned the remaining cases to each of these categories. This was an example of the use of statistics - albeit rudimentary averaging and sampling - to facilitate a legal outcome. ChatGPT is an invention that is different mostly in degree; it's built from machine learning, which is a far more advanced form of statistics than averaging and sampling.
 

A Threat Framework for Legal Tech

Changed:
<
<
WestLaw? is also another example of legal technology but it’s not perceived as a threat to the legal profession. When then is legal technology perceived as a threat? While certainly not a comprehensive list, I propose three main factors: exclusivity to the legal profession, the scope of the technology, and the autonomy of the technology.
>
>
WestLaw? is also another example of legal technology but it's not perceived as a threat to the legal profession. When then is legal technology perceived as a threat? While certainly not a comprehensive list, I propose three main factors: exclusivity to the legal profession, the scope of the technology, and the autonomy of the technology.
 Exclusivity means how much of that technology is exclusively concerned with the legal profession. For example, a fence is arguably legal technology because it was a means to assert possession claims, a domain in property law. However, a fence is not exclusively concerned with property law. It also serves non-legal practical ends like safety. We, therefore, refrain from attaching the label of legal technology onto the fence.
Changed:
<
<
Scope of the technology means how many areas of the legal profession does the technology touch and with what depth. For example, WestLaw? can assist in a wide range of legal subject matter – intellectual property, criminal law, and even case citation. WestLaw? even provides a synopsis of cases but can’t write briefs or memos. As powerful as WestLaw? is, there are limits to its scope for occupying the legal profession. In addition, WestLaw? requires manual inputs in various stages for it to be a legal tool – a user has to input a search query and a user has to apply search filters.
>
>
Scope of the technology means how many areas of the legal profession the technology touches, and with what depth. For example, WestLaw can assist with a wide range of legal subject matter – intellectual property, criminal law, and even case citation. WestLaw even provides synopses of cases, but it can't write briefs or memos. As powerful as WestLaw is, there are limits to how much of the legal profession it can occupy. In addition, WestLaw requires manual inputs at various stages to function as a legal tool – a user has to enter a search query and apply search filters.
 
Changed:
<
<
In other words, WestLaw? is not autonomous. Autonomy, the third threat factor, is the degree in which technology can independently conduct the task at hand. For example, ChatGPT? is unprecedentedly autonomous. It can write an articulate poem with just a few words to direct it. Not only is it autonomous but its scope in the legal field is wide. It can take the LSAT and perform at the 95th percentile and it can write a legal brief with coherence but many inaccuracies. ChatGPT? isn’t exclusively concerned with the legal profession but perhaps that deficiency doesn’t quell the threat much when the technology can still so expansively occupy the legal profession.
>
>
In other words, WestLaw is not autonomous. Autonomy, the third threat factor, is the degree to which a technology can independently conduct the task at hand. For example, ChatGPT is unprecedentedly autonomous. It can write an articulate poem with just a few words to direct it. Not only is it autonomous, but its scope in the legal field is wide. It can take the LSAT and perform at the 95th percentile, and it can write a legal brief with coherence but many inaccuracies. ChatGPT isn't exclusively concerned with the legal profession, but perhaps that deficiency doesn't quell the threat much when the technology can still so expansively occupy the legal profession.
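One way to see how the three factors interact is to encode them as rough scores. The numbers and the weighting below are my own illustrative reading of the framework, not measurements; the weighting simply reflects the essay's suggestion that scope and autonomy matter more than exclusivity.

```python
from dataclasses import dataclass

# Illustrative encoding of the three threat factors, each scored 0 to 1.
@dataclass
class LegalTech:
    name: str
    exclusivity: float  # how exclusively legal the tool is
    scope: float        # breadth and depth of legal tasks it touches
    autonomy: float     # how independently it performs those tasks

def threat_score(t: LegalTech) -> float:
    # One aggregation choice among many: scope and autonomy dominate,
    # since non-exclusivity alone doesn't quell the threat.
    return 0.45 * t.scope + 0.45 * t.autonomy + 0.10 * t.exclusivity

tools = [
    LegalTech("fence", exclusivity=0.2, scope=0.1, autonomy=0.0),
    LegalTech("WestLaw", exclusivity=0.9, scope=0.6, autonomy=0.2),
    LegalTech("ChatGPT", exclusivity=0.1, scope=0.8, autonomy=0.9),
]

ranked = sorted(tools, key=threat_score, reverse=True)
```

Under any weighting that privileges scope and autonomy, ChatGPT ranks as the greater perceived threat despite scoring lowest on exclusivity, which is the framework's point.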
 
Changed:
<
<

ChatGPT? Rings a False Alarm of Fear

>
>

ChatGPT Rings a False Alarm of Fear

 
Changed:
<
<
I am excited about ChatGPT? ’s impact on the legal profession. There’s a lot of fear behind it but I think the fear betrays the limited confidence we hold of the human mind. As Jaron Lanier remarked, just because a car runs faster than we do, we don’t say it’s a better runner than us. Although artificial intelligence possesses a very different nature than a car, the analogy still holds – just because a machine can produce outcomes faster, more accurately, and more prolifically, that doesn’t make it better than the human mind. A lawyer’s fear for ChatGPT? is proportionate to the low esteem he holds of legal competencies. Human legal competency is comprised of more than the breadth of legal matters or how many legal tasks can be done independently. Success in the legal profession includes understanding people and harnessing emotions productively for representation. The Robinsons of the world battle for their clients, a soulless machine . The legal profession, at its core, is about helping people and machines, axiomatically, cannot do this better than human beings. As Jaron Lanier also noted, “we have to say consciousness is a real thing and there is a mystical interiority to people that’s different from other stuff because if we don’t say people are special, how can we make a society or make technologies that serve people?”
>
>
I am excited about ChatGPT's impact on the legal profession. There's a lot of fear behind it, but I think the fear betrays the limited confidence we hold in the human mind. As Jaron Lanier remarked, just because a car runs faster than we do, we don't say it's a better runner than us. Although artificial intelligence is of a very different nature than a car, the analogy still holds – that a machine can produce outcomes faster, more accurately, and more prolifically doesn't make it better than the human mind. A lawyer's fear of ChatGPT is proportionate to the low esteem in which he holds legal competencies. Human legal competency comprises more than the breadth of legal matters covered or the number of legal tasks that can be done independently. Success in the legal profession includes understanding people and harnessing emotions productively for representation. The Robinsons of the world battle for their clients in a way a soulless machine cannot. The legal profession, at its core, is about helping people, and machines, axiomatically, cannot do this better than human beings. As Jaron Lanier also noted, "we have to say consciousness is a real thing and there is a mystical interiority to people that's different from other stuff because if we don't say people are special, how can we make a society or make technologies that serve people?"
 
Changed:
<
<
There are other non-threatening ways to perceive ChatGPT? . One way to think about ChatGPT? (or its progeny) is that it’s simply an interdisciplinary tool. As mentioned above, ChatGPT? is really a statistical tool and the legal profession is no stranger to using statistics to aid legal outcomes.
>
>
There are other non-threatening ways to perceive ChatGPT. One way to think about ChatGPT (or its progeny) is that it's simply an interdisciplinary tool. As mentioned above, ChatGPT is really a statistical tool and the legal profession is no stranger to using statistics to aid legal outcomes.
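To ground the claim that a language model is at bottom a statistical tool, here is a toy bigram model: it counts which word follows which in a corpus and predicts the next word by relative frequency. ChatGPT's transformer is vastly more sophisticated, but the core idea – estimating a probability distribution over the next token from prior human text – is the same. The corpus sentence is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus of human-generated legal-sounding text.
corpus = "the court held that the statute applies and the court affirmed".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    # Return the most frequent successor of `word` and its relative frequency.
    counts = following[word]
    total = sum(counts.values())
    next_word, n = counts.most_common(1)[0]
    return next_word, n / total

best, prob = predict("the")  # "court" follows "the" in 2 of 3 occurrences
```

The difference between this sketch and ChatGPT is one of scale and architecture, not of kind: both produce output by asking which continuation the statistics of human writing make most probable.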
 
Changed:
<
<
But more pointedly, as Naom Chomsky commented, ChatGPT? runs on human-generated data. It looks to news articles, musical lyrics, and case opinions that were generated by human beings to produce these quasi-human outputs. In other words, AI can approximate us only insofar as we enable it to. If we cease to be creative or intelligent because we become complacent with AI’s seeming ability to replace these faculties, we not only stunt our own growth but starve AI from growing as well.
>
>
But more pointedly, as Noam Chomsky commented, ChatGPT runs on human-generated data. It looks to news articles, musical lyrics, and case opinions generated by human beings to produce its quasi-human outputs. In other words, AI can approximate us only insofar as we enable it to. If we cease to be creative or intelligent because we become complacent with AI's seeming ability to replace these faculties, we not only stunt our own growth but starve AI of growth as well.

I don't understand the point of the essay. You appear to be creating a theory to explain why something that is not a threat is "perceived as a threat." You then have multiple factors combining to determine whether something is wrongfully perceived as a threat. What use is such a theory?

You do not correctly explain at the close of the draft why ChatGPT is not a threat to lawyers. You don't summarize well Chomsky's explanation of why an artificial general intelligence isn't possible based on a language model alone, nor do you actually describe what a generative large language model can and can't write to assist lawyers. What would be most helpful to the reader is clear technical explanation; she can decide for herself what is and what is not a threat to whom if she has a good understanding of how things work. The next draft should provide that.

 
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable.
Line: 37 to 43
 
Changed:
<
<
Note: TWiki has strict formatting rules for preference declarations. Make sure you preserve the three spaces, asterisk, and extra space at the beginning of these lines. If you wish to give access to any other users simply add them to the comma separated ALLOWTOPICVIEW list.
>
>
Note: TWiki has strict formatting rules for preference declarations. Make sure you preserve the three spaces, asterisk, and extra space at the beginning of these lines. If you wish to give access to any other users simply add them to the comma separated ALLOWTOPICVIEW list.

HoDongChyungSecondEssay 1 - 07 Apr 2023 - Main.HoDongChyung
Line: 1 to 1
Added:
>
>
META TOPICPARENT name="SecondEssay"

ChatGPT, ChatGPT, Who's the Smartest of Them All?

-- By HoDongChyung - 07 Apr 2023



Revision 4r4 - 26 May 2023 - 21:21:02 - HoDongChyung
Revision 3r3 - 22 May 2023 - 06:32:48 - HoDongChyung
Revision 2r2 - 19 Apr 2023 - 18:19:08 - EbenMoglen
Revision 1r1 - 07 Apr 2023 - 04:21:13 - HoDongChyung