AnishaAmurthurFirstPaper 2 - 16 Apr 2025 - Main.AnishaAmurthur
Consequences of the Rise of AI Power
Big Company Dominance & Data Privacy:
AI concentrates power in the hands of those privileged enough to decide how the technology is applied. Users, meanwhile, must submit to these companies' rules in order to share important moments of their lives, connect with family members, and celebrate milestones. The unchecked growth of AI systems gives companies access to moments they should never know about: whether we like it or not, by the very act of using these products, we give tech billionaires a window into our personal lives. Big Tech companies have become an "additional uninvited family member" that knows all the details of our family gatherings and can spread or use that information as it pleases. These companies may be even worse than an uninvited guest, because they also hold vast metadata about how we operate. The danger is greatest for children's privacy, as many parents do not realize how much they endanger their children with their social media posts. Most people do not read the terms and conditions either, because we are in a hurry to use these tools. Alternative tools struggle to gain traction, and the barriers to entry appear high.
Job Scarcity:
Another negative consequence of the rise of AI is job displacement. Even as lawyers entering the profession, we see our work being questioned, given the efficiency with which AI tools can conduct legal research. The human element of the client-lawyer relationship gives hope for litigation practice. For transactional practice, however, AI is increasingly adept at reviewing and modifying contracts with few errors. As we use AI for ever more complicated tasks, there is a risk that people stop seeing the value of learning and skill development. The strongest reason skill development is emphasized is employment; if AI can replace humans at their jobs, interest in learning will decline. That is unfortunate for building an educated society, one that can stand up for social justice. We cannot expect robots to know when to stand up against civil injustice; that is a judgment only humans can make. Removing the incentives for education will drastically reduce people's ability to stand up for their rights and have a say in societal events. Otherwise, we will be living at the mercy of the technology billionaires mentioned above.
Fraud:
There has already been a plethora of AI-enabled fraud targeting senior citizens, stealing their personally identifiable information and personal assets. Even the best AI tools make mistakes, and society's over-reliance on them is dangerous because there are no equivalent human-verification requirements. As a result, by the time errors are caught, much damage has already been done. A prime example is the wrongful denial of insurance claims, which left many people suffering before they could appeal the erroneous AI decisions; this frustration ultimately fed the public uprising surrounding Luigi Mangione. When technology tools make society's decisions, people who feel their voices are not being heard will grow frustrated.
Environmental Impact:
Ironically, cloud-based models are inherently not environmentally friendly. With the U.S.'s current exit from the Paris Climate Agreement, we need to be mindful of platforms like ChatGPT, which have vast energy expenditures and place substantial drains on water supplies. Dependency on such platforms does not seem to be decreasing, even with China's DeepSeek. Addressing this will require international negotiations to moderate the energy consumption of these models, which is an uphill battle.
Solutions:
Ideally, government intervention is warranted. There should be laws limiting the control that Big Tech companies have over users' private metadata. Users themselves can commit to reducing time spent on these apps, though meaningful change would likely take a while given the apps' popularity. We need to incentivize people to use alternative apps that do not rely on surveillance.
Furthermore, as a country, we must decide whether to adhere to the GDPR or other laws governing data privacy. Many state laws are emerging, but the lack of uniformity makes this challenging. The problem becomes even more difficult for environmental issues, the gravity of which is not uniformly agreed upon across nations. Thus, it will be important for companies themselves to find ways to restrict the usage of these models.