Computers, Privacy & the Constitution

Consequences of the Rise of AI Power

Big Company Dominance & Data Privacy:

AI concentrates power in the hands of those privileged enough to decide how technology should be applied.

Why is this particularly true of machine learning software as opposed to other software? If "AI" is a synonym for software generally, what has changed? If it isn't, then we need an explanation.

However, users have to succumb to these rules in order to share important moments of their lives, connect with family members, and celebrate milestones. The unchecked growth of AI systems gives companies access to moments they should never know about; whether we like it or not, by the very act of using these products, we are giving tech billionaires a window into our personal lives.

I don't understand the relationship between "AI" and "moments." Software that reads X-rays looking for images that might be tumors? Software that transcribes lectures? Software that predicts weather based on past weather measurements without atmospheric physics simulations? Or are we talking about behavior collection occurring not through "AI" but because people use centralized "social media" to do their messaging and small data sharing? If it's this last, what has that to do with "AI" at all?

Big Tech companies have become an “additional uninvited family member” that has all the details of our family gatherings and can choose to spread or use that information as it pleases. These companies may be worse than an uninvited guest, though, because they also carry a great deal of metadata about how we operate. This is even more dangerous in the case of children’s privacy, as many parents do not realize how much their social media posts endanger their children. Most people do not read the terms and conditions either, because we are in a hurry to use these tools. Alternative tools struggle to gain traction, and the barriers to entry appear high.

What is the barrier to entry involved in setting up a web server and doing your sharing on it? Cost? Regulatory friction? Time? I can teach you how to do it in a couple hours using hardware fished from the trash, software that's free, and any form of Internet connectivity.
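For concreteness, here is roughly what that looks like (a minimal sketch, and my illustration rather than anything specified above: it assumes only Python 3 and that the files you want to share sit in the directory where the script runs; the port number 8080 is an arbitrary choice):

    # Serve the files in the current directory over plain HTTP.
    # Assumptions: Python 3 is installed and port 8080 is free; this is
    # an illustrative sketch, not a hardened production setup.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    server = HTTPServer(("", 8080), SimpleHTTPRequestHandler)  # listen on every interface
    server.serve_forever()  # answer requests until interrupted

Run from the directory holding your photos, this makes them reachable by anyone who can connect to your machine; a free dynamic-DNS name pointed at your Internet connection is the only remaining piece.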

Job Scarcity:

Another negative consequence of the rise of AI is job displacement. Even as entering lawyers, we find our profession being questioned, given the efficiency with which AI tools can conduct legal research.

No, there is no way for machine learning systems to practice law. That's some sort of scary nonsense being peddled by people who have something to sell, but if you have actually learned how to learn lawyering (which may not be happening in a law school that has destroyed its library, I grant you), you would know why this is just a monster under the bed to frighten children with.

There is a human element to client-lawyer interaction, which gives litigators hope. For transactional practice, however, AI is superior at reviewing and modifying contracts without error.

Proof? Evidence of any kind? Who says?

As we continue to use AI for increasingly complicated tasks, we run the risk that people will stop seeing the value of learning and skill development. The most important reason skill development is emphasized is employment. However, if AI can replace humans at their jobs, interest in learning will decline. This is unfortunate for building an educated society, one that can stand up for social justice. We cannot expect robots to know when to stand up against civil injustice; that is a judgment only humans can make. Taking away the incentives for education will drastically reduce people's ability to stand up for their rights and to have a say in societal events. Otherwise, we will be living at the mercy of the wishes of the technology billionaires mentioned above.

Fraud:

Already, there has been a plethora of AI-enabled fraud targeting senior citizens, stealing their personally identifiable information and their property. Even the best AI tools make mistakes, and society’s over-reliance on them is dangerous because there are no equivalent human-verification requirements. As a result, by the time the errors are discovered, a great deal of damage has been done. A prime example is the wrongful denial of insurance claims, which left many people suffering before they could appeal the erroneous AI decisions. This frustration ultimately culminated in the killing attributed to Luigi Mangione and the public uproar that followed.

Trying to give a technical cause for one man's decision to commit murder seems like a very peculiar form of social analysis.

When we have a society in which technology tools are making decisions, the people who feel their voices are not being heard will grow frustrated.

As opposed to when people other than themselves who do not seem to be listening are making decisions instead? What is the history of the last 5,000 years about, then?

Environmental Impact:

Ironically, cloud-based models are inherently not environmentally friendly.

What's the irony?

With the U.S.'s current exit from the Paris Climate Agreement, we need to be mindful of platforms like ChatGPT, which have vast energy expenditures and place substantial drains on water supplies.

Facts?

The dependency on such platforms does not seem to be decreasing, even with the arrival of China’s DeepSeek.

Why would one software change affect this analysis, if analysis it is?

This will require international negotiation toward moderating the energy these models consume, which is an uphill battle.

Solutions:

Ideally, government would intervene: there should be laws that limit the control Big Technology companies have over users’ private metadata. Users themselves can commit to reducing the time they spend on these apps, though that would likely take a while given the apps' popularity. We need to incentivize people to use other types of apps that do not rely on surveillance.

Furthermore, as a country, we must decide whether we want to adopt the GDPR or other laws governing data privacy. Many state privacy laws are emerging, but the lack of uniformity among them is a challenge. The problem becomes even more difficult with respect to environmental issues, whose gravity is not uniformly agreed upon across nations. It will therefore be important for companies to find ways to restrict the usage of these models.

The way to improve this draft is to put some basis in fact underneath its editorializing. It would help to have others' writing to benefit from, but there is nothing underneath any of the sentences here, which aren't solidly grounded in any technical analysis, any historical writing, any actual legal scholarship, any anything. Why not start from the learning you have done, working outward from the reading that has taught you and showing your readers what you have learned and where you got it? Then perhaps we can see more clearly what we have to reason with.
