EsmeraldaHernandezFirstEssay 4 - 30 Nov 2024 - Main.EsmeraldaHernandez
Dreaming of something bigger (and something completely obtainable)
-- By EsmeraldaHernandez - 25 Oct 2024
Regardless of the skill or trust with which each of us wields digital technology, engaging with it is almost inevitable. In my past draft, I complained about aspects of my technology use that made going online even more annoying than usual. It is more important, however, to recognize what was at the heart of that first complaint: digital technology does not work in a way that best serves us as a society. This draft instead asks: why has the online environment become a monster we know and tolerate? What is the alternative? Why can't we exist in that other reality? Because I am no computer scientist, I will focus on social media and advertising (if anyone in class has more knowledge of the code behind this media, please feel free to expand on that end).
How should our technology work?
The internet should ultimately be an outlet that empowers society, one that allows the world to work together to solve problems. Now that we are more connected than ever, the democratization of knowledge through the internet is in sight. We could end the education crisis in America, not to mention bridge the gap in communities that lack educational opportunities worldwide.
Through the internet, we could easily distribute educational materials and online teaching tools to youth, and help the best teachers in the world teach students everywhere. Digital technology could enable a single person's voice to reach multiple countries simultaneously, reducing inequality. On an ideal internet, we could collaborate with others and build large problem-solving systems without worrying that someone is taking advantage of us. Even dreaming of the basics, we could connect with friends (or strangers) and exchange conversations or media freely.
Why do we not have what we want?
We want to use digital technology to its fullest extent, as we have with technologies of the past. So why do we not? As Shoshana Zuboff so eloquently put it: “Surveillance capitalism is not the same as digital technology. It is an economic logic that has hijacked the digital for its own purposes.” Surveillance capitalism has infiltrated the digital world to claim the human experience as free raw material to convert into behavioral data, which is sold to others and used to keep users captive.
Surveillance capitalism moved at high speed from the start. In the early 2000s, Google discovered how to turn personal information into predictions of where ads should be placed. By 2013, Facebook could deploy subliminal cues to shape users' real-life feelings and actions, allowing marketers to strike at moments of "maximum vulnerability." These days, even your child's conversations with a toy can be sold to the CIA.
The internet we have, imperfect as it is, is not even accessible to many parts of the world. As of 2023 there is a stark digital divide: although 63 percent of the world's population is connected, some countries can count only 27 percent of their populations as internet users. As new technologies develop, the poor continue to be left out. This digital inequity is one more roadblock to the dream of a worldwide educational and collaborative network, and a plethora of brilliant minds go one more day without the tools they need to bring about interesting developments.
Still, the fire might have started long ago. We can see it in the very existence of terms like "information superhighway" and "the market for eyeballs." We can see it in the discussions around the first broadcast licenses for television.
How can we get closer to having what we want?
To get closer to a world in which we can use the internet as we dream it, without making conscious decisions to protect ourselves from surveillance, we must first reclaim our right to privacy. Zuboff and Moglen advise "front end" and "back end" disruption of the revenue flows of the markets that do business in our data. As a collective, we must speak up in outrage about the mass taking of our data and cry out for the internet we could have, one that could bring about an educational and developmental renaissance. We must advocate for laws that "enshrine respect for the value of personal data." We must make others aware of exactly how much we are losing to surveillance capitalism, and advocate for the distribution of digital technology to those without it. There is little excuse for the lack of distribution when hardware costs are so low that sourcing the technology, or hosting your own digital service, is affordable. It might not be easy, but it will move us closer to the internet we want.
On a personal level, you can stop handing your data to the Zuckerbergs of the world. Put up a website with your social profile on it and share the link with others if you want to. Refuse to partake in exchanges that require apps like Zoom and Google Drive (there is always an alternative). Start a mail server with friends. 'Surf' responsibly, whether by obtaining a FreedomBox or (at the least) installing Firefox and using privacy add-ons. It might seem like a lot of time to devote to extricating yourself from the status quo, but it is well worth the process. To prove that your data is valuable, you must treat it as such.
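To illustrate how low the barrier really is: a self-hosted "social profile" is just a static HTML file. The sketch below (all names and links are hypothetical placeholders, not anyone's real profile) generates such a page; this is one possible minimal approach, not the only way to do it.

```python
# Generate a minimal, self-hosted "social profile" page: one HTML file,
# no trackers, no third-party scripts. All details are placeholders.
from pathlib import Path

profile = {
    "name": "Jane Doe",  # hypothetical example values
    "bio": "Law student interested in privacy and free software.",
    "links": {
        "Email": "mailto:jane@example.org",
        "Essays": "https://example.org/essays",
    },
}

# Build one <li> per link.
links_html = "\n".join(
    f'    <li><a href="{url}">{label}</a></li>'
    for label, url in profile["links"].items()
)

page = f"""<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>{profile['name']}</title></head>
<body>
  <h1>{profile['name']}</h1>
  <p>{profile['bio']}</p>
  <ul>
{links_html}
  </ul>
</body>
</html>
"""

# Write the page; any web server can now hand this file out.
Path("index.html").write_text(page)
```

From the directory containing `index.html`, running `python3 -m http.server 8000` serves the page locally; pointing a domain at a machine you control (a FreedomBox, for instance) makes it public. The point is only that a profile page is a static file anyone can host, with no platform in between.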
Circling Back
In my first draft, I mostly complained about the scraping of my data and the onslaught of AI 'features' that I felt were being forced on me and my peers. It felt like a rant (and it was a little therapeutic to write about what I was dealing with). Still, it was only a rant, with no solutions. We have been discussing the solutions above throughout the semester, but writing them out in essay form helped me realize just how little I needed to do to get out of the situation I complained about in the first place. It was an interesting exercise, and I'm glad I worked through it.
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable.
To restrict access to your paper simply delete the "#" character on the next two lines:
Note: TWiki has strict formatting rules for preference declarations. Make sure you preserve the three spaces, asterisk, and extra space at the beginning of these lines. If you wish to give access to any other users simply add them to the comma separated ALLOWTOPICVIEW list.
EsmeraldaHernandezFirstEssay 3 - 18 Nov 2024 - Main.EsmeraldaHernandez
A Bumbling Giant is Eating my Data: The "Hunger for Necessities" and the Justification for Dispossession
-- By EsmeraldaHernandez - 25 Oct 2024
Google AI Told Me It’s Okay to Eat One Rock a Day
In May 2024, Google announced that it would be rolling out AI overviews, an artificial intelligence powered search feature that would provide users with “information [they could] trust in the blink of an eye.” This feature would create a brief (and compulsory) AI-generated response with each search. This response would provide a summary of the information found across top search results.
When the rollout happened, Google users immediately noticed inconsistencies in the responses. Users reported bizarre overviews suggesting that cheese would stick better if they added glue to their pizza, or that eating at least one small rock per day is recommended for people. These answers were the result of 'hallucinations' by the AI. The glue-on-pizza suggestion, at least, appeared to be sourced from a twelve-year-old sarcastic Reddit post. It was evident that Google's AI Overview was, at the time, incapable of filtering out joke posts and sarcastic comments from the internet.
There’s no question as to why Reddit posts were heavily referenced in Google Overview search results. In early 2024, Reddit signed a $60 million contract with Google to share its content in order to train Google artificial intelligence models. The truth is, nothing can train this so-called artificial intelligence quite like raw, human data. And there is no fountain of data more plentiful than social media. This piece aims to understand why social media users seem to gloss over such blatant abuses of privacy, especially when this particular ‘product’ continuously provides absolutely no value.
The 'Parasite with the Mind of God' Just Got a Little Bolder
AI chatbots and search engines cannot produce real thoughts (obviously). They can, however, mimic human speech, speech drawn from text scraped from the internet. As AI becomes more powerful, users become more aware of their unwilling role in training this world-consuming program, especially with GPT-4 and Google Gemini, which use publicly available information scraped from the internet. Most recently, LinkedIn's User Agreement and Privacy Policy update, taking effect on November 20, revealed that the company had been using user data to train its AI without users' consent. Similarly, a new privacy policy from X, taking effect on November 15, allows the site to share user data with third-party collaborators to train their AI models, and it does not make clear exactly how to opt out of this sharing. Equally concerning are the environmental effects that come with AI usage. Overall, this seems like a raw deal for users. So why do we keep agreeing to it?
Time is Money and We Need Money
Shoshana Zuboff discusses the dawn of the personal digital assistant in The Age of Surveillance Capitalism. There, Zuboff notes that Hal Varian, Google's chief economist, recognized that the "needs of second-modernity individuals [would] subvert any resistance to the rendition of personal experience as the quid pro quo for the promise of a less stressful and more effective life." Google Now's predictive search function allowed the "search engine [to] come to you," a parallel to today's promise that the new AI Overview function will "do the work for you." That is, if all the AI that websites have crammed into their features does its job, users will no longer need to click on links, read a full post, or even do their own research. Users are handed this convenience and fed whatever companies decide to feed them in exchange (and what they choose to feed users is unusable garbage, with tidbits of truth mixed in).
Zuboff writes (using the examples of china, textiles, and the ever-present Model T) that the luxuries of one generation become the necessities of the next, a fundamental feature of the evolution of capitalism. Personal assistants were seen as a need, according to Varian, who assumed that the middle class and poor would ask themselves, "What do rich people have now?" Varian even made it clear that the need for a digital assistant would be so visible that "everyone [would] expect to be tracked and monitored, since the advantages, in terms of convenience, safety and services [would] be so great" (254-259).
Time is a commodity now more than ever. Before the industrial age, life was less rigid, shaped by day-to-day and seasonal needs. Capitalism brought the rigid work schedule and, with it, the need for punctuality and scheduling. For many, life is planned to the hour, if not the minute, and members of the working class find themselves at the mercy of the hourly wage. Phrases like "waste of time" and "against the clock" are commonplace, and they mark the preciousness of this intangible thing.
Time is money, as they say. The upper class already has plenty of money, and so plenty of time. To save themselves the time that would be spent reading an article or searching for sources, a majority of people are content to have their social media data scraped to feed these AI Overviews and "all-knowing" chatbots. The days of writing your own emails and doing your own research could be gone, just like that. Never mind that some of the answers will be wrong or plainly ridiculous; the "hunger for new necessities" justifies it all. As with the personal assistant, people will be content with their social media data being used to train the AI beast, just because the quick answers it provides will save valuable time.
The draft confuses me a little. If I don't participate in bloviating on the platforms (which I don't) so that there is no "social media data to scrape" from me, does that mean that I don't have to think there's a trade-off involved in getting mindless, inaccurate summaries of web pages instead of search results?
The effort to turn everything into a drama of failed consent, or a series of bilateral trade-offs between things we supposedly don't want and things we don't need doesn't produce useful analysis. For reasons I have given more than once in class and in the writings I have assigned, the consent model is inapposite in dealing with environmental problems, which privacy violation and the reduction of intellectual quality in the fake intelligence "revolution" both are.
Let's try a draft in which, instead of starting with what we don't like, we begin from what we want. How should our technology work? What prevents us from making it work that way? I don't have to cope with AI summaries from my search engine if I can search differently. I don't have to use platforms for communication or ancillary services that record my behavior, or if I use those services I don't need to provide behavior in recordable form. If we actually chose our technology to achieve our intentions, which has been the human practice for the last several million years, why can't we have what we want?
EsmeraldaHernandezFirstEssay 2 - 17 Nov 2024 - Main.EbenMoglen
EsmeraldaHernandezFirstEssay 1 - 25 Oct 2024 - Main.EsmeraldaHernandez
This site is powered by the TWiki collaboration platform. All material on this collaboration platform is the property of the contributing authors. All material marked as authored by Eben Moglen is available under the license terms CC-BY-SA version 4.