Law in the Internet Society

What We Want To See and What We Need To See (Second Draft)

 
-- By RochishaTogare - 09 Jan 2022
 
From the moment you download and open TikTok, the algorithm is watching what you watch in order to develop your For You Page. And this isn't unique to TikTok.
 
YouTube does the same in order to develop your Recommended Videos page. Turned off browsing and search history on the platform? It's not going to change much; your IP address, your watch time, your interactions, and the interactions of those connected with you are all still tracked and analyzed. Tech companies trace your metaphorical internet footprints to ensure you have the "best viewing experience."
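For illustration only, here is a minimal sketch of how engagement signals like these might be folded into a per-topic interest score. Every name and weight below is invented; no platform's actual ranking code is public, and real recommender systems are vastly more complex.

    from collections import defaultdict

    def update_profile(profile, topic, watch_seconds, video_seconds,
                       liked=False, shared=False, skipped=False):
        # Completion rate: how much of the video was actually watched.
        completion = watch_seconds / max(video_seconds, 1)
        # Invented weights: a share signals more interest than a like;
        # a skip pushes the topic's score down.
        profile[topic] += completion + 1.0 * liked + 2.0 * shared - 0.5 * skipped
        return profile

    profile = defaultdict(float)
    # A viewer watches 40 of 45 seconds of a cooking video and likes it:
    update_profile(profile, "cooking", watch_seconds=40, video_seconds=45, liked=True)
    # The feed can then sample new videos preferentially from high-scoring topics.

Even this toy version shows why turning off "history" changes so little: the score is built entirely from behavior the platform observes directly, not from any setting the user controls.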

When questioned about why and how it is acceptable for social media platforms to track user data this closely, there are two common responses.

 

“It’s OK, it’s anonymous.”

The first response is that it's anonymized data: data that can't be linked back to the user it came from. Much of the modern world runs on this anonymized data, and while it is true that an individual data point usually can't be traced back to a user, the various pieces of data collected from a user can be combined into a profile that is akin to human DNA, just without the name attached. In fact, there was a whole scandal surrounding photos paparazzi took of celebrities entering taxi cabs in New York. Using the medallion numbers visible in those shots and a supposedly anonymized public release of taxi trip records, an individual was able to identify the cab, the start and end points of each trip, the total fare, and the tip the celebrity paid.
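The taxi episode deserves a technical footnote, because the failure is easy to reproduce. In the widely reported 2014 New York City release, medallion numbers were "anonymized" with a plain, unsalted MD5 hash; since medallions follow a few short, known formats, every possible hash can be precomputed and the masking reversed. The sketch below (function names are mine, and it covers just one of the formats) shows the idea:

    import hashlib
    import string
    from itertools import product

    def all_medallions():
        # One known medallion format: digit + letter + digit + digit, e.g. "5D22".
        # (The reported attack also enumerated the longer letter formats.)
        for d1, letter, d2, d3 in product(string.digits, string.ascii_uppercase,
                                          string.digits, string.digits):
            yield f"{d1}{letter}{d2}{d3}"

    # Precompute hash -> medallion once; this format has only 26,000
    # possibilities, so the table is built in well under a second.
    rainbow = {hashlib.md5(m.encode()).hexdigest(): m for m in all_medallions()}

    def deanonymize(hashed_id):
        # Returns the plaintext medallion for an "anonymized" ID, or None.
        return rainbow.get(hashed_id)

Once the medallion is recovered, matching it against the number painted on the cab in a paparazzi photo ties a named person to a trip, a fare, and a tip.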
 
Taxi cabs and paparazzi photos are decidedly low tech compared to the scale of data that Big Tech companies collect. We all know that the data these platforms collect, anonymized or otherwise, is sold to advertisers; the tradeoff is that we get access to content for free in exchange for this information. That said, even ad-free services we pay for, like Spotify Premium or YouTube Premium, still track our data. In fact, many companies require, or at least say that they require, data in order to operate their services.
 
A right to privacy, a right to honesty

 
If we argue that privacy is a right, just as some argue that access to the world wide web is a right, would things change? We could then say that by jeopardizing this privacy, Big Tech creates harm. We can start simple and pose that the least tech companies should do is protect their users from false information targeted to their data profiles (think the Cambridge Analytica mess), or that, for all the benefit Big Tech gains from user data, it should spend more on protecting its user base. Yet the big four tech companies alone (Amazon, Apple, Facebook, and Google) spent $55 million on lobbying in 2018, and seem instead to have redirected their efforts toward shirking responsibility for the millions of users whose rights they hold in their hands.

Even if we don't want to recognize a right to privacy, we should at least consider the right to the truth, and the harm individuals face when platforms ignore the repercussions of misinformation on their users. Without even delving into the divisive world of politics, there is cause for concern: for years Facebook refused to let individuals request takedowns of photos of themselves. Indeed, even copyright law gives us few rights in our own likeness; a photographer who takes a photo of you owns the photo, despite it being your image. Similarly, for years there was no way to take down a photo someone uploaded of you to Facebook without your consent. Only recently have some states adopted measures to protect victims in cases of revenge porn, and those remain exceptions.

Curated but not curated

 
Platforms escape liability for user-posted content by arguing that they have no responsibility for it: they are not editing or curating, merely distributing what users post. This is their second defense. Different from the past, when, as with newspapers, curators, editors, and publishers were largely one and the same, modern laws provide platforms safe harbor from their users' actions. That is, Facebook is not liable for the content posted on its platform, and neither are TikTok, Instagram, Snapchat, YouTube, or any other social media platform. As of now, there is no clear liability for the way companies use advertising to reach users, and, beyond standard advertising law, these platforms' terms and conditions allow anonymized but otherwise individual data to be used to target users in this manner.
 
 

Is there a solution?

 
We cannot deny that there are conveniences to having our data collected. We open YouTube and get videos likely to interest us, or we scroll through Spotify and find songs that are definitely our jam. Are these conveniences worth it? One possible solution is to place liability on the intent of the algorithm being used (to connect people with friends versus to connect people with advertisers), but intent is hard to regulate, and it is not how individuals and non-tech companies are regulated either. If used at all, intent is but one piece of a larger picture. Regardless, such determinations will undeniably require case-by-case analysis.
 
Ultimately, I believe that for any solution to take hold, we need to value privacy as much as we value convenience. And that value has to come from the user base that is so beholden to these platforms. We need to demand change with our actions, perhaps first and foremost by giving up the convenience of these platforms, because regulation is far from creating the change needed to recognize, let alone protect, our rights.
 