Law in the Internet Society

Ethical Concerns of Digital Advertising: How Facebook Continues to Perpetuate Discrimination and Bias Through Targeted Advertising

-- By KifayaAbdulkadir - 22 Oct 2021

Arguably, one of the greatest evils prompted by global digitalization is surveillance capitalism. Personal data is the currency of the digital world and, like puppets, we hand it over freely. Social media has infiltrated our lives to the extent that it is impossible to exist in this day and age without leaving a digital imprint. The amount of data available online has increased, and with it the monetary value of that information to companies. In 2020 alone, Facebook’s targeted online advertising generated over 84.2 billion U.S. dollars, amounting to 97.9% of Facebook’s total global revenue.

Targeted advertising is a form of digital advertising that allows companies to direct their sponsored posts toward a specific demographic. The basis of targeted advertising is affinity profiling, in which users are categorized according to their ‘assumed interests’, inferred from their likes and interactions on the platform, rather than their ‘actual personal attributes’. Social media platforms like Facebook track your activity on these sites, collate the information and share it with advertising companies, which can then send ads to users with the specific attributes they are looking for and reach their target audience. Beyond being a violation of privacy, targeted advertising has also been used by companies to unlawfully discriminate against traditionally marginalized groups.
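
To make the mechanism described above concrete, the sketch below shows one simplified way affinity profiling and audience selection could work: users are assigned to 'affinity segments' based only on their interactions, and an advertiser's audience is then filtered by those inferred segments. The segment names, topics, user data and threshold are hypothetical illustrations, not a description of Facebook's actual systems.

```python
# Illustrative sketch only: hypothetical affinity profiling, not Facebook's system.
from collections import Counter

# Hypothetical mapping from topics a user interacts with to inferred segments.
TOPIC_TO_SEGMENT = {
    "home_renovation": "homeowner_affinity",
    "luxury_cars": "high_income_affinity",
    "college_sports": "young_adult_affinity",
}

def infer_segments(interactions, min_hits=2):
    """Assign affinity segments from interaction counts alone, not stated attributes."""
    hits = Counter(TOPIC_TO_SEGMENT[t] for t in interactions if t in TOPIC_TO_SEGMENT)
    return {segment for segment, count in hits.items() if count >= min_hits}

def build_audience(users, include_segments, exclude_segments=frozenset()):
    """Return the ids of users whose inferred segments match the advertiser's filters."""
    audience = []
    for user_id, interactions in users.items():
        segments = infer_segments(interactions)
        if segments & include_segments and not segments & exclude_segments:
            audience.append(user_id)
    return audience

users = {
    "alice": ["home_renovation", "home_renovation", "college_sports"],
    "bob": ["luxury_cars", "luxury_cars", "luxury_cars"],
}
# The advertiser never selects Alice or Bob by name; the platform's inferences
# decide who falls inside or outside the chosen audience.
print(build_audience(users, include_segments={"homeowner_affinity"}))  # ['alice']
```
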
What is the evidence that 'actual personal attributes' such as age, sex, place of residence, income and property information, etc., are not selectable by advertisers? Isn't that precisely what you are discussing below? More factual precision here would be helpful.

In 2016, Facebook came under fire for its multicultural affinity feature, which allowed advertisers to exclude users from a particular ad campaign based on ‘ethnic affinities’. In its investigation, ProPublica purchased an ad in Facebook’s housing category and was given the option to exclude African-Americans, Asian-Americans and Hispanics. The result was that companies that wanted a job or housing advertisement to be targeted towards a specific race, or to exclude a specific race, could do so using this feature. Facebook claimed that ethnic affinity was not synonymous with race but was simply an assumption about a user’s closeness to a particular ethnicity based on their likes and interactions. According to Facebook, this did not amount to discrimination because it was an inference of one’s ethnicity and not always an accurate assessment: a White user who interacted frequently with things popular within the Black community would be classified by the algorithm as ethnically Black, and that, in Facebook’s view, was not discrimination.

A couple of links here would make it possible to use many fewer words.

A subsequent investigation by The New York Times further exposed that numerous companies, including Verizon, IKEA and Goldman Sachs, were limiting their job advertisements on Facebook to a particular age group and specifically to men. The targeted ads also reinforced gender stereotypes: women would receive job advertisements for working at jewelry and makeup shops, while men would receive ads for construction or finance jobs. Women and older people would not receive ads about insurance, which were targeted only towards men, and people between the ages of 55 and 70 would not receive job ads that younger people would receive. Facebook, in defending this practice, claimed that age-based targeted advertising was beneficial because it helped employers recruit people of various ages.

After being flooded with numerous class action lawsuits for discrimination, Facebook made policy changes to remedy this. However, the changes it implemented have still not effectively addressed discriminatory ads; advertisers simply found ways to work around them. While Facebook removed the ‘ethnic affinity’ feature from the platform, ad buyers today can still target their ads to specific zip codes, thereby excluding locations with certain demographics, such as predominantly Hispanic or Black communities. They can still limit their advertisements by age and gender, thus continuing to perpetuate gender- and age-based discrimination. Even where companies do not segment their ads by age, the delivery algorithm will still send them to a specific age or gender group. The bias is unfortunately built into the algorithm: when it notices a trend of men clicking on ads for banking jobs, it will direct such future job advertisements to men only, regardless of whether women also clicked on such ads.
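
As one way of seeing how the feedback loop described above could arise, the toy model below simulates a naive delivery optimizer that always shows an ad to whichever group has the higher observed click-through rate. The groups, warm-start numbers and greedy rule are assumptions made purely for illustration; this is not Facebook's actual delivery system, and it is offered as a hypothesis about the mechanism rather than evidence of it.

```python
# Toy model of a delivery feedback loop; hypothetical, not Facebook's system.
import random

random.seed(1)
groups = ["men", "women"]
true_click_prob = {"men": 0.05, "women": 0.05}   # identical real interest in the ad
impressions = {g: 0 for g in groups}
clicks = {g: 0 for g in groups}

def observed_ctr(group):
    return clicks[group] / impressions[group] if impressions[group] else 0.0

# Hypothetical warm-start data with a small early skew toward men.
impressions["men"], clicks["men"] = 50, 3
impressions["women"], clicks["women"] = 50, 1

for _ in range(100_000):
    # Greedy delivery: show the ad to the group with the higher observed CTR so far.
    target = max(groups, key=observed_ctr)
    impressions[target] += 1
    if random.random() < true_click_prob[target]:
        clicks[target] += 1

# Despite identical underlying interest, the group favored by the early data
# typically ends up receiving nearly all of the impressions.
print(impressions)
print({g: round(observed_ctr(g), 3) for g in groups})
```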

Any evidence for this technical claim?

Facebook has tried to distance itself from the discrimination allegations by claiming that it is not liable for third-party publications on the website. But it cannot escape liability when its own algorithm is contributing to the bias. Technology is as biased and flawed as the humans behind it: if Facebook designs its algorithm in a manner that permits and reflects the biases of the real world, then that is how those biases will present themselves in the digital world. Facebook’s continued refusal to be transparent about its algorithmic systems raises the suspicion that it is well aware of these concerns and has no interest in fixing them. Depriving users of equal access to information is illegal, and allowing Facebook to find loopholes and evade liability will only reinforce its belief in its own infallibility.
What is the actual legal claim being made here?

Big Tech companies’ primary concern has been and always will be profitability. Any changes they effect to ‘remedy’ violations of privacy are merely a ruse. These companies will not develop a moral conscience; it would be quixotic to expect them to provide solutions to the problems they created. Confronted with this level of unchecked power in the hands of technology companies, it is critical for the government to intervene. The government needs to step up and regulate targeted advertising to ensure that these practices are fair and unbiased for all users.

"Government" is not sufficiently precise. If you mean that legislation is required, you should say so. If you are asserting that existing statutory authority is sufficient, what agency possess it, what rules do they need to promulgate, and how? Do the rules that apply to job advertisements apply also to product promotion? In that case, what is "fair and unbiased" and how do we know? If some forms of advertising are your particular focus, what are they?

Facebook should also be compelled to be transparent about its algorithmic systems so that it is possible to decipher exactly where the problem lies. This discriminatory practice is a significant threat to the public interest, one that far outweighs the corporations’ profitability concerns.

The draft clearly lays out your area of interest in general terms. Improving it means adding specifics in several dimensions. First is factual. You offer no links to sources for any factual statements. Even when you are apparently referring to specific documents, you provide no links or references. You make factual claims about technological issues without any substantiation. Second is legal. You provide no legal specifics with respect to the forms of anti-discrimination provisions either in existing law already being violated according to your account, or necessary to address what you non-specifically call "this problem." Third is contextual. You write as though Facebook were the only ad-tech enterprise. You do not explain what differentiates the duopolists Facebook and Google, or why other ad-tech enterprises are different. You talk about the need for "algorithmic transparency," but you don't explain what these "algorithms" actually are, or how reading the code will reveal anything important to the reviewer.

Writing for the web, using links to help the reader locate context and valuable analysis for herself, is a skill—like writing the 20th century "research paper" but much more flexible and powerful. Practicing that skill here will make all the difference.

 


