Law in the Internet Society

How Social Media Exacerbates Income Inequality

-- By AmayGupta - 11 Oct 2019

Growing up in Los Angeles, I have met my fair share of people who, when asked what career they wish to pursue, respond with “influencer.” The barrage of images of Mercedes-Benz G-wagons, Saint Laurent sweatshirts, and celebrity culture that permeates Instagram can easily make us feel that we have to keep up with the Joneses. I don’t think it is a stretch to say that our exposure to the lives of the rich and famous on Instagram shapes both how we act as consumers and the attitudes we hold about ourselves and others. After all, I’ve watched friends who make minimum wage as actors in LA drop $700 on Gucci Ace sneakers to get into clubs, where they scroll through Instagram instead of doing so at home. Yet, according to the US Census Bureau, California has one of the highest poverty rates in the country. LA’s emphasis on conspicuous consumption makes it seem that those who have it all are hard workers, whereas those who wear the same clothes for over a year are lazy because they are poor. But beyond the psychological effects of social media that coerce vain people like me into buying more than I need and jeopardizing my own finances, one question remains: do internet advertisers and the data brokers behind them (whose audiences consist largely of millennials and Gen-Zers) worsen socioeconomic inequality by targeting poorer individuals? I believe the answer is a resounding yes.

How Our Data Can Lead to Worse Outcomes for Marginalized Groups

Because those who are disadvantaged are subject to more data collection and surveillance than those with higher incomes, they are more vulnerable to negative outcomes resulting from data-driven algorithmic decisions. Lower-income internet users are not only more likely to use social media; they are also less likely to use privacy protections, such as turning off browser cookies or restricting who can view their posts. This may be because lower-income individuals tend to be younger, and Generation Z and millennials prefer “curated” social media feeds that rely on their data. Moreover, lower-income individuals tend to be less educated and more time-constrained, which leads to confusion about privacy settings. Meanwhile, it is no secret that a number of data broker products rely on a user’s socioeconomic status. According to a US Senate report, data-mining companies that identify poorer individuals from their social media behavior are free to sell that information to companies specializing in predatory financial products, including payday loans and debt relief services. One reason is that consumer protection statutes such as the Fair Credit Reporting Act lack strong provisions shielding users from predictive targeting.

In addition, it will become more difficult to determine whether one is being discriminated against in loan applications. Models that estimate creditworthiness based on internet activity pose legal challenges for fighting discrimination cases. Credit-scoring tools that use thousands of data points collected without consumer knowledge may produce seemingly “objective” scores while obscuring discriminatory and subjective lending policies. An article in the Yale Journal of Law and Technology discusses how ZestFinance, a prominent player in the alternative credit-scoring industry, considers how quickly a loan applicant scrolls through the online terms and conditions as a proxy for how responsible that individual is. Spending habits, read in the context of a borrower’s geographic location, are likewise used as indicators of conventional spending. Under current law, proving a violation of the ECOA, which protects against discrimination in credit transactions, requires plaintiffs to show either disparate treatment (that the lender had a discriminatory intent or motive) or disparate impact (that the lender’s decisions had a disproportionately adverse effect on a protected group). Because new credit-scoring tools integrate thousands of data points, these technologies make it incredibly difficult for plaintiffs to establish even a prima facie case of disparate impact.

In job recruiting, where companies across all industries use algorithmic platforms like Taleo to evaluate potential applicants, I now worry that these platforms also draw on data such as what we wear, how we socialize, and who our friends are to portray us as poor candidates for high-income jobs despite our academic achievements, extracurricular involvement, and other markers of success. In higher education, some universities are already considering how many friends and photos applicants have on their admissions portals when deciding whether to offer admission. The adage about not judging a book by its cover may be ingrained in the admissions staff, but the algorithms used to “predict” which students will enroll and stay enrolled have no emotion.

Essentially, there are no guarantees that algorithms that utilize our data will not reproduce existing patterns of discrimination or reflect biases prevalent in society. What bothers me even more is that low-income consumers may never know that they were subjected to this insidious discrimination, nor will most of them have the legal resources to pursue a cause of action. The current trend toward mandatory arbitration certainly doesn’t help, and because damages for these violations tend to be low, our generation of lawyers has work to do beyond pushing for laws that merely inform consumers that their data is being collected.

Remedies – So Where Do We Begin?

Morality must be part of the solution. Placing the burden of avoiding discrimination resulting from data-driven algorithmic tools on individuals is wrong and ignores the fact that data-mining companies are in the best position to avoid that discrimination. To me, a court ruling that an individual has no recourse against a company that used internet-usage data to deny her a loan because she failed to protect her privacy is like the police telling a woman that she has no recourse against her rapist because of what she was wearing. Placing the burden of privacy protection on individuals alone would create a world in which socioeconomic inequality is seen as a choice, rather than as the result of being systematically taken advantage of by data brokers.

The present draft takes too long to set up the problem, leaving itself one paragraph for the rather significant question of what to do about it. Morality, it then says, is "part of the solution," though in the highly material context of the preceding analysis that comes both as a surprise and a disappointment, like recommending Christianity as the cure for poverty.

Making the next draft stronger seems to me easy precisely because this is a fine start. You can compress the statement of the problem, keeping the useful set of sources on which you relied. You should be able to use at least half the next draft to explain what forms of social and legal policy would help.




r2 - 30 Nov 2019 - 15:16:10 - EbenMoglen