Law in the Internet Society

How Social Media Exacerbates Income Inequality

-- By AmayGupta - 10 Jan 2020

I don’t think it is a stretch to say that our exposure to the lives of the rich and famous on Instagram shapes how we act as consumers and how we see ourselves and others. Despite the excess consumption Instagram-loving millennials are used to seeing in places like California, “richer” states can have some of the highest rates of poverty. Beyond the psychological effects of social media that push vain people like me to buy more than I need and jeopardize my own finances, one question remains: does our use of social media worsen socioeconomic inequality and strengthen institutional racism? I believe the answer is a resounding yes.

How Our Data Can Lead to Worse Outcomes for Marginalized Groups

Because those who are disadvantaged are subject to more data collection and surveillance than those with higher incomes, they are more vulnerable to negative outcomes from data-driven algorithmic decisions. Not only are lower-income internet users more likely to use social media, they are also less likely to use privacy protections, such as turning off browser cookies or restricting who can view their posts. This may be because lower-income individuals tend to be younger, and Generation Z and millennials prefer “curated” social media feeds that rely on their data. Moreover, lower-income individuals tend to be less educated and more time-constrained, which breeds confusion about privacy settings. It is also no secret that a number of data-broker products rely upon a user’s socioeconomic status. According to a US Senate report, data-mining companies that identify poorer individuals from their social media behavior may sell that information to companies specializing in suspect financial products, including payday loans and debt-relief services. One reason is that consumer-protection regulations such as the Fair Credit Reporting Act lack strong provisions protecting users from predictive targeting.

In addition, it will become more difficult to determine whether one is being discriminated against in loan applications. Moving toward models that estimate creditworthiness based on internet activity poses legal challenges for fighting discrimination cases. Credit-scoring tools that use thousands of data points collected without consumer knowledge may provide “objective” scores while obscuring discriminatory and subjective lending policies. An article from the Yale Journal of Law and Technology discusses how ZestFinance, a prominent player in the alternative credit-scoring industry, takes into account how quickly a loan applicant scrolls through the online terms and conditions to help determine how responsible that individual is. Spending habits, read in the context of a borrower’s geographic location, may also be used to indicate conventional spending. Under current law, proving a violation of the ECOA, which protects against discrimination in credit transactions, requires plaintiffs to show either disparate treatment (that the lender had a discriminatory intent or motive) or disparate impact (that the lender’s decisions had a disproportionately adverse effect on minorities). Because new credit-scoring tools used for housing integrate thousands of data points, these technologies make it incredibly difficult for plaintiffs to make out a prima facie case of disparate impact.

There are no guarantees that algorithms that use our data will not reproduce existing patterns of discrimination or reflect biases prevalent in society. What bothers me even more is that low-income consumers may never know they were subject to this insidious discrimination, and most will lack the legal resources to pursue a cause of action. The current trend toward arbitration certainly doesn’t help, and damages for these violations tend to be low.

Remedies – So Where Do We Begin?

While I cannot posit a one-size-fits-all solution for every pattern of discrimination, I believe the main obstacle to fighting racial discrimination in the credit industry is the lack of racial data on which plaintiffs could rely to prove disparate treatment or impact. The absence of racial data has an apparent purpose: because the ECOA bans lenders from considering the race or ethnicity of applicants, lenders hesitate to collect this information on credit applications, opting to use proxy variables instead. It may seem strange that my answer to the harms of excess data collection is to collect more data, but comparative race data in lending-discrimination cases would allow plaintiffs to meet their evidentiary burdens more easily. Even if consumers had access to this data, I believe the burden of showing a lack of racial discrimination should rest on the developers of credit-scoring tools. Some proposals to eliminate racial disparities (like the Model FaTSCA) advocate disclosures that would give consumers more insight into the metrics on which they are scored. The social value of a fair credit system should and does outweigh developers’ potential claims that disclosing those metrics would let competitors replicate their software.

Furthermore, solutions to discrimination in the credit industry should remedy harms on a group basis. While individuals can contest denials of credit, the amalgamation of data used to discriminate against communities of color supports a strong inference of structural racism. Individual plaintiffs may be unable to claim damages because of statutes of limitations, causation problems, and legal costs. The current credit system has likely instilled in minority communities a sense of complacency about denials of credit and contributed to the view of minorities as burdens on our economic system rather than victims of it. Group remedies could include requiring credit issuers to make it easier for consumers to correct misinformation in credit applications, as well as requiring issuers to make significant investments in the communities they have harmed. While I recognize that these are broad statements devoid of specifics, I do not believe the credit system can be fixed solely by letting individuals litigate abusive lending patterns. Hopefully, requiring issuers to make intensive financial investments in areas where they have discriminated will deter further discrimination and reduce reliance on scoring systems built on data unrelated to creditworthiness.
