Financial Privacy, Digital Redlining, and Restoring the Commons

-- By RaulCarrillo? - 17 Dec 2014

Introduction

We have spent a substantial amount of time in this course discussing invasions of privacy, including in the sphere of personal finance. As we know, data mining is not a new phenomenon. Even the mainstream media has offered reports on how information harvesting is employed to barrage potential customers with advertisements. Purportedly, there is a trade-off between privacy and the convenience of tailored consumer choices.

Even when Internet surveillance offers us an ostensibly better menu of “services”, the process constitutes an affront to privacy rights. Yet it is even worse when personal information, especially financial information, is harvested to overtly punish the people it is taken from. In the case of financial services, we now live in a world where data is siphoned and subsequently used to entrench the very socioeconomic inequalities financial services purport to render more equitable.

Algorithms & Legalized Discrimination

In a piece earlier this spring entitled “Redlining for the 21st Century”, Bill Davidow of The Atlantic skimmed the surface of how private companies, especially in the finance, insurance, and real estate sectors, are using algorithms that charge particular people more for financial products based on their race, sex, and where they live. For example, algorithms used by mortgage companies will use big data to infer a customer’s ZIP code from their IP address, and then charge them a higher rate based on what neighborhood they live in. If a human employee did this rather than an algorithm supposedly divorced from human manipulation, that individual would be acting in clear violation of the Fair Housing Act of 1968. Yet the algorithm, and thus the practice, is permitted.
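The pricing logic Davidow describes can be sketched in a few lines. This is a hypothetical illustration, not code from any real lender: the IP addresses, ZIP codes, and surcharges are invented, and a real system would use a commercial geolocation service and a proprietary risk model.

```python
# Hypothetical sketch: an IP address is geolocated to a ZIP code, and the
# quoted rate is adjusted by neighborhood. All names and numbers are invented.

BASE_RATE = 4.0  # percent, illustrative

# Illustrative lookup tables standing in for a geolocation service
# and a pricing model. Addresses come from the documentation TEST-NET range.
IP_TO_ZIP = {
    "203.0.113.7": "10027",
    "198.51.100.9": "10583",
}

ZIP_SURCHARGE = {
    "10027": 1.5,   # a historically redlined neighborhood pays more
    "10583": 0.0,   # a wealthier suburb pays the base rate
}

def quoted_rate(ip_address: str) -> float:
    """Return the rate quoted to a visitor, keyed only on inferred location."""
    zip_code = IP_TO_ZIP.get(ip_address)
    surcharge = ZIP_SURCHARGE.get(zip_code, 0.0)
    return BASE_RATE + surcharge

# Two applicants with identical finances receive different quotes
# purely because of where they connect from.
print(quoted_rate("203.0.113.7"))   # 5.5
print(quoted_rate("198.51.100.9"))  # 4.0
```

The point of the sketch is that no field named “race” appears anywhere, yet the neighborhood surcharge reproduces the geography of historical redlining.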

University of Maryland Law Professors Danielle Keats Citron and Frank Pasquale, the latter the author of The Black Box Society, have recently detailed how credit scoring systems have always been “inevitably subjective and value-laden,” yet rendered seemingly “incontestable by the apparent simplicity of [a] single figure.” Although many of the algorithms in question were initially built to eliminate discriminatory practices, credit scoring systems can only be as free from bias as the software behind them. Thus the biases and values of system developers and software programmers are embedded into each and every step of development.

Although algorithms may place a low score on “occupations like migratory work or low-paying service jobs” or consider residents of certain neighborhoods to be less creditworthy, the law does not require credit bureaus to reveal the way they convert data into a score. That process is a trade secret, as we have discussed in class. Although the majority of people negatively impacted by the structure of an algorithm may belong to minority groups, the process is almost entirely immune from scrutiny. Title VII of the Civil Rights Act of 1964 has been deemed “largely ill equipped” to address the discrimination that results from data mining. As mathematician and former financier Cathy O’Neil has written, whether discriminatory data mining is intentional is a moot point: seemingly innocent choices can have a disparate impact upon protected classes.
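O’Neil’s point — that intent is moot because impact is measurable — can be made concrete with the “four-fifths rule” from the EEOC’s Uniform Guidelines: if a protected group’s selection rate falls below 80% of the most-favored group’s rate, the outcome is treated as evidence of adverse impact. The applicant counts below are invented for illustration.

```python
# Minimal sketch of measuring disparate impact regardless of intent,
# using the four-fifths rule. All counts are hypothetical.

def selection_rate(approved: int, applicants: int) -> float:
    """Fraction of applicants from a group who were approved."""
    return approved / applicants

def adverse_impact_ratio(protected_rate: float, favored_rate: float) -> float:
    """Ratio compared against the 0.8 threshold of the four-fifths rule."""
    return protected_rate / favored_rate

group_a = selection_rate(approved=480, applicants=600)  # favored group: 0.80
group_b = selection_rate(approved=270, applicants=500)  # protected group: 0.54

ratio = adverse_impact_ratio(group_b, group_a)
print(round(ratio, 3))   # 0.675
print(ratio < 0.8)       # True: flags adverse impact under the rule
```

Nothing in the calculation asks why the rates differ; a facially neutral scoring rule that produces these numbers is flagged all the same.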

In true neoliberal fashion, abuse of algorithms for discriminatory practices is not limited to private companies; governments also act similarly in the realm of public finance. In an essay entitled “Big Data and Human Rights”, Virginia Eubanks, an Associate Professor of Women’s, Gender, and Sexuality Studies at SUNY Albany, notes that “the use of large electronic datasets to track, control and discipline U.S. citizens has a long history, going back at least thirty years.” The National Crime Information Center (NCIC) and New York’s Welfare Management System (WMS), for example, initially utilized algorithms to expose the discrimination of their own employees. However, faced with the fiscal burden of supplying benefits in the wake of recession, the state of New York commissioned expansive technologies that supplanted the decisions of social workers, granting bureaucracies the ability to deny benefits under the guise of a neutral decision maker. Computers currently make choices about social spending based on “time worn, race and class motivated assumptions about welfare recipients: they are lazy and must be ‘prodded’ into contributing to their own support, they are prone to fraud, and they are a burden to society unless repeatedly discouraged from claiming their entitlements.”

Restoring the Commons

In a collaborative essay for The Morningside Muckraker, I recently wrote about how free software and free culture advocates could benefit from updated understandings of the architecture of public finance, particularly the Modern Money (MM) paradigm. According to this school of thought, money, like data, can be created at zero marginal cost. We have moved from a political economy of scarcity to a political economy of abundance. This realization—emanating from the legal fact of monetary sovereignty—that the Federal Reserve creates dollars with keystrokes, that the U.S. government, unlike a state or a household, can’t possibly “go broke”, that Uncle Sam has to worry about inflation but doesn’t need to tax or borrow to spend—in turn renders a new framework for considering how a society could economically support a New Intellectual Commons, as Professor Moglen has called it.

By the same token, practices like financial privacy invasion and digital redlining highlight why economic justice advocates cannot forgo an understanding of “Law and the Internet Society.” If we do not understand the technological infrastructure, we will always be one step behind. Would-be reformers need to know that an attempt to access financial services through the internet – something we should each have free and open access to – may be punished solely by virtue of choice of method. Attempts at social mobility may be stymied. You may be barred from economic advancement via an information infrastructure that is arguably yours by birthright. Coercion abides.

What is occurring with digital redlining fits into Moglen’s narrative of a private assault upon the commons, two layers deep. In essence, the government and its licensed agents—particularly banks—harvest data within what should be a free knowledge commons and use it to further deny basic tools for economic advancement to historically marginalized groups, entrenching inequalities on multiple fronts.

We must hold the government accountable in order to hold other interests accountable. As Professor Moglen stated:

“What we do have at the moment is the opportunity for a political insistence upon the importance of the commons. We have a great opportunity which lies in the inevitable populist rising of annoyance, then irritation, then anger, at what has happened to the society in which we were all living in relative safety and prosperity only a few years ago. We have an opportunity to explain to people that too much ownership, and too much leverage, and too much exclusivity was the prevailing justification for and also the prevailing reason that what happened, happened.”

Privacy advocates and economic justice advocates are natural allies in this effort, which will surely be an intense moral struggle for the foreseeable future.

I have some trouble understanding the parts of this essay I didn't write.

We will leave aside for the moment the matter of zero-marginal-cost money. It's not our present job to understand money as we have known it, let alone the economy of payment systems into which we are moving. Sufficient for now to say that anyone proposing zero-marginal-cost money is either fooled or fooling.

What I do not understand is the analysis of banking. The purpose of banking is to create debt. Anywhere that debt can be created under conditions that will cause someone (not by any means necessarily the borrower) to repay the loan, banks will lend, to the limit of their available resources. The price of money reflects its supply and the risks of non-payment. Lenders prioritize access to credit by various (intentionally discriminatory) means, but unless corrupted---which to some extent they inevitably are---they do so in an attempt to maximize the value of their loan portfolio, at which---whichever kind of lenders they are---competition in the money market makes them good, or dead.

The poor do not benefit from banking; they benefit from extension of credit. Unless the price they pay for credit is subsidized, they are unlikely to survive economically, just as in most of the world they cannot subsist unless the basic commodities of existence are socially subsidized.

None of this is much affected by the availability of better social surveillance. At the margins, some poor people in relatively wealthy societies will be indebted beyond their means, because lenders expect that someone (governments, fooled investors) can be made to cover their shorts. The real benefit of enhanced social surveillance lies in pushing more debt on richer parties, owners of substantial assets (like houses in the US), who can be fooled into believing that high levels of debt are good for them, for various reasons. This has little or nothing to do with "red-lining" of any kind.

These may all be incorrect assumptions, but the essay didn't challenge any of them, it just ignored their consequences. So I don't understand what it means to say, and it doesn't tell me.

Revised - Financial Privacy, Digital Discrimination, and The Commons

-- By RaulCarrillo? - 21 Jan 2015

Money and Credit in the Digital Era

We have spent significant time in this course discussing invasions of privacy, including within the sphere of consumer finance. Data-mining assaults our anonymity and autonomy, supposedly compensating us with convenience. Yet the harvesting of financial information outright punishes many of us. Data can be siphoned and subsequently used to exacerbate some of the oldest socioeconomic hierarchies in the country, excluding minorities from the few options for money and credit available on the socioeconomic periphery.

In a collaborative essay for The Morningside Muckraker, I recently wrote about how free software and free culture advocates could benefit from updated understandings of public finance, particularly from the Modern Money (MM) paradigm. This school of thought highlights how the United States and other “monetarily sovereign” countries--those with fiat currencies and floating exchange rates--create money at zero marginal cost. That is, central banks in these countries simply create currency with keystrokes, and no matter how much they create, their governments, unlike businesses or households, cannot possibly “run out.” This does not mean central banks can create purchasing power at zero marginal cost--we still have to account for price stability (inflation)--but it does mean monetarily sovereign governments face no solvency constraint per se.

This is an important legal and economic fact to grasp because it renders most arguments for austerity utterly outdated. Yet, most governments still unnecessarily attempt to balance budgets on the backs of the poor and the middle class, so those who need purchasing power are generally forced to seek credit from banks or more nefarious entities in the private sector.

Algorithms and Legalized Discrimination

This isn’t going well. Worse than usual, in fact. Last spring, in a piece entitled “Redlining for the 21st Century”, Bill Davidow of The Atlantic sketched some ways in which firms employ algorithms to outright deny people loans, or charge them much higher rates, based solely on their inferred race. For example, many watchdogs have noted that mortgage providers can now determine a customer’s ZIP code via their IP address, and then proceed to charge them based on the neighborhood where they currently live. If a human employee did this, that individual would be acting in clear violation of the Fair Housing Act of 1968. Yet, because an algorithm is the actor, the practice often goes unexamined, and people who might otherwise receive credit at a reasonable rate are harmed.

In class, we’ve touched on the legal history of presuming machines to be innocent. In a recent law review article, University of Maryland Law Professors Danielle Keats Citron and Frank Pasquale detailed how credit scoring systems have always been “inevitably subjective and value-laden” despite their ostensibly objective simplicity. Although the systems in question were initially built to eliminate discriminatory practices, they can only be as free from bias as the software behind them, and thus only as righteous as the values of developers and programmers. But the law generally ignores this. In fact, credit bureaus are not required to reveal how they convert data into scores. Those processes are deemed “trade secrets.”
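Citron and Pasquale’s point — that scoring is “subjective and value-laden” even when it looks like arithmetic — can be shown with a toy model. Every feature, weight, and number below is invented for illustration; the point is that penalizing an occupation or a ZIP code is a policy choice made by developers, not a fact discovered in the data.

```python
# Illustrative toy credit score: the developer's choice of features and
# weights is where values enter the system. All values here are invented.

WEIGHTS = {
    "payment_history": 3.0,     # defensible behavioral signal
    "occupation_penalty": -40,  # value-laden: punishes low-wage, unstable jobs
    "zip_penalty": -25,         # value-laden: proxies for race and class
}

def toy_score(payment_history_pct: float,
              low_wage_job: bool,
              flagged_zip: bool) -> float:
    """Score an applicant; the penalties encode the developers' assumptions."""
    score = 300 + WEIGHTS["payment_history"] * payment_history_pct
    if low_wage_job:
        score += WEIGHTS["occupation_penalty"]
    if flagged_zip:
        score += WEIGHTS["zip_penalty"]
    return score

# Identical payment behavior, different score, purely from developer choices.
print(toy_score(95, low_wage_job=False, flagged_zip=False))  # 585.0
print(toy_score(95, low_wage_job=True, flagged_zip=True))    # 520.0
```

Because the weights are a trade secret, the 65-point gap between these two applicants would be as invisible to a regulator as it is to the applicants themselves.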

In The Black Box Society, Pasquale notes that although the majority of people negatively impacted by the structure of a particular algorithm may be Black, Latino, women, LGBTQ, or people with disabilities, the algorithms are almost entirely immune from scrutiny. To make matters worse, Title VII of the Civil Rights Act of 1964 has been deemed “largely ill equipped” to address the discrimination that results from data-mining. There is no applicable law to enforce here.

Worse still, digital discrimination isn’t limited to private finance. In an essay entitled [[http://newamerica.org/downloads/OTI-Data-an-Discrimination-FINAL-small.pdf][“Big Data and Human Rights”]], Virginia Eubanks, an Associate Professor of Women’s, Gender, and Sexuality Studies at SUNY Albany, notes that although New York State social service providers initially utilized algorithms to expose the discrimination of their own employees, the tables have turned. Faced with the fiscal burden of supplying benefits in the wake of recession, the state commissioned technologies that replaced the decisions of social workers, allowing bureaucracies to deny benefits under the guise of objective machines. Now, computers make choices about spending based on “time worn, race and class motivated assumptions about welfare recipients.”

Assaulting the Commons

It is obvious that the poor have always struggled to gain money and credit. Government spending and private lending are inherently discriminatory. Yet technology is producing new forms of discrimination, as well as reviving old ones. The racial wealth gap of the 20th century, in particular, can largely be explained by redlining, along with general housing and lending discrimination. These practices were supposed to have been eradicated by the Civil Rights Movement and subsequent legislation, leaving people to be discriminated against, at most, merely because they were poor and not because of other demographic factors. But the practices persist.

Thus, if economic and racial justice advocates do not understand Big Data, we will always be one step behind. For example, would-be reformers need to know that attempts by people of color to access credit may be punished solely by virtue of the digital method used. We may be barred from economic advancement by operating within an infrastructure that is arguably ours by birthright.

Digital discrimination fits into Moglen’s narrative of an assault upon “the commons”, two layers deep. In the age of austerity, the government refuses to subsidize economic existence via public coffers despite clearly having the means to do so, and thus many people must rely on private credit to survive. Yet surveillance is making this option substantially more difficult for certain groups of people. The government and its licensed agents—banks and other lending institutions—harvest data within what should be a free knowledge commons and use it to either outright deny credit to historically marginalized groups, or charge them exorbitantly, thus depriving them of the small shot they might have had at security or mobility. Although financial entities certainly use social surveillance to prey on many people nearer the apex of society, surveillance can be a death knell for minorities on the periphery.