Law in the Internet Society

Social Credit Systems: Dystopian Visions and Real-world Challenges

In the dystopian TV show "Black Mirror," the episode "Nosedive" depicts a world where social media ratings determine one's socioeconomic status and access to essential services. Using a mobile application, everyone constantly rates others on a five-point scale. Those with higher scores gain access to better services and exclusive clubs, while those with low scores are penalized in many ways. While this may seem like far-fetched fiction, today's reality may not be too distant from this portrayal.

Real-world examples

The first example that comes to mind is China's Social Credit System (SCS), developed between 2014 and 2020. The SCS uses artificial intelligence "to develop comprehensive data-driven structures for management around algorithms that can produce real time reward-punishment structures for social-legal-economic and other behaviors" (see Larry Cata Backer). The SCS relies on a series of blacklists and redlists managed at different levels (municipal, local, or national). Each authority can manage its own blacklist (e.g., of those who failed to pay fines or child support), and they all converge into the National Credit Information Sharing Platform. As Kevin Werbach observes, this makes it possible that "grade A taxpayers receive customs fee waivers and low-interest loans, in addition to the home benefits offered by the tax collection authority". However, Prof. Werbach believes that Western depictions of the SCS are exaggeratedly negative, especially in a world where governments and corporations already track our behavior extensively. He cites Yuval Noah Harari's idea that free-market capitalism and state-controlled communism can be regarded as distinct data processing systems: the former decentralized, the latter centralized.

Starting from this assumption, it should come as no surprise that Western versions of social credit experiments are being conducted mainly by private corporations, especially in the financial sector. Since the 2008 financial crisis, many "fintech" online lenders have been experimenting with new scoring models for establishing creditworthiness.

Historically, banks have used scoring models to compute a person's credit score based on past financial behavior and additional factors with predictive value. This practice has also been regulated, notably by the Fair Credit Reporting Act and the Equal Credit Opportunity Act (ECOA), with the latter prohibiting credit discrimination on the basis of race, color, religion, national origin, sex, marital status, and age.

But the new models are based on a person's "social footprint," revealed by elements such as one's social circle or shopping habits: surprisingly, it appears that buying felt pads has a positive influence on how the algorithms forecast your financial behavior. Such information is often collected with the individual's consent, but in other circumstances the data analysis is performed covertly, as happened to American Express cardholders who had their credit limits reduced after shopping in the same places as other customers who did not pay their bills on time.

Concerns and Implications

As outlined in this 2016 article by Nizan Packin and Yafit Lev-Aretz, these practices cause privacy harms at two levels: direct, to the loan seeker, and derivative, to the loan seeker's contacts, as "social credit systems inherently implicate the information of third parties, who never agreed their information could be collected, evaluated, or analyzed". These practices also foster social segregation, reduce social mobility, and increase the risk of arbitrary decisions based on incorrect data. For example, social credit systems can nullify the above-mentioned limits set forth in the ECOA, as attributes like gender and race are easily detectable by the algorithm because "they are typically explicitly or implicitly encoded in rich data sets".

Turning our gaze to Europe, we see that the risk of discrimination highlighted above has already become painfully real. In 2013, the Dutch Tax Authorities employed a self-learning algorithm to detect child care benefits fraud. The algorithm trained itself to use risk indicators such as having a low income or belonging to an ethnic minority. As a result, thousands of families were wrongly labeled as fraudsters and suffered severe consequences. The scandal led to the Dutch Government's resignation and a €3.7 million fine imposed on the Tax Administration by the Dutch Data Protection Authority.


Social credit systems are a product of Surveillance Capitalism, a system that "unilaterally claims human experience as free raw material for translation into behavioral data" (S. Zuboff, The Age of Surveillance Capitalism). The same principles of extracting "behavioral surplus," pioneered by companies like Google and Facebook over the last 20 years to create more effective advertising, are now being applied in contexts that are increasingly impactful on our freedoms and rights, such as access to credit, and they will become even more so unless a line is drawn.

In Europe, the GDPR has attempted to address the issues that may arise from social scoring systems (and other systems meant to "profile" individuals) through Article 22, which grants individuals the right not to be subject to a decision based solely on automated processing whenever that decision produces legal effects concerning them (e.g., the automatic refusal of an online credit application) or similarly significantly affects them, and to obtain human intervention in such cases.

I believe that such a rule is a good start, but simply relying on human intervention to review a decision already made by an algorithm is not sufficient. The scope of the rule should be expanded to establish a genuine right to be assessed as a human being, not on the basis of predictions of our future behavior derived from our online activity. In light of the examples mentioned above, the adoption of such rules is particularly necessary in contexts such as the provision of banking services or dealings with public administration, but it should eventually extend to all situations in which individuals' fundamental rights and freedoms are at risk.

A much-improved draft. I don't have further suggestions.



r3 - 08 Jan 2024 - 16:31:38 - EbenMoglen