-- RongFan - 28 Feb 2024

1. Issues Caused by Financial Algorithms

Early in 2008, Kevin Johnson received a notice from American Express (AE) stating that his credit card limit had been reduced because his purchase history showed he shopped at stores whose customers had poor repayment histories. Infuriated, Johnson went public to pressure AE into ceasing to use shopping data to measure risk. AE refused to disclose how its algorithm worked, claiming it was proprietary. [See Ron Lieber, American Express Kept a (Very) Watchful Eye on Charges, The New York Times (Jan. 30, 2009), https://www.nytimes.com/2009/01/31/your-money/credit-and-debit-cards/31money.html.] Johnson was not alone, and the same story repeated itself as algorithms became ubiquitous in the financial domain. In 2019, a landmark study found that online lenders' algorithms may discriminate against minorities, raising wide concern about algorithmic bias. [See Robert Bartlett, Adair Morse, Richard Stanton & Nancy Wallace, Consumer-Lending Discrimination in the FinTech Era, 143 Journal of Financial Economics 30, 30-56 (2022).] Today, the use of financial algorithms can cause algorithmic bias, flash crashes, data breaches, and algorithmic fraud. Since financial decision-making has become predominantly dependent on algorithms, these issues must be properly addressed from a legal perspective. This essay proposes a solution.

2. Traditional Algorithmic Regulatory Strategies

Regulation of algorithms has been discussed extensively, but few studies focus specifically on financial algorithms, [See, e.g., Nydia Remolina, The Role of Financial Regulators in the Governance of Algorithmic Credit Scoring, SMU Centre for AI & Data Governance Research Paper No. 2/2022 (Mar. 15, 2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4057986.] and their conclusions are similar to, or derived from, the traditional algorithmic regulatory strategies. These traditional strategies fall into three types.

The first type is the "transparency strategy," which concentrates on algorithms' "black box" problem. [See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2016).] Because algorithms usually remain opaque to the public, this "black box" can be used to deceive or discriminate against consumers. The transparency strategy attempts to solve the problem by making algorithms explainable and transparent. For example, one Chinese regulation stipulates that an algorithmic recommendation service provider shall disclose the basic principles, purposes, and main mechanics of its services to consumers. [See Internet Information Service Algorithmic Recommendation Management Provisions (互联网信息服务算法推荐管理规定) (Jan. 4, 2022), http://www.cac.gov.cn/2022-01/04/c_1642894606364259.htm.]

The second type is the "anti-discrimination strategy," which strives to eliminate algorithmic discrimination through two main approaches. The first approach prevents algorithms from considering certain factors; the Equal Credit Opportunity Act is an example, as it prohibits creditors from discriminating against applicants on the basis of race, color, religion, national origin, sex, marital status, or age. The second approach recognizes that even facially neutral algorithms can replicate real-world discrimination, so affirmative action is a necessary remedy for algorithmic discrimination. [See Anupam Chander, The Racist Algorithm?, 115 Mich. L. Rev. 1023, 1023-1045 (2017).]
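To make that last point concrete, the following minimal sketch (in Python, using synthetic data and scikit-learn; the scenario, variable names, and numbers are illustrative assumptions, not drawn from the studies cited above) shows how a lending model that is never given a protected attribute can still reproduce historical bias, because a facially neutral feature such as a ZIP-code indicator correlates with both group membership and past, biased outcomes.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic world: `group` is the protected attribute. It is never
    # shown to the model, but it shapes where applicants live and how
    # past approvals were decided.
    group = rng.integers(0, 2, n)               # 0 = majority, 1 = minority
    in_favored_zip = (rng.random(n) < 0.8 - 0.6 * group).astype(int)
    income = rng.normal(55 + 5 * (group == 0), 15, n)
    past_approved = income + 20 * in_favored_zip + rng.normal(0, 10, n) > 60

    # "Neutral" model: trained only on income and the ZIP-code indicator.
    X = np.column_stack([income, in_favored_zip])
    model = LogisticRegression(max_iter=1000).fit(X, past_approved)
    pred = model.predict(X)

    for g, name in [(0, "majority"), (1, "minority")]:
        print(f"{name}: predicted approval rate = {pred[group == g].mean():.2f}")
    # The rates diverge sharply even though the model never saw `group`:
    # the ZIP-code indicator acts as a proxy for it.

Excluding the protected factor, as the first approach requires, therefore does not by itself neutralize an algorithm, which is why the second approach insists on remedies that look at outcomes.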
The last type is the "individual data rights strategy," which grants individuals data rights to increase their capacity to control algorithms. For instance, the EU's General Data Protection Regulation (GDPR) stipulates a series of rights of the data subject, including the right of access, the right to erasure, the right to object, and the right not to be subject to solely automated decision-making. As scholars argue, the mechanisms the GDPR provides can "make algorithms more responsible, explicable, and human-centered." [See Lilian Edwards & Michael Veale, Slave to the Algorithm? Why a "Right to an Explanation" Is Probably Not the Remedy You Are Looking For, 16 Duke Law & Technology Review 18, 19 (2017).]

3. Legal Strategies Specifically for Financial Algorithms

These strategies can be applied to the regulation of financial algorithms, but how to implement them in the financial domain deserves deliberation. First, the regulation of financial algorithms depends heavily on context. Second, the particularity of financial algorithms can undermine the effectiveness of some strategies. For example, because many financial institutions are private entities and their algorithms can qualify as trade secrets, it is difficult to require disclosure without harming their core interests, rendering the transparency strategy ineffective in such situations.

Consequently, given the limitations of the aforesaid strategies, standalone legislation, separate from any general algorithm statute, should be introduced to regulate financial algorithms. Specifically, the legislation should first formulate different strategies for different financial scenarios. In an algorithmic credit scoring scenario, for example, the anti-discrimination strategy would be the most effective tool against pervasive discrimination; for financial algorithmic recommendation services, by contrast, consumers depend largely on individual data rights to preserve their self-determination and avoid algorithmic manipulation, so the individual data rights strategy is particularly helpful there. Second, the legislation should account for the limitations of the traditional strategies and couple them with other measures. Take the transparency strategy: where its effectiveness against the "black box" problem is limited, the alternative is independent supervision and accountability. An independent oversight board could be established to set standards for algorithm design and to review internally the major algorithms submitted by financial institutions; a simple example of such a review check is sketched below. To ensure compliance, top managers should be held accountable for violations of those standards, so that specific individuals bear final responsibility for their institutions' financial algorithms. In sum, context-based traditional strategies combined with additional tools could increase the effectiveness of regulation.
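What such a review could check is easiest to see with an example. The following minimal sketch (again in Python; the function name, decision log, and 80% threshold are illustrative assumptions, with the "four-fifths rule" borrowed from U.S. employment-discrimination practice rather than from any existing financial regulation) flags a model whose approval rates diverge across groups:

    from collections import defaultdict

    def disparate_impact(decisions):
        """decisions: iterable of (group, approved) pairs.
        Returns (lowest/highest group approval-rate ratio, per-group rates)."""
        tally = defaultdict(lambda: [0, 0])     # group -> [approved, total]
        for group, approved in decisions:
            tally[group][0] += int(approved)
            tally[group][1] += 1
        rates = {g: a / t for g, (a, t) in tally.items()}
        return min(rates.values()) / max(rates.values()), rates

    # Illustrative decision log: group A approved 80%, group B approved 55%.
    log = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 55 + [("B", False)] * 45
    ratio, rates = disparate_impact(log)
    print(rates, f"impact ratio = {ratio:.2f}")  # 0.55 / 0.80 = 0.69
    if ratio < 0.8:  # four-fifths rule of thumb
        print("flag for review: approval rates diverge beyond the 80% threshold")

A board armed with outcome checks like this, plus the power to demand the underlying model in confidence, could police results without forcing public disclosure of the proprietary algorithm itself.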

