Law in the Internet Society

LiorSokolFinalEssays 1 - 11 Jan 2022 - Main.LiorSokol
First Essay

Employees in the Post-Pandemic Era: How Does the Rise of Algorithmic Evaluation Methods Violate Employees' Right to Equality?

In 2020, as the COVID-19 pandemic spread around the world, millions of employees were required to move to working from home as part of the effort to mitigate the pandemic's risks. The home-working breakthrough will plausibly not be limited to the pandemic period but will mark a new era in the labor market.

One of the immediate consequences of home working is the increased use of algorithmic evaluation methods, for two reasons. First, the lack of in-person interaction between employer and employee creates a sense of lost control for the employer, who consequently seeks alternative ways of assessment. Second, home working is characterized by increased use of technology, making algorithmic evaluation tools simpler and more accessible.

What is an algorithmic evaluation method?

An algorithm is software that produces a specific output for each external input. The programmer "trains" the software by exposing it to big data, which adjusts the algorithm accordingly. In the labor market, employers use an algorithm trained on big data about past employees to predict employees' success, to promote existing employees, or to recruit new ones. The data fed into the algorithm includes both information directly related to the job, such as salaries, working hours, and other productivity metrics tailored to the workplace, and personal data, such as the number of children, marital status, health status, etc. Each existing or potential employee is thereby given a data-based profile that evaluates their chances of future success.
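To make the mechanism concrete, here is a minimal, hypothetical sketch of such an evaluation tool. The features, records, and scoring rule are all invented for illustration; no real employer's system is described.

```python
# Hypothetical sketch of an algorithmic evaluation tool.
# All feature names and numbers are invented for illustration.

past_employees = [
    # (weekly_hours, tenure_years, num_children, was_promoted)
    (45, 6, 0, 1),
    (38, 2, 2, 0),
    (50, 8, 1, 1),
    (36, 1, 3, 0),
]

def train(records):
    """Learn one weight per feature: the difference between the
    feature's mean among promoted and non-promoted employees."""
    promoted = [r for r in records if r[-1] == 1]
    others = [r for r in records if r[-1] == 0]
    weights = []
    for i in range(len(records[0]) - 1):
        mean_p = sum(r[i] for r in promoted) / len(promoted)
        mean_o = sum(r[i] for r in others) / len(others)
        weights.append(mean_p - mean_o)
    return weights

def score(weights, candidate):
    """Higher score = predicted more likely to succeed."""
    return sum(w * x for w, x in zip(weights, candidate))

weights = train(past_employees)
print(score(weights, (40, 3, 2)))  # the candidate's data-based profile
```

Note that personal data such as `num_children` enters the score alongside productivity metrics, which is exactly how the profile comes to encode more than job performance.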

How do algorithmic evaluation methods discriminate against employees?

Many studies have suggested that algorithmic evaluation methods tend to discriminate against underprivileged groups through several mechanisms. First, the historical data used to train the algorithm reflects existing biases. For instance, when assessing candidates' potential for promotion to management positions, a reality in which women were discriminated against in promotions produced a status quo in which management positions are characterized by significant male dominance. The algorithm therefore tends to assess women as less suited for management positions. The problem becomes even more severe when the algorithm's biases rest on seemingly "neutral" criteria, such as height and weight, that are correlated with sex, making the sources of bias undetectable and much harder to fix. For example, a study examining Google's custom advertising algorithm found that a user searching for names associated with Black people would receive advertisements related to criminal-record information about 25% more often than average, without any programmer being able to detect the cause of the bias.
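The first mechanism can be sketched in a few lines. In this invented example, women were historically never promoted, so any model trained on this history will learn "female" as a negative signal regardless of qualifications:

```python
# Hypothetical sketch: historical bias in training data reproduces
# itself in a model's predictions. All records are invented.

# Past promotion decisions: (is_female, years_experience, promoted)
history = [
    (0, 5, 1), (0, 4, 1), (0, 3, 0),
    (1, 5, 0), (1, 4, 0), (1, 6, 0),  # women historically not promoted
]

def promotion_rate(records, is_female):
    """Base rate a trained model would pick up for each group."""
    group = [r for r in records if r[0] == is_female]
    return sum(r[2] for r in group) / len(group)

print(promotion_rate(history, is_female=0))  # about 0.67 for men
print(promotion_rate(history, is_female=1))  # 0.0 for women
```

A model fit to this data predicts low success for a woman with identical experience to a promoted man, not because of her performance, but because of the biased history it was trained on.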

Second, algorithmic evaluation leads to indirect discrimination based on generalization. In many cases, the average value of a particular characteristic differs between population groups. Although the characteristic itself may be relevant to the employee's evaluation, the algorithm assumes the group average holds for every member of the group and thus indirectly discriminates against individuals who do not share it. For instance, Uber's software predetermines fares for each driver. To make fare determination more efficient, Uber decided to use algorithmic evaluation software that predicts the actions of the specific driver and sets fares in a customized manner. A study that tracked the results of Uber's algorithmic evaluation found that women receive lower rates than men because, on average, women drive more slowly. Thus, although driving speed does predict productivity, the generalization resulting from algorithmic evaluation meant that even a woman who drove at the same speed as a man received a lower wage.
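The generalization mechanism reduces to a simple sketch: a model that pays according to a group average rather than the individual's own measurement. The groups, speeds, and rate are invented and do not describe Uber's actual system:

```python
# Hypothetical sketch of discrimination-by-generalization: pay is set
# from a group average, not the driver's actual behavior.
# All numbers are invented for illustration.

avg_speed_by_group = {"men": 30.0, "women": 27.0}  # invented group averages

def fare_per_trip(group, base_rate=1.0):
    # The model uses the *expected* (group-average) speed,
    # ignoring the individual driver's measured speed entirely.
    return base_rate * avg_speed_by_group[group]

# Two drivers who in fact drive at the identical speed of 30 mph:
print(fare_per_trip("men"))    # 30.0
print(fare_per_trip("women"))  # 27.0 -- lower pay for identical behavior
```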

The biased outcomes of algorithmic decision-making are uniquely disturbing in two ways. First, algorithmic methods seem "objective," so employers are much less aware of their biases than if they made the decisions themselves. Moreover, it is much harder to subject these decisions to judicial review: algorithmic evaluations that rely on big data are based on ever-changing dynamic mechanisms whose core logic is difficult to follow, they rely on particularly complicated technologies, and the mechanisms by which they operate are inherently non-transparent. Second, algorithmic evaluation can serve as a tool for employers to cover up discriminatory decisions and strengthen their defense in court. Humans have enough biases without adding external ones or giving them tools to cover them up.

Suggested approach

The internal biases of algorithms require an approach that both increases employers' awareness of the biases of algorithmic decision-making, so as to encourage caution, and gives employees tools to challenge those decisions. The framework should be implemented through guidelines for employers on the execution and auditing of algorithmic decision-making. Such guidelines may be provided by federal bodies, such as the EEOC (US Equal Employment Opportunity Commission), thereby giving employers the tools to avoid unintentional discrimination.

The key factors that such guidelines should include, in my opinion, are as follows. First, the algorithm should be operated by people with sufficient expertise and a sophisticated understanding of the tools. Second, transparency: providing an explanation of how the algorithm operates and disclosing the conditions underlying each algorithmic decision. Third, employers should define external fairness standards against which algorithmic decisions will be reviewed; the algorithms' biases are then more likely to be identified, allowing mistakes to be corrected and the algorithms improved. Fourth, an instruction to verify and audit the whole process regularly: employers should implement a data quality control process to develop quality metrics, collect new data, evaluate data quality, and remove inaccurate data from the training set.

The COVID-19 pandemic created new opportunities in the labor market, but they were accompanied by equality risks deriving from the extensive use of algorithmic evaluation methods. As preventing or banning the use of such tools is not plausible, federal bodies should provide employers with the necessary guidelines for mitigating the algorithms' biased outcomes.

Second Essay

The Adequate Balance in Governmental Big Data Disclosure Policies

-- By LiorSokol

Introduction

In recent years, big data has been gathered and held by governments. This "big data" includes data about criminal records, health, real-estate ownership, etc. The data is considered "big" because it is collected on a large scale, enabling us to draw statistical conclusions from it. For instance, health records can identify sensitivity to a certain disease, or unique side effects according to gender, race, place of residency, etc., and enable the development of more accurate treatments or a better understanding of a disease's sources. Once this data is collected, the question arises whether it should be "disclosed," and at which level: complete public disclosure, so that anyone can access the data, or on-demand disclosure for research purposes, on which I will focus. Under the status quo, a complex set of rules regulates disclosure in different fields. The HIPAA rules, for instance, deal with health records, and 45 CFR 164.502 significantly limits the disclosure of protected health data, conditioning it largely on the patient's consent. Different sources may require different rules; general principles, however, should apply to all fields. In this essay, I will present the main arguments in the literature for each side and try to derive the general principles that should apply and enable more extensive disclosure.

Normative Analysis: Big Data Disclosure

Advantages in Disclosing Data

Disclosing big data possessed by the government can be advantageous in various ways. First, research benefits. In the post-modern era, information is a significant component of research and the development of new products. The ability to analyze big data accelerates research progress. The research benefits are not limited to the health records discussed above but extend to various fields. For instance, analysis of criminal records could identify factors that increase criminal behavior and help uproot them. Second, fulfilling democratic purposes. In a democracy, sovereignty is given to the people, who in turn give the government its mandate. Disclosing information can inform the public about the functioning of state authorities, thereby holding elected officials accountable for their actions. Moreover, it allows individuals to make informed decisions. For example, the disclosure of crime or health records is a crucial factor in evaluating a residential area.

Disadvantages in Disclosing Data

Nevertheless, there are obvious disadvantages to such disclosure. The first is the violation of the individual's right to privacy. In particular, the information is often mandatorily collected by the state, without the individual's consent. Even when individuals opt in to provide information to the state, it is usually intended for a particular purpose, so that the state discloses the information for a purpose other than the one for which it was provided. According to theories of "privacy as control," the change of purpose takes individuals' information outside their control and thus violates the right to privacy. This violation may be mitigated if the information is published anonymously, but as long as there is a way to connect the information to the individual, the privacy violation cannot be overcome.

Second, using citizens' private data treats individuals as a product. In the digital world, information is a product that sells at a great price. Companies pay a lot of money to direct their advertisements to people expected to purchase their products, so a company that can provide information about a potential buyer will be rewarded financially. When the government shares its databases, information-analysis companies may use them to create profiles of individuals. Combining information from several databases, such as age, place of residence, and economic and family status, will allow advertisers to optimize their advertisements. Using individuals' private data as a financial product can affect the way individuals behave, consume, and read, thus violating their right to privacy. In sum, although sharing governmental big data can be economically and democratically beneficial, individuals' right to privacy may be severely violated.

Leading principles for data disclosure

We presented above two main privacy obstacles that prevent the full disclosure of data: the loss of anonymity, and the use of data for private purposes instead of public ones. Therefore, the following two principles should apply to the disclosure of governmental data in any field. First, utility to the public interest. To prevent data from being disclosed to private entities and used as a targeting tool for their private interests, disclosure should be limited to the public interest. The determination should be made both by analyzing the requesting party (for instance, a research body rather than a commercial corporation) and its purpose. If the requesting party has not proved a publicly beneficial purpose, the data should not be disclosed.

Second, the re-identification principle. To mitigate the potential privacy violation and maintain anonymity, the governmental authority should examine the ability to identify the source of the anonymized data and disclose it only if re-identification is impossible. Such a principle was presented in the Canadian Ontario case, in which the Supreme Court had to decide whether the ministry could be compelled to disclose the first three digits of the postal codes of sex offenders in Ontario. The Supreme Court approved the regional court's decision, according to which the information should be provided because of the inability to re-identify the offenders. The question that remained open was whether the test should be applied using existing or future technologies. In my opinion, the examination should include reasonably foreseeable technologies, meaning technologies that can reasonably be expected to exist in the near future. These two principles create an adequate balance: big data will be disclosed only for great public utility, and only if the violation of privacy rights can be mitigated by anonymity.
These two principles should be the basis of any field-specific regulation, whether to extend disclosure (as, in my opinion, is required for health records) or to limit it.
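One possible operationalization of the re-identification principle is a k-anonymity check before disclosure: release the data only if every combination of quasi-identifying attributes is shared by enough records that no individual can be singled out. The dataset, attribute names, and threshold below are invented examples, and k-anonymity is only one of several tests an authority might adopt:

```python
# Minimal sketch of a pre-disclosure re-identification check using
# k-anonymity. Records, fields, and the threshold are invented.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k=5):
    """True if every combination of quasi-identifier values is shared
    by at least k records, so no record can be singled out."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

records = [
    {"postal_prefix": "M5V", "age_band": "30-39", "diagnosis": "flu"},
    {"postal_prefix": "M5V", "age_band": "30-39", "diagnosis": "asthma"},
    {"postal_prefix": "K1A", "age_band": "40-49", "diagnosis": "flu"},
]

# With k=2, the lone "K1A"/"40-49" record is re-identifiable,
# so under this test the dataset should not be disclosed as-is:
print(is_k_anonymous(records, ["postal_prefix", "age_band"], k=2))  # False
```

Extending the test to "reasonably foreseeable technologies," as argued above, would mean raising k or narrowing the released attributes as linkage techniques improve.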

