Law in the Internet Society

Employees in the post-pandemic era: how does the rise of algorithmic evaluation methods threaten employees' right to equality?

-- By LiorSokol - 11 Jan 2022

In 2020, the COVID-19 pandemic spread around the world, and millions of employees were required to work from home as part of the effort to mitigate pandemic risks. This homeworking breakthrough will plausibly not be limited to the pandemic period but will instead mark a new era in the labor market.

One of the immediate consequences of homeworking is the increased use of algorithmic evaluation methods. This trend has two main drivers: First, the lack of in-person interaction between employer and employee creates a sense of lost control for the employer, who consequently seeks alternative means of assessment. Second, homeworking is characterized by increased use of technology, making algorithmic evaluation tools simpler and more accessible to deploy.

What is an algorithmic evaluation method?

An algorithm is software that maps each external input to a specific output. The programmer "trains" the software by exposing it to large datasets, which adjust the algorithm's parameters accordingly. In the labor-market context, employers use an algorithm trained on data about past employees to predict employees' success, whether to promote existing employees or to recruit new ones. The data fed into the algorithm includes both information directly related to the job, such as salaries, working hours, and other productivity metrics tailored to the workplace, and personal data, such as number of children, marital status, and health status. Each existing or potential employee is thereby given a data-based profile that estimates their future chances of success.
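The train-then-score pipeline described above can be sketched in a few lines. This is a deliberately simplified, hypothetical stand-in for real machine-learning systems: "training" here just learns the average past success rate for each feature value, and the data is invented for illustration.

```python
# Minimal sketch of an algorithmic evaluation pipeline (hypothetical data).

def train(history):
    """Learn the average past 'success' outcome for each (feature, value) pair."""
    totals = {}
    for record, success in history:
        for feature, value in record.items():
            s, n = totals.get((feature, value), (0, 0))
            totals[(feature, value)] = (s + success, n + 1)
    return {key: s / n for key, (s, n) in totals.items()}

def score(model, record):
    """Average the learned success rates over the candidate's features (0.5 if unseen)."""
    rates = [model.get(item, 0.5) for item in record.items()]
    return sum(rates) / len(rates)

# Invented past-employee records: (profile, succeeded?)
history = [
    ({"hours": "high", "children": 0}, 1),
    ({"hours": "high", "children": 2}, 1),
    ({"hours": "low",  "children": 0}, 0),
]
model = train(history)
candidate = {"hours": "high", "children": 2}
print(score(model, candidate))  # a "data-based profile" reduced to one number
```

Note that personal data ("children") enters the score on exactly the same footing as productivity data ("hours"), which is what opens the door to the discrimination discussed next.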

How do algorithmic evaluation methods discriminate against employees?

Many studies suggest that algorithmic evaluation methods tend to discriminate against underprivileged groups through several mechanisms. First, the historical data used to train the algorithm reflects existing biases. For instance, when assessing candidates' potential for management positions, a history of discrimination against women in promotion decisions produced a status quo in which management positions are overwhelmingly male. An algorithm trained on that history therefore tends to assess women as less suitable for management positions. The problem becomes even more severe when the algorithm's biases rest on seemingly "neutral" criteria, such as height and weight, that are correlated with sex, making the sources of bias hard to detect and much harder to fix. For example, a study examining Google's custom advertising algorithm found that a user searching for names associated with Black people received ads related to criminal-record information about 25% more often than average, without any programmer being able to identify the cause of the bias.
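The proxy mechanism can be made concrete with a toy example. In the sketch below (all data invented), the model never sees "sex" at all; it uses only a seemingly neutral feature, height. But because height is correlated with sex in the hypothetical training data, and past promotions went mostly to men, the "neutral" feature reproduces the historical bias.

```python
# Hypothetical illustration of proxy bias: the model only sees height,
# yet height is correlated with sex, so the bias survives. Invented data.

history = [  # (height_cm, promoted) -- past promotions skewed toward tall men
    (185, 1), (180, 1), (178, 1), (175, 1),
    (165, 0), (162, 0), (160, 0), (168, 0),
]

def promote_prob(height, data):
    """Share of past promotions among employees within 5 cm of this height."""
    near = [p for h, p in data if abs(h - height) <= 5]
    return sum(near) / len(near)

tall_candidate = promote_prob(180, history)    # estimated from an all-male cluster
short_candidate = promote_prob(163, history)   # estimated from an all-female cluster
print(tall_candidate, short_candidate)
```

Nothing in the code mentions sex, which is precisely why an auditor inspecting the feature list would find nothing to object to.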

Second, algorithmic evaluation leads to indirect discrimination based on generalization. In many cases, the average value of a particular characteristic differs between population groups. Although the characteristic itself may be relevant to evaluating the employee, the algorithm assumes the group average holds for every member of the group and thus indirectly discriminates against individuals who do not fit it. For instance, Uber's software predetermines fares for each driver. To make fare determination more efficient, Uber decided to use algorithmic evaluation software that predicts the actions of a specific driver and sets fares in a customized manner. A study that tracked the results of Uber's algorithmic evaluation found that women received lower rates than men because, on average, women drive more slowly. Thus, although driving speed does predict productivity, the generalization resulting from algorithmic evaluation meant that even a woman who drove at the same speed as a man received a lower wage.
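The generalization problem can be isolated in a short sketch. All numbers below are invented; the point is only the structure of the error: a fare rule keyed to the group's average speed underpays a woman whose own measured speed equals a man's, while a rule keyed to individual speed does not.

```python
# Hypothetical sketch of generalization-based indirect discrimination.
# Group-average speeds (mph) are invented for illustration.

avg_speed = {"M": 30.0, "F": 27.0}

def fare_by_group(sex, base=20.0):
    """Rate inferred from the group's average speed -- the discriminatory shortcut."""
    return base * avg_speed[sex] / 30.0

def fare_by_driver(speed, base=20.0):
    """Rate based on the individual's own measured speed."""
    return base * speed / 30.0

# Two drivers with identical measured speed of 30 mph:
print(fare_by_group("M"), fare_by_group("F"))   # unequal pay for equal speed
print(fare_by_driver(30.0))                     # equal pay for either driver
```

The relevant trait (speed) is measurable per driver, so the group-level generalization is not even necessary; it is an efficiency shortcut whose cost falls on atypical group members.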

The biased outcomes of algorithmic decision-making are uniquely disturbing in two ways. First, algorithmic methods seem "objective," so employers are much less aware of their biases than if they made the decisions themselves. Judicial review of these decisions is also much harder: algorithmic evaluations that rely on big data are based on ever-changing, dynamic mechanisms whose internal logic is difficult to follow, they rely on particularly complicated technologies, and the mechanisms by which they operate are inherently opaque. Second, algorithmic evaluation can serve as a tool for employers to cover up discriminatory decisions and strengthen their defense in court. Humans have enough biases without adding external ones or giving them tools to conceal them.

Suggested approach

The internal biases of algorithms call for an approach that both increases employers' awareness of the biases of algorithmic decision-making, so as to encourage caution, and gives employees tools for criticism. This framework should be implemented through employer guidelines for the execution and auditing of algorithmic decision-making. Such guidelines may be provided by federal bodies such as the EEOC (US Equal Employment Opportunity Commission), thereby giving employers the tools to avoid unintentional discrimination.

The key factors such guidelines should include, in my opinion, are as follows. First, the algorithm should be operated by people with sufficient expertise and a sophisticated understanding of the tools. Second, transparency: an explanation of how the algorithm operates and disclosure of the conditions underlying each algorithmic decision. Third, employers should define external fairness standards against which the algorithmic decision will be reviewed; that way, the algorithm's biases are more likely to be identified, mistakes corrected, and the algorithm improved. Fourth, the whole process should be verified and audited regularly: employers should implement a data quality control process to develop quality metrics, collect new data, evaluate data quality, and remove inaccurate data from the training data set.
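The third factor, review against an external fairness standard, can be illustrated with one well-established benchmark: the EEOC's "four-fifths" rule of thumb, under which a selection rate for one group below 80% of the rate for the most-favored group is evidence of adverse impact. The sketch below applies that check to invented hiring decisions; the data and the mechanical application of the threshold are illustrative only, not legal advice.

```python
# Minimal sketch of an external fairness audit using the EEOC
# four-fifths rule of thumb. Decision data is invented.

def selection_rate(decisions):
    """Fraction of candidates in the group who were selected (1 = selected)."""
    return sum(decisions) / len(decisions)

def four_fifths_check(group_a, group_b):
    """Return (ratio, passes): ratio of the lower selection rate to the higher."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    ratio = lo / hi if hi else 1.0
    return ratio, ratio >= 0.8

men   = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 selected
women = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 selected
ratio, passes = four_fifths_check(men, women)
print(ratio, passes)  # a failing ratio flags the outcome for human review
```

A check like this reviews only outcomes, not the algorithm's internals, which is exactly why it works even when the model itself is opaque.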

The COVID-19 pandemic created new opportunities in the labor market, but these were accompanied by equality risks deriving from the extensive use of algorithmic evaluation methods. As preventing or banning the use of such tools is not plausible, federal bodies should provide employers with the necessary guidelines for mitigating the algorithms' biased outcomes.



r3 - 11 Jan 2022 - 13:24:32 - LiorSokol