Law in the Internet Society

Employees in the post-pandemic era: how does the rise in using algorithmic evaluation methods violate the employees' right to equality?

-- By LiorSokol - 22 Oct 2021

In 2020, the COVID-19 pandemic spread around the world, and millions of employees were required to move to working from home as part of the effort to mitigate the pandemic's risks. This breakthrough in homeworking will plausibly not be limited to the pandemic period but will mark a new era in the labor market (Baert, 2020).

One of the immediate consequences of homeworking is the increased use of algorithmic evaluation methods, for two main reasons. First, the lack of in-person interaction between employer and employee creates a sense of lost control for the employer, who consequently seeks alternative means of assessment. Second, homeworking is characterized by increased use of technology, making algorithmic evaluation tools simpler and more accessible to deploy (Köchling, 2020).

What is an algorithmic evaluation method?

An algorithm is software that, for each input, produces a specific output. The programmer "trains" the software by exposing it to big data, and the algorithm's parameters are adjusted accordingly. In the labor market, employers use an algorithm trained on big data about past employees in order to predict employees' success, to promote existing employees, or to recruit new ones. The data fed into the algorithm includes both information directly related to the job, such as salaries, working hours, and other productivity metrics tailored to the workplace, and personal data, such as the number of children, marital status, health status, etc. (Köchling, 2020). In this way, each existing or potential employee is given a data-based profile that evaluates their chances of future success.
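The training-and-prediction loop described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration, not any vendor's actual system: the feature names (weekly hours, tenure, number of children) and the records are invented, and the "model" is a one-nearest-neighbour rule, one of the simplest ways a score for a new candidate can be derived from data about past employees.

```python
# Hypothetical sketch of an algorithmic evaluation tool: records of past
# employees (both job-related and personal data) serve as training data,
# and a new candidate is scored by analogy to the most similar past record.
# All features and values are invented for illustration.

past_employees = [
    # (weekly_hours, tenure_years, num_children, promoted)
    (45, 6, 0, True),
    (50, 4, 1, True),
    (38, 3, 2, False),
    (40, 2, 3, False),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_success(candidate):
    """1-nearest-neighbour: copy the outcome of the most similar past employee."""
    nearest = min(past_employees, key=lambda rec: distance(rec[:3], candidate))
    return nearest[3]

print(predict_success((48, 5, 0)))  # resembles the promoted employees -> True
```

Note that personal data such as the number of children enters the distance calculation on equal footing with productivity metrics, which is exactly how job-irrelevant characteristics can come to drive the "data-based profile."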

How do algorithmic evaluation methods discriminate against employees?

Many studies have suggested that algorithmic evaluation methods tend to discriminate against underprivileged groups through several mechanisms. First, the historical data used to train the algorithm reflects existing biases (Packin, 2018). For instance, when assessing candidates' potential for promotion to management positions, a reality in which women were discriminated against in such promotions produced a status quo of significant male dominance in management. The algorithm therefore tends to assess women as less suitable for management positions. The problem becomes even more severe when the algorithm's biases rest on seemingly "neutral" criteria, such as height and weight, that are correlated with sex, making the sources of bias undetectable and much harder to fix. For example, a study of Google's custom advertising algorithm found that a user searching for names associated with Black people received ads related to criminal-record information about 25% more often than average, without any programmer being able to identify the cause of the bias (Sweeney, 2013).
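The first mechanism, inherited historical bias, can be made concrete with a toy example. The groups, labels, and numbers below are invented for illustration: if the training labels record that group "B" was rarely promoted in the past, a model that does nothing more than learn the historical promotion rate per group will reproduce that disparity, even though group membership says nothing about ability.

```python
# Hypothetical sketch of bias inherited from training data: the historical
# promotion labels are themselves the product of past discrimination, so a
# model fitted to them "learns" the discrimination as if it were a pattern
# of merit. Groups and outcomes are invented for illustration.

history = [
    ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True),
]

def learned_rate(group):
    """Historical promotion rate for the group, as the model learns it."""
    outcomes = [promoted for g, promoted in history if g == group]
    return sum(outcomes) / len(outcomes)

def recommend(group, threshold=0.5):
    """'Recommend' promotion when the learned historical rate clears the bar."""
    return learned_rate(group) >= threshold

print(recommend("A"), recommend("B"))  # True False
```

The point of the sketch is that no discriminatory rule was written anywhere: the bias lives entirely in the data, which is why it is so hard to detect by inspecting the code.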

What's the point of giving references in this form if the bibliography they refer to is not made available? Was this text brought from somewhere without the reference list?

Second, algorithmic evaluation leads to indirect discrimination based on generalization. In many cases, the average value of a particular characteristic differs between population groups. Although the characteristic itself may be relevant to the employee's evaluation, the algorithm assumes the group average holds for every member of the group and thus indirectly discriminates against individuals who do not fit it (Köchling, 2020). For instance, Uber's software predetermines fares for each driver. To make fare determination more efficient, Uber decided to use algorithmic evaluation software that predicts the actions of the specific driver and sets fares in a customized manner. A study that tracked the results of Uber's algorithmic evaluation found that women receive lower rates than men because, on average, women drive more slowly. Thus, although travel speed does predict productivity, the generalization resulting from algorithmic evaluation meant that even a woman who drove at the same speed as a man received a lower wage (Rosenblat, 2016).
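The generalization mechanism can likewise be reduced to a toy calculation. This is not Uber's actual pricing model; the group averages and the pay formula are invented to illustrate the structure of the problem: when pay is scaled by the average the model attributes to a driver's group rather than by the driver's own behaviour, two drivers who behave identically are paid differently.

```python
# Hypothetical sketch of discrimination by generalization: the model prices
# a driver by the average speed of the driver's group, not by the driver's
# own measured speed. Averages and the formula are invented for illustration.

avg_speed = {"men": 30.0, "women": 28.0}  # invented group averages, km/h

def predicted_pay(group, base_rate=1.0):
    """Pay rate scaled by the group average attributed to the driver."""
    return base_rate * avg_speed[group] / 30.0

# Two drivers who both actually drive at 30 km/h:
print(predicted_pay("men"))    # 1.0
print(predicted_pay("women"))  # ~0.93
```

The individual's true speed never appears in the formula, which is the essence of the indirect discrimination described above: a relevant criterion is applied at the group level rather than the individual level.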

Third, employers may use algorithmic evaluation tools to justify intentionally discriminatory decisions by presenting the algorithmic evaluation as a relevant distinction. In doing so, they can treat employees in a discriminatory way while reducing the chances of being sued. In cases of algorithmic evaluation it is difficult to point out the discriminatory behavior, as it is well hidden within the algorithm and almost completely detached from the employer's actions. In particular, when trying to trace the discriminatory criterion, the human eye will in many cases not be able to identify it at all. Algorithmic evaluations that rely on big data are based on ever-changing, dynamic mechanisms that make the logic at their core difficult to follow; they rely on particularly complicated technologies, and the mechanisms by which they operate are inherently non-transparent.

Suggested immediate legal solution

The inability to detect and eliminate discriminatory criteria requires an external solution. It is therefore appropriate to adopt, from administrative law, the principle of imposing a duty to give reasons for employers' decisions that are based, fully or partially, on algorithmic evaluation. This duty mitigates discriminatory decisions made on the basis of algorithmic evaluation through two main mechanisms. First, the technological possibility of developing algorithms with more transparent and simple criteria and predictive mechanisms already exists, but it is hardly used. Imposing the duty to give reasons will encourage the adoption of transparent algorithms and help solve, a priori, the inability to monitor algorithmic decisions (Pearl, 2018). Second, the reasons themselves enable indirect criticism of the employer's decisions. The employee can use them as evidence of discrimination, appeal the decision, or demand more comprehensive reasoning (Dotan, 2002). Moreover, the duty creates a mechanism for employers to audit themselves even before the decision is made and to become more aware of implied discrimination and of the particular difficulties that characterize each employee. Another benefit is addressing the employee's emotional need for an explanation of the decision about them, regardless of whether it is discriminatory (Pearl, 2018).

The COVID-19 pandemic created new opportunities in the labor market, but these were accompanied by risks to equality deriving from the extensive use of algorithmic evaluation methods. Assuming that this use cannot be prevented, legal systems should adjust and mitigate these harms by imposing a duty to give reasons, and the sooner the better.

You nowhere explain why it would be acceptable for an employer to make all these same decisions based on all the same data if it weren't using a computer, or were running a different kind of software to support the employer's decision-making. It's not as though employers can't find other ways of behaving unfairly. I don't understand this argument about a "duty of reasoning." If the employment is at will, where does this duty come from for employers who have no duty to give any reasons to anyone? If this is about collective bargaining agreements for union workers, why is there any difference in the grievance processes under the contract based on whether the employer is using particular forms of decision-support software? Some clarification is in order.




r2 - 05 Dec 2021 - 20:46:11 - EbenMoglen