FINDHR

Fair algorithms in personnel selection?

AlgorithmWatch CH is part of the Horizon Europe project “FINDHR”. In this interdisciplinary research project, we address the discriminatory effects of software used in recruiting processes by developing methods, tools, and training programs designed to prevent discrimination.


Project Manager: Moira Daviet (Researcher)

How do you find the most suitable applicants in a large application pool? Application and hiring processes are challenging and require considerable time and effort. The desire to let algorithmic decision-making systems do part of the work is therefore understandable. Such ADM (automated decision-making) systems can, for example, pre-sort applications based on resumes or rank applicants with the help of online tests. They are supposed to find the best applicants while saving time and increasing efficiency. However, experience has shown that the use of such systems can reproduce existing discrimination or even raise additional discriminatory barriers in the labor market.
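
To illustrate one way such risks can be made measurable, the sketch below checks a pre-sorting system's outcomes against the "four-fifths rule", a common heuristic from employment-discrimination practice: if one group's selection rate falls below 80% of the highest group's rate, the outcome warrants scrutiny. This is a minimal illustration in Python, not FINDHR's tooling; the outcome data and group labels are invented for the example.

```python
# Illustrative sketch (not FINDHR software): checking a resume pre-sorting
# system's outcomes for adverse impact using the "four-fifths rule".

from collections import Counter

# Hypothetical screening outcomes: (group label, passed pre-sorting?)
outcomes = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

passed = Counter(group for group, ok in outcomes if ok)
total = Counter(group for group, _ in outcomes)
rates = {g: passed[g] / total[g] for g in total}

best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```

A check like this only flags unequal outcomes; it says nothing about their cause, which is why such measurements are usually one input to a broader audit rather than a verdict on their own.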

One of the best-known examples is a recruiting tool that Amazon reportedly developed some years ago to make its hiring process more efficient. While the software was still in the testing phase, it turned out that its recommendations discriminated against women, as their resumes were sorted out. Amazon said it abandoned the project before deploying the software because attempts to eliminate the discrimination against women were unsuccessful.

What needs to be considered when developing and using personnel selection software in order to prevent discrimination? In the project FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation), we aim to answer this question. Our EU-funded project is designed to develop fair algorithms for personnel selection using a context-sensitive approach. This approach is not restricted to the technical aspects of ADM systems, but extends its scope by factoring in the context in which a system is developed and the social consequences it might have.

The interdisciplinary and international research consortium, of which AlgorithmWatch CH is a member, started its work in November 2022 and is developing new tools (1) to measure discrimination risks, (2) to produce fairness-aware rankings and interventions, and (3) to provide interpretations that enable various stakeholders to act on the results. In addition, the project will develop new technical guidance for conducting impact assessments and algorithmic audits, a protocol for monitoring equity, and a guide for developing equity-aware ADM software. Another project objective is to design and provide specialized training for developers and auditors of ADM systems.
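
As a rough illustration of what a fairness-aware ranking intervention can look like, the sketch below greedily re-ranks candidates so that every prefix of the result contains at least a minimum share of a protected group, a simplified version of well-known re-ranking approaches such as FA*IR. This is an assumption-laden sketch, not the method developed in FINDHR; the applicant names, scores, group labels, and the min_share threshold are all invented.

```python
# Illustrative sketch (not the FINDHR method): a greedy re-ranking that
# enforces a minimum share of a protected group in every prefix of the
# ranking, one simple form of a fairness-aware ranking intervention.

def rerank(candidates, protected, min_share=0.4):
    """candidates: list of (name, score, group) tuples.
    Ensures that at every prefix length k, at least floor(min_share * k)
    candidates come from the protected group (when enough are available)."""
    pool = sorted(candidates, key=lambda c: c[1], reverse=True)
    prot = [c for c in pool if c[2] == protected]
    rest = [c for c in pool if c[2] != protected]
    ranking = []
    while prot or rest:
        k = len(ranking) + 1
        need = int(min_share * k)  # required protected count at prefix k
        have = sum(1 for c in ranking if c[2] == protected)
        # Take a protected candidate if the quota would otherwise be missed,
        # if no others remain, or if they have the higher score anyway.
        if prot and (have < need or not rest or prot[0][1] >= rest[0][1]):
            ranking.append(prot.pop(0))
        else:
            ranking.append(rest.pop(0))
    return ranking

# Invented example data: group "w" is treated as the protected group.
applicants = [
    ("Ben", 0.95, "m"), ("Cem", 0.90, "m"), ("Emil", 0.88, "m"),
    ("Ana", 0.84, "w"), ("Dana", 0.80, "w"), ("Fay", 0.75, "w"),
]
for name, score, group in rerank(applicants, protected="w"):
    print(f"{name} ({group}): {score}")
```

The point of the sketch is only that a ranking can be adjusted after scoring, without discarding the underlying scores; in practice, constraints like these have to be chosen per context and combined with the kinds of measurement, interpretation, and auditing tools the project is developing.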

The development of discrimination-sensitive AI applications requires the processing of sensitive data. Therefore, the project will include a legal analysis discussing the tensions between data protection legislation and anti-discrimination legislation in Europe. Throughout the project, groups potentially affected by discrimination will be involved.

AlgorithmWatch CH, together with other project partners, will focus on developing tools that ensure that algorithms used in personnel selection procedures are based on ethical principles and do not reproduce discriminatory biases. AlgorithmWatch CH will also contribute evidence-based policy work and communication initiatives to the project.

All results will be published as open-access publications, open-source software, open datasets, and open courseware. We will report regularly on the results on our social media channels and on the AlgorithmWatch CH website.

The following partners are involved in the FINDHR project:

This work is supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number 22.00151.

This article is part of a project that has received funding from the European Union's Horizon Europe research and innovation program under grant agreement No 101070212.

Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
