Project “FINDHR”
Avoiding discrimination in algorithmic hiring
AlgorithmWatch CH is part of the Horizon Europe project “FINDHR”. In this interdisciplinary research project, we address discrimination caused by recruiting software, developing methods, tools, and trainings designed to prevent it.
Ongoing project
Project duration: November 2022 to October 2025
How do you find the most suitable applicants in a large application pool? Application and hiring processes are demanding and take considerable time and effort. The desire to let algorithmic decision-making systems do part of the work is therefore understandable. Such algorithmic systems can, for example, pre-sort applications based on resumes or rank applicants with the help of online tests. They’re supposed to find the best applicants while saving time and increasing efficiency. However, experience has shown that the use of such systems can reproduce discrimination or even raise additional discriminatory barriers in the labor market.
One of the best-known examples is a recruiting software that Amazon reportedly developed some years ago to make its hiring process more efficient. While the software was still in the testing phase, it turned out that its recommendations discriminated against women: their resumes tended to be sorted out. Amazon said it abandoned the project before deploying the software because attempts to eliminate the discrimination against women were unsuccessful.
Research objective
What needs to be considered when developing and using personnel selection software in order to prevent discrimination? In the project “FINDHR – Fairness and Intersectional Non-Discrimination in Human Recommendation”, we aim to find answers to this question. This EU-funded project develops fair algorithms for personnel selection using a context-sensitive approach: it takes into account not only the technical aspects of algorithms, but also the context in which a system is developed and the social consequences it might have. The interdisciplinary and international research consortium, of which AlgorithmWatch CH is a member, started its work in November 2022 and is developing:
- Methods to measure and prevent discrimination in algorithmic hiring
- Tools that reduce the risk of discrimination in algorithmic hiring
- Trainings to raise awareness of the risk of discrimination in algorithmic hiring
Legal and ethical perspective
The development of discrimination-sensitive AI applications requires the processing of sensitive data. Therefore, the project includes a legal analysis discussing the tensions between data protection legislation and anti-discrimination legislation in Europe. Throughout the project, groups potentially affected by discrimination are involved.
AlgorithmWatch CH, together with other project partners, is focusing on the development of tools ensuring that algorithms in personnel selection procedures are based on ethical principles and won’t reproduce discriminatory biases. With its evidence-based advocacy work, AlgorithmWatch CH also contributes to communicating the results of the project to various target groups.
Project updates
All results are published as open-access publications, open-source software, open datasets, and open courseware. We regularly report on the results on our social media channels, on the FINDHR website, and here on the AlgorithmWatch CH website.
Give us feedback on our ongoing research
We want to hear from you! We are working on tools to prevent discrimination through AI in hiring. Together with our partners, we have been focusing on three research topics: software development, impact assessment, and control mechanisms. Our question for you: What do you think of our proposals? Is something missing? What can we improve? Please give your feedback and comments directly in the documents below – your opinion matters!
- Your feedback on our Software Development Guide
- Your feedback on our Impact Assessment Model (link follows)
- Your feedback on our Equality Monitoring Protocol (link follows)
Anti-discrimination training for algorithmic hiring
We developed an anti-discrimination training for algorithmic hiring, aimed at human resources professionals, recruiters, and anyone else who would like to learn about algorithmic discrimination in hiring and how to avoid it.
We designed two types of free training courses:
- a 3-hour masterclass
- a 30-hour course on anti-discrimination for AI in recruitment
Expert Reports
In 2023, we commissioned distinguished experts to research discrimination in hiring that affects marginalized groups:
- «The Case for Latin American Migrants Seeking Employment Opportunities in Spain» by César Rosales, Nataly Buslón, Fabio Curi and Raquel Jorge (2023)
- «Tracing Bias Transfer Between Employment Discrimination and Algorithmic Hiring with Migrant Tech Workers in Berlin» by Jie Liang Lin (2023)
- «Ensuring Human Intelligence in AI Hiring Tools» by Paksy Plackis-Cheng, Tejo Chalasani, Sabrina Palme, et al. (2023)
Data Donation Campaign
In the framework of FINDHR, we are developing tools to discover and reduce injustices in personnel selection. To achieve this, we need real CVs with which we can test how such injustices enter HR recommendation algorithms. Based on these real CVs, we use software to create artificial CVs, which we then use to develop methods against discrimination.
Between June 2023 and May 2024, we ran a data donation campaign in which more than 1,100 people donated their anonymized CVs – many thanks for your contribution!
These partners are involved in the project "FINDHR":
- Universitat Pompeu Fabra
- Universiteit van Amsterdam
- Università di Pisa
- Max-Planck-Institut für Sicherheit und Privatsphäre
- Radboud Universiteit
- Universiteit Utrecht
- Women in Development Europe+
- Praksis Association
- Eticas Research and Consulting
- Randstad
- Adevinta
- AlgorithmWatch CH
This article is part of a project that has received funding from the European Union's Horizon Europe research and innovation program under grant agreement No 101070212. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
This work is supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number 22.00151.