
Media release
Discrimination through AI hiring tools: Research project presents new solutions
A major European research project is releasing new tools, approaches, and concrete recommendations aimed at tackling discrimination in job recruiting caused by AI hiring tools.

An increasing number of companies are using AI-assisted recruiting systems in their hiring processes. While these systems can save time, they also carry a significant risk of discrimination. As part of the Horizon Europe project "FINDHR – Fairness and Intersectional Non-Discrimination in Human Recommendation", AlgorithmWatch CH has spent the past three years working with an interdisciplinary European consortium from academia, industry, and civil society to develop solutions that address this issue.
"Because the discrimination risks posed by such systems are systematic and scalable, using AI hiring tools comes with a special responsibility – for software developers, recruiters, and policymakers."
Moira Daviet, Researcher at AlgorithmWatch CH
The FINDHR guideline documents, tools, and training programs are freely available to use:
Proven risks of discrimination in AI-assisted hiring
AI-assisted hiring systems promise time savings for HR professionals. However, real-world experiences show that these systems can also reinforce existing patterns of discrimination or create new ones – often without the knowledge of people using them. The FINDHR research project places a special focus on intersectional discrimination, in which the combination of several personal characteristics (such as gender, age, religion, origin, or sexual orientation) creates new forms of discrimination and multiplies existing ones.
The project’s results show that discrimination in automated hiring processes is by no means just a theoretical construct, but a lived reality for many people. Researchers conducted interviews with affected individuals in seven European countries (Albania, Bulgaria, Germany, Greece, Italy, the Netherlands, and Serbia). Many reported feelings of powerlessness and frustration: despite their qualifications and numerous applications, they received only automated rejections – often outside of normal working hours – making it unlikely that a human being had ever reviewed their application.
Solutions and methods to counter algorithmic discrimination in hiring
How can the risk of discrimination be reduced when developing and using AI-assisted hiring systems?
"We need an interdisciplinary approach that covers measures in software development, within HR departments, and at the political level. Algorithmic discrimination is not merely a technical problem – it is rooted in our broader societal structure and cannot be solved through technical solutions alone. The social, cultural, and political contexts in which these systems are developed and used must also be carefully considered."
Moira Daviet, Researcher at AlgorithmWatch CH
Overview of FINDHR tools and solutions
- FINDHR Toolkits with concrete recommendations for software developers, HR professionals, and policymakers. Read more.
- Guidelines and methods for inclusive software design and for the responsible use, auditing and monitoring of algorithmic recruiting systems. Read more.
- Technical tools and software to reduce the risk of algorithmic discrimination in AI-hiring systems. Read more.
- Training programs for professionals to build awareness about the risks of algorithmic discrimination in hiring. Read more.
- Insights into the experiences of people affected by discrimination in AI-based hiring, and a practical manual for jobseekers that draws attention to the often invisible barriers in the job search. Read more.
The following partners are involved in the project "FINDHR":
- Universitat Pompeu Fabra
- Universiteit van Amsterdam
- Università di Pisa
- Max-Planck-Institut für Sicherheit und Privatsphäre
- Radboud Universiteit
- Universiteit Utrecht
- Women in Development Europe+
- Praksis Association
- Eticas Research and Consulting
- Randstad
- Adevinta
- AlgorithmWatch CH


This article is part of a project that has received funding from the European Union's Horizon Europe research and innovation program under grant agreement No 101070212. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
This work is supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number 22.00151.
