
Explainer
Automatically rejected? How AI systems can discriminate in hiring processes.
AI-based recruitment systems can pre-screen and evaluate applications or recommend suitable candidates. These algorithm-based systems can save HR managers time but also carry the risk of systematic discrimination, which often goes unnoticed.

Recruiting staff is both time-consuming and expensive. Employers and HR teams often receive hundreds of applications per job opening and face significant time pressure. More and more public and private organizations are using algorithmic decision-making tools, known as applicant tracking systems (ATS), to help pre-sort applications. HR departments and recruiting companies hope to save time and money by using these systems, which increasingly incorporate AI-supported functions.
Algorithms can also be biased
Algorithmic systems are often perceived as more ‘objective’ or ‘neutral’ than human decision-makers and are intended to make hiring processes fairer. In practice, however, recruitment systems can reproduce or exacerbate existing discriminatory patterns and even create new forms of discrimination.
For example, if senior positions in a company have been predominantly occupied by men for an extended period, there is a risk that an algorithmic system will learn this inequality from the historical data and rate female candidates as less suitable during selection.
One prominent example is a recruitment system developed by Amazon. During testing, it became clear that the system disadvantaged female candidates by assigning lower scores to their CVs. Attempts to correct the bias were reportedly unsuccessful, and according to Amazon the project was abandoned before being deployed.
The data used to train algorithms often reflect existing social structures, stereotypes, and biases. Moreover, decisions made by software developers and users can significantly influence the system’s outcomes. Their assumptions and preconceptions can become embedded in the system. In short, these systems are not inherently neutral or objective.
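To make this mechanism concrete, here is a minimal sketch (in Python, using scikit-learn on entirely synthetic data) of how a screening model trained on biased historical decisions reproduces that bias. All variables and numbers are illustrative assumptions, not taken from any real system:

```python
# A minimal sketch (synthetic data, scikit-learn): a screening model trained
# on biased historical hiring decisions reproduces that bias. All names and
# numbers here are illustrative assumptions, not from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)          # 0 and 1: two groups, equally qualified
qualification = rng.normal(0, 1, n)     # identical distribution for both groups

# Historical decisions favored group 0 regardless of qualification.
hired = (qualification + 0.8 * (gender == 0) + rng.normal(0, 1, n)) > 0.5

# Train on those past decisions, with gender available as a feature.
X = np.column_stack([qualification, gender])
model = LogisticRegression().fit(X, hired)

# Equally qualified candidates now receive different scores by group.
probe = np.array([[0.0, 0], [0.0, 1]])  # same qualification, different group
print(model.predict_proba(probe)[:, 1]) # group 0 scores clearly higher
```

The model never learns anything about actual suitability that differs between the groups; it simply extrapolates the skewed pattern in the past decisions it was trained on.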
Algorithmic discrimination is complex and subtle
Discrimination by AI and algorithms often goes unnoticed. These systems may disadvantage job seekers in ways that are difficult to detect or measure. Algorithmic discrimination can affect anyone, but it disproportionately impacts people who already face discrimination – based on gender, ethnicity, age, sexual orientation, or religion, for example. These characteristics are protected under European anti-discrimination law. A person can also be discriminated against due to multiple characteristics and thereby experience intensified forms of discrimination, which is referred to as intersectional discrimination. Developers of algorithmic recruitment systems face specific challenges in preventing intersectional discrimination.
Even a person’s name or photograph on their CV can lead to biased outcomes when an AI system evaluates the application. Some job seekers have reported that, after repeated rejections, they tried to achieve better results by changing their names or the spelling of their names to sound more ‘Western’ or to be easier to read. Others edit their photo to appear younger or older, or intentionally downplay their work experience because they fear being filtered out due to age or overqualification.
Even with anonymized applications, algorithmic recruitment systems may still infer demographic characteristics (such as nationality, age, or gender) of candidates, even if these are not explicitly stated in the résumé. Applications often contain subtle clues, known as “proxies”, that enable conclusions to be drawn about these demographic characteristics. For example, language skills, professional experience, or education can be used to infer ethnic origin, age, or gender. Since discrimination through proxy variables is hidden, it is difficult to detect and prevent.
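The following sketch (again synthetic; the two proxy features are hypothetical) illustrates the point: even when the protected attribute itself is removed from an application, a simple model can often recover it from correlated features:

```python
# A minimal sketch (synthetic data): a protected attribute that was removed
# from the input can still be inferred from correlated "proxy" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000

protected = rng.integers(0, 2, n)       # never given to the screening model
# Two hypothetical proxies correlated with the protected attribute,
# e.g. a language-skill flag and a career-gap indicator.
language_flag = (protected + rng.normal(0, 0.5, n)) > 0.5
career_gap = (protected + rng.normal(0, 0.7, n)) > 0.5

# The "anonymized" feature set contains only the proxies.
X = np.column_stack([language_flag, career_gap]).astype(float)
X_train, X_test, y_train, y_test = train_test_split(X, protected, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"protected attribute recovered with {clf.score(X_test, y_test):.0%} accuracy")
# Well above the 50% chance level: anonymization alone does not remove the signal.
```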
Online application processes can exclude people from the outset
Automation can now be used at many stages of the recruitment process: publishing job advertisements, pre-selecting applications, running digital assessment tools in job interviews, and even making the final evaluation of candidates.
Algorithmic discrimination can occur even before a candidate submits an application. Both the choice of the job platform and the algorithmic audience targeting significantly influence who gets to see a job advertisement in the first place.
An experiment by AlgorithmWatch showed that online platforms – in this case Facebook – can display job ads in a discriminatory manner. Even though we had not specified a gender-specific target group, the platform automatically optimized ad delivery along stereotypical lines: a job ad for truck drivers was shown predominantly to men, while an ad for childcare workers was shown almost exclusively to women.
Facebook uses, among other signals, an advertisement’s image content to predict who is likely to click on it. Algorithmic systems can pick up stereotypical imagery or language and derive delivery decisions from it. If, for example, a company uses an image in a job ad that shows a specific group of people, recommendation algorithms may show the ad primarily to individuals from that group. This can limit the diversity of potential candidates right from the start, which is why job advertisements should avoid stereotypical imagery and language.
Companies rarely disclose when they use algorithmic systems in hiring. Applicants therefore rarely know whether their documents are initially reviewed by a system or by a person. However, algorithms might assess a CV using different criteria than humans do: while visual elements such as colors or graphics might attract a recruiter’s attention, they can make it harder for algorithms to read and interpret the content. In one experiment, qualified applicants were rejected solely because of their CV’s formatting – a factor that had nothing to do with their qualifications. How well and efficiently the system could read a CV (its ‘parsability’) turned out to be a decisive factor in the evaluation result.
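As a rough illustration of the parsability problem, consider a naive keyword screener, a deliberately simplified stand-in for a real applicant tracking system. The CV content and required skills below are made up; the point is only that the same qualifications can score very differently depending on how cleanly the text was extracted:

```python
# A minimal sketch of how parsability, not qualification, can drive a score:
# a naive keyword screener scores the same CV content extracted cleanly vs.
# garbled by a two-column layout (all data hypothetical).
REQUIRED_SKILLS = {"python", "sql", "project management"}

def keyword_score(cv_text: str) -> float:
    """Fraction of required skills found verbatim in the extracted text."""
    text = cv_text.lower()
    return sum(skill in text for skill in REQUIRED_SKILLS) / len(REQUIRED_SKILLS)

# Same qualifications, two extraction outcomes.
clean_extract = "Skills: Python, SQL, project management. 5 years experience."
# A two-column PDF often interleaves columns when converted to plain text:
garbled_extract = "Skills: Py 5 years thon, S experi QL, project ence manage ment."

print(keyword_score(clean_extract))    # 1.0 – all skills detected
print(keyword_score(garbled_extract))  # 0.0 – same CV, nothing detected
```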
Algorithmic discrimination is difficult for those affected to recognize or understand. Therefore, greater transparency about the use of algorithmic application systems and better training for job seekers on how to navigate them are urgently needed.
Because many algorithmic systems lack transparency, it is often unclear when, how, or why discrimination occurs – yet the consequences are serious for everyone involved.
When people are disadvantaged because of protected characteristics and their access to work is made more difficult, their fundamental rights are violated. At the same time, employers incur significant costs: beyond reputational risks, they may overlook qualified and competent candidates whom AI systems have filtered out. This can reduce workforce diversity and ultimately harm the company’s performance.
Less discrimination is possible
Access to the labor market has unfortunately always been characterized by discrimination. Digital processes do not automatically eliminate existing discriminatory patterns. The good news is that less discrimination is possible when all parties involved take responsibility.
AlgorithmWatch was part of the international research project “FINDHR”, which developed methods and recommendations for reducing algorithmic discrimination in recruitment.
Based on the project’s findings, three toolkits were developed for different target groups. They contain well-founded background information as well as concrete recommendations on how to counteract algorithmic discrimination in recruitment.
HR managers can raise awareness within their teams about potential risks of algorithmic discrimination. They can also ensure that recruitment processes and systems are critically reviewed, both internally and externally, to identify potential discrimination.
Software developers can contribute significantly to making algorithmic systems more transparent and easier to audit through inclusive, fairness-conscious software design and by implementing decision logic that is understandable and traceable. During development and design, they should consider the needs of the various stakeholders who will engage with the system and gather their different perspectives. This includes designing interfaces to be accessible and usable even for users with limited technical experience. In addition, the system should be able to clearly show which factors led to a particular evaluation. This helps HR managers better interpret algorithmic recommendations and enables candidates to tailor their profiles to specific requirements.
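As an illustration of what traceable decision logic can look like, here is a sketch of a linear scoring model that reports each feature’s contribution to the final score. Feature names and weights are hypothetical, and real systems are usually more complex, but the principle of attributable scores carries over:

```python
# A minimal sketch of traceable decision logic: a linear scoring model whose
# per-feature contributions can be shown to recruiters and candidates alike
# (feature names and weights are hypothetical).
WEIGHTS = {"years_experience": 0.5, "degree_match": 1.2, "skill_overlap": 2.0}

def score_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the total score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation(
    {"years_experience": 4, "degree_match": 1, "skill_overlap": 0.6}
)
print(f"score: {total:.2f}")
for name, contribution in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {contribution:+.2f}")
```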
Finally, policymakers can strengthen legal and political frameworks to better protect job seekers and workers from algorithmic discrimination – for example, by creating national and/or EU-level complaint mechanisms for reporting discriminatory recruitment algorithms. Likewise, companies and authorities that develop, offer, or use algorithmic systems should be required to carry out regular impact assessments and standardized, independent audits to check these systems for discriminatory risks.
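One widely used audit check, not specific to the FINDHR project, is the ‘four-fifths rule’ from US employment guidelines: if one group’s selection rate falls below 80% of another’s, the system warrants closer review. A minimal sketch with hypothetical outcomes:

```python
# A minimal sketch of one common audit check: the "four-fifths rule", which
# compares selection rates between groups (data and threshold use hypothetical
# example values; this is not the FINDHR methodology).
def selection_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes (True = invited to interview).
men = [True] * 48 + [False] * 52      # 48% selected
women = [True] * 30 + [False] * 70    # 30% selected

ratio = disparate_impact_ratio(men, women)
print(f"ratio: {ratio:.2f}")          # 0.62 – below the common 0.8 threshold
if ratio < 0.8:
    print("potential adverse impact: flag the system for closer review")
```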



