What is algorithmic discrimination?

Discrimination by algorithms and Artificial Intelligence (AI): we give an overview of the topic.

Estelle Pannatier
Policy & Advocacy Manager
Moira Daviet
Researcher

Algorithmic systems can make decisions that discriminate against people, for example when allocating social benefits or screening job applications. When a system’s decisions are based on data that contains biases, those biases are carried over into the decisions unless appropriate remedies are put in place. There are other sources of discrimination as well: any assumption made during the development of a model, or the purpose and the manner in which a system is used. This is the case, for example, if a facial recognition system is primarily intended to identify Black people, or if a system measures employees’ performance without taking into account the special needs of people with disabilities. Automated decisions therefore do not necessarily deliver more “neutral” or “objective” results. The systems themselves are not neutral: people, with their particular assumptions and interests, shape how they are developed and used.
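How bias in historical data carries over into automated decisions can be illustrated with a deliberately simplified sketch. The data, names, and decision rule below are entirely invented for illustration: a toy “model” that learns approval rates from past human decisions on loan applications will reproduce a historical disparity between two neighborhoods, even for equally qualified applicants.

```python
# Hypothetical, simplified illustration: a model trained on biased historical
# decisions reproduces that bias. All data and names below are invented.

# Each record: (qualified, neighborhood, approved) from past human decisions.
# Equally qualified applicants from neighborhood "B" were approved less often.
history = [
    (True, "A", True), (True, "A", True), (True, "A", True), (True, "A", False),
    (True, "B", True), (True, "B", False), (True, "B", False), (True, "B", False),
]

def train(records):
    """'Learn' an approval rate for each (qualified, neighborhood) group."""
    counts = {}
    for qualified, hood, approved in records:
        key = (qualified, hood)
        total, yes = counts.get(key, (0, 0))
        counts[key] = (total + 1, yes + int(approved))
    return {key: yes / total for key, (total, yes) in counts.items()}

def predict(model, qualified, hood):
    """Approve only if the historical approval rate for this group exceeds 50%."""
    return model[(qualified, hood)] > 0.5

model = train(history)

# Two equally qualified applicants receive different outcomes, purely because
# the historical decisions treated their neighborhoods differently.
print(predict(model, True, "A"))  # True: the applicant from "A" is approved
print(predict(model, True, "B"))  # False: the applicant from "B" is rejected
```

No one programmed the rule “reject applicants from neighborhood B”; the disparity enters solely through the training data, which is why such discrimination is hard to spot from the outside.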

Algorithmic systems often discriminate against people who are already disadvantaged, but in principle anyone can be affected. Black people are disadvantaged by systems that inform parole decisions; people from poorer neighborhoods are classified as higher credit risks; women can be screened out by algorithms in automated application processes because of their gender, despite being qualified for the job. The consequences of such discrimination often go unnoticed, since it is usually not known how automated decisions come about. For this reason, those affected are often unable to defend themselves.