Explainer

How and why algorithms discriminate

Automated decision-making systems can contain hidden discriminatory biases. We explain the causes, the possible consequences, and why existing laws do not provide sufficient protection against algorithmic discrimination.

Estelle Pannatier
Policy & Advocacy Manager
Moira Daviet
Researcher

Automated decisions are being made throughout our society. Algorithmic systems process tax returns, evaluate job applications, make recommendations, predict crimes or the chances of refugees being integrated into the labor market. The decisions made by such systems have a direct impact on people’s lives. Some people are discriminated against in the process.

What are the causes of algorithmic discrimination?

Algorithmic systems are neither neutral nor objective. They reproduce patterns of discrimination that already exist in society. The data used for machine learning within automated decision-making systems reflects social conditions: certain groups may be over- or underrepresented, meaning that there is more or less data on a particular group than its share of the people affected by the decisions would warrant.
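To make the representation problem more tangible, here is a minimal, hypothetical sketch in Python (the groups, numbers, and thresholds are all invented): a simple decision rule that minimizes the overall error rate ends up fitting the overrepresented group and makes far more mistakes for the underrepresented one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: group A is heavily overrepresented (95%),
# and the relationship between the feature and the outcome differs by group.
n_a, n_b = 9500, 500
x_a = rng.normal(0.0, 1.0, n_a)
y_a = (x_a > 0.0).astype(int)          # for group A, the true cutoff is at 0.0
x_b = rng.normal(0.0, 1.0, n_b)
y_b = (x_b > 1.0).astype(int)          # for group B, the true cutoff is at 1.0

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

# Pick the single decision threshold that minimizes the *overall* error rate --
# roughly what a simple model trained on the pooled data would do.
thresholds = np.linspace(-2, 2, 401)
errors = [np.mean((x > t).astype(int) != y) for t in thresholds]
best_t = thresholds[int(np.argmin(errors))]

err_a = np.mean((x_a > best_t).astype(int) != y_a)
err_b = np.mean((x_b > best_t).astype(int) != y_b)
print(f"chosen threshold: {best_t:.2f}")
print(f"error rate, majority group A: {err_a:.1%}")
print(f"error rate, minority group B: {err_b:.1%}")
```

Because group B contributes only five percent of the training data in this toy setup, its different cutoff barely moves the chosen threshold, so its error rate ends up several times higher than group A's.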

The design of a system is also influenced by the people involved in its development. From the earliest stages, systems are shaped by the developers’ assumptions, beliefs, perspectives, and biases. This can affect the systems’ output, i.e. the automated decisions. Systems can also discriminate against people when they are applied, for example when facial recognition systems are used to identify Black people in a crowd.

If a system’s decisions affect many people, an equally large number of people is potentially affected by discriminatory decisions, since the cause of discrimination usually lies in the properties of the system itself. This problem, which is characteristic of algorithmic discrimination, is known as the scaling effect. Existing patterns of discrimination can also be reinforced by so-called feedback loops. For example, data from the past is used on the assumption that the future will not differ from the past. In an algorithmic system for predictive policing, this data can lead to more patrols being carried out in certain neighborhoods than in others. Increased surveillance, however, also increases the likelihood that the police will uncover more crimes in the respective area. The system’s predictions thus become self-fulfilling prophecies, and they are reinforced when the system is used over a longer period of time and makes predictions based on its own earlier predictions.
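The feedback-loop dynamic can be illustrated with a deliberately simplified simulation, sketched below in Python. It does not model any real system: the two districts, the 70/30 patrol split, and all other numbers are invented. Both districts have exactly the same underlying level of crime, yet a small, arbitrary difference in past records keeps directing more patrols, and therefore more recorded crimes, to the same district.

```python
# Deliberately simplified feedback-loop sketch; all numbers are invented.
true_crime = [100.0, 100.0]        # identical underlying crime in both districts
recorded_history = [55.0, 45.0]    # small, arbitrary difference in past records

for year in range(1, 6):
    # The system ranks districts by recorded crime and concentrates patrols
    # on the apparent "hotspot" (a hypothetical 70/30 split).
    if recorded_history[0] >= recorded_history[1]:
        patrol_share = [0.7, 0.3]
    else:
        patrol_share = [0.3, 0.7]

    # Crimes are only recorded where officers are present, so recorded crime
    # follows patrol presence rather than actual differences in crime.
    new_records = [t * p for t, p in zip(true_crime, patrol_share)]
    recorded_history = [h + n for h, n in zip(recorded_history, new_records)]

    print(f"year {year}: patrol split {patrol_share}, "
          f"cumulative records {[round(r, 1) for r in recorded_history]}")
```

In this toy setup, the gap in recorded crime widens every year even though the underlying crime levels never differ: the prediction produces the very data that appears to confirm it.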

When algorithms are used in areas where major power imbalances exist, the risk of unjust decisions and discrimination is particularly high – be it power imbalances between applicants and companies, employees and employers, suspects and police, refugees and border control authorities, welfare recipients and public authorities, pupils and teachers, or individual users and social media platforms. In such situations, one side depends on the decisions of the other. If the more powerful side makes decisions automatically, those affected are usually unaware of this. They are at the mercy of such decisions and, due to the power imbalance, can hardly defend themselves against them.

Algorithmic discrimination case studies: Child benefits, employment, risk scores, criminal prosecution

States, too, use algorithmic systems that can discriminate against people. In the Netherlands, it came to light in 2019 that the Dutch tax authorities had used a self-learning algorithm to create risk profiles in order to detect child benefit fraud. A mere suspicion based on the system’s risk indicators was enough for the authorities to penalize families for fraud. Tens of thousands of families – often with a low income or members of “ethnic minorities” with dual citizenship – had to pay back child benefit supplements they had received over the years. Many fell into debt, and a majority of these families were driven into poverty. As a result, more than one thousand children had to be placed in foster care. The Dutch Data Protection Authority later came to the conclusion that the system’s processing of the data was discriminatory.

Algorithmic discrimination at the workplace can start with job advertisements. Research by AlgorithmWatch showed that gender stereotypes determined how Facebook displayed job ads: Ads for truck drivers were shown much more frequently to men, ads for childcare workers much more frequently to women. Algorithms also sort and select CVs or give instructions to employees. When the algorithm becomes the “boss,” decisions about promotions or layoffs can become dubious. A general lack of transparency is a breeding ground for discriminatory decisions.

Credit agencies such as the German Schufa provide information on how “creditworthy” people are. They calculate risk scores that banks and similar companies use to decide whether to grant someone a loan or sign a contract with them. Such decisions can have a huge impact on people, e.g. if they are unjustifiably denied the opportunity to take out a loan or insurance. Yet no one knows how such risk scores are calculated, nor whether the decisions based on them are automated. This has great potential for discrimination and is also legally problematic: European data protection law states that decisions with legal consequences for people may not be made in a purely automated manner. The European Court of Justice recently ruled against Schufa for this reason.

In criminal prosecution, the police and courts use algorithmic systems to calculate the probability of offenders reoffending. The data fed into algorithmic systems for this purpose usually provides only a distorted representation of reality. For example, if the number of an offender’s general contacts with the police is used as an indicator of the likelihood of reoffending (in the sense of: the more contact an offender has had with the police, the more likely they are presumed to reoffend), this can lead to discrimination against Black people: Some police officers practice “racial profiling” and check Black people much more frequently than White people. If Black people are stopped more often without justification, this increases the number of contacts they have with the police. Recidivism scores form the basis for the restrictions imposed on offenders after their release from prison. If Black people score higher due to “racial profiling” and this score leads to stricter police or court restrictions for them, they are victims of discrimination.
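A hypothetical numerical sketch in Python (with invented numbers, not data from any real system) shows how this mechanism works: two groups behave identically, but one group is stopped three times as often, and a score that counts police contacts as evidence of risk ends up systematically higher for that group.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: two groups with IDENTICAL underlying behavior,
# but group 1 is stopped by police three times as often as group 0.
n = 5000
group = rng.integers(0, 2, n)                 # group membership, 0 or 1
underlying_behavior = rng.poisson(1.0, n)     # same distribution for everyone

stop_rate = np.where(group == 1, 3.0, 1.0)    # invented 3x difference in stops
police_contacts = rng.poisson(underlying_behavior * 0.5 + stop_rate)

# A naive "risk score" that treats police contacts as evidence of risk.
risk_score = 10 * police_contacts

print("mean score, group 0:", risk_score[group == 0].mean().round(1))
print("mean score, group 1:", risk_score[group == 1].mean().round(1))
```

Although the underlying behavior is drawn from the same distribution for both groups, the group that is stopped more often receives a much higher average score – the biased input data, not actual behavior, drives the difference.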

Swiss law on discrimination

In Switzerland, discrimination is defined under the principle of non-discrimination in the Federal Constitution (Art. 8 para. 2 BV) as unequal treatment of persons on the basis of a protected characteristic without objective justification. Unequal treatment exists if a person or group is treated less favorably than another person or group, even though they are in the same or a comparable situation. Article 8, paragraph 2 of the Federal Constitution lists biological characteristics (“race,” gender, age, physical, mental or psychological disability) as well as cultural or other characteristics (origin, language, social status, way of life, religious, ideological, or political convictions). This list is deliberately not exhaustive, so that new groups subject to systematic exclusion can be recognized as new patterns of discrimination emerge. Stigmatized social groups are generally protected from discrimination. Two further paragraphs of article 8 call for the equality of women and men in all situations (Art. 8 para. 3 BV) and for measures to combat discrimination against people with disabilities (Art. 8 para. 4 BV).

This prohibition of discrimination only applies to state actors; there is no general ban on discrimination between private parties in Switzerland. The constitutional prohibition does impose a fundamental duty on the state to prevent discrimination between private individuals (Art. 8 para. 2 in conjunction with Art. 35 para. 3 BV). However, there is no general law that prohibits discrimination by private individuals. As a result, there are no legal means to take action against discrimination by private actors’ algorithmic systems.

The existing protection against discrimination is therefore not sufficient in cases of algorithmic discrimination. In addition to the scaling effect and feedback loops, the use of so-called proxy variables also exposes legal gaps. Automated decision-making systems can base their decisions on characteristics that are closely related to protected characteristics. For example, a system that screens job applications may not reject people because of their age, as age is a protected characteristic under the AGG. Still, the system could use the number of years of previous work experience as a proxy variable in order to identify older applicants and exclude them.
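How a proxy variable can reintroduce a protected characteristic can be sketched with a small, hypothetical Python example (all cutoffs and numbers are invented): age is never handed to the system, but a screening rule based on years of work experience rejects older applicants almost exclusively.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical applicant pool: years of work experience is strongly tied to
# age, even though age itself is never given to the system.
n = 10000
age = rng.integers(22, 65, n)
experience = np.clip(age - 22 - rng.integers(0, 5, n), 0, None)

# The protected attribute "age" is dropped ...
features = {"experience": experience}          # ... but the proxy stays in.

# An invented screening rule that "prefers candidates early in their careers":
rejected = features["experience"] > 25

print("rejection rate, under 50:", rejected[age < 50].mean().round(2))
print("rejection rate, 50 and over:", rejected[age >= 50].mean().round(2))
```

Dropping the protected attribute from the input data therefore does not prevent discrimination as long as strongly correlated features remain available to the system.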

Algorithmic discrimination poses new challenges when it comes to enforcing a ban on discrimination. It is difficult to identify the people affected since they usually do not know that they are being discriminated against by automated decisions.
