Project "HUMAN"

People must ensure that AI serves the people

When the police use algorithms to predict the likelihood that offenders will reoffend, when employers use AI to pre-sort job applications, or when an AI chatbot creates media content: who is affected by these algorithmically generated decisions, recommendations, and content? And how can those affected be involved in order to defend their rights and interests?

Dr. Angela Müller
Executive Director AlgorithmWatch CH | Executive Board Member AlgorithmWatch

Ongoing project
Project duration: January 2024 to June 2025

In the joint project HUMAN, the civil society organizations Intersections and AlgorithmWatch CH are mapping out a toolbox to align ethical impact assessments of algorithms and AI with the perspectives of those affected. Automated systems need to be reviewed on more than just a technical level. With HUMAN, we aim to put scientifically developed methods into practice to ensure that such systems serve the interests of the people they affect.

In a first step, AlgorithmWatch's impact assessment tool for automated decision-making systems is being revised. The revision will not only incorporate the latest developments and current research; the tool will also be made available digitally and easier to use in practice. Intersections is creating panels and innovative spaces to include diverse perspectives.

Here you can find the current status of our Impact Assessment Tool (currently only available in German). The tool is freely accessible. As HUMAN is an ongoing project, the tool will be revised and updated as necessary. If you would like to use the tool, please note the data protection information at the end of the page linked above.

In a second step, two public administration practitioners and a media company will test the revised tool in two real-life applications. The experience gained from these test applications will continuously feed into the tool's further development in order to make it as accessible, practical, and helpful as possible.

The HUMAN project is designed to highlight that the people and institutions using algorithms and AI have ethical obligations. In addition to political decision-makers, who must set the regulatory conditions, they must ensure that their systems are in accordance with fundamental human rights and collective interests, and must therefore assess their systems' various impacts in order to recognize risks and take countermeasures.

The project started in January 2024 and will continue until June 2025, supported by an Advisory Lab.

HUMAN is sponsored by the Mercator Foundation Switzerland.

Project partner: Intersections
