AlgorithmWatch is a non-profit research and advocacy organization committed to watching, unpacking, and analyzing automated decision-making (ADM) systems and their impact on society.

Position

20 March 2023

Apply now as Communications & Campaigns Manager!

AlgorithmWatch CH is looking for an experienced colleague for communications, PR and campaigning work at our office in Zurich.


France: the new law on the 2024 Olympic and Paralympic Games threatens human rights

France proposed a new law on the 2024 Olympic and Paralympic Games (projet de loi relatif aux jeux Olympiques et Paralympiques de 2024) which would legitimize the use of invasive algorithm-driven video surveillance under the pretext of “securing big events”. This new French law would create a legal basis for scanning public spaces to detect specific suspicious events.

Story

10 March 2023

#instagram

Reels of Fortune: Instagram-shaped memories for a bigger reach

The algorithm used to do it for us, now we do it for the algorithm: Platforms seek data on what people think good memories are. One user tells us how she constructs an end-of-year Recap Reel on Instagram.


Story

23 February 2023

#work

A dollar for your face: Meet the people behind Machine Learning models


3 February 2023

#publicsphere

What does TikTok know about you? Data donations deliver answers!

Companies like Facebook, Instagram, Google, and TikTok often know about the harmful effects of their algorithmic systems and yet continue to prevent independent research on them. Data donations like DataSkop are one of the few ways to investigate opaque algorithms.


AlgorithmWatch CH in Council of Europe Expert Group on AI in public administration

The Committee on Legal Cooperation (CDCJ) of the Council of Europe has established a working group on the topic of administrative law and Artificial Intelligence (AI). Angela Mueller, Head of AlgorithmWatch CH and Head of our Policy & Advocacy team, contributes as an expert to the group.


Story

1 February 2023

#aiact #regulation

What to expect from Europe’s first AI oversight agency

Spain has announced the first national agency for the supervision of Artificial Intelligence. In its current shape, the plan is very industry-friendly and leaves little space for civil society.


Projects

Our research projects take a close look at automated decision-making in specific sectors, ranging from sustainability, the COVID-19 pandemic, and human resources to social media platforms and public discourse. You can also get involved! Engage and contribute, for example with a data donation! Learn more about our projects


16 November 2022

FINDHR: Fair algorithms in personnel selection?

AlgorithmWatch CH is part of the Horizon Europe project “FINDHR”. In this interdisciplinary research project, we address software-related discriminatory effects within recruiting processes by developing methods, tools, and trainings that are designed to avoid discrimination.

Read more

Publications

Read our comprehensive reports, analyses and working papers on the impact and ethical questions of algorithmic decision-making, written in collaboration with our network of researchers and civil society experts. See our publications

10 June 2021

Automated Decision-Making Systems in the Public Sector – An Impact Assessment Tool for Public Authorities

How can we ensure the trustworthy use of automated decision-making systems (ADMS) in public administration? AlgorithmWatch and AlgorithmWatch Switzerland have developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a ready-to-implement framework for the evaluation of specific ADMS by public authorities at different levels.

Read more

Journalistic stories

How does automated decision-making affect our daily lives? Where are these systems applied, and what happens when something goes wrong? Read our journalistic investigations into the current use of ADM systems and their consequences. Read our stories


18 December 2022

Wolt: Couriers’ feelings don’t always match the transparency report

In August, the Finnish delivery service Wolt published its first “algorithmic transparency report”. We asked three couriers about their experiences. They don't always match the report’s contents.

Read more