Publication

Using AI responsibly: Our Impact Assessment Tool

Algorithmic and AI-based systems can have ethically relevant effects on people. Those who use them therefore bear a special responsibility. The impact assessment tool developed by AlgorithmWatch supports and empowers users to fulfil this responsibility.

Recent years have shown what can happen when algorithms and AI systems are not used responsibly: people have been wrongly suspected of social security fraud, discriminated against when looking for work, or deemed not creditworthy based on their skin colour or gender identity. Irresponsibly deployed algorithmic systems are therefore lose-lose scenarios. Not only can they harm the interests and fundamental rights of the individuals concerned, they also work against the organization that uses them to solve a problem and achieve its goals.

In contrast, responsibly deployed systems have the potential to be win-win. Authorities or companies can create transparency, enable supervision and control, and assign responsibilities. In doing so, they take essential steps to ensure that the ethical principles of autonomy and justice are upheld and fundamental rights are protected. Ultimately, they ensure that a system benefits all parties involved and does not harm them.

Impact Assessment Tool

The Impact Assessment Tool developed by AlgorithmWatch helps organizations to fulfil their ethical responsibility when using algorithms and AI. It guides them through two checklists to answer the following questions, develop a common understanding and make this transparent:

1. Triage: What ethical implications could the system have in the context in which it is used?

➡️ This triage enables users to determine whether a more comprehensive impact assessment needs to be carried out for the algorithmic system in question and, if so, which questions need to be answered at stage 2.

2. Transparency report: In a second step, the questions identified as relevant through the triage checklist are answered.

➡️ Compiling these answers results in a transparency report that makes goals and processes explicit, creates a common understanding, enables control, and assigns responsibilities.

3. Transparency – as an opportunity, not a threat: As a third step, we recommend publishing the resulting transparency report – to internal stakeholders within the organization and, optimally, to the public as well. This transparency further contributes to internal and external oversight, ensures accountability, and underscores the responsible use of algorithms and AI.

➡️ To the Impact Assessment Tool

AlgorithmWatch provides the Impact Assessment Tool for free use at the link above. If you would like assistance or have any questions, please contact us! We would also appreciate your support in the form of donations.

HUMAN – a comprehensive approach

As part of a joint project, we combined AlgorithmWatch's Impact Assessment Method with a Panel Assessment Method developed by Intersections. The panel assessment makes it possible to identify and involve relevant stakeholders and to take their needs into account. This includes both people affected by the system and employees who use it. The project resulted in HUMAN – a comprehensive approach that makes it possible to develop a responsible AI strategy based on a specific use case, or to test such a strategy using a specific application.

The impact assessment and panel assessment components of HUMAN are available for free use. Anyone who would like expert support in this area should contact Intersections and AlgorithmWatch CH.

➡️ More about the project HUMAN

Recommendations

An impact assessment is an important step towards the responsible use of algorithms and AI. At the same time, other pieces of the puzzle are needed:

Here are our comprehensive recommendations for the use of algorithms and AI that truly benefit people and society: