Automated Decision-Making Systems in the Public Sector – Some Recommendations

When using automated decision-making systems (ADM systems) in the public sector, authorities act in a unique context and bear special responsibilities towards the people affected. Against this background, the use of ADM systems by public administrations should be subject to stringent transparency mechanisms – including public registers and mandatory impact assessments.

Dr. Angela Müller
Executive Director AlgorithmWatch CH | Head of Policy & Advocacy
Estelle Pannatier
Policy & Advocacy Manager

The automation of decision-making procedures and services in public administrations is increasing significantly. Administrations regard it as an opportunity to increase efficiency, streamline processes, and expedite mass and routine services. Examples range from chatbots and the automated processing of tax declarations or requests for social benefits to algorithmic systems for detecting risks of welfare fraud, profiling unemployed people, predictive policing, or assessing the recidivism risk of parolees – all of these systems are increasingly being deployed in Europe and beyond.

Undoubtedly, the use of ADM systems offers great potential for public administrations. At the same time, it comes with substantial risks – especially if such systems are not deployed carefully. It can limit people’s access to participation and to public goods and services, and it can infringe on fundamental rights.

While these risks are not limited to the public sector (similar risks also arise in the private sphere), the public sector is unique. Individuals cannot choose the provider of public services but are inescapably subject to a particular administration. Authorities have access to sensitive personal data, and their decisions often have far-reaching consequences for individuals. This has long been reflected in the distinct legal preconditions public authorities are subject to, such as the principle of legality or the duty to comply with fundamental rights.

When deploying ADM systems in the public sector, this unique setting must be considered – autonomy, justice and fairness, harm prevention, and beneficence should be the ultimate benchmarks for the automation of administrative procedures.

Against this background, the use of ADM systems by public authorities must be subject to rigorous requirements, ensuring transparency and accountability vis-à-vis those affected and enabling individual as well as democratic control.

A central challenge is the opacity of the ADM systems in use. Without adequate transparency measures, they remain black boxes – to the administration and its personnel, to those affected, and to society as a whole – inhibiting critical contestation from the outset. Transparency is thus a necessary – though not yet sufficient – first step towards a responsible use of ADM systems. It is a prerequisite not only for affected individuals to defend themselves but also for an evidence-based public debate on, and scrutiny of, the impact of ADM systems. The following policy recommendations focus on mechanisms to establish transparency on the use of ADM systems in the public sector.

Policy Recommendations

1. Establish public registers for ADM systems used within the public sector

Without the ability to know whether ADM systems are being deployed at all, any effort to reconcile the use of ADM systems with fundamental rights is doomed to fail. We therefore call for legally mandated public registers of all ADM systems used by public administrations – at the communal, regional, national, and supranational levels.

These registers should come with a legal obligation for those responsible for the ADM system to disclose information on the system’s underlying model, its developers and deployers, the purpose of its use, and the results of the algorithmic impact assessment – that is, where applicable, the transparency report (see below).

In specific contexts, legitimate interests – such as the protection of personal data – might speak against giving the public full access to transparency reports. In such cases, however, transparency must be provided vis-à-vis specific fora, e.g., the appropriate oversight institution. This, in turn, must be publicly communicated in the register.

The information included in the register must be made available in an easily readable and accessible manner, including as structured digital data based on a standardized protocol. Importantly, such registers also enable independent review by giving external researchers (academia, civil society, and journalists) access to relevant data on the use of ADM systems by public authorities. This contributes to public scrutiny and to an evidence-based debate on the automation of the public sector – a prerequisite for guaranteeing democratic control and accountability.
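To illustrate what such structured, machine-readable register data could look like, here is a minimal sketch of a register entry in Python. The field names, example values, and JSON serialization are assumptions made for illustration – they simply mirror the disclosure items listed above and are not a prescribed standard.

```python
# Illustrative sketch of a machine-readable register entry for an ADM system.
# Field names and format are assumptions for illustration, not a prescribed standard.
from dataclasses import dataclass, asdict
from typing import Optional
import json


@dataclass
class ADMRegisterEntry:
    system_name: str                 # name of the ADM system
    deploying_authority: str         # public body responsible for the deployment
    developer: str                   # organization that developed the system
    purpose: str                     # purpose of use, in plain language
    model_description: str           # underlying model / decision logic, described for a lay audience
    impact_assessment_summary: str   # key results of the algorithmic impact assessment
    transparency_report_url: Optional[str] = None  # public transparency report, if available
    restricted_access_body: Optional[str] = None   # oversight body with access if the report is not public


entry = ADMRegisterEntry(
    system_name="Benefit claim triage (hypothetical example)",
    deploying_authority="Municipal social services (example)",
    developer="Example vendor",
    purpose="Prioritize incoming requests for social benefits",
    model_description="Rule-based scoring of claim completeness",
    impact_assessment_summary="Stage-one triage raised risk signals; see transparency report",
    transparency_report_url=None,
    restricted_access_body="Competent oversight institution (example)",
)

# Publish as structured digital data, e.g. JSON, so that researchers,
# civil society, and journalists can analyze it programmatically.
print(json.dumps(asdict(entry), indent=2, ensure_ascii=False))
```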

2. Systematically assess the impact of every ADM system used in the public sector

In light of the consequential effects ADM systems can have, public authorities should be obliged to systematically evaluate and make transparent the potential risks of any system they plan to deploy. These risks cannot be determined in a generalized manner but only through a case-by-case analysis. Thus, it should be mandatory for public authorities to conduct an impact assessment prior to and during the deployment of any ADM system.

In order to make a difference in practice, ethical reflection must be translated into ready-to-use tools, providing authorities with the means for conducting such an analysis. To this end, AlgorithmWatch Switzerland has developed a practical, user-friendly, and concrete impact assessment tool, enabling the evaluation of an ADM system throughout its entire life cycle.

This two-stage impact assessment procedure creates transparency about the potential risks that come with the use of a system, based on the ethical principles outlined above. At the first stage, it enables a triage, indicating whether a specific system must be subject to additional transparency requirements. This stage can be implemented by authorities in a non-bureaucratic way. If risk signals appear, public authorities must, at the second stage, provide a transparency report disclosing the risks and the measures taken to mitigate them. The more risk signals appear at the first stage, the more comprehensive the transparency requirements become at the second stage – and thus the more demanding it becomes for a public authority to deploy the system.
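As a rough illustration of this escalation logic, the sketch below counts the risk signals raised by a stage-one triage and derives the scope of the stage-two transparency requirements. The signal categories, questions, and thresholds are invented for illustration only; the actual impact assessment tool defines its own criteria.

```python
# Illustrative sketch of the two-stage logic: the more risk signals the stage-one
# triage raises, the more comprehensive the stage-two transparency requirements.
# Signal categories and thresholds are invented for illustration only.

STAGE_ONE_QUESTIONS = {
    "affects_access_to_services": "Does the system affect access to public services or benefits?",
    "processes_sensitive_data": "Does the system process sensitive personal data?",
    "fully_automated_decision": "Are decisions taken without meaningful human review?",
    "affects_vulnerable_groups": "Are vulnerable groups particularly affected?",
}


def stage_one_triage(answers: dict[str, bool]) -> list[str]:
    """Return the risk signals raised in the stage-one triage."""
    return [key for key, raised in answers.items() if raised]


def stage_two_requirements(risk_signals: list[str]) -> list[str]:
    """Scale the stage-two transparency requirements with the number of risk signals."""
    if not risk_signals:
        return []  # no risk signals: no additional transparency requirements
    requirements = ["Publish a transparency report disclosing the risks and mitigation measures"]
    if len(risk_signals) >= 2:
        requirements.append("Document mitigation measures for each identified risk in detail")
    if len(risk_signals) >= 3:
        requirements.append("Subject the system to periodic review throughout its life cycle")
    return requirements


# Example run with hypothetical answers for a single system.
answers = {
    "affects_access_to_services": True,
    "processes_sensitive_data": True,
    "fully_automated_decision": False,
    "affects_vulnerable_groups": False,
}
for requirement in stage_two_requirements(stage_one_triage(answers)):
    print(requirement)
```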

3. Transparency can only be the first step

In addition to transparency measures, we need frameworks to ensure individual and democratic control over the use of ADM systems as well as accountability towards people affected and towards the wider public.

At the level of individuals, this includes – but is not limited to – giving people access to relevant information when they are affected by the use of an ADM system; ensuring accessible, affordable, and effective remedies; and considering opt-out mechanisms under certain circumstances. At the societal level, independent centers of expertise and reliable data-access frameworks for public-interest research would greatly contribute to enhancing public scrutiny and control, and thus to increasing accountability on the part of public authorities. Lastly, as a society, we will need to discuss where to draw the line when it comes to automation: when the use of ADM systems cannot be made compatible with fundamental rights and democratic principles, it should be banned.

Ultimately, by introducing governance frameworks that ensure transparency, control, and accountability, we work towards a use of ADM systems in the public sector that actually benefits – rather than harms – individuals and society.

Read more on our policy & advocacy work on impact assessment for ADM systems.
