Governance of Artificial Intelligence in the Council of Europe

The Council of Europe is in charge of upholding human rights, democracy, and the rule of law in Europe. Its member states are currently working on legal frameworks for the development and use of AI systems.

Algorithmic systems – often referred to by the buzzword Artificial Intelligence (AI) – increasingly pervade our daily lives. They are used to detect social benefits fraud, to surveil people at the workplace, or to predict parolees’ risk of reoffending. Often, these systems not only rest on shaky scientific grounds but can be used in ways that infringe people’s basic rights – such as non-discrimination, freedom of expression, privacy, or access to justice – can undermine foundational democratic principles, and, through their opacity and lack of accountability mechanisms, can be in tension with the rule of law. Against this background, and in light of its mandate, the Council of Europe has recognized the need for states to govern the development and use of AI systems.

The Council of Europe is an international organization founded in 1949 with the task of upholding human rights, democracy, and the rule of law in Europe. It currently has 46 Member States (27 of which are also members of the European Union (EU)) and is based in Strasbourg. It is not to be confused with the European Council and the Council of the EU, which – unlike the Council of Europe – are both EU bodies. In 1950, it drafted the European Convention on Human Rights (ECHR), the ratification of which is still a condition for new members to join. The Council of Europe hosts the European Court of Human Rights, which oversees the implementation of the ECHR, and promotes human rights through a range of additional measures, including international conventions such as the Convention on Cybercrime or Convention 108 on data protection.

What has happened so far?

  • September 2019: The Committee of Ministers of the Council of Europe sets up the Ad hoc Committee on Artificial Intelligence (CAHAI), tasked with the mandate of “examining the feasibility and potential elements on the basis of broad multi-stakeholder consultations, of a legal framework for the development, design and application of AI systems, based on the Council’s standards of human rights, democracy and the rule of law”.
    • A “legal framework” could range from non-binding recommendations and guidelines to legally binding international conventions that are open for signature not only by Member States but also by non-Member States.
  • December 2020: CAHAI issues its “Feasibility Study” on a legal framework on AI.
  • December 2021: CAHAI’s mandate ends. In the final plenary, its final recommendations on “Possible Elements of a Legal Framework on Artificial Intelligence Based on the Council of Europe’s Standards on Human Rights, Democracy and the Rule of Law” are adopted.
  • April 2022: The Committee on Artificial Intelligence (CAI) holds its inaugural meeting in Rome. Its work will be based on and informed by CAHAI’s final recommendations, but CAI is specifically tasked with the mandate to negotiate an “appropriate legal instrument on the development, design, and application of artificial intelligence systems based on the Council of Europe’s standards on human rights, democracy and the rule of law, and conducive to innovation, in accordance with the relevant decisions of the Committee of Ministers” by November 2023 – see Terms of Reference.
  • May 2022: At its meeting in Turin, the Committee of Ministers of the Council of Europe welcomes CAI’s recommendation to create a legally binding instrument on AI. It confirms CAI’s mandate to negotiate an “appropriate instrument”, yet without specifying what form this could take – that is, without stating whether this should be a legally binding convention or framework convention.

How will the outcome of the Council of Europe’s work protect our rights, our democracy, and the rule of law?

CAHAI and CAI clearly recommend that this ‘legal framework’ should take the form of a legally binding instrument. This could mean an international ‘Convention’ or a ‘Framework Convention’, both of which are binding under international law on the states that sign on to them and would comprehensively regulate the development and use of AI systems. It would thus not be limited to a specific sector but contain horizontal requirements – though it could be complemented by additional binding or non-binding sectoral instruments.

Such a (Framework) Convention would subject states to a set of requirements that aim to ensure that if AI systems are developed and used, human rights are not violated. Signatory states would then need to implement these requirements at the national level by introducing corresponding domestic laws, policies, and safeguards. Affected persons would thus be protected against discriminatory, unjust, and harmful effects of AI systems by domestic safeguards and laws, and could seek legal remedies at the domestic level. While a monitoring procedure is typically established at Council of Europe level for (Framework) Conventions, what this will look like exactly will be negotiated by CAI. It would certainly not include the possibility for individuals to claim violations of the new AI instrument directly before the European Court of Human Rights, whose mandate is limited to the ECHR. However, individuals could still lodge complaints about a violation of their ECHR rights in relation to AI systems – and the Court would then likely consider the principles enshrined in the specific legal instrument on AI.

How do we contribute?

  • AlgorithmWatch participated in CAHAI as an official observer in 2020 and 2021. We were active members of both its Policy Development Group (PDG) and its Legal Framework Group (LFG).
  • At the end of CAHAI’s mandate and after two years of intense work, we are afraid that CAHAI’s recommendations fall short of what is needed to ensure full respect for and protection of human rights, democracy, and the rule of law. Read our joint statement, published with civil society partners, calling on Member States to create an AI governance framework that is truly oriented at the Council of Europe’s mandate.
  • AlgorithmWatch will continue to actively participate as an official observer in the negotiations in CAI. At its inaugural meeting, we clearly articulated our core demands in collaboration with our main partners.

AlgorithmWatch will continue to ensure that the voice of civil society is heard in the negotiations in CAI – and to fight for a legal instrument on AI systems that is truly oriented at the Council of Europe’s mandate: the protection of our most basic rights, of our democracies, and the rule of law.

Read more on our policy & advocacy work on the Council of Europe.