Algorithmic systems – often referred to by the buzzword Artificial Intelligence (AI) – increasingly pervade our daily lives. They are used to detect social benefits fraud, to surveil people at the workplace, or to predict parolees’ risk of reoffending. These systems often not only rest on shaky scientific grounds but can be used in ways that infringe people’s basic rights, such as the rights to non-discrimination, freedom of expression, privacy, or access to justice; they can undermine foundational democratic principles; and, through their opaque nature and the lack of accountability mechanisms, they can be in tension with the rule of law. Against this background and in light of its mandate, the Council of Europe has recognized the need for states to govern the development and use of AI systems. To this end, its member states (including Switzerland) and selected other interested states, such as the US or Japan, are negotiating a Convention on AI. Civil society organizations like AlgorithmWatch, experts, and companies are participating in the negotiations as observers.
The Council of Europe is an international organization founded in 1949 with the task of upholding human rights, democracy, and the rule of law in Europe. It currently has 46 Member States (27 of which are also members of the European Union (EU)) and is based in Strasbourg. It is not to be confused with the European Council or the Council of the EU, which are both EU bodies, unlike the Council of Europe. In 1950, it drafted the European Convention on Human Rights (ECHR), the ratification of which is still a condition for new members to join. The Council of Europe hosts the European Court of Human Rights, which oversees the implementation of the ECHR, and promotes human rights through a range of additional measures, including international conventions such as the Convention on Cybercrime or Convention 108 on data protection.
What would this mean? How would the outcome of the Council of Europe’s work protect our rights, our democracy, and the principles of the rule of law?
As of today, the Council of Europe’s plan is to create an international ‘Convention’ or a ‘Framework Convention’, both of which would be international treaties that are legally binding under international law on the states that sign on to them. States would thus be free to sign or not – but if they do, they legally commit themselves to comply with the treaty. Signature would also be open to non-member states, which is why states like the US or Israel are participating in the current negotiations.
Such a Convention would contain a range of obligations for states to make sure that human rights are respected in the development and use of AI systems. It would not be limited to one specific field of application but would contain horizontal rules (which could be complemented by additional sectoral regulations in the future). Signatory states would be required to implement the provisions at the domestic level, i.e., by introducing domestic measures and laws. Affected individuals would thus be protected by these domestic safeguards (provided their state has signed on) and could seek legal remedies at the domestic level. In addition, the Convention would likely require states to create a national supervisory authority to oversee the implementation of the Convention’s obligations.
While a monitoring procedure at the Council of Europe level is typically established for (Framework) Conventions, whether there will be one and what exactly it would look like is currently being negotiated. What is already clear: it would certainly not include the possibility for individuals to claim violations of the new AI Convention directly before the European Court of Human Rights, whose mandate is limited to the European Convention on Human Rights (ECHR). However, individuals could still lodge complaints about a violation of their ECHR rights in relation to AI systems (provided that national courts have rejected their complaint). The Strasbourg Court would then likely take into account the principles enshrined in the specific legal instrument on AI.
What has happened so far?
- 2019 - 2021: The Ad hoc Committee on Artificial Intelligence (CAHAI) prepares the negotiations. It publishes a “Feasibility Study” on a legal framework on AI and adopts, in December 2021, its final recommendations on “Possible Elements of a Legal Framework on Artificial Intelligence Based on the Council of Europe’s Standards on Human Rights, Democracy and the Rule of Law”.
- April 2022: The new Committee on Artificial Intelligence (CAI) holds its inaugural meeting in Rome. Its work will be informed by CAHAI’s final recommendations, but CAI is specifically mandated to negotiate an “appropriate legal instrument” – see its Terms of Reference.
- June 2022: The Council of Europe decides that the negotiations should aim at a legally binding Convention or Framework Convention.
- September 2022: Second plenary meeting of CAI.
- January 2023: Third plenary meeting of CAI. The Committee decides to split its meetings into meetings of the plenary and of the drafting group. The latter is charged with drafting proposals for the Convention’s text. Civil society organizations and other observers are excluded from participating in it.
What are the next steps?
The next plenary meetings are planned for May/June and September 2023. At that point, states should agree on the text of the Convention. This official timeline, however, appears increasingly unrealistic, given that the EU and its 27 Member States are unlikely to agree to a Convention in the Council of Europe as long as they do not know the exact content of the EU’s AI Act (which is being negotiated in parallel in Brussels). They fear that the two laws would otherwise not be compatible. An agreement on the Convention on AI in Strasbourg before the end of 2023 would thus come as a surprise.
How does AlgorithmWatch contribute?
- AlgorithmWatch is participating as an active and official observer organization in the CAI negotiations. Before that, we actively contributed to CAHAI as an official observer in 2020 and 2021. The aim of our contribution is to ensure that the voice of civil society is heard in the CAI negotiations – and to fight for a legal instrument on AI systems that is truly oriented towards the Council of Europe’s mandate: the protection of our most basic rights, of our democracies, and of the rule of law.
- January 2023: On the occasion of Data Protection Day, we call for an effective Convention on AI.
- October 2022: Jointly with our partners, we urge the EU not to delay the process towards a Convention on AI within the Council of Europe, given the two organizations’ distinct mandates.
- April 2022: At the opening session of CAI, AlgorithmWatch and its civil society partners reiterate their core demands.
- December 2021: At the end of CAHAI’s mandate and after two years of intense work, we are concerned that CAHAI’s recommendations fall short of what is needed to ensure full respect for and protection of human rights, democracy, and the rule of law.
Read more on our policy & advocacy work on the Council of Europe.