Press release

Europe’s Approach to AI Regulation: Embracing Big Tech and Security Hardliners

Europe is about to adopt two major regulations on Artificial Intelligence: the EU’s AI Act and the Council of Europe’s Convention on AI. The Federal Council has already announced that it will base its own proposals for AI regulation on them. Yet while both rulebooks were initially meant to turn the tables on Big Tech and to effectively protect people against governments' abuse of AI technology, the interests of tech companies and of governments' security hardliners may win out.

Photo: Council of Europe

Dr. Angela Müller
Executive Director AlgorithmWatch CH | Head of Policy & Advocacy
Estelle Pannatier
Policy & Advocacy Manager

In the Council of Europe’s Committee on AI, gathering in Strasbourg this week, negotiators are discussing options that would allow states to limit the AI treaty’s applicability to public authorities and largely keep private companies out of its scope. Switzerland plays a key role, as it chairs the negotiations. In the EU, the compromise agreement on the AI Act would leave many people in vulnerable situations without reliable protection against government surveillance and control, while at the same time exempting companies from a range of duties.

OpenAI, Google/Alphabet, Microsoft, Amazon, Meta – these are the names benefiting from the big hype around AI sparked in November 2022 by the release of ChatGPT. Shortly after, these were also the names behind loud calls for AI regulation, that is, for legal rules on how this technology is developed and used. Big Tech and their allies tend to justify their calls for legal rules on AI by pointing to unrealistic future scenarios – such as AI one day taking control over humanity.

The Strasbourg Convention on AI: protecting human rights

While this line of argument – AI developers calling for government control of the development and use of AI – may have sounded puzzling at first, it may now turn out to have been a clever and ultimately successful strategy: The Council of Europe – the international organization in Strasbourg that is host to the European Convention on Human Rights – is about to conclude a Convention on AI that seeks to protect human rights, democracy, and the rule of law, with a final negotiation session planned for mid-March. And as the published Draft Convention reveals, it is on the brink of agreeing on an international treaty that regulates AI – but one that would give states great leeway to carve out exemptions for tech companies' development and use of AI.

Mind you, among the negotiating parties are also states that are not members of the Council of Europe, including the US – home to most of the world’s Big Tech companies and with a well-known interest in not interfering with Silicon Valley.

«If AI regulation does not reliably protect us from the risks and harms that the development and use of AI by private companies can bring about, then it opens up a blank check for Big Tech. This is not what effective protection of our rights looks like.»

Angela Müller, Executive Director of AlgorithmWatch CH

In an open letter published today together with more than 30 civil society organizations, AlgorithmWatch calls upon the negotiating states to make sure tech companies won’t go unchecked.

Meanwhile in Brussels – the EU’s AI Act

The European Union’s law to regulate AI – the so-called AI Act – is already one step ahead: it has been agreed upon behind closed doors, and its adoption now seems a mere question of formal approval. While it departs from the AI Convention in that, in principle, it applies to both the public and private sectors, AI Act negotiators also chose to include an array of loopholes for tech companies. Important provisions and civil society wins in the AI Act – such as the requirement to conduct a fundamental rights impact assessment before deploying a high-risk AI system and to register such deployments in a public database – do not apply to private companies. Moreover, providers of AI systems might escape the very core of the AI Act – namely the requirements on high-risk AI systems – simply by declaring that their systems perform only preparatory or narrow procedural tasks.

The AI Act contains safeguards against harms and misuse of AI systems that are important enough not to be sacrificed. However, the same tendency was visible in both regulatory processes:

«Bold statements to keep a tight rein on Big Tech tend to dissolve over the course of political negotiations. Very likely, this is not least because of the millions of euros Big Tech spend on lobbying decision-makers»,

says Angela Müller.

Human rights have lost out

In another respect, the Council of Europe’s Convention on AI and the EU’s AI Act are much more clearly aligned: both rulebooks will likely make sure that everything done with the help of AI under the umbrella of ‘national security’ can continue to fly under the radar – they will simply not apply to such systems.


«Not only private companies but also governments' security advocates have lobbied their interests into Europe’s AI rulebooks. In light of the many ways in which AI systems can be used to affect and violate our rights and our societies' interests – to curate the news we see online or the ads displayed to us, to select job applicants, to surveil migrants, or to determine the presence of police forces in certain areas – this is simply unacceptable.»

Estelle Pannatier, Policy & Advocacy Manager, AlgorithmWatch CH

A small window left

While the AI Act is already well advanced, policy-makers in Strasbourg still have a small window left to correct these shortcomings and to put fundamental rights back at the core of European AI governance – in particular, to honor the mandate the Convention is based on and limited to: protecting human rights, democracy, and the rule of law. Switzerland, which chairs the negotiations on the Council of Europe Convention, must take responsibility.