Position Paper

Which AI do we want? Framework conditions for algorithms & artificial intelligence

The Swiss Federal Council is introducing regulation around AI. Below, we summarize what the Federal Council and Parliament should consider from the perspective of fundamental rights, democracy, and sustainability.

Dr. Angela Müller
Executive Director AlgorithmWatch CH | Executive Board Member AlgorithmWatch
Estelle Pannatier
Senior Policy Manager

Overview

  1. Introduction
  2. AI & fundamental rights
  3. AI, society & democracy
  4. AI, power & sustainability

Algorithms and artificial intelligence (AI) are being used more and more – and there is often speculation about their potential opportunities and challenges in the future. However, the development and use of these systems are already having a real impact on people and society today – and therefore on the conditions of democracy: on fundamental rights, equal treatment, and the protection of minorities, as well as on education, public debate, democratic deliberation, and the distribution of power in society. We should therefore ask ourselves: What kind of AI do we want? How do we ensure that this technology does not reproduce existing injustices and serve only the interests of a few? And how can we shape AI – instead of it shaping us?

If we want to ensure that algorithms and AI benefit us all, we have a responsibility to seriously address the social challenges they pose.

To do this, we should create a framework for the development and use of algorithms and AI that ...

... prevents damage: Algorithms and AI can have an impact on fundamental rights, justice, democracy, and sustainability. In order to protect these social achievements, we do not want to have to rely on the discretion of tech companies. As a democratic society, we must define the requirements and ensure that technology is used responsibly and fairly.

... enables benefits for everyone: A look at the value chain behind AI shows that it is currently in the hands of a few large global corporations. We must never lose sight of this concentration of power when we think about how we deal with AI. The aim must be to shape technological development and deployment sustainably and in the interests of the common good, not merely to serve the particular interests of a few: AI should be developed and used in such a way that it actually benefits everyone.

Political decision-makers must therefore have the ambition and willingness to shape this framework in order to protect fundamental rights, defend democracy, and enable sustainability when dealing with algorithms and AI. This framework must promote interdisciplinary research, education, and awareness-raising activities, provide public and non-profit infrastructure, and strengthen civic literacy and media pluralism. At the same time, however, we also need measures at the legal level. The aim is to create an overall framework that prevents negative effects while promoting responsible AI supply chains, sustainable AI development, and an AI ecosystem that generates innovations for the common good.

The Federal Council should seize this opportunity in its ongoing process to regulate AI. We have briefly summarized below what the Federal Council and Parliament should consider from the perspective of fundamental rights, democracy and sustainability. To read our recommendations in detail, go to our position paper in German or French.

AI and fundamental rights

Algorithms and AI can influence decisions about people: be it when systems are supposed to evaluate job applications, measure creditworthiness, regulate access to educational institutions, predict crime, forecast recidivism rates, or detect benefit fraud. In doing so, they can have discriminatory and otherwise harmful effects on individuals and affect their fundamental rights.

Objective: We ensure that algorithms and AI are used comprehensibly, responsibly, and in accordance with fundamental rights when they influence decisions about people.

To achieve this, we need measures to create transparency, protect fundamental rights, strengthen protection against discrimination, exercise oversight, and guarantee access to legal remedies and accountability.

Read more in our position paper on page 5 – in German or French

AI, society and democracy

A significant part of our public debate now takes place on social media platforms such as Instagram, X, LinkedIn, and TikTok. Their algorithms are designed to keep us online for as long as possible in order to maximize platform companies' profits. We obtain information that is relevant to forming our opinions via search engines such as Google or Bing. Content that harms people or impairs political deliberation, such as deepfakes, is created – increasingly by generative AI systems – and distributed on platforms by non-transparent algorithms. This lack of transparency, the lack of accountability, and the concentration of market and opinion power in the hands of a few multinational technology companies worth billions of dollars pose a challenge for our public debate.

Objective: We enable a constructive public debate: beneficial for society and democracy, positive for individuals, and fair for all.

To do this, we need measures to analyze how algorithmic governance affects public debate and what this means for us as individuals and as a society. We must also be able to hold online platforms and AI providers to account when they knowingly accept negative impacts in order to maximize their profits.

Read more in the position paper – in German on page 7 or French on page 8.

AI, power & sustainability

Today, there is no AI without Big Tech. The value chain is characterized by a concentration of power in the hands of a few large companies, by significant consumption of resources such as energy and water, by considerable CO2 emissions, and by sometimes precarious working conditions, for example when data annotation is outsourced to the Global South.

Objective: We ensure that AI systems themselves are designed to be ecologically, economically, and socially sustainable throughout the value chain.

To achieve this, we need measures to counter the enormous market power of a few companies, which also has far-reaching consequences for democracy, and to enable a sustainable AI market and AI development geared towards the public interest. In addition, companies that develop and use AI must take responsibility for their supply chains and comply with due diligence obligations regarding sustainability and human rights.

Read more in the position paper – in German on page 9 or French on page 10.