Election year 2024

10 Questions about AI and elections

2024 is an important election year. While citizens all over the world get ready to cast their ballot, many people worry about AI: Are chatbots and fake images a threat to democracy? Can we still trust what we see online? We explain why the hype around AI and elections is somewhat overblown and which real risks we need to watch out for instead.

Photo: Element5 Digital / Unsplash
#1 – Let’s start with the basics: How does technology impact elections?

There’s a lot of talk about how technology has been used to influence elections – but so far there is very little evidence that manipulated content, whether AI-generated or not, has nudged an election towards a different outcome.

Elections are highly complex, and they are won or lost on much bigger stories than online content alone.

Depicting people as puppets manipulated by sophisticated technology (like deepfakes) or played by powerful antagonists (like Cambridge Analytica or Russian bots) risks painting voters as dupes with limited capability for independent thinking and autonomous choice.

Voters have real concerns, such as the economy or security, that directly impact their lives. They have long-standing opinions and values that are not easily changed overnight by new technologies.

This does not mean that technology has no effect on elections, but we tend to hype the wrong issues, while neglecting the real risks. Algorithms can increase polarization and limit individuals' choices, which is a threat to democracy – even outside elections.

#2 – Are deepfakes going to mislead voters and sway elections?

Probably not. Deepfakes get too much attention. The fact that fake content is being created doesn’t necessarily mean that it’s actually having much of an effect.

Generative AI is used to try to mislead voters to some extent, though older tactics like taking real pictures out of context prevail. Examples include deepfake audio in Slovakia, robocalls using Joe Biden’s voice, and so on. But so far there is no evidence that these tactics have swayed or even deeply influenced elections. They were usually uncovered quickly.

We should, of course, look out for fake content online, whether AI-generated or not. Fact checkers and good journalism might help minimize the spread and impact of such content (though if people already want to believe something, this may have limited effectiveness).

#3 – How do social media recommendation systems contribute to the spread of fake, misleading, or harmful content like deepfakes?

The main problem is often not only the creation of fake content online – it’s the spread of bad content more generally. Another use of AI is responsible for fake content actually reaching voters: recommender systems on social media that optimize for user engagement. And according to platforms’ metrics, outrageous and emotive content is often engaging.

It’s hard enough to make people change their minds, even with convincing arguments. But it’s comparatively easy to play into people’s existing feelings, confirm their beliefs, and potentially stoke their hatred of opponents. Feelings drive engagement.

This means that if platforms want engaged users, they might algorithmically boost polarizing speech from political leaders, bad actors, or everyday people. Such negative engagement can spiral further when political campaigners create outrage-driving material to get more attention in other media, and regular users then repeat the false claims they picked up there because those claims resonate with them.
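To make this mechanism concrete, here is a deliberately simplified sketch of engagement-based ranking – a toy model with invented posts, predicted rates, and weights, not any platform’s actual code:

```python
# Toy illustration of engagement-optimized ranking. All posts, predicted
# rates, and weights are invented; this is not any platform's real algorithm.

posts = [
    {"text": "Detailed analysis of the new budget proposal",
     "predicted_clicks": 0.04, "predicted_shares": 0.010},
    {"text": "You won't BELIEVE what this politician just said!",
     "predicted_clicks": 0.18, "predicted_shares": 0.090},
    {"text": "Summary of yesterday's local council meeting",
     "predicted_clicks": 0.03, "predicted_shares": 0.005},
]

def engagement_score(post):
    # Reward predicted clicks and shares; nothing here checks whether
    # the content is accurate, civil, or harmful.
    return 0.5 * post["predicted_clicks"] + 0.5 * post["predicted_shares"]

# Rank the feed purely by predicted engagement.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.3f}  {post['text']}")
```

The outrage-driving post lands at the top of the feed simply because its predicted engagement is highest – accuracy and harm never enter the calculation.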

#4 – Could AI make political debates meaner and more divided?

Faking pictures, spreading lies, and promoting harmful content existed way before AI – and even before the internet. The problems generative AI causes are therefore not new, even though they do cause real harm.

Rather than focusing on the hype surrounding individual new technologies like AI, we should examine the entire tech industry's impact on opinion making. The consolidation of power among a few tech giants – including generative AI providers, social media platforms, and IT service providers for public services – is a cause for concern. These same corporations often fund media outlets and even university positions, undermining the diversity essential to a healthy democracy.

In the long run, these underlying structural issues may pose a greater threat to democratic discourse than manipulated content itself.

#5 – What are the real harms of deepfakes, beyond the hype about election disinformation?

Chatbots and image generators do cause real harm, even though the election disinformation narrative is overblown.

For example: It’s now ridiculously easy to create deepfake porn of female politicians, artists, and regular women and girls. Tech companies should take responsibility for preventing such misuse and make their products safe for everybody. People who are already marginalized in society face the greatest harm from deepfakes. We should listen to their reports and take their safety seriously.

New technologies are released prematurely and without proper risk assessment. So far, tech companies have faced no consequences from governments for this.

#6 – How does polarizing discourse on social media platforms discourage participation, particularly by marginalized groups?

Social media platforms don’t do enough to protect users from polarizing discourse. It’s easy to harass and intimidate other people online, including politicians and other public figures. This drives people away from politics, particularly members of marginalized and frequently attacked groups.

At the absolute worst, polarizing language online can even radicalize people to the point of killing members of particular groups, as seen in Christchurch and Halle.

All this contributes to weakening democracy and making politics a more aggressive, unsafe, and unequal space for everyone.

#7 – Can we trust AI chatbots to provide correct information about elections?

No. So far, tech companies are unable to prevent their text generators from making up false claims. Voters should therefore not turn to AI-driven search to find information about elections. Large language models generate text by stringing words together based on probability. They have no relationship to the truth.
In our research on this topic, we found that chatbots frequently misinform users about electoral processes and parties’ stances, and even slander political figures.
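To illustrate that point with a toy model (the word probabilities below are invented, and real chatbots are vastly more complex), a language model repeatedly samples the next word from a learned probability distribution – and whether the resulting sentence is true never enters the computation:

```python
import random

# Toy next-word model: hand-made probabilities standing in for what a real
# LLM learns from huge amounts of text. Invented for illustration only.
next_word_probs = {
    ("the", "election"): {"was": 0.5, "results": 0.3, "date": 0.2},
    ("election", "was"): {"held": 0.4, "won": 0.35, "rigged": 0.25},
}

def continue_text(words, steps=2):
    for _ in range(steps):
        context = tuple(words[-2:])
        options = next_word_probs.get(context)
        if options is None:
            break
        # Sample the next word by probability alone; a plausible-sounding
        # continuation is produced whether or not it is true.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(continue_text(["the", "election"]))
```

Depending on the random draw, this prints fluent continuations such as “the election was held” or “the election was rigged” – the sampler only knows which words tend to follow which, not which statement is true.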

Voters in democracies need access to reliable information. If AI-driven chatbots (like ChatGPT or Microsoft Copilot) replace search engines as sources of information, they might give voters wrong or biased information. This affects individual users more than entire voting populations.

#8 – What is the EU doing to address such problems?

With the Digital Services Act (DSA) and the AI Act, the EU has introduced legal frameworks to rein in tech companies. The AI Act requires companies to conduct risk assessments before putting their systems on the market. Many of the platform risks arguably arise from prioritizing speedy development over safe and ethical deployment, and engagement over scrupulous moderation.

The EU’s DSA is supposed to regulate online platforms and search engines, particularly in terms of transparency and data access. The DSA’s practical implementation is currently far behind schedule, but it still offers exciting opportunities for research and scrutiny.

Not only must there be reliable enforcement mechanisms in place; a strong network of researchers, civil society organizations, and journalists must also make sure that risks – whether old or new – are detected, discussed, and diminished.

#9 – What should tech companies do?

Social media platforms have to become more transparent and give access to their data to help us understand how they work and what consequences arise from their design. Instead, they are becoming less transparent. (X now demands $42,000 per month for previously free data access; in August, Meta will shut down its CrowdTangle tool, used to access Facebook and Instagram data.)

Providers of AI models have to take responsibility for the harm their products cause. Technical solutions such as watermarking or banning isolated search terms (like “Trump” or “Biden”) won’t do the trick. Most AI model providers have so far responded selectively, for example by taking protective measures only for US elections.

Instead, we need accountability for all elections and along the entire chain of creation and distribution.

#10 – How can technology strengthen democracy, rather than undermine it?

Ultimately, a strong democracy requires people to believe that their voices really make a difference, and to feel that they are empowered and autonomous members of society.
But many of the largest tech companies now have so much power that they can challenge governments. They often become hostile towards governance and refuse to collaborate. (Although this is not true of all people working in all technology companies.)

Even very large companies could see people as humans with rights rather than as users, build systems that facilitate civic discourse rather than polarization, put risk mitigation ahead of product releases, and collaborate with governments, researchers, and those most at risk from their systems. What really undermines democracy are issues of power, governance, and accountability; the use of technology is just one aspect of them. Addressing those issues might lead to a better use of technology for democracy.


