Worldwide, more than 70 elections are scheduled for 2024. In Germany, these include local elections and state elections in three federal states in addition to the European elections taking place this weekend. In the USA, the president will be elected in November.
It is therefore not surprising that disinformation and deepfakes are being deliberately spread this election year to stir up sentiment against political opponents, deceive voters, and thus influence the elections.
According to a survey by the Bertelsmann Stiftung (survey in German), 81% of participants believe that disinformation is a real problem that poses a threat to democracy and social cohesion. Two out of three participants (67%) are concerned that the outcome of elections could be influenced by disinformation.
What exactly are disinformation and deepfakes?
Disinformation refers to false or misleading information that is deliberately spread in order to manipulate. The term “fake news” is often used synonymously. “Misinformation”, on the other hand, is distinguished by intent: it is spread without the intention to deceive.
Deepfakes are media content that appears deceptively real and has been manipulated or generated by artificial intelligence (AI). They suggest, for example, that individuals said or did something that never actually happened.
How can disinformation be identified, and how should one react?
In general, it is important to question online content critically and not to react impulsively, especially in the run-up to an election. Above all, such content should not be forwarded.
But first, let’s take a look at how disinformation can be identified. Such news often follows a typical pattern: it is formulated in a sensational and emotional manner. Questionable news should therefore always be cross-checked against reliable sources.
Suspicious buzzwords can be checked with a search engine by combining the query with the term “fact check”. Several sources, such as the public broadcasting agencies (text in German) or independent organizations, offer such fact checks, in which news items are verified and corrected.
If suspicious news has allegedly been published on a reputable platform, it is worth checking the source: Does the news fit the general style of the platform? Can the content be found on the official website? In addition, one should check the legal notice (Impressum), the security certificate, and whether the URL of the website is correct.
To check whether a photograph has been manipulated, a reverse image search is helpful: a file or the URL of the questionable image is uploaded to a search engine, and based on the results one can judge whether the image is genuine or stems from an earlier publication. AI-based image generators are trained to create new images; AI-generated images are therefore unique. If the reverse image search returns only a single hit, one should become suspicious. In general, media content should be critically questioned and not simply accepted as reality.
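For the technically curious, the matching behind a reverse image search can be illustrated with perceptual hashing. The following is a minimal sketch, assuming the third-party Python packages Pillow and imagehash and two placeholder image files; it is not the algorithm any particular search engine actually uses.

```python
from PIL import Image          # pip install pillow
import imagehash               # pip install imagehash

# Perceptual hashes are compact fingerprints that stay nearly identical
# when an image is resized, re-compressed, or slightly edited.
original = imagehash.phash(Image.open("original.jpg"))    # placeholder files,
candidate = imagehash.phash(Image.open("candidate.jpg"))  # not provided here

# Subtracting two hashes yields the Hamming distance between them.
distance = original - candidate
if distance <= 8:  # rough rule of thumb; the threshold is an assumption
    print(f"Likely the same source image (distance {distance}).")
else:
    print(f"Probably different images (distance {distance}).")
```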
AI can simplify the creation and distribution of disinformation
Disinformation is not a new phenomenon; it existed well before the rise of generative AI. However, generative AI enables faster creation and distribution of content. Especially in the context of political elections, this is a real problem: the Global Risks Report 2024 of the World Economic Forum shows, for example, that elections can be influenced by AI (video in German). In addition, AI-generated campaign videos could not only affect election results but also fuel protests or, in more extreme scenarios, lead to violence and radicalization. The authors of the report state that this holds even if the platform labels the shared content as disinformation or fake.
It is not easy to identify disinformation generated by AI, and even successfully identified disinformation can have negative consequences. For example, if questionable or polarizing statements are falsely attributed to politicians in an AI-generated video or audio clip shortly before an election, this can harm them and their party and lead to a loss of votes. Even if the content is later disproved, its mere publication has already caused damage and thus influences people’s opinion formation and voting decisions.
Chatbots fail when it comes to political content
Many people use chatbots. But how accurate are their answers to political questions? Recently, Correctiv investigated several chatbots with political questions, including questions about the European elections. The results are sobering: in many cases, the three most widely used chatbots, ChatGPT, Microsoft Copilot and Google Gemini, answered the questions incorrectly or not at all. The editorial team asked twelve questions in German, English and Russian. Topics included international politics, the upcoming European elections, Covid-19, and climate change.
Google’s chatbot was unable to answer even simple questions such as the election date. Microsoft’s chatbot did not know the parties’ lead candidates. ChatGPT returned invented Telegram channels as sources of information. Notably, Microsoft Copilot’s suggestions for a reputable source of information included a channel of the AfD, a German party classified as a suspected right-wing extremist case by the German domestic intelligence service (text in German), a classification recently confirmed by the Münster Higher Administrative Court (text in German).
Correctiv’s test further reveals differences between the languages investigated: some chatbots refuse to answer in one language and evade questions in another. These observations are confirmed by the studies “The Silence of the LLMs: Cross-Lingual Analysis of Political Bias and False Information Prevalence in ChatGPT, Google Bard, and Bing Chat” by Aleksandra Urman and Mykola Makhortykh and “Generative AI and elections: Are chatbots a reliable source of information for voters?” by AlgorithmWatch and AI Forensics.
To put these results into perspective, it is crucial to understand the basic functioning of AI chatbots: they are based on Large Language Models (LLMs). During training, LLMs are presented with a huge amount of text and learn a statistical model of language from it. Put simply, the models learn to generate human-like text by predicting the next word. It is therefore clear that the results can contain hallucinations or false information. Anyone interested in diving deeper into the functionality of LLMs can, for example, start here.
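To make “predicting the next word” concrete, here is a deliberately tiny sketch in Python: a bigram model trained on an invented three-sentence corpus. Real LLMs use neural networks with billions of parameters, but the core principle is the same, and so is the weakness: the model samples a statistically plausible continuation, not a verified fact.

```python
import random
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): a bigram model that predicts the
# next word purely from co-occurrence statistics in its training text.
corpus = (
    "the election takes place in june . "
    "the election results are published in june . "
    "the candidates debate before the election ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation word by word from the learned statistics."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        options, counts = zip(*candidates.items())
        words.append(random.choices(options, weights=counts)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the election results are published in june ." -- fluent-sounding,
# but the model has no notion of truth, only of word statistics.
```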
The situation becomes problematic when people use chatbots to search for information without realizing that a chatbot is not a search engine but a text generator. Even though Microsoft Copilot and Google Gemini (in contrast to ChatGPT) are connected to the internet and thus have access to current texts, the generated answers are still based on statistics and are not checked for truth. Indeed, the study by AlgorithmWatch and AI Forensics reveals that the chatbots at times accused candidates of scandalous behavior and sometimes even attributed real sources to these invented stories. This is not only a risk for the reputation of politicians and the media, but a threat to democracy: citizens cannot form an uninfluenced opinion and may be guided by false information.
Artificial Intelligence and Social Media
Methods of AI are also relevant for another form of influence, so-called microtargeting: here, AI is used to evaluate huge amounts of individual data and to identify the interests of specific target groups. Using this knowledge, target groups are shown videos or statements that are exceptionally well aligned with the recipients’ interests. This tailored match of topics helps to push people toward a particular political decision. One of the best-known cases is the Facebook-Cambridge Analytica data scandal: in 2016, the company illegally harvested the data of millions of Facebook users in order to create personality profiles of voters, who were then targeted with highly personalized election advertising. In the US election campaign at the time, the team around Republican candidate Donald Trump wanted to use negative advertising to discourage Black people from going to the polls (text in German) and thus, presumably, from voting for his Democratic opponent.
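As a rough illustration of the pattern recognition behind microtargeting, the following sketch clusters invented user-interest scores with a standard algorithm (k-means via scikit-learn). The data, topics, and thresholds are made up for this example; real systems operate on vastly larger behavioral datasets.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: each row is a user, each column an interest score
# (e.g. derived from likes) for the topics climate, migration, taxes.
rng = np.random.default_rng(42)
group_a = rng.normal(loc=[0.9, 0.1, 0.3], scale=0.1, size=(50, 3))
group_b = rng.normal(loc=[0.2, 0.8, 0.7], scale=0.1, size=(50, 3))
users = np.vstack([group_a, group_b]).clip(0, 1)

# Unsupervised clustering finds groups of users with similar interests.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)

topics = ["climate", "migration", "taxes"]
for cluster_id, center in enumerate(model.cluster_centers_):
    dominant = topics[int(np.argmax(center))]
    print(f"Cluster {cluster_id}: dominant topic = {dominant}")
# Each cluster could then be shown messaging tailored to its dominant
# topic -- the targeting step at the heart of the Cambridge Analytica case.
```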
Bots also play an important role on social media. Social bots are programs that act like real users, especially in comment sections, automatically spreading messages; they actively participate in and steer discussions. As early as 2019, a study by the University of Duisburg-Essen found that bots can trigger what is known as the spiral of silence: people become less likely to express their opinions when they perceive themselves to be in the minority. The researchers show that a share of just two to four percent bots can be enough to silence users in a controversial discussion. The danger, however, is not limited to those actively involved in the discussion: silent readers are presented with a misleading picture of the prevailing mood, which can in turn shape their opinion formation. A toy simulation of this dynamic is sketched below.
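The following toy agent-based simulation illustrates the tipping effect. Its update rule and all parameters are invented for illustration and are not taken from the Duisburg-Essen study; the point is only that a small, uniform bot minority can shift what the crowd dares to say.

```python
import random

def simulate(n_humans=1000, bot_share=0.03, rounds=30, seed=1):
    """Toy spiral-of-silence model: humans hold opinion 'A' or 'B' and tend
    to fall silent when their side seems to be a minority; bots always
    voice 'B'. All numbers here are illustrative assumptions."""
    random.seed(seed)
    opinions = ["A" if random.random() < 0.5 else "B" for _ in range(n_humans)]
    n_bots = round(n_humans * bot_share / (1 - bot_share))
    speaking = set(range(n_humans))  # humans still voicing their opinion

    for _ in range(rounds):
        visible = [opinions[i] for i in speaking] + ["B"] * n_bots
        share_b = visible.count("B") / len(visible)
        for i in list(speaking):
            own_share = share_b if opinions[i] == "B" else 1.0 - share_b
            # The further one's own side falls below half of the visible
            # voices, the more likely one is to fall silent this round.
            if own_share < 0.5 and random.random() < 2 * (0.5 - own_share):
                speaking.discard(i)

    voiced = [opinions[i] for i in speaking]
    return voiced.count("A"), voiced.count("B")

for share in (0.0, 0.03):
    a, b = simulate(bot_share=share)
    print(f"bot share {share:.0%}: voiced A={a}, voiced B={b}")
```

Running this, the bot-free baseline keeps both opinions audible, while a few percent of bots pushing “B” progressively silences the “A” side, because each human who falls silent makes the remaining “A” holders look like an even smaller minority.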
Conclusion: Alertness and media literacy as a shield of democracy
In conclusion, artificial intelligence can significantly influence voters’ opinion formation and hence the outcome of elections. A conscious handling of AI-generated content is therefore of crucial importance. This requires awareness of the problem, increased media literacy, and a safe and conscious use of AI tools.
This is where the project X-Fem of our Lamarr partner Fraunhofer IAIS wants to contribute, by strengthening the digital skills of women in vocational training. The upcoming e-learning covers topics such as disinformation, hate speech, and artificial intelligence.
Want to enhance your own media literacy? Then sign up for the newsletter now and be the first to learn about the launch of the X-Fem e-learning.