The Threat of AI to Democratic Elections

AI's growing influence on media and politics, through deepfakes and automated news, is poised to fundamentally alter how the two interact.

Jurgen Masure
Feb 11, 2024


The Russian president, Vladimir Putin, appeared briefly lost for words when confronted with an AI-generated version of himself.

The growing impact of artificial intelligence on media and politics, from deepfake politicians to automated news broadcasts, is set to significantly disrupt the rules of the game between the two.

Media and politics have always been intertwined. The inventions of the printing press, radio, TV, and the internet have transformed how we govern and wield political power. The fast-paced development of artificial intelligence is set to do so once again.

Election year

The year 2024 promises to be an eventful one, with major political events that will test the limits of our democracy. Nearly 70 countries will hold elections, including my own country, Belgium, several other European countries, Russia, and the United States.

The rise of AI will significantly change the relationship between media and politics. As AI-generated fake news becomes increasingly common, we must remain vigilant and watch closely how extremist groups in particular put these new digital tools to use. Sam Altman, the CEO of OpenAI, has warned of the profound effects this could have on our political systems.

In Belgium, a deepfake video of the late former Prime Minister Jean-Luc Dehaene recently surfaced on social media. His former party, CD&V, had digitally resurrected him for a political campaign, and the video caused quite a stir.

In the same week, 'Channel 1' aired a 22-minute news bulletin created entirely by AI, complete with newsreaders, reports, and subtitles.

Moreover, Russian President Vladimir Putin recently found himself in conversation with an AI version of himself on national TV, which also created a lot of buzz.

Magna Carta 2.0

We in Europe have become accustomed to many technological advancements, including artificial intelligence. However, it has become increasingly important to protect ourselves against the potential dangers of AI.

To address these dangers, agreement has been reached on a new European AI law, the AI Act, which can be seen as a modern Magna Carta for our digital age. Its ultimate goal is to regulate and contain the potentially harmful impact of AI, algorithms, and machine learning on society.

The law classifies AI systems according to the level of risk they pose. Spam filters, for instance, are deemed low-risk, while the most intrusive applications, such as social scoring systems that analyze human behavior, are prohibited or permitted only in exceptional circumstances, such as counter-terrorism.

Whether the rules are strict enough remains to be seen; the devil is in the details. Low-risk AI systems will face transparency obligations: an AI chatbot, for instance, must explicitly indicate that it is not human.

Caution

Imagine an ad appearing on your screen in which a politician addresses you personally. Picture politician X saying, "Bonjour Sara, what a mess that city of Paris is, don't you think?"

Personalized emails are already commonplace, so it is only a matter of time before videos address us in the same way. Consent is required, just as with email marketing, but the trend raises several ethical issues and calls for caution.

The line between authentic and fabricated content on social media is becoming increasingly blurred. Screenshots and images are frequently shared without proper context, making it difficult to distinguish between the two. It remains uncertain whether implementing a mere "transparency obligation" can effectively address this issue.


Ashley

In the run-up to the US elections, an advanced AI system called 'Ashley' is already being deployed. Unlike standard robocalls, Ashley can conduct interactive conversations with voters.

It analyzes voters' responses and replies with personalized, intelligent answers that give the impression of a genuine discussion. That makes Ashley both an automated source of data and a communicative sparring partner in political campaigns.

We have already seen the significant impact of microtargeting and social media on our society. It is just as crucial to recognize the potential implications of AI for the interaction between media and politics.

We must actively work to understand these political implications if we want to safeguard the integrity of our democratic process. Social media does not merely inform us; it shapes how we perceive the world. That is why we must protect our democratic dialogue from the potential misuse of AI.

AI laws

The advance of artificial intelligence into the political arena is a groundbreaking development. It has the potential to bring administrative efficiency and even foster more insightful political discussions, which would undoubtedly be a positive outcome.

However, as with every significant advancement, there are also potential dangers to be aware of.

AI's influence can extend to our legislative process as well. Using tools even more advanced than ChatGPT requires careful consideration, because AI is not neutral and has already been accused of bias. This raises questions about the regulation and oversight needed when AI is integrated into politics as a tool.

Is the 'digital Magna Carta' sufficient to keep these developments in check? With everything changing so rapidly, it is more important than ever to think critically about what is happening around us.

This article was originally published on Knack.be.
