Artificial Intelligence and Elections
It’s About Saving Democracy

Which information is true, which is artificially generated? Spotting the difference is becoming ever more difficult as AI-augmented fake news and disinformation are spreading. | Photo (detail): © Adobe

Bogus audio clips imitating politicians’ voices, AI-augmented robo-calling, fictional TV anchors: artificial intelligence offers a multitude of new ways to spread disinformation. In an era of global crises, it represents a genuine threat to democracy.
 

By Dan Gillmor

Supporters of democracy have an enormous task ahead in 2024: saving it. Amid an unprecedented wave of elections around the world, we confront mounting threats to the system that Americans call “government of the people, by the people, for the people”.

As some 40 nations go to the polls in 2024, we must also contend with something that could magnify the digital era’s earlier challenges to a foundation of democratic participation: information we can trust. That new development is generative artificial intelligence, usually called “AI”, an abbreviation that holds multitudes of meaning, potential, and danger.

By the time the United States goes to the polls in autumn, most of the other nations will already have held their elections. We will have a better idea then whether generative AI, which leapt into the public eye only a year ago with the release of ChatGPT, is more a boon or a threat (or, possibly, neither).

The early signs aren’t auspicious, to put it mildly. Several recent examples: In Bangladesh, according to the Financial Times, pro-government media and social influencers have “promoted AI-generated disinformation created with cheap tools offered by artificial-intelligence startups.” In Slovakia’s recently concluded elections, according to a report in Wired magazine, bogus audio clips purporting to be the voice of a candidate were used to help defeat him. And in the U.S., the Associated Press reports that campaigns are already using AI-augmented ‘robo-calling’ – automated dial-up voice systems – to generate interest in political candidates.

These cases, while worrisome, seem unconnected. And so far, they don’t appear central to the plainly organized campaigns to destabilize liberal democracies around the world. But it should be a given that people who routinely use deceit as part of their arsenal of persuasion are eagerly eyeing the possibilities of the new tools, and they surely will use them broadly if they can.

AI Helps Anti-Democratic Forces

The broader AI boom comes amid escalating and existential global crises. Wars and climate change are already fueling mass migration and economic upheaval. Those crises, and others, have contributed to widespread public angst and, not coincidentally, the rise of right-wing populists, some of whom are outright fascists and many of whom view democracy with contempt.

AI is a key player in the relentless, rapid, and related evolution of our information ecosystem. Along with other powerful communications tools, it helps malevolent propagandists who are assisting the anti-democratic forces. Despite the best efforts of people who are trying to help the public sort out reality from lies, the forces of deceit are gaining strength.

It’s vital to remember that generative AI is at an early stage of development. It is not a sham, though its immediate impact and potential may well have been overstated by its promoters and journalists. It is, at the very least, a fascinating and potentially powerful tool. It may ultimately be a revolution. 

Its most profound early impact in politics may be the way it augments other digital technologies, which are themselves already influential in the electoral process. Campaigners already know how to ‘micro-target’ small groups of voters. Soon, individual voters will get messages tailored specifically for them, on a mass scale. Bots now flood social media platforms with bogus accounts promoting lies, often with the intent to suppress voter turnout, but those have been somewhat blunt instruments to date. Soon, the bad actors will super-charge the bots with highly crafted AI-augmented persuasion and aim them with far more precision.

Fictional TV news anchors won’t appear on major TV networks, but clips of them – and AI-generated impersonations of real journalists – will travel widely via social media. Some of the platforms are trying to address this wave, but at their enormous scale it is impossible to moderate everything they host.

We also need to recognize that even if someone magically removed all AI-enhanced misinformation from public view, that wouldn’t fix the larger problem. Journalists love to attack social media for hosting garbage talk, but major news outlets in the U.S. and elsewhere have been injecting misinformation into the civic sphere for decades. Americans are likely absorbing more deceit from the Murdoch family’s Fox News than from all the online trolling put together; around the world, other powerful corporate interests, often aligned with or attached to governments, are engaged in routine propaganda campaigns. If we can’t deter corporate giants from poisoning the public sphere, it’s difficult to envision comprehensive fixes for AI-boosted online misinformation.

Nonetheless, we should try. The options include regulation, outright censorship, countermeasures, and public education. 

Detection, Regulation and Media Literacy

The Brennan Center for Justice, a U.S.-based organization that works for free and fair elections, is among many advocates of strong regulation. In December the center published a research report – Regulating AI Deepfakes and Synthetic Media in the Political Arena – laying out the logic for, and examples of, ways that regulation could help protect the electoral process. The report tries hard to balance competing interests. But it never fully reconciles regulation with the overriding value of free speech in democratic societies, where regulating lies amounts to regulating speech that should unquestionably remain legal even if we loathe it. Regulation may well be possible, but it will be fraught with problems.

Meanwhile, we can adopt better countermeasures. We need better tools to identify synthetic media and the motivations behind it – tools that can debunk fakes and, sometimes, confirm authenticity. Software services claiming to identify AI-generated text – such as spotting student plagiarism – have been utterly unreliable, which makes them dangerous. Humans remain essential in open-source intelligence projects like Europe’s Bellingcat.com; blending human and machine intelligence looks like the basis for detection and response, at least for the short term.

Another countermeasure under development is digital tagging of media at the moment of creation, ensuring that the tags travel with the content as it is modified and moves around. The Content Authenticity Initiative, a collaboration of corporate, academic, and other interests, is working on technology to achieve this. The work is promising but still in its very early stages, and it raises all kinds of difficult questions about media control.
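To make the idea concrete, here is a minimal sketch in Python of how creation-time tagging can work: a cryptographic hash binds provenance metadata to a file’s exact bytes, and a signature over that record makes later tampering detectable. This is a toy illustration, not the Content Authenticity Initiative’s actual format; the key, field names, and functions are invented for the example, and a real system would use public-key signatures rather than the shared-secret HMAC used here.

import hashlib
import hmac
import json

# Hypothetical signing key for this sketch only; a real system would use
# public-key cryptography so anyone can verify without holding a secret.
SIGNING_KEY = b"demo-key-not-for-real-use"

def make_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Bind provenance metadata to the exact bytes of a media file."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "tool": tool,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and the media is unmodified."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())

photo = b"...raw image bytes..."
tag = make_manifest(photo, creator="News Agency", tool="Camera X")
print(verify(photo, tag))              # True: untouched original
print(verify(photo + b"edit", tag))    # False: modification detected

The hard part, which this sketch ignores, is keeping such tags attached as content is legitimately edited, recompressed, and re-shared across platforms – exactly the sort of media-control question the initiative is grappling with.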

Education is the long-range, and best, way forward. Media literacy training, more common in Europe than in the U.S., offers media consumers and creators a framework for common-sense judgment. Can it protect against an onslaught of expertly made propaganda and lies? We don’t know yet. But if we don’t try – with comprehensive, sustained programs for people of all ages – the outcome will be grim.
 
