How does AI contribute to fake news creation?
Was this picture taken with a camera, or is it footage generated by an AI-powered app? Did this well-known actor, who is now encouraging me to invest in oil company shares, really say these words? And if so many people claim that the Earth is flat, perhaps they are on to something?
By Piotr Henzler
One of the dark sides of artificial intelligence development is its uncommonly strong contribution to the manufacturing and dissemination of disinformation and fake news. How? On the one hand, AI enables (or facilitates) content creation; on the other, it offers an effective tool for disseminating that content. What can such content be?
Fake news texts
So far (though it is unclear for how much longer), most fake news has taken the form of text – shorter or longer articles that are either completely made up or – which makes them harder to detect – partly accurate and partly fake. On top of that, there are texts that may be accurate but are placed in a context that changes their message or meaning. Fake news can also take the form of comments on news items (and other material), opinions, social media posts, etc.

Some of these are 'home-made', i.e. there is a person or persons who more or less intentionally create such fake news, but technology can automate and scale up these processes. Textual material can be created using apps based on LLMs (Large Language Models), such as ChatGPT or Copilot. Such algorithms, and the apps that use them, can generate not only the 'requested' text, but also automatic comments, reactions or social media posts, which may additionally be targeted – using the algorithms applied in such media – at predefined audience groups (e.g. supporters of a given political party, anti-vaccine activists or flat Earth believers).
What is the scale of this phenomenon? According to the recent Bad Bot Report, published by Imperva, a company owned by the cybersecurity leader Thales, 51% of online traffic, mostly in social media, was generated by bots. It is machines, not people, that write the comments and posts…
Deepfakes
The real AI-driven revolution happened in the domain of images and video clips, which had previously been thought to resist manipulation. Until recently, people used to say 'pics or it didn't happen', i.e. it was assumed that if there is a picture confirming a given piece of news, we can be sure the news itself is true. That is no longer the case.

Artificial intelligence enables almost any image alteration. Even before its rapid development over the last few years, there was software that enabled image manipulation: changing the background, 'enhancing' objects, removing unwanted people caught in the frame. And now? With AI, one can do nearly everything. Do you want your friends to think you wear royal robes on a daily basis, drive a luxury car and have a muscled body, much like Hercules? Just find photographs of someone in such a situation and use one of the many 'face swap' apps to replace that person's face with your own.
Do you happen to have a picture of someone famous and would like them to 'say' something? Or do you want to see your neighbour's dog, the one you captured on camera, running around the courtyard? No problem – there are apps that will bring your pictures to 'life', adding the requested movement and action.
You can also generate a video not by bringing pictures to 'life', but simply by writing a prompt: 'Generate a video that shows…' and describing what you want to see. You wish to show yourself reporting from a flood-affected area? Just sit down in the comfort of your own home and ask an app to generate an appropriate background. Are you about to produce 'anti-vax' material? Ask AI to generate a clip showing a dozen or so people convulsed with pain after being given a shot.
Do you plan to include celebrities in your material? Using a 'lip sync' function you can make anybody say anything, and the resulting video will look natural. That is how the Polish Prime Minister Donald Tusk and the Polish President Andrzej Duda were shown encouraging people to 'invest in the Baltic Pipe', and how Bishop Kazimierz Nycz was shown advertising a specific drug to relieve leg pain.
Do you wish to show off by speaking Portuguese? Just record a video in Polish, run it through an app, and in no time you will have the same footage with your own voice – yes, your own! – speaking perfect Portuguese.
And if you have a large enough budget and some skill, you can use AI to alter what people are saying – in real time. And that creates so much opportunity… for abuse.
Fake news dissemination
Artificial intelligence does not only facilitate the generation of fake news as texts, pictures or videos; it can also support their dissemination. How?
Bots and bot farms comb the Internet, publishing their content on diverse sites, responding to already published material or commenting on other people's posts. In this way they create the impression that 'everyone is talking about it', and since 'everybody' says it, many people follow the so-called social proof rule, i.e. tend to believe that such a large group must be right.
A variant of this flooding of online sites with supporters of a particular viewpoint is brigading: 'targeting' selected discussions or, for example, generating 'excellent' or 'poor' ratings of specific sites on a massive scale, so as to drown out opposing voices and create an impression of unanimity on a given issue.
Astroturfing works in a similar way: a coordinated campaign pretends to voice the concerns of 'ordinary people', which can make the message more credible to the many viewers who, dissatisfied with 'official sources', readily heed the voices of 'people like me'.
These are just a few examples of what those who generate fake news – for propaganda or disingenuous marketing purposes – can do with the possibilities created by AI: manipulating the social perception of events or convincing people to, for example, invest in risky projects.
Artificial intelligence supports (or generates) fake news, and it also makes it possible to identify audiences that are especially susceptible to particular messages. It facilitates access to these groups and flooding them with targeted content, so as to create the impression that the promoted idea is not only valid, justified and true, but also shared by a large section of society – or at least of our own bubble.
Are we then utterly defenceless? The technology is developing at an astonishing pace, making it all the more difficult to recognize AI-facilitated fake news. But this does not mean we stand no chance at all. Have a look at our text How to recognize fake news? to learn the general principles, and at How to recognize a deepfake to protect yourself against manipulated footage and photographs.
The publication of this article is part of PERSPECTIVES – the new label for independent, constructive, and multi-perspective journalism. The German-Czech-Slovak-Ukrainian online magazine JÁDU is implementing this EU co-financed project together with six other editorial teams from Central and Eastern Europe, under the leadership of the Goethe-Institut.

>>> More about PERSPECTIVES