
Artificial intelligence and fake news
When robots make biased fake content on social media

Facebook, Twitter, LinkedIn: In every social network, automated systems can generate and spread false information. | Photo (detail): © colourbox.com

Artificial intelligence systems are being deployed at scale by organisations worldwide. However, these automated systems are not always used for good, and they do not always make the best decisions. In this article I will highlight three types of biased robots, exemplified in three cases: disinforming social bots, biased AI tools, and synthetic profiles.
 

By Brian L. Due

Three types to be aware of

Artificial intelligence is pragmatically defined in this article as automated software robots that are based on advanced algorithms, programmed by humans, and trained by humans on a large but specific dataset for a specific purpose. Such an AI might generate content that is fake and biased, and it is usually impossible to identify its source. Humans might recognise this and spread the content purposefully without disclosing that it is fake, which we then call disinformation because it deliberately misleads or manipulates.

The effects this can have were demonstrated, for example, by racist disinformation that circulated on social media during the 2016 Brexit referendum: in so-called dark ads, a group of people was targeted on Facebook with the false claim that the EU would grant visa-free entry to 76 million Turks. The people in this group were united by, among other things, a fear of increased immigration and of lost sovereignty, and they were singled out via algorithms. Such campaigns aimed to bring potential Brexit supporters to the ballot box. They threaten the principle of democratic decision-making.

Fake AI-generated content might also “just” be spread without any malicious intent, yet still lead to bias or misinformation. In this article, we look at three types of phenomena that combine three key characteristics: a) AI content generation that is b) fake in some form and c) also (re)produces a bias. Content is defined here as information provided to a broad audience. Fake means that the information is misleading, whether as disinformation or as misinformation, and bias is defined broadly as a disproportionate weight in favor of or against something, which is often unintended and unconscious.

In what follows, I will briefly describe three types of software robots, which together cover 1) the difference between disinformation and misinformation, 2) the combination of content, fakeness and bias, and 3) their occurrence on different social media platforms. Obviously, there are other types and other ways of presenting them. One aspect not covered here, for instance, is the growing area of deep fakes.

1. Disinforming Social Bots: When fake robot profiles spread disinformation

The first type of biased robot occurs in huge numbers on big platforms like Facebook and Twitter. Here we meet AI robot profiles which at first appear to be human and which spread biased disinformation about key topics, typically in politics. On closer inspection of such profiles, one will typically find that they have numeric names, are quite new to the network, have few or no followers, and post on very narrow-minded or biased themes. There are many examples of bot armies contaminating political processes, which is also referred to as the weaponization of social media. Recent examples relate to covid-19, where bots have been seen producing fake content that fosters distrust of the government (see example 1).
A fake social media account © Twitter | Example 1: A fake social media account run by the bot “Mel65842178”: an automatically generated bot that plants comments into social media feeds with the deeper intention of disinforming the general public or altering its opinions. In this case, “Mel” is spreading fake news by the mere fact that a robot cannot have a daughter, so the described events cannot have happened. It is also retweeting a strongly biased, hateful comment.
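
For illustration, here is a minimal sketch of how the red flags just listed could be combined into a crude bot-likelihood score. The `Profile` class, the weights and the thresholds are hypothetical choices made up for this example, not a tested detection model; real platforms rely on far richer behavioural signals.

```python
import re
from dataclasses import dataclass

@dataclass
class Profile:
    handle: str        # e.g. "Mel65842178"
    days_old: int      # account age in days
    followers: int
    post_topics: set   # coarse topic labels of recent posts

def bot_likelihood(p: Profile) -> float:
    """Rough 0..1 score built from the red flags described above.
    All weights and thresholds are illustrative guesses."""
    score = 0.0
    if re.search(r"\d{4,}$", p.handle):   # name ending in a long digit run
        score += 0.3
    if p.days_old < 90:                   # very young account
        score += 0.25
    if p.followers < 10:                  # few or no followers
        score += 0.2
    if len(p.post_topics) <= 2:           # narrow thematic focus
        score += 0.25
    return min(score, 1.0)

mel = Profile("Mel65842178", days_old=14, followers=3, post_topics={"covid-19"})
print(bot_likelihood(mel))  # -> 1.0, i.e. every red flag fires
```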

2. Biased AI tools: When AI-created content leads to misinformation

The second type of biased robot comprises AI algorithms that exist as tools within a specific platform. They are designed to make work more efficient, for example by improving the user experience on platforms through word or sentence completion or image suggestions. Fundamentally, these AI tools are based not only on predictive but also on prescriptive analytics: they enable users to take immediate action, because the next actions are embedded within the tool. In the process, they may reproduce unconscious bias, for example because they were developed by a particular group of developers (e.g., white males) and trained on biased data. Earlier examples include predictive policing algorithms, which can lead the police to unfairly target neighborhoods with a high proportion of people from racial minorities, regardless of the true crime rate in those areas. A recent example is Twitter’s AI cropping tool: if you post tall, narrow photos, the tool centers the crop on what it thinks is the best part of the image in the tweet, and it crops images in a way that gives preference to people with white skin over those with black skin (see example 2).
Twitter’s image cropping tool | Example 2: The Twitter cropping tool prefers to center on white faces rather than black ones. The original input is the picture on the left, showing eleven people with black skin and one person with white skin. The cropping tool chose to focus on the one white man. It should be mentioned that Twitter quickly dealt with this problem.
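
To see mechanically where such a bias can enter, consider a minimal sketch of saliency-based cropping. Twitter’s actual model was a learned neural saliency predictor and is not reproduced here; the function below simply assumes a saliency map is given and crops around its peak, which is enough to show that any skew in the saliency model propagates directly into the chosen crop.

```python
import numpy as np

def crop_around_saliency(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a window centered on the saliency peak.

    `saliency` stands in for the output of a learned saliency model.
    If that model systematically scores lighter faces as more salient,
    its peak - and therefore the crop - will favor them as well."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = image.shape[:2]
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# A tall, narrow image whose saliency model fires on one spot near the top:
img = np.zeros((400, 120, 3))
sal = np.zeros((400, 120))
sal[40, 60] = 1.0
print(crop_around_saliency(img, sal, 120, 120).shape)  # (120, 120, 3)
```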

3. Synthetic profiles: When fake profiles lead to disinformation

The third type of robot is the synthetic “human”, i.e., a robot that, if it were physical, would be called a humanoid. As software robots, they have more developed, convincing profiles than the disinforming social bots, which occur in droves with no backstory. Synthetic profiles are renowned on Instagram as influencers, with #lilmiquela as a prime example. There is a whole business in fake persons whose faces and content are generated by an AI; see ThisPersonDoesNotExist.com for fake faces. They can be used to produce biased content and spread disinformation. A key example comes from LinkedIn, where the profile “Katie Jones” was able to network with top figures in US politics (see example 3).
Example 3: The LinkedIn profile “Katie Jones” doesn’t exist. It is fake: a synthetic, AI-constructed face. The Associated Press analysed the profile picture and stated that it is typical of espionage efforts on the professional networking site.
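
One hedged illustration of how such synthetic faces can sometimes be spotted: generators trained on aligned face datasets, such as those behind ThisPersonDoesNotExist.com, tend to place the eyes at almost the same pixel positions in every image. The template coordinates and tolerance below are rough assumptions for illustration, not calibrated values, and the eye positions would have to come from an external face-landmark detector.

```python
import numpy as np

# Approximate eye centers (x, y) in a 1024x1024 aligned portrait.
# These coordinates are assumptions for illustration only.
TEMPLATE_EYES = np.array([[385.0, 480.0], [640.0, 480.0]])

def looks_gan_aligned(eye_centers: np.ndarray, tol: float = 12.0) -> bool:
    """Flag a portrait whose detected eye centers sit suspiciously close
    to the template. `tol` (in pixels) is a guess, not a tuned threshold."""
    return bool(np.all(np.linalg.norm(eye_centers - TEMPLATE_EYES, axis=1) < tol))

# Eyes almost exactly on the template: suspicious.
print(looks_gan_aligned(np.array([[386.0, 481.0], [639.0, 479.0]])))  # True
```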

Debunking fake news

There are many types of fake profiles and of AI-generated content on social media. In this article, I have covered three of them: 1) disinforming social bots, 2) biased AI tools, and 3) synthetic profiles. Obviously, robots can be of great help in spreading true information when they are marked as coming from a robot, and they might also be used to fight bias on a more general level, as is done, for instance, by the Western Norway Research Institute, which tries to debunk fake news and hate speech with the help of AI (a minimal sketch of that kind of approach follows below). In the end, it is not possible for a human being to be completely unbiased, so it is basically also impossible to build an unbiased AI system. But we can certainly do a lot better than we are doing.
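
As a closing illustration, here is a minimal sketch of the kind of text classifier such debunking efforts can build on. It is not the Western Norway Research Institute’s system: the four training texts and their labels are invented for this example, and a real system would train a much stronger model on thousands of fact-checked articles.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data (1 = flagged as likely fake), invented for illustration.
texts = [
    "EU grants visa-free entry to 76 million Turks",
    "Parliament publishes official referendum results",
    "Secret plot: government hides vaccine deaths",
    "Health agency releases weekly covid-19 statistics",
]
labels = [1, 0, 1, 0]

# TF-IDF features over unigrams and bigrams, fed to a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["Leaked memo proves secret EU immigration deal"]))
```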
