Artificial intelligence and fake news: When robots make biased fake content on social media
Artificial Intelligence systems are massively implemented by organisations worldwide. However, these automatic systems are not always used for good, and they do not always make the best decisions. In this article I will highlight three types of biased robots, exemplified in three cases: disinforming social bots, biased AI tools, and synthetic profiles.
By Brian L. Due
Three types to be aware of

Artificial intelligence is defined pragmatically in this article as automatic software robots that are based on advanced algorithms, programmed by humans, and trained by humans on a large but purpose-specific dataset. The AI might generate content that is fake and biased, and it is usually impossible to identify its source. Humans might pick this content up and spread it purposefully without disclosing that it is fake, which we then call disinformation because it deliberately misleads or manipulates.
The effects this can have were shown, for example, by the racist misinformation that circulated on social media during the 2016 Brexit referendum: in so-called dark ads, a group of people was targeted on Facebook with the false statement that the EU would grant visa-free entry to 76 million Turks. The people in this group were united, among other things, by a fear of more immigration and loss of sovereignty, and they were singled out via algorithmic filtering. Such campaigns aimed to bring potential Brexit supporters to the ballot box, and they threaten the principle of democratic decision-making.
Fake AI-generated content might also “just” be spread without any malicious intent, yet still lead to biases or misinformation. In this article, we look at three types of phenomena that combine three key characteristics: a) AI content generation that is b) fake in some form and c) also (re)produces a bias. Content is defined here as information provided to a broad audience. Fake means that the information is misleading, either as disinformation or as misinformation, and bias is defined broadly as a disproportionate weight in favor of or against something, which is often unintended and unconscious.
In what follows, I will briefly describe three types of software robots that together cover 1) the difference between disinformation and misinformation, 2) the combination of content, fakeness, and bias, and 3) their occurrence on different social media platforms. Obviously, there are other types and other ways of presenting them. One aspect not covered here, for instance, is the growing area of deepfakes.
1. Disinforming Social Bots: When fake robot profiles spread disinformation

The first type of biased robot occurs massively on big platforms like Facebook and Twitter. Here we meet AI robot profiles that at first appear to be humans and that spread biased disinformation about key topics, typically in politics. On closer inspection, one will typically find that these profiles have numeric names, are quite new on the network, have no or very few followers, and post on very narrow or biased themes. There are many examples of bot armies contaminating political processes, which is also referred to as the weaponization of social media. Recent examples relate to covid-19, where bots have been seen producing fake content that fosters distrust of the government (see example 1).
Example 1: A fake social media account run by the bot “Mel65842178”: an automatically generated bot that plants comments into social media feeds with the deeper intention of disinforming or altering the opinions of the general public. In this case, Mel is spreading fake news by the mere fact that a robot cannot have a daughter and the described events cannot have happened. It is also retweeting a strongly biased, hateful comment.
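The profile signals listed above (numeric handles, very young accounts, few followers) can be sketched as a simple heuristic score. This is only an illustration of the idea, not a validated bot detector; the field names and thresholds below are assumptions:

```python
import re
from datetime import datetime, timezone

def bot_likelihood_score(profile):
    """Count how many of the heuristic bot signals a profile matches.

    `profile` is a hypothetical dict with keys 'handle',
    'created_at' (timezone-aware datetime), and 'follower_count'.
    """
    score = 0
    # Handles ending in a long digit run, like "Mel65842178".
    if re.search(r"\d{4,}$", profile["handle"]):
        score += 1
    # Account created very recently (threshold is an assumption).
    age_days = (datetime.now(timezone.utc) - profile["created_at"]).days
    if age_days < 30:
        score += 1
    # No or very few followers.
    if profile["follower_count"] < 10:
        score += 1
    return score

suspect = {
    "handle": "Mel65842178",
    "created_at": datetime.now(timezone.utc),
    "follower_count": 2,
}
print(bot_likelihood_score(suspect))  # 3: all three signals fire
```

In practice, platform-scale bot detection combines many more signals (posting cadence, content similarity, network structure) in learned models, but the logic of stacking weak indicators is the same.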
2. Biased AI tools: When AI-created content leads to misinformation

The second type of biased robot consists of AI algorithms that exist as tools within a specific platform. They are designed to increase the efficiency of work, e.g., improving the user experience by suggesting word or sentence completions or images. Basically, these AI tools are based not only on predictive but also on prescriptive analytics: they enable users to take immediate action because the next actions are embedded within the tool. In the process, they may produce unconscious bias, for example because they were developed by a particular group of developers (e.g., white males) and trained on biased data. Earlier examples include predictive-policing algorithms, which can lead the police to unfairly target neighborhoods with a high proportion of people from racial minorities, regardless of the true crime rate in those areas. A recent example is Twitter’s AI cropping tool: if you post tall, narrow photos, the cropping tool centers on what it thinks is the best part of the image, and it crops in a way that gives preference to people with white skin over people with black skin. See example 2.
Example 2: The Twitter cropping tool prefers to center on white faces rather than black ones. The original input picture (left) shows 11 people with black skin and one person with white skin; the cropping tool chose to focus on the one white man. It should be noted that Twitter quickly dealt with this problem.
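The mechanism behind such cropping bias can be sketched in a toy example: an automatic cropper picks the window with the highest total “saliency” score. The tiny grayscale image and the brightness-as-saliency scoring below are invented for illustration; Twitter’s actual tool used a learned saliency model, not raw brightness, but the point is the same: if the saliency score correlates with skin tone, the crop systematically lands on lighter faces.

```python
def choose_crop(image_rows, crop_height, saliency):
    """Return the start row of the crop window whose pixels have
    the highest total saliency. `saliency` maps a pixel value
    (0-255) to a score."""
    best_start, best_score = 0, float("-inf")
    for start in range(len(image_rows) - crop_height + 1):
        window = image_rows[start:start + crop_height]
        score = sum(saliency(px) for row in window for px in row)
        if score > best_score:
            best_start, best_score = start, score
    return best_start

# Toy 6-row grayscale "image": a darker face near the top,
# a lighter face near the bottom (invented pixel values).
image = [
    [40, 45, 42],     # dark-skinned face
    [44, 41, 43],
    [10, 12, 11],     # background
    [9, 11, 10],
    [200, 210, 205],  # light-skinned face
    [198, 204, 201],
]

# With a saliency score that correlates with brightness, the crop
# window lands on the lighter face at the bottom.
print(choose_crop(image, 2, saliency=lambda px: px))  # 4
```

The bias here lives entirely in the scoring function, not in the cropping loop, which is why audits of such tools focus on what the saliency model has learned to consider “interesting”.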