Social Bots: The Power of Opinion Robots

Bots can spread campaign slogans. | Photo (detail): © fotohansel - Fotolia.com

They can spread messages, interact with users, and even write texts. Are programmed user profiles on social networks also able to influence political decision-making?

A political party wants to mobilize voters. To this end, it uses social media such as Twitter and Facebook, from which many people get their news. In order to be as present there as possible, the party plants so-called social bots, computer programmes that simulate human behaviour. These bots spread campaign slogans millions of times over and lend the party so much media weight that it can in fact influence potential voters.

To some extent, this disquieting scenario is already reality today. Social bots were used, for example, in the American election campaign. The research project “Political Bots” at Oxford University has shown that one third of all pro-Trump tweets and one fifth of all pro-Clinton tweets sent following the presidential candidates’ first televised debate originated from such programmed opinion machines. Bots were also deployed in the Brexit debate in Great Britain and in the Ukrainian conflict, as Simon Hegelich, Professor of Political Data Science at the Technical University of Munich, among others, has demonstrated.

Politicians are concerned

The purely technical capabilities of bots range from sending canned messages to composing their own texts and interacting with “real” users. They are usually deployed for a specific purpose – for instance, press work, marketing or, increasingly, political propaganda. But what influence can bots actually have? This is still difficult to judge. Simon Hegelich sees the main risk in their influence on so-called “trends” in social networks. Through the sheer mass of messages sent by thousands of bots, usually organized in networks, certain topics of discussion can be deliberately pushed to prominence. This so-called “bot effect” is theoretically very considerable. “Theoretically” – because empirically, Hegelich admits in a research paper published in September 2016 for the Konrad Adenauer Foundation, such effects are extremely difficult to verify.
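The “bot effect” described here, a coordinated mass of identical messages crowding out organic discussion in a trend ranking, can be illustrated with a toy simulation. All numbers and the trending logic below are hypothetical simplifications for illustration, not any platform’s real algorithm:

```python
import random
from collections import Counter

def simulate_trend(bot_count, bot_tag, organic_tags, organic_posts, seed=0):
    """Toy model: each bot posts the same slogan hashtag once, while
    organic users each post one randomly chosen topic. The 'trend' is
    simply the most frequent tag, a crude stand-in for trending logic."""
    rng = random.Random(seed)
    posts = [bot_tag] * bot_count
    posts += [rng.choice(organic_tags) for _ in range(organic_posts)]
    return Counter(posts).most_common(3)

# 2,000 coordinated bots against 10,000 organic posts spread over 50 topics:
# roughly 200 posts per organic topic, so the bot slogan tops the ranking.
topics = [f"#topic{i}" for i in range(50)]
top_tags = simulate_trend(2000, "#slogan", topics, 10000)
```

Even though organic users outnumber the bots five to one in this sketch, their attention is split across many topics, while the bot network concentrates all of its posts on one, which is exactly the asymmetry that makes trend manipulation cheap.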

German politicians and media companies take the potential influence of social bots very seriously. On 24 September 2017, a new parliament will be elected in Germany. In view of the election campaign of the right-wing populist party Alternative for Germany (Alternative für Deutschland / AfD), which is marked by provocation and simplification, many politicians are concerned that social bots could lend this strategy additional force. According to a research paper of the Office of Technology Assessment at the German Bundestag (Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag / TAB), bots could have the potential to influence the outcome of political decision-making processes and, in the extreme case, even to undermine confidence in democracy.

Bots in the election campaign

The use of bots is not prohibited in Germany, but a labelling requirement is being discussed. All political parties, including the AfD, have spoken out on ethical grounds against using these programmes in the election campaign. Thomas de Maizière, Federal Minister of the Interior, has also commented on the subject: “I shall advocate that all parties in Germany participating in the next Bundestag elections publicly declare they will not use such programmes”, he said at a press conference on cybersecurity in November 2016.

Social bots could nevertheless play a considerable role in the German election campaign. According to articles published in February 2017 in the Frankfurter Allgemeine Zeitung, bot networks on Twitter and Facebook are already spreading right-wing populist content. This does not mean that the AfD itself operates these networks, but it probably benefits from the results. To gauge expert opinion on the phenomenon of social bots, the German Bundestag invited specialists to a discussion in January 2017. Interestingly, most of those invited sounded the all-clear. Linus Neumann, a member of the hacking association Chaos Computer Club, even expressed incomprehension at the amount of political and media attention devoted to social bots, as the portal Netzpolitik.org reported: the problem, he said, is not bots but citizens’ loss of trust in politics and the media. Although bots could be used to strengthen xenophobic tendencies, he argued, this danger is negligible given the low number of Twitter users in Germany. He saw no reason to fear electoral manipulation.

Optimization by click count

Another aspect that plays an important role in the discussion of social bots is the technical mechanism underlying large social media platforms. More and more people are using Facebook as a source of news. There, journalistic criteria have so far played a subordinate role in deciding what content is presented; the key criterion for relevance is the number of clicks. Facebook now admits that such an arrangement can favour manipulation and has announced that in future it will cooperate with German media companies. The idea is that these companies will help Facebook track down obviously false reporting – so-called “fake news”. Social bots are used in particular to spread such content.
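The ranking principle the article describes, relevance measured purely by clicks, can be sketched in a few lines. This is a deliberately simplified illustration with invented numbers, not Facebook’s actual News Feed algorithm, which weighs many more signals:

```python
def rank_feed(posts):
    """Rank posts purely by click count. In this simplified model,
    journalistic quality plays no role at all in the ordering."""
    return sorted(posts, key=lambda post: post["clicks"], reverse=True)

# Invented example: a bot network inflating clicks on a false story
# pushes it above carefully researched journalism.
feed = [
    {"title": "Carefully researched report", "clicks": 1200},
    {"title": "Sensational false claim", "clicks": 45000},
    {"title": "Routine local news item", "clicks": 300},
]
top_story = rank_feed(feed)[0]["title"]
```

The point of the sketch is that a pure click metric is content-blind: anything that can manufacture clicks, including a bot network, can manufacture apparent relevance.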