Social Robotics
Learning How to Live With Robots

Older woman with folded hands sitting opposite a robot
“Social” robots often have something human about them | Photo (detail): picture alliance / BSIP

When machines become “social” interaction partners, we need to revisit our understanding of sociality. A conversation with the philosopher Johanna Seibt of Aarhus University, who works in the new research field of “robophilosophy” and launched the biennial Robophilosophy conference series, which in 2020 took place online from 18 to 21 August.

Ms Seibt, what are we to expect when robots start taking part in our lives?

For about two decades there has been a new branch of research in the field of robotics – social robotics. In contrast to industrial robots, so-called “social” robots are built and programmed in such a way that they give the impression of being social interaction partners. It is their appearance, movement patterns, linguistic reactions and interactive functional performance that create this impression.

 
Interestingly, the original motivation for developing robots with “social intelligence” was purely theoretical, but the practical prospects quickly became obvious. Robots that we can deal with “as if they were people” – as Cynthia Breazeal described the “dream” of social robotics in 2002 – offer, prima facie, the possibility of replacing people, or at least individual human services in the context of cooperation with other people.
 
The latest developments in artificial intelligence (AI), particularly in the areas of emotion recognition and speech processing, seem to be bringing the dream of “social” robotics closer to reality. Political decision-makers are already beginning to think in concrete terms about the use of social robots in elderly care, education and certain local government services. In Japan, the development of social robotics has been explicitly linked to the aging of society and the accompanying shortage of skilled workers – see Jennifer Robertson’s book Robo Sapiens Japanicus (2018).

In contrast to industrial robots, so-called “social” robots are built and programmed in such a way that they give the impression of being social interaction partners. It is their appearance, movement patterns, linguistic reactions and interactive functional performance that create this impression.

Johanna Seibt


Public discussion currently focuses mainly on the performance and dangers of artificial intelligence software, but as soon as this software can move about in physical space, completely different dimensions of intervention in fields of human action come to light, some positive and some negative. The buzzword Industry 4.0 conceals the political dimension of the “automation age” (McKinsey, 2017, A Future That Works), in which an automation potential is calculated for each activity. At the moment, the “automation potential” of activities with a high proportion of social interaction is rather low, but the more progress we make in social robotics, the more the automation potential of these social activities increases.
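To make the notion concrete: in analyses of this kind, the automation potential of an occupation is typically estimated as the time-weighted share of its constituent activities that are technically automatable. The following toy sketch is not the McKinsey methodology, and all activities and numbers are invented; it only illustrates why occupations heavy in social interaction currently score low, and how the score rises as social robotics advances.

```python
# Toy illustration of an "automation potential" score.
# All activities, weights and automatability values are invented;
# this is not the McKinsey methodology.

def automation_potential(activities):
    """Time-weighted share of work that is technically automatable.

    activities: list of (hours_per_week, automatability in [0, 1]) tuples.
    """
    total_hours = sum(hours for hours, _ in activities)
    automatable_hours = sum(hours * a for hours, a in activities)
    return automatable_hours / total_hours

# A hypothetical care-work job: mostly social interaction (low
# automatability today), some documentation and logistics (high).
care_job = [
    (25, 0.10),  # conversation, emotional support
    (8, 0.80),   # documentation, record-keeping
    (7, 0.60),   # lifting, fetching, transport
]
print(f"today: {automation_potential(care_job):.0%}")  # ~33%

# If progress in social robotics raised the automatability of the
# interactive portion from 0.10 to 0.40, the overall score would jump.
care_job_later = [(25, 0.40), (8, 0.80), (7, 0.60)]
print(f"with better social robots: {automation_potential(care_job_later):.0%}")  # ~52%
```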
 
The COVID-19 pandemic has opened up a whole new perspective on the possible uses of “social” robots – especially in the field of telecommunications. Over the past months we have all realized how important it is that our brain can perceive an interaction partner both three-dimensionally and physically – simple communication of information may well be possible via Skype or Zoom, but, in my experience, joint decision-making processes and creative thinking with each other are much more difficult. So-called communication robots, which enable the signals of physically distant conversation partners to be “embodied” in another place, would offer new forms of “telepresence”.
 
You talk of a “dream” and mention the catchphrase of the “robot revolution” – is “social” robotics then about a utopia?
 
I very much hope that we will succeed in not allowing the use of so-called “social” robots to be determined by the market, but in introducing and regulating them as a political measure – by means of a democratically legitimised public discourse on values. To make this possible, however, we have to create new interdisciplinary research structures, branches of training and professional profiles that ensure that the discourse on social values does not get lost in scaremongering or techno-utopian dreams, but remains knowledge-based. In politics nowadays – in several European countries – there is a combination of technology-based innovation management, on the one hand, and humanities bashing, on the other, which means that the humanities are considered socially unimportant and therefore less worthy of support. If “social” robotics is to succeed as a political measure, it needs to be developed with cultural expertise. To put it pointedly – the humanities and social sciences have never been as important as they are now: the more technology, in the role of a social agent, infiltrates the physical and symbolic spaces of human interaction, the more socio-cultural expertise is required to design successful applications – an expertise that engineers do not have.

The reality of “social” robotics is extremely complex and raises profound questions about our society and our values that, to this day, we have not even been able to formulate, let alone answer.

Johanna Seibt


To explain this in more detail, let me return to the “dream” of the “social” robot. What exactly are we dreaming of? The human dream of a mechanical assistant is certainly nothing new; it runs through the Western and Eastern cultural imagination from early antiquity to the present day. It is the dream of the tireless, indestructible helper who often has superhuman powers or skills. Hollywood has given us a few examples – see R2-D2 and C-3PO from Star Wars, or, with a touch of self-irony, TARS from Interstellar. These are today’s versions of an ancient dream, one that from a psychological perspective may appear infantile. The cultural-historical longevity of a dream, however, is no guarantee of its political feasibility. Dreams are not consistent, and most importantly, dreams are not complete. The reality of “social” robotics is extremely complex and raises profound questions about our society and our values that, to this day, we have not even been able to formulate, let alone answer.
 
We should not leave the shaping of our future social relationships (only) to the cultural imagination of young roboticists, who of course have the best intentions. There are many individual studies of human interaction with social robots, but we are currently still far from being able to assess the psychological, sociological and political consequences of widespread use. The inclusion of humanities expertise can help us to develop a culturally sustainable form of social robotics that can also be implemented politically.
 
“Human-Robot Interaction Research” has been around for about a decade and a half. What progress has been made in the discussion about its practical use? 
 
It should actually be clear that if you intervene in social reality with a new kind of quasi-social agent, you should make use of the expertise of all relevant scientific disciplines. Nevertheless, work on “social” robotics and HRI – Human-Robot Interaction research – continues to this day primarily within disciplines such as robotics, psychology and design studies, and this does not do justice to the complexity of our socio-cultural reality.
 
It is astonishing that the contributions that anthropology and philosophy can make to the analysis of new social practices have so far hardly been taken seriously; conversely, many anthropologists and philosophers are not aware of the important tasks that the field of “social” robotics has opened up for the humanities. Gianmarco Veruggio called for the establishment of “roboethics” in 2004, and together with my colleagues in Aarhus I established the new field of “robophilosophy” in 2014, since the questions of “social” robotics go far beyond ethical problems and raise deep theoretical issues. The Robophilosophy Conference Series, which I founded and run with my colleague Marco Nørskov, has become the largest series of events on humanities research in and on social robotics. We are currently preparing the fourth conference in the series, Robophilosophy 2020: Culturally Sustainable Social Robotics (August 18 to 21, 2020), this time as an online event. The message is taking hold – the Danish recommendations for future EU research funding in the field of robot technology now explicitly include the cultural dimension. In Germany, too, the insight is beginning to prevail that culturally sustainable robot applications can only be created by interdisciplinary development teams in which humanities expertise plays a central role right from the start.
 
Can you give an example to explain the questions raised by “social” robotics?
 
Imagine that McDonald’s introduced, in all its branches, sales robots that look like attractive women. What cultural and socio-political signals would this send? And what signals do you send to elderly dependents and their relatives if you install lifting robots that look like large teddy bears in old people’s homes? To deal with these questions competently, you need the analytical terms and methods of ethics and a cultural expertise that are not yet firmly anchored in the multi-disciplinary repertoire of social robotics.
 
Robot ethics has long drawn attention to these socio-political signalling effects of design decisions. There are, however, other, deeper problems. Social robots with certain characteristics trigger mechanisms of preconscious social cognition in humans. Due to our “social brain” we tend to misidentify robots with a certain appearance and certain movement patterns as fellow beings and even as moral subjects. We tend to attribute conscious intentions, feelings, and even rights to such robots – even to those that do not look like humans. Social robotics initially aimed to exploit this tendency to anthropomorphise. In the meantime, we have become a little more careful. The fact that we so readily include robots in our circle of social agents, even see them as moral subjects, have empathy for them, even get attached to them, presents us with a fundamental problem that goes beyond ethics – a description problem. Social robots are not tools; they are something that does not yet fit into any of our familiar categories.

Due to our “social brain” we tend to misidentify robots with a certain appearance and certain movement patterns as fellow beings and even as moral subjects.

Johanna Seibt

Our social cognition predisposes us to use descriptions that are wrong and yet not entirely wrong. A robot does not actually “recognise”, “reply”, “see”, “ask”, “fetch” or “choose” anything – these capacities would require consciousness and other cognitive faculties that we have so far regarded as exclusively human. Such descriptions are therefore incorrect. They would only be correct if the terms for human social action were reduced to denoting observable behaviour – “the robot recognises me” would be an appropriate description if one were to equate “recognising” with the manifestation of a certain behaviour. We often use such reductive interpretations when dealing with animals – we say that a dog recognises and greets its owner. But here it is clear that this is a different, metaphorical “recognising” and “welcoming”, because the dog does not say, “Nice that you have finally come home!”
 
In the case of “social” robots, on the other hand, we have to ask ourselves how we should proceed conceptually and normatively. Should we remove the quotation marks from the term “social” robots? In other words, should we relinquish our familiar concepts of social action and accept as social agents all items that behave according to certain patterns? If animals have rights, why not robots? Or should we keep the traditional understanding of the term “social” and insist that only those who experience the phenomenology of consciousness and can speak about it are social agents and have rights?
 
This is the point where we delve into questions of theoretical philosophy – that is, philosophy of mind and ontology – as well as political philosophy. Western democratic societies legitimise political authority by means of a concept of the person that stems from the Enlightenment. “Social” robotics now calls this concept of the person into question, a concept closely linked to the capacities engaged in human social subjectivity. In May 2016, the European Parliament proposed that sophisticated robots be given the status of electronic persons. The subsequent discussion showed how much our political thinking is shaken when what always belonged together – person, social subject, reason, intelligence, feeling – is suddenly torn apart.
 
It is often said that robots will take our work away from us. How do you see that?
 
That is the socio-political dimension of “social” robotics that I mentioned earlier. If we abide by principles for the responsible and culturally sustainable development of “social” robotics applications, I think we will see the opposite – new job profiles will emerge, and the focus will be on cooperating with “social” robots. It will be important to examine exactly which forms of cooperation are possible and beneficial – the keywords here being mere coordination versus cooperation versus teamwork. Above all, it depends on how the items in this new interactive environment should be described, because actually – and that’s the paradox – robots (the word derives from the Czech robota, “labour”) cannot really “work”.

New job profiles will emerge, and the focus will be on cooperating with “social” robots.

Johanna Seibt

Expertise from the realm of philosophy will be increasingly in demand, especially if politicians opt for the model of Integrative Social Robotics or other models of “responsible robotics”. AI companies like DeepMind are already looking for ethicists. However – and this is formulated in the definition of the new field of robophilosophy as the philosophy of, for, and by social robotics – “social” robotics also allows philosophy to take on new forms. Robophilosophers can propose technical solutions to ethical problems and investigate them experimentally. For example, in Aarhus we are currently investigating whether the social injustice that results from gender or racial bias in job interviews – or from other prejudices that stem from the appearance of applicants – can be reduced by conducting the interview via a telecommunications robot with a neutral human appearance.
 
We hear a lot about bots in the field of political communication. What are we dealing with there compared to your model?
 
There’s a world of difference. Our model of ISR (Integrative Social Robotics) supports the development of responsible, culturally sustainable applications of physical, “social” robots, in a transparent, value-based, participative process that includes the expertise of all relevant disciplines and in particular cultural expertise provided by the humanities. The software bots that are currently disrupting our political processes, from communication to election fraud, are pretty much the opposite – covert, manipulative processes driven by power interests alone.
 
Besides the political security problems that automatic profiling creates, I believe we underestimate how the learning algorithms of current AI programs could undermine our existing models of justification for political and legal decision-making. Up to now, these decision-making processes have been based on ethical norms, which in turn are established through normative practices such as expert discourse. Now we are beginning to replace the “reflective equilibrium of expert discourse” with an automatic process of norm induction based on the ethical intuitions of a number of non-experts who may not be representative at all. We currently face the fundamental question of the extent to which we can leave behind human reason and the discursive processes that we have long used to reach sensible decisions, both practical and theoretical.
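To make the worry tangible, here is a minimal sketch of what “norm induction” from lay intuitions can amount to in its crudest form: a majority vote over crowd-sourced verdicts on dilemma cases. The data and the aggregation rule are invented for illustration, not taken from any deployed system; the point is only that a non-representative sample can silently flip the induced norm.

```python
# Toy sketch of "norm induction" by aggregating lay intuitions.
# Invented data; illustrates how a non-representative sample of
# respondents can flip the norm that gets induced.

from collections import Counter

def induce_norm(verdicts):
    """Return the majority verdict on a dilemma case ('permit' or 'forbid')."""
    counts = Counter(verdicts)
    return counts.most_common(1)[0][0]

# Hypothetical verdicts on the same dilemma case from two samples.
representative_sample = ["forbid"] * 55 + ["permit"] * 45
skewed_sample = ["forbid"] * 30 + ["permit"] * 70  # e.g. self-selected online respondents

print(induce_norm(representative_sample))  # forbid
print(induce_norm(skewed_sample))          # permit
```

Unlike expert discourse, nothing in this procedure asks who the respondents are or why they judge as they do – which is precisely the gap between automatic norm induction and a reflective equilibrium.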
 
The interview was conducted by Stefan Heidenreich (carta.info) via email.
