
AI and Law
What should a robot be allowed to do?

What if it makes a mistake? The humanoid robot “Pepper” works in a nursing home in Erlenbach. | Photo (detail): © picture alliance / REUTERS / Kai Pfaffenbach

Artificial intelligence makes our lives easier in many ways. But who is responsible when algorithms and robots make mistakes?

By Johannes Zeller

We come across forms of artificial intelligence (AI) everywhere in our everyday lives, and it is now clear that they are becoming more and more similar to us humans in terms of their abilities: algorithms today can be not only logical but also creative, and robots teach themselves new things. What does this mean for the legal situation? Can a robot have rights and duties like a human? Who is to blame when a self-driving car makes a mistake? Questions like these on how to deal with artificial intelligence are becoming more and more of a challenge for lawyers and governments.

Who is to blame when a self-driving car makes a mistake? | Photo: © picture alliance / dieKLEINERT / Markus Grolik

Ethicists, lawyers and governments are spending a lot of time thinking about such questions, because to err, it seems, is not only human. Autonomous programs make their own decisions, and these can sometimes go wrong; even the best algorithm does not protect against that. One example is the self-driving car. Like all road users, it must follow certain rules. Unlike a human, it can be programmed so that it is never tempted to exceed a speed limit or to attempt a risky overtaking maneuver. What happens, however, when other road users react incorrectly, for example when a bicycle suddenly appears on the road? Imagine an evasive maneuver that swerves onto the sidewalk and possibly endangers a pedestrian. Should the car be programmed to risk the maneuver? Or should it decide for itself? And if the wrong decision is made, who will bear the responsibility?

Should AI be a legal entity?

In order to create a legal basis for such difficult cases at an early stage, the European Parliament proposed in 2017 that intelligent machines be given the status of an “electronic person” which, like natural persons and companies, would be recognised as a legal entity. AI researchers and legal experts, however, were not particularly enthusiastic about this idea. In an open letter to the EU Commission, 250 experts spoke out against the initiative. The proposal, they argued, rests on the false assumption that the question of liability cannot be answered when autonomous robots make wrong decisions, a misunderstanding fostered by the depiction of robots in science fiction and by sensational press releases. The German Federal Ministry for Economic Affairs and Energy (BMWi) also sees no need to create the legal status of an “e-person”. The legal questions raised by smart machines can be resolved within the existing legal system, the BMWi stated in a position paper on AI and law in the context of Industry 4.0: “AI systems have not yet achieved the degree of autonomy that would rule out a link to human behavior.” Humans must therefore continue to be liable for the consequences of using AI. After all, responsibility cannot simply be shifted onto a lifeless machine or computer program.

The robot as an originator?

On the other hand, there is the question of who should benefit when AI produces intellectual property. The subject is not entirely new: as early as the 1960s, painting robots raised similar questions. The works they created, however, were mostly based on random algorithms that cannot be compared in any way with human intelligence. In the past ten years, by contrast, AI seems to have “reached a new level of development”, as the BMWi acknowledged in its paper. Today robots write entire film scripts and compose pieces of music, achievements that can hardly be compared with the randomised doodles of earlier decades. So can a robot become a creator, an originator?
 
Lawyers like to refer to a precedent from the animal world. In 2008, the British photographer David J. Slater gave his camera to a macaque called Naruto, who snapped a “monkey selfie” that went viral three years later and spread around the world. The animal rights organisation Peta tried to sue on behalf of Naruto for the proceeds from the photo. There followed a lawsuit lasting several years, fought in the United States. In 2017, Slater agreed to an out-of-court settlement and pledged to donate a quarter of the future proceeds from the Naruto selfie to Peta. The Court of Appeals in San Francisco, however, did not accept the settlement. The lawsuit was dismissed on the grounds that Naruto itself had no say in the settlement and that the aim all along had been to set a precedent. In addition, Peta had to pay the photographer’s legal fees. Slater was recognised as the originator of the photo. He later sued the German punk band Terrorgruppe for using the photo on a record cover without his authorisation. The US Copyright Office stated that copyrights can only be granted to humans, and therefore not to animals or robots.
 
Currently, courts and governments do not absolve people of their responsibility for the AI they have developed, even if their inventions become inventors themselves. The rights and obligations remain with the users of the AI or with those who operate it. Britain’s Copyright, Designs and Patents Act took this position back in 1988, when the first home computers raised questions similar to those posed by the “learning robot” today. The EU Commission also seems sympathetic to this idea: technology must always be used in the service of mankind and respect the rights this entails, it stated in a communication on shaping Europe’s digital future in February 2020.
