Self-driving cars
A Question of Individual Cases?

Two automated electric buses run in regular service on a 1.5-kilometre test route between Iserlohn railway station and the campus of the University of Applied Sciences Südwestfalen. | Photo (detail): Rupert Oberhäuser © picture alliance / Rupert Oberhäuser

Who is responsible in an accident involving a self-driving car? When legal experts discuss assistance systems and autonomous driving, things get complicated. The research centre RobotRecht (“RobotLaw”) in Würzburg was set up specifically to deal with such techno-legal questions. Research associate Max Tauschhuber provides insights.

Mr Tauschhuber, which aspects of a vehicle would be affected by automation? And what does that involve in an autonomous vehicle?

It’s important to use the terminology carefully. For vehicles, we distinguish between multiple levels. In 2021, the German Federal Highway Research Institute introduced a new categorisation that comprises three levels.

The first level is the assisted mode. Drivers are supported in carrying out certain driving tasks, but they still need to monitor the system and their surroundings at all times. Examples include cruise control and lane-keeping assistants.

Then there’s the automated mode. In this mode, drivers can undertake activities unrelated to driving while the system drives the vehicle. They could read something, for example. However, drivers must remain sufficiently alert to take over the task of driving again in a timely manner when prompted by the system.

The third mode is the autonomous mode. Only the system drives the vehicle. Any humans on board are just passengers. Buses that operate in autonomous mode are also called shuttles. It’s conceivable that these vehicles might no longer have a steering wheel or other controls.

At what point do legal considerations enter into the design or the construction of an automated or autonomous vehicle?

That depends on the respective legal issue. There are certain quality standards, such as ISO or DIN norms. These norms aren’t laws but standards established by private institutions; in the best case, however, they correspond to legal requirements.

As for fundamental questions that can’t be captured by such standards, the automotive industry has an interest in having them resolved academically – ideally before a vehicle gets onto the road at all. Companies often do this through research collaborations – which is partly where our funding comes from – or PhD projects.

The German Road Traffic Act (StVG) was amended in 2017. It states that “under certain conditions”, automated systems may take over driving tasks. What are these conditions?

Under certain conditions, vehicles operating in automated mode are permitted on German roads. This was the first legislative foray of its kind worldwide.

These conditions are fulfilled if – put simply – the vehicle has the technical equipment to handle the task of driving after it has been activated. It must be capable of observing traffic rules, for example recognising a stop sign and acting accordingly. The driver must be able to override or deactivate this system.

It’s important to note that the driver still has rights and obligations in this scenario. For example, the law obliges drivers to take over the steering immediately when the system prompts them to do so or when they realise that it is no longer working properly. Drivers may turn to tasks unrelated to driving but must remain aware.

However, the law doesn’t clearly define what “remaining aware” means. Laws occasionally use such undefined legal terms; it then falls to case law to give them concrete shape.

You cannot expect anyone to know at what point the system will make a mistake.

In 2021, it was announced that further vehicles may be authorised on public roads “in specific areas”. Could you highlight the differences between this legislative amendment and the one from 2017?

In 2017, vehicles operating in automated mode were authorised. In 2021, the StVG was again reformed and now vehicles operating in autonomous mode may be authorised as well.

That’s the main difference. There are no drivers on board the vehicles anymore, only passengers. Possible application scenarios are buses as well as areas of logistics, postal services or document distribution. Beyond that, it would also make sense for trips between medical care centres and aged-care or nursing homes.

Are there any municipalities where such vehicles are already in use?

There are a few places with shuttle buses on the road, such as Monheim am Rhein in North Rhine-Westphalia and a number of Upper Franconian towns. However, these are research or pilot projects. They weren’t licensed under the StVG, because that has only been possible in principle since last year; instead, special authorisations were granted in those cases.

In case of an accident with an assisted, automated or autonomous vehicle, who is responsible if the accident was caused by a system decision?

In this context, Professor Dr Dr Eric Hilgendorf, director of the RobotRecht research centre, often presents the so-called Aschaffenburg Case from 2012. A vehicle equipped with a lane-keeping assistant was driving into the town of Alzenau near Aschaffenburg. At the entrance to the town, the driver suffered a stroke. He lost consciousness but kept gripping the steering wheel, jerking the vehicle to the right. Had the vehicle followed this “command”, it would have ended up in the bushes and come to a stop. Likely nothing would have happened.

Tragically, however, the lane-keeping assistant steered the vehicle back onto the road. The car continued at high speed and hit a young woman and her child in the town centre. They died at the scene.

How do you approach a case like that legally?

There are two sides to consider when trying to resolve such a case legally. Firstly, in road traffic we have so-called strict liability, which applies regardless of fault: you are liable for bringing a permitted hazard – a vehicle – into the public sphere.

However, there is also fault-based liability. In a sense, this is the opposite of strict liability. You need to prove that the defendant acted at least negligently. This is difficult, not least in the Aschaffenburg case: it’s not possible to establish any wrongdoing on the part of the driver because the stroke was not foreseeable.

This question will only become more important with the use of autonomous systems in road traffic, because mistakes made by the system aren’t foreseeable either. You cannot expect anyone to know at what point the system will make a mistake. That’s why there will likely be a shift away from fault-based liability towards strict liability, and even more so towards a vehicle manufacturer’s liability.

Criminal liability is harder to determine. In criminal law, we have the so-called fault principle. This means you have to prove in each instance that the perpetrator is personally at fault. In criminal law, fault means intent or negligence. In road traffic, you’ll rarely be able to assume intent. If you can, though, the case is straightforward.

It’s primarily with negligence that things get complicated. Imagine a driver parking with a parking assistance system: a child playing in the parking area is injured because the vehicle’s sensors were dirty and the system didn’t detect the child. Acting negligently means disregarding the due care required in traffic. That’s the basic definition.

Now you could argue that the parking assistant is designed for self-parking, so the driver doesn’t need to pay attention to anything once it has been activated. However, a prudent driver would anticipate the possibility of dirty sensors – and of children playing in the car park. Thus, the fact that an autonomous technical system is used does not automatically exempt its user from criminal liability.

The law dictates that the system must not allow for any prioritisation of individual attributes.

What would the situation look like in an accident with a completely autonomous vehicle where the driver is not responsible for taking over the steering or even has no way of doing so?

This has become more complicated as well. For example, how a vehicle should deal with decisions that affect lives or physical integrity is now regulated by law.

One prerequisite is that a motor vehicle incorporates a system for accident avoidance, meaning it should be able to avoid and reduce harm. Here’s the interesting bit: when the system recognises a situation in which a violation of legally protected rights – such as physical integrity or even life – is unavoidable, the system must be able to identify independently which legally protected right it should prioritise. Problems like this are ethical dilemmas.

If there are two alternative unavoidable harms, the vehicle’s system should automatically attribute the highest priority to the protection of human life. Thus, if the vehicle has to choose between injuring and killing a human being, it must choose the injury.

We can take it further from there. What if a human life is at stake in both alternatives? In these cases, the law dictates that the system must not allow for any prioritisation of individual attributes. We cannot have a situation where a vehicle detects people’s ages and automatically kills the older person.

However, systems also shouldn’t evaluate quantitatively. They shouldn’t decide between 100 human lives and one human life. That is the big ethical discussion in such cases. It has been going on for thousands of years – before autonomous vehicles even existed. Now vehicles are supposed to be able to do it, but for the time being, that probably won’t be technically feasible.
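To make that hierarchy concrete, here is a minimal, purely illustrative sketch in Python of the priority rules described above. It is not any real vehicle system or the wording of the law, and all names in it are hypothetical: it encodes only the ordering – property damage before personal injury, personal injury before loss of life – and deliberately refuses to rank one human life against another, whether by personal attributes or by numbers.

```python
# Illustrative sketch only – hypothetical names, not a real vehicle system.
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional


class Harm(IntEnum):
    """Severity of the worst harm an alternative would cause (higher = worse)."""
    PROPERTY_DAMAGE = 1
    PERSONAL_INJURY = 2
    LOSS_OF_LIFE = 3


@dataclass(frozen=True)
class Alternative:
    """One unavoidable outcome the system could steer towards.
    Deliberately no fields for age, number of people affected, etc.:
    the law forbids weighing such attributes against each other."""
    label: str
    worst_harm: Harm


def choose(a: Alternative, b: Alternative) -> Optional[Alternative]:
    """Pick the alternative with the lesser harm. If both would cost a
    human life, return None: the law names no permissible tie-breaker."""
    if a.worst_harm != b.worst_harm:
        return a if a.worst_harm < b.worst_harm else b
    if a.worst_harm == Harm.LOSS_OF_LIFE:
        return None  # no ranking between human lives is allowed
    return a  # equally severe non-fatal harms: no legal preference either way


# Example: injuring a person takes priority over killing one.
swerve = Alternative("swerve", Harm.PERSONAL_INJURY)
brake = Alternative("brake", Harm.LOSS_OF_LIFE)
assert choose(swerve, brake) is swerve
```

The telling part of the sketch is the gap it leaves open: where both alternatives would cost a human life, the function returns nothing, because the law gives the system no criterion it may apply.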

The interview was conducted by Juliane Glahn, trainee online editor at Zeitgeister magazine.
