Artificial Intelligence
Ethics for Machines

What should we do when machines make the decisions, for example when driving a car? | Photo (detail): © RioPatuca Images – Fotolia.com

In the wake of digitisation, humans are delegating responsibility to technical devices. But which decisions should machines be allowed to make, and how should we deal with the consequences? In Germany, an ethics commission is working on these questions.

A motorist is driving fast along a country road when he suddenly sees an overtaking vehicle coming at him from the opposite direction. He has only two ways of saving himself: either he jerks the steering wheel round and hurtles off the road into a field, right into a group of children playing, or he takes his chances and stays on course. Will the other driver swerve aside? Whatever he does, there is too little time for a conscious decision; he will act on instinct. No court in the world would think of convicting him for his actions.

It would be different, however, if the driver were a robot: one of the autonomous vehicles, for example, that Google and Audi have been testing on the road over the past few years. Its high-speed processors would give the computer enough time to make a decision, but that decision would have to have been programmed into its algorithm in advance. So what should the verdict be in such a situation? And who should bear the responsibility for it?

Guidelines for algorithms

In order to clarify such issues, in autumn 2016 the German Federal Government set up the Ethik-Kommission für das autonome Fahren (Ethics Commission for Autonomous Driving). Under the direction of the former Federal Constitutional Court judge Udo di Fabio, a dozen scientists, computer scientists, engineers and philosophers have been discussing questions of decision-making responsibility for autonomous vehicles and devising standards that did not previously exist. “As long as there is no legal certainty in this area, there will be no investment in this technology,” says Armin Grunwald. The physicist and philosopher is a member of the Ethics Commission and also heads the Büro für Technikfolgen-Abschätzung (Office of Technology Assessment) in the German Parliament.

Autonomous driving is taking society into new territory. People are handing over responsibility to the computer that controls the vehicle. “We can only do justice to the mobility revolution by developing clear guidelines for algorithms,” said the German Federal Minister of Transport, Alexander Dobrindt, at the Ethics Commission's opening session in October 2016. On the basis of its recommendations, a law is to be passed that will allow autonomous vehicles to operate on public roads. The law is to clarify which responsibilities are to be borne by man and which by computer, and to provide legal certainty for customers, drivers, road users and automobile manufacturers. Armin Grunwald is convinced that this is just the first step towards equality between man and machine; others will follow. “It is time to develop a form of ethics for artificial intelligence.”

Science fiction did the groundwork

The Russian-American biochemist and science-fiction author Isaac Asimov did the groundwork. As early as 1942 he published his famous laws of robotics in the short story Runaround. The first law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The second and third laws state that a robot must obey the orders given to it by human beings and must protect its own existence, in each case only insofar as this does not conflict with the preceding laws.

In 1983, parallel to the development of the first autonomously operating weapons of war, Asimov added the so-called “zeroth” law of robotics: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” A law with shortcomings, for it could grant machines considerable freedom, for example the decision to kill individual humans if they pose a threat to the welfare of mankind.
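
Read as a program, the hierarchy of these laws makes the loophole explicit: the zeroth law is checked before the first, so harm to an individual can be sanctioned for the good of humanity. The following sketch is purely illustrative; the function and its parameters are hypothetical, not taken from any real control system.

    # Hypothetical sketch: Asimov's laws as an ordered rule check.
    # The zeroth law is evaluated before the first, which is exactly
    # the loophole: harming one person can be "permitted" if it
    # protects humanity as a whole.
    def action_permitted(harms_humanity: bool,
                         harms_individual: bool,
                         protects_humanity: bool) -> bool:
        if harms_humanity:
            return False              # zeroth law: absolute prohibition
        if harms_individual:
            return protects_humanity  # first law, overridden by the zeroth
        return True

    # A machine following this hierarchy would sanction killing an
    # individual who threatens the welfare of mankind:
    print(action_permitted(harms_humanity=False,
                           harms_individual=True,
                           protects_humanity=True))  # True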

An irresolvable dilemma

The German Ethics Commission for autonomous driving has so far not had to take such far-reaching decisions. It has mainly focused on liability risks. The current understanding is that the keeper is always liable for damage: the owner for his dog, the parents for their children. But what if a programming error causes the damage? Then the manufacturer would be obliged to take responsibility. That is why manufacturers plan to install a so-called black box in these cars, which records all the driving data. In the event of an accident, it will then be possible to ascertain who was driving: man or computer.
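
What such a black box would have to record is, at its core, simple: every handover of control, with a timestamp. The following sketch shows one possible minimal log format; the record layout and names are assumptions for illustration, not any manufacturer's actual format.

    # Hypothetical sketch of a black-box log: each handover of control
    # is recorded, so that after an accident one can reconstruct who
    # was driving at any given moment.
    from dataclasses import dataclass

    @dataclass
    class ControlEvent:
        timestamp: float   # seconds since the start of the trip
        controller: str    # "human" or "computer"

    def controller_at(log: list[ControlEvent], t: float) -> str:
        # Assumes the log is non-empty and sorted by timestamp.
        current = log[0].controller
        for event in log:
            if event.timestamp > t:
                break
            current = event.controller
        return current

    log = [ControlEvent(0.0, "computer"), ControlEvent(42.5, "human")]
    print(controller_at(log, 40.0))  # computer
    print(controller_at(log, 50.0))  # human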

The Commission has already drawn up a number of principles: the computer's algorithm should give priority to avoiding personal injury, accepting material damage instead. And it should not classify people, for example by size or age. In the case of minor accidents that produce nothing worse than a fender bender, such rules are relatively easy to implement, as the sketch below shows. The acute dilemma situation described above is another matter. Can the lives of many children be weighed against the lives of just a few car occupants?
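
For the unambiguous cases, the two principles translate into a simple ordering of the available maneuvers: predicted personal injuries are compared first, material damage only second, and no personal attributes of those affected enter the calculation. The sketch below is a hypothetical illustration with invented numbers, not the Commission's specification.

    # Hypothetical sketch: rank maneuver options by predicted personal
    # injuries first and material damage second; the options carry only
    # aggregate predictions, never attributes (age, size, ...) of the
    # people involved.
    def choose_maneuver(options: list[dict]) -> dict:
        return min(options, key=lambda o: (o["injuries"], o["damage_eur"]))

    options = [
        {"name": "brake hard",  "injuries": 0, "damage_eur": 8000},
        {"name": "swerve left", "injuries": 2, "damage_eur": 500},
    ]
    print(choose_maneuver(options)["name"])  # brake hard, despite higher cost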
 
The Ethics Commission has also discussed this dilemma in great detail, says Armin Grunwald. The problem is that no ethically justifiable decision is conceivable here: the lives of human beings must never be weighed against one another. This follows from the principle of equality in the Universal Declaration of Human Rights and in the German constitution. To resolve the dilemma technically, manufacturers could install a random generator in the vehicle, which would then decide in such situations. But who would want to get into a car like that?
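
Technically, such a random generator would amount to no more than a few lines, which is precisely what makes the idea so unsettling. Again a purely hypothetical sketch:

    # Hypothetical sketch: if no option may be weighed against another,
    # the machine simply draws lots.
    import random

    def resolve_dilemma(options: list[str]) -> str:
        return random.choice(options)

    print(resolve_dilemma(["stay on course", "swerve into the field"]))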
