Artificial Intelligence and Peace
“We Need the Political Will”

Graffiti of a war drone with a palm leaf in Geneva. Photo (detail): Sascha Steinach © picture alliance / ZB

When people think of the use of AI in warfare, most of them think of “killer robots” and drones. But could AI also be used for peacekeeping operations? In our interview, Angela Kane, former Under-Secretary-General of the UN, explains how AI is used in UN operations and discusses its upsides and downsides.

You worked for the United Nations for quite a while. When did you first notice the importance of technology for national security questions, and what impact it could have?

First of all, I worked at the UN for 37 years, which is an incredibly long time and makes me a total dinosaur. I first became involved with technology, rather than AI, when I took a new job in 1995 in the Department of Public Information. They had just launched the first website, and it was very exciting. For the next two years, I did nothing but web development, and soon more and more things were online. That basically led to my later interest in artificial intelligence. Web development involved no “intelligence” at all, it was just technology, but you could do an awful lot with it. And it showed what can happen when people have access to information – what that actually does to peace and how you can influence it. I thought it was an extremely powerful tool, particularly because of the information. When you look at the increasing networking of information systems today, there are a lot of dark aspects, but back then it was all seen as positive.

When did the UN put AI on its agenda?

Information and communications technology has been on the agenda of the UN since 1998. It was originally put there by Russia, which saw it as a security issue, because you never know where, for example, cyber-attacks are coming from. There is a lack of transparency in terms of who initiates them: who is the person ordering an attack, is it just an amateur hacker, is it a targeted attack? And that issue has not made very much progress at all. No one can agree on the norms and rules, and even though there is a group of governmental experts, it is still a long way to rules or an instrument that could and should be used by the states. With all the positive aspects of AI, this is, to my mind, an aspect that really needs to be regulated; there needs to be a “norm” that is accepted by the states.

How has AI changed warfare and, for example, the soldiers controlling the drones?

That is one of the most fascinating developments. Since World War II, we haven’t had any declarations of war – though we have a lot of conflicts, the official number is about 150. The expression “boots on the ground” came up around 20 to 30 years ago. It means men fighting men in the trenches. Now you have warfare and warfighting that involves people, but it doesn’t pit one person against another, because it is done electronically. Drones are just the most visible technology in this respect. They are easily equipped with weaponry, and they’re getting larger and more powerful. There are regulations for the civilian use of drones – for instance, not to fly near buildings – but there aren’t any for military use. So someone is sitting somewhere, thousands of kilometres away, programming a drone to attack a certain target, which has been identified with the help of technology. This person doesn’t really know whether the drone is attacking the right target or what collateral damage the attack may cause. Of course, soldiers try to avoid that, but maybe they have had a bad day and are angry. Though the drone is not emotional, the person controlling it is.

Though the drone is not emotional, the person controlling it is.

Angela Kane

Also, the storage of nuclear weapons is not digitally safeguarded, so it’s much easier to hack these systems than it was 20 to 30 years ago. Digital systems are vulnerable both to hacking and in terms of tracing a hacker attack. For instance, the attack on the Saudi refinery last year was generally attributed to Iran, but that has never been proved. If the target had not been a refinery but a nuclear weapons storage site, it could have been seen as the first strike in a nuclear war. Thus, the level of danger has increased tremendously due to technology.

Under the Obama administration, we had statistics on civilian damage, but nowadays this information isn’t transparent. However, the absence of transparency is against international humanitarian law, to which almost all states have signed up. It is absolutely necessary to have this transparency.

You talked about the emotions of the people who control the drones interfering with their operations. But a drone’s operation is also influenced by the person writing the code. Bias in AI is a well-known problem. What does that mean for the use of AI in warfare? Could bias even be used intentionally in favour of one of the warring parties?

Yes, it can, and what it comes down to, again, is the lack of transparency. AI is a black box for many people, who don’t know which algorithms have been programmed into the system. Whoever creates an algorithm has a certain bias, and that bias very often becomes apparent in the outcome. We need more transparency about these algorithms and how they are programmed, because they affect people, whether in everyday life, at work or in war.

AI companies have established the position of ethics officer. However, these officers usually have advisory roles and cannot decide what can or can’t be done. Also, the ethics boards include very few people of colour or women; they are mostly made up of older white men. So there’s a definite bias there already.
When Google first started, its slogan was “Don’t be evil”, but that notion is long gone. Nevertheless, about two to three years ago, Google declined to develop the image-recognition project “Maven” further and cancelled its involvement. However, another company immediately stepped in. So unless you have some regulation of the development of these algorithms, nothing will happen.

What role do the public and governments play in the development of regulations concerning lethal autonomous weapons?

I have to link this topic to another development, namely the Treaty on the Prohibition of Nuclear Weapons. The treaty was the outcome of a humanitarian initiative that began with concern about the humanitarian consequences of a nuclear war. It developed into a major effort with many people participating – even though the states owning nuclear weapons were totally against it and tried to prevent it. Yet we are now only three ratifications away from its entry into force. That doesn’t mean nuclear weapons as a whole will be abolished, but the norm against possessing nuclear weapons is becoming ever stronger.

The campaign against killer robots is developing similarly and has an extremely good advocacy effort. The campaign members are mostly young and energetic people, which is very meaningful because they will ultimately bring about change. Nevertheless, it is unclear whether governments or parliaments will listen. It has not fully taken hold yet, but I think we are on the right track.

However, I also fear that we are already so far down the road in developing lethal autonomous weapons that it’s not that simple. But 30 states have already publicly objected to the development of lethal autonomous weapons. They have already worked out some formulations that could possibly be used in the elaboration of a treaty. It is particularly the countries that don’t have these weapons that don’t want to see them developed, because they know they would probably be their victims.

The term “killer robots” is in widespread use. What do you think about that?

“Killer robot” is a very catchy phrase because it tells you what it is. I understand that it captures the imagination and sends a powerful message. But on the other hand, I would like to see another term used. “Lethal autonomous weapon systems” is not exactly easy to say or remember and sounds very bureaucratic, so I get that we’re lacking a good name. Whatever you call them – killing machines, maiming machines – that is basically what they are used for, and we cannot get around that.

What opportunities does AI bring in terms of peacekeeping?  

There is tremendous potential for increasing information and transparency, and thereby creating links to people who may otherwise be totally isolated. Since we have made these advances in technology, we should be using these applications for the good of the people. After all, the UN Charter starts with “we the peoples”.

I remember in 2014 there was a tremendous uproar in the Congo – where I served myself – because a massacre had been committed. The peacekeepers, who were only nine kilometres away, had no idea, because there were no roads. This is what started the use of drones: to survey areas that are hard to access or too dangerous for deploying troops, and to make sure nobody is being harmed. These applications are very positive, but the fear is always that someone will abuse them. For this reason, the use of drones has to be requested anew every single time. Nowadays they are used, for example, in Mali and the Central African Republic, but not in South Sudan, because its government objected.

After all, the UN Charter starts with “we the peoples”.

Angela Kane

Do you think the possible damage done by AI is justified by its advantages with regard to peace and security?

It is not a question of deciding whether it is justified or not – the technology is there, and we should use it to people’s advantage. Then again, we’re missing regulations. There is a group in Geneva that has been discussing these issues since 2014, first in a larger format open to any member state, then in a smaller group composed of governmental experts. They have a veto right: if they don’t agree with something, that aspect is crossed out. They now also have guiding principles, but they haven’t agreed on anything that would lead to a treaty or an agreement. What I am concerned about is that the technology has long since outpaced this effort to regulate it. There’s a market for it, and we’re not doing enough.

The UN is an intergovernmental body; it therefore does state-to-state diplomacy and only occasionally – and not willingly – includes NGOs or industry in the process. However, we need to work with the industry, particularly in the field of AI. A more differentiated approach than the one we are taking right now is absolutely essential.

Many countries and government representatives lack the requisite knowledge and experience in this field, which of course is not their fault. They need to be trained, and we need a much more targeted approach to the key questions of how to handle the development of AI in warfare and in peacekeeping.

In your Couch Lessons interview, you said that the question of AI in warfare cannot be solved from a human rights perspective, but rather from a peace and security as well as a disarmament perspective. Don’t those go hand in hand?

It should go together. The human rights perspective is a very useful one, but it is easily shuffled aside. On the one hand, international humanitarian law has evolved tremendously since the Geneva Conventions. There is an obligation for states under humanitarian and human rights law to take care particularly of civilians. On the other hand, this is not always followed through in practice, but at the same time a certain amount of disapproval does come through. For example, on October 13th, new members were elected to the Human Rights Council, which sits in Geneva. I am not saying that the countries on the Human Rights Council all have a perfect human rights record. But what was interesting: Saudi Arabia was a candidate but was not elected to the HRC, while China was elected, but with fewer votes than before. This reveals a certain change of perception in terms of which governments abide by human rights standards and which don’t.

From my perspective, though, you have to pound a lot more on the human rights aspect and shame governments. Naming and shaming doesn’t always work, but the more countries set a higher standard, the better it is for the rest of the world. Many European countries abide by higher standards concerning ethical – “right” – actions, but we need more countries adhering to them. When economies are not doing well, when there is more poverty, when there are more migrants and refugees, adherence to those standards slips – with the pandemic even more so. We are seeing more autocratic governments emerging, and we are seeing harsher measures being taken against populations, which are suffering. There is more room for abuse of human rights and humanitarian standards. That is the greatest challenge we are facing right now: ensuring that those standards do not slip away even further. We need to make sure that governments get back into the fold of observing them, but how, I do not know.

What does the world need in terms of AI + peace? 

We need the political will to make progress in this area. But in the current political climate, I don’t see that happening. AI and peace are not aspects the industry focuses on. Industry can lead the development, but it also develops whatever is required and wanted for military purposes.

I wish the pandemic had shown us that we are in this together.

Angela Kane

I question whether the enormous amount of money being spent on military development – a lot of it in the AI field – is really necessary. I don’t know what else it would take – do we need more than the pandemic? Who or what are we fighting against right now? When we think about NATO, for example, the concept of hostility becomes fragile. It was set up to counter the Eastern Bloc, yet most of that bloc’s former members are now members of NATO. I wish the pandemic had shown us that we are in this together, that we need to fight the common enemy, the pandemic. Instead, we see further division: countries blaming each other for the pandemic, and the egoism of wanting to be the first country to secure a vaccine. In conclusion, we need a lot more political will, and that is really lacking right now.

Maybe that will change with the next elections, whether it is the one the US just had or the upcoming ones in other countries. Maybe in some places, maybe not.