Chances and risks
Does AI offer planetary solutions to planetary problems?

Planetary
© Goethe-Institut

Global problems require systemic solutions. AI promises help but carries risk when it comes to climate change, war and social inequality.

By Henrik Chulu

From the future of work to the future of war to the current global battle against carbon emissions, the rapid advancement of artificial intelligence technologies foreshadows radical transformation. However, while optimism about the global benefits of technological innovation may be healthy, so is a balanced look at the risks that its widespread adoption carries with it.

As humanity confronts a series of global, systemic problems, many are pinning their hopes on artificial intelligence being a major part of the solutions. Some nonetheless worry that even if these technologies solve some problems, they also come bearing unintended consequences.

Artificial intelligence against climate change

Artificial intelligence holds a lot of promise for helping combat climate change. Whether it is helping researchers detect signs of changing weather and climate, supporting engineers in climate change adaptation and in building robust and resilient infrastructure, or assisting policymakers in crafting mitigation strategies to reduce emissions and transition to a carbon-neutral future, the diversity of tools in the AI toolkit is useful.

"Machine Learning is a powerful tool but it's really important to stress that it's not a silver bullet," says Lynn Kaack of the Energy Politics Group ETH Zürich.

Together with a large group of machine learning researchers and climate experts, she wrote the report Tackling Climate Change with Machine Learning, which gives a high-level overview of the many ways in which machine learning can help the global effort against climate change.

"Machine learning is one piece of the puzzle and what's really important to note is that for machine learning to be really relevant, it needs to be used in collaboration with people who really understand a problem and at best also from the start in finding the problem," she says.
Planetary © Goethe-Institut When artificial intelligence works it usually works well, often better or at least faster than a human expert, but when it fails it risks failing badly without oversight by someone with deep domain expertise in the area that it models. The reason for this is that machine learning algorithms are only  as good as the data that was used to train them.

"If your data is not good and it doesn't tell you much about what you're actually interested in, your model will also not be good. This principle is known as garbage in, garbage out," says Lynn Kaack.

For the same reasons, machine learning will not provide quick technical fixes that allow the world to continue on its course, just a little greener and a little more high-tech. Climate change is a systemic problem with global impacts and requires a response of equal scale.

"What we need to recognize is that climate change is not something that's happening in the next 50 years or in the next 20 years. Climate change is here and it requires systems change," says Victor Galaz, Deputy Director and Associate Professor at the Stockholm Resilience Centre.

Just adding machine learning to the current systems will not bring us out of the climate crisis, he insists.

"It's not just about individual changes, slowing down the economy, or stopping people from travelling. You need to change the underlying systems of energy production and food production," he says.

Victor Galaz also points to the risks involved in putting too much faith in artificial intelligence. It is a general-purpose technology and not inherently green. Beyond the energy consumed in training the algorithms on massive amounts of data, machine learning can just as easily work at cross purposes with the goal of combating climate change.

"Tech giants could use computer vision or big data and applications to extract more oil or more coal. As we try to get hold of more minerals for our technologies we're going deeper into the oceans and some of these massive machines are machine intelligent and increasingly autonomous thanks to AI," he says.

Conflict in the age of lethal autonomous weapon systems

Just as AI carries both promise and risk when it comes to global climate change, warfare is another area where automation presents a brand-new reality.

Autonomous killer robots are a well-known science fiction cliché, but we are far from a situation where sentient machines engage each other on the battlefield with no human casualties. However, as the reality of autonomous weapon systems inches closer, enormous legal and ethical questions arise if the decision-making power over the life and death of soldiers and civilians is left to machines. The answers are only now starting to be worked out.

"You have certain regulations that apply to every single weapon but you do not have that in AI or AI-empowered weapons. It must be used, and that is the common view, in accordance with international humanitarian law. And what you have is Article 36 of the Additional Protocol 1 to the Geneva Conventions," says Angela Kane from the Vienna Center for Disarmament and Non-Proliferation.

Article 36 states that parties to the protocol are obligated to determine whether new weapons, means or methods of warfare, such as lethal autonomous weapon systems, would violate the protocol or any other rule of international law.

While this sounds like it would cover lethal autonomous weapon systems, it also means that there is currently no common understanding of the legality and norms of their use in war, and no guarantee that every country currently developing AI weapons is willing to abide by the conventions or accept that international law applies.

The central unresolved question when it comes to regulating these types of weapons is where the responsibility for their use lies: with the manufacturer, with the operator, or with the person setting them loose on the battlefield?

"It's very, very difficult to determine individual responsibility, so a number of states have actually agreed that if no person could be found responsible for the actions of the weapon system, that would not be acceptable because at least a person must be held accountable. That is usually called the person in-the-loop or the person on-the-loop." say Angela Kane.

It is also important to consider where else in the military artificial intelligence might be put to use, beyond the pointy end of the proverbial stick.

"It's about applying it to all these military tasks, not just the killer robot front end but even who gets selected for the military? How do they get trained? War is still about us it's about our politics, our failings, our arrogance," says military analyst and political scientist Peter W. Singer.

He is deeply concerned about a failure of understanding when it comes to the potential effects of AI, not only in military applications but also in wider use across the globe.

"Just as the technologies from science fiction are coming true, science fiction has not been all that helpful. We are on the 100th anniversary of the creation of the word 'robot'. It was coined for a play in 1920, which took the Czech word for servitude and used it to describe the playwright's idea of mechanical servants who wised up and rose up against their masters. Ever since then, killer robots have cut through our science fiction. It has also affected our discussion in the real world," he says.

This goes to show that the job of science fiction is not to predict the future but to "predict the present", to use a phrase from science fiction author Cory Doctorow. The original robots were not devised to envision a literal future but a figurative present, a symbolic stand-in for the working class rising up against the socioeconomic inequalities of its time. In the same way, the task of current science fiction thinking is to interpret the world of now in order to prepare us for the future.

"Whether it's the Geneva Convention or you think of the science fiction of Asimov's three laws [of robotics], every law, every system of ethics is open to interpretation. We're not gonna be able to program our way out of this," says Peter W. Singer.

The future of social and economic inequality

The impact of new technologies on war is evident in ways visible to the naked eye. Cities built in times of medieval warfare often feature tall surrounding walls that protected against incoming projectiles such as arrows and catapulted rocks, but that became convenient targets with the invention of gunpowder cannons. This led to the geometric fortifications, designed for defence in depth rather than height, that can be seen in later urban planning.

In the same way as new technologies directly impact the physical construction of cities, they also reshape how economic production is organised in societies. Looking back at other historical periods of massive leaps in technological innovation, Julian Jacobs, a Fulbright scholar and graduate of Brown University and of the Department of Political Economy at the London School of Economics, sees a corresponding trend. He argues that the current development in automation will have the same impact on economic inequality as the industrial revolution and later periods of technological advancement: wages will stagnate and capital will concentrate.

"In the 1920s, it was transportation, it was industrial design, it was widespread disruption of labour, so much so that the US secretary of labour at the time was on the record worrying about automation. But we can take this back even further to the industrial revolution. It's now also widely accepted that the Luddites, the people who protested the mechanisms that ushered in the industrial revolution, they may have been onto something. Though the economy leapt forward and wages and working conditions were relatively stable for a period, they eventually deteriorated and wages stagnated as labour was replaced by capital, and so inequality went up," he says.

Not only among the Luddites but in most industrialising societies, resistance against technologies that threatened people's jobs has been the rule rather than the exception.

"Economic historians have puzzled as to why people would have voluntarily participated in the industrialization process if it reduced their utility. Well the simple answer is that they did
not right? They rioted against mechanisation," says Carl Frey, Director of the program on the Future of Work at the Oxford Martin School.

From an economic perspective, this resistance against technological innovation makes rational sense for the people whose economic well-being is being undermined. In the industrial revolution, this meant the artisan middle classes.

"A big reason for this was the displacement of well-paying, middle-income artisan jobs which were replaced with the factory system that often employed children at a fraction of that cost and as a result of that the first industrial revolution actually left a lot of people worse off during the first seven decades of it," says Carl Frey.

Generally, there is a tendency for technological innovation to disrupt the middle of the labour market, creating a gap between highly skilled, often cognitive, labour and the unskilled manual labour that is difficult to mechanise or automate.

"Automation tends to hollow out middle skill, middle-wage work and lead to job growth among low-wage sectors of the economy and high-wage sectors of the economy. In other words, it provides greater financial reward for people whose skills are complemented by technology and punishes people who do not have the training or privilege of education," says Julian Jacobs.

Today, this dynamic plays out in the technology sector, where a chasm opens up between the creators of gig-economy startups and the precariat of unorganised labour doing the work, such as driving on-demand vehicles. It is a divide unbridgeable by way of social mobility: there is no ladder by which you can advance from working as an app-based driver to working "above the API", programming the app. But this is not a law of nature; it is a political situation.

"Just because technology has a tendency to increase inequality, it does not mean that it must, and we need as a community to bring AI into larger discussions of inequality," says Julian Jacobs.

One of the societal effects of rising inequality, he notes, is increasing political instability. Carl Frey also sees a direct correlation between automation and the rise of the current flavours of radical populist politics, noting that the main swing states won by Donald Trump in the 2016 US presidential election were those where most workers were exposed to automation.

"Robots are actually one of the prime reasons and where robotisation has occurred, we see that more people actually opted for Trump and we also see in Europe that people left behind due to automation are more likely to opt for populist candidates and want radical change," he says.