Fairness and Ethics in AI Development
Can artificial intelligence become fit for democracies?

Fair and just artificial intelligence requires ethical understanding, diverse development, strategic investments, and robust regulation.

By Henrik Chulu

Biased algorithmic decision-making systems have already caused harm to an untold number of people. From bias against women in Amazon's recruitment tool, to bias against minorities buying car insurance, to racially biased predictive policing systems, facial recognition, and criminal sentencing algorithms, the disparate impacts of unfair and opaque algorithms are already being felt by real people in the real world.

Similarly, in the world of politics, algorithmically curated social media platforms have become the main channels through which many people consume news and political opinion. Besides forming virtual echo chambers based on the political preferences of their users, these platforms have also become prime targets of active political propaganda and disinformation campaigns.

In the age of the Covid-19 pandemic, fake news and disinformation about the disease and the vaccination programs have become an especially salient example of how algorithmically curated platforms can have life-or-death impacts on democratic (as well as other) societies.

At the same time, and in response to the many revelations of algorithmic discrimination and online disinformation, calls for fairness and ethics in AI development have become mainstream.

In the paper “The global landscape of AI ethics guidelines”, published in Nature Machine Intelligence in 2019, researchers Anna Jobin, Marcello Ienca and Effy Vayena systematically review 84 documents containing ethical principles and guidelines for artificial intelligence, published by private companies, research institutions and public sector organizations from around the world.

This meta-analysis revealed that the documents generally agree on five ethical principles for AI: "transparency, justice and fairness, non-maleficence, responsibility and privacy". However, there is no agreement on how to implement these principles or who ultimately bears the responsibility for them and should be held accountable in the case of ethical violations.

Ethics is not just about codes

Sarah Spiekermann, who heads the Institute for Information Systems and Society at the Vienna University of Economics and Business, thinks that principles like the ones collated in the paper are a useful baseline, but far from sufficient to achieve democratic and just AI.

"Such lists are no more than hygiene factors of what the producers and service providers of those machines must deliver, the absolute minimum," she says. "Ethics is about much more than just prohibiting harm."

Ethics in the Western philosophical tradition, she points out, falls into three general categories that differ fundamentally in how they approach ethical conundrums.

Virtue ethics, beginning with Aristotle, focuses on character, that is, on how to be a good person.

Duty ethics, associated with Immanuel Kant, prescribes that actions should be bound by universal values. Finally, utilitarianism, founded by Jeremy Bentham, evaluates actions according to the amount of "utility" they bring, usually understood as happiness or quality of life.

Add to these the various religious, usually rule-based, ethical systems that human beings strive to live by, as well as the many non-Western ethical traditions.

"We invite innovation teams to think about what technology does to those values as well as those outside of your own cultural frame of thinking and tradition," she says. "All of this is new and is it then automatically good? Are these innovations desirable just because they are new? Very often today, we equate newness and innovativeness with values that are desirable in themselves."
Among other factors, the sheer variability of ethical thinking across philosophies, cultures and belief systems makes it impossible to simply code ethics into the infrastructure of artificial intelligence.

At the same time, philosophers, theologians and other professional ethicists are typically not proficient in building AI systems.

"This is not just a topic that can be addressed from a technical perspective solely or from a philosophical perspective solely. We rather need an interaction between these different viewpoints," says Carla Hustedt who leads the Ethics of Algorithms Project at the Bertelsmann Stiftung.

"Algorithmic decision-making systems are sociotechnological systems. That means the impacts that they have do not just depend on the algorithmic model itself or on the data but also, or particularly, on the underlying goals what is the system actually used for."
Besides calling for knowledge and understanding of algorithmic decision-making to be built up across society in general, and especially among the people who end up making decisions with the assistance of algorithms, Carla Hustedt warns against monocultures when it comes to the development of these systems.

There needs to be a vibrant and diverse ecosystem of algorithmic decision-making systems, rather than monopolies.

"If there's just one system in place, the potentials of harm are much larger in comparison to having a variety of systems. And at the same time, we need to make sure that the people deciding over the use of the systems and building the systems are diverse as well," she says.

Who makes algorithmic decision-making systems and for whom?

Building a diverse ecosystem for artificial intelligence requires a closer look both at who creates the systems and at who the systems are created for.

"Most often from the work I do, I see that AI mostly is being used in the Global North. The technology is serving mostly in Europe and in North America," says Nnenna Nwakanma from the World Wide Web Foundation. "From our analysis at the Web Foundation we have seen that there are three groups who are participating more: groups who have broadband internet access, groups who have data availability because AI runs on data, and the last group are people who have digital skills," she says.

AI research and development therefore skews heavily towards the Global North for economic reasons, argues Nnenna Nwakanma: investment in the field is driven not by humanitarian purposes but by the prospect of a return on investment.

"Investment is coming from private companies. Private platforms are the first investment forces in AI. This is followed by developed countries who are financing AI in the framework of either academic research or governance, so two parallel investment streams: industry and national development," she says.

This means that globally a lot of people, particularly women in the Global South, are left out of consideration when it comes to developing AI systems.

"We need to adapt the technology that we have in order to be able to solve their problems and at the same time maintain their human dignity," says Nnenna Nwakanma.

Ethics should not replace regulation

Developing sophisticated ethical frameworks and building diverse teams, combined with globally oriented impact investment, will no doubt help bring the field of artificial intelligence closer to being fair and equitable, but they are not sufficient. In the end, robust legislation is necessary.

"We need to begin to create a definition of what constitutes duty of care. Who has the legal responsibility to make sure that these systems are 'street-legal', that they are not negligent in their deployment? When we figure that out we have to develop metrics and quality criteria like we do with the environmental protection agency to prevent pollution," says Nathaniel Raymond, Lecturer at Yale Jackson Institute for Global Affairs.

"Often these pieces of AI are repurposed to multiple other systems. What's the chain of custody to prove that one system was the thing that did harm? Do we sue the designer? Do we sue the company? Do we sue the people who decided to use it in a context?"

Regulating AI systems does not necessarily require new laws or new rights, but rather a careful consideration of what we already have.

"I don't think that we need new fundamental rights for the digital age but I do think that we need to check whether current legislation and current protection mechanisms are still suitable in times of automation," says Carla Husted.

Getting the ethics and regulation of artificial intelligence right is crucial if societies are to invest equitably in the technologies while maintaining the foundations of their democratic institutions. Nathaniel Raymond outlines four pillars on which a democracy rests, all of which are affected by how we regulate AI: political representation, civic rights, regulation, and recourse.

"Democracies involve some form of representation. They involve a compact of rights which recognize certain rights and helps ensure respect for those rights. They allow societies to regulate conduct by individuals by organizations and we do that through a variety of ways most notably laws. And then fourth and finally, they allow recourse for individuals and for groups when representation is denied and rights are violated," he says. "How we figure out recourse and regulation will determine the future of representation and of rights."