Artificial intelligence and bias

How does bias get into a translation, and what can we do about it?


A Black and a white person in front of an orange background with the words "they", "them", "sier" and "xier" written on it. Artificially Correct Hackathon © Goethe-Institut. Illustration: EL BOUM


How does bias get into translation, and what can we do about it? To find answers to this question, we invited activists, translators, software developers and anyone interested in the topic to join our online hackathon at the beginning of October 2021.

Further resources

More articles on translation and bias, and on artificial intelligence and translation.

About the project

Language defines the world. The Goethe-Institut stands for inclusive language, and thus for an inclusive world.

With Artificially Correct, we are working with experts to develop a tool that minimises bias in translations. We want to strengthen the position of translators, encourage a conscious approach to machine translation tools, and ensure those tools reflect the reality of as many people as possible.

Specifically, Artificially correct deals with AI-based translation tools and the biases (e.g. gender/racial bias) whose translations they generate. Artificially correct creates an active network of people affected by this problem - translators, publishers, activists and experiential experts - and identifies partners who will pursue the issue with us in the long term. We bring together perspectives, create awareness, share knowledge and stimulate discussion.