Artificial intelligence, bias, text

How does bias get into a text, and what can we do about it?


A Black and a white person in front of an orange background with the words "they", "them", "sier" and "xier" written on it. © Goethe-Institut. Illustration: EL BOUM.

Artificial Intelligence & text production

In journalism, on social media, in novels and in the classroom: artificial intelligence shapes the language of our everyday lives. It can reproduce bias - but it can also help combat discrimination. This dossier compiles different perspectives on how artificial intelligence affects language and text production, and on why discriminatory language must already be addressed in schools. These articles mark the start of our upcoming events, with which we would like to open up the topics of Artificially Correct and start the discussion around AI and text where text production begins for many of us: at school.

Further resources

Want to find out more? We have compiled further articles on artificial intelligence, bias and text production.

Memories from our Hackathon

In October 2021, 12 teams from around the world came together digitally to develop ideas on how to combat bias in translation. What did they come up with? Here are some insights into our hackathon.
Artificially Correct Hackathon © Goethe-Institut. Illustration: EL BOUM

Artificially Correct Hackathon

How does bias get into translation, and what can we do about it? To find answers and solutions, we invited activists, translators, software developers and everyone interested in the topic to join our online hackathon at the beginning of October 2021. Have a read about the results or watch video interviews with two of the winning teams.

Screenshot from the video interview with the winning hackathon-team Aïcha Konaté, Lauri Laatu and Marvin Mouroum. © Goethe-Institut

In conversation with Word 2 Vec

During the Artificially Correct hackathon, the team Word 2 Vec focused on a flaw shared by Google Translate and other deep learning systems: the struggle to comprehend complex content. As a solution, the team developed a translation bias reduction tool that lets you identify sentences susceptible to racial and gender bias. It analyses the greater context of a sentence and highlights sentences and phrases that could be susceptible to bias. Find out more about their solution and their hackathon experience!
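The team's tool itself is not reproduced here. As a loose illustration of the general idea - flagging sentences whose machine translation may be susceptible to gender bias - here is a minimal heuristic sketch in Python. The word lists and the `flag_sentences` function are hypothetical stand-ins, not the team's code; a real tool would analyse context with a language model rather than match fixed lists.

```python
import re

# Hypothetical word lists for illustration only. Role nouns like these
# are forced into a gendered form in many target languages, which is a
# common source of gender bias in machine translation.
GENDERED_TERMS = {"he", "she", "his", "her", "him", "hers"}
AMBIGUOUS_ROLES = {"doctor", "nurse", "engineer", "teacher", "secretary"}

def flag_sentences(text):
    """Return (sentence, reasons) pairs for sentences that may be
    susceptible to gender bias when machine-translated."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = {w.lower() for w in re.findall(r"[A-Za-z']+", sentence)}
        reasons = []
        if words & AMBIGUOUS_ROLES:
            reasons.append("role noun gendered in many target languages")
        if words & GENDERED_TERMS:
            reasons.append("gendered pronoun")
        if reasons:
            flagged.append((sentence, reasons))
    return flagged

sample = "The doctor finished the shift. She went home. It rained."
for sentence, reasons in flag_sentences(sample):
    print(sentence, "->", "; ".join(reasons))
```

Here the first two sentences would be highlighted for a human reviewer, while "It rained." passes through unflagged.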

Screenshot from the video interview with Bettina Koch, Bhargavi Mahesh, Janiça Hackenbuchner, Joke Daems, and Shrishti Mohabey. © Goethe-Institut

In conversation with Bias by Us

BiasByUs created a website at the Artificially Correct hackathon that acts as a 'library' of bias in translations. The collection is generated by crowdsourcing instances of bias found in machine translations. Watch them speak about the challenges and scope of bias in machine translation, the BiasByUs solution, their Artificially Correct hackathon experience, and more.

Update by BiasByUs

Janiça Hackenbuchner from the BiasByUs team won our hackathon with the idea of a database that raises awareness of gender bias in machine translation. She explains how this works and how it has progressed since the hackathon.

About the project

Language defines the world. The Goethe-Institut stands for inclusive language - and thus for an inclusive world.

With Artificially Correct, we work with experts to develop a tool that minimises bias in texts and strengthens a conscious approach to language.

Specifically, Artificially Correct deals with AI-based translation and writing tools and the biases (e.g. gender or racial bias) in the translations they generate. Artificially Correct creates an active network of people affected by this problem - translators, publishers, activists and experiential experts - and identifies partners who will pursue the issue with us in the long term. We bring together perspectives, create awareness, share knowledge and stimulate discussion.