Artificially Correct © El Boum

Artificially Correct

In journalism, on social media, in novels and in the classroom, artificial intelligence shapes the language of our everyday lives. It can reproduce bias – but it can also help combat discrimination.

This dossier compiles different perspectives on how artificial intelligence affects language and text production, and on why discriminatory language must already be addressed in schools. These articles marked the start of our upcoming events, with which we want to open up the topics of Artificially Correct and begin the discussion around AI and text where text production starts for many of us: at school.

Artificial Intelligence & Text Production

While we do not know what tomorrow’s algorithms will tell us, their answers will be based on the questions we ask and actions we take today. It is our duty to ourselves, and to the future, to make sure they are good ones.

Adam Smith

Language and power

School as an institution has contributed, and continues to contribute, to making certain people invisible. An environment must be created in which young people dare to come out and are not bullied for doing so – and unfortunately, that is currently not the case.

Sovia Szymula

Close-up of some books from above © Pixabay

New concepts, new ideas
Intercultural diversity in Finland's German textbooks

How culturally diverse is the image of Germany that Finnish textbooks convey today, and what options for shaping this image are open to teachers?

Jun.-Prof. Dr. Nina Simon, Can Kınalıkaya

Nina Simon in conversation
"Teaching critical race theory goes far beyond diversity-sensitive textbooks."

In her research, Junior Professor Nina Simon deals with the question of how teaching materials contribute to reproducing racism.


Sensitivity in translation

Human vs. Artificial Intelligence: Fierce Competition or Friendly Partners? © Philippos Vassiliades | CC-BY-SA

A.I. and Literary Translation

Is Artificial Intelligence advanced enough to grasp and process literary texts in all their linguistic richness, and translate them into another language? Will it ever be advanced enough? Or will it remain "artificial" instead of "artistic"? This dossier asks the experts.

More about this project

Language defines the world. The Goethe-Institut stands for inclusive language – and thus for an inclusive world.

With Artificially Correct, we are working with experts to develop tools that minimise bias in texts and strengthen a conscious approach to language.

Specifically, Artificially Correct deals with AI-based translation and writing tools and the biases (e.g. gender or racial bias) in the translations they generate. Artificially Correct creates an active network of people affected by this problem – translators, publishers, activists and experts by experience – and identifies partners who will pursue the issue with us in the long term. We bring together perspectives, create awareness, share knowledge and stimulate discussion.

Screenshot from the video interview with Bettina Koch, Bhargavi Mahesh, Janiça Hackenbuchner, Joke Daems, and Shrishti Mohabey. © Goethe-Institut

In conversation with
Bias by Us

At the Artificially Correct hackathon, BiasByUs created a website that acts as a 'library' of bias in translations. The collection is populated by crowdsourcing examples of bias found in machine translations. Watch them speak about the challenges and scope of bias in machine translation, the BiasByUs solution, their Artificially Correct hackathon experience, and more.
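At its core, a crowdsourced bias 'library' like the one BiasByUs describes is a searchable collection of reported translation examples. The following is a minimal sketch of that idea; the field names (`source`, `translation`, `bias_type`) are illustrative assumptions, not taken from the actual BiasByUs site:

```python
from dataclasses import dataclass

@dataclass
class BiasReport:
    """One crowdsourced example of bias found in a machine translation."""
    source: str        # original sentence
    translation: str   # machine-translated sentence
    bias_type: str     # e.g. "gender", "racial"
    note: str = ""     # optional explanation from the contributor

class BiasLibrary:
    """A tiny in-memory 'library' of bias reports, searchable by type."""
    def __init__(self):
        self.reports = []

    def add(self, report: BiasReport):
        self.reports.append(report)

    def by_type(self, bias_type: str):
        return [r for r in self.reports if r.bias_type == bias_type]

# Example entry: the well-known case where a gender-neutral pronoun
# (here Hungarian "ő") is resolved by MT to a stereotyped gender.
library = BiasLibrary()
library.add(BiasReport(
    source="Ő orvos.",               # Hungarian: gender-neutral pronoun
    translation="He is a doctor.",   # MT picks a male pronoun
    bias_type="gender",
    note="Neutral pronoun rendered as 'he' for a stereotyped profession.",
))
print(len(library.by_type("gender")))  # 1
```

A real deployment would add persistence and moderation of submissions, but the data model – example pairs tagged by bias type – is the essential part.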

Screenshot from the video interview with the winning hackathon-team Aïcha Konaté, Lauri Laatu and Marvin Mouroum. © Goethe-Institut

In conversation with
Word 2 Vec

During the Artificially Correct hackathon, Word 2 Vec focused on a flaw shared by Google Translate and other deep-learning systems: the struggle to comprehend complex content. As a solution, the team developed a translation bias reduction tool that lets you identify sentences susceptible to racial and gender bias. It analyses the greater context of each sentence and highlights sentences and phrases that could be susceptible to bias. Find out more about their solution and their hackathon experience!
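The article does not describe how the team's tool works internally. As a purely illustrative sketch of the general idea – flagging sentences a gendered target language might force into a stereotyped form – one naive heuristic is to look for role nouns that appear without any explicit gender cue. The word lists below are hypothetical and not from the actual Word 2 Vec tool:

```python
import re

# Hypothetical word lists -- assumptions for illustration only.
ROLE_NOUNS = {"doctor", "nurse", "engineer", "teacher", "secretary"}
GENDERED_WORDS = {"he", "she", "him", "her", "his", "hers", "man", "woman"}

def flag_bias_prone(sentence: str) -> bool:
    """Flag a sentence as bias-prone if it names a role or profession
    without an explicit gender cue, so a translation into a gendered
    language would have to guess (and may stereotype) the gender."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    has_role = bool(words & ROLE_NOUNS)
    has_gender_cue = bool(words & GENDERED_WORDS)
    return has_role and not has_gender_cue

print(flag_bias_prone("The doctor finished the shift."))  # True
print(flag_bias_prone("She is a doctor."))                # False
print(flag_bias_prone("The weather is nice today."))      # False
```

A word-list heuristic like this only marks candidates for review; analysing "the greater context", as the team describes, would require an actual language model rather than set membership.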

Database BiasByUs
What happened after the Hackathon?

Janiça Hackenbuchner from the BiasByUs team won our hackathon with the idea of a database that raises awareness of gender bias in machine translation. On medium.com she explains how it works and how it has progressed since the hackathon.

Do you want to use the database yourself? Take a look here!