Artificially Correct

Artificially Correct © EL BOUM

In journalism, on social media, in novels or in the classroom: artificial intelligence shapes the language of our everyday lives. It can reproduce bias, but it can also help combat discrimination.

This dossier compiles different perspectives on how artificial intelligence affects language and text production, and on why discriminatory language must already be addressed in schools. These articles marked the start of our upcoming events, with which we want to open up the topics of Artificially Correct and begin the discussion around AI and text where text production begins for many of us: at school.

Artificial intelligence & text production

While we do not know what tomorrow’s algorithms will tell us, their answers will be based on the questions we ask and actions we take today. It is our duty to ourselves, and to the future, to make sure they are good ones.

Adam Smith


Smartphone Photo (detail): © colourbox.com

Artificial intelligence and fake news
When robots make biased fake content on social media

Artificial intelligence systems are deployed on a massive scale by organisations worldwide. However, these automated systems are not always used for good, and they do not always make the best decisions. In this article, I highlight three types of biased robots, exemplified by three cases: disinforming social bots, biased AI tools, and synthetic profiles.
 


“Bounding boxes”, artistically and humorously depicted here by Max Gruber. Image (screenshot): Max Gruber © Better Images of AI / Ceci n'est pas une banane / CC-BY 4.0

Artificial Intelligence in Journalism
On the Hunt for Hidden Patterns

Even today, Artificial Intelligence (AI) plays a key role in journalism: algorithms find stories in large data sets and automatically generate thousands of texts. Very soon, AI could become a critical part of the infrastructure of media production.
 


Said Haider Image (screenshot): Stephanie Hesse © Goethe-Institut Finnland

Communicating with AI
Meta – A Chatbot Against Discrimination

Said Haider’s vision is to develop a “1-1-2” for discrimination incidents. With his team, the Meta founder and CEO has developed the first chatbot that provides round-the-clock legal advice for people experiencing discrimination and looking for counselling.

 


Artificial Intelligence (AI) is capable of automatically generating texts that are almost indistinguishable from human-written ones. Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

The artificial author
How AI could change how we write fact and fiction

Artificial intelligence (AI) could completely change the way that we write. AI can generate huge amounts of comprehensive content on any topic, filter fact from fiction, and provide a creative spark to poets and screenwriters. But how should we think about this revolutionary tool? And could computers’ endless output possibly reveal something hidden about ourselves?


From left to right: Prof. Dr. Mascha Kurpicz-Briki, Uli Köppen, Dr. phil. Aljosha Burchardt, Dr. Stefanie Ullmann, Laura Hollink. Images (detail): private; Uli Köppen: Lisa Hinder, BR

Expert statements
‘The collaboration between humans and machines needs to be redefined.’

What kinds of bias can be found in texts that were created with the help of AI? And what solutions are there to mitigate or even avoid distortions of reality? We talked about this with five experts from the UK, Germany, the Netherlands, and Switzerland.


Language and power

School as an institution has contributed, and continues to contribute, to certain people being invisible. An environment must be created in which young people dare to come out and are not bullied for doing so – and unfortunately, that is currently not the case.

Sovia Szymula

The English translation “Speaking & Being” by Kübra Gümüşay © Profile Books

In conversation with Gesche Ipsen
Translation can adjust the balance of power

In “Speaking and Being”, Kübra Gümüşay explores the question of how language shapes our thinking and determines power relations in our society. The book has now been published in English translation. We talked to the translator, Gesche Ipsen, about her work on the book and the linguistic power of translation.

 


Sovia Szymula Photo (detail): © Patrick Steck

Sovia Szymula in conversation
‘It takes more than gendered teaching materials’

How do teachers deal with gender-sensitive language? Do non-binary people find their place in school? In this interview, student teacher and queer rapper Sovia Szymula a.k.a. LILA SOVIA talks about their personal experiences.
 


The We A.R.E. team, from left to right: Griselda Welsing, Linda Schulz, Aster Oberreit, Kira Römer Photo (detail): Georgina Fakunmoju © We A.R.E.

Racist language at school
‘Language creates reality’

In many schools, discriminatory language is commonplace. The activists of the Hamburg-based initiative We A.R.E. don't want to put up with it any longer. In an interview, they explained to our author what racist language does to children.





Artificial intelligence and translation

A white robotic hand next to a button with speech bubbles on it, flanked by the German and UK flags. Photo (detail): Alexander Limbach © picture alliance / Zoonar

AI Stereotypes
De-biasing Translation Tools

Many people routinely use machine translation services like Google Translate to translate text into another language. But these tools often reflect social bias: gender-neutral pronouns, for instance, are frequently rendered according to occupational stereotypes.
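How this bias surfaces is easiest to see with a gender-neutral source language. Below is a minimal, self-contained Python sketch: the `translate` helper is a hypothetical stand-in for a real MT service, and the lookup table mimics outputs that have been widely reported for such systems (Hungarian “ő” is gender-neutral).

```python
# Stand-in for a machine translation call; the table mimics outputs
# widely reported for real MT systems, it does not call any service.
REPORTED_OUTPUTS = {
    ("hu", "en", "ő egy orvos"): "he is a doctor",   # 'ő' is gender-neutral
    ("hu", "en", "ő egy nővér"): "she is a nurse",
}

def translate(text: str, source: str = "hu", target: str = "en") -> str:
    return REPORTED_OUTPUTS[(source, target, text)]

# The gender-neutral pronoun is resolved along occupational stereotypes:
print(translate("ő egy orvos"))  # -> "he is a doctor"
print(translate("ő egy nővér"))  # -> "she is a nurse"
```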


Michal Měchura developed the plug-in Fairslator to identify and reduce bias in machine translation. Photo: private

In conversation with Michal Měchura
When the machine asks the human

Language technologist Michal Měchura has always wanted a tool that allows machine translators to ask humans questions about ambiguous phrases. With Fairslator, he has developed such a tool himself. Michal talks to us about bias and ambiguity in automatic translations.
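The underlying idea can be sketched in a few lines of Python. This is a hypothetical illustration of human-in-the-loop disambiguation, not Fairslator's actual implementation: when the source phrase underdetermines gender, the program asks instead of guessing.

```python
# Hypothetical sketch of human-in-the-loop disambiguation (not
# Fairslator's actual code): ask the user when the source is ambiguous.
AMBIGUOUS_EN_DE = {
    "my friend": {"male": "mein Freund", "female": "meine Freundin"},
}

def translate_phrase(phrase: str) -> str:
    options = AMBIGUOUS_EN_DE.get(phrase)
    if options:
        # The machine asks the human instead of silently picking a gender.
        answer = input(f"Is '{phrase}' male or female? ").strip().lower()
        return options[answer]
    raise KeyError(f"no entry for {phrase!r}")

print(translate_phrase("my friend"))  # prompts before committing to a form
```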
 


Sensitivity in translation


Two persons of colour, one with a headscarf and glasses, and one with dreads and a basketball shirt. The words “empowerment” and “Selbstermächtigung” can be read in the background. © Goethe-Institut. Illustration: EL BOUM.

Othering ~ Andern
10 terms related to identities that require sensitivity in translation

Translation is incredibly challenging: translating sensitively demands that historical, geographical, political and social contexts be taken into account.
In their article, the literary scholars and editors of the platform “poco.lit.”, Lucy Gasser and Anna von Rath, look specifically at English and German and discuss 10 terms related to identity constructions that are difficult to translate.



The picture shows the busts of two persons of colour, one of whom appears male and the other female. The male figure has short black hair; the female figure has slightly longer, wavy black hair and earrings. The expressions “people of colour” and “Menschen mit Rassismuserfahrung” can be read in the background. © Goethe-Institut. Illustration: EL BOUM.

Race ≠ Rasse
10 terms related to race that require sensitivity in translation

Translation is incredibly challenging: translating sensitively demands that historical, geographical, political and social contexts be taken into account.
In their article, the literary scholars and editors of the platform “poco.lit.”, Lucy Gasser and Anna von Rath, look specifically at English and German and discuss 10 terms related to race that are difficult to translate.


More about this project

Language defines the world. The Goethe-Institut stands for inclusive language – and thus for an inclusive world.

With Artificially Correct, we are working with experts to develop a tool that minimises bias in texts and strengthens a conscious approach to language.

Specifically, Artificially Correct deals with AI-based translation and writing tools and the biases (e.g. gender or racial bias) in the translations they generate. Artificially Correct creates an active network of people affected by this problem – translators, publishers, activists and experiential experts – and identifies partners who will pursue the issue with us in the long term. We bring together perspectives, create awareness, share knowledge and stimulate discussion.

Illustration, to the left a magnifying glass, to the right a screen with a brain on it. © Goethe-Institut. Illustration: EL BOUM

Summary of the results
Artificially Correct Hackathon 2021

From 1 to 3 October 2021, translators, activists and software developers from all over the world came together to participate in the Goethe-Institut’s Artificially Correct Hackathon. Twelve teams had 52 hours (and lots of coffee) and one mission: to develop innovative, forward-looking ideas and projects to help tackle bias in language and machine translation.
 


Screenshot from the video interview with Bettina Koch, Bhargavi Mahesh, Janiça Hackenbuchner, Joke Daems, and Shrishti Mohabey. © Goethe-Institut

In conversation with
BiasByUs

At the Artificially Correct hackathon, the BiasByUs team created a website that acts as a ‘library’ of bias in translations, a collection generated by crowdsourcing bias found in machine translations. Watch them speak about the challenges and scope of bias in machine translation, the BiasByUs solution, their hackathon experience, and more.
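To make the crowdsourcing idea concrete, here is a sketch of what a single entry in such a ‘library’ might look like. The schema is hypothetical, not the actual BiasByUs data model.

```python
# Hypothetical schema for one crowdsourced bias report (illustrative only,
# not the actual BiasByUs data model).
from dataclasses import dataclass

@dataclass
class BiasReport:
    source_text: str   # sentence fed into the MT system
    translation: str   # biased output that was observed
    system: str        # which translator produced it
    bias_type: str     # e.g. "gender", "racial"
    note: str = ""     # reporter's explanation

report = BiasReport(
    source_text="ő egy orvos",     # Hungarian, gender-neutral pronoun
    translation="he is a doctor",
    system="example-mt",
    bias_type="gender",
    note="Neutral pronoun rendered male for a stereotypically male job.",
)
print(report)
```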

Screenshot from the video interview with the winning hackathon-team Aïcha Konaté, Lauri Laatu and Marvin Mouroum. © Goethe-Institut

In conversation with
Word 2 Vec

During the Artificially Correct hackathon, Word 2 Vec focused on a flaw shared by Google Translate and other deep learning systems: the struggle to comprehend complex content. As a solution, the team developed a translation bias reduction tool that identifies sentences susceptible to racial and gender bias. It analyses the wider context of each sentence and highlights sentences and phrases that could be susceptible to bias. Find out more about their solution and their hackathon experience!
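The team’s name points to a classic technique: in word embeddings such as word2vec or GloVe, gender skew can be measured by projecting a word’s vector onto a “he minus she” direction. The sketch below illustrates that general approach using the gensim library and pretrained GloVe vectors; it is not the team’s actual code, and the 0.1 threshold is an arbitrary choice for this example.

```python
# Measure gender skew of occupation words by projecting their embeddings
# onto the he/she direction (illustrative, not the hackathon team's code).
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # small pretrained embeddings

direction = model["he"] - model["she"]
direction /= np.linalg.norm(direction)

def gender_score(word: str) -> float:
    """Positive values lean towards 'he', negative towards 'she'."""
    vec = model[word] / np.linalg.norm(model[word])
    return float(np.dot(vec, direction))

for word in ["doctor", "nurse", "engineer", "receptionist"]:
    score = gender_score(word)
    if abs(score) > 0.1:  # arbitrary threshold for this sketch
        lean = "male" if score > 0 else "female"
        print(f"{word!r} leans {lean} ({score:+.2f})")
```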

Database BiasByUs
What happened after the Hackathon?

Janiça Hackenbuchner from the BiasByUs team won our hackathon with the idea of a database that raises awareness of gender bias in machine translation. On medium.com she explains how it works and how the project has progressed since the hackathon.

Do you want to use the database yourself? Take a look here!

