How does bias get into translation, and what can we do about it? To find answers to this question, we invited activists, translators, software developers and everyone interested in the topic to join our online hackathon at the beginning of October 2021.

Illustration, to the left a magnifying glass, to the right a screen with a brain on it. © Goethe-Institut. Illustration: EL BOUM


Congratulations to the winning teams: Team 4c (A Word2Vec solution) and 3a (BiasByUs)!

Team 4c (A Word2Vec solution) addressed the fact that tools like Google Translate and DeepL have difficulties with more complex content. Their solution is a web platform built on a Word2Vec architecture, which compares words and measures how "closely" they correspond to the context. The idea was praised as a very complete solution that can be extended to further languages and types of bias; the jury also liked the idea of making bias measurable.
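The "closeness" that Word2Vec exposes is, at its core, cosine similarity between word vectors. A minimal sketch of that idea, using invented toy vectors rather than a trained model (a real Word2Vec model learns vectors of 100-300 dimensions from a corpus):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors, invented purely for illustration.
vectors = {
    "nurse":  [0.9, 0.1, 0.2],
    "doctor": [0.8, 0.2, 0.3],
    "banana": [0.1, 0.9, 0.1],
}

# "nurse" sits much closer to "doctor" than to "banana".
print(cosine_similarity(vectors["nurse"], vectors["doctor"]))
print(cosine_similarity(vectors["nurse"], vectors["banana"]))
```

Making bias measurable works along the same lines: one can compare how close a word like "doctor" sits to "he" versus "she" in the vector space.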

Team 3a (BiasByUs) created a website: a crowdsourced database of bias examples and a source of information on the impact of bias. In the future, the database could be implemented as a browser plug-in. It's also about creating a community around the subject and unbiasing the world collectively! BiasByUs was chosen for its collaborative character and its potential for further research. Users can educate themselves and find better alternatives. The jury encourages the team to work further on the browser plug-in idea.

An honourable mention goes to team 2b and their idea of a "gender neutral" toggle: with the toggle switched on, all German third-person pronouns (sie/er/neopronouns) would be rendered as they/them pronouns in English. The jury stressed the relevance of the proposed solution for the non-binary community. The idea also pointed to the important fact that pronouns and grammar are always evolving.
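A hypothetical sketch of what such a toggle could do on the English side of a translation: substitute gendered pronouns with singular they/them. (This is our illustration, not team 2b's implementation; a production tool would also need to fix verb agreement, e.g. "she is" → "they are", and resolve ambiguous forms from context.)

```python
import re

# Hypothetical mapping for a "gender neutral" toggle.
# Note: "her" can be possessive ("their") or object ("them");
# a simple lookup cannot tell them apart.
NEUTRAL = {
    "she": "they", "he": "they",
    "her": "their", "his": "their",
    "hers": "theirs", "him": "them",
    "herself": "themself", "himself": "themself",
}

def neutralise(text):
    def repl(match):
        word = match.group(0)
        neutral = NEUTRAL[word.lower()]
        # Preserve capitalisation at sentence starts.
        return neutral.capitalize() if word[0].isupper() else neutral

    pattern = r"\b(" + "|".join(NEUTRAL) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

print(neutralise("She said his idea was hers."))
```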

Read more about the hackathon and the winning solutions here.

A short summary of all top five teams who pitched their ideas can be found on the Goethe-Institut London Twitter account.

Congratulations to all ten teams who made it through the hackathon! We loved the open-minded spirit of all participants and hope to see the ideas merging and evolving towards a better, less biased landscape of AI-based translation tools!

Huge thanks to all our partners, experts and jury members, and to all the participants who shared their ideas with us! Special thanks to our partner Impact Hub, who planned the hackathon with us and handled its technical execution!

What was the Hackathon about?

The aim of the Artificially Correct Hackathon 2021 was to develop innovative, forward-looking ideas and projects for tackling bias in language and machine translation. Participants worked on one of the proposed challenges or on their own projects, with the aim of coming up with innovative solutions, e.g. concepts, games or apps. The best projects developed during the event were awarded prizes by the Goethe-Institut.



Click on the links below to find out all the details of each challenge. You don't need all the skills stated in the description, as long as you are interested in the topic. Participants will be grouped into teams by interest and knowledge background, unless they register as a group.

Challenge 1: Interactive station for recording and crowdsourcing a multilingual speech dataset. Conducted by ZKM (Center for Art and Media, Karlsruhe).
Read more

Challenge 2: Gender-fair post-editing of Machine Translation. Conducted by University of Graz and University of Vienna. In this challenge, you will find strategies for post-editing and improving biased MT outputs to achieve gender-fair translations between the languages of English and German.
Read more
Challenge 3: Database and detection of gender bias in A.I. translations. Conducted by IfM (Institut für Medien- und Kommunikationspolitik) and FCAI (Finnish Centre for Artificial Intelligence).
The goal of this challenge is to define and analyse gender bias from machine translation systems and create a database in which all users can gather, describe and discuss cases of bias.
Read more
Challenge 4: Identifying sentences susceptible to machine translation bias. Conducted by Danielle Saunders. During this challenge, you will automatically identify bias-susceptible sentences, ideally in a way that generalises to languages other than English.
Read more
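One very simple baseline for this kind of detection (our illustration, not the challenge's prescribed method) is to flag English sentences containing a gender-neutral role noun, since translating such a sentence into a grammatically gendered language like German forces a gender choice ("the doctor" → "der Arzt" / "die Ärztin"):

```python
# Hypothetical baseline: flag sentences with role nouns whose German
# translation forces a grammatical-gender choice.  A real system would
# learn such word lists from data rather than hard-code them.
ROLE_NOUNS = {"doctor", "nurse", "teacher", "engineer", "cleaner"}

def is_bias_susceptible(sentence):
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    return bool(words & ROLE_NOUNS)

print(is_bias_susceptible("The doctor finished her shift."))
print(is_bias_susceptible("It rained all day."))
```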
Challenge 5: Does bias in collections and archives survive translation and multilingualism? Conducted by Cultural A.I. In this challenge you will experiment with the Dutch tool SABIO (the SociAl BIas Observatory), which explores patterns of bias in museum collections and heritage archives, and build extensions for cross- and multilingual contexts.
Read more
Challenge 6: Measuring the effects of representational bias. Conducted by EQUITBL and WASP-WARA-Media and Language. The goal of this challenge is to find a way to automatically test whether unbalanced representation of genders affects the quality of the resulting tools, for example with regard to bias.
Read more

Day 1: Friday

Please note: all times are CET.

Arrive on Slack, get used to the platform and leave a friendly "hello" in your challenge space
A few words of welcome; introduction of partners & challenges
Find your team members on Slack, get to know each other and LET'S GO
During the hack time, challenge partners can be booked for Q&A

Day 2: Saturday

 Arrive on Slack - Good Morning Team :-) 
Welcome & Agenda Day 2 
Physical warm-up stretching before we continue hacking  
Continue working on the challenge with your team 
Experts are available during hack time and can be booked via Calendly
Treat yourself to a nice lunch!
Experts are available during hack time and can be booked via Calendly  
Closing Day 2 
Open-ended hacking until you're out of coffee

Day 3: Sunday 

Arrive on Slack
 Welcome & Agenda Day 3
Brief guided meditation to start your day
Input about submission process & pitch training 
It's the final countdown... 
Finish line - please submit your solutions
... you deserve it
Announcement of the Top 5 
The Top 5 pitch their projects  
Time to celebrate  
Final closing of the hackathon


If you have any questions or inquiries, don't hesitate to contact us:




About the project

Language defines the world. The Goethe-Institut stands for inclusive language; and thus, for an inclusive world.

With Artificially Correct, we work with experts to develop a translation tool that minimises the bias in translations. We want to strengthen the position of translators by developing a conscious approach to Machine Translation, and promote awareness of social diversity and inclusion.
Specifically, Artificially Correct deals with AI-based translation tools and their built-in biases (e.g., towards gender and race). Artificially Correct creates an active network of people affected by this problem - translators, publishers, activists and experiential experts, and identifies partners who will pursue the issue with us in the long term. We bring together perspectives, create awareness, share knowledge and stimulate discussion.