Join our Hackathon!
How does bias get into translation, and what can we do about it? To find answers and solutions to these questions, we need your input!
Are you an activist concerning questions of race, gender, identity or language?
Are you a translator?
Have you experienced discrimination?
Are you a software developer or social media expert?
Are you simply interested in the topic and just keen to do something about it?
Then join our Hackathon!
What we offer:
- Time to develop an idea/a translation tool
- Experts to advise you
- The chance to pitch your findings and to develop your tool after the Hackathon
- A cash prize of 2 x €2500 for the winning pitches
- Input lectures on different topics from international experts
- International experts will be present as advisors and members of the jury
Challenge 1: Interactive station for recording and crowdsourcing a multilingual speech dataset. Conducted by ZKM (Zentrum für Kunst und Medien, Karlsruhe). In this challenge, you will develop a set of tools and an interface for crowdsourcing a speech dataset that can be embedded in an interactive station.
Challenge 2: Gender-fair post-editing of Machine Translation. Conducted by University of Graz and University of Vienna. In this challenge, you will find strategies for post-editing and improving biased MT outputs to achieve gender-fair translations between the languages of English and German.
Challenge 3: Database and detection of gender bias in A.I. translations. Conducted by IfM (Institut für Medien- und Kommunikationspolitik) and FCAI. The goal of this challenge is to define and analyse gender bias in machine translation systems and create a database in which all users can gather, describe and discuss cases of bias.
Challenge 4: Identifying sentences susceptible to machine translation bias. Conducted by Danielle Saunders. During this challenge, you will automatically identify bias-susceptible sentences, ideally in a way that generalises to languages other than English.
Challenge 5: Does bias in collections and archives survive translation and multilingualism? Conducted by Cultural A.I. In this challenge you will experiment with the Dutch tool SABIO (the SociAl BIas Observatory), which explores patterns of bias in museum collections and heritage archives, and build extensions for cross- and multilingual contexts.
Challenge 6: Measuring the effects of representational bias. Conducted by EQUITBL and WASP-WARA-Media and Language. The goal of this challenge is to find a way to automatically test whether unbalanced representation of genders affects the quality of the resulting tools, for example with regard to bias.
ZKM – Zentrum für Kunst und Medien; IfM – Institut für Medien- und Kommunikationspolitik; Cultural AI Netherlands; University of Helsinki; University College Dublin; WASP – Wallenberg AI; Antirasistiska Akademin; Gender FairMT
About the project:
Language defines the world. The Goethe-Institut stands for inclusive language, and thus for an inclusive world.
With Artificially Correct, we work with experts to develop a translation tool that minimises bias in translations. We want to strengthen the position of translators by developing a conscious approach to Machine Translation, and to promote awareness of social diversity and inclusion.
Specifically, Artificially Correct deals with AI-based translation tools and their built-in biases (e.g., towards gender and race). Artificially Correct creates an active network of people affected by this problem – translators, publishers, activists and experiential experts – and identifies partners who will pursue the issue with us in the long term. We bring together perspectives, create awareness, share knowledge and stimulate discussion.