Discussion
How to Fix Bias in Machine Translation

Online

Language defines the world, and machines that translate it inevitably contribute to this definition. Although AI-based translation tools are constantly producing better results, they still exhibit many shortcomings and distortions with regard to “gender” and “race”. For instance, a literal translation of “lightness” into German would be “Helligkeit” or “Leichtigkeit” (in the sense of weightlessness). In texts that refer to race, however, translations frequently use “helle Haut” (light skin) instead. This explicit reference to a physical feature perpetuates the biologisation of semantic structures referring to race in German. You can find more examples here.

In a project entitled Artificially Correct, the Goethe-Institut, which promotes inclusive language, is working with experts to develop new tools that minimise bias in translations. The aim is to strengthen the position of translators and establish a conscious approach to translation machines, whilst ensuring that the realities of as many people as possible are included in the translation process. In this panel, participants and jury members of the Artificially Correct Hackathon discuss how to approach this bias problem.

The panel language is English.
Register here

Speakers: Danielle Saunders, computer scientist, Cambridge; Janiça Hackenbuchner, master’s student, Cologne; Marvin Mouroum, computer vision engineer, Berlin
Moderation: Simon Caton, Professor of Computer Science, University College Dublin
                                  

Danielle Saunders is a research scientist at the RWS Group with a focus on machine translation. Her primary research interest is controlling the behaviour of automated translation systems when they encounter unexpected or unusual language, a topic she also addressed in her doctoral dissertation at the University of Cambridge.

Marvin Mouroum works as a computer vision engineer at iFab Ottobock and is a graduate of the European Institute of Innovation & Technology. He is a member of “A Word2Vec solution”, the winning team at the Artificially Correct Hackathon. The group addressed the fact that automated translation tools such as Google Translate or DeepL often struggle with more complex content. Their solution was a web platform that uses the Word2Vec algorithm to compare words and measure how strongly each one matches a particular context. The idea was well received because it represents a comprehensive solution that can be expanded to cover additional languages and types of bias.
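
The announcement does not describe the team’s implementation in detail, but the underlying idea can be sketched with a pretrained Word2Vec model: cosine similarity between word vectors serves as a rough score of how well a word fits a given context. The model name and example words below are illustrative assumptions, not the team’s actual setup.

    # Minimal sketch of the Word2Vec idea, not the team's actual platform.
    # Assumes gensim is installed; the pretrained Google News model
    # (roughly 1.6 GB) is downloaded on first use.
    import gensim.downloader

    model = gensim.downloader.load("word2vec-google-news-300")

    # Cosine similarity between word vectors: a higher score suggests the
    # word matches that context more strongly.
    print(model.similarity("lightness", "weight"))  # physical sense
    print(model.similarity("lightness", "skin"))    # racialised sense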
  
Janiça Hackenbuchner is currently writing her master’s thesis on “Gender Bias in Machine Translation” at Cologne University of Applied Sciences. She was also among the winners of the hackathon, in the “Bias in Machine Translation” challenge. Her team, “BiasByUs”, created a website designed to function both as a crowdsourced database of bias examples and as a source of information about the effects of bias. In the future, such a database could, for instance, be implemented as a browser plug-in. “BiasByUs” was picked because of its collaborative character and its potential for ongoing project work.
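
As a purely hypothetical illustration of what one crowdsourced entry in such a database might contain (the field names below are assumptions, not the BiasByUs schema):

    # Hypothetical record structure; all field names and values are
    # illustrative assumptions, not the BiasByUs schema.
    bias_example = {
        "source_language": "en",
        "target_language": "de",
        "source_text": "My friend is a doctor.",
        "machine_translation": "Mein Freund ist Arzt.",  # masculine by default
        "bias_type": "gender",
        "notes": "Gender-neutral English rendered as masculine in German.",
    }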

Simon Caton is Professor of Computer Science at University College Dublin (UCD). Before that he was a lecturer in data analytics at the National College of Ireland (NCI) and a research associate at the Karlsruhe Institute of Technology (KIT). He completed his PhD in computer science in 2010. His research focuses on grid and cloud computing in connection with automated processes and social networks.

Details

Language: English
Price: free