De-biasing Translation Tools
Many people routinely use machine translation services like Google Translate to translate text into another language. But these tools also often reflect social bias. Anna von Rath is part of the team at poco.lit., a postcolonial literature platform, and advocates for more sensitive language usage and translations with the project macht.sprache.
Ms von Rath, how did the macht.sprache. project come about?
I’m actually a translator myself and a diversity trainer, and about two years ago Lucy Gasser and I started up poco.lit., a platform for the discussion of postcolonial literature and related subjects. The platform is bilingual: English and German. That's how the subject of translation came up in the first place. We work with translations a lot and often translate content for poco.lit. And postcolonial literature in particular abounds with words and expressions that have something to do with discrimination or colonial power relations and are awfully hard to translate.
We were partly using online tools for our work and noticed that a lot of the output was terribly problematic. That’s how we hit upon the idea of taking a closer look. Timur Çelikel and Kolja Lange then joined the team. They found the subject interesting too and provided the requisite technical expertise. When the Berlin Senate called for project proposals on digitalization in the cultural sector, we applied with macht.sprache.
What are “sensitive terms” in translations and what problems do AI translation tools have with them?
What we mean by “sensitive terms” is all words and expressions that relate to discrimination, as well as certain terms used to designate persons or groups. Here are a few examples: The English word “nurse” is usually translated into German as “Krankenschwester” [from krank meaning “ill” and Schwester meaning “sister”], which is gender-biased. “Race” is still frequently translated as “Rasse” in German, which is a very problematic racist term. Subtler aspects of language can be problematic as well, for example “ableist” expressions like being “blind to something”.
My own focus is on race and gender issues and, to a lesser extent, on ableism. But some of our registered users on macht.sprache. have highlighted anti-Semitic terms as well. There’s plenty of room to broaden the scope. That is what’s supposed to happen, because our project is designed in a way that allows people to contribute.
Why is AI translation biased and still using outmoded expressions?
Translation programs are trained on various corpora. Linguee, for example, was trained using EU texts, which tend to contain plenty of bureaucratic language. They’re not specialised texts that address discrimination issues. But AI merely reproduces what it has learnt, which is why prevailing social norms and commonly used expressions turn up in its output.
AI is only trained to a certain point, so it’s always a little bit behind the times, because language is constantly evolving. As we can see in society, language changes often arise out of activist contexts. It's usually universities that say at some point, “We're going to take this up and closely examine and discuss it.” It takes years for language changes to reach the mainstream. And then a few more to reach the translation programs.
Are there any languages that are particularly affected by these sensitive terms, or more likely to reproduce biases?
At poco.lit. we work with German and English, which was our point of departure in developing the idea for macht.sprache. in the first place. What’s special about German – and this goes for French, Spanish, Italian and so on as well – is that every noun is assigned a grammatical gender. Expressing yourself gender-neutrally or -inclusively is particularly difficult in gendered languages.
It’s easier in English, at least grammatically speaking. But if we take a closer look, the gendered aspects there are often just less conspicuous. Take “nurse”, for example, an occupation that many tend to think of as female. This has a lot to do with social norms.
I think each language and each geographical context has its own history: the way words and expressions are used, how the language has evolved and is evolving today. Automated translation can't reflect that very well.
How were the platform and the browser extension for Google Translate developed? What happens when I insert text to be translated?
We implemented the project in three phases. In the first phase, we developed the discussion platform machtsprache.de, on which anyone interested can join in the discussion or just check out the content. The idea is to gather words and expressions that may be sensitive, suggest possible translations for them and rate them. How to translate something, which words to choose in the target language, always depends on context. That's why we designed macht.sprache. to enable users to enter examples of existing translations in books, newspapers and films and discuss why they find a term suitable or not in this context. This first phase of the discussion platform, which is still running, laid the groundwork for everything that followed, as well as enabling us to learn from others’ input.
The second phase involved more specific training. Among other things, we worked with the Goethe-Instituts in Northwest Europe to organise events focusing on race, gender and ability/disability in translation. We invited various experts to provide input, which we also used in the third phase, when it came to developing the tool itself. The Text Checker we developed can be found on machtsprache.de. Before you translate a given text, you can copy it into our Text Checker, which will flag any sensitive words or terms it finds.
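The flagging step described above can be imagined, at its simplest, as a glossary lookup over the input text. Here is a minimal sketch in Python; the terms, advice strings, and function name are invented for illustration, whereas the real macht.sprache. glossary is crowdsourced and context-aware rather than a fixed dictionary:

```python
import re

# Hypothetical mini-glossary: sensitive terms mapped to translation advice.
# The examples echo those mentioned in the interview.
SENSITIVE_TERMS = {
    "nurse": "often rendered as the gendered 'Krankenschwester'; a neutral option may fit better",
    "race": "frequently translated as the problematic 'Rasse'; the right choice depends on context",
}

def flag_sensitive_terms(text):
    """Return (matched word, advice, position) for each flagged term in the text."""
    flags = []
    for term, advice in SENSITIVE_TERMS.items():
        # Whole-word, case-insensitive matching, so 'Nurse' is caught
        # but a word like 'racecar' is not.
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            flags.append((match.group(0), advice, match.start()))
    # Report flags in the order they appear in the text.
    return sorted(flags, key=lambda f: f[2])
```

A tool built this way only highlights candidates and offers advice; as the interview stresses, the choice of translation is left to the human, because suitability always depends on context.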
We received follow-up funding from the Prototype Fund to develop the browser extension because we feel it’s important for macht.sprache. to be as easily accessible as possible. The hurdle is lower if macht.sprache. can be integrated into the programs people are already using for their work anyway. Our Text Checker doesn’t translate; it suggests various translations and gives advice on sensitive terms. One new feature of the Google Translate extension is that all terms referring to persons are highlighted, the idea being to invite people to think about gender stereotypes and possible gender-inclusive or gender-neutral variants.
Are you planning to make the extension available for other languages and translation tools?
We are currently applying for additional funding to develop a browser extension for DeepL and to support other browsers. Our Google Translate extension currently works only in Google Chrome.
If we can raise the money and find the time, we’d like to work more languages into the program. I speak Spanish, one colleague of mine speaks French and another Italian, so those would be the logical languages to include. But we’d still have to grow the team or find outside experts because this work is very detailed.
Language changes, and words we use today may be outdated in just a few years. How can AI and your project, including the browser extension, adapt?
Our project has to be continually updated. The fact that language is constantly changing is just one aspect; the bigger a project like this gets, the greater the risk of trolls popping up and endangering the whole thing. That aside, the project is designed for crowdsourcing and community participation. Our hope is that the longer the platform keeps going, the more people will join in, so our database will automatically adapt to current language usage and stay more up to date than existing translation programs can at the moment. This collaboration is vital to us because our team is pretty small and we share a certain view of the world. So it’s vital for us to get input from different perspectives, from which we can learn something ourselves, and which will also make the whole application better.
The interview was conducted by Juliane Glahn, online editor trainee of the “Zeitgeister” magazine.