The book sector is under pressure — How will Artificial Intelligence affect the world of translation?

ChatGPT, DeepL, Google Translate: will texts and translations soon be 100 percent producible by machines alone? What would that mean for the book sector? And how would the world of translators be affected?
 

By Andreas G. Förster

“If we want everything to stay as it is, then everything needs to change.” Lampedusa’s dictum in The Leopard has long since become a truism. Not every innovation brings change. The extent to which the literary translator’s work is going to be affected by the application of Artificial Intelligence (AI) is going to depend on the manner and extent of its use.

If we swallow the promises trumpeted by the IT industry and deploy AI software on a massive scale, then literary translation will be reduced to mere post-editing and will share the fate of factory work, where the reduction of the process to a succession of repetitive manual operations was just one unfortunate outcome. As things stand at present, in terms of transatlantic cultural transfer AI can't even render clothes sizes or physical dimensions correctly, let alone reproduce typographical peculiarities effectively. Viewed overall, the revising of machine-generated translations presents a translation bureau's copy-editors with a challenge sharply different from usual, in that the algorithm involved does not provide a reliable textual framework, and sentences can in certain circumstances end up as discrete entities entirely unconnected with each other. Hence André Hansen's remark concerning the Collective Intelligence Project: "A translation machine can make the correct choice in one sentence, and then, faced with exactly the same problem in the next sentence, make the wrong choice."

The Collective Intelligence Project was founded in 2022 by three translators, two male and one female, and is funded by the German Government's 'BKM-Programm Neustart Kultur' via the German Translation Fund. Its aim is to encourage German literary translators to engage in a systematic exchange of information about the advantages and disadvantages of using AI. The empirical framework was devised by a group of fourteen professional translators using the generic (i.e. not subject-specific) software built by the Cologne startup company DeepL, and feeding it extracts taken from a non-fiction book and a popular novel.

The Collective Intelligence Project ended up ascribing three main effects to machine-produced translations: the 'fatigue effect', the 'priming effect' and the 'obstacle effect'. The main cause of the fatigue effect is the absence of a single, solid textual foundation: whereas traditional translation works from just one source text, translation involving AI means that two texts vie for attention, the foreign-language original and the machine-derived translation, and dealing with both requires additional time and effort. The post-edit may be said to be 'primed', or pre-conditioned, in that it is invariably based on the machine version, as is evident in both vocabulary and sentence structure. The machine-derived text can itself sometimes represent an 'obstacle' if it becomes obtrusive and thereby supplants the literary original. These translatory considerations may be followed online in the summary by André Hansen already referred to, as well as in the reports from the fourteen Collective Intelligence Project participants.[1]

The International Organisation for Standardisation has indeed already agreed on a specific set of norms with respect to machine translations. DIN ISO 18587:2018-02 on post-editing identifies two grounds for using AI: saving time, and saving money. However, it also emphasises that not all texts lend themselves to machine translation (!), and that texts that have not undergone post-editing are not yet suitable for publication. Such is the current technological state of affairs. The degree of savings that can be achieved is always contingent on the individual circumstances of the case (savings of 10 to 30 percent have been mooted), but may on occasion turn negative, meaning that the process entails more work than translating without AI. The text and its translator/post-editor are the sole determining factors in such cases.

Can machines create works of art?

In Germany the question whether machines can acquire copyright over writings or images they have generated is relatively clear: copyright can accrue only to human beings, to natural persons. An artistic work is always necessarily human in origin; everything else is mere confection. Copyright always belongs to the originating artist; usage rights are the only thing that can be transferred. The current talk of 'emerging' art works seeks to smooth the path towards the re-defining and attenuation of copyright law: it is argued that if machine- and human-generated art works are no longer distinguishable, then human beings can no longer claim preferential protection. Whether any normal court would accept this line of argument is currently open to question.

The application of software has a devastating effect at the aesthetic level. Meaning and expressiveness both tend to lose all sharpness in machine-generated work. The severity of the overall loss may vary depending on the language, but the following effects may be observed in the case of German: minimal variation in verb use; multiple passive constructions; random parataxis; use of a nominal style, but largely without composite nouns; etc. Essentially, massive interferences frequently occur that have a deleterious effect on the original German. But it would seem that at the end of the day these are often deemed acceptable.

To get back to the question of whether or not to use AI: if the views of literary translators are to be believed, little time is saved. The argument goes that AI is chiefly useful as a supplementary source of inspiration with respect to a sentence, a paragraph or individual turns of phrase, and as such serves as one more tool, albeit the newest one. The process of interpreting foreign-language works of art thus potentially acquires an additional stage, but is not thereby accelerated, or at any rate not without a reduction in the quality of the resultant text.

The extent of any forward leap that translation machines currently in development may prove to be capable of will depend on a multiplicity of factors (not least the development of specific and adaptive software). Some people take the view, however, that these language models ultimately have irreducible limitations, with the result that human beings will never be superfluous to the process of writing and translating. Computer linguists accordingly make the point that machines take note of only one of the two or potentially three dimensions of a linguistic signifier, namely its form[2] – but of neither its content nor its meaning.

AI is never likely to simply supplant translators. However, the tensions between translators, editors and publishers may well increase if publishers and editors make themselves more and more dependent on emerging AI technologies. It must be acknowledged that at present, and for the medium term, the book industry is affected in respect not only of translations, but also of children's books, illustration, manuscript analysis, sales forecasting/programme planning and audiobook production. The prospect of fully automated book production is purely speculative at present, but is already causing widespread palpitations.
 

[1] André Hansen, „Kollektive Intelligenz: Kann KI Literatur?“, 2023,
https://kollektive-intelligenz.de/originals/kollektive-intelligenz-kann-ki-literatur/

[2] Emily M. Bender and Alexander Koller, „Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data", 2020, https://aclanthology.org/2020.acl-main.463.pdf
