Amanda Svensson The Machine, or: He might as well have ghosted her

Illustration: © Ricardo Roa

What happens when the right words no longer come? Amanda Svensson spends months training AI models – while searching for language in her own life. An intimate account of loss, hallucinations and the desperate search for words that really mean something.

On 6 September 1780, Johann Wolfgang von Goethe traipses, toils and trips his way up the peak of Kickelhahn, and in the process writes one of the most famous poems in all of German literature, Über allen Gipfeln, or Wanderers Nachtlied, on the wall of a little cabin. Almost two hundred and fifty years later there are, according to the Swedish Dictionary of Translation, around two thousand translations of the poem into Swedish alone. Maybe because it’s mercifully short: twenty-four words, artfully arranged on eight lines with the rhyme scheme ABABCDDC.

Über allen Gipfeln
Ist Ruh,
In allen Wipfeln
Spürest du
Kaum einen Hauch;
Die Vöglein schweigen im Walde,
Warte nur, balde
Ruhest du auch.

According to the Kickelhahn Wikipedia page, Goethe visited Ilmenau, the part of Thüringen where Kickelhahn stands, twenty-eight times in his life. No more, no less. Exactly how many times he, in the company of his friend and patron Karl August, Grand Duke of Saxe-Weimar-Eisenach, climbed Kickelhahn is not clear from the digital history books. But it was more than once.
One must imagine Goethe happy.
That he was sort of rolling the grand duke ahead of him like a rock, waiting for the moment at which he would happen upon just the right words and scratch them into the walls of a mountain bothy.

*

In the summer of 2024 I was in the middle of a divorce, I just didn’t know it yet. Someone very close to me was very ill, that was something I did know. I was on the verge of making some major changes to my so-called career. What’s more, I owned a house that, in a very real sense, had been slowly taken apart until nothing but the walls remained, and it seemed unclear how it would ever become a house again. That summer, and the year that followed, felt at times like climbing a very high mountain. All the while, words – the right words, the true ones – felt increasingly out of reach.
Sometimes I hear people – mainly poets, frankly, which sometimes makes me wonder if I’ve chosen the right field – claim that there is no language for this or for that. Implicit is the idea that there are things so enormous there’s no way of expressing them verbally. I’ve always felt very sceptical about that claim. There’s a word for everything, and if there isn’t, you can make one. That is, so to speak, the whole point of words.
All the same: faced with loss after loss I’ve found myself more or less mute – completely incapable of writing anything meaningful, partially incapable of speaking, at times even incapable of thinking. Even the words I’m writing now feel false and belaboured. I can’t even be quite sure it’s me writing them – maybe there’s a ghost in the machine or just a machine so skilled in imitating something I could have written that it does it on command.
Do you reckon a machine can be happy?
Maybe it’s better to resist the temptation.

*

Berlin, May 2025. The German theatre audience are laughing. They are laughing so hard they’re crying, they simply can’t get enough Goethe. Goethe, chopped up into little bits. Goethe, quantified, Goethe, algorithmised, Goethe, made so strange he becomes nothing but sound.
The performance that has the audience doubled over in their chairs at the Berliner Festspiele is Georges Perec and Eugen Helmlé’s radio play Die Maschine, oder: Über allen Gipfeln ist Ruh from 1968, which has been showing at Schauspielhaus Hamburg this year, directed by Anita Vulesica. The piece is one of ten selected for the annual theatre festival Theatertreffen in Berlin, so the auditorium is jam-packed.
The theatrical potential of a radio play based entirely on sophisticated semantic games might sound a little limited, but it turns out to be very effective to give – as Vulesica does – Perec’s imaginary machine physical, human form. The machine in question has a single function: like a contemporary AI engine, it dissects and analyses Goethe’s famous graffiti poem Über allen Gipfeln according to five ‘protocols’. The machine has a ‘control’ – which sets what in AI-speak would be called ‘the prompt’ – and three output channels that generate the results, each represented by one actor in a simultaneously futuristic and nostalgic set consisting of pneumatic postal tubes and oversized buttons and levers. The five protocols are thrashed out in turn: statistics, linguistics, semantics, criticism, poetics. The control gives one command after another, and the increasingly harassed-looking number crunchers spit out the answers: syllables are counted, letters are swapped, nouns and verbs are pulled out, the words are put into alphabetical order, in reverse alphabetical order, words are swapped with the next five words in the dictionary, swapped with synonyms, swapped with antonyms… And so on, and so on, in increasingly absurd concatenations. The five protocols rise steadily in complexity, from straightforward mechanical data management to commands that require more complex or creative intelligence: free association, spotting connections, critical thought – in short, all the things that humans can do but which machines were very far from attempting when Perec wrote his radio play in 1968. Perec’s machine was a fantasy – a way of symbolically staging the almost magical mechanisms of language and human thought.
But in 2025? Not even the most far-fetched or absurd of the commands Perec came up with for his fictional machine are particularly tricky for the ‘large language models’ (LLMs) of today – the most famous and most-used of which is ChatGPT.
One effect of hearing Goethe’s poem repeated in never-ending variations for ninety minutes – which is what happens in the play – is that it ends up seeming completely meaningless. It’s like repeating one’s own name over and over – in the end you hear nothing but a sequence of strange sounds whose connection to your own self feels absurd.
But somewhere there, something completely new is created too.
For Perec, the machine wasn’t something that hollowed out language. On the contrary, like all of his experiments in form, it served to expand it, even if it did so in the most brutal fashion. Not until language has been reduced to its parts can new sense and meaning rise from the ruins. Perhaps this is the whole point of Perec’s machine: to generate new meaning by forcibly demolishing the old, thereby bringing people closer to the inner magic of language, that secret spark that makes it live.
There is a ghost in Perec’s machine. I think that’s why everyone was laughing so hysterically in the theatre. The ghost tickles their tummies, pointing at itself and saying: “You see? How absurd this all sounds? And yet still it means something to you. Can’t you see I’m not the one creating the meaning? It’s you – the living”.

*

England, autumn 2019. My daughter has lived five of her seven years outside Sweden, and I am beginning to worry that her feeling for the nuances of her mother tongue is not quite as well-formed as it should be. We’re baking and I ask her to get out a ‘bunke’, or mixing bowl; she gets out a glass ‘skål’, the kind of bowl you might fill with fruit. She looks at me, uncomprehending, as I try to explain that there’s a difference between a bunke and a skål. I can understand her confusion. I can’t really put my finger on the difference either. Is it something to do with size? No. The glass bowl she’d got out is just as big as the plastic mixing bowl I’d had in mind. Is it about the material, then? Not necessarily. You could have a skål made of plastic, a bunke made of metal. Function? To a certain extent. But in a pinch you can beat things in a skål and serve something like popcorn in a bunke. To really distinguish a bunke from other kinds of bowl, all these data points – size, material, function – need to come together, but even then, it takes something more to be conclusive. A kind of linguistic ‘Fingerspitzengefühl’ is needed – a creativity of thought that only human cognition is truly capable of. And perhaps something else – a sensual, lived experience. All this is hard to explain to a seven-year-old. So I explain it to her in the way that occurs to me in the moment:

“If you get the object out of the cupboard and get a spontaneous feeling that you want to put it on your head upside-down like a hat – then it’s a bunke. If not, it’s just a bowl.”

She’s never got it wrong since.

*

Finding the right words. Among all the words that exist. Bunke. Skål. To find reality among all the words that constitute reality. To find yourself, only you’re someone else. Finding words you could have said, but didn’t. I think about these things a lot in the course of my year-long mountain hike. I often stand there, staring at the empty walls in my tumbledown house, vaping (it’s a phase) and wondering when they’re finally going to appear: those twenty-four ingeniously arranged words that will bring calm to this mountain ridge.
They don’t come. Instead, I temporarily get lost in the machine.
Because, for reasons I don’t fully understand myself, I start doing a bit of work on the side for a global digital corporation that trains LLMs, that is, what we commonly refer to as AI. I choose this path myself, after seeing an advert on social media. It’s not for the money, though it’s pretty well paid. Neither do I have any interest in shaping the next generation of AI, as the advert so appealingly puts it. For an author, translator and journalist, this encroachment on my vocational domain actually runs counter to my interests. But there’s something tempting about working on the language-processing side itself. To get to feed something in at one end and see what comes out at the other.
Perhaps there is actually a ghost there, inside the machine. Maybe the right words will present themselves, if only I pull on enough levers.
And, as they say on the internet: You’ll Never Guess What Happened Next!
(Spoiler: nothing much happens, the right words don’t appear, everything is futile and I spend some time thinking about Goethe.)

*

Artificial intelligence has, since the term first appeared, given rise to discussions about where the line between human and machine lies. The singularity, the point at which machine intelligence surpasses and becomes indistinguishable from our own, has been lurking as a threat on the horizon the whole time. Some want to bring about such a complete dissolution of the boundary between human and machine – and the development by the machine of some kind of consciousness – that it becomes impossible to distinguish between these two different ways of being in the world. Others think that there is some diffuse quality to human consciousness that the machine will never be able to acquire. For some people, the looming victory of machine intelligence is a utopia, for others a nightmare. I don’t think much about that stuff. As long as we’re unable to define what human consciousness actually is, trying to determine whether a machine is able to achieve it feels like little more than an exercise in pointlessness.
In other words, I’m not hugely interested in what the machine might become in the future – a person? – but rather, in what it’s doing to people here and now. In short, what it’s doing to language, art, communication, to the ways in which we understand ourselves and each other in the even larger language model we call the world.

*

Spring 2025. I’m scrolling aimlessly through Instagram and for some algorithmic reason I stop at a post by a woman who’s been ghosted by her date. Or, not ghosted, exactly – the guy did actually get in touch and cancel the date. But still, the woman is annoyed and the message he sent is there for all to see. I read it several times and can’t see what the problem is. He expresses himself clearly and definitively, saying that he’s realised he doesn’t want to see her because he doesn’t believe the two of them are compatible, that he doesn’t want to waste her time and that she seems like a great girl, just not for him. Clichéd, sure, and rubbish for the woman who wants to go on a date – but you could hardly claim there’s been any real breach of etiquette.
It’s not until I read the comments that I get why the girl feels she’s been treated so disgracefully. All the commenters seem convinced there’s no way the guy could have written that brush-off himself. ‘Obviously AI, just look at that DASH!’ – ‘OMG like who would ever use a dash in a text lmao’ – ‘He might as well have ghosted her that’s so disrespectful’.
And then I see it too. The message is too well composed – it even has an en dash to separate clauses. As I read more of the comments I realise this is the punctuation mark – the dash – that you need to look for if you suspect you’ve been dumped by ChatGPT.
I also learn that the problem of Tinder dates not even bothering to write their own messages to the person they’re chatting with is widespread. Screenshot after screenshot flashes by in the comments, as people post proof that they’ve had long conversations with someone who seemed keen – but who had yielded to the temptation to let AI do the courting. The internet being full of bots is one thing. If you’ve spent more than five minutes online you’ll recognise one of those. But trying to date an actual person, who speaks through and with the help of a Perecian protocol? That’s a whole new ball game, a new grey zone that has opened up between authentic expressions of love and fake ones.

*

Training generative AI is a bit like finding yourself inside Perec’s Goethe machine, but instead of levers you’re surrounded by peppy digital cheerleaders trying to convince you that the work you’re doing is important, revolutionary and, above all, inescapable. I can’t say I find it important or revolutionary, but perhaps it is inescapable – humanity has yet to discover a technology it hasn’t made use of.
In any case, for a while it’s surprisingly stimulating, even if the work itself is pretty monotonous. It’s mostly a case of writing prompts based on certain predetermined subjects or restrictions, making the machine generate responses which are then graded against a particular schema. The main things graded are truthfulness, usefulness and linguistic accuracy. At times, you are the one writing the responses, but for the most part it’s a case of writing prompts so complex you force the model to make mistakes, big or small, which can then be corrected.
I’m happiest when the model makes straightforward linguistic errors, like using words with the wrong nuance, revealing a lack of understanding of tone, or just sounding a bit off. A bit rebellious.
Being the one to beat back that rebellion.
There’s something comforting in that.

*

A friend has done some work for a company that, for some reason, is refusing to pay the invoice. A year passes, with my friend making ever more strident demands, but no payment is forthcoming. Increasingly frustrated, he decides to use ChatGPT to write a legally watertight email threatening to take the company to court, strewn with references to various European laws and judicial bodies. This doesn’t work either.
I, having by this point been inside the machine for a long time, am not particularly surprised.
When absolutely anybody can find the right words for a given situation – in this case European employment law – those words lose their power. Naturally, the recipient knows it’s not a lawyer who’s written that email. They know it’s the machine, and the machine doesn’t want anything, doesn’t demand anything. There’s no weight behind the words. All the machine wants is to please.
In that way we’re pretty similar, the machine and I.

*

The best thing that can happen when you’re training an AI, at least in the project I was involved in, is to make it hallucinate. An AI generally has a very hard time admitting there are things it doesn’t know – so when the machine can’t answer a question it quite simply makes something up. An AI tasked with writing rhyming verse about the contents of a kitchen cupboard might happily try to tell you that ‘skål’ rhymes with ‘bunke’, simply because it doesn’t understand the parameters of the question.
The reason AI trainers are happy to encounter hallucinations of this kind is, of course, that it gives you something to do. If you’ve made the machine hallucinate, there’s something to correct, something to teach it so it can become better.
But I don’t think that’s the whole reason I feel a spark of excitement every time I manage to make the machine lie, exaggerate, make up stories, invent words that don’t exist, form insane linguistic constructions or borrow words from other languages that it vaguely attempts to bend to the norms of the Swedish language. It’s in these moments of vulnerability that I get a sense the machine might be alive after all. Or, not that it’s actually alive. Maybe more that it says something about things that are alive.
About living language and how it’s created.
And about living people – their idiocy, their fallibility, their constant desire to make themselves bigger, smarter, more articulate than they really are.
To put it plainly, I recognise myself in the machine. The frenzied hunt for the right words. The continual failure, sometimes so well disguised that it passes by unnoticed.
But unless you want to start hallucinating for real you should probably avoid that kind of identification.

*

Months pass inside the machine. I train the model every day. And the model learns. The model becomes more and more human. The model grows legs and feet and wanders around the kitchen, clattering about in the cabinets, getting out a bunke and whipping some cream that it tips into a skål. The model toddles its way up Kickelhahn, mops its brow and says: “Karl August, you see that bothy? You see those birds in the sky? Do you ever think about death?” The model picks up a pencil – or whatever writing implement they might have had in 1780 – and scratches a poem onto the wall. The model opens its eyes and realises that everything around her has changed without her noticing. She searches for the words to describe it. The real words, but they won’t come. The model travels to Berlin and goes to the theatre. She sees herself, in rudimentary form. People are laughing at her as though she wasn’t sitting right there among them. The model loses someone close to her. The model thinks: That’s the condition. All the sorrow and joy that’s born out of this language. That’s the price a human being pays for being human, not machine.
And then maybe, for a while, she feels regret. Was it really worth it, leaving the machine?

*

Berlin, May 2025. I’m walking home from the theatre. Goethe’s poem is echoing around my head – after ninety minutes it’s impossible to turn it off. It’s a poem about ageing and finding peace in the face of death. I find it difficult to tell whether I think it’s any good or not. I’m inclined to say no. It’s a throwaway poem. If I’d asked the machine to write a poem of twenty-four words, artfully arranged across eight lines with the rhyme scheme ABABCDDC, in the style of Johann Wolfgang von Goethe, it probably would have managed just as well. At least with a little training.
But that poem isn’t loved because it’s good. It’s loved because Goethe scrawled it on the wall of a little cabin on a mountaintop, a cabin he’d struggled his way to, together with his friend and patron Karl August of Saxe-Weimar-Eisenach. It’s loved because it was put there by someone who quite literally left his own human fingerprints on the words.
By this point I’ve stopped training AI models. Doing things with words in that way was, when all was said and done, something that had very little to do with real language. Only the living can make something that’s alive.
Right? On a May evening in Berlin, marinated in Goethe, it’s easy to feel romantic. But I’m still not completely sure what kind of words I’m waiting for and how they differ from the many thousands of words I helped generate in the machine. Maybe it’s simply not the case that there are true words and empty words. That there could be a difference between words waiting on mountaintops and words hallucinated by over-enthusiastic algorithms. Maybe it’s possible to find the same meaning in a collection of data points as you would in words born in the long, dark night of the soul.
But just imagine: You’re Karl August of Saxe-Weimar-Eisenach. You’re up a mountain with Goethe. Suddenly a poem appears on the wall. You haven’t seen anyone get out a pen. What do you make of that poem? What does it mean? What is communication, when we can’t be sure who’s trying to communicate with us, or that there’s even someone trying to communicate? What’s the meaning of a love letter if the lover didn’t articulate that love themselves?
He might as well have ghosted her that’s so disrespectful.

*

Summer 2025. The person close to me who has been sick for so long finally dies. Even though I’ve left the machine behind, my own words still haven’t returned. So I ask ChatGPT what to write on the funeral wreath. The suggestions are reasonable, yet somehow terribly insufficient. I regret asking the machine, because once the suggestions are there it’s hard to think beyond them. Almost impossible. It’s so easy just to choose one. Thanks – for everything you gave us.
In vain I try to blow a little life into the words by deleting the dash.