
AI and Art
Studio Ghibli style and the draining of meaning

An illustration of a boy in Studio Ghibli style. The figure disintegrates halfway into individual pixels.
Illustration: © Ricardo Roa

What remains of art when AI imitates everything? An essay on Ghibli filters and the quiet erosion of cultural meaning in the age of generative machines.

An awful personal prophecy is coming true. Way back in 2019, when AI was still a relatively niche topic and only the primitive GPT-2 had been released, I predicted the technology would usher in a “semantic apocalypse” wherein art and language would be drained of meaning. In fact, it was the first essay ever posted on my newsletter, The Intrinsic Perspective.

I saw the technology’s dystopian potential the exact moment I read a certain line in Kane Hsieh’s now-forgotten experiment, Transformer Poetry, in which he published poems written by GPT-2. Most weren’t good, but at a certain point the machine wrote:

“Thou hast not a thousand days to tell me thou art beautiful.”

I read that line and thought: “Fuck.”

Fast forward six years, and the semantic apocalypse has started in earnest. People now report experiencing the exact internal psychological change I predicted for our collective consciousness all those years ago.

The Ghiblification of the Internet

In March 2025, OpenAI released their latest image-generation model, with capabilities far beyond what the technology could manage even a year ago. Someone tweeted that the new AI could be used as a “Studio Ghibli style” filter for family photos. 20 million views later, everything online was Studio Ghibli.
A photo of the author’s children reading together beside a bookshelf, which looks like a still from a Studio Ghibli film thanks to a ChatGPT filter | Photo: © Erik Hoel



Every meme was redone Ghibli-style, family photos were now in Ghibli-style, anonymous accounts face-doxxed themselves Ghibli-style. And it’s undeniable that Ghiblification is fun. I won’t lie. That picture of my kids reading together above, which is from a real photo – I exclaimed in delight when it appeared in the chat window like magic. So I totally get it. It’s a softer world when you have Ghibli glasses on. But by the time I made the third picture, it was less fun. A creeping sadness set in.

The internet’s Ghiblification was not an accident. Changing a photo into an anime style was specifically featured in OpenAI’s original announcement.

Why? Because OpenAI does, or at least seems to do, something arguably kind of evil: they train their models to specifically imitate the artists the model trainers themselves like. Miyazaki for anime seems a strong possibility, but the same thing just happened with their new creative writing bot, which (ahem, it appears) was trained to mimic Nabokov.

While that creative-writing bot is still not released, it was previewed earlier this year, when Sam Altman posted a short story it wrote. It went viral because, while the story was clearly over-written (a classic beginner’s error), there were indeed some good metaphors in there, including when the AI mused:

“I am nothing if not a democracy of ghosts.”

Too good, actually. It sounded eerily familiar to me. I checked, and yup, that’s lifted directly from Nabokov.

“Pnin slowly walked under solemn pines. The sky was dying. He did not believe in an autocratic God. He did believe, dimly, in a democracy of ghosts.”

The rest of the story reads as a mix of someone aping Nabokov and Murakami – authors who just so happen to be personal favorites of some of the team members who worked on the project. Surprise, surprise.

Similarly, the new image model is a bit worse at other anime styles. But for Studio Ghibli, while I wouldn’t go so far as to call the results passable, they’re also not far from passable for some scenes. The AI can’t hold all the signature Ghibli details in mind – its limitation remains its intelligence and creativity, not its ability to copy style. Below on the left is a scene that took a real Studio Ghibli artist 15 months to complete. On the right is what I prompted in 30 seconds.
 
Two Studio Ghibli-style crowd scenes side by side: the original, densely packed with people in a rocky setting (left), and a darker recreation with fewer visible details (right).

Studio Ghibli (left); the scene recreated with ChatGPT (right) | Photo: © Erik Hoel

In the AI version, the action is all one way, so it lacks the original’s complexity and personality, failing to capture true chaos. I’m not saying it’s a perfect copy. But the 30 seconds vs. 15 months figure should give everyone pause.

The loss of meaning and magic

The irony of internet Ghiblification is that Miyazaki is well known for his hatred of AI, having once remarked in a documentary that AI-generated animation struck him as “an insult to life itself.”

And yet, while ChatGPT can’t pull off a perfect Miyazaki copy, it doesn’t really matter. The semantic apocalypse doesn’t require AI art to be exactly as good as the best human art. You just need to flood people with close-enough creations such that the originals feel less meaningful.
Many people on social media are reporting that their mental relationship to art is changing; that as fun as it is to Ghibli-fy at will, something fundamental about the originals has been cheapened. Their descriptions of their internal response to this cultural “grey goo” read as early mental signs of the semantic apocalypse, which, I believe, follows neuroscientifically the same steps as semantic satiation.

A well-known psychological phenomenon, semantic satiation can be triggered by repeating a word over and over until it loses its meaning. You can do this with any word. How about “Ghibli”? Just read it over and over: Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. You just keep reading it, each one in turn. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli.

Try saying it aloud. Ghiiiiiiiii-bliiiiiii. Ghibli. Ghibli. Ghibli. Ghibli.

Do this enough and the word’s meaning is stripped away. Ghibli. Ghibli. Ghibli. Ghibli. It becomes an entity estranged from you, unfamiliar. Ghibli. Ghibli. Ghibli. Ghibli. It’s nothing. Just letters. Sounds. A “Ghib.” Then a “Li.” Ghibli. Ghibli. Ghibli. Like your child’s face is suddenly that of a stranger. Ghibli. Ghibli. Ghibli. Ghibli. Only the bones of syntax remain. Ghibli. Ghibli.

Semantic satiation at a cultural level

No one knows exactly why semantic satiation happens. There’s a suspected mechanism in the form of neural habituation, wherein neurons respond less strongly to repeated stimulation; like a muscle, neurons grow tired, releasing fewer neurotransmitters after an action potential, until their formerly robust signal becomes a squeak. One hypothesis is that the weakened signal therefore fails to propagate out from the language-processing centers and trigger, as it normally would, all the standard associations that vibrate in your brain’s web of concepts. This leaves behind only the initial sensory information, which, it turns out, is almost nothing at all, just syllabic sounds set in cold relation. Ghibli. Ghibli. Ghibli.

But there’s also evidence it’s not just neural fatigue. Semantic satiation reflects something higher-level about neural networks. It’s not just “neurons are tired.” With enough repetition your attention changes too, shifting from the semantic contents to the syntax alone. Ghibli. Ghibli. The word becomes a signifier only of itself. Ghibli.
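
To make the habituation idea concrete, here is a minimal sketch in Python. It is purely illustrative, not a biophysical model: the decay rate and activation threshold are invented numbers, chosen only to show how a repeated stimulus can fall below the level needed to recruit its associations.

# Toy model of neural habituation: each repetition of the same stimulus
# weakens the response, until the signal is too faint to trigger the
# downstream web of associations. All numbers are invented for illustration.

DECAY = 0.7        # assumed fraction of signal strength retained per repetition
THRESHOLD = 0.2    # assumed minimum signal needed to activate associations

def satiate(word: str, repetitions: int) -> None:
    strength = 1.0
    for i in range(1, repetitions + 1):
        label = "meaning + associations" if strength >= THRESHOLD else "bare syllables only"
        print(f"{i:>2}. {word}: signal {strength:.2f} -> {label}")
        strength *= DECAY  # habituation: the same stimulus evokes less each time

satiate("Ghibli", 8)

Run it and “Ghibli” keeps its associations for the first five repetitions, then drops below the threshold and prints as bare syllables only – roughly what the satiation exercise above feels like from the inside.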

The semantic apocalypse heralded by AI is a kind of semantic satiation at a cultural level. For imitation, which is what these models ultimately do best, is a form of repetition. Repetition at a mass scale. Ghibli. Ghibli. Ghibli. Repetition close enough in concept space. Ghibli. Ghibli. Doesn’t have to be a perfect copy to trigger the effect. Ghebli. Ghebli. Ghebli. Ghibli. Ghebli. Ghibli. And so art – all of it, I mean, the entire human artistic endeavor – becomes a thing satiated, stripped of meaning, pure syntax.

This is what I fear most about AI, at least in the immediate future. Not some superintelligence that eats the world (it can’t even beat Pokémon yet, a game many of us conquered at ten). Rather, a less noticeable apocalypse: culture undergoing the same collapse as community did, on the back of a whirring surplus of imitative compute provided by Silicon Valley. An oversupply that satiates us at a cultural level, until we become divorced from semantic meaning and see only the cheap bones of structure. Once exposed, it’s a thing you have no relation to, really. Just pixels. Just syllables. In some order, yes. But who cares?

 
