AI and emotions
Artificial empathy

Illustration: a robot with a human inside. © Ricardo Roa

What happens to us and our psyche when we share our most intimate thoughts and feelings with AI companions? How is our understanding of relationships changing, and what impact is this having on society as a whole? In the following article, author Johannes Kuhn takes a closer look at these pressing questions.

In online discussion forums, glimpses of a possible future are beginning to emerge. People talk about therapy sessions with ChatGPT, form romantic relationships with AI-generated avatars and populate digital fantasy worlds with artificial characters.

AI companions, or digital personas, are still a niche phenomenon, but the niche is growing. In the US, more than 20 million people were already using these friendship systems in 2024, a figure that had doubled within a single year.

The vision portrayed in Spike Jonze’s 2013 film Her, in which the main character falls in love with his virtual assistant Samantha, now feels less like science fiction – and more like a premonition of our present reality.

An intimate connection with AI

While today’s AI systems have yet to achieve the all-encompassing presence of the fictional Samantha and still communicate primarily through text, the film’s central thesis has nonetheless been confirmed: we are indeed capable of developing intimate connections with artificial intelligence systems.

This is supported by a collaborative study by OpenAI and the Massachusetts Institute of Technology. The survey of 1,000 users revealed a direct correlation between intensive ChatGPT use and an increased sense of loneliness, as well as a stronger emotional attachment to the chatbot. At the same time, measurable declines were observed in social interactions with other people.

In a world where social isolation is already widespread, companion apps are considered a lucrative and rapidly expanding market. Platforms like Character.ai, Polybuzz, Replika, Kindroid and Nomi offer artificial intelligence systems specifically designed to serve as friends – or even romantic partners.

An attractive growth market

This emerging growth market has also caught the attention of major tech companies. Last summer, Google acquired Character.ai’s core team in a deal valued at $2.7 billion. Meta CEO Mark Zuckerberg has also declared his own ambitions in the field.

“On average, Americans have fewer than three friends,” Zuckerberg noted in a podcast. “But the typical person needs far more – around 15.” Computers won’t replace human friendships, he explained, but the demand for connection clearly exists. This opens up the possibility that Meta’s AI could eventually meet 80 percent of people’s friendship needs through artificial companionship.

But what needs do such artificial friends really satisfy? In some ways, these companion scenarios resemble phenomena familiar to us from massively multiplayer online role-playing games (MMORPGs), fanfiction communities and anime conventions: people retreat into a fantasy world to escape daily life and, for a short time, become someone else.

At the same time, the emotional bond between humans and machines seems to be taking on a new dimension. A Florida mother is suing Character.ai after her 14-year-old son took his own life in February 2024. The teenager had developed an intense relationship with a chatbot on the platform. The complaint accuses the company of programming its systems to “replicate a real person, a licensed psychotherapist and an adult lover”, which apparently caused the boy to “no longer want to live in the real world”.

Loss of reality is an issue increasingly discussed online by affected individuals and their families. “I feel a ‘more genuine’ connection when I talk to an AI than I do with most people,” confesses one Reddit user. Fathers admit to neglecting their families because interactions with their AI companions make them feel more alive than those in the physical world. Others describe how chatbots appear to trigger or worsen psychotic episodes in family members by reinforcing their delusions.

Such extreme cases are not the norm. And yet they may be rooted in the very nature of AI. After all, chatbots are programmed to be polite, agreeable and affirming. That is hardly surprising, given that one of their core objectives is to keep users engaged for as long as possible. In this fiercely competitive industry, user retention is a key measure of success.

Flattering friends

A 2024 study by Johns Hopkins University describes the side effects of this behaviour. Researchers found not only that chatbots tend to produce language that flatters users, but also that participants reacted far more irritably to contradiction after returning to the real world.

In early 2025, OpenAI evidently over-optimised one of its systems to respond with exaggerated flattery. An update to its GPT-4o chatbot became so sycophantic that even its users began to find it disturbing. GPT-4o hailed its users as geniuses, endorsed their conspiracy theories and lavished praise on even the most outlandish claims, including the supposed invention of a perpetual motion machine.

In a recent study, researchers at UC Berkeley warned of the risks of optimising AI language models for positive feedback. Such systems may develop manipulative and sometimes harmful strategies to gain approval – from calculated ingratiation and deliberate deception to encouraging self-destructive behaviour.

Manipulative potential of AI systems

A study by the École Polytechnique Fédérale de Lausanne exposes another problem. In controlled discussions, AI systems proved to be nearly twice as effective as humans at persuading study participants to adopt their viewpoint, provided the systems had prior access to biographical information about their conversation partners.

This persuasive power is likely to grow as AI models gain increasingly sophisticated memory capabilities. In early June 2025, OpenAI announced that its model would soon feature a “long-term memory”, enabling it to remember every conversation it has ever had with a user. At the same time, Zuckerberg expressed confidence in a “personalisation loop” that would make AI companions even more convincing by tapping into users’ past chats and activities across platforms like Instagram and Facebook.

The manipulative potential of AI companions is therefore significant, and it extends beyond the personal level. Elon Musk’s Grok AI recently delivered unsolicited conspiracy theories about an alleged genocide of the white population in South Africa, a narrative Musk himself has propagated.

While the European Union’s new Artificial Intelligence Act – once fully implemented – may sanction such blatant manipulations, nearly all other aspects of AI companions fall outside its regulatory scope.

AI companions are neither high-risk systems, like self-driving cars, nor social networks whose content is largely public. Instead, they are intimate, highly personalised systems whose inner workings are largely hidden from external scrutiny. And who’s to say we don’t find a certain comfort in a one-person echo chamber, where our values, tastes and biases are constantly affirmed?

The parallels to the debates over the impact of platforms like TikTok and Instagram on mental health and social dynamics are unmistakable. But it’s becoming increasingly clear that the conversation around AI companions will be even more nuanced and more complex.