AI in newsrooms: Between a tool and a turning point
Illustration: © Ricardo Roa
AI transcribes interviews, filters comments, improves phrasing – and in doing so, is quietly but fundamentally changing newsroom life. But to what extent? And what does this mean for a profession built on integrity, judgement and responsibility? Gregor Schmalzried, journalist and host of Bayerischer Rundfunk’s AI Podcast, offers a behind-the-scenes look. Lena Kronenbürger spoke with him.
Mr. Schmalzried, imagine all AI tools in your newsroom suddenly vanished overnight. Where would that hurt most? Which editorial tasks would be the most difficult to manage without AI tools?
It’s not that certain tasks would no longer be possible without AI – they would simply take much longer. I recently had a problem with my touchpad and had to rely solely on the keyboard. It works, but it’s not much fun – it’s slow and quite tedious. That’s what working without AI feels like: doable, but time-consuming and inefficient.
Many people are talking about a “turning point” in journalism – a before and after AI. Do you feel that, too?
Absolutely – or rather, there’s a “before”, and a “right in the middle of it”. A few years ago, the tools available were far more limited. There were things like transcription tools, or first attempts at so-called robot journalism – sports coverage or stock market reports, in other words content based on structured data. These tools worked, but they were confined to very specific tasks. Their implementation was often top-down: a few individuals explored potential applications, but most people in the newsroom had nothing to do with them. What’s different today is that these tools are suddenly available to everyone. AI is no longer a niche topic for tech enthusiasts or dedicated project teams – it concerns everyone in their everyday working life. Even colleagues who previously had no interest in AI are now having to engage with it, because they are seeing firsthand how relevant it is to their specific tasks.
Are there editors who completely reject or deliberately avoid using AI tools?
Outright rejection is rare. More often, people simply aren’t aware of what’s technically possible. Just recently, a colleague took a photo of a printed text, and someone else was going to type it up manually – without realising they could just upload the image to a chatbot. Many people still don’t know these kinds of possibilities exist, and it’s understandable. It takes time for new workflows to become second nature.
In your experience, how much does an editor’s age affect their openness to AI tools?
To be honest, I know a lot of younger editors who are surprisingly behind. Perhaps it’s because they approach journalism with a very traditional mindset. They’re focused on going out, talking to people, writing everything down. AI simply doesn’t fit in with this approach. It’s often the people who have always experimented with their workflows who adapt more easily. They had their own databases, tools and routines even before AI came along. Experimentation is a good introduction to new technologies.
In concrete terms, what does it mean if editors lag behind in adopting AI tools?
Put simply, they are spending far more time on tasks others finish quickly. More importantly, they risk losing touch with how the world is currently changing. In journalism, it’s crucial to stay attuned to how people communicate and to understand which tools they rely on. ChatGPT has become a mainstream platform with 800 million weekly users, and it is now the fifth most visited website in the world. Ignoring developments like this means losing sight of what truly matters to people today.
Do you observe any differences in how radio, TV and online editorial teams use AI?
It’s less about the medium and more about the underlying structures. Established media organisations with long-standing processes often find it hard to integrate new tools. Young production companies, on the other hand, build their workflows around AI from the outset – giving them a significant advantage.
AI can produce strikingly realistic voices, and deepfakes are becoming increasingly sophisticated – making it harder to tell what’s real. How can editorial teams maintain trust?
This is the crucial question, not just for media but also for marketing, advertising, and basically the entire content industry. In the past, the authenticity of a message came through the product itself; today, anyone without expertise can pose as an expert. I believe the key lies in perspective – a distinctly human perspective. It doesn’t necessarily mean sharing an opinion, but it should clearly show who is speaking. That was exactly our focus with the AI podcast. We wanted people to recognise our voices and understand where we were coming from. Video has faces, audio has voices, but text is the hardest because it is initially voiceless. Still, personality can shine through – whether it’s a regional accent, a certain choice of words, or the way someone approaches their research.
How does readers’ desire for authentic voices and perspectives impact the way editorial teams work?
The trend is clearly moving towards personalisation – on two levels. On the user side, instead of Googling and clicking through multiple articles, people are increasingly turning to a chatbot that knows their location, maybe even their reading habits, and delivers direct answers. Personally, I still read articles, but more like books – out of interest rather than efficiency. That will become less common. On the production side, there is also a shift in who is actually still creating content – and how it’s being produced. Personalities with distinct viewpoints are gaining importance – people whose voices you recognise and whose perspectives you can place. What is disappearing is the middle ground: those classic advice pieces people used to Google, like “What to do for sunburn?” or “How does this work?” – content where the author’s identity didn’t matter. This kind of content doesn’t require an opinion, a voice or a recognisable person behind it. And this is exactly what AI can now do better.
AI is being used more and more in newsrooms. Do you think traditional sentence-by-sentence writing will soon become obsolete?
If you provide the structure and ideas, AI can now generate texts at a level sufficient for standard news articles, given the right prompts. Of course, that’s different for columns or special formats. This raises a question: will it be acceptable in the future to forego mastering traditional sentence-by-sentence writing and instead focus on prompting AI and then editing its output? Personally, I believe that’s perfectly fine.
Which journalistic skills do you think AI cannot replace?
Judgement remains a weak point for AI. It often accepts everything as good without making distinctions, which can be frustrating. I want an AI that’s better at distinguishing good ideas from bad ones and doesn’t blindly support everything I suggest. AI can’t decide whether a topic is worth pursuing or not. And that’s precisely our role as journalists – to make those judgements, to have a sense of relevance, and understand what truly makes a difference.