AI and journalism: Facts, fakes and figures
As news media struggles to compete for attention and retain public trust while slashing jobs and cutting costs, the use of artificial intelligence (AI) in newsrooms around the world is rising. AI is being used to source information, produce news articles and identify trends. But will using it lead to better journalism?
By Barbara Gruber

When you scroll through your newsfeed on your mobile phone, do you stop to think about whether what you’re reading has been written by a human or a machine? Well, here are the facts: AI-driven journalism has been making inroads into large newsrooms for almost a decade and is already producing financial news, sports stories, weather and traffic reports.
Bloomberg was an early adopter, using Cyborg, a programme that dissects financial reports and instantly writes news stories containing all the relevant facts and figures. The Washington Post made headlines when it started using Heliograf, a home-grown artificial intelligence technology, to cover the 2016 Rio Olympic Games and congressional elections.
News wire AP went from producing 300 articles on company earnings reports every quarter to 3,700 by using AI. Today, AP’s newsroom AI technology automatically generates roughly 40,000 stories a year. That is only a fraction of the stories the global news agency produces overall, but the advantages of using AI and automation are manifold, says Lisa Gibbs, Director of News Partnerships and AI News Lead at AP.
“It frees our journalists from routine tasks, to do higher-level work that uses their creativity; allows us to create more content that serves new audiences more efficiently; and improves our ability to discover news,” she says.

Financial journalists have used AI in their reporting for years | © AP Photo / Richard Drew
Robo-reporting goes mainstream

So-called “robo-reporting” can help create multiple versions of stories that serve different news outlets and consumers, and can also reach niche audiences interested, for example, in the exact details of annual donations to Australia’s political parties, local election results for every municipality in France or high school gridiron results in Washington, USA.
So, are robots taking over journalism? No, says Charlie Beckett, Director of the media think tank Polis at the London School of Economics, who recently led a study of 71 news organisations in 32 countries.
Like Lisa Gibbs, Beckett sees potential for artificial intelligence, machine learning and data processing to give journalists “new powers of discovery, creation and connection” and believes “machines might soon be able to do much routine journalism labour.”
In Polis’ global survey on journalism and artificial intelligence, “New Powers, New Responsibilities”, Beckett and his team asked newsrooms using AI about how this technology is impacting journalism and the news industry. The report found that currently newsrooms mostly use AI in three areas: news gathering, production and distribution.
The Stuttgarter Zeitung is one of the AI pioneers in German media. The publisher’s CrimeMap was the idea of data journalist Jan Georg Plavec who, together with the company Arvato, trained a computer through machine learning to understand police communiques from Stuttgart Police headquarters. The system then sorts the information into predefined categories, recognises when and where a crime took place and feeds it into the CrimeMap.
Reflecting on the input of man and machine for this project, Jan says “in the planning stage there was a lot of journalistic experience and human thinking.” He emphasises that even if the heavy lifting is now done by machine, the data is still checked daily by journalists.
“We do have manual quality control, but it's becoming less and less necessary as the machine gets better, with our regular feedback,” he explains.
“It now has an accuracy rate of well over 90 percent, so we only have to intervene very rarely. But we still check, because we don't have complete confidence in the technology yet. But every day, we see that the technology improves.”

Stuttgarter Zeitung's CrimeMap was developed by data journalist Jan Georg Plavec | © Stuttgarter Zeitung
No need for a coffee fix

In a newsroom, machines are tireless colleagues who can automatically sift through troves of data, analysing almost anything from SEO tags to official data, user-generated content or location-matched social media posts.
Tools like Heliograf, News Tracer or CrowdTangle now alert journalists to breaking news, viral stories and anomalous data trends. Editors can then determine if there’s a bigger story to be written by a human being. Such tools also measure the reach of the content produced by media companies and can even tell you a competitor’s biggest hit of the day.
“Like any technology, AI can be used for good and for bad purposes,” Lisa Gibbs explains. “And each news organisation must have an ethical framework for how to use it, and understand its implications.”
News organisations using “event detection” systems to spot breaking news via social media posts need to be careful. How does the news agenda get shaped by those communities who happen to use Twitter? And does it have implications for under-represented communities online?
“Ideally, the fact that event detection can help us sift through Twitter, Snap and Reddit etc, frees journalists’ time to get out on the street and talk to people,” Gibbs says. “In practice? We’ll have to see.”
Voitto: the robot journalist and smart news assistant

Finland’s national broadcaster Yle, meanwhile, is using AI to improve news personalisation for its readers. The company has created a dual system called Voitto. It works both as a robot journalist, churning out around 100 articles and 250 visualisations a week, and as a smart news assistant, which is also part of Yle’s personalised news app NewsWatch.
Voitto, the news assistant, lives on the lock screen of a mobile device and recommends interesting news content to users via news alerts or notifications. It uses machine learning to improve its recommendations by learning from the user’s reading history, their interactions on the lock screen and direct feedback.
“Voitto’s appearance, the tone of voice and the algorithms that power it, are all guided by Yle’s journalistic values and mission,” says Jarno Koponen, Head of AI & Personalization at Yle News Lab. “Yle's current vision is to create a Yle that is ‘for all of us, for each of us.’”
Koponen says Voitto provides an accessible way to see how the people in power are actually using their power, for example by creating newsletters about all 200 Finnish MPs and explaining the workings of democracy and parliament. And he says the system makes sure users get news that really matters to them, while also nudging them to expand their views, for example by providing two stories about the same topic from completely different perspectives.

Voitto the Yle News Assistant | © Yle News Lab
So, what do Yle’s users think? They seem to like the technology. The notifications of smart news assistant Voitto regularly get the most engagement. More importantly, over 90% of users who turn smart news assistant Voitto on, keep it on.
“Naturally, nothing is ever perfect,” says Koponen. “We collect both qualitative and quantitative feedback and follow our metrics systematically to understand the true impact of Voitto’s manifestation in order to serve our citizens better in the future.”
Using AI to reduce harassment and abuse

The uses of AI tools in journalism are many and varied. Many news organisations are also using AI to moderate readers’ comments, encourage constructive discussion and eliminate harassment and abuse on their websites.
To sift through comments and hide inappropriate contributions, German company NOZ began using the AI software Conversario. Head of HHLab Joachim Dreykluft says it works well, but the AI is still learning. “My colleague who is in charge of community management still says to the software ‘I wouldn’t have hidden this comment’, ‘I would have hidden this one’. It keeps learning and gets better every day.”
As disinformation exploits new technologies and operates at massive scale, Lisa Gibbs says the news industry must be aggressive in tackling it. Detecting “deep fakes” for example, will be a key area of growth for many newsrooms in the next few years, while some media organisations like Reuters are already using similar technologies to create fully automated, presenter-led sports news.
In a world of increasing disinformation, trust is key to securing the confidence and subscriptions of news users. Jarno Koponen from Yle admits that transparency is “crucial” for his organisation.
“We serve citizens, and we serve them best when citizens can contribute to open dialogue around what works and what doesn’t,” he says.
That’s one of the reasons Yle openly communicates goals, methods and practices related to Voitto. Every article created by the robot journalist is also marked as “made by Voitto.”

It looks like a news story, but is the video you are watching trustworthy? | © Pixabay / Engin Akyurt
The time is now

As AI changes the way information and debate are shaped, often outside the news media, new AI technologies are also providing powerful tools to make journalism more transparent and relevant for people.
But newsrooms must act swiftly, according to AP’s Lisa Gibbs. “Outside the big news organisations and the niche players, most struggling newsrooms don’t have the skills — or, more importantly, the time — to devote to understanding how to use these technologies and so they don’t even begin,” she says.
As Charlie Beckett concludes in his report on AI and journalism: “We are at another critical historical moment. If we value journalism as a social good, provided by humans for humans, then we have a window of perhaps 2-5 years, when news organisations must get across this technology.”
This article is part of Kulturtechniken 4.0, a web project from Goethe-Institut in Australia which looks at the interplay between artificial intelligence and traditional cultural skills.