AI news summaries are still terrible – and hurt the reputations of news brands

The EBU (European Broadcasting Union) and BBC have spent 2025 investigating how commonly used AI assistants misrepresent news content. They have published a report with their results and an accompanying toolkit for media organizations on the uses and pitfalls of AI-driven news consumption. According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers use AI assistants to get their news, a figure that rises to 15% among news consumers under 25.

The News Integrity in AI Assistants Report, released in October at the EBU News Assembly in Naples, brought together 22 public service media organizations in 18 countries, working in 14 languages.

Four commonly used AI tools were investigated: ChatGPT, Copilot, Gemini, and Perplexity. Journalists from the participating news outlets assessed the chatbots’ responses to over 3,000 prompts. The investigation found that:

  • 45% of answers had at least one significant issue.
  • 31% showed missing, misleading, or incorrect attributions.
  • 20% had major accuracy issues, including hallucinated details and outdated information.
  • Google Gemini had issues in 76% of its responses, more than double the other assistants, mostly due to poor information sourcing.

On the surface, there doesn’t appear to be anything too revelatory here. It’s widely understood that chatbots are not librarians or search engines and shouldn’t be expected to give precise answers to questions about real-world information. But the report does start to put real numbers, and benchmarks, around how unreliable they can be – something that can be used to chart how AI, and human attitudes toward it, change over time.

First the AIs, now the audience

But do people who use AI assistants really care about factuality? Is a broad-strokes story enough, even if the details may be off?

The BBC has been doing additional research, focusing on audience use and perceptions of AI assistants. Their analysis, Research Findings: Audience Use and Perceptions of AI Assistants for News, found that over one third of UK adults find AI news summaries trustworthy, a figure that rises to one half of under-35s. That is, a sizeable proportion of users doesn’t even know that they may be getting incomplete or incorrect information.

But the BBC investigation found that knowledge of errors in AI summaries did impact trust in them: 36% of people said that AI companies should be responsible for ensuring accuracy, and 31% said that accuracy should be helped along by government regulation. The catch is that 35% also said that news organizations should be responsible for errors that AI generates based on their content. According to the report:

“The implication is simple. Errors made by third-party Gen AI tools create direct reputational exposure for the sources they cite. Audiences look for signs that responsibility is being taken in practice: provenance that travels with the summary, working links back to the reporting, timestamps and update notes, and timely corrections where the summary is actually encountered. They also expect the fix to be reflected wherever the summary appears, not just in one place.”

The AI/news distortion field

The internet and its news brands are always in flux, of course. Audience opinion affects what stories are produced, and search engines affect what stories audiences see. Riding these waves, news and information brands have varying degrees of authority and reputation, among both audiences and AIs.

Major media outlets like CBS News, the Washington Post, and the BBC, or respected institutions like Harvard and Columbia University, might be given extra weight by an LLM. As a result, those who want an outsized influence on society have been quick to get institutions like these onside – sometimes through bullying or bribery – knowing that the content they produce, and its sentiment, will propagate through the entire public information sphere. What is published on a top news website doesn’t just stop at that site’s homepage; it is absorbed into the accepted worldview presented by the AI tools that more and more people are turning to as their first source.

Major news outlets now have an effect not just on their own audiences, but also on those who read AI summaries and may never once have engaged with the original source.