Key Takeaway:
A joint study by the European Broadcasting Union (EBU) and the BBC, published on October 22, 2025, finds that AI assistants misrepresent news in 45% of responses. Leading platforms (ChatGPT, Copilot, Gemini, and Perplexity) produced frequent sourcing and accuracy errors, raising serious concerns for trust in journalism and democratic participation.
AI Assistants Misrepresent News – Key Points
- Research Source and Scope
- A report published on October 22, 2025, by the European Broadcasting Union (EBU) and the BBC analyzed 3,000 AI assistant responses to news-related queries.
- Coverage spanned 14 languages and assessed accuracy, sourcing, and ability to distinguish opinion from fact.
- Participation: 22 public-service media outlets from 18 countries, including CBC/Radio-Canada, ARD/ZDF/Deutsche Welle (Germany), RTVE (Spain), BBC (UK), NPR (US), RTBF/VRT (Belgium), YLE (Finland), Radio France (France), Rai (Italy), NOS/NPO (Netherlands), NRK (Norway), RTP (Portugal), SVT (Sweden), SRF (Switzerland), LRT (Lithuania), Czech Radio (Czechia), GPB (Georgia), and Suspilne (Ukraine).
- The project launched at the EBU News Assembly in Naples; professional journalists evaluated responses to standardised prompts.
- Findings: Frequency of Errors
- The study found that AI Assistants misrepresent news almost half the time: 45% of responses contained at least one significant issue, while 81% had some form of problem.
- 20% of responses showed accuracy issues (e.g., outdated or incorrect facts).
- Cited examples:
- Gemini (Google) misstated changes to a disposable vapes law.
- ChatGPT referred to Pope Francis as the current Pope months after his death.
- The BBC notes some improvements versus results earlier in 2025, but error levels remain high.
- Sourcing Issues
- 31% of answers exhibited serious sourcing errors (missing, misleading, or incorrect attribution).
- Gemini had the highest problem rate: significant issues in 76% of its responses—driven largely by sourcing—versus under 25% for ChatGPT, Copilot, and Perplexity.
- AI Assistant Market Usage
- Per the Reuters Institute Digital News Report 2025, 7% of all online news consumers use AI assistants for news.
- Among people under 25, usage rises to 15%, indicating a stronger generational shift from traditional search to assistant-style answers.
- Industry Responses & Context
- OpenAI and Microsoft have previously acknowledged hallucinations (models generating incorrect or misleading information due to factors like insufficient data) and say they are working to mitigate them.
- Google (Gemini) states it welcomes user feedback to improve.
- Perplexity highlights a “Deep Research” mode with 93.9% factual accuracy.
- Reuters contacted the companies for comment regarding the study’s findings.
- Audience Trust & Perception (New Data)
- A BBC audience study (Oct 2025) finds just over one-third of UK adults trust AI to produce accurate news summaries, rising to almost half among under-35s.
- Many users assume AI summaries are accurate; when errors appear, respondents blame both news providers and AI developers, risking collateral damage to news brands.
- Broader Concerns, Accountability & Next Steps
- The EBU warns that systemic failures in which AI Assistants misrepresent news threaten public trust in information ecosystems.
- Jean Philip De Tender, EBU media director: “When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”
- Peter Archer, BBC Programme Director, Generative AI: the BBC is “excited about AI” but stresses the need for trustworthy outputs; the BBC is open to working with AI companies to improve results.
- The team released a News Integrity in AI Assistants Toolkit outlining what constitutes a good news response and enumerating fixable problems (sourcing discipline, context provision, opinion/fact separation, transparent corrections).
- The EBU and its Members are pressing EU and national regulators to enforce existing laws on information integrity, digital services, and media pluralism, and call for ongoing independent monitoring of assistants given rapid model updates.
Why This Matters:
With younger audiences adopting assistants as primary news sources, the finding that AI Assistants misrepresent news in nearly half of tested cases (a 45% significant-issue rate, 31% sourcing failures, and Gemini's 76% significant-issue rate) poses systemic risks to information quality and civic engagement. The evidence supports auditable sourcing, versioned corrections, clear provenance, and regulatory enforcement, aligning assistant behavior with established editorial accountability norms and helping protect trust in news ecosystems.
This article was drafted with the assistance of generative AI. All facts and details were reviewed and confirmed by an editor prior to publication.