AI Chatbots Provide Incorrect Answers in Nearly Half of News Cases
Amsterdam, Thursday, 23 October 2025.
An international study by 22 media organisations, including the NOS, VRT, and BBC, has found that AI chatbots such as ChatGPT, Copilot, Gemini, and Perplexity give incorrect information about the news in nearly half of cases. The main problems are faulty sourcing and inaccurate information, which can damage the reputation of news organisations. For example, ChatGPT stated as fact that Elon Musk did not give a Nazi salute, citing VRT as its source, when VRT had only reported Musk's own denial. Experts advise users to always verify the sources themselves.
International Study Highlights Unreliability of AI Chatbots
The large-scale study, conducted by 22 media organisations worldwide, including the NOS, VRT, and BBC, found that AI chatbots such as ChatGPT, Copilot, Gemini, and Perplexity answer questions about the news incorrectly in nearly half of cases [1][2][3]. The most common problems are faulty sourcing and inaccurate information, both of which can damage the reputation of the news organisations the chatbots cite [1][2][3].
Issues with Sourcing and Inaccurate Information
The study shows that 45 percent of the answers AI chatbots give to questions about news and current affairs are problematic [1][2][3]. The main issues are incorrect sourcing, found in 31 percent of answers, and inaccurate information, found in 20 percent [1][2][3]. Google's Gemini performed worst, with sourcing problems in 72 percent of its answers [1][2][3]. ChatGPT, Copilot, and Perplexity scored better, with incorrect sourcing in 24, 15, and 15 percent of answers, respectively [1][2][3].
Concrete Example: Incorrect Information About Politicians and Events
VRT NWS provides concrete examples of incorrect AI output. OpenAI's ChatGPT claimed that Elon Musk did not give a Nazi salute and cited VRT as its source, even though VRT had only reported that Musk himself claimed it was not a Nazi salute [1]. Google's Gemini asserted that Paul Van Tigchelt was still the Belgian Minister of Justice, which is no longer the case [1]. And Copilot, drawing on NOS/NPO source material, stated that Francis is the current Pope while in the same answer claiming he died on 21 April 2025 [1].
Implications for Media Literacy and Democracy
These problems have worrying implications for media literacy and democracy. Despite the ongoing development of AI technology, these systems remain unreliable for following the news. Karel Degraeve, innovation expert at VRT NWS, calls it problematic that chatbots handle content so carelessly [3]. Martin Rosema, a political scientist at the University of Twente, warns that AI chatbots can provide incorrect information and lack transparency [4].
Practical Tips for Readers to Identify Fake News
To judge the reliability of news, experts and media organisations offer readers practical tips for identifying fake news. First and foremost, always verify the sources yourself [1][2][3][4]. Pay attention to consistency and context in the information, and look for multiple reliable sources that confirm the same facts. Be critical: ask yourself whether the information is logical and coherent. Finally, use AI chatbots as a tool, but never trust their answers blindly [1][2][3][4].