
Swedish Prime Minister Under Fire for Using ChatGPT in Policy Advice

2025-08-07 journalism

Stockholm, Thursday, 7 August 2025.
Swedish Prime Minister Ulf Kristersson regularly uses ChatGPT for ‘second opinions’ on his policies, which has drawn fierce criticism in his own country. Ethics experts warn of the dangers of AI in political decision-making, such as ‘automation bias’ and the risk of political bias in the training data of AI systems. Expert Ann-Katrien Oimann says politicians must be more transparent about their AI use and should not share sensitive information [2].

Kristersson’s Use of ChatGPT

In an interview with Dagens Industri last Friday, Kristersson revealed that he uses ChatGPT to ask critical questions about his policies. He stated: ‘I use artificial intelligence fairly often. Even just for a second opinion. What have others done? And should we think the opposite? That kind of question.’ His spokesperson emphasised that no sensitive information is entered and that AI is used to gain a broad assessment [2].

Criticism in the Swedish Media

Kristersson’s admission sparked intense reactions in the Swedish media. Aftonbladet accused him of falling for the oligarchs’ ‘AI psychosis’, while Dagens Nyheter wrote: ‘Our society needs more critical thinkers, not policymakers who seek answers in black boxes’ [2].

Ethical Considerations

Ann-Katrien Oimann, an expert in AI and ethics at KU Leuven and KMS, warns of the potential dangers of AI use in political decision-making. She states that ‘An AI system is only as good as the training data it has been trained on’, and that this training data often contains political biases. Oimann also highlights the risk of ‘automation bias’, where people tend to accept the outcomes of AI systems as true without critically questioning them [2].

Tips for Responsible AI Use

Oimann offers some tips for responsible AI use in political decision-making. Politicians should not share sensitive or confidential information, should not rely on chatbots for legal advice or for factual claims that cannot be verified, and should be transparent about their AI use. The problem, she suggests, lies not in the use of chatbots but in how they are used [2].

Impact on Public Opinion

The revelation of Kristersson’s ChatGPT use has also influenced public opinion. Many question whether the office of prime minister risks being reduced to a human interface for AI. Oimann argues that ‘people vote for a prime minister, not for a chatbot’, which further fuels the debate about the limits of AI use in politics [2].

Sources