How much water and electricity does a chat with ChatGPT cost?
Hilversum, Saturday, 8 November 2025.
EenVandaag highlights that the use of large language models in the Netherlands in 2025 not only consumes a lot of energy but also requires significant amounts of water for cooling and electricity generation. Research shows that a session of roughly 10–50 questions to ChatGPT can amount to about 0.5 litres of indirect water use, a finding that sharpens the public debate on digital sustainability. Experts explain that the water demand stems from datacentre cooling, electricity generation and chip production, and that the rapid growth of ICT has offset efficiency gains, keeping national energy consumption roughly level over the past ten years. For organisations such as news media, libraries and public information services, this raises questions about transparency from AI providers, responsible use and media literacy around environmental claims. The story discusses technical causes, societal costs and practical policy choices: from seeking more efficient models to communicating with users about the ecological impact of AI.
Introduction: why AI and sustainability should be examined together
The rise of large language models such as ChatGPT opens up new possibilities for modern public information and communication, but at the same time raises critical questions about ecological impact and transparency for public organisations [1]. EenVandaag highlights that the use of such models in the Netherlands in 2025 not only demands a great deal of energy but also consumes water for cooling and electricity generation, a factor that sharpens the debate around digital sustainability [1].
In concrete terms: how much water and electricity per conversation?
Research summarised in the report indicates that a session of roughly 10–50 questions to ChatGPT can amount to about 0.5 litres of indirect water use, with datacentre cooling, water for electricity generation and chip production cited as the causes [1]. It also notes that consumption was very high during ChatGPT's introductory phase, although some models have since become more compact and efficient [1].
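To give a sense of scale, the report's figures can be turned into a rough per-question estimate. The sketch below simply divides the cited 0.5 litres across the cited session sizes; the assumption that every question weighs equally is illustrative and not part of the report.

```python
# Back-of-envelope estimate: indirect water use per ChatGPT question,
# based on the report's figure of ~0.5 litres per session of 10-50 questions.
# Spreading the water evenly across questions is an illustrative assumption.

WATER_PER_SESSION_L = 0.5          # litres, figure cited in the report
QUESTIONS_PER_SESSION = (10, 50)   # session-size range cited in the report

for n in QUESTIONS_PER_SESSION:
    per_question_ml = WATER_PER_SESSION_L / n * 1000  # litres -> millilitres
    print(f"{n} questions per session -> roughly {per_question_ml:.0f} ml per question")

# Prints roughly 50 ml per question for a 10-question session
# and roughly 10 ml per question for a 50-question session.
```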
AI applications in public information and communication
AI is used in public information for personalised information delivery (adapting language and content to reader profiles), chatbots for public services (24/7 answers to frequently asked questions) and AI-driven campaigns (targeting and A/B testing of messages) — applications that can increase the efficiency and accessibility of information distribution [GPT][1].
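To make the A/B-testing element concrete, the sketch below compares the click-through rates of two hypothetical message variants with a standard two-proportion z-test; the counts and variant labels are invented for illustration and do not come from the report.

```python
# Minimal A/B comparison of two campaign message variants (illustrative figures).
# A two-proportion z-test checks whether the observed difference in click-through
# rates is larger than chance variation would explain.
from math import sqrt

# Hypothetical counts; in practice these come from the campaign tooling.
clicks_a, sent_a = 120, 2000   # variant A
clicks_b, sent_b = 158, 2000   # variant B

rate_a = clicks_a / sent_a
rate_b = clicks_b / sent_b

# Pooled proportion and standard error for the difference in rates
pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
z = (rate_b - rate_a) / se

print(f"variant A: {rate_a:.1%}, variant B: {rate_b:.1%}, z = {z:.2f}")
print("difference unlikely to be chance" if abs(z) > 1.96 else "no clear difference")
```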
Practical examples and success criteria
Information teams can use AI to segment large audiences and automatically simplify or localise texts; such practices improve reach and comprehension provided the models used are checked for accuracy and bias [GPT][1]. EenVandaag notes that organisations — such as news media, libraries and public information bodies — therefore need visibility of both the benefits and the environmental and reliability costs of AI use [1].
How AI helps make complex information accessible
Language models can rewrite complex technical texts into understandable language, summarise for busy audiences and generate alternative representations (for example short bullets, infographics or question-and-answer formats) that improve information transfer — a capacity gain particularly useful for vulnerable or less-literate groups [GPT][1].
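As an illustration of what such rewriting can look like in practice, the sketch below asks a language model to produce a plain-language version of a technical sentence, using the OpenAI Python client as one possible provider; the model name, prompt wording and helper function are assumptions made for the example, not choices endorsed by the report.

```python
# Sketch: asking a language model to rewrite a technical passage in plain language.
# Uses the OpenAI Python client as one possible provider; the model name and the
# prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def simplify(text: str, audience: str = "readers at B1 language level") -> str:
    """Return a plain-language rewrite of `text` for the given audience."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; pick the smallest model that is good enough
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text for {audience}. "
                        "Keep all facts, use short sentences, avoid jargon."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(simplify("The data centre uses adiabatic evaporative cooling, which consumes make-up water."))
```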
Measuring effectiveness and continuous optimisation
AI tools offer possibilities for real-time monitoring of interactions, measuring readability and analysing which formulations yield the most understanding or engagement; this measurable feedback can speed up campaign improvements, provided analysis and interpretation are transparent and comply with privacy rules [GPT][1].
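One concrete form such measurement can take is an automated readability score. The sketch below computes a rough Flesch reading-ease estimate with a naive syllable heuristic, purely to illustrate the kind of metric meant; production tooling would use a validated readability measure for the language in question.

```python
# Rough readability check: Flesch reading ease with a naive syllable heuristic.
# Higher scores mean easier text; the heuristic is approximate and only meant
# to illustrate the kind of automated metric the report alludes to.
import re

def count_syllables(word: str) -> int:
    # Count groups of consecutive vowels as syllables (crude approximation).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

original = "The adiabatic cooling installation utilises evaporative processes."
rewritten = "The cooling system works by letting water evaporate."
print(f"original: {flesch_reading_ease(original):.0f}, "
      f"rewritten: {flesch_reading_ease(rewritten):.0f}")
```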
Privacy and trust issues in public deployment
Public organisations deploying AI must pay explicit attention to data minimisation, storage policies and the question of whether personal data should be sent to external AI providers at all, since the use of commercial models has implications for privacy and oversight; according to the report, this is something media and libraries should raise with their providers [1].
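What data minimisation can mean in practice is sketched below: a small, hypothetical redaction step that strips obvious personal identifiers from a citizen's question before anything is sent to an external AI provider. The patterns are deliberately simplistic and only illustrate the idea.

```python
# Sketch of a data-minimisation step: strip obvious personal identifiers from a
# citizen's question before sending it to an external AI provider. The regular
# expressions are deliberately simple and illustrative; real deployments need
# more thorough PII detection and a clear storage policy.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+31|0)[\s-]?\d(?:[\s-]?\d){8}\b"),  # Dutch numbers
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

question = "My benefit was stopped, mail me at jan@example.nl or call 06-12345678."
print(redact(question))
# -> "My benefit was stopped, mail me at [email removed] or call [phone removed]."
```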
Inclusivity and reliability: risks of automatic personalisation
Automatic personalisation can reach groups more effectively, but carries the risk of exclusion or the reinforcement of existing inequalities when training data are not representative or when automatic adjustments contain errors; the report stresses that organisations must keep validating outputs and retain human oversight in public communication [1][GPT].
Ecological considerations for organisations
The report urges organisations to develop policies that take into account the ecological costs of AI — from choosing more efficient models and localising compute tasks to demanding transparency from providers about energy and water use — because local savings at the ICT level can otherwise be negated by the extra consumption of large-scale AI services [1].
Communication with the public and media literacy
To maintain public trust, information bodies should not only use AI but also openly communicate when and how AI has been used, what uncertainties exist and what environmental impact this entails; the report argues that media literacy around environmental claims and AI use is important for citizens to make informed choices [1].
Practical steps for organisations
Concrete policy steps emerging from the report are: asking providers for energy and water data, prioritising more efficient or smaller models for routine tasks, deploying local (on-premises) solutions where appropriate and informing the public about trade-offs between service provision and sustainability [1].
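As one illustration of what "prioritising more efficient or smaller models for routine tasks" could look like, the sketch below routes short, routine questions to a smaller model and reserves a larger one for complex requests; the model identifiers, length threshold and keyword list are placeholders, not recommendations from the report.

```python
# Sketch of a routing policy: send routine questions to a small, cheaper model
# and reserve a larger model for complex requests. Model identifiers, the length
# threshold and the keyword list are placeholders, not figures from the report.

SMALL_MODEL = "small-efficient-model"   # placeholder identifier
LARGE_MODEL = "large-general-model"     # placeholder identifier

ROUTINE_KEYWORDS = {"opening hours", "address", "appointment", "status"}

def choose_model(question: str) -> str:
    """Pick a model based on a crude notion of task complexity."""
    is_short = len(question.split()) <= 30
    is_routine = any(keyword in question.lower() for keyword in ROUTINE_KEYWORDS)
    return SMALL_MODEL if (is_short and is_routine) else LARGE_MODEL

print(choose_model("What are the opening hours of the city library?"))              # small model
print(choose_model("Explain how the new benefits rules affect single parents."))    # large model
```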