Why AI Is Not a Deity – And What That Means for You
Madrid, Thursday, 13 November 2025.
Ana Lazcano, director of the University Institute of Artificial Intelligence at the Francisco de Vitoria University in Madrid, warned on 12 November 2025 that artificial intelligence is not an omnipotent force but a tool with limitations and risks. The most troubling reality? AI has opened a spectacular technological gap between teachers and students, often rendering written assignments meaningless. She emphasises that critical thinking, ethical principles, and treating AI as a supplement rather than a replacement are essential. Her warning is not a technical assertion but a call for human responsibility in an era of rapid change, and a reminder that human wisdom remains irreplaceable.
AI as a Tool, Not a Deity: The Core Message of Ana Lazcano
Ana Lazcano, director of the University Institute of Artificial Intelligence at the Francisco de Vitoria University in Madrid, warned on 12 November 2025 that artificial intelligence is not an omnipotent force but a tool with limitations and risks [1]. Drawing on her institute's multidisciplinary approach, which brings together experts from philosophy, anthropology, education, engineering, psychology, and other fields, she stresses that AI must not be credited with properties or qualities it does not possess [2]. Her message underscores the importance of critical thinking, ethical integration, and human responsibility in developing and using AI [1]. She highlights that treating AI as a deity is a fundamental misconception: 'it is far removed from that; it is a complement' [2]. This perspective is not merely technical but ethically grounded, inspired by the approach of Pope Leo XIV, who advises not to fear AI but to understand it and approach it with great caution [1][2].
AI in Public Communication: From Personalised Information to AI-Driven Campaigns
In modern public communication and outreach, AI plays an increasingly significant role, ranging from personalised information delivery to AI-driven awareness campaigns. Advanced algorithms enable information packages to be tailored to individual preferences, age groups, language proficiency, and social context, thereby enhancing the effectiveness of messages [1]. This approach is particularly valuable in reaching diverse audiences, such as youth, older adults, or individuals with lower educational levels, where complex information is simplified without losing the core message [2]. In public service delivery, chatbots are increasingly used to provide quick and consistent answers to basic questions—such as those about social security, healthcare, or tax returns [1]. The application of AI in communication contributes to more efficient information transfer, both in time and cost, and enables campaigns to be measured based on interaction patterns, open rates, and conversion metrics [2]. Thus, the technology not only helps reach target audiences, but also allows for real-time evaluation of effectiveness and strategic adaptation.
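The interaction-based measurement described above can be sketched in a few lines. The metrics below (open rate, conversion rate) are standard campaign measures; the field names and figures are invented for illustration and do not come from the article.

```python
# Hypothetical sketch: basic campaign engagement metrics computed from
# interaction counts. All names and numbers are illustrative.

def open_rate(opened: int, delivered: int) -> float:
    """Share of delivered messages that were opened."""
    return opened / delivered if delivered else 0.0

def conversion_rate(converted: int, opened: int) -> float:
    """Share of opened messages that led to the desired action."""
    return converted / opened if opened else 0.0

# Invented campaign figures for demonstration.
campaign = {"delivered": 10_000, "opened": 3_200, "converted": 480}

print(f"open rate:       {open_rate(campaign['opened'], campaign['delivered']):.1%}")
print(f"conversion rate: {conversion_rate(campaign['converted'], campaign['opened']):.1%}")
```

Real campaign tooling would pull these counts from delivery and analytics platforms, but the underlying evaluation reduces to ratios like these.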
Benefits and Opportunities: Accessibility, Efficiency, and Scalability
One of the greatest advantages of AI in communication is the improved accessibility of complex information. By leveraging natural language processing and text generation, AI systems can transform technical or scientific content into simple, understandable language accessible to all [1]. This is crucial for reducing knowledge barriers, especially for individuals with low literacy levels or those who cannot communicate in their native language [2]. Moreover, AI enhances the efficiency of communication processes: whereas campaigns once took weeks to design, modern AI tools can accomplish this within hours [1]. The scalability is remarkable—a single AI-powered campaign can simultaneously reach millions of people across multiple languages without compromising message quality [2]. This makes AI a powerful instrument during crises, such as pandemics or climate emergencies, where rapid and accurate information is essential [1].
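Whether simplified text actually became more accessible can be checked with crude readability proxies such as average sentence length and average word length. The sketch below is a minimal illustration under that assumption; the two sample sentences are invented, and real systems would use richer measures than these.

```python
# Minimal sketch: compare rough readability proxies (average sentence
# length in words, average word length in characters) for a technical
# sentence and its plain-language rewrite. Sample texts are invented.
import re

def readability_stats(text: str) -> tuple[float, float]:
    """Return (avg words per sentence, avg characters per word)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / len(sentences), sum(len(w) for w in words) / len(words)

technical = ("The fiscal remittance threshold is contingent upon "
             "the aggregate annualised income of the household.")
plain = "How much tax you pay depends on your household's total yearly income."

print(readability_stats(technical))
print(readability_stats(plain))
```

The plain rewrite scores lower on average word length, matching the intuition that shorter, more common words lower the knowledge barrier.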
The Dark Sides: Privacy, Inclusivity, and Reliability
Despite its benefits, the deployment of AI in communication presents serious challenges. One major concern is user privacy. When AI delivers personalised information, data on behaviour, preferences, and personal traits are collected—posing high risks if such data are inadequately secured or misused [2]. Furthermore, AI systems can unintentionally perpetuate unequal or discriminatory patterns, especially if training data are biased [1]. This leads to a lack of inclusivity, where certain groups—such as minority communities or low-income individuals—are underrepresented or even receive inaccurate messages [2]. Additionally, there is an issue of reliability: AI systems can generate false or fabricated information (‘deepfakes’ or ‘hallucinations’), leading to misinformation and erosion of trust [1]. In an era of growing AI use in media and policy, transparency and oversight are essential to prevent misuse and misunderstanding [1].
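The under-representation problem described above is often surfaced by a simple audit of group shares in the training data. The sketch below is a hypothetical illustration: the group labels, counts, and the 15% representation floor are all invented assumptions, not figures from the article.

```python
# Hedged sketch: a representation audit over a fictional training corpus,
# of the kind used to spot under-represented groups. Labels, counts, and
# the threshold are invented for illustration.
from collections import Counter

def representation(samples: list[str]) -> dict[str, float]:
    """Share of training samples per group label."""
    counts = Counter(samples)
    total = len(samples)
    return {group: n / total for group, n in counts.items()}

# Fictional group labels attached to corpus samples.
corpus_groups = ["majority"] * 920 + ["minority"] * 80

shares = representation(corpus_groups)
for group, share in sorted(shares.items()):
    print(f"{group}: {share:.0%}")

FLOOR = 0.15  # assumed minimum acceptable share, chosen for the example
underrepresented = [g for g, s in shares.items() if s < FLOOR]
print("underrepresented:", underrepresented)
```

An audit like this only detects skew; correcting it (rebalancing, targeted data collection) and verifying downstream output quality per group are separate steps.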
The Technological Divide and the Need for Ethical Education
Ana Lazcano warned on 12 November 2025 about a ‘spectacular technological divide’ created by AI between teachers and students [1]. According to her, this divide renders written assignments ‘meaningless’ because students often master the technology better than their educators [2]. This situation underscores the urgent need for a transformed learning environment, where educators are supported by a ‘technological support model’ enabling them to effectively harness AI capabilities [1]. The University Institute of Artificial Intelligence, established in November 2025, therefore focuses not only on technical training, but especially on ethical integration and the cultivation of ‘human wisdom’ as an irreplaceable core [1][2]. The expert stresses that it is ‘necessary to lay the foundations of critical thinking,’ as there is ‘much noise’ about AI, but ‘little quality’ [2]. She hopes that, ultimately, AI will help return universities to their original purpose: fostering debate, dialogue, and student-centred learning processes [2].
The Environmental Impact of AI: A Growing Ecological Burden
The growth of AI comes with a significant environmental impact. According to a Cornell University study published on 10 November 2025, AI data centres could emit 24 to 44 million tonnes of CO₂ annually by 2030, equivalent to adding 5 to 10 million cars to American roads [3]. These data centres could also consume between 731 and 1,125 million cubic metres of water per year, comparable to the annual household water usage of 6 to 10 million Americans [3]. The researchers stress that there is no 'silver bullet', but a combination of smart siting (such as in the Midwest or wind-rich states), faster decarbonisation of the power grid, and more efficient operations could cut CO₂ emissions by approximately 73% and water usage by around 86% [3]. Even in the optimistic scenario, carbon emissions would still reach approximately 11 million tonnes per year, requiring 28 gigawatts of wind or 43 gigawatts of solar capacity to offset [3]. These figures underscore that the choices made now about AI infrastructure will determine whether AI supports climate progress or becomes a new environmental burden [3].
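The cited reductions can be sanity-checked arithmetically. The sketch below applies the reported ~73% CO₂ and ~86% water reductions to the projected 2030 ranges; all percentages and ranges are taken from the figures quoted above, and the computation is an illustration, not part of the study itself.

```python
# Arithmetic check of the Cornell figures cited in [3]: apply the
# reported ~73% CO2 and ~86% water reductions to the 2030 projections.
# Ranges and percentages are from the article; the rest is arithmetic.

co2_range_mt = (24, 44)        # million tonnes CO2 per year by 2030
water_range_mm3 = (731, 1125)  # million cubic metres of water per year

co2_reduction = 0.73
water_reduction = 0.86

co2_after = tuple(round(x * (1 - co2_reduction), 1) for x in co2_range_mt)
water_after = tuple(round(x * (1 - water_reduction)) for x in water_range_mm3)

print("CO2 after mitigation (Mt/yr):", co2_after)
print("Water after mitigation (million m3/yr):", water_after)
```

The upper end of the mitigated CO₂ range lands near the roughly 11 million tonnes of residual emissions the study cites, which is consistent with the 73% figure applied to the 44-million-tonne projection.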