Vietnam builds an AI law that not only regulates, but also creates trust
Hue, Friday, November 21, 2025.
Vietnam is taking a unique step in global AI regulation by introducing an artificial intelligence law that not only emphasizes ethics and accountability, but also explicitly prohibits the use of AI in elections. One of the most striking aspects: the draft includes a specific ban on creating deepfakes or manipulating votes using AI—directly responding to the growing threat of digital disinformation. At the same time, the law lays the foundation for a national AI infrastructure built on clean, shared data and strict security measures. The debate in the National Assembly reflects a conscious balancing act between innovation and safety—a balance that is not only significant for Vietnam, but also serves as a model for emerging markets worldwide.
Vietnam opens the door to a human-centered AI future
Vietnam is taking a significant step forward in global artificial intelligence (AI) governance by presenting a new legislative proposal that not only promotes technical innovation, but also adapts the ethical and legal frameworks for AI use to the realities of 21st-century society. The draft law, submitted to the National Assembly on November 20, 2025, and discussed in group sessions in Lang Son, Dong Nai, and Hue on November 21, 2025, aims to strike a balance between technological progress and civil liberties [1][2]. The law establishes a legal framework for AI development and application, including risk classification, mandatory ethical audits, and a clear structure of accountability among users, developers, and providers [2]. The core of the proposal lies in minimizing harm to individuals and society: AI systems that pose significant risks to citizens' rights and safety are subject to strict oversight [2]. Delegates emphasized that technology can only function safely and responsibly within a clear legal and ethical framework, which is especially important for journalism and public information provision [1]. The initiative shows that even in emerging markets such as Vietnam, the focus is shifting from technological innovation alone to responsible use, something of great importance for public communication as well [1].
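To make the risk-classification idea concrete, here is a minimal illustrative sketch of how a tiered classification rule might look in software. The tier names, criteria, and thresholds below are assumptions for illustration only; the draft law does not publish concrete tiers.

```python
from dataclasses import dataclass

# Hypothetical sketch: the draft mandates risk classification for AI systems,
# but the tiers below (HIGH/LIMITED/MINIMAL) and the criteria are
# illustrative assumptions, not taken from the draft law.

@dataclass
class AISystem:
    name: str
    affects_citizen_rights: bool    # e.g. credit scoring, hiring decisions
    affects_public_safety: bool     # e.g. medical diagnostics, surveillance
    generates_public_content: bool  # e.g. generative media tools

def classify_risk(system: AISystem) -> str:
    """Return an illustrative risk tier for an AI system."""
    if system.affects_citizen_rights or system.affects_public_safety:
        return "HIGH"     # strict oversight, mandatory ethical audits
    if system.generates_public_content:
        return "LIMITED"  # transparency and labeling obligations
    return "MINIMAL"      # basic duty of care only

print(classify_risk(AISystem("x-ray triage", False, True, False)))  # HIGH
```

Under such a scheme, the "strict oversight" the article describes for systems that pose significant risks to citizens' rights and safety would apply to everything landing in the top tier.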
Ethical boundaries for AI: from fake news to election fraud
One of the most notable elements of Vietnam’s AI draft law is the explicit prohibition on using AI to manipulate elections, incite political unrest, or compromise national security. On November 21, 2025, Nguyen Thanh Hai, Chair of the National Assembly’s Committee on Science, Technology, and Environment, proposed adding regulations that ban the use of AI to manipulate votes, generate fake content, or inflame public sentiment [6]. These proposals respond directly to the growing threat of digital disinformation and the misuse of generative AI in political campaigns. The law also prohibits the creation of fake images, fake video clips, and AI-based fraud, and establishes principles for detecting violations early, from the research phase through to AI deployment [6]. This measure is crucial at a time when AI-generated content such as deepfakes is becoming increasingly realistic and has already been used in several countries to influence public opinion [6]. Delegates emphasized that such actions are not only technically feasible but also threaten democratic order and civil liberties [6]. The law seeks balance by promoting the technology while clearly defining the ethical and legal boundaries of acceptable behavior [2].
AI in practice: from healthcare to public safety
In practice, AI is already being applied in essential sectors such as healthcare and public safety, offering both opportunities and challenges. At Hue Central Hospital, AI is already used for image diagnostics, where algorithms assist in interpreting X-rays, CT scans, and brain imaging [1]. Delegate Pham Nhu Hiep (Hue) stressed the importance of accountability for AI errors in medical contexts, where human oversight remains essential [1]. Similarly, AI-powered video surveillance systems are being deployed worldwide to enhance public safety by detecting suspicious behavior—such as loitering, unusual gatherings, or abandoned objects—in real time [7]. These systems improve the efficiency of security operations by automating routine monitoring and providing early warnings about potential threats [7]. However, such applications also raise concerns about privacy, algorithmic bias, and mass surveillance [7]. The Vietnamese law aims to mitigate these risks by implementing measures such as data anonymization, strict access controls, community oversight, and ensuring fairness in AI algorithms [7]. Delegates emphasized that finding a balance between security needs and individual rights is crucial for the responsible deployment of AI surveillance in urban areas [7].
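The data-anonymization measure mentioned for surveillance systems can be sketched in a few lines: instead of storing a detected subject's raw identifier, a system can store only a salted, irreversible pseudonym, so events remain correlatable within a session but cannot be linked back to an identity. The function and field names below are illustrative assumptions, not part of any system named in the article.

```python
import hashlib
import secrets

# Hypothetical sketch of a data-anonymization step for AI video surveillance:
# only a salted hash of the detected subject's track ID is stored.
# All names here are illustrative assumptions.

SESSION_SALT = secrets.token_bytes(16)  # rotated per session or per day

def anonymize_event(track_id: str, event_type: str, camera: str) -> dict:
    """Replace a raw track ID with a salted, irreversible pseudonym."""
    digest = hashlib.sha256(SESSION_SALT + track_id.encode()).hexdigest()
    return {
        "subject": digest[:16],  # truncated pseudonym, not reversible
        "event": event_type,     # e.g. "loitering", "abandoned_object"
        "camera": camera,
    }
```

Rotating the salt limits how long pseudonyms stay linkable, which is one way to reconcile the early-warning function of such systems with the privacy concerns the delegates raised.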
Infrastructure and accountability: the foundation for trust
The draft law also lays the groundwork for a national AI infrastructure built on clean, shared, and up-to-date data. Delegates proposed that data used for AI systems must be accurate, sufficient, clean, current, uniform, and shared, a core principle for effective and secure AI services [6]. A mechanism is proposed for interconnection and data exchange to prevent data silos and bottlenecks in AI research and development [6]. Mandatory requirements are planned for network security, data protection, and defense of the national AI infrastructure to prevent breaches and data leaks [6]. These measures are essential for building public and corporate trust in AI technology. Delegate Trinh Xuan An (Dong Nai) emphasized that input data for AI must be ‘correct - sufficient - clean - alive,’ a fundamental prerequisite for reliable outcomes [1]. When liability for AI errors is determined, the final decision rests with humans; AI functions only as an assistant [1]. Violations may be addressed administratively, civilly, or criminally, although the draft does not yet clearly delineate these three forms of liability [1].
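The ‘correct - sufficient - clean - alive’ principle quoted by delegate Trinh Xuan An can be read as a set of pre-training data-quality gates. The sketch below shows one hypothetical way to encode such gates; the concrete thresholds (MIN_ROWS, MAX_AGE) are assumptions for illustration, not values from the draft.

```python
from datetime import datetime, timedelta

# Illustrative data-quality gates for the 'correct - sufficient - clean -
# alive' principle. Thresholds are illustrative assumptions only.

MIN_ROWS = 1000                # "sufficient": enough records to be usable
MAX_AGE = timedelta(days=365)  # "alive": refreshed within the last year

def validate_dataset(rows: list[dict], last_updated: datetime) -> list[str]:
    """Return a list of data-quality violations; an empty list means pass."""
    problems = []
    if len(rows) < MIN_ROWS:
        problems.append("insufficient: fewer than MIN_ROWS records")
    if any(None in row.values() for row in rows):
        problems.append("not clean: missing values present")
    if datetime.now() - last_updated > MAX_AGE:
        problems.append("not alive: data is stale")
    return problems
```

"Correct" and "uniform" would in practice require domain-specific checks (schema validation, cross-source consistency) beyond a generic sketch like this one.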
Looking ahead: from law to implementation and international cooperation
The AI draft law is still under review, with the drafting committee’s session on November 21, 2025, serving as a guiding step for future decisions [1]. Delegates recommended expanding the regulation of the National Committee for Artificial Intelligence, with a clear legal status, defined functions, powers, organizational structure, and coordination with ministries [1]. Additionally, it was proposed to add a dedicated article on prohibited actions, based on principles such as transparency, safety, accountability, human oversight, fairness, non-discrimination, and support for controlled innovation [1]. The law has been developed in collaboration with global leaders, scientists, and spiritual figures—from Harvard to Hanoi, from Paris to Tokyo—with a focus on global cooperation and ethical leadership [2]. The Boston Global Forum and the AI World Society (AIWS) play a prominent role in shaping Vietnam’s framework, with the AIWS-DASI certification framework rewarding ethical digital assets worldwide [2]. This international collaboration signals a growing trend where countries not only build domestic capacity but also become part of a global network of ethical AI governance [2]. Delegates stressed that only when the above mechanisms are fully implemented will the AI ethical framework be effective, ensuring that technology develops in the right direction—safely and for people [1].