How Vietnam Uses AI to Make Government Smarter
Hanoi, Friday, November 28, 2025.
At ICTA 2025 in Hanoi, an unexpected step forward in AI development was unveiled: Vietnam is focusing not only on technological advancement but also on sustainable and inclusive applications within the public sector. One of the most notable developments was the plan to integrate AI into 40% of government services within five years, spanning sectors from healthcare to education. The conference demonstrated that Vietnam is not just a rapidly rising star in the AI landscape but also a pioneer in combining innovation with social objectives. Its emphasis on ethics, reliability, and local adaptation makes it a valuable example for other countries in Asia and beyond.
AI in the Public Sector: From Idea to Implementation
During the 4th International Conference on Advancement in Information and Communication Technology (ICTA 2025) in Hanoi, Vietnam, a structured plan was unveiled to scale up the integration of artificial intelligence (AI) across public services. As announced at the conference, the goal is to implement AI in at least 40% of public services within five years, with a focus on sectors such as healthcare, education, and urban planning [1]. This strategy is a direct response to increasing pressure on public institutions and the need for more efficient, scalable solutions. The conference, organized by the Institute of Information Technology at the National University of Vietnam and the University of Information and Communication Technology of Thai Nguyen, served as a platform for knowledge exchange, the announcement of new technologies, and the strengthening of international collaboration in AI [1]. One of the most significant outcomes was the launch of the National AI Innovation Hub in Hanoi, with initial funding of 350 billion VND (approximately €12.7 million) from the Ministry of Information and Communications Technology [1]. This initiative underscores that Vietnam is not only investing at the technological frontier but also pursuing a strategic vision for sustainable and socially responsible innovation in digital governance.
Personalized Information and Accessibility for Diverse Stakeholders
A central goal of Vietnam’s AI strategy is to improve information delivery to diverse stakeholder groups, including people with disabilities, elderly citizens, and low-literacy communities. During ICTA 2025, a prototype of an AI-driven information platform was presented that automatically translates complex government information into simpler language based on the user’s reading level and language preferences. The system uses natural language processing (NLP) and adaptive learning models to tailor content to individual users, an approach described in AI research subfields such as language representation and semantic analysis [2]. For individuals with visual impairments, a text-to-speech and image description module is being developed and integrated into mobile government applications, drawing on computer vision and affective computing, a line of research exemplified by earlier systems such as MIT’s Kismet robot [2]. The objective is to reduce the digital divide and ensure that information is not only accessible but also meaningful and relevant to every citizen. The effectiveness of such systems is measured through usage analytics and user testing, in which the reliability of AI outputs plays a crucial role [1].
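To make the idea of reading-level adaptation concrete, the sketch below is an illustration only, not the ICTA 2025 prototype: it estimates a rough grade level for candidate texts and serves the pre-written variant closest to a user’s target level. The variants, the target level, and the scoring formula are assumptions for demonstration purposes.

```python
# Minimal sketch (not the platform described at ICTA 2025): choose a simplified
# rewrite of a government notice based on an estimated reading level. Here the
# "simplification" just selects from pre-written variants; a real system would
# generate the rewrite with an NLP model.

import re

def estimate_grade_level(text: str) -> float:
    """Very rough Flesch-Kincaid-style estimate from sentence and word length."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    n_words = max(1, len(words))
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

def pick_variant(variants: dict, target_grade: float) -> str:
    """Return the pre-written variant whose estimated grade level is closest to the target."""
    return min(variants.values(), key=lambda t: abs(estimate_grade_level(t) - target_grade))

variants = {
    "formal": "Applicants must submit the completed declaration form no later than "
              "thirty days after the notification date to remain eligible for benefits.",
    "plain":  "Send in your filled-in form within 30 days of getting this letter. "
              "If you do not, you may lose your benefits.",
}

# A user profile with a low target reading level receives the plain variant.
print(pick_variant(variants, target_grade=5.0))
```

A production system would replace the pre-written variants with a model that rewrites the source text, but the select-by-readability loop would look broadly similar.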
Chatbots and Automation in Public Service Delivery
One of the most concrete applications of AI in Vietnam’s public sector is the deployment of intelligent chatbots for service delivery. During ICTA 2025, multiple prototypes were showcased that can accurately answer complex questions about social security, tax filings, and healthcare. These chatbots are based on advanced generative AI models, such as those in the GPT family, and use reinforcement learning from human feedback (RLHF) to reduce output errors [2]. In a pilot project in Hanoi, the chatbots were deployed at a local municipal office, reducing the average application processing time from 18 minutes to 4.2 minutes within two months of implementation [1]. An analysis of user feedback showed a satisfaction rate increase from 58% to 89% over the same period [1]. The technology is designed within an ethical framework prioritizing transparency, privacy protection, and the avoidance of known failure modes such as ‘hallucinations’ in AI-generated responses [2]. The system architecture is built on a hybrid model that leverages both local data and open sources, enhancing reliability and reducing dependence on external platforms [1].
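The ‘hybrid model’ described above implies a retrieval step over locally held government data before any generative model is consulted. The sketch below is an assumption of how such a step might look, using a toy keyword-overlap scorer and invented policy snippets rather than the actual ICTA 2025 pipeline.

```python
# Minimal sketch of the hybrid retrieval idea: answer a citizen query from
# locally held policy snippets before handing the result to a generative model.
# The snippets and scoring are illustrative assumptions, not the production system.

from collections import Counter

LOCAL_DOCS = {
    "tax_deadline": "Personal income tax declarations must be filed by the end of the "
                    "fourth month after the end of the tax year.",
    "social_security": "Citizens can check social insurance contributions online using "
                       "their personal identification number.",
}

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def retrieve(query: str, docs: dict) -> tuple:
    """Return the (id, text) of the local document with the largest word overlap."""
    q = tokenize(query)
    def overlap(item):
        return sum((q & tokenize(item[1])).values())
    return max(docs.items(), key=overlap)

doc_id, passage = retrieve("When do I need to file my income tax declaration?", LOCAL_DOCS)

# In the hybrid design, the retrieved passage would be passed as grounding
# context to a generative model (e.g. an RLHF-tuned GPT-style model); logging
# the source document id supports the transparency goals mentioned above.
print(doc_id, "->", passage)
```

Grounding answers in retrieved local documents, rather than relying on the generative model alone, is one common way to limit hallucinations and dependence on external platforms.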
AI-Driven Awareness Campaigns and Effectiveness Measurement
Vietnam is also experimenting with AI-driven awareness campaigns designed to reach broad and diverse audiences. During ICTA 2025, a system was demonstrated that automatically adapts social media messages to the language, culture, and communication style of specific groups, such as youth, farmers, or immigrants. This is made possible by algorithms for sentiment analysis, language profiling, and social network modeling [2]. The effectiveness of these campaigns is measured with real-time analytics, drawing on click patterns, response rates, and conversion metrics. The data are used to continuously optimize campaign strategies via machine learning, enabling the system to learn which messages are most persuasive for which groups [1]. The first pilot, focused on vaccination awareness in rural areas, led to a 23% increase in vaccination readiness within three months, compared to a traditional campaign without AI [1]. The system also employs an ‘explainable AI’ approach, in line with work by bodies such as the UK AI Safety Institute, to clarify the rationale behind each recommended action, promoting transparency and building trust in the technology [2].
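The feedback loop described here, learning from response data which message works for which group, can be illustrated with a simple epsilon-greedy bandit. The segment, variants, and conversion rates below are invented for the sketch; the real system presumably uses richer models.

```python
# Illustrative sketch (an assumption, not the ICTA 2025 system): an epsilon-greedy
# bandit that learns which message variant converts best for one audience segment,
# the kind of feedback loop the campaign optimization described above relies on.

import random

class SegmentBandit:
    def __init__(self, variants, epsilon=0.1):
        self.variants = variants
        self.epsilon = epsilon
        self.shown = {v: 0 for v in variants}      # times each variant was shown
        self.converted = {v: 0 for v in variants}  # observed conversions

    def choose(self) -> str:
        if random.random() < self.epsilon:         # explore occasionally
            return random.choice(self.variants)
        # otherwise exploit the variant with the best conversion rate so far
        return max(self.variants,
                   key=lambda v: self.converted[v] / self.shown[v] if self.shown[v] else 0.0)

    def record(self, variant: str, converted: bool) -> None:
        self.shown[variant] += 1
        self.converted[variant] += int(converted)

# One bandit per segment (e.g. a rural-youth segment); conversions come from the
# click and response analytics mentioned above. The rates here are simulated.
bandit = SegmentBandit(["plain_text", "short_video", "infographic"])
true_rates = {"plain_text": 0.05, "short_video": 0.12, "infographic": 0.08}
for _ in range(1000):
    v = bandit.choose()
    bandit.record(v, converted=random.random() < true_rates[v])
print(max(bandit.shown, key=lambda v: bandit.converted[v] / max(1, bandit.shown[v])))
```

The same loop generalizes to many segments by keeping one bandit per group, which is how a campaign system can learn different preferred formats for youth, farmers, or immigrant communities.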
Challenges Around Privacy, Inclusivity, and Reliability
Despite this progress, challenges remain, particularly concerning privacy, inclusivity, and the reliability of AI systems. Discussions at ICTA 2025 highlighted that AI system failures, such as those observed in the COMPAS risk assessment tool, can lead to systemic unfairness even when models appear statistically ‘equivalent’ across groups [2]. Although Vietnam has not published specific figures on bias in its own systems, the approach of ‘fairness through blindness’ is explicitly rejected, echoing researcher Moritz Hardt, who argues that ‘the most robust fact in this research is that fairness through blindness does not work’ [2]. To guard against such issues, a national audit committee has been established to regularly monitor AI applications for discrimination. Concerns also persist about the energy and environmental impact of AI data centers: according to the International Energy Agency (IEA), AI-related energy consumption could generate 180 million tons of CO₂ emissions by 2025, with projections reaching 300 to 500 million tons without mitigation [2]. In response, Vietnam has introduced a mandatory energy assessment for all new AI projects and aims to source 70% of the energy for public AI infrastructure from renewable sources [1]. Efforts are also underway to develop an open-domain AI declaration that ensures transparency and accountability in AI decision-making, in alignment with the European AI Act and the Council of Europe Framework Convention [2].
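As an illustration of the kind of check an audit committee could run, the sketch below compares false positive rates across groups, the disparity at the heart of the COMPAS debate. The records and group labels are made up, and this is not Vietnam’s actual audit procedure.

```python
# Sketch of a group-wise audit check (an illustration, not the national audit
# committee's procedure): compare false positive rates across groups. Unequal
# error rates can signal unfairness even when protected attributes were never
# used as inputs, which is why "fairness through blindness" is rejected above.

def false_positive_rate(records, group):
    """FPR among members of `group` whose true outcome is negative."""
    negatives = [r for r in records if r["group"] == group and not r["actual"]]
    if not negatives:
        return 0.0
    return sum(r["predicted"] for r in negatives) / len(negatives)

# Toy audit data: each record holds the group, the model's decision, and the truth.
records = [
    {"group": "A", "predicted": 1, "actual": 0},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 0},
    {"group": "B", "predicted": 0, "actual": 1},
]

for g in ("A", "B"):
    print(g, round(false_positive_rate(records, g), 2))
# A large gap between groups would flag the system for review, regardless of
# whether the model was "blind" to group membership.
```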