
How an AI-Generated Doctor's Voice Can Mislead You

2025-11-21 nepnieuws (fake news)

The Netherlands, Friday, 21 November 2025.


Imagine this: a well-known doctor or influencer warns you about fatigue – but it’s completely fake. Recent research has revealed that over 40 Dutch doctors and public figures are being exploited in deepfake videos promoting supplements from Wellness Nest [1][2][3]. The videos use real footage of doctors and prominent Dutch individuals – such as sports entrepreneur Arie Boomsma, influencer Monica Geuze, and neuropsychologist Erik Scherder – but pair it with AI-generated voices spreading false claims, for instance that the product relieves menopausal symptoms or fatigue, without any scientific evidence of therapeutic effectiveness [1][2][3]. The technique creates an illusion of credibility: a trusted authority, such as a doctor or a familiar public figure, appears to be offering advice, when in fact no such endorsement exists. The supplements are marketed with claims about reducing fatigue and menopausal symptoms, despite the lack of scientific proof for the effectiveness of shilajit, the substance they are based on [2]. The credibility of the people depicted is thus exploited to drive sales – a practice the Dutch Medical Association (KNMG) calls ‘harmful and misleading’ and a risk to public health [2].

The Network Behind the Deepfakes: Vietnamese Entrepreneurs and Affiliate Programmes

Behind the Wellness Nest network are Vietnamese entrepreneurs operating through the company Pati Group, registered both in Colorado (United States) and in Vietnam [2]. The network uses a so-called affiliate programme to widen the spread of the deepfake videos: people are encouraged to become trade partners (affiliates) who sell the supplements via TikTok and earn commissions [1][2]. Screenshots show PayPal payouts ranging from $1,700 to $66,000 per month for affiliates, and winners of internal competitions appear to earn over $100,000 monthly, although Pointer Checkt could not verify these earnings [2]. A Pati Group employee denies that the company uses deepfakes, but despite repeated requests the company has provided no written response [2]. The dissemination of this deceptive content is thus not only technically enabled by AI but also economically driven by a system that rewards deception, underscoring the urgency of oversight in digital marketing [1][2].

How TikTok and Trustpilot Enable the Spread of Fake News

The deepfake videos were online on TikTok for two months before being removed, despite clear signs of deception [2]. TikTok acknowledges that its moderation does not guarantee full compliance with its own rules: ‘Despite our extensive moderation processes, we cannot guarantee that all content complies with our guidelines’ [2][3]. The spread is further reinforced via Trustpilot, where Wellness Nest holds a TrustScore of 4.3/5 based on 942 customer reviews [4]. Customers report mixed experiences: some mention improved energy and better sleep, while others complain about delivery delays of nearly a month, unexpected customs fees of €10, automatic repeat orders placed without explicit opt-in, and the inability to delete personal data [4]. One customer, Angélique, even stated that the company stores her credit card details ‘without my consent’, which could constitute a breach of data protection rules [4]. On 4 November 2025, another customer publicly accused Wellness Nest of using AI-generated deepfake videos in its Instagram ads [4]. This combination of platform reach and customer reviews creates an illusion of popularity and reliability, even though the product and its claims have barely been investigated [4].

Signs of a Deepfake: How to Spot Fake Videos

There are technical indicators that may point to AI-generated images or voices. According to Digiwijzer, visual indicators include unnatural shadows, incorrect finger proportions, unrealistic lighting, and unnatural teeth or eyes [3]. Audio indicators include robotic-sounding voices, a lack of emotional expression, and unnaturally fluent speech [3]. These signs are often present in the deepfake videos promoting Wellness Nest, where the voices of well-known doctors and influencers are mimicked without those individuals ever having spoken the words [1][2]. Online tools exist to inspect metadata or detect image manipulation, such as those developed by the European Data Protection Supervisor [3]. For users, it is crucial to learn to spot inconsistencies: check whether the light source in the video makes sense, whether eye and lip movements match the spoken words, and whether the voice sounds computer-generated. If in doubt, verify the source: check whether the video was posted by an official page or a recognised news outlet, not by an unknown account on TikTok or Instagram [3].
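For technically inclined readers, the metadata check mentioned above can be illustrated with a few lines of standard-library Python. This is a crude sketch, not one of the tools the article refers to: it only tests whether a JPEG file carries an EXIF camera-metadata segment at all. The absence of EXIF is a weak signal on its own – many platforms strip metadata on upload – but camera metadata is one of the things the dedicated detection tools inspect.

```python
def has_exif(path):
    """Crude heuristic: does this JPEG file contain an EXIF metadata segment?"""
    with open(path, "rb") as f:
        data = f.read(65536)  # EXIF sits near the start of the file
    if not data.startswith(b"\xff\xd8"):
        return False  # not a JPEG at all (no SOI marker)
    # EXIF data lives in an APP1 segment (marker 0xFFE1) whose payload
    # begins with the ASCII identifier "Exif" followed by two null bytes.
    return b"\xff\xe1" in data and b"Exif\x00\x00" in data
```

A `False` result means only that no camera metadata was found, which warrants further checks, not a verdict; a `True` result does not prove authenticity either, since metadata can be forged.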

The Democratic Threat of AI-Generated Fake News

The commercial use of deepfakes is more than a privacy or marketing issue – it poses a threat to democratic values [1][2]. When public figures, including doctors, are misrepresented in videos using their image or voice without consent, trust in the digital public sphere is undermined [1][2]. The Dutch Medical Association (KNMG) warns that the spread of this deceptive content leads citizens to make incorrect health decisions and poses a risk to public health [2][3]. The situation is comparable to the use of deepfakes in non-consensual pornography, where dozens of Dutch women had their faces superimposed onto the bodies of pornographic actresses [1]. The future of democracy depends on our ability to distinguish truth from falsehood. If we cannot identify AI-generated content, we risk losing trust in information, media, and public figures [3]. That makes media literacy not a luxury, but an essential life skill for every digital citizen.

Sources