
How Google Ads Mimic Fake News to Resemble Real Newspapers

2025-12-09 fake news

Netherlands, Tuesday, 9 December 2025.
Last week it became clear that fake news spread via Google ads so closely resembles genuine articles from AD and NU.nl that many readers cannot tell the difference. The technique behind these fabrications uses advanced AI to replicate not only text but also layout and style. Most alarming is that this method is being deployed at scale, particularly to mislead vulnerable groups. The risk is not just misinformation, but also growing distrust in all media. The question arises: how can you know what you are reading when even the format of a news article can no longer be trusted?


On 2025-12-06, it emerged that Google ads contained fake news resembling content from AD and NU.nl, confirming the role of AI generation technology in spreading disinformation [2]. These ads use AI to imitate the layout, style, and text of trusted media, making the content appear strikingly similar to authentic articles [2]. This makes it nearly impossible for users to distinguish genuine from fabricated news stories, especially when the content is delivered through targeted online advertising platforms such as Google [2].

The combination of AI generation and targeted advertising enables the large-scale distribution of fake news, particularly among vulnerable groups such as older consumers and younger users with less experience in digital media [2][3]. A recent study found that one in five young people has previously been deceived by a fake online shop, indicating a growing risk of digital misinformation among young users [2]. The technology is accessible and rapidly deployable, accelerating the spread of misinformation across the digital landscape [2]. Cybercriminals also exploit Google News for fraud, including fake news and scams, demonstrating that the platforms themselves are being manipulated [2].
The Dutch government is using AI far more than it did a year ago, which adds to the complexity of the problem, since AI is also being used to combat these fabrications [3].

The Role of AI in Generating and Detecting Fake News

AI plays a dual role in the digital landscape: on one hand, it is a powerful tool for generating fake news, and on the other, it aids in detecting and combating misinformation. Advanced techniques such as generative adversarial networks (GANs) and variational autoencoders (VAEs) are used to create synthetic media, including deepfakes that can manipulate both video and audio [4]. In 2025, 93% of social media videos were synthetically generated by AI, indicating a massive spread of AI-generated content [5]. This technology has advanced to the point where it is difficult to determine from visual or auditory cues alone whether a video is authentic [4]. The technology has become accessible to the general public, with tools like Stable Diffusion allowing users to generate portraits of celebrities, such as an AI-generated portrait of actress Sydney Sweeney [4]. In May 2025, the Dutch company Xicoia, a division of Particle6, developed an AI-generated actress named Tilly Norwood, demonstrating that AI is not only used for deception but also for creative applications in the entertainment industry [4]. Yet the risk of misinformation remains significant: a single Medicare fraud campaign used deepfake advertisements that were viewed over 195 million times across YouTube, Facebook, and TikTok, highlighting the scale at which AI is being exploited for fraud [4].

On the other hand, AI tools are also being developed to detect fake news. The Deepfake Detection Challenge, organized by a coalition of technology companies, produced a winning model with 65% accuracy on a test set of 4,000 videos [4]. A team from the University at Buffalo developed a technique that uses reflections of light in the eyes to detect deepfakes without relying on AI detection tools [4]. Research from UCLA shows that 47.3% of internet information from 2003 was already unreliable, indicating that the problem of misinformation is not new, but AI has significantly accelerated it [5].
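The GAN technique mentioned above pits two models against each other: a generator that fabricates samples and a discriminator that tries to tell fake from real. The cited source does not include code, so the following is only a minimal textbook-style illustration on one-dimensional data; all parameter names, learning rates, and distributions are illustrative assumptions, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from N(4, 1). The generator should learn to mimic this.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = a*z + b, a linear map from noise to "data" (illustrative).
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), estimating P(x is real).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c
    # --- Generator update (non-saturating loss): push d(fake) toward 1 ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    dx = -(1 - d_fake) * w  # gradient of -log d(x_fake) w.r.t. x_fake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```

The adversarial loop is the same idea that, at far larger scale and with neural networks instead of two linear maps, produces the deepfake imagery discussed above: the generator improves precisely because the discriminator keeps getting better at catching it.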
There are also legal measures: in the United Kingdom, the Online Safety Act 2023 makes it illegal to create deepfakes, and the country planned to expand the law in 2024 to penalize more serious harms [4]. In China, deepfakes must be clearly labelled, and failure to do so is considered a criminal offence [4]. In the Netherlands, the ACM is taking stricter action against fraud via intermediation services, underscoring the role of regulators in combating AI-based fraud [2].

Practical Tips for Readers to Recognize Fake News

To reduce the risk of believing fake news, readers can take concrete steps to assess reliability. The first step is to verify the source: not only the website name, but also whether it is an officially recognized news organization. Websites spreading fake news often copy the layout of real newspapers such as AD and NU.nl, but their domain names are typically unusual or include words like ‘news’, ‘update’, or ‘info’ [2]. Users can employ tools such as NewsGuard, DBUNK, and Oigetit Fake News Filter to evaluate the trustworthiness of news websites [6]. NewsGuard provides reliability assessments for news websites based on criteria such as transparency, sourcing, and fact-checking [6]. DBUNK is an app that helps users fact-check news through a simple interface [6].

Additionally, it is important to read content critically: watch for implausible claims, dramatic language, or an overly polished structure that resembles an advertisement [2]. When in doubt, cross-check the information against another reliable source. A key tip is to trust your gut instinct: if something seems too good or too alarming to be true, it is likely fake news [2]. The police advise: ‘Hang up immediately’ on phone calls that point to fraud, such as callers claiming to be from PayPal or banks [2]. In a recent initiative, NU.nl asks its readers: ‘Speak up: How do you recognize fake news?’ to promote media literacy [2].

Research shows that 67% of Americans encounter fake news on social media, and 49% of global news consumers now disclose less personal information online due to distrust [5]. This underscores the importance of critical thinking and digital media literacy. A user on Japan Today noted: ‘In a year or two it will be impossible to spot fake photos or videos. So we will be back in the 19th century, when people had to trust their newspaper editors.’ [5]. In this context, it is essential to develop your own ‘BS detector’ by cross-checking information with primary sources and raw data [5].
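The domain check described above can be sketched as a small script. This is a toy heuristic for illustration only, not a real fake-news detector: the allowlist, the labels, and the suspicious-token list are assumptions drawn from the article's examples, and a serious tool would rely on maintained ratings such as NewsGuard's rather than a hard-coded set.

```python
from urllib.parse import urlparse

# Illustrative allowlist of recognized Dutch outlets (assumed, not exhaustive).
KNOWN_OUTLETS = {"ad.nl", "nu.nl", "nos.nl", "volkskrant.nl"}
# Tokens the article notes often appear in look-alike domains.
SUSPICIOUS_TOKENS = ("news", "update", "info")

def assess_url(url: str) -> str:
    """Return a rough trust label for a news URL based on its domain."""
    host = (urlparse(url).hostname or "").lower()
    domain = host.removeprefix("www.")
    if domain in KNOWN_OUTLETS:
        return "recognized outlet"
    if any(token in domain for token in SUSPICIOUS_TOKENS):
        return "suspicious: bait-style domain"
    return "unknown: verify via a second source"

print(assess_url("https://www.nu.nl/artikel"))        # recognized outlet
print(assess_url("https://ad-news-update.com/item"))  # suspicious: bait-style domain
```

Note the limitation: a lookalike such as `nu.nl.example.com` would slip past this exact-match allowlist, which is one more reason the manual checks above (cross-referencing a second source, scrutinizing the full address bar) remain necessary.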
Media outlets are also taking action: media organizations have sent a joint letter to tech companies warning that their platforms seriously threaten democracy [3]. Responsibility no longer lies solely with tech companies; every individual must handle information responsibly.

Sources