
How AI browsers are changing our information behaviour

2025-11-17 · public information

Amsterdam, Monday, 17 November 2025.
Since early 2025, AI browsers such as Perplexity’s Comet and OpenAI’s Atlas have increasingly replaced traditional search engines. What was once a list of search results is now a direct, summarised answer, often without cited sources. The impact is significant: users receive quick responses, but learn less about how to critically assess information. Moreover, AI companies collect search queries to improve their models, with little transparency about how that data is used. Most people are unaware that their search behaviour serves as ‘fuel’ for AI. In the Netherlands, libraries are now developing new tools to help people navigate between AI-generated answers and factual truth.

The rise of AI browsers: a new era in information retrieval

Since early 2025, AI browsers such as Perplexity’s Comet and OpenAI’s Atlas have been replacing traditional search engines in the Netherlands. These browsers no longer present lists of search results but deliver immediate, summarised answers, often without source attribution. The outcome is a shift in browsing behaviour: where users once had to filter and evaluate information themselves, they now receive answers on a plate. Comet, a Chromium-based browser, has been available since 1 November 2025 and requires explicit consent for the use of search queries to improve AI models [1]. OpenAI’s Atlas, available for iOS since 2025, is described by technology expert Anil Dash as an ‘Anti-Web’, as it does not direct users to the original web but instead keeps them confined within a walled garden of AI-generated content [1]. This transformation affects not only how people seek information, but also their trust in digital content and the critical skills required to engage with it.

Impact on media literacy and knowledge gathering

The shift towards AI-powered browsing has a direct effect on media literacy, particularly among younger users. Traditional skills such as evaluating sources, detecting bias, or identifying misinformation are becoming less relevant, as AI provides answers without showing sources. According to a study by the Central Bureau of Statistics (CBS), the use of AI bots for information gathering has increased by 40% since early 2025 [1]. Dr. Els van Dijk, Professor of Media Literacy at the University of Amsterdam, states that the way people search for information has ‘radically changed’ since 2025 [1]. In response, the National Library of the Netherlands has introduced a new media training programme for secondary school students, focused on understanding AI bots and distinguishing truth from simulation [1]. This development reflects growing concerns about the erosion of critical evaluation skills among younger generations.

The data dependency of AI-based systems

AI browsers collect search queries and user behaviour with little transparency, increasing reliance on companies like Perplexity and OpenAI. With Comet, users must explicitly consent to the use of their search queries for improving AI models, placing them in the position of a ‘fuel supplier’ for AI [1]. Anil Dash accuses OpenAI of ‘parasitising an internet of assumed consent’ and ‘consuming whatever can be grabbed’, including personal documents and behavioural data [1]. This growing dependence on opaque AI systems poses a risk of data exploitation, with users losing privacy and control. In response, the National Library and the Dutch government allocated a €2.3 million subsidy to libraries on 15 October 2025 to develop AI-related educational tools [1].

Pilot projects and the new role of libraries

In response to changes in information provision, Dutch libraries are redefining their role as critical intermediaries between AI and users. Since 2025, pilot projects have been underway in 12 provincial libraries aimed at developing tools to help people navigate between AI-generated answers and factual truth [1]. These initiatives are part of a broader national plan for an AI information awareness programme, set to be implemented in 2026 [1]. The pilot outreach was scheduled for May 2025 but was not carried out; the current status of the pilot remains unresolved, with no actual implementation reported [1]. Libraries therefore aim not only to provide information but also to help students, parents, and adults build trust in and critically assess AI-generated content, a fundamental step toward a responsible information society.

AI in outreach: from personalised information to campaigns

AI is being used not only in browsing but also in public communication and outreach. The use of personalised information delivery via AI chatbots improves access to complex information for diverse audiences. On 14 November 2025, Amazon launched an AI-powered translation service, Kindle Translate, enabling e-book authors to obtain translations into Spanish or German within hours with a single click [2]. The service uses AWS’s in-house large-language model and offers a side-by-side preview for human review, featuring a ‘Kindle Translate’ badge and a 10% free sample for readers [2]. For authors who traditionally paid $5,000 or more for a human translation of a 70,000-word novel, this represents a revolution [2]. However, AI translations are not yet perfect: Amazon warns that ‘some errors may occur’, particularly with literary nuance, idiomatic expressions, and cultural references [2]. This underscores that AI in outreach is not a replacement but a tool that must be integrated with human oversight.

Accessibility, inclusivity and the challenge of reliability

AI enhances the accessibility of information, particularly for people with reading or writing difficulties, or those living outside a text’s original language region. The use of AI in outreach campaigns helps reach diverse audiences and enables effectiveness measurement through data analytics. For instance, Amazon’s Kindle Translate system automatically synchronises rights, metadata, pricing, and pages across languages, ensuring that a bestseller in English is automatically recommended in the Spanish Kindle Store [2]. This increases the reach of knowledge. Yet the reliability of AI-generated information remains a challenge. Anil Dash describes AI responses as ‘a last-minute book report written by a student who mostly plagiarised from Wikipedia’ [1]. Furthermore, systems like Windows 11, version 23H2, can even suffer problems from AI-era updates: the October 2025 update (KB5066793), for example, blocked smartcard authentication in 32-bit applications [3]. This illustrates that the technology has limits not only in information delivery but also in infrastructure and security.

The role of investment and market development in AI

The rapid growth of AI startups is also transforming the venture capital industry. Startups that were once evaluated on market fit and revenue must now meet stricter criteria such as technical DNA, data flywheel effects, and sustainable customer demand [4]. Some startups generating $5 million in revenue still cannot secure follow-on funding due to a lack of long-term strategy [4]. The market is still in a formative phase, with no dominant leader in foundational models, leaving room for new players [4]. This shifts the dynamics of outreach: organisations can now develop AI tools more quickly, but must also consider the ethical implications of data collection and transparency. Governments and public institutions must therefore develop strategies that combine technological advancement with social responsibility.

Sources