How AI Is Permanently Changing the Way We Find Information
amsterdam, vrijdag, 14 november 2025.
Imagine no longer having to scroll through search results, but instead receiving a precise answer instantly—and one that even knows its source. This is already a reality thanks to AI summaries from Google and other platforms. But there’s a profound twist: while users are finding information faster, traffic to websites is vanishing. In less than a year, the American news industry lost 600 million visitors—and a niche blog lost 86% of its revenue. Why? AI economies are transforming the information value chain. The largest players profit from data they don’t have to pay for, while smaller content providers lose income. What if you knew that Google processes 14 billion searches daily, while Perplexity and ChatGPT together handle only 57 million? Power is shifting—and the consequences affect everything from your trusted websites to the future of the web itself.
The New Search Experience: From Clicking to Direct Answers
Instead of scrolling through a list of links, users now receive direct, summarized answers via AI summaries. This transformation is especially evident in Google Search, where the integration of the multimodal AI model Gemini drives a fundamental shift in the search experience. Gemini uses a transformer-decoder architecture optimized for efficiency and runs on Google’s Tensor Processing Units (TPUs), making inference faster and cheaper than competitors relying on general cloud infrastructure [1]. The AI processes not only text but also images, audio, and video within a single neural network, enabling it to answer complex, context-sensitive questions [1]. This technology lies at the heart of the new search model, where users can ask: ‘Can you tell me about Robert Graves?’ and the AI responds based on indexed documents, including source citations via the grounding_metadata feature [2]. As a result, the user experience shifts from active searching to passive information processing—without any click required. The consequences are dramatic: while search volume is projected to grow by 4.7% in 2025 (up from 4.1% in 2024), traffic to traditional websites has declined by 40% since September 1, 2025, according to Veritas Insights [2]. This signals a fundamental transfer of value from content providers to platforms like Google, which generate content without direct compensation to sources [2].
The Economic Collapse of the News Industry
The transformation of search engines has severe economic consequences for the information market, particularly for the news industry. Between 2024 and 2025, the American news industry lost approximately 600 million monthly visitors—a decline of about 26%—which, according to the source, is attributed to the integration of AI summaries that eliminate the click step [1]. This creates an economic externalization of costs: Google internalizes the gains from improved user experience—such as reduced search duration and higher conversion rates—but externalizes the costs onto publishers who no longer generate traffic [1]. A major American news magazine lost 27–38% of its traffic, while a niche blog about home renovation saw its monthly revenue drop from $7,000–$10,000 to $1,500, a decrease of 86% [1]. Publishers report losses of 70 to 80% in traffic, a situation driven by declining clicks and ad revenue despite rising search volume [1]. This development is not caused solely by Google, but also by competitors such as Perplexity AI and OpenAI, which together process approximately 47.5–57.5 million AI search queries per day [1]. In contrast, Google processes around 14 billion search queries per day, meaning Google handles 250–370 times more queries than both combined [1]. This disparity in scale and data advantage—Google has access to 50 billion products via the Shopping Graph, 250 million locations, and financial data—intensifies the concentration of power within a single platform [1].
The Role of AI in Building Knowledge: From Files to Context
Beyond general search, AI is also transforming personal and professional knowledge management. On November 13, 2025, the Google Gemini API introduced an advanced File Search feature that allows users to upload documents to a File Search store, where they are split into chunks, converted into embeddings using the gemini-embedding-001 model, and indexed for semantic search queries [2]. This technology enables context-aware questions such as: ‘Can you tell me about Robert Graves?’, with the AI responding based on imported documents and including citations [2]. The File Search store is created with a display_name and integrated into the generateContent call via the tools object using file_search(store_names=[…]) [2]. All data remains on the user’s device or in the cloud, depending on settings, and original files are automatically deleted after 48 hours, while embeddings persist indefinitely until manually removed [2]. The technology supports various file types, including PDFs, Word documents, and Markdown files, with a maximum file size of 100 MB per document [2]. This functionality is integrated into the gemini-2.5-pro and gemini-2.5-flash models, with the latter used in the example code [2]. Indexing embeddings costs $0.15 per 1 million tokens, while storage and query-time embeddings are free [2]. This makes it ideal for professional users such as researchers and content creators who need fast access to their own documents without data privacy risks.
The Privacy Dilemmas of AI Browsers and Agents
As AI search technologies grow, so does the need for robust privacy and transparency policies. OpenAI’s ChatGPT Atlas and Perplexity’s Comet are examples of AI browsers that integrate a chatbot within the browser window, allowing users to directly query website content [3]. ChatGPT Atlas collects and processes personal data from visited websites—including Amazon order history or WhatsApp messages—for AI model training and task execution [3]. Lena Cohen from the Electronic Frontier Foundation warns that Atlas gains access to significantly more information than other browsers, and this data could be used to train OpenAI’s models [3]. Or Eshed, CEO of LayerX, calls this a ‘gold rush for user data in the browser’ [3]. OpenAI offers a ‘logged-out mode’ to reduce risks, but Perplexity’s Comet lacks such a feature, increasing the risk of data leaks [3]. Furthermore, Atlas automatically uses the current website URL and content in every AI query, whereas standalone access offers greater control over shared data [3]. OpenAI received 105 requests from the US government for user data between January and June 2025, casting doubt on the security of this information [3]. AI browsers also introduce new security challenges, such as prompt injection attacks, where malicious instructions are hidden within website content [3]. These technologies are still in an early stage—‘a very nascent, evolving sector’—and therefore require strict policy development to maintain trust [3].
New Players and Models: Competition in the AI Information Economy
The market for AI search is no longer dominated solely by Google but is increasingly being entered by startups and new technologies. Parallel Web Systems, founded by former Twitter CEO Parag Agrawal, raised $100 million in a Series A round on November 12, 2025, with a valuation of $740 million [4]. The company is developing APIs that grant AI agents access to the live web for real-time data, focusing on software development, sales analytics, and insurance [4]. Parallel aims to create an ‘open marketplace mechanism’ that encourages publishers to keep content accessible to AI systems, although details remain limited [4]. Additionally, Nexa.ai has launched a new version of its Hyperlink agent that runs on NVIDIA RTX AI PCs, accelerating file indexing by up to 3x and LLM inference by up to 2x [5]. Data remains fully on the device—no data is sent to the cloud—ensuring privacy [5]. LinkedIn expanded its AI-powered people search for premium users in the US on November 12, 2025, enabling users to ask queries like ‘find investors in healthcare with FDA experience’ [6]. This tool uses natural language and replaces the traditional search bar with ‘I’m looking for…’. Although it still performs inconsistently—such as in queries like ‘Y Combinator’ versus ‘YC startup’—it demonstrates AI’s potential in professional networks [6]. The competition is fierce: Google, OpenAI, Perplexity, Parallel, Nexa, and LinkedIn are all moving toward a world where search engines are no longer lists, but active, knowledge-based agents [1][2][3][4][5][6].
Guarding the Future of the Web: From Quality to Ethics
The rise of AI search carries long-term risks for the quality of the web itself. Research indicates that AI models like GPT-4, Gemini, and Perplexity are trained on web data, 99% of which was created by humans, without compensation to content creators [1]. While this practice is technically permitted under US ‘fair use’ legislation, it is considered ethically and economically asymmetric [1]. A classic ‘tragedy of the commons’ scenario looms: if AI-powered search engines continue generating ever more information without rewarding the original sources, the incentive to create high-quality content will diminish [1]. Web quality will degrade, which in turn creates problems for AI models trained on lower-quality data [1]. Moreover, there is a risk that users become less critical: if they receive an AI-generated answer that sounds convincing but lacks source attribution, they may accept it as fact [2]. Transparency is therefore crucial. Google provides citations via grounding_metadata, but this is not yet standard across all AI systems [2]. The European Union has shown regulatory initiative: the ‘AI Transparency and Accountability Act’ is planned for implementation on December 15, 2025 [2]. This legislation could require AI systems to clearly indicate the sources of their answers. The future of information is not just technological, but also ethical and economic: how we store, share, and profit from information will be shaped by the choices of platforms, users, and policymakers [1][2][3][4][5][6].