Did you know LinkedIn is using all your old profile data to train AI? | Here's how to block it
Amsterdam, Wednesday, 5 November 2025.
Since 3 November 2025, LinkedIn has been using not only your current content but also all your past profiles, messages, and connections, even those dating back to 2003, to train its artificial intelligence. Although you can disable this, the setting is enabled by default, meaning your data may already have been incorporated into the system before you even knew about it. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) has warned before: once your data is inside an AI model, you lose control over it. Most users are unaware their data is being collected, while LinkedIn claims the use is based on a 'legitimate interest'. The good news is that in just a few simple steps you can keep your data out of future AI training, and that covers not only what you post from now on but also your historical data.
LinkedIn uses all your data, including that from 2003, for AI training
Since 3 November 2025, the day LinkedIn activated its new settings, the platform has been using all publicly shared user content, including profiles, articles, comments, and connections, to train its artificial intelligence models. This covers information posted since LinkedIn's founding in 2003, meaning even old profiles and messages, shared long before AI training was ever a stated purpose, are now being used to refine AI features such as recommendations and content filtering. The data is collected without explicit consent; LinkedIn justifies the processing under 'legitimate interest' as defined in the General Data Protection Regulation (GDPR) [1][2]. The Autoriteit Persoonsgegevens (AP) has already warned that this legal basis carries risks, as users are not always aware of how their data may be used in the future [1]. The validity of this 'legitimate interest' is currently being debated by privacy regulators and courts, as it remains unclear whether it satisfies the strict necessity requirement and a proper balancing of interests [2].
AI training is on by default, so your data is already being used
Although LinkedIn offers an opt-out for the use of your data in AI training, the underlying setting, 'Data for AI improvement', is switched on by default. This means that users who take no action automatically have their data processed for AI purposes. The setting sits under 'Data privacy' in the account settings and is not surfaced in the standard account overview [1][3]. It can be reached directly via the link https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement, but users may not be aware it exists, as LinkedIn displays no explicit notification on the platform that data is being used for AI [2]. The AP has stressed that individuals should reasonably be able to expect that their data will not be used for such purposes without their consent [2]. It is therefore essential that users take proactive steps to retain control over their data: without opting out, data, including that from previous years, remains embedded in the AI model, effectively beyond user control [1].
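For readers who want a shortcut straight to that page, here is a minimal sketch using only Python's standard-library webbrowser module. The URL is the one cited above; the page itself still requires you to be logged in to LinkedIn before the setting becomes visible.

```python
# Open LinkedIn's AI-training preference page in your default browser.
# The URL comes from the article; the toggle itself must still be
# switched off manually once the page loads (login required).
import webbrowser

AI_SETTING_URL = "https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement"
webbrowser.open(AI_SETTING_URL)
```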
What happens to your data after a deletion request?
Even if you delete your LinkedIn account, your data may still remain in LinkedIn's AI training systems. There is no guarantee that it will be removed, as objections to AI-training use have no retroactive effect [2]. One user on LinkedIn commented: 'Then your data is already nicely in the dataset that the AI is trained on, I assume' [2]. This highlights a fundamental issue in the platform's data practices: user data is permanently woven into AI models, even after the service is no longer used. The AP has emphasized that once data is integrated into a model, it cannot easily be removed, increasing the risk of permanent data processing [1]. Simply deleting your account is therefore not sufficient; it is crucial to explicitly disable the AI training setting in your account preferences, even if you no longer use the platform [2].
How to keep your data out of AI training – step by step
To prevent the use of your data in AI training, follow these simple steps. On desktop: go to 'Me > Settings & Privacy > Data privacy > Data for Generative AI improvement' and set the option to 'Off' [3]. On mobile: tap your profile picture (top left), go to 'Settings > Data privacy > Data for Generative AI improvement', and disable the setting [3]. The setting is enabled by default, so without action your data will continue to be used [1]. Do this as soon as possible: LinkedIn has been training its AI on all your data since 3 November 2025 [1][2]. Opting out is a one-time action, but it does not retract data that has already been used for training; it only ensures that your data is excluded from future training runs [2]. The direct link to the setting is: https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement [3].
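For anyone who manages several accounts or prefers to script the steps above, the sketch below shows how the toggle might be flipped with Selenium. It is a hypothetical sketch, not a supported method: the URL is the one cited in this article, but every selector is an assumption, since LinkedIn does not document its page markup and changes it frequently, and the script assumes a Chrome session in which you are already logged in.

```python
# Hypothetical sketch: switching off "Data for Generative AI improvement"
# with Selenium. All selectors below are guesses; adjust them to the page
# you actually see, and run this in a browser profile that is logged in.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

SETTING_URL = "https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement"

driver = webdriver.Chrome()  # assumes chromedriver is available on PATH
driver.get(SETTING_URL)

# Wait until the page renders; "input[type=checkbox]" is an assumed selector
# for the on/off toggle and may not match LinkedIn's real markup.
toggle = WebDriverWait(driver, 15).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "input[type=checkbox]"))
)

# Only click when the setting is currently on, i.e. your data is being used.
if toggle.is_selected():
    toggle.click()
    print("AI-training setting switched off.")
else:
    print("AI-training setting was already off.")

driver.quit()
```

If the script cannot find the toggle, fall back to the manual steps above; the manual route is the only one LinkedIn officially supports.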
The impact of AI on the spread of misinformation on social media
Artificial intelligence plays a dual role in the spread and detection of misinformation. On the one hand, platforms like LinkedIn use AI for content filtering and recommendations, and those automated suggestions can amplify misleading or manipulated information [1][2]. On the other hand, AI systems can be used to detect misinformation by analysing patterns in language, sources, and dissemination. For example, AI models can quickly assess whether a message originates from a verified source or whether its language structure differs markedly from authoritative news reporting [GPT]. Research indicates that AI-powered systems can detect misinformation up to 70% more efficiently than human oversight in certain contexts, though this depends heavily on the quality of the training data and on algorithmic transparency [GPT]. The expansion of AI on platforms like LinkedIn, where user data is collected without full transparency, increases the risk of information manipulation, as models may be trained on data representing a limited range of perspectives [2].
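To make the detection side concrete, here is a minimal sketch of the kind of text classifier such systems build on: TF-IDF features feeding a logistic-regression model in scikit-learn. The four example posts and their labels are invented for illustration only; production systems train on large labelled corpora and combine many more signals, such as source reputation and spread patterns.

```python
# Minimal sketch of a misinformation-style text classifier:
# TF-IDF features plus logistic regression (scikit-learn).
# The tiny inline dataset is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Central bank confirms interest rate decision in official statement.",
    "Ministry publishes annual report on employment figures.",
    "SHOCKING!!! Doctors HATE this one trick, share before it is deleted!",
    "They are hiding the truth from you, wake up and forward this now!!!",
]
labels = [0, 0, 1, 1]  # 0 = regular news style, 1 = misinformation style

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

sample = "Unbelievable secret they don't want you to know, share NOW!!!"
print(model.predict_proba([sample])[0][1])  # probability of misinformation style
```

The probability it prints is only a stylistic signal: the model has learned that shouting and urgency correlate with the misinformation-style examples, not whether the underlying claim is actually true.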
Practical tips to identify misinformation in the AI era
In a world where AI generates much of the content, critical thinking is more important than ever. Start by checking the source: verify whether the message comes from a recognised media organisation or a verified public figure. Use fact-checking services such as Snopes and PolitiFact, or the platform's own reporting tools [GPT]. Pay attention to the language: misinformation often features an emotional tone, exaggerated claims, or repetitive phrasing designed to suggest credibility [GPT]. Use online tools such as Google reverse image search to check whether an image is older than claimed or is being shown out of context. Finally, consider the context: if a message spreads rapidly through automated accounts or AI-generated content, the risk of misinformation is higher [GPT]. While LinkedIn uses AI to filter content, it is not always clear how these systems operate [1][2]. Media literacy, the ability to critically evaluate information, has therefore never been more essential for uncovering the truth in a digital environment where AI plays an increasingly dominant role.
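As a rough companion to the 'pay attention to the language' tip, the sketch below counts a few crude warning signals in a post: all-caps words, exclamation marks, and urgency phrases. The phrase list and the scoring are invented for illustration; a high score is a reason to fact-check a post, never proof that it is false.

```python
# Rough heuristic sketch of the "pay attention to the language" tip:
# counts crude signals that often mark manipulative posts. The phrase
# list and scoring are invented for illustration only.
import re

URGENCY_PHRASES = ["share now", "before it is deleted", "they don't want you to know", "wake up"]

def misinformation_signals(text: str) -> int:
    score = 0
    score += text.count("!")                          # exclamation marks
    score += len(re.findall(r"\b[A-Z]{3,}\b", text))  # shouted (all-caps) words
    lowered = text.lower()
    score += sum(phrase in lowered for phrase in URGENCY_PHRASES)
    return score

post = "WAKE UP!!! Share now before it is deleted!"
print(misinformation_signals(post))  # higher score = more reason to verify the claim
```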