Why your face might no longer be entirely your own
Amsterdam, Thursday, 4 December 2025.
YouTube has launched a new tool to detect deepfakes, but there is a significant catch: to protect your digital likeness, you must hand over biometric data. More alarming still, experts warn that Google could use that data to train its AI models, despite YouTube’s assurances that the information will only be used for identity verification. For creators who already view their likeness as a valuable asset in the AI era, this is a risk they may never recover from. The tool will be rolled out to millions of creators by the end of January 2026, but the question remains: is security worth paying for with control over your own image?
YouTube’s new deepfake detection tool: a step forward or a step backward for privacy?
In October 2025, YouTube introduced a new ‘likeness detection tool’ designed to identify AI-generated videos that manipulate creators’ appearances. The tool asks creators to upload a government-issued ID and a video of their face, so the platform can verify their identity and scan uploads for unauthorised AI manipulations of their likeness [1][3]. The technology is intended to help creators safeguard their reputations in an era where AI models such as OpenAI’s Sora and Google’s Veo 3 are accelerating the spread of deepfakes [3]. Although YouTube says the biometric data is used exclusively for identity verification and deepfake detection [1], the requirement itself raises serious concerns about the boundaries of digital control and the future of personal data [3].
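YouTube has not published how the matching works under the hood, but likeness-detection systems of this kind are typically built on face embeddings: the verification video is converted into a numerical vector, and frames of uploaded videos are compared against it. The Python sketch below is a hypothetical illustration of that general technique, not YouTube’s actual implementation; the embedding vectors and the threshold value are placeholder assumptions.

```python
# Illustrative sketch of embedding-based likeness matching.
# Hypothetical: YouTube has not disclosed its method; this shows the
# general technique such tools are commonly built on.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likeness_match(reference: np.ndarray, candidate: np.ndarray,
                      threshold: float = 0.85) -> bool:
    """Flag a candidate face as a possible use of the reference likeness.

    `threshold` is an assumed tuning parameter: higher values mean
    fewer false positives but more missed matches.
    """
    return cosine_similarity(reference, candidate) >= threshold

# Placeholder vectors standing in for embeddings that a real
# face-recognition model would produce from the creator's verification
# video and from a frame of an uploaded video.
rng = np.random.default_rng(0)
creator_embedding = rng.normal(size=512)
uploaded_frame_embedding = creator_embedding + rng.normal(scale=0.1, size=512)

print(is_likeness_match(creator_embedding, uploaded_frame_embedding))
```

In a production system the embeddings would come from a trained face-recognition model. The point of the sketch is that the creator’s biometric reference data must be stored somewhere for the comparison to work at all, which is exactly what the privacy debate centres on.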
The risk of a biometric ‘data gold rush’: how Google could take control of your face
Although YouTube denies that biometric data is used to train AI models, Google’s privacy policy states that publicly shared biometric data may be used to train AI systems [1][3][4]. According to a report by CNBC, the policy as currently worded does not rule out this possibility, even if the data is not used that way in practice [3]. Experts warn this poses a fundamental problem, as a person’s face is considered one of the most valuable assets in the AI era [3][4]. Dan Neely, CEO of Vermillio, cautions: ‘Your appearance will become one of the most valuable assets in the AI era, and once you relinquish control, you may never get it back’ [3][4]. This concern is not unfounded: the tool is integrated into the YouTube Partner Program and will be rolled out to over three million creators by the end of January 2026 [1][3].
The reality of takedowns: how little effect the tool has had so far
Although the tool is technically advanced, its actual impact on removing deepfakes remains low. According to Jack Malon, a YouTube spokesperson, the most common response from creators upon identifying a deepfake is: ‘I’ve seen it, but I don’t have an issue with it’ [1]. This suggests either a lack of awareness or a growing acceptance of AI-generated content, even when it is unauthorised. Available sources offer no concrete, data-driven analysis of the tool’s effectiveness: there are no figures on how often takedown requests are submitted or accepted, nor on how many deepfakes the tool has identified in total [1][3][4]. The risk of false positives also remains a concern: without clear verification mechanisms, wrongly flagged videos could lead to unnecessary censorship or reputational damage, as the sketch below illustrates [1].
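Why false positives are hard to avoid follows from the mechanics of threshold-based matching: any similarity cutoff trades missed deepfakes against wrongly flagged legitimate videos. The simulation below is purely illustrative, using made-up score distributions rather than any published YouTube figures, to show how moving the threshold shifts both error rates.

```python
# Hypothetical illustration of the false-positive trade-off in
# threshold-based likeness matching; the score distributions are
# simulated, not YouTube data.
import numpy as np

rng = np.random.default_rng(1)

# Simulated similarity scores: videos that genuinely reuse a creator's
# likeness score high; unrelated videos score lower, with some overlap.
matching_scores = rng.normal(loc=0.90, scale=0.05, size=10_000)
unrelated_scores = rng.normal(loc=0.70, scale=0.08, size=10_000)

for threshold in (0.75, 0.80, 0.85, 0.90):
    false_positives = np.mean(unrelated_scores >= threshold)  # wrongly flagged
    false_negatives = np.mean(matching_scores < threshold)    # missed deepfakes
    print(f"threshold={threshold:.2f}  "
          f"false-positive rate={false_positives:.1%}  "
          f"missed-match rate={false_negatives:.1%}")
```

Raising the threshold reduces wrongful flags but lets more deepfakes through; lowering it does the reverse. Without published numbers, there is no way to know where YouTube has set this trade-off.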
The economics of likeness: no compensation for unauthorised use of your face
Another serious gap is the absence of economic compensation for creators whose faces are used without permission. Even if an AI-generated video uses their likeness without consent, there is no mechanism in place for them to be paid [1][3]. In late 2025, YouTube gave creators the option to grant third parties permission to use their videos for AI training, without compensation [3]. This creates an ethical dilemma: while the platform protects the creator’s image, it simultaneously diminishes their ability to monetise that same image in the future. This duality reflects the growing tension between platform safety and copyright in the AI era [3].
The role of third parties: how companies like Vermillio and Loti are responding to the new challenge
Companies specialising in securing digital identities, such as Vermillio and Loti, report a clear increase in demand for their services [1][3][4]. They warn that sharing biometric data with platforms like YouTube poses a long-term risk, particularly when policies are unclear and control is lost [1][3][4]. Luke Arrigoni, CEO of Loti, refers to ‘enormous’ risks arising from YouTube’s current biometric data policy [3]. These companies advise creative professionals to exercise caution when using the tool, and in some cases, to avoid signing up until the terms are clearer [3][4].
The future of identity: can control ever be reclaimed?
YouTube has stated it will revise the language in the sign-up form to reduce confusion, but it will not alter its underlying privacy policy [3]. This suggests the current model of biometric data collection and use is becoming entrenched as the new norm [3]. For creators who view their face as both an economic and a personal asset, that is a troubling development [3][4]. While the tool represents a significant technical step in combating deepfakes, it undermines the fundamental principle of personal control over digital identity [1][3]. The question remains: is security worth surrendering ownership of your own image to a platform that might use it to advance its own AI ambitions? [3][4]