OpenAI Collaborates with Bryan Cranston and SAG-AFTRA on Stricter Rules for Deepfakes
Amsterdam, Thursday, 23 October 2025.
Following the unauthorised use of Bryan Cranston’s face and voice in deepfake videos, OpenAI is working with the actor and the actors’ union SAG-AFTRA to develop stricter rules and safeguards that protect the integrity and privacy of artists. OpenAI has promised to introduce an opt-in policy and to handle complaints promptly. The move marks a significant shift in the entertainment industry, where creative professionals are demanding better protection of their identity against AI misuse.
Collaboration Against Deepfakes
OpenAI has entered into a collaboration with actor Bryan Cranston and the actors’ union SAG-AFTRA after Cranston’s face and voice were used without authorisation in deepfake videos on the platform Sora 2. The collaboration aims to develop stricter rules and safety measures for the use of AI in entertainment and media, in particular to protect the integrity and privacy of artists [1][2][3].
New Policy Measures
OpenAI has pledged to introduce an opt-in policy, requiring individuals to give explicit consent before their likeness or voice can be used. The policy came in response to criticism from Cranston and SAG-AFTRA. According to Sean Astin, president of SAG-AFTRA, legislation such as the proposed “NO FAKES Act” in the US is needed to address misuse legally [1][2]. OpenAI has also committed to handling complaints about unauthorised images “expeditiously” [1][2].
Industry Reactions
Bryan Cranston, known for ‘Breaking Bad’, voiced his concern after discovering AI-generated videos featuring him without his permission. “I was deeply concerned not only for myself but for all performers whose work and identity can be misused in this way,” he said. Cranston said he was grateful for OpenAI’s new policy and improved safeguards [2][3][4]. SAG-AFTRA president Sean Astin praised Cranston for speaking up and stressed the need for regulation around deepfakes [2][4].
Impact on the Entertainment Industry
This collaboration marks a significant moment in the entertainment industry, where creative professionals are making it clear that they do not want their identity misused by AI tools. The impact of these rules on large-scale deepfake production will become apparent in the coming months. This initiative represents a step towards greater responsibility from tech companies in protecting people and their rights [1][2][3][4].
Implications for Media Literacy and Democracy
The growing threat of deepfakes extends beyond the entertainment industry, with implications for media literacy and democracy. Deepfakes can be used to spread disinformation and undermine trust in media and government institutions. Education and awareness campaigns play a crucial role in informing the public about the risks and how to recognise them [GPT].
Practical Tips for Recognising Deepfakes
To identify deepfakes, readers can follow these tips [GPT]:
1. Look for unnatural movements or lip movements that do not match the spoken words.
2. Check for irregularities in lighting or shadows.
3. Examine the quality of the image; deepfakes can often be pixelated or distorted.
4. Compare the video with known sources to detect irregularities.
5. Consult reliable fact-checkers and media outlets for verification.
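For readers comfortable with a little code, the first three tips can be crudely automated. Below is a minimal Python sketch using the OpenCV library that flags frames where the image changes abruptly from one frame to the next, a rough proxy for unnatural motion or splicing artefacts. It is a toy heuristic under stated assumptions, not a real deepfake detector: the file name and threshold are illustrative, and serious detection relies on trained models rather than pixel differences.

```python
# Toy heuristic: flag abrupt frame-to-frame changes in a video.
# Requires: pip install opencv-python numpy
# "clip.mp4" and the threshold of 30.0 are illustrative assumptions.
import cv2
import numpy as np


def flag_abrupt_frames(video_path: str, threshold: float = 30.0) -> list[int]:
    """Return indices of frames whose mean greyscale difference to the
    previous frame exceeds `threshold` (a hand-picked value)."""
    cap = cv2.VideoCapture(video_path)
    flagged: list[int] = []
    prev_gray = None
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Mean absolute pixel difference between consecutive frames.
            diff = cv2.absdiff(gray, prev_gray)
            if float(np.mean(diff)) > threshold:
                flagged.append(index)
        prev_gray = gray
        index += 1
    cap.release()
    return flagged


if __name__ == "__main__":
    suspicious = flag_abrupt_frames("clip.mp4")
    print(f"Frames with abrupt changes: {suspicious}")
```

A high count of flagged frames does not prove a video is fake (hard cuts also trigger it), but clusters of flags in an otherwise continuous shot are a reasonable cue to slow down and apply the manual checks and fact-checking steps listed above.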