Encode Accuses OpenAI of Intimidation Over AI Legislation in California
San Francisco, Sunday, 12 October 2025.
Encode, a small AI policy nonprofit, has publicly accused OpenAI of intimidation tactics during the development of California’s new AI safety law. Nathan Calvin, Encode’s general counsel, claims that OpenAI used subpoenas and political influence to intimidate the organisation and weaken the law, while falsely suggesting that Encode is funded by Elon Musk. The accusations have drawn significant reactions, including criticism from within OpenAI itself.
Accusations of Intimidation
In a widely circulated post on social media, Nathan Calvin, Encode’s general counsel, accused OpenAI of using subpoenas and political influence to intimidate Encode and weaken California’s new AI safety law. He alleged that OpenAI falsely claimed Encode was funded by Elon Musk, a prominent critic of OpenAI. The post drew significant reactions, including criticism from within OpenAI itself [1].
Reactions from Insiders
Joshua Achiam, head of mission alignment at OpenAI, responded to Calvin’s post with a tweet expressing concern about the company’s tactics. “With possibly my entire career on the line, I will say: this does not seem like a good thing,” wrote Achiam. “We should not do things that turn us into a scary power instead of a virtuous one. We have a duty and a mission to all of humanity. The bar for striving to meet that duty is remarkably high.” [1][3]
Subpoenas and Political Pressure
Calvin recounted that a sheriff’s deputy served him with a subpoena from OpenAI one evening while he was at the dinner table with his wife. The subpoena asked him to hand over private messages with California lawmakers, students, and former OpenAI employees. His organisation, Encode AI, was also subpoenaed by OpenAI last month over whether Elon Musk, a major competitor of OpenAI, funds the group. OpenAI issued the subpoenas as part of its countersuit against Musk, which alleges that the billionaire engages in ‘bad faith tactics to slow down OpenAI’. Calvin commented, “I believe OpenAI has used their lawsuit against Elon Musk as a pretext to intimidate their critics and suggest that Elon is behind them all.” [2]
Internal Tensions
There is also criticism inside OpenAI. Boaz Barak, a researcher at OpenAI and a professor at Harvard, openly questioned the launch of Sora 2, the company’s new video generation tool, calling it technically impressive but warning of potential pitfalls. These tensions surface not only through Encode’s accusations but also in the broader debate over the responsibility of AI companies. [3]
Impact on Legislation
Encode’s accusations centre on California’s new AI safety law, SB 53. The law, signed by Governor Gavin Newsom on 30 September 2025, imposes strict transparency and safety requirements on AI systems. Calvin emphasised that OpenAI’s conduct was inappropriate in the context of this law, which is intended to promote ethical and safe AI applications. [1]
Ethical Considerations
The allegations of intimidation, and the broader debate over the responsibility of AI companies, have prompted deeper reflection on the ethical dimensions of AI regulation. While AI extends the capabilities of businesses and governments, it also raises questions about privacy, safety, and the potential misuse of the technology. Boaz Barak noted that AI’s technical capabilities are advancing rapidly, but that the societal and ethical implications must be examined just as thoroughly. [3]