OpenAI co-founders quit amid Musk’s legal firestorm
OpenAI experienced a significant shake-up as two co-founders, John Schulman and Greg Brockman, announced their departures a day after Elon Musk filed a new lawsuit against the company.
On Aug. 5, Musk filed a lawsuit accusing OpenAI CEO Sam Altman and Brockman of misleading him into co-founding the organization and of straying from its original non-profit mission. The suit attracted significant attention because Musk had withdrawn a similar complaint less than two months earlier.
Less than 24 hours after the filing, however, OpenAI faced an exodus of top leadership, raising fresh questions about the company.
Why they left
Brockman, who served as the company’s President, stated that he was taking an extended sabbatical until the end of the year. He emphasized the need to recharge after nine years with OpenAI, noting that the mission to develop safe artificial general intelligence (AGI) is ongoing.
Schulman, meanwhile, said he was leaving the AI company for its rival, Anthropic, because he wanted to focus more on AI alignment, the field concerned with ensuring AI systems remain beneficial and non-harmful to humans.
He added that he also wanted to engage in more hands-on technical work. Schulman wrote:
“I’ve decided to pursue this goal at Anthropic, where I believe I can gain new perspectives and do research alongside people deeply engaged with the topics I’m most interested in.”
Schulman’s exit leaves the company with only two co-founders in active roles: CEO Altman and Wojciech Zaremba, who leads language and code generation, with Brockman now on sabbatical.
Meanwhile, Schulman’s departure has reignited focus on OpenAI’s AI safety practices. Critics argue that the company’s emphasis on product development has overshadowed safety concerns. This criticism follows the disbandment of OpenAI’s superalignment team, which was dedicated to controlling advanced AI systems.
Last month, US lawmakers wrote to Altman seeking confirmation that OpenAI would honor its pledge to allocate 20% of its computing resources to AI safety research.
In response, Altman reiterated the firm’s commitment to allocating at least 20% of its computing resources to safety efforts and added:
“Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations. excited for this!”