This is an archive article published on May 19, 2024

OpenAI leader resigns, accuses company of putting ‘shiny products’ above safety

OpenAI is being accused of focusing on releasing new products rather than safety.

OpenAI logo is seen in this illustration taken March 31, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Jan Leike, a machine learning researcher who co-led superalignment at OpenAI, announced his departure from the company on May 17 with a thread on X. “Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI,” Leike said in his post on X.

Leike’s consecutive posts on X gave insight into the reasons behind his resignation. Describing his time at OpenAI as “a wild journey over the past three years”, Leike said he joined OpenAI with the thought it would be the “best place in the world to do this research”.

Leike then stated that he had been disagreeing with the company leadership over OpenAI’s core priorities for quite some time, “until we reached a breaking point”.

In a series of consecutive posts, Leike shared, “Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.”


Criticising how OpenAI approaches safety, Leike said in his post, “Over the past years, safety culture and processes have taken a backseat to shiny products,” and that “OpenAI must become a safety-first AGI company.”

According to Leike, OpenAI should spend more time on “getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” and he is concerned that OpenAI is not on that trajectory.

He also said, “Building smarter-than-human machines is an inherently dangerous endeavour,” hinting at the ongoing quest to develop AGI (artificial general intelligence), and that “OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”

According to Leike, “To ensure AGI benefits all of humanity, we must prioritize preparing for them as best we can,” and OpenAI is “long overdue in getting incredibly serious about the implications of AGI”.
