
OpenAI CEO Sam Altman steps back from committee focused on safety of AI models

Sam Altman’s position at the helm of OpenAI’s safety committee had led to concerns about whether it could function independently.

OpenAI CEO Sam Altman (Image: AP Photo/Eric Risberg)

OpenAI CEO and co-founder Sam Altman will no longer have a seat on the organisation’s Safety and Security Committee as it looks to become a more “independent board oversight committee.”

The committee was formed in May 2024 to make safety recommendations on the AI models developed and deployed by the Microsoft-backed startup. Altman’s presence at the helm of such an oversight body had raised concerns that members would not be able to objectively assess the safety and security of its AI models.

With the CEO now out of the picture, the committee comprises two OpenAI board members – former NSA chief Paul Nakasone and Quora co-founder Adam D’Angelo – as well as Nicole Seligman, the former executive vice president at Sony, and Zico Kolter, director of the machine learning department at Carnegie Mellon University’s School of Computer Science.


“The Safety and Security Committee will be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI said in its blog post published on Monday, September 16.

On the launch of its new reasoning-based AI model o1, OpenAI said that the safety committee had “reviewed the safety and security criteria that OpenAI used to assess OpenAI o1’s fitness for launch as well as the results of safety evaluations of OpenAI o1.”

The committee also concluded its 90-day review of OpenAI’s processes and safeguards and made the following recommendations to the AI firm:

– Establish independent governance for safety & security
– Enhance security measures
– Be transparent about OpenAI’s work
– Collaborate with external organisations
– Unify OpenAI’s safety frameworks for model development and monitoring


Even before the safety committee was set up, current and former OpenAI employees had expressed concerns that the company was growing too quickly to operate safely. Jan Leike, a former executive who exited OpenAI along with chief scientist Ilya Sutskever, had posted on X that “OpenAI’s safety culture and processes have taken a backseat to shiny products.”
