
OpenAI, the company behind ChatGPT, recently announced that it has formed a new “Safety and Security Committee” led by Bret Taylor, Adam D’Angelo, Nicole Seligman and, you guessed it, CEO Sam Altman.
OpenAI says the new committee is responsible for “making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations”. Safety oversight was one of the main concerns raised by several AI researchers who left the company earlier this month.
In a blog post, OpenAI said the “first task of the new Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days”. After this, it will share its recommendations with the full Board, which will then review them and “publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”
The new committee comes in the wake of OpenAI’s co-founder and chief scientist Ilya Sutskever leaving the company. He was also part of the Superalignment team, which was responsible for steering and controlling AI systems that are “much smarter than us.” Following in his footsteps, the head of the Superalignment team, Jan Leike, also announced that he is leaving OpenAI, saying that safety has “taken a backseat to shiny products.”
Since then, the company has dissolved the entire Superalignment team, with Jan Leike joining Anthropic. In the blog post, OpenAI also announced that it recently started training a new model intended to succeed GPT-4, the company’s most powerful large language model to date.