OpenAI unveils a new safety committee following high-profile employee departures.
Only a few weeks after disbanding a team dedicated to AI safety, OpenAI said on Tuesday that it had formed a new committee to provide safety and security recommendations to the company's board.
OpenAI said in a blog post that the new committee will be led by Bret Taylor, the chair of the company's board, along with board member Nicole Seligman and CEO Sam Altman.
The news comes after the highly publicized departure last month of Jan Leike, an OpenAI executive focused on safety. Leike quit, saying that tensions with leadership had "reached a breaking point" and that the company had underinvested in AI safety efforts.
It also coincides with the resignation of Ilya Sutskever, who led OpenAI's "superalignment" team, the group charged with ensuring that increasingly advanced AI systems remain aligned with human needs and goals. Sutskever played a central role in Altman's abrupt ouster as CEO last year, but he later reversed course and backed Altman's return.
An OpenAI spokesperson told CNN earlier this month that disbanding the superalignment team and redistributing its members across the organization would better position the company to achieve its superalignment goals.
In the same Tuesday blog post, OpenAI said it has begun training a new AI model to succeed GPT-4, the model that powers ChatGPT. The company said the new model will represent a further step toward artificial general intelligence.
"We encourage a healthy debate at this vital juncture, while we are happy to design and ship models that are industry-leading on both safety and capability," the company stated.
The blog post also said that one of the Safety and Security Committee's first tasks will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days. At the end of that period, the committee will present its recommendations to the full board, and after the board's review, OpenAI will publicly share an update on the recommendations it adopts, in a manner consistent with safety and security.