ChatGPT-maker OpenAI has begun training its next generation of AI model, GPT-5. Alongside the start of training, the company on Tuesday announced the formation of a new Safety and Security Committee that includes senior board members. OpenAI recently dissolved its Superalignment team, which was formed to tackle long-term AI risk; the new committee will take on a similar role, overseeing safety and security decisions for new projects and operations.
In a blog post published on Tuesday, OpenAI announced the formation of the new Safety and Security Committee, led by directors Bret Taylor (Chair), Adam D'Angelo, Nicole Seligman, and CEO Sam Altman. OpenAI said the committee is responsible for making recommendations to the full board on “critical safety and security decisions for all OpenAI projects.”
Additionally, the committee will include OpenAI's technical and policy experts, such as Aleksander Madry (Head of Preparedness), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist). Over its first 90 days, the committee will evaluate and further develop the company's safety processes and safeguards.
OpenAI's new safety committee will thoroughly review the company's projects and operations and recommend safety processes for the responsible use of its tools and technology. The company also highlighted that it is moving toward the next level of capabilities on the path to AGI, and that it wants to advance safety alongside technological progress. OpenAI said, “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”
Within 90 days, OpenAI's Safety and Security Committee will present its recommendations and processes for managing safety and security across the company's projects. This is a significant step for OpenAI, coming shortly after a Wired report highlighted the dissolution of the Superalignment team.