OpenAI Establishes Safety Committee for Training New Artificial Intelligence Model

OpenAI says it’s setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on “critical safety and security decisions” for its projects and operations.

The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and leveled criticism at OpenAI for letting safety “take a backseat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “superalignment” team focused on AI risks that they jointly led.

OpenAI said it has “recently begun training its next frontier model” and its AI models lead the industry on capability and safety, though it made no mention of the controversy. “We welcome a robust debate at this important moment,” the company said.

AI models are prediction systems that are trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting-edge AI systems.

The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D’Angelo, who’s the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee’s first job will be to evaluate and further develop OpenAI’s processes and safeguards and make its recommendations to the board in 90 days. The company said it will then publicly release the recommendations it’s adopting “in a manner that is consistent with safety and security.”

——

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of the AP’s text archives.

OpenAI, a leading artificial intelligence research lab, has announced the establishment of a Safety Committee dedicated to the safe and ethical development of new AI models. The move responds to growing concern about the potential dangers of advanced AI systems and the need for robust safeguards to mitigate those risks.

The committee will oversee the training and deployment of new AI models, with a focus on identifying and addressing potential safety issues. That includes evaluating the risks associated with different AI applications, such as autonomous vehicles, healthcare diagnostics and financial trading algorithms.

A key goal of the committee is to develop guidelines and best practices for responsible AI development, ensuring that systems are designed to prioritize human safety and well-being and remain transparent and accountable in their decision-making.

In a statement, OpenAI emphasized the importance of proactive safety measures. “As AI systems become more advanced and capable, it is crucial that we take steps to ensure that they are developed in a safe and responsible manner,” a spokesperson for the organization said. “The establishment of the Safety Committee is a key step towards achieving this goal.”

The announcement comes amid mounting concern about the risks of AI technologies. From fears of job displacement to worries about the misuse of AI for malicious purposes, there is growing recognition that robust safety measures are needed to guard against these risks.

By prioritizing safety and accountability in the development of new models, OpenAI is taking a concrete step toward addressing these concerns and setting an example for other organizations in the field to follow.
