GPT-5 might be farther off than we thought, but OpenAI wants to make sure it is safe.
On Monday, OpenAI announced the formation of a new “Safety and Security Committee” to oversee risk management for its projects and operations. The announcement comes as the company says it has “recently begun” training its next frontier model, which it expects to bring it closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also follows a bruising two weeks of press coverage for the company.
Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, “frontier model” is a term for a new AI system designed to push the boundaries of current capabilities. And “AGI” refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).
Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, “safety” partially means the usual “we won’t let the AI go rogue and take over the world,” but it also includes a broader set of “processes and safeguards” that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.
OpenAI says the committee’s first task will be to evaluate and further develop those processes and safeguards over the next 90 days. At the end of this period, the committee will share its recommendations with the full board, and OpenAI will publicly share an update on adopted recommendations.
OpenAI says that multiple technical and policy experts, including Aleksander Madry (head of preparedness), Lilian Weng (head of safety systems), John Schulman (head of alignment science), Matt Knight (head of security), and Jakub Pachocki (chief scientist), will also serve on its new committee.
The announcement is notable in a few ways. First, it’s a reaction to the negative press that followed the resignations of OpenAI Superalignment team members Ilya Sutskever and Jan Leike two weeks ago. That team was tasked with “steer[ing] and control[ling] AI systems much smarter than us,” and their departures led some within the AI community (and Leike himself) to criticize OpenAI for lacking a commitment to developing highly capable AI safely. Other critics, like Meta Chief AI Scientist Yann LeCun, think the company is nowhere near developing AGI, so the concern over a lack of safety for superintelligent AI may be overblown.
Second, there have been persistent rumors that progress in large language models (LLMs) has recently plateaued around capabilities similar to GPT-4’s. Two major competing models, Anthropic’s Claude Opus and Google’s Gemini 1.5 Pro, are roughly equivalent to the GPT-4 family in capability despite every competitive incentive to surpass it. And recently, when many expected OpenAI to release a new AI model that would clearly surpass GPT-4 Turbo, it instead released GPT-4o, which is roughly equivalent in ability but faster. During that launch, the company leaned on a flashy new conversational interface rather than a major under-the-hood upgrade.
We’ve previously reported on a rumor of GPT-5 arriving this summer, but in light of this announcement, those rumors may have actually referred to GPT-4o. It’s quite possible that OpenAI is nowhere near releasing a model that can significantly surpass GPT-4, but with the company quiet on the details, we’ll have to wait and see.