o3 matches human levels on ARC-AGI benchmark, and o3-mini exceeds o1 at some tasks.
On Friday, during Day 12 of its “12 days of OpenAI,” OpenAI CEO Sam Altman announced the company’s latest AI “reasoning” models, o3 and o3-mini, which build upon the o1 models launched earlier this year. The company is not releasing them publicly yet but will open the models to safety testing and research access today.
The models use what OpenAI calls “private chain of thought,” where the model pauses to examine its internal dialog and plan ahead before responding, which you might call “simulated reasoning” (SR)—a form of AI that goes beyond basic large language models (LLMs).
The company named the model family “o3” instead of “o2” to avoid potential trademark conflicts with British telecom provider O2, according to The Information. During Friday’s livestream, Altman acknowledged his company’s naming foibles, saying, “In the grand tradition of OpenAI being really, truly bad at names, it’ll be called o3.”
According to OpenAI, the o3 model earned a record-breaking score on ARC-AGI, a visual reasoning benchmark that has gone unbeaten since its creation in 2019. In low-compute scenarios, o3 scored 75.7 percent, while in high-compute testing it reached 87.5 percent, exceeding the 85 percent threshold considered comparable to human performance.
OpenAI also reported that o3 scored 96.7 percent on the 2024 American Invitational Mathematics Exam, missing just one question. The model also reached 87.7 percent on GPQA Diamond, which contains graduate-level biology, physics, and chemistry questions. On the Frontier Math benchmark by EpochAI, o3 solved 25.2 percent of problems, while no other model has exceeded 2 percent.
During the livestream, the president of the ARC Prize Foundation said, “When I see these results, I need to switch my worldview about what AI can do and what it is capable of.”
The o3-mini variant, also announced Friday, includes an adaptive thinking time feature, offering low, medium, and high processing speeds. The company states that higher compute settings produce better results. OpenAI reports that o3-mini outperforms its predecessor, o1, on the Codeforces benchmark.
Simulated reasoning on the rise
OpenAI’s announcement comes as other companies develop their own SR models, including Google, which announced Gemini 2.0 Flash Thinking Experimental on Thursday. In November, DeepSeek launched DeepSeek-R1, while Alibaba’s Qwen team released QwQ, what they called the first “open” alternative to o1.
These new AI models are based on traditional LLMs, but with a twist: They are fine-tuned to produce an iterative chain-of-thought process that can consider its own results, simulating reasoning in an almost brute-force way. That process can be scaled up at inference (running) time rather than relying on improvements during model training, which has recently shown diminishing returns.
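To illustrate the idea of spending more compute at inference time, here is a minimal, hypothetical sketch in Python. It is not how OpenAI's models work internally; it uses a stub in place of a real language model and a simple best-of-N selection loop, just to show why sampling more candidate reasoning chains (more inference-time compute) tends to improve the chosen answer:

```python
import random


def generate_chain(problem: str, rng: random.Random) -> tuple[str, float]:
    """Stub standing in for an LLM producing one candidate chain of thought.

    Returns the chain text and a self-assessed quality score in [0, 1).
    A real system would generate tokens and score the chain with a
    verifier or the model's own evaluation.
    """
    score = rng.random()
    return f"candidate reasoning for {problem!r}", score


def best_of_n(problem: str, n: int, seed: int = 0) -> tuple[str, float]:
    """Sample n candidate chains and keep the highest-scoring one.

    Raising n spends more inference-time compute and can only raise
    (never lower) the best score found -- the scaling knob described
    above, with no change to training.
    """
    rng = random.Random(seed)
    candidates = [generate_chain(problem, rng) for _ in range(n)]
    return max(candidates, key=lambda c: c[1])
```

With a fixed seed, the best score from 64 samples is always at least as high as the score from a single sample, which is the whole point of trading extra compute for quality at run time.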
OpenAI will make the new SR models available first to safety researchers for testing. Altman said the company plans to launch o3-mini in late January, with o3 following shortly after.