OpenAI has revealed that its unreleased reasoning model earned a gold medal at the International Mathematical Olympiad (IMO), igniting debate in the competitive math community.
AI Competes Alongside Top High School Mathematicians
While most high school students enjoy a break from academics, top math competitors from around the globe gathered for the prestigious IMO. Alongside the human contest, AI labs put their large language models (LLMs) to the test on this year's problems. OpenAI's model impressed by solving five of the six challenging problems, scoring 35 out of 42 points, a result worthy of a gold medal, according to researcher Alexander Wei, who shared the news on X.
Each submitted proof was independently graded by three former IMO gold medalists, with unanimous agreement required to finalize the score. The problems test algebra and other pre-calculus skills, but they demand creative, multi-step reasoning, which makes the model's achievement particularly notable.
Timing of Announcement Causes Tensions
However, the timing of OpenAI's public announcement drew criticism for overshadowing the human competitors. According to Mikhail Samin of the AI Governance and Safety Institute, IMO organizers had asked AI labs cooperating with the organization to delay their announcements by a week so that the students' achievements could take center stage.
OpenAI stated it did not formally cooperate with the IMO and instead verified its results independently with mathematicians, so it did not feel bound by such requests.
Rumors suggest the move upset IMO organizers, who viewed OpenAI's early reveal as “rude” and “inappropriate.” Supporting this sentiment, Samin shared a screenshot allegedly written by two-time IMO gold medalist Joseph Myers, though Myers has not publicly confirmed the message's authenticity.
In response, OpenAI researcher Noam Brown said the results were shared only after the IMO closing ceremony, in line with one organizer's request. He also clarified that OpenAI had not been in direct communication with IMO officials about timing or announcement protocols.
In contrast, Google DeepMind publicly announced that an “advanced version of Gemini with Deep Think” officially achieved gold-medal standard at the IMO. DeepMind’s model was “officially graded and certified by IMO coordinators” under the same criteria as human solutions. The timing of this announcement closely followed OpenAI’s and appeared coordinated with the IMO.
Author’s Opinion
While OpenAI’s AI achievement is undeniably impressive, the controversy around its announcement highlights the importance of respecting traditional institutions and the human effort behind competitions like the IMO. AI labs must balance transparency and publicity with sensitivity toward the human competitors whose hard work defines these events. Early reveals risk alienating organizers and overshadowing student accomplishments, which could harm future collaboration between AI researchers and educational communities.