OpenAI announced that its unreleased reasoning model won gold at the International Mathematical Olympiad (IMO), igniting fierce drama in the world of competitive math.
While most high schoolers blissfully enjoy a break from school and homework, top math students from around the world brought their A-game to the IMO, widely considered the most prestigious math competition. AI labs also competed with their LLMs, and an unreleased model from OpenAI achieved a high enough score to earn a gold medal, according to researcher Alexander Wei, who shared the news on X.
The OpenAI model got five of the six problems correct, earning a gold medal-worthy score of 35 out of 42 points. “For each problem, three former IMO medalists independently graded the model’s submitted proof, with scores finalized after unanimous consensus,” according to Wei. The problems are algebra and pre-calculus challenges that require creative thinking on the competitor’s part. So for an LLM to reason its way through long, complex proofs is an impressive achievement.
However, the timing of the announcement is being criticized for overshadowing the human competitors’ results. The IMO reportedly asked the AI labs officially working with the organization to verify results to wait a week before making any announcements, to avoid stealing the kids’ thunder. That’s according to an X post from Mikhail Samin, who runs the nonprofit AI Governance and Safety Institute. OpenAI said it didn’t formally cooperate with the IMO to verify its results, instead working with individual mathematicians to independently verify its scores, and so it wasn’t beholden to any such agreement. Mashable sent a direct message to Samin on X for comment.
But the gossip is that this rubbed organizers the wrong way; they reportedly found it “rude” and “inappropriate” for OpenAI to do this. This is all hearsay, based on rumors from Samin, who also posted a screenshot of a similar comment from someone named Joseph Myers, presumably the two-time IMO gold medalist. Mashable contacted Myers for comment, but he has not publicly confirmed the authenticity of the screenshot.
In response, OpenAI researcher Noam Brown said the company posted its results only after the IMO closing ceremony, honoring an IMO organizer’s request. Brown also said OpenAI wasn’t in touch with the IMO, suggesting it never made any agreement to delay the announcement.
Meanwhile, Google DeepMind reportedly did cooperate with the IMO and announced this afternoon that an “advanced version of Gemini with Deep Think officially achieve[d] gold-medal standard at the International Mathematical Olympiad.” According to the announcement, DeepMind’s model was “officially graded and certified by IMO coordinators using the same criteria as for student solutions.” Read into that statement as much or as little as you want, but the timing is hardly coincidental.
Others may follow the Real Housewives, but the proper decorum of elite math competitions is the high drama we live for.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.