Models Aren’t Yet Available to Public

The math Olympiad was never meant to be a battleground for AI dominance, but this weekend changed that. Two of the world’s leading AI labs announced that their systems achieved gold-medal-level scores in the International Mathematical Olympiad.
Both OpenAI and Google DeepMind said their models achieved gold-level performance on the 2025 IMO, a competition normally reserved for gifted high school students.
OpenAI researcher Alexander Wei said Saturday that an experimental large language model developed by the company solved five of six IMO proof problems under standard competition conditions: two 4.5-hour sessions with no calculators, internet access or other external tools. That score is equivalent to a gold medal, a threshold fewer than 9% of human contestants reach annually.
Each solution was reviewed by three independent graders, with unanimous consensus required for acceptance. OpenAI reportedly plans to publish the solutions and grading rubrics for public evaluation.
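The acceptance rule is simple enough to express as a one-line predicate. The sketch below is purely illustrative, with a hypothetical function name, and assumes each grader returns a binary pass/fail verdict:

def accept_solution(verdicts: list[bool]) -> bool:
    # Rule as OpenAI described it: three independent graders,
    # and a solution counts only with unanimous agreement.
    assert len(verdicts) == 3, "expected three independent graders"
    return all(verdicts)

In other words, a single dissenting grader is enough to reject a solution.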
That approach differed from Google DeepMind’s. The company worked with IMO organizers to have its Gemini Deep Think model evaluated through the competition’s formal grading process. DeepMind said the model correctly solved five of six problems, earning a gold-medal score of 35 points.
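For context, each of the six IMO problems is marked out of 7 points, so the arithmetic behind the reported score is straightforward:

\[
5 \times 7 = 35 \quad \text{points, out of a possible} \quad 6 \times 7 = 42.
\]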
DeepMind’s new model marks an architectural shift from last year’s effort. Its earlier systems, AlphaProof and AlphaGeometry 2, required problems to be translated into a formal mathematical language and took up to three days per problem. This year’s Gemini Deep Think model worked on the IMO questions entirely in natural language and produced complete proofs within the 4.5-hour time limit, with no human translation step.
In one case, Deep Think solved a particularly tricky problem using only elementary number theory. DeepMind researcher and Brown University professor Junehyuk Jung reportedly said that many human contestants leaned on graduate-level tools such as Dirichlet’s theorem, while the model constructed a simpler, self-contained proof.
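For readers unfamiliar with the reference: Dirichlet’s theorem on arithmetic progressions guarantees that, for coprime positive integers $a$ and $d$, the progression $a, a+d, a+2d, \dots$ contains infinitely many primes:

\[
\gcd(a, d) = 1 \;\Longrightarrow\; \{\, a + nd : n \in \mathbb{N} \,\} \ \text{contains infinitely many primes.}
\]

The statement is included only for context; the announcements don’t detail how contestants applied it.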
The IMO was first held in 1959 and is one of the most prestigious math contests in the world. Many AI researchers, including those at both OpenAI and DeepMind, come from Olympiad backgrounds.
Google’s announcement came after OpenAI posted about the results. Both companies received the same problem set from IMO organizers and were asked not to share results before July 28. Harmonic, another AI lab participating in the challenge, said that it would respect the embargo.
Responding to criticism online, OpenAI research scientist Noam Brown wrote on X: “We weren’t in touch with IMO. I spoke with one organizer before the post to let him know. He requested we wait until after the closing ceremony ends to respect the kids, and we did.”
Others contested that timeline. Developer and IMO observer Mikhail Samin posted that an IMO coordinator said OpenAI published its results before the ceremony concluded and had not been part of the official collaboration process.
DeepMind said Gemini Deep Think is being tested with a small group of expert users, including mathematicians, and may eventually be offered to premium AI subscribers. There is no confirmed release plan for a consumer version of the IMO-tuned model. OpenAI’s Wei also clarified that the model used in the experiment is separate from the company’s next major release.