In a major update for developers and AI users, OpenAI has rolled out its new o3 Pro model to ChatGPT Pro and Team users, marking its most advanced AI release to date. At the same time, the company has significantly lowered the cost of its standard o3 model by 80%, further widening accessibility. However, plans for releasing an open-source AI model have been pushed to later this summer.
The o3 Pro model, now available through ChatGPT and OpenAI’s API, replaces the older o1 Pro and brings a higher level of reasoning and accuracy, especially in areas like science, education, programming, and mathematics. OpenAI describes it as “a version of our most intelligent model, o3, designed to think longer and provide the most reliable responses.” According to the company, the new model has shown superior performance in both internal and academic evaluations, particularly in clarity, instruction adherence, and depth of analysis.
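For developers reaching the model through the API, a call might look like the minimal sketch below. It assumes the model is exposed under the identifier "o3-pro" through the OpenAI Python SDK's Responses API; the prompt is purely illustrative, so check the official documentation for the exact endpoint and parameters before relying on it.

```python
# Minimal sketch, assuming o3 Pro is exposed as "o3-pro" via the
# OpenAI Python SDK's Responses API (verify against current docs).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-pro",  # assumed model identifier
    input="Derive the time complexity of merge sort and justify each step.",
)

# Convenience accessor for the concatenated text output of the response.
print(response.output_text)
```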
What sets o3 Pro apart is its enhanced reliability, a feature that makes it ideal for handling complex queries where precision is critical. As OpenAI notes, “We recommend using it for challenging questions where reliability matters more than speed, and waiting a few minutes is worth the tradeoff.”
Despite using the same underlying architecture as o3, the Pro version has been optimised for dependability. It has access to tools such as Python code execution, document analysis, web browsing, visual input interpretation, and memory-based personalisation. These tools make o3 Pro more versatile, though response times are typically longer than with o1 Pro. Notably, some capabilities, including temporary chats, image generation, and the Canvas interface, are not yet available in o3 Pro; OpenAI has advised users to stick with GPT-4o, o3, or o4-mini for those features. Enterprise and Education customers will gain access to the model in the coming week.
In tandem with the model upgrade, OpenAI announced a dramatic cost reduction for its o3 model—from $10 to $2 per million input tokens and from $40 to $8 per million output tokens. Cached prompt usage comes with further discounts. The update places OpenAI in a more competitive pricing bracket compared to rivals like Google DeepMind’s Gemini and Anthropic’s Claude.
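As a rough illustration of what the cut means in practice, the sketch below compares a hypothetical monthly workload at the old and new o3 rates; the token counts are invented for illustration and cached-input discounts are ignored.

```python
# Back-of-the-envelope cost comparison at the old vs. new o3 API rates.
# Token counts below are hypothetical; cached-prompt discounts are ignored.
OLD_RATES = {"input": 10.00, "output": 40.00}   # USD per million tokens
NEW_RATES = {"input": 2.00, "output": 8.00}     # USD per million tokens

def cost(input_tokens: int, output_tokens: int, rates: dict) -> float:
    return (input_tokens / 1_000_000) * rates["input"] + \
           (output_tokens / 1_000_000) * rates["output"]

# Example workload: 5M input tokens and 1M output tokens per month (hypothetical).
old = cost(5_000_000, 1_000_000, OLD_RATES)   # $90.00
new = cost(5_000_000, 1_000_000, NEW_RATES)   # $18.00
print(f"old: ${old:.2f}, new: ${new:.2f}, savings: {1 - new / old:.0%}")  # 80%
```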
Confirming the change, CEO Sam Altman posted on X: “we dropped the price of o3 by 80%!! excited to see what people will do with it now. think you’ll also be happy with o3-pro pricing for the performance :)”
Not every announcement was forward-moving, however. OpenAI’s open-source AI model, initially expected in June, has been delayed. Altman said the postponement stems from unexpected progress by the research team, which now needs additional time to refine its work. “We are going to take a little more time with our open-weights model, i.e. expect it later this summer but not June,” he wrote.
The open-source model is anticipated to compete with the likes of DeepSeek R1 and is designed to raise the standard for freely accessible large language models.
In a separate blog post, Altman also addressed environmental concerns around AI use, revealing that a single ChatGPT query consumes approximately 0.34 watt-hours of electricity, about what an oven uses in a second, and roughly 0.000085 gallons of water, about one-fifteenth of a teaspoon. He added, “The cost of intelligence should eventually converge to near the cost of electricity.”
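For a sense of scale, the per-query figures can be sanity-checked with some quick arithmetic; the oven wattage below is an assumption used only to test the comparison, not a number from the post.

```python
# Sanity-checking the per-query figures cited in Altman's post.
QUERY_WH = 0.34              # watt-hours per ChatGPT query (from the post)
QUERY_GALLONS = 0.000085     # gallons of water per query (from the post)

OVEN_WATTS = 1_200           # assumed typical oven draw; actual ovens vary
oven_seconds = QUERY_WH / (OVEN_WATTS / 3600)    # seconds of oven use per query
print(f"~{oven_seconds:.1f} s of oven use")      # ≈ 1.0 s at this wattage

ML_PER_GALLON = 3785.41
water_ml = QUERY_GALLONS * ML_PER_GALLON
print(f"~{water_ml:.2f} ml of water")            # ≈ 0.32 ml, about 1/15 teaspoon
```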
With these moves, OpenAI not only aims to empower developers and enterprises with powerful tools at lower costs but also continues to push the envelope in AI innovation.