
(Source: Prathmesh T/Shutterstock)
It’s been a monumental week for OpenAI. Tuesday saw the release of a new open weight family of models, gpt-oss. Midweek, news broke that the company is in talks with investors for a potential stock sale at a valuation of $500 billion. And now, in one of the most anticipated releases since the LLM arms race began only a few years ago, GPT-5 is finally here.
GPT-5 Is Out Today
The main event today was the release of GPT-5, OpenAI’s first flagship model since GPT-4 arrived in March 2023. CEO Sam Altman introduced the system during a livestreamed launch, calling it “a significant step along our path to AGI.” GPT-5 will be available for ChatGPT free, Plus, and Team users starting today, with Enterprise and Education rollouts scheduled for next week. Additionally, developers will have a choice of three API tiers: GPT-5, GPT-5 Mini, and GPT-5 Nano.
The company began its launch by explaining the redesign of ChatGPT’s routing logic. Previously, ChatGPT routed users’ more routine prompts to fast models and complex tasks to the company’s slower reasoning variants. GPT-5 removes that fork.
“Until now, our users have had to pick between the fast responses of standard GPTs or the slow, more thoughtful responses from our reasoning models. But GPT-5 eliminates this choice. It aims to think just the perfect amount to give you the perfect answer,” OpenAI Chief Research Officer Mark Chen said at the launch. An internal controller now decides how long the model should think, aiming to deliver the best answer without extra latency in simple cases, the company explained.

OpenAI CEO Sam Altman debuted GPT-5 in a livestream earlier today. (Source: OpenAI livestream)
Benchmark slides shown at the event indicate GPT-5 earned a 74.9% score on SWE-Bench, which measures bug fixes in Python coding projects. The model scores 88% on the Aider Polyglot coding test and sets a new high on the multimodal MMMU visual-reasoning suite. On the 2025 AIME high-school math exam, GPT-5 surpasses GPT-4o by an undisclosed margin. OpenAI staff cautioned that formal evaluations do not cover every real-world use case, but they also noted that the higher scores align with internally observed gains in reliability.
Hallucination and deception were major targets during training, the company said. Safety lead Sachi Desai said GPT-5 shows fewer factual errors on internal tests and uses a safe-completion method instead of a simple comply-or-refuse rule. Regarding deception, Desai explained: “These are instances where the model might misrepresent its actions to the user or lie about task success. This can especially happen if the task is underspecified, impossible, or lacking key tools. And we found that GPT-5 is significantly less deceptive than o3 and o4-mini.” When a request is ambiguous or potentially dangerous, such as instructions for lighting pyrotechnic fireworks, the model now offers partial guidance, points users to safety manuals, and explains any refusal. The change is meant to reduce blanket denials while withholding instructions that could cause harm.
OpenAI is also shipping product and API updates built on GPT-5, including a more natural voice mode with live video context for free users, new personalization options, and memory that can connect to Gmail and Google Calendar, plus a study mode for step-by-step learning. For developers, GPT-5 adds custom tool calls that accept plain text, optional preambles before tool use, a verbosity control, and a minimal reasoning setting to trade depth for speed. The company claims GPT-5 achieves 97% on the Tau-Squared benchmark for multi-tool tasks, up from 49% two months ago.
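As a rough sketch of what those developer controls might look like in practice, the snippet below builds a request payload selecting the mid-priced tier with low verbosity and minimal reasoning. The field layout is an assumption based on the parameter names announced at launch; check OpenAI's official API reference before relying on it, as the actual schema may differ.

```python
import json

# Illustrative GPT-5 request payload exercising the announced controls
# (tier choice, verbosity, minimal reasoning). The exact field structure
# is an assumption; consult OpenAI's API reference for the real schema.

def build_request(prompt: str, tier: str = "gpt-5-mini",
                  verbosity: str = "low",
                  reasoning_effort: str = "minimal") -> str:
    payload = {
        "model": tier,                              # gpt-5, gpt-5-mini, or gpt-5-nano
        "input": prompt,
        "text": {"verbosity": verbosity},           # how long replies should be
        "reasoning": {"effort": reasoning_effort},  # trade depth for speed
    }
    return json.dumps(payload)

req = build_request("Summarize this changelog in one sentence.")
print(req)
```

The appeal of exposing these knobs directly is that a developer can dial a single model family from near-instant completions up to extended deliberation, rather than maintaining integrations against separate fast and reasoning models.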
GPT-5’s launch has been highly anticipated ever since GPT-4’s debut in 2023. Though today’s rollout was confirmed for Free, Plus, and Team users, as of 4:30 p.m. ET, GPT-5 was not yet available for some. A note on the website said, “GPT-5 Rollout: We are gradually rolling out GPT-5 to ensure stability during launch. Some users may not yet see GPT-5 in their account as we increase availability in stages.”
Employee Shares Sale Could Value OpenAI Around $500 Billion
News of GPT-5’s launch lands alongside new reports about OpenAI’s valuation and capital plans. Bloomberg reports the company is in early talks for a secondary sale of employee shares at about $500 billion, with existing investors, including Thrive Capital, exploring purchases. If completed, the deal would lift the company’s paper valuation from the roughly $300 billion set in a $40 billion round led by SoftBank. The outlet says OpenAI also secured $8.3 billion last week as a second tranche of that oversubscribed financing round. A secondary sale would give staff liquidity and may aid retention amid the fierce talent competition with companies like Meta and Anthropic.
Bloomberg also reported that OpenAI and Microsoft are renegotiating their relationship, including Microsoft’s stake and access to OpenAI technology, before the current deal ends in 2030. Microsoft entered a long-term partnership with OpenAI back in 2023, an uneasy arrangement that has proven to be rivalrous, strategic, and interdependent.
The OpenAI profit model remains a hybrid, with a nonprofit parent company overseeing a profit-seeking operating company. The firm has explored changes to the operating arm, including becoming a public benefit corporation, while stating nonprofit oversight would continue. Talks with major investors about structure and governance are ongoing. This debate is unfolding amid rapid growth: OpenAI said it expects ChatGPT to reach 700 million weekly active users this week, up from 500 million in March.
New Open Weight Models: Is OpenAI Finally Living up to Its Name?
Another OpenAI news item not to be missed this week is the company’s release of a new open weight family of models: gpt-oss. The new models come in 20-billion- and 120-billion-parameter versions and are available on Hugging Face and GitHub under the Apache 2.0 license.
“These models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for efficient deployment on consumer hardware. They were trained using a mix of reinforcement learning and techniques informed by OpenAI’s most advanced internal models, including o3 and other frontier systems,” the company said in a blog announcement.
Both new models share a Transformer architecture that leverages a mixture-of-experts design to reduce the number of active parameters needed to process input. gpt-oss-120b activates 5.1B parameters per token, while gpt-oss-20b activates 3.6B. gpt-oss-120b is designed to run in datacenters and on high-end desktop and laptop computers, as it requires an 80GB GPU. The smaller model, gpt-oss-20b, can run on most consumer desktops and laptops, the company says, requiring only 16GB of memory, “making it ideal for on-device use cases, local inference, or rapid iteration without costly infrastructure.” The gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while the gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks, OpenAI notes.
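The back-of-the-envelope arithmetic below, using the per-token figures above and the models' nominal total sizes, shows why mixture-of-experts routing makes these models cheap to run relative to their size: only a small slice of the network fires on any given token.

```python
# Active vs. total parameters per token, using the figures from the
# gpt-oss announcement (totals are the models' nominal marketed sizes).

def active_fraction(total_billions: float, active_billions: float) -> float:
    """Fraction of the model's parameters activated for each token."""
    return active_billions / total_billions

# gpt-oss-120b: ~120B total, 5.1B active per token
# gpt-oss-20b:  ~20B total, 3.6B active per token
print(f"gpt-oss-120b activates {active_fraction(120, 5.1):.1%} of its weights per token")
print(f"gpt-oss-20b activates {active_fraction(20, 3.6):.1%} of its weights per token")
```

Roughly 4% of the large model and 18% of the small model participate in each token, which is how a 120B-parameter system fits its compute budget onto a single 80GB GPU.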
Since gpt-oss has open weights, or publicly downloadable parameters, researchers can run the models on their own hardware instead of only through a hosted API. This flexibility could be vital for scientific use cases. Open weight models enable researchers to run reproducible experiments, inspect their results and methods, tune on their domain-specific data, and compare results with other labs through benchmarks, all while keeping data private and costs down.
With gpt-oss, users can expose the model’s full chain of thought, adjust the depth of reasoning, and fine-tune every parameter. The models follow instructions, call external tools such as web search or Python, and offer a provenance log to help audit results. OpenAI says this transparency is intended to speed reproducible research in fields like molecular design and climate modeling.
OpenAI frames the gpt-oss release as a step forward for open weight models, citing improvements in reasoning and safety and noting how gpt-oss complements its hosted models by giving developers more options for research and development: “A healthy open model ecosystem is one dimension to helping make AI widely accessible and beneficial for everyone. We invite developers and researchers to use these models to experiment, collaborate, and push the boundaries of what’s possible. We look forward to seeing what you build.” An interactive demo of gpt-oss is available on OpenAI’s website.