A criticism of a rival's approach to AI safety from an OpenAI researcher opened a window into the industry's real struggle: a battle against itself.
It started with a warning from Boaz Barak, a Harvard professor currently on leave and working on safety at OpenAI. He called the launch of xAI’s Grok model “completely irresponsible,” not because of its headline-grabbing antics, but because of what was missing: a public system card, detailed safety evaluations, the basic artefacts of transparency that have become the fragile norm.
It was a clear and necessary call. But a candid reflection from ex-OpenAI engineer Calvin French-Owen, posted just three weeks after he left the company, shows us the other half of the story.
French-Owen’s account suggests a large number of people at OpenAI are indeed working on safety, focusing on very real threats like hate speech, bio-weapons, and self-harm. Yet he offers a telling admission: “Most of the work which is done isn’t published,” he wrote, adding that OpenAI “really should do more to get it out there.”
Here, the simple narrative of a good actor scolding a bad one collapses. In its place, we see the real, industry-wide dilemma laid bare. The whole AI industry is caught in the ‘Safety-Velocity Paradox,’ a deep, structural conflict between the need to move at breakneck speed to compete and the obligation to move with caution to keep us safe.
French-Owen suggests that OpenAI is in a state of controlled chaos, having tripled its headcount to over 3,000 in a single year, where “everything breaks when you scale that quickly.” This chaotic energy is channelled by the immense pressure of a “three-horse race” to AGI against Google and Anthropic. The result is a culture of incredible speed, but also one of secrecy.
Consider the creation of Codex, OpenAI’s coding agent. French-Owen calls the project a “mad-dash sprint,” where a small team built a revolutionary product from scratch in just seven weeks.
It is a textbook example of velocity: French-Owen describes working until midnight most nights and even through weekends to make it happen. That is the human cost of such speed. In an environment moving this fast, is it any wonder that the slow, methodical work of publishing AI safety research feels like a distraction from the race?
This paradox isn’t born of malice, but of a set of powerful, interlocking forces.
There is the obvious competitive pressure to be first. There is also the cultural DNA of these labs, which began as loose groups of “scientists and tinkerers” and still prize breakthroughs over methodical processes. And there is a simple problem of measurement: it is easy to quantify speed and performance, but exceptionally difficult to quantify a disaster that was successfully prevented.
In the boardrooms of today, the visible metrics of velocity will almost always shout louder than the invisible successes of safety. Moving forward, though, cannot be about pointing fingers; it must be about changing the fundamental rules of the game.
We need to redefine what it means to ship a product, making the publication of a safety case as integral as the code itself. We need industry-wide standards that prevent any single company from being competitively punished for its diligence, turning safety from a feature into a shared, non-negotiable foundation.
Most of all, we need to cultivate a culture within AI labs where every engineer – not just the safety department – feels a sense of responsibility.
The race to create AGI is not about who gets there first; it is about how we arrive. The true winner will not be the company that is merely the fastest, but the one that proves to a watching world that ambition and responsibility can, and must, move forward together.
