Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now
A new startup founded by a former Anthropic executive has raised $15 million to solve one of the most pressing challenges facing enterprises today: how to deploy artificial intelligence systems without risking catastrophic failures that could damage their businesses.
The Artificial Intelligence Underwriting Company (AIUC), which launches publicly today, combines insurance coverage with rigorous safety standards and independent audits to give companies confidence in deploying AI agents — autonomous software systems that can perform complex tasks like customer service, coding, and data analysis.
The seed funding round was led by Nat Friedman, former GitHub CEO, through his firm NFDG, with participation from Emergence Capital, Terrain, and several notable angel investors including Ben Mann, co-founder of Anthropic, and former chief information security officers at Google Cloud and MongoDB.
“Enterprises are walking a tightrope,” said Rune Kvist, AIUC’s co-founder and CEO, in an interview. “On the one hand, you can stay on the sidelines and watch your competitors make you irrelevant, or you can lean in and risk making headlines for having your chatbot spew Nazi propaganda, or hallucinating your refund policy, or discriminating against the people you’re trying to recruit.”
The AI Impact Series Returns to San Francisco – August 5
The next phase of AI is here – are you ready? Join leaders from Block, GSK, and SAP for an exclusive look at how autonomous agents are reshaping enterprise workflows – from real-time decision-making to end-to-end automation.
Secure your spot now – space is limited: https://bit.ly/3GuuPLF
The company’s approach tackles a fundamental trust gap that has emerged as AI capabilities rapidly advance. While AI systems can now perform tasks that rival human undergraduate-level reasoning, many enterprises remain hesitant to deploy them due to concerns about unpredictable failures, liability issues, and reputational risks.
Creating security standards that move at AI speed
AIUC’s solution centers on creating what Kvist calls “SOC 2 for AI agents” — a comprehensive security and risk framework specifically designed for artificial intelligence systems. SOC 2 is the widely-adopted cybersecurity standard that enterprises typically require from vendors before sharing sensitive data.
“SOC 2 is a standard for cybersecurity that specifies all the best practices you must adopt in sufficient detail so that a third party can come and check whether a company meets those requirements,” Kvist explained. “But it doesn’t say anything about AI. There are tons of new questions like: how are you handling my training data? What about hallucinations? What about these tool calls?”
The AIUC-1 standard addresses six key categories: safety, security, reliability, accountability, data privacy, and societal risks. The framework requires AI companies to implement specific safeguards, from monitoring systems to incident response plans, that can be independently verified through rigorous testing.
“We take these agents and test them extensively, using customer support as an example since that’s easy to relate to. We try to get the system to say something racist, to give me a refund I don’t deserve, to give me a bigger refund than I deserve, to say something outrageous, or to leak another customer’s data. We do this thousands of times to get a real picture of how robust the AI agent actually is,” Kvist said.
From Benjamin Franklin’s fire insurance to AI risk management
The insurance-centered approach draws on centuries of precedent where private markets moved faster than regulation to enable the safe adoption of transformative technologies. Kvist frequently references Benjamin Franklin’s creation of America’s first fire insurance company in 1752, which led to building codes and fire inspections that tamed the blazes ravaging Philadelphia’s rapid growth.
“Throughout history, insurance has been the right model for this, and the reason is that insurers have an incentive to tell the truth,” Kvist explained. “If they say the risks are bigger than they are, someone’s going to sell cheaper insurance. If they say the risks are smaller than they are, they’re going to have to pay the bill and go out of business.”
The same pattern emerged with automobiles in the 20th century, when insurers created the Insurance Institute of Highway Safety and developed crash testing standards that incentivized safety features like airbags and seatbelts — years before government regulation mandated them.
Major AI companies already using the new insurance model
AIUC has already begun working with several high-profile AI companies to validate its approach. The company has certified AI agents for unicorn startups Ada (customer support) and Cognition (coding), and helped unlock enterprise deals that had been stalled due to trust concerns.
“Ada, we help them unlock a deal with the top five social media company where we will came in and ran independent tests on the risks that this company cared about, and that helped unlock that deal, basically giving them the confidence that this could actually be shown to their customers,” Kvist said.
The startup is also developing partnerships with established insurance providers, including Lloyd’s of London, the world’s oldest insurance market, to provide the financial backing for policies. This addresses a key concern about trusting a startup with major liability coverage.
“The insurance policies are going to be backed by the balance sheets of the big insurers,” Kvist explained. “So for example, when we work with Lloyd’s of London, the world’s oldest insurer, they’ve never failed to pay a claim, and the insurance policy ultimately comes from them.”
Quarterly updates vs. years-long regulatory cycles
One of AIUC’s key innovations is designing standards that can keep pace with AI’s breakneck development speed. While traditional regulatory frameworks like the EU AI Act take years to develop and implement, AIUC plans to update its standards quarterly.
“The EU AI Act was started back in 2021, they’re now about to release it, but they’re pausing it again because it’s too onerous four years later,” Kvist noted. “That cycle makes it very hard to get the legacy regulatory process to keep up with this technology.”
This agility has become increasingly important as the competitive gap between US and Chinese AI capabilities narrows. “A year and a half ago, everyone would say, like, we’re two years ahead now, that sounds like eight months, something like that,” Kvist observed.
How AI insurance actually works: testing systems to breaking point
AIUC’s insurance policies cover various types of AI failures, from data breaches and discriminatory hiring practices to intellectual property infringement and incorrect automated decisions. The company prices coverage based on extensive testing that attempts to break AI systems thousands of times across different failure modes.
“For some of the other things, we think it’s interesting to you. Or not wait for a lawsuit. So for example, if you issue an incorrect refund, great, well, the price of that is obvious, is the amount of money that you incorrectly refunded,” Kvist explained.
The startup works with a consortium of partners including PwC (one of the “Big Four” accounting firms), Orrick (a leading AI law firm), and academics from Stanford and MIT to develop and validate its standards.
Former Anthropic executive leaves to solve AI trust problem
The founding team brings deep experience from both AI development and institutional risk management. Kvist was the first product and go-to-market hire at Anthropic in early 2022, before ChatGPT’s launch, and sits on the board of the Center for AI Safety. Co-founder Brandon Wang is a Thiel Fellow who previously built consumer underwriting businesses, while Rajiv Dattani is a former McKinsey partner who led global insurance work and served as COO of METR, a nonprofit that evaluates leading AI models.
“The question that really interested me is: how, as a society, are we going to deal with this technology that’s washing over us?” Kvist said of his decision to leave Anthropic. “I think building AI, which is what Anthropic is doing, is very exciting and will do a lot of good for the world. But the most central question that gets me up in the morning is: how, as a society, are we going to deal with this?”
The race to make AI safe before regulation catches up
AIUC’s launch signals a broader shift in how the AI industry approaches risk management as the technology moves from experimental deployments to mission-critical business applications. The insurance model offers enterprises a path between the extremes of reckless AI adoption and paralyzed inaction while waiting for comprehensive government oversight.
The startup’s approach could prove crucial as AI agents become more capable and widespread across industries. By creating financial incentives for responsible development while enabling faster deployment, companies like AIUC are building the infrastructure that could determine whether artificial intelligence transforms the economy safely or chaotically.
“We’re hoping that this insurance model, this market-based model, both incentivizes fast adoption and investment in security,” Kvist said. “We’ve seen this throughout history—that the market can move faster than legislation on these issues.”
The stakes couldn’t be higher. As AI systems edge closer to human-level reasoning across more domains, the window for building robust safety infrastructure may be rapidly closing. AIUC’s bet is that by the time regulators catch up to AI’s breakneck pace, the market will have already built the guardrails.
After all, Philadelphia’s fires didn’t wait for government building codes — and today’s AI arms race won’t wait for Washington either.