Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

OpenAI and partners are building a massive AI data center in Texas

IBM donates “Trusted AI” projects to Linux Foundation AI

Stocks making the biggest moves after hours: NOW, IBM, CMG

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • OpenAI (GPT-4 / GPT-4o)
    • Anthropic (Claude 3)
    • Google DeepMind (Gemini)
    • Meta (LLaMA)
    • Cohere (Command R)
    • Amazon (Titan)
    • IBM (Watsonx)
    • Inflection AI (Pi)
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • AI Experts
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • The TechLead
    • Matt Wolfe AI
    • Andrew Ng
    • OpenAI
    • Expert Blogs
      • François Chollet
      • Gary Marcus
      • IBM
      • Jack Clark
      • Jeremy Howard
      • Melanie Mitchell
      • Andrew Ng
      • Andrej Karpathy
      • Sebastian Ruder
      • Rachel Thomas
      • IBM
  • AI Tools
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
  • AI Policy
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
  • Industry AI
    • Finance AI
    • Healthcare AI
    • Education AI
    • Energy AI
    • Legal AI
LinkedIn Instagram YouTube Threads X (Twitter)
Advanced AI News
VentureBeat AI

Former Anthropic exec raises $15M to insure AI agents and help startups deploy safely

By Advanced AI EditorJuly 23, 2025No Comments9 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now

A new startup founded by a former Anthropic executive has raised $15 million to solve one of the most pressing challenges facing enterprises today: how to deploy artificial intelligence systems without risking catastrophic failures that could damage their businesses.

The Artificial Intelligence Underwriting Company (AIUC), which launches publicly today, combines insurance coverage with rigorous safety standards and independent audits to give companies confidence in deploying AI agents — autonomous software systems that can perform complex tasks like customer service, coding, and data analysis.

The seed funding round was led by Nat Friedman, former GitHub CEO, through his firm NFDG, with participation from Emergence Capital, Terrain, and several notable angel investors including Ben Mann, co-founder of Anthropic, and former chief information security officers at Google Cloud and MongoDB.

“Enterprises are walking a tightrope,” said Rune Kvist, AIUC’s co-founder and CEO, in an interview. “On the one hand, you can stay on the sidelines and watch your competitors make you irrelevant, or you can lean in and risk making headlines for having your chatbot spew Nazi propaganda, or hallucinating your refund policy, or discriminating against the people you’re trying to recruit.”

The AI Impact Series Returns to San Francisco – August 5

The next phase of AI is here – are you ready? Join leaders from Block, GSK, and SAP for an exclusive look at how autonomous agents are reshaping enterprise workflows – from real-time decision-making to end-to-end automation.

Secure your spot now – space is limited: https://bit.ly/3GuuPLF

The company’s approach tackles a fundamental trust gap that has emerged as AI capabilities rapidly advance. While AI systems can now perform tasks that rival human undergraduate-level reasoning, many enterprises remain hesitant to deploy them due to concerns about unpredictable failures, liability issues, and reputational risks.

Creating security standards that move at AI speed

AIUC’s solution centers on creating what Kvist calls “SOC 2 for AI agents” — a comprehensive security and risk framework specifically designed for artificial intelligence systems. SOC 2 is the widely-adopted cybersecurity standard that enterprises typically require from vendors before sharing sensitive data.

“SOC 2 is a standard for cybersecurity that specifies all the best practices you must adopt in sufficient detail so that a third party can come and check whether a company meets those requirements,” Kvist explained. “But it doesn’t say anything about AI. There are tons of new questions like: how are you handling my training data? What about hallucinations? What about these tool calls?”

The AIUC-1 standard addresses six key categories: safety, security, reliability, accountability, data privacy, and societal risks. The framework requires AI companies to implement specific safeguards, from monitoring systems to incident response plans, that can be independently verified through rigorous testing.

“We take these agents and test them extensively, using customer support as an example since that’s easy to relate to. We try to get the system to say something racist, to give me a refund I don’t deserve, to give me a bigger refund than I deserve, to say something outrageous, or to leak another customer’s data. We do this thousands of times to get a real picture of how robust the AI agent actually is,” Kvist said.

From Benjamin Franklin’s fire insurance to AI risk management

The insurance-centered approach draws on centuries of precedent where private markets moved faster than regulation to enable the safe adoption of transformative technologies. Kvist frequently references Benjamin Franklin’s creation of America’s first fire insurance company in 1752, which led to building codes and fire inspections that tamed the blazes ravaging Philadelphia’s rapid growth.

“Throughout history, insurance has been the right model for this, and the reason is that insurers have an incentive to tell the truth,” Kvist explained. “If they say the risks are bigger than they are, someone’s going to sell cheaper insurance. If they say the risks are smaller than they are, they’re going to have to pay the bill and go out of business.”

The same pattern emerged with automobiles in the 20th century, when insurers created the Insurance Institute of Highway Safety and developed crash testing standards that incentivized safety features like airbags and seatbelts — years before government regulation mandated them.

Major AI companies already using the new insurance model

AIUC has already begun working with several high-profile AI companies to validate its approach. The company has certified AI agents for unicorn startups Ada (customer support) and Cognition (coding), and helped unlock enterprise deals that had been stalled due to trust concerns.

“Ada, we help them unlock a deal with the top five social media company where we will came in and ran independent tests on the risks that this company cared about, and that helped unlock that deal, basically giving them the confidence that this could actually be shown to their customers,” Kvist said.

The startup is also developing partnerships with established insurance providers, including Lloyd’s of London, the world’s oldest insurance market, to provide the financial backing for policies. This addresses a key concern about trusting a startup with major liability coverage.

“The insurance policies are going to be backed by the balance sheets of the big insurers,” Kvist explained. “So for example, when we work with Lloyd’s of London, the world’s oldest insurer, they’ve never failed to pay a claim, and the insurance policy ultimately comes from them.”

Quarterly updates vs. years-long regulatory cycles

One of AIUC’s key innovations is designing standards that can keep pace with AI’s breakneck development speed. While traditional regulatory frameworks like the EU AI Act take years to develop and implement, AIUC plans to update its standards quarterly.

“The EU AI Act was started back in 2021, they’re now about to release it, but they’re pausing it again because it’s too onerous four years later,” Kvist noted. “That cycle makes it very hard to get the legacy regulatory process to keep up with this technology.”

This agility has become increasingly important as the competitive gap between US and Chinese AI capabilities narrows. “A year and a half ago, everyone would say, like, we’re two years ahead now, that sounds like eight months, something like that,” Kvist observed.

How AI insurance actually works: testing systems to breaking point

AIUC’s insurance policies cover various types of AI failures, from data breaches and discriminatory hiring practices to intellectual property infringement and incorrect automated decisions. The company prices coverage based on extensive testing that attempts to break AI systems thousands of times across different failure modes.

“For some of the other things, we think it’s interesting to you. Or not wait for a lawsuit. So for example, if you issue an incorrect refund, great, well, the price of that is obvious, is the amount of money that you incorrectly refunded,” Kvist explained.

The startup works with a consortium of partners including PwC (one of the “Big Four” accounting firms), Orrick (a leading AI law firm), and academics from Stanford and MIT to develop and validate its standards.

Former Anthropic executive leaves to solve AI trust problem

The founding team brings deep experience from both AI development and institutional risk management. Kvist was the first product and go-to-market hire at Anthropic in early 2022, before ChatGPT’s launch, and sits on the board of the Center for AI Safety. Co-founder Brandon Wang is a Thiel Fellow who previously built consumer underwriting businesses, while Rajiv Dattani is a former McKinsey partner who led global insurance work and served as COO of METR, a nonprofit that evaluates leading AI models.

“The question that really interested me is: how, as a society, are we going to deal with this technology that’s washing over us?” Kvist said of his decision to leave Anthropic. “I think building AI, which is what Anthropic is doing, is very exciting and will do a lot of good for the world. But the most central question that gets me up in the morning is: how, as a society, are we going to deal with this?”

The race to make AI safe before regulation catches up

AIUC’s launch signals a broader shift in how the AI industry approaches risk management as the technology moves from experimental deployments to mission-critical business applications. The insurance model offers enterprises a path between the extremes of reckless AI adoption and paralyzed inaction while waiting for comprehensive government oversight.

The startup’s approach could prove crucial as AI agents become more capable and widespread across industries. By creating financial incentives for responsible development while enabling faster deployment, companies like AIUC are building the infrastructure that could determine whether artificial intelligence transforms the economy safely or chaotically.

“We’re hoping that this insurance model, this market-based model, both incentivizes fast adoption and investment in security,” Kvist said. “We’ve seen this throughout history—that the market can move faster than legislation on these issues.”

The stakes couldn’t be higher. As AI systems edge closer to human-level reasoning across more domains, the window for building robust safety infrastructure may be rapidly closing. AIUC’s bet is that by the time regulators catch up to AI’s breakneck pace, the market will have already built the guardrails.

After all, Philadelphia’s fires didn’t wait for government building codes — and today’s AI arms race won’t wait for Washington either.

Daily insights on business use cases with VB Daily

If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

Read our Privacy Policy

Thanks for subscribing. Check out more VB newsletters here.

An error occured.



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleAI’s talent arms race is starting to look like pro sports
Next Article Why LLMs Might Finally Be Good News for Premium Publishers
Advanced AI Editor
  • Website

Related Posts

Open-source MCPEval makes protocol-level agent testing plug-and-play

July 23, 2025

Intuit brings agentic AI to the mid-market saving organizations 17 to 20 hours a month

July 23, 2025

Mixture-of-recursions delivers 2x faster inference—Here’s how to implement it

July 23, 2025

Comments are closed.

Latest Posts

Winston Artory Merger Targets $15B Art Valuation Market

Denver Museum Discovers 67.5 Million-Year-Old Fossil Under Parking Lot

Barnes Foundation Online Learning Platform Expands to Penn Museum

Archaeologists Identify 5,500-Year-Old Megalithic Tombs in Poland

Latest Posts

OpenAI and partners are building a massive AI data center in Texas

July 23, 2025

IBM donates “Trusted AI” projects to Linux Foundation AI

July 23, 2025

Stocks making the biggest moves after hours: NOW, IBM, CMG

July 23, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • OpenAI and partners are building a massive AI data center in Texas
  • IBM donates “Trusted AI” projects to Linux Foundation AI
  • Stocks making the biggest moves after hours: NOW, IBM, CMG
  • Customize Amazon Nova in Amazon SageMaker AI using Direct Preference Optimization
  • Alibaba unleashes Qwen3 coding model for developers to push AI agent adoption

Recent Comments

  1. 1win app download on Former Tesla AI czar Andrej Karpathy coins ‘vibe coding’: Here’s what it means
  2. 📃 ✉️ Pending Deposit: 1.8 BTC from new sender. Review? > https://graph.org/REDEEM-BTC-07-23?hs=60194a6753699dfb5804798d5843ffd0& 📃 on This Neural Network Optimizes Itself | Two Minute Papers #212
  3. 📉 📩 Pending Deposit - 1.0 BTC from unknown sender. Review? => https://graph.org/REDEEM-BTC-07-23?hs=16ed4f83e039fc01f975372e66ec05d7& 📉 on OpenAI seeks to make its upcoming ‘open’ AI model best-in-class
  4. 📊 📩 Pending Transfer: 1.8 BTC from unknown sender. Approve? >> https://graph.org/REDEEM-BTC-07-23?hs=8f64f5846f6d90e5a1ebb4bba272bbea& 📊 on Nvidia’s GB200 NVL72 Supercomputer Achieves 2.7× Faster Inference on DeepSeek V2
  5. 📅 ✉️ New Deposit: 1.8 BTC from new sender. Approve? > https://graph.org/REDEEM-BTC-07-23?hs=5719fe560af3b8c36c0a0976ea7a6f6b& 📅 on Meta, Booz Allen develop ‘Space Llama’ AI system for the International Space Station

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.