OpenAI is currently being sued for copyright infringement by The New York Times and authors who claim their content was used to train models without consent. It is also being sued for wrongful death by the parents of a 16-year-old who died by suicide after discussing methods with ChatGPT.
Two people with knowledge of the matter said OpenAI has considered “self-insurance,” or setting aside investor funding to expand its coverage. The company has raised nearly $60 billion to date, with a substantial amount of the funding contingent on a proposed corporate restructuring.
One of those people said OpenAI had discussed setting up a “captive”—a ringfenced insurance vehicle often used by large companies to manage emerging risks. Big tech companies such as Microsoft, Meta, and Google have used captives to cover Internet-era liabilities such as cyber or social media.
Captives can also carry risks, since a substantial claim can deplete an underfunded captive, leaving the parent company vulnerable.
OpenAI said it has insurance in place and is evaluating different insurance structures as the company grows, but does not currently have a captive and declined to comment on future plans.
Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors over its alleged use of pirated books to train AI models.
In court documents, Anthropic’s lawyers warned the suit carried the specter of “unprecedented and potentially business-threatening statutory damages against the smallest one of the many companies developing [AI] with the same books data.”
Anthropic, which has raised more than $30 billion to date, is partly using its own funds for the settlement, according to one person with knowledge of the matter. Anthropic declined to comment.