In a short-form video post, an influencer gets worked up about a television news story from California. The images broadcast behind her appear authentic, with an anchor calling viewers to action, victims and even a CNN logo.
“California accident victims getting insane payouts,” the anchor says above a banner touting “BREAKING NEWS.”
But what looks like a social media star excited about local news is actually an advertisement to entice people to sign up for legal services. And much of it is generated by artificial intelligence.
With a slew of new AI video tools and new ways to share them launched in recent months, the line between newscast and sales pitch is starting to blur.
Personal injury lawyers have long been known for over-the-top ads. They tap into the latest methods — radio, television, 1-800 numbers, billboards, bus stop benches and infomercials — to burn their brands into consumers’ consciousness. The ads are intentionally repetitive, outrageous and catchy, so if viewers have an accident, they recall who to call.
Now they are using AI to create a new wave of ads that are more convincing, compelling and local.
“Online ads for both goods and services are using AI-generated humans and AI replicas of influencers to promote their brand without disclosing the synthetic nature of the people represented,” said Alexios Mantzarlis, the director of trust, safety and security at Cornell Tech. “This trend is not encouraging for the pursuit of truth in advertising.”
It isn’t just television news that is being cloned by bots. Increasingly, the screaming headlines in people’s news feeds are generated by AI on behalf of advertisers.
In one online debt repayment ad, a man holds a newspaper with a headline suggesting California residents with $20,000 in debt are eligible for help. The ad shows borrowers lined up for the benefit. The man, the “Forbes” newspaper he is holding and the line of people are all AI-generated, experts say.
Despite growing criticism of what some have dubbed “AI slop,” companies have continued to launch increasingly powerful tools for realistic AI video generation, making it easy to create sophisticated fake news stories and broadcasts.
Meta recently introduced Vibes, a dedicated app for creating and sharing short-form, AI-generated videos. Days later, OpenAI released its own Sora app for sharing AI videos, with an updated video and audio generation model.
Sora’s “Cameo” feature enables users to insert their own image or that of a friend into short, photo-realistic AI videos. The videos take seconds to make.
Since its launch last Friday, the Sora app has risen to the top of the App Store download rankings. OpenAI is encouraging companies and developers to use its tools to build and promote their products and services.
“We hope that now with Sora 2 video in the [Application Programming Interface], you will generate the same high-quality videos directly inside your products, complete with the realistic and synchronized sound, and find all sorts of great new things to build,” OpenAI Chief Executive Sam Altman told developers this week.
What’s emerging is a new class of synthetic social media platforms that enable users to create, share and discover AI-generated content in a bespoke feed, catering to an individual’s tastes.
Imagine a constant flow of videos as addictive and viral as those on TikTok, but in which it's often impossible to tell which are real.
The danger, experts say, is how these powerful new tools, now affordable to almost anyone, can be used. In other countries, state-backed actors have used AI-generated news broadcasts and stories to disseminate disinformation.
Online safety experts say AI churning out questionable stories, propaganda and ads is drowning out human-generated content in some cases, and worsening the information ecosystem.
YouTube had to delete hundreds of AI-generated videos featuring celebrities, including Taylor Swift, that promoted Medicare scams. Spotify removed millions of AI-generated music tracks. The FBI estimates that Americans have lost $50 billion to deepfake scams since 2020.
Last year, a Los Angeles Times journalist was wrongly declared dead by AI news anchors.
In the world of legal services ads, which have a history of pushing the envelope, some are concerned that rapidly advancing AI makes it easier to skirt restrictions. It is a fine line: legal ads may dramatize, but they are not allowed to promise results or payouts.
The AI newscasts with AI victims holding big AI checks are testing new territory, said Samuel Hyams-Millard, an associate at law firm Sheppard Mullin.
“Someone might see that and think that it’s real, oh, that person actually got paid that amount of money. This is actually on like news, when that may not be the case,” he said. “That’s a problem.”
One trailblazer in the field is Case Connect AI. The company runs sponsored commercials on YouTube Shorts and Facebook, targeting people involved in car accidents and other personal injuries. It also uses AI to let users know how much they might be able to get out of a court case.
In one ad, what appears to be an excited social media influencer says insurance companies are trying to shut down Case Connect because its “compensation calculator” is costing insurance companies so much.
The ad then cuts to what appears to be a five-second news clip about the payouts users are getting. The actor reappears, pointing to another short video of what appears to be couples holding oversized checks and celebrating.
“Everyone behind me used the app and received a massive payout,” says the influencer. “And now it’s your turn.”
In September, at least half a dozen YouTube Shorts ads by Case Connect featured AI-generated news anchors or testimonials from made-up people, according to ads found through the Google Ads Transparency website.
Case Connect doesn’t always use AI-generated humans. Sometimes it uses AI-generated robots or even monkeys to spread its message. The company said it uses Google’s Veo 3 model to create videos. It did not share which parts of its commercials were AI-generated.
Angelo Perone, founder of the Pennsylvania-based Case Connect, says the firm has been running social media ads that use AI to target users in California and other states who may have been hurt in car crashes or other accidents, in hopes of signing them up as clients.
“It gives us a superpower in connecting with people who’ve been injured in car accidents so we can serve them and place them with the right attorney for their situation,” he said.
His company generates leads for law firms and is compensated with a flat fee or a monthly retainer from the firms. It does not practice law.
“We’re navigating this space just like everybody else — trying to do it responsibly while still being effective,” Perone said in an email. “There’s always a balance between meeting people where they’re at and connecting with them in a way that resonates, while also not overpromising, underdelivering, or misleading anyone.”
Perone said that Case Connect is in line with rules and regulations connected to legal ads.
“Everything is compliant with proper disclaimers and language,” he said.
Some lawyers and marketers think his company goes too far.
In January, Robert Simon, a trial lawyer and co-founder of Simon Law Group, posted a video on Instagram saying some Case Connect ads that seemed to be targeting victims of the L.A. County fires were “egregious,” cautioning people about the damage calculator.
As part of the Consumer Attorneys of California, a legislative lobbying group for consumers, Simon said he’s been helping draft Senate Bill 37 to address deceptive ads, a problem that long predates AI.
“We’ve been talking about this for a long time in putting guardrails on more ethics for lawyers,” Simon said.
Personal injury law is an estimated $61-billion market in the U.S., and L.A. is one of the biggest hubs for the business.
Hyams-Millard said that even if Case Connect is not a law firm, lawyers working with it could be held responsible for the potentially misleading nature of its ads.
Even some lead generation companies recognize that some agencies could abuse AI, steering the industry’s ads into dangerous, uncharted waters.
“The need for guardrails isn’t new,” said Vince Wingerter, founder of 4LegalLeads, a lead generation company. “What’s new is that the technology is now more powerful and layered on top.”