
ZDNET’s key takeaways
Kagan praised Claude’s analysis of a complex legal issue.
Many lawyers have been caught using ChatGPT poorly in case filings.
The legal profession is grappling with its use of AI.
Can AI provide legitimately useful assistance to lawyers and judges? One of the nation’s most powerful attorneys seems to think so.
US Supreme Court Associate Justice Elena Kagan said recently that Anthropic’s Claude chatbot “did an exceptional job of figuring out an extremely difficult” Constitutional dispute — one that had twice previously divided the Court, according to a report from Bloomberg Law.
Speaking at the Ninth Circuit’s judicial conference in Monterey, California last month, Kagan referred to recent blog posts from Supreme Court litigator Adam Unikowsky, which describe his experiments using Claude for complex legal analysis. The dispute in question revolved around the Confrontation Clause, part of the Sixth Amendment, which guarantees defendants the opportunity to cross-examine witnesses testifying against them in court.
In one post from last year, Unikowsky prompted Claude 3.5 Sonnet to assess the court’s majority and dissenting opinions on Smith v. Arizona — the most recent Confrontation Clause case — for which Kagan authored the majority opinion.
“Claude is more insightful about the Confrontation Clause than any mortal,” Unikowsky wrote in that post.
AI, work, and the law
Unikowsky’s and Kagan’s praise of Claude’s jurisprudence points to a broader reality of AI: high highs and low lows. While the technology can produce bursts of insight that professionals and experts recognize, the courts are still working out the ramifications of AI in the legal field, and the technology’s potential more broadly remains patchy at best.
In recent years, several lawyers have been caught, in well-publicized incidents, using ChatGPT to craft legal arguments and supporting documents. In many of those cases, the chatbot hallucinated legal cases, confidently supplying inaccurate information, either without citations or with fabricated ones, and those fictitious cases were then referenced as precedents in court filings. Last month, for example, a federal judge reportedly sanctioned three lawyers in Alabama after they included fictitious cases generated by ChatGPT in a filing defending the state’s prison system.
Kagan added while speaking at the Ninth Circuit’s conference that she didn’t “have the foggiest idea” how AI will ultimately reshape her field, according to Bloomberg Law. Currently, no rules exist that bar lawyers from using the technology, though several legal bodies have put out ethics guidelines and best practices.
In a 2023 end-of-year report to the Federal Judiciary, US Chief Justice John Roberts highlighted the possibility that AI legal advisors could one day provide useful service to those who aren’t able to afford a (human) lawyer. At the same time, he tried to assuage any fears his colleagues might be feeling about their future job security, noting he was confident that judges would not become obsolete amid the burgeoning wave of automation.
A recent report from Microsoft highlighting the jobs that are most likely to be replaced by AI placed “lawyers, judges, and related workers” near the middle, right between architects and personal care workers.
The stakes are high
Kagan’s comments seem to support the idea that generative AI could be a legitimately useful tool for legal experts trying to understand the nuances of complex cases, though perhaps not in every situation. Chatbots like Claude and ChatGPT excel at detecting subtle patterns across huge bodies of data, something that human lawyers are also trained to do, but which AI systems can do on a bigger scale.
But the ongoing reality of hallucination means that it’ll likely be some time before the legal profession is able to onboard these tools meaningfully. These issues aren’t restricted to the legal field, either; new AI models and agents are still routinely falling short of expectations and, at times, causing serious damage when deployed in workflows.
There will always be a few people in any industry who attempt to use AI covertly to sidestep the more difficult parts of their jobs, and the wrists of a few more lawyers will likely need to be slapped for submitting hallucinated briefs before the profession can impose broad rules and regulations governing its use of the technology. Kagan’s comments, meanwhile, will likely encourage other legal professionals to turn to generative AI for various professional purposes.
In the absence of federal regulation, it remains up to the discretion of individuals to decide how they use — or don’t use — AI at work. Given the huge stakes, let’s all hope lawyers and judges err on the side of caution.