If We Achieve Legal AI Perfection – Everything Changes – Artificial Lawyer

By Advanced AI Editor | April 28, 2025 | 9 min read

Lawyers want genAI tools to give good results. But, what happens when those results become almost perfect, i.e. 99.9% accurate, all of the time? That would mark the end of the legal world as we know it.

So You Want Perfection?

At present, as many studies have shown, genAI results across a range of tasks are not perfect. Without proper retrieval-augmented generation (RAG) support, genAI can produce hallucinations, and even with RAG techniques it can still give partial, or potentially misleading, answers that are ‘correct’ in factual terms but miss key information.

Every legal tech company worth its salt is trying to address this. Meanwhile, the foundation model builders, from OpenAI to Anthropic, are also trying to do better, as well as adding reasoning and support for agentic flows that could be used to improve on ‘single shot’ answers. I.e. if you can spend more time on an answer, and/or go back to an LLM multiple times, you might get a more accurate output that meets the user’s specific needs.
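
For readers who want a concrete picture of that ‘multiple passes’ idea, here is a minimal, purely illustrative sketch of a draft-then-critique loop. The call_llm function is a hypothetical stand-in for whichever model API a given product actually uses; nothing here describes any vendor's real pipeline.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    raise NotImplementedError("wire this up to your provider of choice")

def answer_with_review(question: str, passes: int = 2) -> str:
    """Draft an answer, then ask the model to critique and revise it.

    Each extra pass spends more time and more model calls on the same
    question - the 'go back to an LLM multiple times' idea above.
    """
    draft = call_llm(f"Answer this legal research question:\n{question}")
    for _ in range(passes):
        # Ask the model to flag problems with its own draft.
        critique = call_llm(
            "List factual errors, citation problems and missing points "
            f"in this answer.\nQuestion: {question}\nAnswer: {draft}"
        )
        # Revise the draft in light of that critique.
        draft = call_llm(
            "Revise the answer to address the issues listed.\n"
            f"Question: {question}\nAnswer: {draft}\nIssues: {critique}"
        )
    return draft

The trade-off is simple: more passes mean more cost and latency per question, in exchange for (hopefully) fewer of the errors a lawyer would otherwise have to catch.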

In short, the legal world wants more accuracy, and the providers are trying to give it. Nice, right? But… what happens next?

For now, lawyers use genAI tools knowing they need to be ‘in the loop’, and that any output – whether a redline, a new version of a clause based on a playbook, a summary of a deposition, or case law research – has to be taken with a degree of scepticism. Not total scepticism. But, still, lawyers not having absolute faith in what appears ‘magically’ on their screen is the standard approach.

But, as David Wang, now head of innovation at Cooley, and others have said: ‘You can’t have efficiency without accuracy.’ I.e. if AI saves you 30 minutes on a contract review, but you then have to spend 30 minutes checking the result because you cannot feel confident in it, you are back to square one: no net improvement.

And if AI gets a case citation – which has to be objectively correct, with very little wiggle room – totally wrong, then you are even worse off, as you have to find a new way to get the result. Worse still, what happens if you fail to spot that the answer is a bit ‘off’?

Plus, looking at so many other tech tools: what if your car only drove where you steered it 80% of the time, emails only reached the right person ‘quite often’, and items you ordered on Amazon only ‘tended to be what you actually paid for’? Those technologies would be abandoned until they could be fixed.

But, lawyers – ironically for a profession famed for its attention to detail – accept a world where they work with imperfect genAI results. Why? Because the legal world is in fact highly attuned to error spotting and output improvement.

A law firm is not just an economic pyramid of legal labour, it’s a pyramid of quality control. I.e. junior associate work product is checked as it advances up the structure to more senior lawyers, then to the partners, and finally to the clients – who also then ‘correct’ the work. It’s a long chain of quality control.

That is why imperfect genAI outputs are tolerated: perhaps surprisingly, this reality meshes well with how things have always been done. Imperfection is part of law firm life.

And thus – double irony here – if genAI finally becomes truly accurate one day, i.e. almost totally reliable and its work never needs to be checked because it’s ‘perfect’, then the legal world as we know it will radically change.

What Happens After We Achieve ‘Legal AI Perfection’?

Let’s say in 10 years, after many advances in LLM deployment – possibly not via more and more compute, but via better reasoning and agentic actions that provide much better outputs – we get to 99.9% accuracy for legal use cases.

Whether it’s drafting, reviewing, or research, legal AI is now – for want of a better word – perfect. Maybe that happens in a decade, maybe longer, but let’s work with the idea that it will happen at some point and within your career.

What happens then? Here are some thoughts.

A junior associate, who already does work that is supported by genAI, will truly see some tasks nearly fully automated. Why? Because those specific tasks can be achieved so perfectly that their human input is in effect ‘a waste of time’.

Lawyers in the loop will still be needed for those routine ‘fully automatable tasks’, but their input will be the ‘lightest of touches’. Plus, the clients and the partners at the law firm will still need someone to blame if things go wrong. So, human lawyers are still needed as ‘blame sponges’, even for basic things.

Even at that automatable level, someone still has to press the keys to make things work, to carry out the orders from above, to achieve what the clients want. But now a doc review project’s genAI outputs can be trusted; checking them is just a waste of time. Drafting – as long as the firm has clear precedents – becomes something that simply needs a look-over by a senior lawyer. Legal research – as long as the questions sent to the junior lawyers are crystal clear – also doesn’t need much oversight… if we are in a world of perfect outputs.

The junior associate’s job now is simply to trigger the system and reap the results, like a sales assistant scanning tins of baked beans at the checkout.

More Complex Work As Saviour?

We often say that it ‘will be fine because junior lawyers will just do more complex work’. How many times have we heard this…? But will they?

Will a law firm that takes on 100 matters per month – where the time needed per matter drops by 50% because nearly all the routine ‘associate labour’ has been automated by super-reliable genAI tools – find loads of new work for its associates to do? Really? Will that actually happen?

An associate can be seen as a ‘role’ made up of many tasks. If 80% of those tasks are automated then you really don’t need so many associates. Plus, as noted, do those associates just magically take on more complex work? Some of the brightest ones do, for sure, but all of them?

And how far will the perfection go? In 10 years it doesn’t seem impossible for truly reliable negotiation tools to handle M&A deals, i.e. firm A and firm B both have genAI tools that ‘game out’ the wording of each and every clause, perhaps in a few seconds. We’d need senior lawyers there for sure – not just to steer things, and add in new aspects, but to take the blame.

And blame may be a lawyer’s saving grace. I.e. you can’t take a piece of software to court (if the law firm has brought it formally into its tech stack and vouches for it), but you can blame a lawyer who used it, as the final work product is the responsibility of the firm. Yep, blame, i.e. the need for someone to ‘eat the risk’, in this case an external law firm, is never going away.

Now, AL doesn’t buy into ‘the end of lawyers’ thesis, in part because of the blame point, and we’ll need lawyers to be managers and also to handle all the human inter-personal elements of the job – which are many. Plus, who will own the law firms…? Er….lawyers. So, from the outside much will appear to be the same.

But… that said, if perfection does arrive across a range of legal skills, then the old adage that ‘AI is there to support lawyers, not replace them’ will be consigned to the trash heap of history.

And perhaps the idea that AI must always only support, not replace, is just a temporary thought process designed not to scare off potential buyers? Plus, for now it’s also a statement of fact: any law firm that believed it was getting 99.9% accuracy on genAI outputs (when it’s more like 80% on many tasks) and stopped checking them before sending work to clients would be out of business very quickly.

Conclusion

We may never get to 99.9% accuracy across all legal AI skills. But, equally, it’s not impossible that this happens in a decade, give or take. When that happens, the traditional legal business model is gone. High leverage (i.e. having hundreds of very junior lawyers) is no longer a source of profit; it’s a terrible loss-maker.

But, it will all be dependent on output quality. Will we get there?

It’s not guaranteed. Maybe we ‘top out’ on accuracy in a couple of years, and not even agentic systems and better reasoning get us to ‘near perfection’. Maybe, even one day when quantum computing and new chips allow AI systems to perform exponentially more calculations, we still don’t get there.

But, equally, maybe one day we do. You need an M&A due diligence project done. Tap. There it is. 99.9% perfect. You need to redraft a 200-page contract. Tap. Done. Perfect. You need to research a case and draft something for court. Tap. Done. Perfect. That’s a very different world to the one we live in now.

So, to conclude. Ironically, genAI’s inaccuracy fits very nicely into the legal world, which is designed with errors in mind. That is why, when people say ‘this is a game changer’ as a new AI tool is brought in, this site doesn’t believe it. Why? Because the game has not changed.

But, if accuracy goes up to near perfect, we don’t just gain huge efficiency increases; we truly change the legal business model forever. In short, if you really want to change the legal world, then reaching the highest accuracy level possible should be the goal.

Richard Tromans, Founder, Artificial Lawyer, April 2025

—

Legal Innovators California Conference, San Francisco, June 11 + 12

If you’re interested in the cutting edge of legal AI and innovation, then come along to Legal Innovators California, in San Francisco, June 11 and 12, where speakers from the leading law firms, in-house teams, and tech companies will be sharing their insights and experiences as to what is really happening and where we are all heading.

We already have an incredible roster of companies to hear from. This includes: Legora, Harvey, StructureFlow, Ivo, Flatiron Law Group, PointOne, Centari, eBrevia, Legatics, Knowable, Draftwise, newcode.AI, Riskaway, SimpleClosure and more.

See you all there!

More information and tickets here.


