
By Richard Tromans, Artificial Lawyer.
This site strongly supports the use of AI in the legal sector. But, we should not avoid the tough questions, such as: will AI ‘dumb down’ the legal world?
There are two main aspects to this: individual lawyers dumbing down because they outsource their thinking, criticality, creativity and judgment; and a systemic dumbing down of the legal sector's workflows due to end-to-end automation without any high-level human oversight.
And it's worth reiterating: this site is 100% in favour of automating plenty of things in the legal sector – where it is ethical and makes sense to do so – but where there are systemic risks we really need to be careful. It's a fine line that will need to be trodden in a fast-arriving world where a significant percentage of all legal work done today could be automated, but where we will still need to keep humans in the loop.
A simple example is 'automated judges'. Could we do that now? Probably, especially with the new reasoning models and if tied to a sufficient amount of relevant case law. But should we? Artificial Lawyer would argue: no – in fact, never. Why? Because if we take humans out of that loop for something that is so…human…then we do something to how society works that could have huge, negative implications. (And it would be hard to change back again.)
Would it be a good idea to allow an over-worked judge to use every AI tool they can to manage the trial and all the connected data, and explore related cases, to help them come to a conclusion? Yes.
But, there is a difference between the two things: one is about efficiency and helping a skilled human get to the point where that skill can be applied – rather than it getting swamped by process work. The other scenario is where the skill that civilisation has developed over millennia to ensure a just society is rejected in favour of efficiency at any cost.
And of course, you may well ask: but what's the risk of automating something entirely if it works? And it's true, I'd rather be in a self-driving Waymo than in a car with a human driver (especially after recent experiences). So what's the harm in fully automating legal work as well?
The answer is subtle and it boils down to this: do you want decisions that will affect you personally, and your society as a whole, to be outsourced to a system built by people you have never met, have no control over, and cannot appeal to? Perhaps a faceless corporation with its own agenda? Yet that is who we would be handing the exercise of our legal rights to. Personally, I'd rather stick with a human judge I can look in the eye.
(And on the Waymo, self-driving car bit: yes, such vehicles will have to make snap judgments in an emergency, but we don't go to Waymo for judgments on a divorce, an employment dispute, or a ruling from the Supreme Court about assisted dying, for example.)
Differences in Automation
Spellcheck probably didn't result in any collective change to the way the legal sector functioned. Nor did doc automation. Nor did using NLP/ML to find change of control clauses in a due diligence project.
But, having the genAI ability to draft from a previously agreed playbook and/or precedent document in a few seconds, or conduct deeply reasoned research on a matter via your own carefully curated data, or simply brainstorm your closing arguments in a trial with an LLM….now, that is different.
Why is it different? It's different because you are outsourcing some of your deeper thinking, your judgment, your informed criticality, i.e. your mind's higher functions.
Spotting a spelling error does not engage the human mind’s higher functions.
Finding a change of control clause rarely engages the human mind's higher functions.
But finding exactly the right way to express the client's needs in a contract, even a relatively 'simple' one, requires some degree of higher function, even if only to say 'OK, this is good, please proceed'.
In not much more than two and a half years we have moved from doing some Q&A with LLMs, to developing agents that can act on their own, to now adding ‘Deep Research’ that can pick through three dozen sources in minutes to come to a carefully reasoned answer to a legal issue.
This capability will only grow and grow, and any lawyer who believes it won’t happen at scale very rapidly will be in for a surprise.
The Problem
The problem is not that AI is super-productive, or can read a week's worth of source material in 8 minutes and provide a reasoned answer to a complex question, or draft something in 5 seconds, or redline a document in 1 second, and so on. It's that we are rapidly reaching a point where human lawyers will ask themselves: should I bother…?
Should I bother to do what? To check it. To be sure that the output is what I wanted. To be certain that 'I, a living, breathing, regulated representative of my client and their best interests' actually give a damn that this whole thing is right.
Luckily, lawyers have an obligation to care about their clients. Plus, the tools today (although advancing incredibly rapidly) are not yet good enough for lawyers to just detach from the whole process.
But…this will change. The challenge then becomes one of educating the market to handle ways of working that automate large parts of what lawyers once did, while re-emphasising the need to remain 'in charge' and provide critical judgment.
I.e. even the most hard-working of us can be lulled into a state of detachment from a matter if we start to feel (over)confident that it has already been 'handled'.
If legal AI’s advances end up creating lawyers who don’t check things, who don’t feel that pang of personal responsibility for the outcome of a legal workflow, then we are in trouble.
So, to reiterate: should we automate everything that it is ethical to automate? Yes. Does that absolve the legal sector of its duty to protect clients? No. In fact, it increases that duty, because the risk of complacency rises in such a world.
These two statements may seem to be at odds, as may AL's support for AI alongside its concerns over taking lawyers out of the loop entirely. It all boils down to this: AI is a tool, and tools are there to help get humans to where they can really add value. They are not there to replace humans, unless the task is so routine that it makes no sense for a human to do it.
Personally, I am happy to have a Waymo drive me home. A toaster to make my toast. A Roomba to vacuum my floor. My alarm clock to wake me up. And so on. And I’m happy to have 90% of the work a lawyer does today fully automated….but not all of it.
One Possible World
The following is set in the near future:
A company using agents detects a potential legal issue.
Without any human expert checking it, the issue is sent to another agent inside the inhouse legal team.
That matter gets triaged automatically and is judged to meet the threshold for sending it out to a law firm (i.e. the need for risk externalisation has been met).
A law firm’s new matter agent accepts the job, also triages it, and gets to work assessing what needs to be done.
(At this point no human lawyer has been involved.)
It’s sent to the correct practice group, where curated legal data and proprietary data and insights of the firm are applied and a legal solution is produced.
The result is (finally) sent to a partner at the firm…(and this is the bit that really messes it all up)…and this lawyer gives it a quick once-over. But they know that A) the inhouse legal team won't read it, only their AI will, so what's the point; and B) the probability their AI has got it totally wrong is very low. So they let it go without checking it properly.
(P.S. how many contracts exist that no client has ever really read in detail, even under the current system…?)
The inhouse team's AI receives the work product and inserts the document into the right place, triggering automated follow-up actions across the company. (This results in the closing of a production unit in Mexico and the firing of deputy managers in Texas and Frankfurt, all via a fully automated process. And when the manager in Germany gets upset and decides to appeal his sacking by contacting a local law firm, that firm's AI analyses the case in 0.1 seconds, concludes there is no chance of winning, tells the client not to bother, and bills them €1,000. Again, with no human lawyer playing a role.)
OK, that’s a dystopian vision of the future, but we are not a million miles away from that if we start to think it’s OK to take human lawyers out of the loop.
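To make the structure of that scenario concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (Matter, risk_score, EXTERNALISE_THRESHOLD, partner_actually_reads, and so on) is invented for this post; no vendor's actual system is being described. The point is where the human sits in the flow: a single optional flag that the pipeline runs perfectly happily without.

```python
# Purely illustrative sketch of the pipeline in the scenario above.
# All names and thresholds are invented; this is not any real product's API.
from dataclasses import dataclass

@dataclass
class Matter:
    description: str
    risk_score: float  # assumed to be produced upstream by a detection agent

EXTERNALISE_THRESHOLD = 0.7  # assumed cut-off for sending work to outside counsel

def inhouse_triage(matter: Matter) -> str:
    """Agent-style triage: routes purely on a score, with no human judgment."""
    if matter.risk_score >= EXTERNALISE_THRESHOLD:
        return "external_law_firm"
    return "inhouse_agent"

def partner_review(draft: str) -> str:
    # Placeholder for genuine human judgment, not a rubber stamp.
    return draft + " [reviewed by a human lawyer]"

def law_firm_pipeline(matter: Matter, partner_actually_reads: bool) -> str:
    draft = f"Automated advice on: {matter.description}"  # agent work product
    if partner_actually_reads:
        # The human gate the article argues must never become optional.
        draft = partner_review(draft)
    return draft  # if the flag is False, no human has ever checked this

if __name__ == "__main__":
    m = Matter("Potential change of control breach in a supplier contract", 0.82)
    print(inhouse_triage(m))  # -> external_law_firm, decided with no human involved
    print(law_firm_pipeline(m, partner_actually_reads=False))  # flows back unchecked
```

Note that nothing in that sketch fails, errors or even warns when partner_actually_reads is False; the work product simply flows back unchecked. That, in miniature, is the triage problem discussed below.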
Conclusion
The fundamental problem here is triage, or rather the potential inability of human lawyers to get triage right in a legal AI world.
Automating the rote work of junior associates sounds like a very sensible thing to do that will benefit society as a whole. (Of course, junior lawyers may not feel that way, but the law firm clients and the wider economy may well see it that way – and that’s something we’re going to have to deal with.)
At the same time, fully automating a legal project, or rather ignoring the need for human engagement at its key moments, is not just risky: it potentially puts compromised legal 'content' into the legal data ecosystem.
And this is where we come back to both the impact at a personal level, e.g. the lawyer disengages because they feel there's no point getting involved, and the wider systemic level, i.e. society as a whole comes to believe we can just automate the 'whole thing', leading to all kinds of mistakes getting folded into the record.
So, is AI a 'good thing'? Yes. The legal world has been due a 'rebalancing' for a long time, and now it's coming because of this technology. But even Artificial Lawyer doesn't want to give AI carte blanche, or wish to see human judgment and criticality wholly removed from something as essential to society's operation as legal work.
Getting the balance right in the years to come will be essential.
By Richard Tromans, Founder, Artificial Lawyer, June 2025
—
Legal Innovators Conferences New York and UK – Both In November ’25
If you’d like to stay ahead of the legal AI curve….then come along to Legal Innovators New York, Nov 19 + 20, where the brightest minds will be sharing their insights on where we are now and where we are heading.

And also, Legal Innovators UK – Nov 4 + 5 + 6

Both events, as always, are organised by the awesome Cosmonauts team!
Please get in contact with them if you’d like to take part.