
Antti Innanen explores what an ‘agent’ actually is, and why some things described as ‘agentic’ in fact are not, in this educational think piece for Artificial Lawyer.
The AI world is buzzing about ‘agents,’ and for good reason. They represent a fundamental shift from what AI has been to what it can become.
We’re moving from sophisticated conversation partners to digital entities that actually get things done.
Understanding Agents
Everyone’s talking about ‘agents,’ but the term gets used for everything from chatbots to autonomous systems. Here’s the key distinction: chatbots respond to your inputs; agents work toward your goals.
Most systems called ‘agents’ today are actually workflows (following fixed steps) rather than true agents (deciding how to accomplish tasks).
A chatbot tells you flight prices when you ask.
An agent finds the best deal, books the flight, and tries to figure out how to dodge Ryanair’s €70 too-small-luggage trap.
Here’s a quick test: are you telling the system exactly what to do, or just what you want done? If it needs step-by-step instructions, it is a chatbot. If it can take a high-level goal and figure out the rest, it is starting to behave like an agent.
Many ‘agents’ in the market today are still goal-blind systems with limited flexibility. Systems that can reason from goals to outcomes are just beginning to emerge. But understanding the difference helps cut through the hype and set realistic expectations.
From Reactive to Proactive
When ChatGPT launched, it democratized AI interaction through natural language prompting. Anyone could access powerful AI capabilities just by typing a request in plain language.
But these systems were fundamentally reactive. They waited for you to ask, then answered. Each conversation started fresh, with no memory or persistence.
AI agents flip this dynamic entirely. They don’t just respond. They act. They pursue goals over time, remember what they’ve learned, and take initiative to solve problems. Instead of waiting for your next prompt, they are already planning their next move.
Crucially, agents don’t replace natural language. They build on top of it. Inside an agent’s reasoning process, you’ll often find steps like: ‘Analyze the situation. Check previous actions. Choose the right tool. Execute the plan. Evaluate results. Adjust strategy.’
Prompting becomes one layer in a larger behavioral system.
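For the technically curious, that loop can be sketched in a few lines of Python. This is a minimal illustration, not any particular framework: the llm() and run_tool() helpers are hypothetical placeholders for a real model call and real tool integrations.

```python
# Minimal agent loop, for illustration only. llm() and run_tool() are
# hypothetical placeholders for a model call and real tool integrations.

def llm(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError

def run_tool(name: str, argument: str) -> str:
    """Placeholder for executing a tool (API call, file edit, search...)."""
    raise NotImplementedError

def agent(goal: str, max_steps: int = 10) -> str:
    history: list[str] = []  # memory: what was tried and what happened
    for _ in range(max_steps):
        # Analyze the situation, check previous actions, choose a tool.
        decision = llm(
            f"Goal: {goal}\n"
            f"Previous steps: {history}\n"
            "Reply 'DONE: <answer>' if the goal is met, "
            "otherwise 'TOOL: <name> | <argument>'."
        )
        if decision.startswith("DONE:"):
            return decision[len("DONE:"):].strip()  # goal achieved
        _, _, call = decision.partition("TOOL:")
        name, _, argument = call.partition("|")
        # Execute the plan, then evaluate results on the next pass.
        result = run_tool(name.strip(), argument.strip())
        history.append(f"{decision} -> {result}")
    return "Stopped: step budget exhausted."
```

Every step in that loop is driven by a prompt, which is what ‘one layer in a larger behavioral system’ means in practice.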
A New AI Phase: From Outputs to Orchestration
Not long ago, building a capable AI system meant training a model from scratch. Then prompting flipped the equation, turning general-purpose models into tools anyone could direct in plain language. Now agents mark a third phase. We are no longer just generating answers or writing instructions. We are building systems that pursue goals, recover from failure, and adapt as they go.
This is not a return to traditional machine learning. It is a step into orchestration. Agents interact with their environments, refine their actions, and improve through iteration.
What Makes Agents Revolutionary
Several genuinely new capabilities distinguish agents from earlier AI systems:
They Actually Do Things
Agents interact with the world. They call APIs, edit files, control software, and manage tools. They move beyond describing actions and start performing them.
They Remember and Adapt
Agents build on prior experience. They learn what works and improve over time. Their intelligence is situated and persistent.
They Recover from Failure
Traditional AI often fails at a single step and stops there. Agents identify problems, try different strategies, and self-correct.
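As a rough sketch (not taken from any real framework), that recovery pattern can be as simple as treating each failure as information and moving on to the next strategy:

```python
# Illustrative self-correction: try a strategy, evaluate the result,
# and fall back to another approach instead of halting on error.
def solve_with_recovery(task, strategies, looks_good):
    errors = []                          # failed attempts become context
    for strategy in strategies:
        try:
            result = strategy(task)
            if looks_good(result):       # evaluate before declaring success
                return result
            errors.append(f"{strategy.__name__}: output failed the check")
        except Exception as exc:         # an error is a signal, not a dead end
            errors.append(f"{strategy.__name__}: {exc}")
    raise RuntimeError(f"All strategies failed: {errors}")
```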
They Collaborate
Multi-agent systems allow for specialization. One agent manages, another critiques, others bring domain expertise. Together they produce emergent behaviors that exceed the limits of a single system.
They Take Initiative
Agents don’t just wait for commands. They notice patterns, anticipate needs, and act on their own. This is the shift from tool to teammate.
Are We Returning to Machine Learning? Not Exactly
Training is back in focus, but the goal has changed.
We are not just fine-tuning models to improve predictions. We are shaping behavior. Agents need to plan, reflect, coordinate, and recover. That involves memory frameworks, reinforcement-style loops, and sometimes synthetic environments.
Prompting still matters. It becomes the internal language of the agent. Agents use prompts to reason, explore options, and talk to other agents. What we are seeing is a new layer of abstraction. Agents decide how to learn, when to ask for help, and how to adjust course.
Law: The Perfect Testing Ground
Law is an ideal proving ground for agents. Not just because it is high-stakes and complex, but because it is already structured and rule-based.
Simulating High-Performance Teams
The best law firms function through collaboration. Associates research, partners strategize, and specialists add depth. Agents can simulate this teamwork and make it more efficient. They work in parallel and can offer diverse viewpoints: technical, client-focused, or procedural.
From Task Automation to Workflow Orchestration
Most legal tech automates isolated tasks. Agents manage full workflows. They track deadlines, coordinate systems, update clients, and ensure that nothing falls through the cracks.
Legal work is hard, and it can be mentally draining. Even logging hours or sending invoices can become a burden. Most law firms also struggle to communicate what they’ve done and turn that work into reference cases. These are the kinds of challenges agents can help with. The goal isn’t just task automation, but ensuring that everything happens as planned, from start to finish.
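To make that concrete, here is a purely illustrative sketch of an orchestration pass that chases deadlines and queues client updates for review. The matter fields and the notify() and draft_update() helpers are invented for the example:

```python
# Purely illustrative: a periodic orchestration pass that chases
# deadlines and queues client updates for human review. The matter
# fields and the notify()/draft_update() helpers are invented.
import datetime

def notify(owner, message):
    print(f"[to {owner}] {message}")      # stand-in for an email or chat ping

def draft_update(matter):
    return f"Draft status update for {matter['name']} (awaiting review)"

def orchestrate(matters, today=None):
    today = today or datetime.date.today()
    for matter in matters:
        days_left = (matter["deadline"] - today).days
        if days_left <= 3:                # nothing falls through the cracks
            notify(matter["owner"], f"'{matter['name']}' due in {days_left} days")
        if matter.get("update_client"):   # queue a draft; a human sends it
            notify(matter["owner"], draft_update(matter))
```

Run on a schedule, a pass like this is the ‘start to finish’ guarantee in miniature.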
Safe in Regulated Terrain
Legal work demands high standards. Missed deadlines, incorrect citations, or confidentiality breaches can have serious consequences. At the same time, the legal field offers the kind of structure that agents need to operate safely. Multiple agents can check each other’s work: one drafts, another reviews, a third ensures compliance, and a fourth handles documentation.
This kind of layered oversight becomes especially powerful when combined with human involvement for the most important tasks.
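For illustration, here is how that draft, review, compliance and documentation chain might be wired up, assuming a hypothetical ask_agent() wrapper around role-specific model calls. None of this reflects a real product:

```python
# Illustration only: layered oversight as sequential agent passes with
# a human sign-off at the end. ask_agent() is a hypothetical wrapper
# that sends text to a model primed with role-specific instructions.

def ask_agent(role: str, text: str) -> str:
    """Placeholder for a role-specific model call."""
    raise NotImplementedError

def reviewed_draft(instructions: str) -> str:
    draft = ask_agent("drafter", instructions)
    # Each pass returns a revised version of the draft.
    draft = ask_agent("reviewer", f"Fix citation and reasoning errors:\n{draft}")
    draft = ask_agent("compliance", f"Resolve regulatory and confidentiality issues:\n{draft}")
    ask_agent("documentation", f"Log what was checked and changed:\n{draft}")
    # Human involvement for the most important task: final approval.
    if input("Approve this draft? (y/n) ").strip().lower() != "y":
        raise RuntimeError("Rejected: escalate to a human lawyer.")
    return draft
```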
The Digital Partner Era
Agents expand the role of AI. They understand, but they also act. They follow instructions, but they also take the lead. They complete tasks, but they also manage processes. As agents become more capable, they will reshape how we work, learn, and solve problems. The most exciting part is not what agents do today. It is where they are heading.
The goal isn’t just to help lawyers do the same work faster using new tools. With agents, we’re building digital partners.
And no, not that kind of partner. Not the grey-haired type, sending you a ‘pls fix’ while lounging on a boat and changing one comma in a draft.
That’s why the AI community is watching this space closely, and why the legal industry should be paying close attention too.
—
About the authors:
Antti Innanen is a tech lawyer, legal design enthusiast, and an AI geek. His new book ‘Prompted: How to Create and Communicate with AI’ will be published by Routledge in 2025. You can find more about Antti and his work here: LEGIT: www.wearelegit.ai, Dot.: www.dot.legal, Legal Design School: www.legaldesignschool.com
And,
Elias Ylönen is a software developer and AI specialist. He serves as CTO at LEGIT, where he focuses on building the agentic future. Elias contributed the technical insights in this article.
—
[ This is an educational think piece for Artificial Lawyer. ]
—
And if this was intriguing and you’d like to stay ahead of the curve… then come along to Legal Innovators New York, Nov 19 + 20, where the brightest minds will be sharing their insights on where we are now and where we are heading.

And also, Legal Innovators UK – Nov 4 + 5 + 6

Both events, as always, are organised by the awesome Cosmonauts team!
Please get in contact with them if you’d like to take part.