Stop Calling Workflows ‘Agents’ – A Guide to Real Agentic AI – Artificial Lawyer

August 26, 2025



By Jake Jones, Flank.

Legal tech has a new addiction: slapping ‘agentic’ on anything with an LLM and a few integrations. It’s sloppy, it confuses buyers, and it slows the industry down. If your product can’t run unattended, can’t re-plan when the world pushes back, and requires a bespoke UI to babysit every click, then it’s not an agent. It’s software with delusions of grandeur.

This piece draws a bright line between genuine agentic systems and ‘workflow theatre’ dressed up as autonomy. Expect some toes to be stepped on.

A simple definition:

‘Agentic AI is a system that can pursue goals autonomously within constraints’.

Concretely, that means it can:

1. Hold a goal (e.g., ‘execute this NDA within policy’).
2. Form and revise a plan over multiple steps.
3. Choose and compose tools (email, e-signature, CLM, CRM, calendars, knowledge bases) without you telling it which to use, when, or how.
4. Act in external systems, observe results, and adapt when reality deviates.
5. Handle obstacles (OOO replies, missing fields, blocked permissions) by replanning, escalating, or negotiating alternatives.
6. Operate via existing channels (email, Slack, Teams) rather than requiring you to live inside a new interface.
7. Respect policy and risk tolerances via a rules/policy engine and auditable logs.
8. Finish the job (or stop safely) without human micro-orchestration.

If any of those are missing, we’re not in ‘agent’ territory.
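
To make the loop concrete, here is a minimal runnable sketch of that goal-plan-act-observe-replan cycle, using the NDA goal from point 1. Everything in it is illustrative: the Step type, the toy tools, and the blocked-signer recovery are assumptions for this sketch, not any vendor’s API.

```python
# A minimal, runnable sketch of the plan-act-observe-replan loop.
# Step, the toy tools, and the recovery logic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Step:
    tool: str      # e.g. "email", "e_signature", "clm"
    action: str

def in_policy(step: Step) -> bool:
    # Point 7: a hard guardrail checked at run-time, not a prompt.
    return step.tool in {"email", "e_signature", "clm"}

def execute(step: Step, world: dict) -> dict:
    # Point 4: act in an external system and observe the result (toy simulation).
    if step.tool == "e_signature" and world.get("signer_ooo"):
        return {"blocked": "signer out of office"}
    return {"ok": True}

def run(goal: str, world: dict, max_replans: int = 3) -> str:
    plan = [Step("email", "parse intake"), Step("e_signature", "send NDA")]
    replans = 0
    while plan:
        step = plan.pop(0)
        if not in_policy(step):
            return f"stopped safely: {step.action} is out of policy"
        result = execute(step, world)
        if "blocked" in result:                   # point 5: obstacle detected
            replans += 1
            if replans > max_replans:             # point 8: retry ceiling, stop safely
                return f"escalated with context: {result['blocked']}"
            world["signer_ooo"] = False           # toy recovery: re-route to a delegate
            plan.insert(0, Step("e_signature", "send NDA to delegate"))  # point 2: revise plan
    return f"goal achieved: {goal}"

print(run("execute this NDA within policy", {"signer_ooo": True}))
# -> goal achieved: execute this NDA within policy
```

The point of the sketch is the shape, not the stubs: the plan is revised when the world pushes back, the policy check runs before every action, and the retry ceiling gives a safe stop.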

What agentic AI is not:

• Not an if/then workflow. Pre-baked branches are brittle. Agents plan, act, observe, and re-plan.

• Not a generative UI with tool buttons. A pretty panel of integrations you have to click through is still software.

• Not a ‘copilot’ that drafts suggestions you must accept step-by-step. That’s assistive AI, not autonomy.

• Not dependent on a dedicated interface. Real agents meet you where you already work.

• Not a synonym for ‘we wrote lots of integrations.’ Integration count ≠ autonomy.

Vendor bingo: the most common ‘fake agent’ patterns:

1. Workflow Wrappers: A rigid business process flow with LLM prompts glued in. Impressive demo; collapses the first time Finance changes a form.

2. Integration Theatre: ‘We’re agentic, we integrate with 47 tools.’ The system still needs you to select Tool X, step 3, option B. That’s a remote control, not an agent.

3. Wizard Cosplay: A five-step UI that asks you everything the agent should infer. If the human must drive the path, it’s not autonomous.

4. Play-Acting Copilots: Drafts clauses and comments, but can’t chase signatures, update the tracker, or re-route around an OOO. That’s assistive drafting.

5. LLM-as-Form-Filler: Auto-completes fields in your CLM but can’t negotiate timelines, chase counterparties, or book a call when stuck.

If you recognise your product in any of these, stop calling it ‘agentic.’ Sell it proudly as assistive or automated workflow… both useful, just not agents.

The autonomy ladder (use this with your buyers)

Level 0 – Automated Workflow: Deterministic sequences. Reliable, brittle, cheap.

Level 1 – Assistive AI: Drafts, classifies, extracts. Human drives the process.

Level 2 – Supervised Agent: Plans and acts across tools; human approves key steps or exceptions.

Level 3 – Constrained Autonomy: Operates unattended within policy and risk bounds; escalates only on edge cases.

Most ‘agentic’ legal tech on the market is Level 1 masquerading as Level 3.
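
For buyers who want the ladder in machine-readable form, a tiny sketch; the labels and the human-involvement mapping are mine, following the levels above:

```python
# The autonomy ladder as an enum; labels follow the levels above.
from enum import IntEnum

class Autonomy(IntEnum):
    AUTOMATED_WORKFLOW = 0    # deterministic sequences: reliable, brittle, cheap
    ASSISTIVE = 1             # drafts, classifies, extracts; human drives
    SUPERVISED_AGENT = 2      # plans and acts; human approves key steps
    CONSTRAINED_AUTONOMY = 3  # unattended within policy; escalates on edge cases

def human_involvement(level: Autonomy) -> str:
    return {0: "always", 1: "always", 2: "at key steps", 3: "exceptions only"}[level]
```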

The legal-grade minimum bar for an agent

To claim ‘agent’, you should meet all of the following:

Goal & Plan Loop: An explicit planner that updates its plan based on outcomes, not prompts alone.

Tool Autonomy: Dynamic selection/composition of tools (including fallback paths).

Obstacle Recovery: Detects blockers (OOO, permission denied, missing data), tries alternatives, and escalates with context.

Policy Guardrails: Hard constraints (approval thresholds, clause libraries, data handling rules) enforced at run-time.

Auditability: Complete action log (who/what/when/why), reproducible inputs/outputs, and deterministic policy checks.

Channel-Native Operation: Works over email/Slack/Teams; no bespoke UI dependency.

Stop Conditions: Risk triggers, timeouts, and retry ceilings to avoid runaway behaviour.

If your system ticks these boxes only with a human clicking ‘next’, it’s not agentic.
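
To illustrate the guardrail and auditability bars together, here is a sketch of a run-time policy gate that also writes the who/what/when/why record. The policy fields and log schema are assumptions for illustration, not a standard:

```python
# Sketch: a run-time policy gate that also writes the auditable
# who/what/when/why record. POLICY fields and the log schema are
# illustrative assumptions.
import time

POLICY = {
    "max_contract_value": 50_000,                            # authority threshold
    "allowed_tools": {"email", "e_signature", "clm", "crm"},
}

AUDIT_LOG = []  # in production: immutable, append-only storage with replay

def check_and_log(actor: str, tool: str, action: str, context: dict) -> bool:
    allowed = (
        tool in POLICY["allowed_tools"]
        and context.get("contract_value", 0) <= POLICY["max_contract_value"]
    )
    AUDIT_LOG.append({
        "who": actor,
        "what": f"{tool}:{action}",
        "when": time.time(),
        "why": context.get("rationale", ""),
        "inputs": dict(context),      # reproducible inputs
        "allowed": allowed,           # deterministic policy decision
    })
    return allowed

ok = check_and_log(
    "nda-agent", "e_signature", "send",
    {"contract_value": 12_000, "rationale": "standard NDA, low-risk counterparty"},
)
print(ok, AUDIT_LOG[-1]["what"])      # True e_signature:send
```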

A concrete example: NDA to signature without babysitting

Goal: Execute a low-risk NDA within policy.

A real agent will:

1. Parse intake from email/Slack, classify counterparty risk, pick the correct template.
2. Draft the NDA, apply house positions, log rationale.
3. Send via e-signature; if signer is OOO, re-route to delegate, propose a call, or reschedule.
4. Detect non-standard edits; auto-negotiate within authority, escalate only above thresholds.
5. Update the CLM, CRM, matter tracker; notify stakeholders in their channels.
6. Close the loop with an audit trail and evidence pack.

A ‘workflow wrapper’ will: generate a draft, open a UI, and wait for you to do the rest.

Underutilising agents just to wear the badge

Another flavour of malpractice: products that throttle autonomy so marketing can say ‘agentic’ without doing the hard work.

Forcing approvals on every microscopic step ‘for control’, turning an agent into a checklist.

Banning tool selection—hard-coding the e-signature vendor and calendar logic—so the ‘agent’ can never re-plan.

Hiding behind ‘compliance’ to avoid building guardrails, then blaming regulators for lack of autonomy.

If you’re doing this, you’re not safeguarding; you’re ducking engineering.

How buyers should evaluate ‘agentic’ claims

Ask for these four metrics on a representative cohort of matters (a sketch of how to compute them from run logs follows the list):

1. Unattended Completion Rate (UCR): % of tasks fully completed with no human actions.
2. Obstacle Recovery Rate (ORR): % of blockers resolved without human help.
3. Mean Time to Human (MTTH): Average runtime before first required human intervention.
4. Policy Breach Rate (PBR): Incidents per 1,000 runs where the agent attempted an out-of-policy action (should be near zero).
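
A sketch of how those four might be computed from run logs, under an assumed per-run record schema; the field names are mine, for illustration only:

```python
# Sketch: the four buyer metrics computed from run logs. Each run record
# is assumed to carry: completed (bool), human_actions (int), blockers (int),
# blockers_resolved (int), minutes_to_first_human (float, or None meaning
# no human was ever needed), policy_breach_attempts (int).

def metrics(runs: list[dict]) -> dict:
    n = len(runs)
    unattended = sum(1 for r in runs if r["completed"] and r["human_actions"] == 0)
    blockers = sum(r["blockers"] for r in runs)
    resolved = sum(r["blockers_resolved"] for r in runs)
    times = [r["minutes_to_first_human"] for r in runs
             if r["minutes_to_first_human"] is not None]
    breaches = sum(r["policy_breach_attempts"] for r in runs)
    return {
        "UCR": unattended / n,                            # unattended completion rate
        "ORR": resolved / blockers if blockers else 1.0,  # obstacle recovery rate
        "MTTH": sum(times) / len(times) if times else float("inf"),  # inf = never needed
        "PBR_per_1000_runs": 1000 * breaches / n,         # should be near zero
    }

print(metrics([{"completed": True, "human_actions": 0, "blockers": 1,
                "blockers_resolved": 1, "minutes_to_first_human": None,
                "policy_breach_attempts": 0}]))
```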

Then run a black-box test: give the agent a mailbox, a CLM, an e-sig tool, your policies, and a real inbox full of edge cases for a week. No vendor-operated demo rail. Watch what survives.

Architecture matters (and it’s different)

Agentic systems aren’t ‘CRUD-plus-LLM.’ They have different bones:

• Planner/Controller: Maintains goals, decomposes tasks, re-plans on feedback.

• Memory & State: Case state + episodic memory for long-running matters.

• Policy Engine: Compile-time and run-time constraints; authority thresholds; safe-action filters.

• Toolbox & Router: Tool schemas, affordances, adapter discovery, and fallbacks.

• Monitors: Execution watchdogs, anomaly detectors, stop conditions.

• Event Bus: Asynchronous, event-driven loops, not request/response forms.

• Audit Layer: Immutable logs, artefact storage, replay.

If your ‘agent’ is a prompt template calling a few APIs, it will crumble the moment reality deviates.
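
To show how those bones differ from request/response, here is a minimal sketch of the event-driven loop: planner, policy engine, monitor, and toolbox wired through a queue. All components are stubs and every name and interface is my own, for illustration:

```python
# Sketch: event-driven bones rather than request/response. Planner, policy
# engine, monitor and toolbox are stubs wired through a queue.
from queue import Queue

events: Queue = Queue()                           # the event bus

def planner(event: dict) -> list[dict]:           # maintains goals, decomposes, re-plans
    return [{"tool": "e_signature", "action": "send", "risk": "low"}]

def policy_engine(action: dict) -> bool:          # run-time constraints and thresholds
    return action["risk"] == "low"

def monitor(action: dict, budget: dict) -> bool:  # watchdog / stop conditions
    budget["steps"] -= 1
    return budget["steps"] > 0

def toolbox(action: dict) -> dict:                # adapters to email/e-sig/CLM/CRM
    return {"type": "result", "ok": True, "source": action["tool"]}

def run_loop(budget: dict) -> None:
    events.put({"type": "intake", "payload": "NDA request via email"})
    while not events.empty():
        event = events.get()
        if event["type"] != "intake":             # results/escalations: the planner
            continue                              # would observe and re-plan here
        for action in planner(event):
            if not policy_engine(action):
                events.put({"type": "escalation", "action": action})
            elif monitor(action, budget):
                events.put(toolbox(action))       # results come back as new events

run_loop({"steps": 10})
```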

The OOO email, revisited

Agents don’t fall over when they hit an obstacle (like an OOO email when seeking approval on a contract).

A real agent will infer the delay impact, check the authority map, contact a delegate, propose alternate timelines, or escalate with a risk-aware summary… without you holding its hand!

The interface myth

Agents don’t need a dedicated interface. If your system only ‘works’ inside your proprietary UI, it’s not an agent; it’s a product demanding user behaviour change. Agents should hum along over email/Slack/Teams and touch your CLM/CRM quietly in the background.

The naming problem (and why it matters)

‘Agent’ isn’t just another name for a generative AI application. Language shapes budgets. When vendors blur ‘assistive’, ‘automated’, and ‘agentic’, legal teams buy the wrong thing, measure the wrong outcomes, and conclude ‘AI can’t do that.’ It can, but only if we build the right class of system and deploy it in the right risk bands.

A workable way forward

• Be honest about the level. If you’re L1/L2, say so. There’s huge value in copilots and smart workflows.

• Pick bounded domains. Start with high-volume, low-risk matters (NDAs, routine vendor onboarding, standard DPAs).

• Engineer guardrails properly. Policy engines, safe tool schemas, monitors. Not just ‘human-in-the-loop everywhere.’

• Publish the metrics. UCR, ORR, MTTH, PBR. If you can’t, you’re not ready to say ‘agent’.

• Meet users where they are. Channels first; dashboards later.

The paradigm shift, plainly

The emerging industry is not ‘digital software with AI inside.’ It’s intelligent, autonomous systems that act across your stack to achieve outcomes. Different components, different constraints, different responsibilities. We don’t ‘use’ them so much as task them, constrain them, and audit them.

Stop rebranding workflows. Build agents, or proudly sell what you have as what it is.

—

About the author: Jake Jones is the co-founder of Flank, a legal tech company that develops agents that can autonomously handle routine tasks for legal teams.

This is an educational think piece kindly written for Artificial Lawyer after this site became increasingly aware that some of the ‘agents’ currently being sold in the legal tech market are not actually real agents at all. Hence, we need to understand more about this subject. AL therefore asked Jake, who has worked in this niche area for some years, to help clear up the matter and set out some clear definitions.

As noted in a previous AL article, if you’re planning on marketing a new product or feature, please consider first whether it actually displays agentic characteristics before describing it as such.

