
In a special Artificial Lawyer interview with Min Chen, Chief AI Officer, we explore LexisNexis’s genAI strategy, why agents will be so important, improving accuracy, her view of DeepSeek, and using small language models, to name a few key topics.
To watch the AL TV video, please press Play inside the page, or go directly to the AL TV Channel here.
In this interview we cover:
A multi-model strategy at Lexis, using the most promising models, plus the use of SMEs.
Tested more than 30 models: OpenAI, Anthropic, Mistral, Gemini and more.
Used DeepSeek – its answers are long, but the quality is not as good as other top models. Is the cost lower? Lexis hosts it via Amazon, so it is actually more expensive.
Cost is not the main concern; the quality of the outputs comes first.
How is Lexis thinking about accuracy? There are many metrics – about six – including a usefulness score, relevancy, accuracy, comprehensiveness, and others.
Are Lexis genAI answers getting better? Yes. The RAG solutions have improved, using a knowledge graph rather than just semantic similarity. The system also looks at the authority of an answer, e.g. ignoring a lower-level court.
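As a rough illustration of that idea – blending semantic similarity with source authority when re-ranking retrieved passages – here is a minimal Python sketch. The weights, court levels and Passage fields are assumptions for illustration, not LexisNexis's actual implementation.

```python
# Hypothetical sketch of authority-aware re-ranking in a RAG pipeline.
# Weights and court levels are illustrative assumptions.
from dataclasses import dataclass

COURT_AUTHORITY = {"supreme": 1.0, "appellate": 0.7, "trial": 0.4}

@dataclass
class Passage:
    text: str
    similarity: float   # semantic similarity score from the retriever
    court_level: str    # metadata, e.g. drawn from a citation/knowledge graph

def rerank(passages: list[Passage], authority_weight: float = 0.5) -> list[Passage]:
    """Blend semantic similarity with source authority before answering."""
    def score(p: Passage) -> float:
        authority = COURT_AUTHORITY.get(p.court_level, 0.0)
        return (1 - authority_weight) * p.similarity + authority_weight * authority
    return sorted(passages, key=score, reverse=True)

# Usage: a lower-court passage with high similarity drops below a supreme-court one.
results = rerank([
    Passage("Trial court dictum...", similarity=0.92, court_level="trial"),
    Passage("Supreme court holding...", similarity=0.85, court_level="supreme"),
])
print([p.court_level for p in results])  # ['supreme', 'trial']
```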
Agentic AI – for me, it is a system that can autonomously carry out a plan, and it may also have self-reflection.
The agents can sit behind the scenes, so the customer never sees them.
But it can also be an agentic workflow where customers can see what is happening – showing the reasoning process, very transparently.
That then allows the customer to refine it and make it part of their work. AI agents augment their work, combining the best of human and AI.
What is Lexis rolling out now? Autonomous agents for now – customers can't interact with them yet, but they will be able to in future.
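To make the plan-then-reflect loop concrete, here is a minimal sketch of an agent with planning, self-reflection and a visible reasoning trace. The call_llm function is a hypothetical stand-in for any hosted model API, and the prompts and stopping rule are illustrative assumptions, not the Lexis design.

```python
# Minimal sketch of an agent loop with planning and self-reflection.
# `call_llm` is a placeholder for any chat-model API.

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a hosted model (OpenAI, Anthropic, etc.).
    return "..."

def run_agent(task: str, max_rounds: int = 3) -> tuple[str, list[str]]:
    trace: list[str] = []                       # visible reasoning for a transparent workflow
    plan = call_llm(f"Draft a step-by-step plan for: {task}")
    trace.append(f"PLAN: {plan}")
    answer = call_llm(f"Carry out this plan and answer the task.\nPlan: {plan}\nTask: {task}")
    for _ in range(max_rounds):
        critique = call_llm(f"Critique this answer for errors or gaps:\n{answer}")
        trace.append(f"REFLECTION: {critique}")
        if "no issues" in critique.lower():     # naive stopping rule, for illustration only
            break
        answer = call_llm(f"Revise the answer using this critique:\n{critique}\n\nAnswer: {answer}")
    return answer, trace    # returning the trace lets a user see and refine each step
```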
Are transformer models plateauing? New models keep launching, but the focus is on reasoning. I think we are seeing less improvement from newly added compute – diminishing returns.
If there is a plateau, is the answer then the application layer, agents, etc.? The gains will come from agentic AI and reasoning.
There is also a place for fine-tuning; we use it to improve speed. We also look at model distillation: use a 'teacher model' to train a smaller 'student model', which is faster and cheaper.
So we build fine-tuned models – more speed and less cost.
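For readers unfamiliar with distillation, here is a generic PyTorch sketch of the core training step, where a frozen teacher's softened logits supervise a smaller student. The temperature, loss mix and model objects are illustrative assumptions, not Lexis's actual training setup.

```python
# Generic knowledge-distillation loss: soft labels from a 'teacher' guide a smaller 'student'.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-label KL loss (teacher -> student) with the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                   # standard temperature scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage per batch (teacher kept frozen, student being trained):
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits, batch_labels)
# loss.backward(); optimizer.step()
```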
Does Lexis make an SLM? Yes, by fine-tuning smaller models – taking small open-source models, fine-tuning a smaller Mistral model that then sits under another model, and also fine-tuning an OpenAI mini model.
It is much easier to influence a small model, and to improve speed and quality.
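As a sketch of what fine-tuning a small open-source model can look like in practice, here is a parameter-efficient (LoRA) setup using Hugging Face transformers and peft. The model ID and hyperparameters are assumptions; the interview does not specify which Mistral variant or fine-tuning method Lexis uses.

```python
# Sketch: parameter-efficient fine-tuning of a small open model with LoRA.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"             # assumption: any small open model works here
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],          # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()                # only a tiny fraction of weights train,
                                                  # which is why small models are cheap to steer
# Training would then proceed with a standard Trainer / SFT loop on domain data.
```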
Applying LLMs to trusted legal content – that is why RAG, of whatever type, is so important: the answers should not rely on the LLM's own knowledge.
If you use a raw LLM, you are just lucky if it is right; there is a higher chance of hallucination, and you need precise answers.
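The grounding step itself is simple to picture: the prompt is built only from retrieved, trusted passages, and the model is told not to rely on its internal knowledge. The template wording below is an assumption, not Lexis's actual prompt.

```python
# Minimal illustration of grounding an answer in retrieved sources only.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    sources = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources by number. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

print(build_grounded_prompt(
    "What is the limitation period for breach of contract?",
    ["Passage from a retrieved statute...", "Passage from a retrieved case..."],
))
```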
Trends? Our product vision is to empower everyone with a personal AI assistant – and that’s already in motion.
We seek to drive intelligent and collaborative AI tools that evolve seamlessly with humans. Agentic AI will drive this, with self-directed planning and reflection, allowing higher autonomy.
Our product vision is all about personalisation, and also about agentic AI that is grounded in trusted knowledge – whether Lexis data or a customer's own DMS – ensuring the result is accurate and relevant.
The AI system can learn from the data you keep using. Lexis can also help agents become part of customers' own environments. Right now the agents are in our Protege environment, but running in the customer's own environment would free the agent.
—
Legal Innovators California Conference, San Francisco, June 11 + 12
And if you’re interested in the cutting edge of legal AI and innovation, then come along to Legal Innovators California, in San Francisco, June 11 and 12, where speakers from the leading law firms, inhouse teams, and tech companies will be sharing their insights and experiences as to what is really happening and where we are all heading.
We already have an incredible roster of companies to hear from. This includes: Legora, Harvey, StructureFlow, Ivo, Flatiron Law Group, PointOne, Centari, eBrevia, Legatics, Knowable, Draftwise, newcode.AI, Riskaway, SimpleClosure and more.

See you all there!
More information and tickets here.