
Judicial office holders in the UK are being encouraged to make use of Microsoft’s ‘Copilot Chat’ genAI capability via their in-house eJudiciary platform. The move comes as updated guidance for judges stresses that ‘public AI chatbots do not provide answers from authoritative databases’, among other warnings.
The genAI announcement earlier this month highlights that ‘Copilot Chat can be accessed [by judges] via the Edge browser or the Microsoft 365 Copilot application. This tool provides enterprise data protection and operates within the privacy and security frameworks of Microsoft 365. When signed into your eJudiciary account, the data you submit into ‘Copilot Chat’ is secure and will not be made public.’
At the same time, the UK’s Courts and Tribunals Judiciary body has outlined a series of key issues for judges to be aware of. In other words, they clearly want judges to use genAI – but to do so carefully and with an understanding of the tech behind it.
Warnings include:
‘Public AI chatbots do not provide answers from authoritative databases. [The output] is not necessarily the most accurate answer.
AI tools may be useful to find material you would recognise as correct but have not got to hand, but are a poor way of conducting research to find new information you cannot verify. They may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts.
The quality of any answers you receive will depend on how you engage with the relevant AI tool, including the nature of the prompts you enter, and the quality of the underlying datasets. These may include misinformation (whether deliberate or otherwise), selective data, or data that is not up to date. Even with the best prompts, the information provided may be inaccurate, incomplete, misleading, or biased.
The currently available LLMs appear to have been trained on material published on the internet. Their ‘view’ of the law is often based heavily on US law, although some do purport to be able to distinguish between that and English law.’
There are also some important points about confidentiality and avoiding sticking sensitive data into public LLMs.
They add: ‘You should disable the chat history in public AI chatbots if this option is available, as it should prevent your data from being used to train the chatbot, and after 30 days the conversations will be permanently deleted. This option is currently available in ChatGPT and Google Bard but not in some other chatbots. Even with history turned off, though, it should be assumed that data entered is being disclosed.’
And that you need to ‘be aware that some AI platforms, particularly if used as an App on a smartphone, may request various permissions which give them access to information on your device. In those circumstances you should refuse all such permissions’.
And if things go wrong? ‘In the event of unintentional disclosure of confidential or private information you should contact your leadership judge and the Judicial Office. If the disclosed information includes personal data, the disclosure should [be] reported as a data incident’.
There are also some points made about AI and eDisclosure, a subject on which ILTA and a group of UK-based lawyers and experts are launching a guide (see AL story).
So, overall, a balanced approach: one that encourages judges and their staff to explore what AI can do, but to be careful, especially with ‘raw’ and public LLMs. Instead, the Ministry of Justice would prefer judges to use Copilot within the judiciary’s own secure environment – which makes sense.
AL would add that it may be even better simply to use a basket of legal tech tools that already come with a secure environment built in and, in some cases, access to plenty of useful legal data.