UK Artificial Intelligence Guidance for Judicial Office Holders

The UK Judiciary released the Artificial Intelligence Guidance for Judicial Office Holders [the Guide] on December 12th, 2023.

See the “AI Guidance for Judicial Office Holders” here

As a lawyer and an AI entrepreneur, I was very impressed with the tone and content of the Guide.

Before using any AI tools, ensure you have a basic understanding of their capabilities and potential limitations.

Guide, p. 3

The Problem

Justice applies to the world as we find it, and the world around us is undeniably experiencing the dawn of artificial intelligence.

Substantively and procedurally, judges have to deal with evidentiary issues like AI-generated content, deepfake images and videos, replicated biometrics such as one’s voice, and a list of unknown unknowns as deep and wide as the spectrum of human creativity. They must set clear expectations regarding professional responsibility on the part of all Court and Tribunal actors utilizing AI.

All legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate. Provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context. Until the legal profession becomes familiar with these new technologies, however, it may be necessary at times to remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot.

Guide, p. 5

AI impacts every aspect of life with fractal-like complexity. Grasping what AI means for one’s individual experience feels daunting and awe-inspiring. But it is a nightmare to have to make serious decisions when even a basic understanding of what AI is and does seems to require a wait-and-see approach while its impact comes into focus.

This Guide Speaks to Judges in Their Mastery

I like this approach. Judges have to deal with demands of confidentiality, privacy, and accuracy that are difficult to imagine for those who do not bear the responsibility of deciding.

I think it is harder to scale up the “seriousness” of safeguards than to relax them in appropriate circumstances.

Many things are dangerous, but not terrifying. Crossing the street is a serious, life-threatening risk for someone who knows nothing of what vehicles are, what they do, how they move, and according to what set of rules. But with the right set of procedures, it becomes a calculated risk that many people normally take multiple times a day without a second thought.

As with any other information available on the internet in general, AI tools may be useful to find material you would recognise as correct but have not got to hand, but are a poor way of conducting research to find new information you cannot verify. They may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts.

Guide, p. 3


I hope that this becomes the paradigm for approaching AI regulation within and across professions. The Guide applies to all judicial office holders and was produced by a cross-jurisdictional judicial group to assist the judiciary, their clerks, and other support staff in the use of AI. AI is here to stay, but there is nothing new under the sun. The judiciary’s truth-seeking function has covered a long history of human behaviour, and AI is a tool for humans to behave with.

Judges are not generally obliged to describe the research or preparatory work which may have been done in order to produce a judgment. Provided these guidelines are appropriately followed, there is no reason why generative AI could not be a potentially useful secondary tool.

Guide, p. 5

If the Lady Chief Justice of England and Wales, Baroness Carr of Walton-on-the-Hill, feels capable of responsibly utilizing AI in her work, then I am sure that, with the right consultation, any profession can responsibly integrate AI into its day-to-day work.

The future is bright,

Nawar Kamel


Nawar Kamel is CEO and Co-Founder of Experto AI Inc., and a licensed Canadian lawyer in Ottawa, ON, Canada.

Nawar started his academic path studying philosophy and went on to earn his master’s degree in philosophy, focusing on social contract theory, from York University. He graduated from the University of Ottawa’s Common Law program and spent his days as a litigator, fighting in the courts on behalf of his clients, until he founded Experto AI Inc., which was established to create AI tools geared towards lawyers and legal researchers.
