As we head into 2026, we naturally look to the year ahead. In our earlier 2026 blogs, we made clear that we were encouraging law firms to be prepared rather than making predictions per se; in this follow-up blog, we examine the legal sector’s critical issues around Artificial Intelligence (AI) in a little more detail.
AI is not new!
Artificial intelligence has been around for a long time: its history dates back to the 1950s, when Alan Turing published his paper on machine intelligence. However, it exploded into the mainstream with the launch of Large Language Model (LLM) chatbots in late 2022.
For law firms, there is no doubt that AI is a platform shift – like the move from typewriters to computers, or from correspondence by letter to email – that will affect every firm, whether it embraces the change happily or reluctantly.
There will, we can boldly predict, be plenty of AI failures and errors! For a law firm, reputation is everything, so capturing the benefits while avoiding the mistakes will be crucial.
AI Snake Oil?
When we talk to our own client base, we hear that they are being offered AI solutions by their suppliers and technology providers daily. Of course, AI matters; that is why we were an early adopter in 2023, when LLMs started to show productivity benefits, and why we are carefully evolving our adoption. But beware of snake-oil salespeople peddling poor products that will deliver little lasting benefit to users.
As law firm businesses, we always have to think long-term in general, and with AI especially. Do not try to predict the right solution; instead, cautiously adopt flexible products that can evolve with your firm’s needs. Think of BlackBerry’s dominance of the business phone market in the 2000s, and its rapid demise after the iPhone launched in 2007 and transformed the smartphone marketplace.
Arguably, clear thinking and a focus on productivity will matter more than an expensive solution.
Bolting on AI functionality to tools your team will never use is just buying snake oil – do not fall for it.
Ask: When and how would my team confidently use this?
Thinking Clearly
Clarity of thought matters. It produces better outcomes for clients, such as a simpler draft or a stronger proposal to present to the other side. The firm’s outcomes should be similarly enhanced, as the value of your team’s expertise will shine through.
As the author Shane Parrish puts it in his book Clear Thinking:
“If you’re like me, no one ever taught you how to think or make decisions. There’s no class called Clear Thinking 101 in school. Everyone seems to expect you to know already how to do it or to learn it on your own. As it turns out, though, learning about thinking – thinking clearly – is surprisingly hard.”
The reason thinking, particularly clarity of thought, has long been a superpower in law is that it matters to client outcomes.
AI technology is evolving rapidly, which means what it can do today and what it can do in six months will be quite different. Unlike previous platform shifts, such as the move from postal correspondence to email, how you use it affects the client experience, the firm’s systems and processes, and every aspect of operational management in a law firm.
Professor Ethan Mollick, in Co-Intelligence: Living and Working with AI, emphasises the importance of human-AI collaboration, which he terms co-intelligence. For law firms, the argument is compelling: the human input into what clients will see is critical, as is thinking through that output and applying the knowledge gained over many years. He states:
“You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations. This collaboration leads to better results and keeps you engaged with the AI process, preventing over-reliance and complacency.”
Clarity of thought will be crucial if you are to avoid over-reliance and complacency, and to deploy your team’s knowledge, experience, and in-depth expertise in context rather than relying on an LLM tool to make a prediction (for prediction is both generative AI’s strength and weakness).
Clear thinking and co-intelligence, we suggest, are key. Professor Mollick offers this further guidance:
“… I can assure you that there is nobody who has the complete picture of what AI means, and even the people making and using these systems do not understand their full implications.”
Thus, do not buy expensive solutions now, as your team and clients will want something different soon. The client experience matters: clients must trust you and the way your firm has used the technology. Co-intelligence builds that confidence; it is a key building block for law firms.
Ask: Can we use the tool to improve the client output? Quality of service to clients should be your guiding light.
Agentic AI and Legal Services
The technology websites we have seen predict that AI agents will dominate by 2026.
The problem with AI agents is that they do much of their work unsupervised, and in law this raises risks of professional negligence, reputational damage, and ethical concerns – and that is before we address regulatory compliance. Agents feel like too much, too soon, for law firms at this stage.
Ask: Should we hold back and let others make the early mistakes with agentic tools?
What are we doing with AI then?
We are suggesting to our clients that AI technology deployment be supplemented by training in supervision, clear thinking, and its use as a co-intelligence tool. Yes, we can provide all of that training, but crucially, this is what enables your firm to adopt AI safely and supervise its output.
We invite AI to every single meeting.
We use AI to help draft every single email and document.
We delete the AI output or suggestions when they do not enhance things.
When AI helps, we adopt the co-intelligence model to improve the client experience. When it is AI slop, we ignore it. Just as you would with a paralegal or newly qualified solicitor’s efforts, we review, revise, and only use what adds value. It is making us more productive by careful usage in this way.
The AI models we would encourage our clients to consider are not specialist legal AI tools, but rather Microsoft Copilot, Grammarly, and Fyxer.AI, all of which can be deployed safely from a data perspective. Paul loves Fyxer (it manages his email inbox and takes meeting notes); it was not for Mark. This shows that bespoke solutions for each user may make sense in an AI context.
For AI-only devices (without client data access), we use Claude and Gemini, two AI models that rival ChatGPT, mainly for training session materials, i.e. where they aid the co-intelligence content.
Collectively, across both the data-secure and no-data-access tools, we are using these technologies as follows:
- to record and transcribe meetings;
- to create to-do lists from those meetings;
- to draft emails of advice following meetings;
- to manage our productivity – collating documents, summary drafts, etc.;
- to manage emails more effectively (including predicting initial draft replies automatically based on our own past emails);
- to create PowerPoint training slides.
Ultimately, we are suggesting to our clients that 2026 must be the year that they are prepared for AI in their firm. This probably requires three things:
- Training their team on AI usage: clarity of thought, productivity, etc.;
- Guidance on supervising the use of AI safely;
- Adopting a co-intelligence ethos rather than an artificial intelligence-led model.
Co-intelligence is crucial because the human leads the output the client sees, thereby enhancing your team’s hard-earned skills.
Do you want help with implementing AI in your law firm?
If you would like to book a consultancy session to discuss how your law firm can use AI more effectively, we offer this service for a fixed fee.
Please email us at paul@bennettbriegal.co.uk for more information.