<img alt="" src="https://secure.coat0tire.com/222145.png" style="display:none;">
Skip to content
Menu


    AI and Law Firms – A Hasty Marriage

    26 September 2023

    Martin Finnegan


    Last week, the leading law firm Macfarlanes announced that it will roll out AI for use by its lawyers “to boost productivity and improve efficiency”[1] (see its press release, Macfarlanes progresses AI strategy announcing partnership with Harvey). Revealingly, the press release made very limited reference to clients’ interests in implementing AI; instead, it would seem that the factors that bear on Profits Per Partner were decisive.

    At the risk of sounding like a self-interested Luddite, lawyers should in my view be the last in line when it comes to signing up to the current generation of AI platforms. Why? Because AI is not (yet) the tool it’s cracked up to be.

    Illusion & Delusion

    Speaking as a corporate lawyer of more than 25 years’ standing who has experimented with these systems with an open mind, I see in the firms adopting this “technology” a lack of proper due diligence around the risks to both firm and clients, and a desperation to position themselves as ‘pioneers’ and legal tech innovators.

    As a user/client of such firms, I think I would be unimpressed and circumspect if a client partner told me that AI was key to the delivery of his or her legal advice. The press release made clear that AI would not just be producing first drafts – it would be engaged in reviewing, analysing and summarising, with “outputs carefully monitored and reviewed” by senior lawyers.

    This doesn’t reassure me that the legal advice I receive best meets my interests (and all solicitors are under a strict regulatory duty to act in the best interests of a client). Why? Well, apart from the fact that I’m paying top rates for legal expertise accumulated over a professional lifetime, I need bespoke legal advice and practical, commercial knowhow based on that experience. It also needs to be accurate – and that can’t be guaranteed with AI. AI makes things up – it hallucinates. It gets things spectacularly wrong and does so with 100% certainty. And the things it gets wrong are not always the complex questions or issues.

    Hallucinations – or wild inaccuracies – are not something one would typically associate with expert legal advice (although one might feel lightheaded on receiving a large legal bill). Suresh Venkatasubramanian, Professor at Brown University and co-author of the White House’s Blueprint for an AI Bill of Rights, explains that large language models – the technology supporting AI systems such as ChatGPT – are programmed to “produce a plausible sounding answer”:

    “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces…. There is no knowledge of truth there.”

    Need further convincing? Let me pull another professor off the shelf. Jevin West of the University of Washington, co-founder of its Center for an Informed Public, explains that an AI hallucination refers to when an AI model “starts to make up stuff — stuff that is not in line with reality”… and, to boot, “it does it with pure confidence… the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’”

    The People Piece

    So how does a smart junior lawyer at a major law firm, burning the midnight oil as we know they love to do, discern correct, verifiable legal analysis from what’s not under such conditions?

    At best, with some difficulty – at worst, do they just take the risk and hope it’s right? But surely that’s where a supervising partner would, as promised by Macfarlanes, “monitor and review” the outputs? Well, I wouldn’t put my mortgage on it… Let’s be clear: partners in these firms are running multiple major transactions and will not (or cannot) necessarily pick up what’s on page 97 of a 150-page due diligence report on one of those deals.

    We’ve already seen cases in the UK and US where lawyers and non-lawyers alike have presented as legal authority cases that AI has simply made up[2]. In truth, there will inevitably be many more examples demonstrating just how hard it is for qualified lawyers to discern legal fact from AI fiction.

    Call It What It Is

    Ultimately, signing up to AI is all part of the great outsourcing that started many years ago (north-shoring, anyone?) and of large law firms chasing the dream of even greater gross margins to feed the profit-per-equity-partner meter. Earlier this month, Herbert Smith Freehills (the Post Office’s legal advisers) admitted that a major disclosure exercise was principally carried out by law graduates in South Africa, Australia and Belfast with little to no professional qualifications. As a result, thousands of documents were wrongly withheld. Presumably, the relevant partners had assured the client that proper legal oversight would be delivered by experienced lawyers at the monitoring and reviewing stage. Quite apart from the shocking miscarriage of justice lying at the heart of the Post Office case, HSF’s solicitors owed duties to the Court under the relevant Codes of Conduct.

    Whilst this is clearly not an AI case, it is an appalling example of dereliction of duty through delegation as a direct result of outsourcing, and it should sound a warning for law firms adopting AI – because what’s the difference when the outcome is terrible legal advice with potentially disastrous consequences for clients?

    Even Sam Altman, CEO of OpenAI, the maker of ChatGPT, believes it could take years to “get the hallucination problem to a much, much better place”. Thanks, Sam – don’t you just mean getting it right rather than guessing?!

    Intellectual Property Rights

    Harvey is built on ChatGPT and OpenAI technology, which scrapes data on an industrial scale for training purposes. There is no information in Macfarlanes’ press release about whether it has verified that the content used by Harvey has been properly licensed from content creators, or whether it is based upon copyrighted creations.

    There are legitimate concerns about the data used to train these huge models, and as a client whose legal advice and documentation is produced using such data, I would want to know that proper IP licences are in place and that I was properly indemnified by my law firm against potential IP infringement caused by an AI company.

    What Next for Law Firms & AI?

    I doubt Macfarlanes will be the last law firm to sign up to Harvey – the self-proclaimed provider of “unprecedented legal AI” – or to other providers. Harvey’s marketing strategy would seem to be based on law firms’ FOMO, as illustrated by a prominent “Join the Waitlist” button on its otherwise minimalistic website!

    Clearly AI will feature in the delivery of professional services in the future, but law firms being seen as early adopters of nascent, unreliable technology does not sit well with their overriding and primary duties to act in the best interests of their clients and to provide competent, professional legal advice.


    [1] https://www.bloomberg.com/news/articles/2023-09-20/city-firm-macfarlanes-launches-ai-chatbot-to-help-its-lawyers  

    [2] ChatGPT: US lawyer admits using AI for case research - BBC News


