
A Manhattan attorney bringing an injury claim recently discovered, rather too late, that an artificial intelligence chatbot is perhaps not the most appropriate tool for legal research supporting federal court submissions. To what one can only assume was the dismay of the Plaintiff’s attorney, the defence attorney was able to demonstrate to the Judge that six of the cases cited in support of the Plaintiff’s case did not in fact exist. The Plaintiff’s attorney said that he had been unaware that the AI software he used could produce false content, and he signed an affidavit stating that he had no intention of deceiving the court and had not acted in bad faith. The Judge commented that the situation was unprecedented and that he would consider sanctions. At the time of writing, it is not known whether the attorney has faced sanctions.


Whilst AI technology is without doubt impressive and will bring enormous benefits, it is not uncommon for AI to make mistakes. ‘Mistakes’ is probably not the right word here, as the AI seemingly ‘knows’ that it is fabricating statements. Generative AI software is infamously inaccurate: it may create “facts”, and it may also invent sources for those facts. As Matt Novak of Forbes recently observed, ‘They were designed to sound impressive, not to be accurate’.


A federal court in the Northern District of Texas has issued a standing order requiring anyone appearing before the court either to attest that no portion of any filing has been drafted by AI, or to highlight any sections which have been produced by AI so that they can be checked for accuracy.


The legal profession has already embraced forms of AI which have been of benefit to both clients and the firms themselves. Contract review technology is one example (although in most cases the output is still checked by a lawyer). However, the risks to the legal (and indeed most) professions in using this technology are foreseeable: to name a few, inaccuracies, confidentiality issues, reputational harm and an increased vulnerability to negligence claims.


Generative AI often relies on information from a fixed period: data which predates its training. It could therefore miss subsequent legal developments, such as judgments on appeal which follow first instance decisions, or fail to reproduce a commentary in full, reproducing only sections of it and consequently misrepresenting the overall gist of the opinion.


The use of generative AI could result in inadvertent breaches of copyright or the potential distribution of confidential client data. It is often not clear to the user whether AI-generated material includes verbatim material produced previously, and a lawyer has no mechanism by which to confirm that the generated text is original and has not previously been produced for another user. At the time of writing, it is understood that various actions are being filed in the US for breach of copyright arising from AI technologies.

It is unlikely that the legal profession would blindly rely on AI technologies, but we consider that law firms should address the use of this technology from a risk management perspective, as its misuse harbours real potential for reputational damage and professional negligence claims. Firms intending to use this technology would be prudent to address it within their terms and conditions and to make suitable allowance for the technology’s limitations.
