Stephen Baister Writes On... Artificial Intelligence
Artificial intelligence has its good and bad sides.
Back in 2023, Lord Justice Birss publicly acknowledged that he had once used an AI tool to help produce a short summary of an area of law:
“I asked ChatGPT can you give me a summary of this area of law, and it gave me a paragraph. I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment. It’s there and it’s jolly useful. I’m taking full personal responsibility for what I put in my judgment, I am not trying to give the responsibility to somebody else.”
We can be confident that the AI used by Birss LJ (as he then was) was in good hands. Like many intellectual property lawyers, Birss studied science before taking to the law. He knew what he was doing and he knew how to verify what an AI tool produced. But the risks are more obvious when AI is used by laypeople without an understanding of how these tools generate information.
A clear example remains the case of Harber v Commissioners for His Majesty's Revenue and Customs [2023] UKFTT 1007 (TC). Mrs Harber had disposed of a property but failed to notify HMRC of her capital gains tax liability, and HMRC issued a failure-to-notify penalty of £3,265.11. Mrs Harber appealed on the basis that she had a reasonable excuse for the failure, namely a mental health condition from which she was suffering and/or her ignorance of the law. In support of her arguments she relied on case law, providing the tax tribunal with the names, dates and summaries of nine First-tier Tribunal decisions in which the appellant had successfully shown that a reasonable excuse existed. The problem was that none of the authorities were genuine: they had been generated by artificial intelligence.
Mrs Harber did not take the finding lying down. The tribunal’s judgment recorded:
“Mrs Harber…asked how the Tribunal could be confident that the cases relied on by HMRC and included in the Authorities Bundle were genuine. The Tribunal pointed out that HMRC had provided the full copy of each of those judgments and not simply a summary, and the judgments were also available on publicly accessible websites such as that of the FTT and the British and Irish Legal Information Institute (‘BAILII’). Mrs Harber had been unaware of those websites.”
The tribunal accepted that Mrs Harber had not known that the cases were not genuine but, unsurprisingly, it found against her.
Since 2023, incidents like this have prompted courts and tribunals in the UK and elsewhere to issue clearer guidance. Judicial bodies now routinely emphasise that:
- AI tools must not be treated as legal research databases.
- Any AI-generated material must be independently verified against authoritative sources.
- Parties and representatives may be required to certify that AI-generated content has not been used without proper verification.
These developments reflect a sensible middle ground: AI can be a useful assistant, but it cannot replace genuine legal research or professional judgment. The Harber case, now widely cited in training materials and professional guidance, remains a stark reminder of the risks of unverified AI output in legal proceedings.
Stephen Baister - Board Director