UK court warns lawyers could face severe penalties if AI generates fake citations

The High Court of England and Wales has said that stronger measures must be taken to prevent lawyers from misusing AI in their work.
Lawyers could face 'severe' penalties for fake AI-generated citations, UK court warns | TechCrunch

In two UK proceedings, one a lawsuit over homelessness support and the other a dispute over jurisdiction and the termination of a loan agreement, lawyers used chat AI tools such as ChatGPT to prepare court filings. In both cases, the lawyers were suspected of submitting the documents without checking whether the AI had fabricated precedents and citations, and the court took issue with their conduct.
In the homelessness support case, the lawyers cited five non-existent precedents in their court documents. The presiding judge said, 'I could not have foreseen that completely false precedents would be submitted. This is extremely problematic professionally and ethically,' ordered the lawyer who filed the documents to pay costs, and referred the matter to the professional regulator. The lawyers denied using AI, however, and the court did not determine whether the five non-existent precedents were AI-generated. It has been suggested that the lawyers may have cited summaries produced by generative AI in Google search results or similar, and the judge noted that if they had submitted AI-generated material without verifying it, that would have been a 'serious mistake.'
In the jurisdiction and loan agreement case, it emerged that 18 of the 45 authorities listed in a statement submitted to the court did not exist. The lawyer who submitted the statement said that AI had been used and the errors had gone unnoticed, called it an unintentional mistake, and promised to do better in future. The court nevertheless found the lawyer's responsibility to be significant and ordered a referral to the Solicitors Regulation Authority and the court.

Judge Victoria Sharp, who reviewed the two cases, said that generative AI tools like ChatGPT 'are not capable of conducting reliable legal research. They can generate answers that appear coherent and plausible at first glance, but those answers may be simply wrong, and the AI may make confident assertions that are simply untrue,' adding that it is a mistake to rely too heavily on such tools.
That does not mean lawyers are barred from using AI when preparing for litigation, but the court warned that they 'have a professional duty to check the accuracy of AI-assisted research against authoritative sources before using it in the course of their professional work.'
The UK's judiciary and legal professional bodies published guidance on the use of AI between 2023 and 2025, stating that lawyers are responsible for verifying AI-generated content. Judge Sharp noted that court filings citing false material apparently generated by AI have been on the rise, and said, 'Further steps need to be taken to ensure that guidelines (on the use of AI) are followed and that lawyers are complying with their obligations to the court.'

Judge Sharp also warned that 'lawyers who fail to comply with their professional duties risk severe sanctions.'
in Software, Posted by logu_ii