Court Chastises BC Lawyer for Citing AI-Generated Cases
Jennifer Cowan
2/28/2024

A B.C. Supreme Court judge has reprimanded a lawyer for citing two AI-generated “hallucinations” in a legal filing and has ordered her to compensate opposing counsel for their time.

Justice David Masuhara, in his Feb. 26 ruling, ordered lawyer Chong Ke to personally compensate the lawyers representing her client’s ex-wife.

Judge Masuhara said it was “appropriate” for Ms. Ke to pay opposing counsel for the time it took them to discover that the cases she planned to reference had been created by ChatGPT, a generative AI chatbot developed by OpenAI.

Although Ms. Ke withdrew the AI-generated cases when she realized they were fake, Judge Masuhara said he was troubled by the occurrence.

“As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers,” Judge Masuhara wrote. “Competence in the selection and use of any technology tools, including those powered by AI, is critical.”

Ms. Ke represents Wei Chen, a multi-millionaire businessman, in his divorce from Nina Zhang, who lives with their three children in West Vancouver.

The court last December ordered Mr. Chen to pay $16,062 a month in child support after his annual income was calculated at $1 million.

Ms. Ke filed an application prior to the ruling so Mr. Chen’s children could travel to China. The legal notice cited a case in which a mother took her “child, aged 7, to India for six weeks” and another case granting a “mother’s application to travel with the child, aged 9, to China for four weeks to visit her parents and friends.”

The error was discovered after Ms. Zhang’s lawyers asked for copies of the cases because they couldn’t locate them based on their citation numbers.

‘Deeply Embarrassed’

Ms. Ke gave an apology letter to an associate who was to appear at a court hearing in her place, but the associate didn’t give Ms. Zhang’s lawyers a copy.

Judge Masuhara said Ms. Ke later swore an affidavit explaining her “lack of knowledge” of the risks of using an AI program.

“I am remorseful about my conduct. I am now aware of the dangers of relying on AI generated materials,” Ms. Ke wrote. “I acknowledge that I should have been aware of the dangers of relying on AI-generated resources, and been more diligent and careful in preparing the materials for this application. I wish to apologize again to the court and to opposing counsel for my error.”

She said she was “deeply embarrassed” by her mistake, adding that the “publicity and the potential consequences of my error have made it hard for me to focus and left me feeling anxious and overwhelmed.”

Although opposing counsel asked that Ms. Ke also be made to pay special costs for abuse of process, the judge declined, saying he believed her apology to be sincere.

“These observations are not intended to minimize what has occurred, which—to be clear—I find to be alarming,” Judge Masuhara wrote. “Rather, they are relevant to the question of whether Ms. Ke had an intent to deceive. In light of the circumstances, I find that she did not.”

This is not the first time a lawyer has been sanctioned for using ChatGPT.

A U.S. judge last June imposed sanctions on two New York lawyers who submitted a legal brief that included six fictitious case citations generated by ChatGPT.

U.S. District Judge P. Kevin Castel in Manhattan ordered lawyers Steven Schwartz, Peter LoDuca, and their law firm Levidow, Levidow & Oberman to pay a $5,000 fine in total.

The judge found the lawyers acted in bad faith and made “acts of conscious avoidance and false and misleading statements to the court.”

Reuters contributed to this report.