Florida Bar Considers Whether Lawyers Using AI in Legal Cases Must Get Client Consent

File photo of a judge's gavel. (Joe Raedle/Getty Images)
Katabella Roberts
10/17/2023

The Florida Bar is deliberating whether lawyers using artificial intelligence (AI) in legal matters should obtain their clients' consent beforehand.

In an Oct. 13 statement, the Florida Bar's Board Review Committee on Professional Ethics said it is considering adopting a proposed advisory opinion on the use of AI—including programs such as OpenAI's ChatGPT, Google Bard, and Microsoft's Bing—in legal matters, and is calling on Florida lawyers to share their thoughts on the matter.

Specifically, the board will consider whether a lawyer should be required to obtain a client's informed consent to use generative AI during representation, and whether generative AI and other similar large language model-based technology is subject to the same lawyer supervision requirements as non-lawyer assistants.

Officials will also examine “the ethical limitations and conditions that apply to a lawyer’s fees and costs when a lawyer uses generative AI or other similar large language model-based technology in providing legal services, including whether a lawyer must revise their fees to reflect an increase in efficiency due to the use of AI technology and whether a lawyer may charge clients for the time spent learning to use AI technology more effectively.”

Additionally, the board will consider whether law firms should be allowed to advertise their generative AI as "superior or unique" compared to that used by other lawyers or providers, and whether lawyers may encourage clients to rely on due diligence reports generated solely by AI technology.

The ChatGPT app is displayed on an iPhone in New York, May 18, 2023. (The Canadian Press/AP, Richard Drew)

Lawyers Use ‘Experimental’ AI Program

Florida Bar members will have until Dec. 1 to submit comments on the proposed AI advisory opinion to the bar's Ethics Counsel. The committee will meet again on Nov. 30.

The deliberation regarding the use of AI in legal matters comes as the technology is increasingly being employed in the legal sector, albeit with mixed results.

On Monday, new legal representation for rapper Prakazrel "Pras" Michel, a member of the 1990s hip-hop group the Fugees, argued that his previous lawyer, David Kenner, relied on an "experimental" AI program to write his closing argument for the trial—an argument the new team says damaged his defense.

Mr. Michel was found guilty in April on charges of conspiring to make straw campaign donations, witness tampering, and acting as an unregistered foreign agent for China.
In court documents, his new legal team claimed that attorney Mr. Kenner "generated his closing argument—perhaps the single most important portion of any jury trial—using a proprietary prototype AI program" in which he allegedly had an undisclosed financial stake.
The OpenAI logo on a mobile phone in front of a computer screen displaying output from ChatGPT in Boston on March 21, 2023. (Michael Dwyer/AP Photo)

ChatGPT Generates 'Fake Cases'

“Kenner’s closing argument made frivolous arguments, misapprehended the required elements, conflated the schemes, and ignored critical weaknesses in the Government case,” lawyers wrote, adding that the closing argument was “damaging to the defense.”

Elsewhere, U.S. District Judge P. Kevin Castel in June imposed sanctions on two New York lawyers who submitted a legal brief that included six fictional case citations generated by ChatGPT.

Lawyers Steven Schwartz, Peter LoDuca, and their law firm Levidow, Levidow & Oberman told the judge they were unaware the chatbot technology, developed by OpenAI, could make up fake cases, and said they had unknowingly included the false citations.

However, in his ruling, the judge stressed that while there is nothing “inherently improper” in lawyers using AI “for assistance,” lawyer ethics rules “impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

The judge also noted that the lawyers "continued to stand by the fake opinions" after the court had questioned whether the cited cases actually existed.

They were subsequently ordered to pay a $5,000 fine.

Reuters contributed to this report.