
Lawyer in hot water after using AI to present made-up information: ‘incompetent’


How "hallucinating" artificial intelligence impacts court cases

Former litigator Jacqueline Schafer, CEO and founder of Clearbrief, said AI is frequently used in courtrooms and that she created Clearbrief to fact-check citations and court documents created by generative AI.

A New York lawyer could face discipline after it was discovered a case she cited was generated by artificial intelligence and did not actually exist.

The 2nd U.S. Circuit Court of Appeals referred lawyer Jae Lee to its grievance panel last week after discovering she used OpenAI’s ChatGPT to research prior cases for a medical malpractice lawsuit but failed to confirm whether the case she was citing actually existed, according to a report from Reuters.

According to the report, the attorney included the fictitious state court decision in an appeal for her client’s lawsuit claiming that a Queens doctor botched an abortion. The court then ordered Lee to submit a copy of the decision, which the lawyer was “unable to furnish.”

The lawyer’s conduct “falls well below the basic obligations of counsel,” the 2nd U.S. Circuit Court of Appeals concluded in its disciplinary review, which was sent to Lee.


The ChatGPT logo on a laptop computer. (Gabby Jones/Bloomberg via Getty Images)

Lee later admitted to using a case that was “suggested” to her by ChatGPT, a popular AI chatbot, and to failing to verify the results herself.

The lawyer’s decision to use the popular application comes even though experts have warned against such practices, noting that AI is a relatively new technology that is well known for “hallucinating” false or misleading results.

“When using any AI application, it is important to understand that the AI’s number one objective is to make you happy, not to find the absolute truth. Additionally, it is crucial to recognize how AI works,” Christopher Alexander, the Chief Analytics Officer of Pioneer Development Group, told Fox News Digital. “AI is not a sentient genius that is the ultimate arbiter of all knowledge.”

Lee argued there was “no bad faith, willfulness, or prejudice towards the opposing party or the judicial system” involved in her decision to use the fake case, telling Reuters she is “committed to adhering to the highest professional standards and to addressing this matter with the seriousness it deserves.”

Lee did not immediately respond to a Fox News request for comment.


The attorney’s case comes as the judicial system has increasingly had to confront the use of AI in legal filings, including a case in June 2023 in which two New York lawyers were sanctioned after it was found they submitted a brief that contained six made-up citations, according to Reuters. A Colorado lawyer was suspended from practicing law for similar actions in November.

“This illustrates how the American professional class is increasingly incompetent,” Samuel Mangold-Lenett, a staff editor at The Federalist, told Fox News Digital. “The very thought of using ChatGPT, or any other service, for this purpose should have been beaten out of this lawyer during school. That there was no due diligence and further investigation into the court case generated by ChatGPT is obscene.”

Mangold-Lenett added that professional organizations and bar associations should do more “to hold their members accountable and further educate them on the consequences of being overdependent on AI for research,” arguing that using AI for research is not an issue but that the work needs to be verified by those using the technology.

AI has been previously known to hallucinate fake court cases. (iStock)

“Incompetency and complacency are running rampant in professional circles; things like this will continue to be an issue until people are educated and held accountable,” Mangold-Lenett said.

Jake Denton, a Research Associate at the Heritage Foundation’s Tech Policy Center, agreed that AI “holds tremendous promise,” but cautioned that the technology still has many challenges to overcome.

“The current lack of transparency in many AI systems is concerning, as it risks professionals, such as lawyers, trusting a system’s outputs even though they don’t fully understand how the output was generated,” Denton told Fox News Digital.


Those issues, including hallucinations, are “not a new phenomenon,” according to Bull Moose Project Policy Director Ziven Havens, who told Fox News Digital the technology should only be used by lawyers to “supplement” research.

“They should review all information to ensure its authenticity,” Havens said. “Additionally, AI developers must continue to work to lessen the number of hallucinations.”

The issues that led to Lee’s referral are unlikely to change anytime soon, meaning people who use platforms such as ChatGPT will likely continue to be held responsible when the technology gets it wrong.

“When people present AI-generated work as their own, they must be required to take full responsibility for that work — and face consequences when it is plagiarized, sloppy, or inaccurate,” Jon Schweppe, the Policy Director of the American Principles Project, told Fox News Digital. “Meanwhile, AI companies should be going out of their way to remind users that AI is still in its infancy and should not be viewed as reliable or trustworthy in any way.” 

Microsoft Bing Chat and ChatGPT AI chat applications are seen on a mobile device. (Jaap Arriens/NurPhoto via Getty Images)

According to the Reuters report, the 2nd Circuit’s rules committee has held discussions on AI but has not yet formed a panel to fully examine the issue. Meanwhile, a separate Reuters report in November revealed that the 5th U.S. Circuit Court of Appeals proposed a new rule that would require lawyers to certify either that they did not rely on AI to draft briefs or that information obtained using AI had been reviewed for accuracy by humans.


Such proposals come as AI is not ready to handle what some lawyers are asking of it, in large part because of the lack of sufficient case data for the technology to draw from.

“The amount of case data in the models today is not sufficient to use them to produce briefs or written opinions without an enormous amount of tinkering with the prompts. So the models will hallucinate,” Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), told Fox News Digital. “Even after that there will need to be lots of quality control because the models will ingest both the legal language but also interpretations, which may contribute to model hallucination even after all the data is in the model.”

Meanwhile, Alexander sees a path forward for AI to be used in the legal field, though he cautioned that such a platform for that use does not currently exist.

“It is certainly possible to build or maybe modify an AI by, for and with legal experts, but that does not currently exist at the level required to use for legal citations,” Alexander said. “The top performing private sector AI platforms are generalist in nature and are refined through interaction and continuous training, and this should serve as a warning to anyone using AI for any purpose. AI is a tool that leads you to the solution, it rarely provides the solution itself.”
