Jacqueline Schafer says AI hallucinations in the legal industry are a major issue, and one that can be solved with more thoughtful, specialized tools. Photo courtesy Clearbrief

What you probably already know: While health care and hiring often dominate the conversation around ethical uses and potential bias in artificial intelligence systems, there’s another industry where its use can cause serious issues: the law. Attorneys across the U.S. have been sanctioned for filing AI-generated work that contained errors or hallucinations. Just last week, a prosecutor in northern California was sanctioned for using AI that resulted in inaccurate citations, and a Massachusetts lawyer was fined for citing fictitious cases in court pleadings that were created using generative AI. The judge in the latter case called out law firms that “blindly file their resulting work product in court without first checking to see if it incorporates false or misleading information.” However, it can be incredibly difficult to catch these hallucinations, says Jacqueline Schafer, founder and CEO of Clearbrief, which makes a system that uses AI to ensure proper citations in legal documents.

Why? “The challenge with generative AI mistakes is that they’re so weird and quirky, they’re very hard for the human eye to catch,” Schafer says. For instance, in a famous case involving the controversial company My Pillow, attorneys filed an AI-generated brief that cited fake cases. Those were easy to catch. But the brief also cited real cases that seemed relevant; on closer inspection, those cases were from the wrong jurisdiction and thus not binding in the court where the My Pillow case was being heard, Schafer says. And if the court doesn’t catch the mistake, these fake cases can then be included in the court’s opinion and become part of the precedent for future rulings. “There are these downstream problems we’re starting to see and it’s very tricky to catch these things without some technology,” Schafer says.

What it means: Clearbrief was launched in 2020, before the onslaught of generative AI products like ChatGPT, so its system is custom-built, not based on an external large language model, and layered on top of Microsoft Word, the tool most commonly used by litigators. Clearbrief reads the text and then connects to LexisNexis, the legal database, to pull the proper case. It will also flag if the case number is wrong or if it thinks the attorney’s citation doesn’t match the argument. Because it isn’t using the whole internet as the source of its citations, it doesn’t hallucinate fake cases. “It will also suggest other places that could better help prove your argument,” Schafer says. This can cut down on the amount of time litigators spend poring over cases to cite, while also helping judges justify their rulings.
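To make the general pattern concrete, here is a minimal sketch of citation checking against a closed, trusted database. This is illustrative only, not Clearbrief's actual implementation or a real LexisNexis integration; the database contents, field names, and the check_citation function are hypothetical stand-ins. The key idea it shows is the one Schafer describes: the checker can only confirm cases that exist in its trusted source, and it flags wrong case numbers or out-of-jurisdiction citations instead of generating anything new.

```python
from dataclasses import dataclass

# Hypothetical record shape for a case pulled from a trusted legal database
# (a stand-in for a licensed service such as LexisNexis).
@dataclass
class CaseRecord:
    name: str
    case_number: str
    jurisdiction: str

# Illustrative closed database; a real system would query a licensed service instead.
TRUSTED_DB = {
    "123-456": CaseRecord("Smith v. Jones", "123-456", "9th Cir."),
}

def check_citation(cited_name: str, cited_number: str, court_jurisdiction: str) -> list[str]:
    """Return a list of problems with a citation; an empty list means it checks out."""
    problems = []
    record = TRUSTED_DB.get(cited_number)
    if record is None:
        # The case can't be found in the trusted source, so it is flagged rather than invented.
        problems.append(f"No case with number {cited_number} found in the database.")
        return problems
    if record.name != cited_name:
        problems.append(f"Case number {cited_number} belongs to {record.name}, not {cited_name}.")
    if record.jurisdiction != court_jurisdiction:
        problems.append(f"{record.name} is from {record.jurisdiction}, which is not binding in {court_jurisdiction}.")
    return problems

# Example: a real case cited in a court where it isn't binding, like the My Pillow example above.
print(check_citation("Smith v. Jones", "123-456", court_jurisdiction="5th Cir."))
```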

What happens now? It’s early days for AI in the legal industry, but Schafer sees enormous opportunities to help attorneys work more efficiently. She was inspired to build Clearbrief while working on a pro bono case in which a mother and her toddler were seeking asylum in the U.S. The stakes were high. “If we lost, they would be sent back to Honduras and likely murdered,” Schafer says. She’d written a 50-page brief and referenced lots of exhibits, but the judge seemed openly hostile to the case. Then he saw one of the documents she’d referenced — a therapist’s report — and he changed his mind and granted the family asylum. That, she says, is something she wants to make simple and easy for every litigator and every judge, leveling the playing field for those seeking justice. “We powered a clinic in Washington state on MLK Day where over 115 families were helped using our AI tool,” she says, “so that’s the most exciting thing, that at scale we can help lawyers and their clients.”