
Pauliah v. Univ. of Mississippi Med. Center, et al. (S.D. Miss. Dec. 30, 2025)
While previous high-profile AI sanctions focused on "hallucinated" case law (fictitious judicial opinions), this ruling highlights another dangerous risk: the fabrication of facts. For legal professionals, this serves as a critical warning that delegating document summarization to generative AI without line-by-line verification can lead to severe reputational damage and monetary sanctions.
In Pauliah v. University of Mississippi Medical Center, Plaintiff filed a declaration in opposition to a motion for summary judgment that appeared to contain smoking-gun evidence. The 40-page, single-spaced document was peppered with what appeared to be direct, verbatim quotations from deposition transcripts of the defendant's employees, complete with specific page and line citations.
When defense counsel attempted to locate the quoted passages in the actual transcripts, they found that the statements simply did not exist. The court subsequently held a Rule 56(h) hearing to determine whether the declaration was submitted in bad faith. During the hearing, a blame game ensued: the plaintiff’s attorney claimed the client drafted much of the document, while the client, after initially denying it, admitted to using generative AI to summarize the depositions.
Neither the client nor the attorney had verified the AI-generated summaries against the actual record before signing and filing the declaration. The court noted that this was a novel and "unacceptable" act, as AI had "hallucinate[d] the facts" rather than the law.
Pauliah is an eyebrow-raising ruling for professionals navigating generative AI in evidentiary workflows. What began as an authoritative declaration with pinpoint quotations and citations unraveled into a cautionary example of unchecked AI summarization, ending in a blame-shifting dispute between plaintiff and counsel.
Risks will intensify as AI agents increasingly rely on upstream AI outputs, compounding hallucinations. For legal and eDiscovery practitioners, the lesson is clear: to maintain defensibility, AI-assisted review and summarization must meet the same standards as any discovery process, including transparency into when and by whom AI is used, auditable workflows (similar to chain of custody), and validation of facts, quotes, and citations.
Ultimately, Pauliah is a stark reminder that while AI can accelerate analysis and drive efficiency, it cannot assume professional responsibility.
Bree Murphy, CEDS, Co-Founder, eDiscovery Chicks, and Strategic Account Executive, Exterro
This ruling completes the warning arc for the legal industry: AI is not a substitute for manual, line-by-line verification against transcripts and case law. When using AI for summarization, legal teams should treat the output as a draft requiring strict corroboration against "ground truth" source documents.
This alert is for informational purposes only and is not legal advice.