E-discovery Case Law Alerts

Court Limits Discovery Review to Secure AI Environments

In this ruling, the court limited discovery review to secure, closed AI environments, banning the use of open-loop public AI tools and thereby shielding discovery data from exposure.

Why This Alert Is Important

As generative AI becomes a staple in legal workflows, courts are beginning to draw a hard line between "closed" enterprise-grade AI and "open-loop" public tools. This ruling establishes that while AI can enhance discovery efficiency, a party’s desire for cost savings does not override the producing party’s right to ensure their data—even non-confidential materials—remains shielded from public AI training models.

Overview

In a putative class action involving alleged toxic emissions from a chemical facility, the parties clashed over the use of artificial intelligence to review discovery materials. Initially, the court entered a protective order requiring that any AI technology used to review confidential information must operate in a secure environment to ensure such data is not used to train or improve the AI model. These provisions effectively restricted the parties to “closed” (or legal-grade) AI tools for sensitive documents.

Defendants sought to amend the discovery order to expand these restrictions to all "discovery materials," including non-confidential documents. They argued that "open" AI tools (public, open-loop generative models) rely on machine learning that encodes patterns from uploaded data, creating a risk of "disclosure, loss of control, and uncertainty" regarding data handling. They further contended that uploading such materials could violate the GDPR—as some defendants were European entities—since employees and correspondents had not consented to their data being fed into public models. The Plaintiffs opposed the motion, arguing it was an "umbrella" protective order that increased litigation costs and was based on speculative cybersecurity fears.

Ruling Summary

  • Good Cause for "Closed" AI Mandate: The court found good cause under Rule 26(c) to limit AI usage to secure environments for all discovery materials. It noted that unlike closed tools, open AI tools risk making data "amenable to public consumption" by using it to continually improve the model. The court reasoned that allowing open-tool uploads would disincentivize comprehensive production, as parties might "err on the side of under-producing" or "make extensive redactions" to avoid losing control of their data to a public engine.
  • Failure to Demonstrate Undue Burden: The court rejected plaintiffs' argument that being deprived of free or low-cost open AI tools created an unfair financial hurdle. The ruling observed that the plaintiffs "offered no support for an increased burden" and failed to quantify the extent of any purported cost increase. Without concrete evidence that using secure, professional-grade AI tools was prohibitively expensive, the court could not find that the restriction imposed an undue burden under the Federal Rules.
  • Privacy and GDPR Compliance: The ruling emphasized that "wholesale submission of discovery materials" to open AI tools could trigger significant regulatory risks. Specifically, the court found that the plaintiffs' proposal to simply redact personally identifiable information (PII) was insufficient to meet the "strict requirements of the GDPR," including necessary consent from the data subjects. Restricting review to closed AI tools was deemed a necessary safeguard to protect the privacy rights of the defendants' employees and contractors.

Expert Analysis

Bryant Bell, Director of Product Marketing, eDiscovery, Exterro
For corporate legal teams, this case is a reminder that you need technology transparency with your outside counsel. Both your corporate litigation teams and your third parties need to understand the risks associated with AI use. Ensure your electronically stored information protocols and protective orders, which set the ground rules for discovery handling and confidentiality, expressly define and distinguish open-loop from closed-loop AI tools. Proactively incorporating a closed-AI-only provision internally, with your outside counsel, and with third-party providers helps protect sensitive matter data from exposure to public training models and strengthens alignment with global privacy and data-handling requirements, including the GDPR.

Tip

To learn more about the legal-grade AI Exterro deploys in its eDiscovery products, download our recent whitepaper on the technical development of Exterro Assist.