E-Discovery
Court Clarifies Flexibility and Transparency in TAR Protocol Disputes

Why This Alert on TAR Is Important
The court’s decision in In re Insulin Pricing Litigation addresses the increasingly common disagreements over how TAR (Technology-Assisted Review) should be implemented and validated in e-discovery. Legal teams using predictive coding, particularly continuous active learning (CAL) workflows, will find this case especially useful in understanding how courts balance transparency, proportionality, and deference to the producing party’s methodology.
Overview of the Case
This multidistrict litigation involves allegations that insulin manufacturers and pharmacy benefit managers conspired to inflate insulin prices. Early in discovery, a major dispute arose over one defendant’s proposed TAR protocol, which involved the use of a CAL-based predictive coding platform. The defendant offered to disclose certain metrics but wanted flexibility in how it trained and validated the model.
Plaintiffs objected to multiple aspects of the proposed TAR workflow, including the absence of a defined stopping rule, the use of “elusion sampling” on a null set rather than the entire document universe, and a lack of preset validation metrics. They sought a court order to impose their version of the TAR protocol. The defendant opposed, citing proportionality and its right to control the method of production under the rules.
The court issued its ruling, citing the need for “transparency and cooperation among counsel” balanced by the need to “ensure a producing party has appropriately trained and implemented TAR through statistically sound validation methods.”
Ruling
Flexibility Allowed in Training TAR, But Advance Notice Required
The court acknowledged that producing parties generally have the right to decide how to implement TAR workflows, including training models. However, the ruling emphasized that flexibility must be balanced with transparency. It permitted the defendant to train its CAL model using only quality-controlled decisions, rather than all relevance-based coding, but imposed a condition: the defendant must meet and confer “when it decides to limit training to only those documents reviewed for quality control.”
Stopping Point Must Be Proportional, Not Rigidly Defined by Metrics
Plaintiffs proposed a fixed stopping point based on low relevance rates in sequential document batches. The court rejected this rigid approach, explaining that stopping criteria should not rest solely on numeric thresholds. The ruling cited FRCP 26(b)(1)’s proportionality standard, stating, “reasonableness and proportionality factors will determine the appropriate stopping point.” Accordingly, the court declined to adopt plaintiffs’ stopping rule and reaffirmed that stopping decisions must be based on the totality of circumstances—including both statistical and practical factors—and that the defendant was best suited to make that decision for its data set.
Validation Must Include the Full TAR Population, Not Just the Null Set
The most significant holding came in the court’s analysis of TAR validation. The defendant wanted to conduct validation sampling only on a “null set” of uncoded documents, excluding material that had gone through the quality-control process. The court disagreed, requiring that validation include sampling from the entire document population subject to TAR. The court found the defendant’s proposed approach inadequate under Rule 26(g), stating, “The weight of the available authority raises concerns that adopting Defendant’s proposed methodology would result in opaque and potentially unreliable recall calculations.” While the court avoided dictating a specific methodology out of reluctance to “force a responding party to adopt validation metrics imposed by a requesting party,” the ruling clearly signals that meaningful, transparent validation must extend beyond a limited subset.
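The recall arithmetic underlying this dispute can be sketched in a few lines. Below is a minimal, purely hypothetical illustration (the corpus sizes and relevance rates are invented, not drawn from the case) of how an elusion test estimates recall by sampling only the “null set” of unproduced documents:

```python
import random

random.seed(7)

# Hypothetical corpus: each document is a tuple (is_relevant, was_produced).
# Counts are illustrative only.
corpus = (
    [(True, True)] * 8_000       # relevant documents the TAR process found
    + [(True, False)] * 2_000    # relevant documents that eluded review
    + [(False, False)] * 90_000  # non-relevant, unproduced documents
)

# Elusion test: draw a random sample from the null set (unproduced docs)
# and estimate how many relevant documents were missed.
null_set = [doc for doc in corpus if not doc[1]]
elusion_sample = random.sample(null_set, 1_000)
elusion_rate = sum(1 for doc in elusion_sample if doc[0]) / len(elusion_sample)
est_missed = elusion_rate * len(null_set)

# Estimated recall = produced relevant / (produced relevant + estimated missed).
produced_relevant = sum(1 for doc in corpus if doc[0] and doc[1])
est_recall = produced_relevant / (produced_relevant + est_missed)
print(f"elusion rate ~ {elusion_rate:.3f}, estimated recall ~ {est_recall:.2f}")
```

The sketch also illustrates the court’s concern: because the elusion sample is drawn only from unproduced documents, any coding errors in the quality-controlled (reviewed) portion of the population never enter the estimate, whereas a validation sample drawn from the entire TAR population would surface them.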
The parties here had not only reached agreement on many aspects of the TAR process, but also agreed conceptually on others, where the disagreement was over the level of implementation. For example, they agreed that stopping-point criteria were necessary but disputed when the stopping point would be determined, and they agreed that validation was necessary but disputed whether it would consist of elusion testing or testing of the entire start-to-finish review process. Judge Singh balanced Sedona Principle 6 with the concepts of “transparency, cooperation and flexibility” in ruling on these issues.
Case Law Tip
Document review is widely acknowledged to be the most expensive stage of the e-discovery process. Take control of your e-discovery spend by implementing some of the metrics suggested in our whitepaper, 14 Pivotal Metrics for Reducing Document Review Costs.