
Reasonable vs. Near Perfection: Court Rules for Tiered Predictive Coding Approach

Created on April 24, 2013


Some is better than none. The court in In re: Biomet M2a Magnum Hip Implant Products Liability Litigation (N.D. Indiana, April 18, 2013) agreed with this sentiment regarding the defendant's back-end e-discovery approach to employing predictive coding. The plaintiffs, a group of injured individuals, protested the defendant Biomet's limited use of predictive coding and urged the court to order Biomet to re-do discovery using predictive coding from the beginning. The judge in the case, Robert L. Miller, was not persuaded. He ruled that the defendant's discovery process, which started with keyword searching, then document de-duplication, and lastly predictive coding, was reasonable and thus compliant with the Federal Rules of Civil Procedure (FRCP).

In this products liability case, the plaintiffs alleged multiple defects in the defendant's hip implant product. At the outset of the case, the parties had identified 19.5 million documents and attachments for the defendant to search for responsiveness. Given the extensive nature of e-discovery, the parties agreed on a set of protocols that would be used to “facilitate identification, retrieval, and production of electronically stored information (ESI)." Following these guidelines, the defendant used a combination of search techniques to identify potentially relevant documents. Below is a rundown of the defendant's three-stage culling process (a simplified code sketch of such a pipeline follows the list):

  1. Keyword searching and culling: Reduced the data set from 19.5 million to 3.9 million documents and attachments (1.5 terabytes of data)
  2. De-duplication: Removed duplicate documents, reducing the data set from 3.9 million to 2.5 million documents and attachments
  3. Predictive coding: Employed to identify relevant documents among the remaining 2.5 million documents and attachments
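To make the tiered approach concrete, here is a minimal Python sketch of such a keyword-cull / de-duplicate / classify pipeline. It is not Biomet's actual workflow: the search terms, the Document structure, and the use of a scikit-learn classifier as a stand-in for a commercial predictive coding engine are all illustrative assumptions.

```python
# Hypothetical tiered culling pipeline: keyword cull, then exact de-duplication,
# then a simple predictive-coding pass trained on attorney-reviewed seed documents.
import hashlib
import re
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


# Illustrative search terms only; real terms would be negotiated by the parties.
SEARCH_TERMS = ["hip implant", "metal-on-metal", "revision surgery"]


def keyword_cull(docs):
    """Stage 1: keep only documents that hit at least one agreed-upon search term."""
    pattern = re.compile("|".join(re.escape(t) for t in SEARCH_TERMS), re.IGNORECASE)
    return [d for d in docs if pattern.search(d.text)]


def deduplicate(docs):
    """Stage 2: drop exact duplicates by hashing normalized document text."""
    seen, unique = set(), []
    for d in docs:
        digest = hashlib.sha256(d.text.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(d)
    return unique


def predictive_coding(docs, seed_docs, seed_labels):
    """Stage 3: train a classifier on reviewed seed documents (labels 1 = responsive,
    0 = not responsive) and return the remaining documents predicted responsive."""
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    vectorizer = TfidfVectorizer(max_features=50_000)
    x_seed = vectorizer.fit_transform(d.text for d in seed_docs)
    model = LogisticRegression(max_iter=1000).fit(x_seed, seed_labels)
    x_rest = vectorizer.transform(d.text for d in docs)
    return [d for d, pred in zip(docs, model.predict(x_rest)) if pred == 1]
```

The point of the sketch is simply that each stage shrinks the set handed to the next, more expensive stage, which is the proportionality logic the court credited.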

Disappointed, the plaintiffs believed that the defendant's document production should have been closer to 10 million documents instead of the 2.5 million received. On that basis, the plaintiffs argued that the defendant's discovery approach was “insufficient" because it relied on a “less accurate keyword search" applied to the initial set of 19.5 million documents. The plaintiffs then moved the court to order the defendant to re-do discovery and apply predictive coding to the original 19.5 million documents. The defendant objected to the plaintiffs' motion primarily because the proposed remedy would have cost the defendant “millions more than the millions it already has spent in document production," violating the proportionality rule under FRCP 26(b).

The court agreed with the defendant, stating that the benefits of the plaintiffs' request for a discovery re-do using predictive coding did not “equal or outweigh its additional burden on, and additional expense to, Biomet." Based on the applicable rules under FRCP 26(b) and 34(b)(2), the court found the defendant's discovery actions adequate. The defendant had cooperated with the plaintiffs to create ESI protocols, and it even offered the plaintiffs a chance to “suggest additional search terms" as well as to “produce the rest of the non-privileged documents from the post-keyword 2.5 million," so the plaintiffs could verify that the defendant was producing responsive information.

In contrast, the court felt that the plaintiffs' request that the defendant “go back to square one and institute predictive coding at that earlier stage sits uneasily with the proportionality standard in Rule 26(b)(2)(C)." The court estimated that the cost to accomplish this would be in the “low seven-figures," while the “confidence tests Biomet ran as part of its process suggest a comparatively modest number of documents would be found." Even though the court ruled against the plaintiffs, it left open the option for them to re-do discovery with predictive coding applied to the original data set, so long as the plaintiffs paid for it.


E-discovery junkies like me have been anxiously awaiting another ruling on predictive coding, and Biomet helps clarify and reinforce three things:

1. Cooperation Before E-Discovery: Legal teams are afforded the flexibility to decide on the steps necessary for complying with the FRCP and pending production requests. If predictive coding is going to be utilized, the parties should negotiate and agree upon how the technology will be applied before discovery commences. Be proactive.

2. Reasonableness vs. Near Perfection: Many in the e-discovery community have become enthralled with the idea that predictive coding can find close to all responsive documents. While I acknowledge that the vast majority aren't looking for a perfect solution, this desire may cause some people to lose sight of what the court requires – a reasonable e-discovery process. In Biomet, the defendant spent over $1 million on e-discovery (with total e-discovery expenses estimated at $2–$3.25 million by the end of discovery), and the plaintiffs still wanted a re-do that would have added another low-seven-figure sum to those expenses. The standard for what is considered reasonable may change (e.g., requirements for a defensible legal hold process), but as Biomet shows, some courts are not yet ready to put all their eggs in the predictive coding basket.

3. Search Techniques Beyond Predictive Coding: While predictive coding may be the future of document review, legal teams that don't have predictive coding should not forget the search techniques they already have for culling large data sets. Using a combination of techniques (keyword, proximity, concept, de-duplication, de-NISTing, metadata, etc.) can limit the number of documents sent to review, increasing efficiency and lowering costs on the back end of the e-discovery process; a simple sketch of two such filters follows.
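As an illustration of filters beyond keyword search, here is a hedged Python sketch of de-NISTing (dropping known system and application files by hash) and metadata date-range culling. The FileRecord structure and the placeholder NSRL_HASHES set are assumptions for the example; in practice the hash list would come from the NIST National Software Reference Library data set.

```python
# Hypothetical culling filters: de-NISTing by MD5 lookup and a metadata date filter.
import hashlib
from dataclasses import dataclass
from datetime import date


@dataclass
class FileRecord:
    path: str
    content: bytes
    modified: date


# Placeholder values only; a real workflow loads the NIST NSRL hash set here.
NSRL_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}


def de_nist(files):
    """Remove known operating-system and application files by MD5 lookup."""
    return [f for f in files if hashlib.md5(f.content).hexdigest() not in NSRL_HASHES]


def date_filter(files, start, end):
    """Keep only files whose last-modified date falls within the relevant period."""
    return [f for f in files if start <= f.modified <= end]


# Example usage: chain the filters before anything is sent to attorney review.
# culled = date_filter(de_nist(collected_files), date(2005, 1, 1), date(2012, 12, 31))
```

Each filter is cheap to run and removes documents no reviewer should ever need to see, which is exactly where the cost savings on the back end come from.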

To learn more about the cost-saving benefits of predictive technology, watch Exterro's latest on-demand webcast, Practical Predictive Intelligence for Proactive E-Discovery.

Mike Hamilton, J.D. is a Sr. E-Discovery Analyst at Exterro, Inc., focusing on educating Exterro customers, prospects, and industry experts on how to solve e-discovery issues proactively with technology. His e-discovery knowledge, legal acumen, and practical experience give him a valuable perspective on bridging the gap between IT and legal teams. You can find him on Google+, Twitter, and LinkedIn.