A Brief AI Primer
In our personal lives, we use AI on a daily basis, even if we don’t think about it. Waze helps us find the quickest route home. Netflix recommends movies we watch to relax. Spam filters let us ignore thousands of pointless emails. As Maura Grossman, Research Professor in Computer Science at the University of Waterloo, explains, “One of the odd things about AI is that as soon as we get accustomed to it, we simply call it ‘software.’”
This normalization of AI is already happening in legal technology. E-discovery professionals often refer to technology-assisted review, AI, predictive coding, and machine learning as interchangeable, or even identical, when in fact they are distinct concepts.
On its own, AI is just a buzzword. There is no consensus definition. The term itself was first used in 1955 by John McCarthy in a research proposal for the Dartmouth Workshop, which was based on the idea that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
AI encompasses two broad approaches: rules-based systems, which perform tasks according to human-programmed rules, and machine learning models, which learn from data. Earlier AIs (think Deep Blue, the first chess program to beat a reigning world champion) were primarily rules-based. Today, most AIs use machine learning.
For a deeper investigation into the various forms of artificial intelligence in e-discovery today, download Exterro’s infographic What AI Is—And Isn’t—In E-Discovery.
What is Machine Learning?
Machine learning AIs use algorithms to learn from labeled training examples and/or iterative cycles of prediction and analysis of outcomes. Machine learning has applications across many fields, including predictive coding in the review phase of e-discovery.
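To make "learning from labeled training examples" concrete, here is a minimal sketch of a supervised text classifier built from word counts. All documents, labels, and words below are invented for illustration; real predictive coding engines use far more sophisticated algorithms.

```python
# A minimal sketch of supervised machine learning: a tiny word-count
# ("bag of words") classifier trained on labeled example documents.
from collections import Counter

def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"responsive": Counter(), "unresponsive": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Label a new document by which class its words appear in more often."""
    words = text.lower().split()
    scores = {
        label: sum(counter[w] for w in words)
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

training_set = [
    ("merger agreement draft attached", "responsive"),
    ("quarterly merger negotiation notes", "responsive"),
    ("office picnic this friday", "unresponsive"),
    ("cafeteria menu update", "unresponsive"),
]

model = train(training_set)
print(predict(model, "notes on the merger talks"))  # responsive
```

The point is only the shape of the process: the model is derived entirely from labeled examples, so the quality of those labels determines the quality of every later prediction.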
How Predictive Coding Works
In e-discovery, predictive coding is used primarily to locate relevant documents quickly and accurately during the review phase, greatly expediting review and producing significant cost savings. While predictive algorithms vary in their exact methodology, at a very simple level the process looks something like this:
1. Seed set created: Reviewers pull a representative cross-section of documents, known as a "seed set," from the full population of documents that need to be reviewed.
2. Responsive or unresponsive: Reviewers code (label) each document in the seed set as responsive or unresponsive and input those results into the predictive coding software.
3. Predictive formula generated: The software analyzes the seed set and creates an internal algorithm (formula) for predicting the responsiveness of future documents.
4. Sample and refine: Users sample the results of the algorithm on additional documents and refine the algorithm by continually coding and inputting sample documents until they achieve desired results.
5. Full set coded: The software applies that algorithm to the entire review set and codes all remaining documents as responsive or unresponsive.
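The iterative workflow above can be sketched as a short, runnable loop. Here the "model" is just a set of keywords seen only in responsive documents, and the accuracy target, batch structure, and documents are all invented for illustration; the structure of the loop, not the toy algorithm, is the point.

```python
# A simplified sketch of the iterative predictive coding loop: code a seed
# set, build a model, sample and refine, then code the full population.

def learn(coded_docs):
    """Build a naive model: words seen only in responsive documents."""
    resp, unresp = set(), set()
    for text, label in coded_docs:
        (resp if label == "responsive" else unresp).update(text.split())
    return resp - unresp

def classify(model, text):
    return "responsive" if model & set(text.split()) else "unresponsive"

def review(seed_set, samples, full_population, target_accuracy=0.9):
    coded = list(seed_set)            # seed set coded by reviewers
    model = learn(coded)              # predictive formula generated
    for batch in samples:             # sample and refine
        hits = sum(classify(model, t) == lbl for t, lbl in batch)
        if hits / len(batch) >= target_accuracy:
            break                     # desired results achieved
        coded.extend(batch)           # fold reviewer-coded batch back in
        model = learn(coded)
    # apply the refined model to the entire review set
    return {text: classify(model, text) for text in full_population}

seed_set = [("merger terms", "responsive"), ("lunch menu", "unresponsive")]
samples = [[("merger update", "responsive"), ("team lunch", "unresponsive"),
            ("deal valuation", "responsive")]]
full_population = ["valuation memo", "menu for friday"]
results = review(seed_set, samples, full_population)
print(results)
```

Note how the model only learns to flag "valuation" after the refinement batch corrects its earlier miss; this is exactly the iterative improvement the sample-and-refine step exists to provide.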
As you can probably already tell, there is a fair amount of technological sophistication underlying how predictive coding works. This is not the venue for a deep dive into predictive coding specifics, but two important concepts to know are active learning and passive learning.
Opinions vary as to which method is most effective, and there are several variations of each one. You can learn a lot more about active learning vs. passive learning by reading the recent study, "Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery." E-Discovery attorney and technologist Ralph Losey has done extensive research on the subject as well and reports his findings on the blog, "e-Discovery Team."
Predictive Coding in the Courts
When predictive coding first emerged, there was a great deal of anticipation among legal practitioners about how courts would respond. Federal Magistrate Judge Andrew Peck's decision in Da Silva Moore v. Publicis Groupe (Southern District of New York, 2012) is considered the first official judicial endorsement of predictive coding as a way to review documents. Today, most judges agree that predictive coding has a well-established place in e-discovery.
Most disputes now center on how transparent parties have to be about how they use the technology, namely how they construct and code their seed sets. In another influential ruling, Rio Tinto Plc v. Vale S.A. (Southern District of New York, 2015), Judge Peck addressed the issue of transparency, stating that, in general, he encourages parties to be transparent with their seed set development, but also noting that there are other means to evaluate the efficacy of predictive coding without agreeing on seed set parameters, including manual sampling of coded documents.
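One common validation technique along the lines of the sampling Judge Peck alluded to is to draw a random sample from the documents the tool coded non-responsive, have humans review that sample, and extrapolate how many responsive documents may have been missed. The numbers below are invented for illustration.

```python
# A sketch of validating predictive coding by sampling the documents the
# tool set aside as non-responsive (sometimes called an "elusion" check).

def estimate_missed(discard_pile_size, sample_size, responsive_in_sample):
    """Extrapolate the sample's responsive rate to the whole discard pile."""
    rate = responsive_in_sample / sample_size
    return rate, round(rate * discard_pile_size)

rate, missed = estimate_missed(
    discard_pile_size=80_000,   # documents coded non-responsive
    sample_size=400,            # randomly sampled for human review
    responsive_in_sample=2,     # responsive docs found in the sample
)
print(f"{rate:.2%} responsive rate in sample, ~{missed} documents missed")
```

Because this kind of check only requires sampling the output, it lets parties evaluate a review's quality without ever agreeing on seed set parameters.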
Many experts would contend that predictive coding adoption trends have not lived up to the initial hype, at least not yet. There are a few explanations for the industry's tepid response to predictive coding, including:
Human Review Myth
There is a pervasive myth that manual human review represents the most thorough and accurate way to review documents for relevancy. A landmark study in 1985 disproved this theory, revealing that attorneys supervising skilled paralegals believed they had found at least 75% of the relevant documents from a document collection, using search terms and iterative search, when they had in fact found only 20%. Other studies since then have confirmed the fallibility of pure manual human review. However, the widely held perception that human review represents the "gold standard" has certainly slowed adoption of predictive coding.
Resistance to New Technology
Introducing a disruptive technology in any environment is going to be met with a fair amount of skepticism. This holds especially true in the legal industry, which has historically been very resistant to new technology (many lawyers would still take a legal pad over an iPad). Predictive coding is complex. It involves fairly advanced elements of data science and statistical sampling. Even though supporters of predictive coding will argue that many of those complexities are largely hidden from actual users of the technology, the promise of cheaper, more efficient e-discovery has yet to outweigh the fear of the unknown for many lawyers.
Upfront Investment
While predictive coding has been proven to drastically decrease the time and expense of document review, like any technology-driven process, it requires an upfront investment and deployment of a new tool. This often constitutes a capital expenditure, rather than an operational expense, which involves the IT and procurement teams and requires clear financial justification. Many legal teams don't have the time or energy to build a case for why a predictive coding tool is necessary, so they stick with the status quo. If you are interested in learning how to build a case for investing in e-discovery technology, our Comprehensive Guide to Buying E-Discovery Software is a great place to start.
Technology-assisted Review Continues to be Greatly Underutilized
Despite its potential to dramatically reduce review costs, technology-assisted review ("TAR") continues to be greatly underutilized. Advancements in TAR, including the emergence of a second generation of predictive coding tools, referred to as "TAR 2.0," and the strategic use of TAR technologies should remove many of the barriers to its use.
Co-Chair, E-Discovery Practice Group
Moving Beyond Predictive Coding
While predictive coding is the principal type of AI in e-discovery review, new technologies are emerging that may rival its primacy. As research into AI has continued, the power of deep learning techniques to surpass the abilities of previous generations of AI has become evident. Google's AlphaGo Zero, a Go-playing engine that learned without any human input beyond the rules of the game, reached world-champion levels of play in just three days!
Similar technologies are now reshaping AI-augmented document review. Exterro's new Smart Labeling builds upon the most advanced innovations in deep learning and natural language processing to eliminate the need for seed sets. Rather than requiring a manually coded seed set, it learns in a way that mimics human brain functions, recognizing patterns in words, sentence structures, and more to understand which documents are relevant or privileged and then applying labels to them. This ensures that reviewers see the documents most likely to be relevant sooner, helping legal teams get to the facts of the matter faster and at less expense.
Best Practices Checklist
Whether you're using predictive coding or another form of AI in document review, remember the maxim, "Garbage in, garbage out." Bad data going into the system will always produce bad data coming out. No technology can correct sloppy attorney work. Here are some best practices for getting the most out of predictive coding and other AI tools:
Predictive Coding Best Practices
Get familiar with the technology before using it
You don't need to be a statistician or even a technical expert to use AI tools, but it's important to get familiar with the fundamental basics of how the system works. Does it employ active learning or passive learning? Does it require an initial seed set and if so, how large? Does the system employ a relevancy score or not? These are the types of questions that may come up with a judge or opposing party, so it's better to be proactive than appear uninformed when the process is put under a microscope.
Use expert reviewers for training the model
As we mentioned earlier, AI technology follows the adage, "garbage in, garbage out." Whether you're using a seed set or the technology is learning in the background, it's smart to employ your strongest document reviewers, usually senior attorneys with close attention to detail and deep knowledge of the case. It's critical that the algorithm learns from accurate examples, or the entire review could be compromised.
Develop a relevancy threshold
A lot of predictive coding tools employ a relevancy score, which essentially rates the system's confidence in a document's relevance. You should come up with a systematic approach for determining which documents will be reviewed by human reviewers after the coding process. (Of course, with Smart Labeling, the system takes care of this task for you!) Namely, you want to establish a cutoff point: documents scoring above it proceed to the next phase, while those below it are set aside. There isn't a single right answer for where the threshold should fall, but applying it consistently will improve the defensibility of the process.
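Applying a relevancy threshold can be sketched in a few lines. The scores and the 0.6 cutoff below are invented for illustration; the right cutoff is case-specific and, as noted above, what matters is applying it consistently.

```python
# A minimal sketch of partitioning documents by a relevancy-score cutoff.

def partition(scored_docs, threshold=0.6):
    """Split documents into those promoted to human review and those set aside."""
    promote = [d for d, s in scored_docs.items() if s >= threshold]
    set_aside = [d for d, s in scored_docs.items() if s < threshold]
    return sorted(promote), sorted(set_aside)

scores = {"doc_a": 0.92, "doc_b": 0.61, "doc_c": 0.35, "doc_d": 0.07}
promote, set_aside = partition(scores)
print(promote)    # ['doc_a', 'doc_b']
print(set_aside)  # ['doc_c', 'doc_d']
```

Recording the threshold and applying the same rule to every document makes the decision easy to explain to a judge or opposing party later.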
Sample results after coding
Rather than think of technology as a replacement for human review, it's better to think of it as something that augments the process. No review method is completely perfect. You should employ a system for manually sampling documents from both your relevant and non-relevant sets after the coding is complete to look for inconsistencies. Discovering a large number of wrongly coded documents may necessitate retraining the algorithm and running the review again. Keep in mind, the opposing party may ask to sample the non-relevant documents as well, so it's smart to validate internally before producing results.
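A simple way to quantify the post-coding sampling described above is to measure how often a human reviewer disagrees with the machine's label on a random sample. The documents, labels, and sample size here are invented for illustration.

```python
# A sketch of post-coding quality control: sample coded documents and
# measure the human/machine disagreement rate.
import random

def disagreement_rate(machine_labels, human_labels, sample_size, seed=0):
    """Fraction of a random sample where human and machine labels differ."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    docs = rng.sample(sorted(machine_labels), sample_size)
    return sum(machine_labels[d] != human_labels[d] for d in docs) / sample_size

machine = {"d1": "relevant", "d2": "relevant", "d3": "not", "d4": "not"}
human   = {"d1": "relevant", "d2": "not",      "d3": "not", "d4": "not"}
rate = disagreement_rate(machine, human, sample_size=4)
print(f"{rate:.0%} of sampled documents were coded inconsistently")
```

A high disagreement rate on either the relevant or the non-relevant set is the signal that retraining may be needed before anything is produced.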
The Only Constant is the Need to Change
I have learned that the only constant is the need to change, to keep up with the ever-changing technologies like predictive coding. That education process leads to modification of the practices. They must be understandable to work well, and not overly complicated. It is a constant struggle to maintain a beginner's mind. I have always promoted the interdisciplinary team approach to e-discovery where attorneys work with engineers, scientists and information technology specialists of all kinds.
E-Discovery Blogger / Attorney,
e-Discovery Team Blog
Predictive Coding Tools and AI
We've tried to explain how predictive coding works at a very basic level, but it's important to recognize that different products offer different capabilities that impact the process. Some specific features to look for in e-discovery review software include:
To get additional insight into AI, predictive coding, and other cutting-edge e-discovery ideas, visit the edTalks library of videos.
Predictive coding and artificial intelligence represent significant e-discovery technology innovation. Our next section looks at e-discovery software in general, exploring what types of tools are available, how to go about acquiring a new application, and highlighting key software characteristics.