The trade-off between ‘Recall’ and ‘Precision’ in predictive coding (part 1 of 2)

This is the first post in a two-part series on information retrieval using predictive coding analysis, detailing the trade-off between Recall and Precision.

Predictive Coding, sometimes referred to as 'Technology Assisted Review' (TAR), is essentially the integration of technology into the human document review process. The benefit of using TAR is two-fold: it speeds up the review process and reduces costs. Sophisticated algorithms are used to produce a set of relevant documents. The underlying process in TAR is based on statistical concepts.

In TAR, a sample set of documents (the seed set) is coded by subject matter experts and acts as the primary reference data that teaches the TAR engine to recognize relevant patterns in the larger data set. In simple terms, a 'data sample' is created using a chosen sampling strategy such as random, stratified, or systematic sampling; a minimal sketch of the random case follows below.
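To make the sampling step concrete, here is a minimal Python sketch of drawing a simple random seed set from a document population. The document IDs and the sample size of 1,500 are illustrative assumptions, not figures from any particular TAR platform.

```python
import random

# Hypothetical population of document IDs (illustrative only)
population = [f"DOC-{i:06d}" for i in range(200_000)]

# Draw a simple random sample of 1,500 documents as the seed set.
# Stratified or systematic strategies would partition or step through
# the population instead of sampling uniformly at random.
seed_set = random.sample(population, k=1_500)

print(len(seed_set))  # 1500 documents to be coded by subject matter experts
```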

Remember, it is critical that seed sets are prepared by subject matter experts. Based on the seed set, the algorithm in the TAR platform starts assigning predictions to the documents in the database. Through an iterative process, adjustments can be made on the fly to reach the desired objectives. The two important metrics used to measure the efficacy of TAR are:

  1. Recall
  2. Precision

Recall is the fraction of the documents relevant to the query that are successfully retrieved, whereas Precision is the fraction of retrieved documents that are relevant to the query. If the computer, in trying to identify relevant documents, identifies a set of 100,000 documents, and after human review 75,000 of those 100,000 are found to be relevant, the precision of that set is 75%.
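As a quick sketch, the precision of that example set can be computed directly from the two counts; the figures below are the ones from the example above:

```python
def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Fraction of retrieved documents that are relevant."""
    return relevant_retrieved / total_retrieved

# 75,000 of the 100,000 retrieved documents were found relevant on human review
print(precision(75_000, 100_000))  # 0.75 -> 75% precision
```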

In a given population of 200,000 documents, assume 30,000 documents are selected for review as the result of TAR. If 20,000 of the 30,000 documents are ultimately found to be responsive, the selected set has a precision of 66.7% (20,000 / 30,000). And if another 5,000 relevant documents are found among the remaining 170,000 that were not selected for review, there were 25,000 relevant documents in total, which means the set selected for review has a recall of 80% (20,000 / 25,000).
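The full example can be verified the same way. This sketch simply re-runs the arithmetic above, with recall defined as the fraction of all relevant documents that the selected set captured:

```python
def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Fraction of retrieved documents that are relevant."""
    return relevant_retrieved / total_retrieved

def recall(relevant_retrieved: int, total_relevant: int) -> float:
    """Fraction of all relevant documents that were successfully retrieved."""
    return relevant_retrieved / total_relevant

selected = 30_000                 # documents selected for review by TAR
responsive_in_selected = 20_000   # found responsive within the selected set
missed_responsive = 5_000         # relevant documents left in the other 170,000

total_relevant = responsive_in_selected + missed_responsive  # 25,000

print(f"Precision: {precision(responsive_in_selected, selected):.1%}")   # 66.7%
print(f"Recall:    {recall(responsive_in_selected, total_relevant):.1%}")  # 80.0%
```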